This Pull Request introduces a SabreDAV plugin that blocks all clients older than 1.6.1 from connecting to and syncing with the ownCloud instance.
There are multiple reasons for this:
1. ownCloud client versions before 1.6.0 do not work properly with sticky cookies for load balancers and thus generate sessions en masse
2. Old ownCloud client versions tend to be horribly buggy
In some cases a single user created about 10'000 sessions within 80 minutes. While this changeset does not really "fix" the problem, as third-party legacy clients are affected as well, it is a good workaround and should hopefully push users to update their client
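A minimal sketch of what such a plugin can look like, assuming the sabre/dav 2.x event API; the class name, the `MIN_VERSION` constant, and the User-Agent parsing are illustrative and not necessarily what this PR ships:

```php
<?php

use Sabre\DAV\Server;
use Sabre\DAV\ServerPlugin;
use Sabre\HTTP\RequestInterface;
use Sabre\HTTP\ResponseInterface;

class BlockLegacyClientPlugin extends ServerPlugin {
    /** Oldest client version that is still allowed to connect (illustrative). */
    const MIN_VERSION = '1.6.1';

    public function initialize(Server $server) {
        // Run before the request is dispatched so legacy clients
        // never reach the actual WebDAV backend.
        $server->on('beforeMethod', [$this, 'checkUserAgent']);
    }

    public function checkUserAgent(RequestInterface $request, ResponseInterface $response) {
        $userAgent = $request->getHeader('User-Agent');
        if ($userAgent === null) {
            return;
        }
        // The ownCloud desktop client identifies itself as e.g. "mirall/1.5.3".
        if (preg_match('#mirall/(\d+\.\d+\.\d+)#i', $userAgent, $matches) &&
            version_compare($matches[1], self::MIN_VERSION, '<')) {
            throw new \Sabre\DAV\Exception\Forbidden(
                'Unsupported client version. Please upgrade your sync client.'
            );
        }
    }
}
```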
As an additional security hardening it is sensible to serve these files with a Content-Disposition of `attachment`. Currently they are served `inline` and merely get a "secure mimetype" assigned in the case of potentially dangerous files.
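A rough sketch of that hardening, again assuming the sabre/dav 2.x event API (the hook wiring and function name are simplified):

```php
<?php

use Sabre\HTTP\RequestInterface;
use Sabre\HTTP\ResponseInterface;

// Registered in a plugin's initialize() via:
//   $server->on('afterMethod', [$this, 'serveAsAttachment']);

/**
 * Force downloads instead of inline rendering so potentially
 * dangerous files are never interpreted by the browser.
 */
function serveAsAttachment(RequestInterface $request, ResponseInterface $response) {
    if ($request->getMethod() === 'GET') {
        $response->setHeader('Content-Disposition', 'attachment');
    }
}
```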
To test this change ensure that:
- [ ] Syncing with the Desktop client still works
- [ ] Syncing with the Android client still works
- [ ] Syncing with the iOS client still works
I verified that the 1.8 OS X and iOS clients still work with this change.
Reasoning:
- a WebDAV server is not required to implement locking support
- WebDAV locking is known to break the sync algorithm
- the current lock implementation is known to be broken (locks are not moved if a file is moved, locks on shared files don't work)
- VObject fixes for Sabre\VObject 3.3
- Remove VObject property workarounds
- Added prefetching for tags in sabre tags plugin
- Moved oc_properties logic to separate PropertyStorage backend (WIP)
- Fixed Sabre connector namespaces
- Improved files plugin to handle props on-demand
- Moved allowed props from server class to files plugin
- Fixed tags caching for files that are known to have no tags (fewer queries)
- Added/fixed unit tests for Sabre FilesPlugin, TagsPlugin
- Replaced OC\Connector\Sabre\Request with a direct call to httpRequest->setUrl()
- Fixed exception detection in the DAV client when using Sabre\DAV\Client
- Added setETag() on Node instead of using the static FileSystem
- Also preload tags/props when depth is infinity
This changeset removes the static class `OC_Request` and moves its functions either into `IRequest`, which is accessible via `\OC::$server->getRequest()`, or into a separate `TrustedDomainHelper` class for some helper methods that should not be publicly exposed.
This only changes internal methods and nothing on the public API. Some public functions in `util.php` have been deprecated, though, in favour of the new non-static functions.
Unfortunately, some parts of this code use things like `__DIR__` and are thus not completely unit-testable. Where tests were possible, they have been added.
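For illustration, call sites migrate roughly like this (the accessor names on `IRequest` are examples and may differ in detail):

```php
<?php

// Before: static helper, removed by this changeset
// $host = OC_Request::serverHost();

// After: resolve the request object from the server container
$request = \OC::$server->getRequest();
$host = $request->getServerHost();         // example accessor
$protocol = $request->getServerProtocol(); // example accessor
```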
Fixes https://github.com/owncloud/core/issues/13976 which was requested in https://github.com/owncloud/core/pull/13973#issuecomment-73492969
`\Sabre\DAV\Auth\Backend\AbstractBasic::authenticate` only called `\OC_Connector_Sabre_Auth::validateUserPass` when the response of `\Sabre\HTTP\BasicAuth::getUserPass` was not null.
However, there is a case where the value can be null and the user could be authenticated anyway: authenticating via the ownCloud web interface and then accessing WebDAV resources. This was no longer possible after that change because the code path was never reached in this scenario.
This patch allows authenticating with a session that has no `isDavAuthenticated` value stored (this is for ugly WebDAV clients that send the cookie in any case), and thus the functionality should work again.
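A rough sketch of the adjusted check, assuming the session key is literally named `isDavAuthenticated` and that `\OC_User::isLoggedIn()` reflects the web-interface login:

```php
<?php

class Auth extends \Sabre\DAV\Auth\Backend\AbstractBasic {

    public function authenticate(\Sabre\DAV\Server $server, $realm) {
        // Accept an existing web-interface session that does not carry
        // the isDavAuthenticated marker, instead of failing outright
        // when no usable Basic Auth credentials are present.
        $session = \OC::$server->getSession();
        if (\OC_User::isLoggedIn() && !$session->exists('isDavAuthenticated')) {
            return true;
        }
        // Otherwise fall back to the regular Basic Auth handling,
        // which ends up calling validateUserPass().
        return parent::authenticate($server, $realm);
    }

    protected function validateUserPass($username, $password) {
        // unchanged: check the credentials against the user backend
        return \OC_User::login($username, $password);
    }
}
```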
To test this, go to the admin settings and check that the WebDAV test works fine. Furthermore, all the usual stuff (WebDAV / Shibboleth / etc.) needs testing as well.
There are a lot of clients that support multiple WebDAV accounts in the same application. However, they re-send all the cookies they received from one account to the other one as well. In the case of ownCloud this means that we always show the user from the session and not the user specified in the Basic Auth header.
This patch adds a workaround in the following way (a rough sketch follows the list):
1. If the user authenticates via the Sabre auth connector, add a hint to the session that this was authorized via Basic Auth (this is to prevent logout CSRF)
2. If the request contains this hint and the username specified in the Basic Auth header differs from the one in the session, re-login the user using Basic Auth
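A rough sketch of step 2, assuming the session hint stores the authenticated login name under a key like `isDavAuthenticated`:

```php
<?php

public function authenticate(\Sabre\DAV\Server $server, $realm) {
    $session = \OC::$server->getSession();
    $userPass = (new \Sabre\HTTP\BasicAuth())->getUserPass();

    if ($userPass &&
        $session->exists('isDavAuthenticated') &&
        $session->get('isDavAuthenticated') !== $userPass[0]) {
        // The username in the Basic Auth header differs from the session
        // user: drop the stale session and fall through to a fresh login.
        \OC_User::logout();
    }

    return parent::authenticate($server, $realm);
}
```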
Fixes https://github.com/owncloud/core/issues/11400 and https://github.com/owncloud/core/issues/13245 and probably some other issues as well.
This requires proper testing, also considering LDAP / Shibboleth and similar setups.
When uploading files to an ownCloud external storage backend or when using the server-to-server sharing storage, part files aren't needed because the remote backend already has its own part files and takes care of the final atomic rename operation.
This also fixes issues when using two encrypted ownCloud instances where one mounts the other either as external storage (ownCloud backend) or through server-to-server sharing.
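Conceptually, the upload path decision changes along these lines; `needsPartFile()` is an illustrative name for the storage capability check, not necessarily the method this PR introduces:

```php
<?php

/**
 * Sketch: method on the Sabre file node deciding where a PUT
 * request writes its data.
 */
private function getPutTargetPath(\OCP\Files\Storage $storage, $path) {
    if ($this->needsPartFile($storage)) {
        // Classic flow: write to a transfer file first and rename it
        // atomically to the target path once the upload is complete.
        return $path . '.ocTransferId' . rand() . '.part';
    }
    // ownCloud external storage / server-to-server sharing: the remote
    // instance already does its own part-file handling and atomic
    // rename, so write straight to the target path.
    return $path;
}
```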
I was getting a lot of these in my logs for no apparent reason, and file
uploads were failing:
{"app":"webdav","message":"Sabre\\DAV\\Exception\\ServiceUnavailable: ","level":4,"time":"2015-01-06T15:33:39+00:00"}
In order to debug it, I had to add unique messages to all the places where
this exception was thrown, to identify which one it was, and that made the
logs much more useful:
{"app":"webdav","message":"Sabre\\DAV\\Exception\\ServiceUnavailable: Encryption is disabled","level":4,"time":"2015-01-06T15:36:47+00:00"}
Added oc:tags and oc:favorites in PROPFIND response.
It is possible to update them with PROPPATCH.
These properties are optional, which means they need to be requested explicitly.
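For example, with `Sabre\DAV\Client` the new properties can be requested like this (assuming the `oc` prefix maps to the `http://owncloud.org/ns` namespace; URL, path, and credentials are placeholders):

```php
<?php

use Sabre\DAV\Client;

$client = new Client([
    'baseUri'  => 'https://cloud.example.com/remote.php/webdav/',
    'userName' => 'alice',
    'password' => 'secret',
]);

// Optional properties are only returned when requested explicitly.
$props = $client->propFind('Documents/report.odt', [
    '{http://owncloud.org/ns}tags',
    '{http://owncloud.org/ns}favorites',
], 0);
```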
Convert `\OCP\Files\StorageNotAvailableException` to `\Sabre\DAV\Exception\ServiceUnavailable` for every file/directory operation happening inside of SabreDAV.
This is necessary to avoid having the exception bubble up to remote.php
which would return an exception page instead of an appropriate response.
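The conversion is a thin try/catch wrapper around each operation, roughly:

```php
<?php

try {
    $info = $this->fileView->getFileInfo($path);
} catch (\OCP\Files\StorageNotAvailableException $e) {
    // Surface a proper 503 instead of letting the exception bubble
    // up to remote.php and turn into an exception page.
    throw new \Sabre\DAV\Exception\ServiceUnavailable($e->getMessage());
}
```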
When doing a PROPFIND on the root and one of the mount points is not
available, the returned quota attributes will now be zero.
This fix prevents the expected exception from making the whole call fail.
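A sketch of the quota fallback, assuming the directory node implements Sabre's `IQuota` interface and an `\OC_Helper::getStorageInfo()`-style helper:

```php
<?php

/**
 * Sketch: report zero quota for an unavailable mount point instead
 * of letting the whole PROPFIND on the root fail.
 */
public function getQuotaInfo() {
    try {
        $storageInfo = \OC_Helper::getStorageInfo($this->path);
        return [$storageInfo['used'], $storageInfo['free']];
    } catch (\OCP\Files\StorageNotAvailableException $e) {
        return [0, 0];
    }
}
```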
Assume a permission issue whenever a file could not be deleted.
This is because some storages are not able to return permissions, so a permission-denied situation can only be detected when the deletion is actually attempted.
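A minimal sketch of that behaviour on the Sabre node, assuming a `fileView` wrapper on the node:

```php
<?php

public function delete() {
    // Some storages cannot report permissions up front, so a failed
    // unlink() is the only reliable signal of a permission problem.
    if (!$this->fileView->unlink($this->path)) {
        throw new \Sabre\DAV\Exception\Forbidden(
            'Could not delete ' . $this->path
        );
    }
}
```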