In GDrive, filenames aren't unique, and directories are just
special files - so you can have multiple files with the same
name, multiple directories with the same name, and even files
with the same names as directories.
OC doesn't handle this at all, though, and just wants to act
as if file and directory names *are* unique. So when renaming,
we must check if there's an existing object with the same
file or directory name before we commit the rename, and
explicitly delete it if the rename is successful. (Other
providers like Dropbox do the same for files, but intentionally
don't do it for directories; we really need to do it for
directories too.)
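A minimal sketch of that flow, assuming hypothetical helpers
getDriveFile() and renameDriveFile() plus the Drive v2 PHP client
(not the actual implementation):

```php
public function rename($path1, $path2) {
    // Look for a clashing file *or* folder at the target name first.
    $existing = $this->getDriveFile($path2);
    if ($this->renameDriveFile($path1, $path2)) {
        if ($existing) {
            // Only delete the old object once the rename has succeeded.
            $this->service->files->delete($existing->getId());
        }
        return true;
    }
    return false;
}
```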
A good way to observe this is to run the storage unit tests
and look at the state of the Drive afterwards. Without this
commit, there will be several copies of all the test files
and directories. After this commit, there's just one of each.
We can't just say "hey, Drive lets us do this, what's the
problem?" because we don't handle the multiple-objects-with-the-
same-name case - getDriveFile() just bails and prints an error if
a search for a file or directory with a given name returns
multiple results.
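Roughly what that bail-out looks like, with the query syntax from
the Drive v2 API; the exact client calls and logging are assumptions:

```php
private function getDriveFile($path) {
    $name = basename($path);
    $q = "title='" . str_replace("'", "\\'", $name) . "' and trashed=false";
    $files = $this->service->files->listFiles(array('q' => $q))->getItems();
    if (count($files) > 1) {
        \OC_Log::write('files_external',
            'Multiple objects named "' . $name . '", bailing out',
            \OC_Log::ERROR);
        return false;
    }
    return empty($files) ? false : $files[0];
}
```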
Sometimes there are bugs that cause setupFS() to be called for
non-existent users. Instead of failing hard and breaking the
instance, this fix simply logs a warning.
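The guard amounts to something like this (assuming
OC_User::userExists() and the OC_Log API):

```php
public static function setupFS($user = '') {
    if ($user !== '' && !\OC_User::userExists($user)) {
        \OC_Log::write('core',
            'Tried to set up the filesystem for non-existent user "' . $user . '"',
            \OC_Log::WARN);
        return false;
    }
    // ... normal filesystem setup continues here
}
```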
ownCloud passes us a Unix time integer, but the GDrive API wants
an RFC3339-formatted date. Actually it wants one particular
RFC3339 profile - not just anything that complies will do: it
requires the fractional seconds to be present, even though
RFC3339 itself makes them optional.
This resolves issue #11267 (and was also noted by PVince81 in
reviewing PR #6989).
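Since ownCloud mtimes are whole seconds, hard-coding the
fractional part is enough; a sketch of the conversion:

```php
$mtime = 1397000000; // Unix time as handed to us by ownCloud (example value)
$rfc3339 = gmdate('Y-m-d\TH:i:s', $mtime) . '.000Z';
// "2014-04-08T23:33:20.000Z" - fractions present, as the Drive API insists
```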
This is a slightly hacky workaround for
https://github.com/google/google-api-php-client/issues/59 .
There's a bug in the Google library which makes it go nuts on
file uploads and transfer *way* too much data if compression is
enabled and it's using its own IO handler (not curl). Upstream
'fixed' this (by disabling compression) for one upload
mechanism, but not for the one we use. The bug doesn't seem to
happen if the google lib detects that curl is available and
decides to use it instead of its own handler. So, let's disable
compression, but only if it looks like the Google lib's check
for curl is going to fail.
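A sketch of the workaround, mirroring the library's curl
detection; setClassConfig() is the v1 google-api-php-client call,
so treat the details as assumptions:

```php
if (!function_exists('curl_init') || !function_exists('curl_version')) {
    // The lib would fall back to its own IO handler, which mishandles
    // gzip on uploads - so turn compression off pre-emptively.
    $this->client->setClassConfig('Google_Http_Request', 'disable_gzip', true);
}
```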
Allow specifying a protocol in the host field when mounting another
ownCloud instance. Note that this was already possible with the WebDAV
config; this bug made the two inconsistent.
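The parsing boils down to something like this (variable names
hypothetical): honour an explicit scheme in the host field,
otherwise fall back to the "secure" flag:

```php
$host = $params['host'];
$protocol = empty($params['secure']) ? 'http' : 'https';
if (strpos($host, '://') !== false) {
    list($protocol, $host) = explode('://', $host, 2);
}
$this->url = $protocol . '://' . $host . '/remote.php/webdav/';
```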
folder size and mtime are always unknown in S3
more S3 fixes
make rescanDelay of the root dir configurable, add on-the-fly updating of legacy storage ids, use empty() instead of !isset() when checking strings
reduce the number of HTTP calls on remove and rmdir
fix typo
maintain deprecated \OC::$session when getting or setting the session via the server container or UserSession
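Rough shape of the sync; the method location is hypothetical:

```php
public function setSession(\OCP\ISession $session) {
    $this->session = $session;
    \OC::$session = $session; // deprecated global, kept in sync for old callers
}
```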
restore order of OC::$session and OC::$CLI
remove unneeded initialization of dummy session
write back session when $useCustomSession is true
log a warning when a deprecated app is used
As constants not defined within a class cannot be found automatically
by the autoloader, moving those constants into a class makes them
accessible to the code which uses them.
Signed-off-by: Stephan Peijnik <speijnik@anexia-it.com>
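An illustrative before/after, with hypothetical names:

```php
namespace OCA\MyApp;

// Before: a bare constant in a plain include file - nothing triggers its
// load, so code using it fails unless the file happens to be included:
//     define('PERMISSION_CREATE', 4);

// After: a class constant; referencing \OCA\MyApp\Permissions::CREATE
// makes the autoloader pull in this file on demand.
class Permissions {
    const CREATE = 4;
}
```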
For some reason the aws-sdk-php package does not calculate the
signature correctly when accessing an object named '.' in a
bucket.
When we are at the top of an S3 bucket there is a need(?) to have a
directory name. Per standard Unix, the name picked was '.' (dot or
period). This choice exercises the aws-sdk bug.
The fix adds a field to the class storing the name to use instead of
'.', which at this point is hard-coded to '<root>'. We also add a
private function cleanKey() which tests for the '.' name and replaces
it with that variable. Finally, all calls that manipulate objects
where the path is not obviously something other than '.' are routed
through cleanKey().
An example where we don't route through cleanKey() would be
'Key' => $path.'/',
Use correct relational operator
Per feedback, use === instead of ==
use '/' instead of '<root>'
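Putting the follow-ups together, the helper ends up roughly like
this; the aws-sdk-php call shown is just one example site, and the
client accessor name is an assumption:

```php
private function cleanKey($path) {
    if ($path === '.') {
        return '/';
    }
    return $path;
}

// Any object operation whose key could be the bucket root goes through it:
$this->getConnection()->headObject(array(
    'Bucket' => $this->bucket,
    'Key'    => $this->cleanKey($path),
));
```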
Now the external storage correctly returns only the mount points
visible to the current user, by using the method
getAbsoluteMountPoints(), which is already filtered.
Since that call was missing the backend name, which the UI needs,
that was added as well.
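Rough shape of the change - getAbsoluteMountPoints() is named
above, but the backend-name lookup is an assumption:

```php
$mountPoints = \OC_Mount_Config::getAbsoluteMountPoints($user); // pre-filtered
$backends = \OC_Mount_Config::getBackends();
foreach ($mountPoints as $mountPoint => &$mount) {
    // attach the backend name the UI expects
    $mount['backend'] = $backends[$mount['class']]['backend'];
}
```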
Each storage backend has a default priority, assigned to any system mounts
created in ownCloud. mount.json can be manually modified to change these
priorities.
The priority order is as follows:
* Personal
* User
* Group
* Global
Within each mount type, the mount with the highest priority is active.
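For illustration, a hypothetical mount.json entry (structure
simplified, options elided) with an explicit priority overriding
the backend default:

```php
$mounts = array(
    'group' => array(
        'admin' => array(
            '/$user/files/projects' => array(
                'class'    => '\OC\Files\Storage\SMB',
                'options'  => array(/* ... */),
                'priority' => 120, // overrides the backend default below
            ),
        ),
    ),
);
file_put_contents('mount.json', json_encode($mounts));
```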
The storage backend defaults were chosen to be the following:
* Local - 150
* Remote storage - 100
* SMB / CIFS with OC login - 90