OC\Files\Storage\Local#writeStream uses the system-provided file_put_contents.
However, the class overrides file_put_contents, so the behaviour can differ
from the default.
Use Local#file_put_contents in writeStream to benefit from the class-specific functionality.
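A minimal sketch of the idea, assuming the surrounding Local class; the error handling and exception type are illustrative:

```php
// Sketch only: route the write through the class's own file_put_contents()
// (which subclasses may override) instead of PHP's global function.
// PHP's file_put_contents() accepts a stream resource as data.
public function writeStream(string $path, $stream, int $size = null): int {
    $result = $this->file_put_contents($path, $stream);
    if (is_resource($stream)) {
        fclose($stream);
    }
    if ($result === false) {
        throw new \OCP\Files\GenericFileException('Failed to write stream to ' . $path);
    }
    return $result;
}
```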
Signed-off-by: Tigran Mkrtchyan <tigran.mkrtchyan@desy.de>
In certain cases, changing the lock to EXCLUSIVE fails
and throws a LockedException. This leaves the
file locked as SHARED in file_put_contents,
which prevents retrying (because on the second
call file_put_contents takes another SHARED
lock on the same file, and changeLock doesn't
allow more than a single SHARED lock to be promoted
to EXCLUSIVE).
To avoid this, we catch the LockedException
and unlock before re-throwing.
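A hedged sketch of the pattern (the call site and variable names are simplified from the real file_put_contents path):

```php
// Promote SHARED -> EXCLUSIVE, but release the SHARED lock if the promotion
// fails so a later retry does not stack a second, unpromotable SHARED lock.
$this->lock($path, ILockingProvider::LOCK_SHARED);
try {
    $this->changeLock($path, ILockingProvider::LOCK_EXCLUSIVE);
} catch (LockedException $e) {
    $this->unlock($path, ILockingProvider::LOCK_SHARED);
    throw $e;
}
```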
Signed-off-by: Ashod Nakashian <ashod.nakashian@collabora.co.uk>
As per https://docs.oracle.com/cd/B19306_01/server.102/b14200/functions060.htm
Oracle uses the first value to determine the type the rest of the values are cast to.
So when the first value is a plain int, instead of doing the math,
it casts the expression to int and continues with a potential 0.
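A hedged illustration with Nextcloud's query builder; the column name, parameter values and which of the two orderings the commit actually uses are assumptions:

```php
// Oracle's GREATEST() casts its arguments to the type of the FIRST argument.
// Problematic: a plain int literal first can make Oracle cast the computed
// expression and end up with 0.
$qb->func()->greatest(
    $qb->createNamedParameter(0, IQueryBuilder::PARAM_INT),
    $qb->func()->add('size', $qb->createNamedParameter($delta, IQueryBuilder::PARAM_INT))
);

// One way around the pitfall: put the computed expression first so its type
// drives the cast, and the literal 0 second.
$qb->func()->greatest(
    $qb->func()->add('size', $qb->createNamedParameter($delta, IQueryBuilder::PARAM_INT)),
    $qb->createNamedParameter(0, IQueryBuilder::PARAM_INT)
);
```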
Signed-off-by: Joas Schilling <coding@schilljs.com>
In 99% of cases the bucket is just always there, and if it is not, the
read/write will fail hard anyway. Especially on big instances the object store
is not always fast, and skipping the check can save a few hundred ms on each
request that accesses the object store.
In short, it adds support for
'verify_bucket_exists' => false
in the S3 config part.
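A hedged example of the relevant objectstore block in config.php; bucket, hostname and credentials are placeholders:

```php
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket' => 'nextcloud-data',        // placeholder
        'hostname' => 's3.example.com',      // placeholder
        'key' => 'ACCESS_KEY',               // placeholder
        'secret' => 'SECRET_KEY',            // placeholder
        // Skip the bucket existence check on every connection:
        'verify_bucket_exists' => false,
    ],
],
```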
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
* removes the ability for users to import their own certificates (for external storage)
* reliably returns the same certificate bundles system-wide (not depending on the user context and available sessions)
The user-specific certificates were broken in some cases anyway, as they are only loaded if the specific user is logged in, which caused unexpected behavior for background jobs and other non-user-triggered code paths.
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
Since we try to do range requests, reading an empty file will fail hard.
However, since empty files are not that interesting to read anyway, we
just read from an empty memory stream.
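A hedged sketch of the fallback (variable names are illustrative):

```php
// For zero-byte objects, skip the (failing) ranged read and hand back an
// empty in-memory stream instead.
if ($fileSize === 0) {
    return fopen('php://memory', 'r');
}
```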
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
When we want to get the permissions, we currently stat at least 5 times for
each entry, which is a bit much, especially since the permissions are all
in the database already.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
This fixes an issue where the files_trashbin hierarchy of a user could
not be created because the mkdir operations were blocked by the quota
storage wrapper. Even with 0 quota, users should be able to have a
trashbin for external storages.
Signed-off-by: Julius Härtl <jus@bitgrid.net>
Since `json_encode` returns `false` if its input isn't UTF-8, all non-UTF-8 paths passed to normalizePath will currently return the same cached result.
Fixing this makes working with non-UTF-8 storages a *little* bit more possible for apps.
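A hedged sketch of the cache-key pitfall and one possible guard; the fallback key format is an assumption, the parameters mirror normalizePath's signature:

```php
// json_encode() returns false for non-UTF-8 input, so every non-UTF-8 path
// would collapse onto the same cache key and return the first cached result.
$cacheKey = json_encode([$path, $stripTrailingSlash, $isAbsolutePath, $keepUnicode]);
if ($cacheKey === false) {
    // Assumed fallback: build a key that still distinguishes the raw bytes.
    $cacheKey = $path . '|' . (int)$stripTrailingSlash
        . (int)$isAbsolutePath . (int)$keepUnicode;
}
```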
Signed-off-by: Robin Appelman <robin@icewind.nl>
If the object store errors, we should not always delete the filecache
entry, as this might lead to people losing access to their files.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Co-authored-by: Christoph Wurst <christoph@winzerhof-wurst.at>
Signed-off-by: Julius Härtl <jus@bitgrid.net>
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
Currently the "add new files during scanning" call stack is smaller than
the "remove deleted files during scanning" call stack. This can lead to
the scanner adding folders in the folder tree that are to deep to be
removed.
This changes the `removeChildren` logic to be non recursive so there is
no limit to the depth of the folder tree during removal
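A hedged sketch of the iterative approach; the helper for deleting a single row is hypothetical, and batching/cleanup from the real cache code is omitted:

```php
// Collect descendants with an explicit queue instead of recursion, so the
// removal depth is not bound by PHP's call stack, then delete them.
private function removeChildren(ICacheEntry $entry): void {
    $queue = [$entry->getId()];
    $toRemove = [];
    while ($queue !== []) {
        $folderId = array_pop($queue);
        foreach ($this->getFolderContentsById($folderId) as $child) {
            $toRemove[] = $child->getId();
            if ($child->getMimeType() === 'httpd/unix-directory') {
                $queue[] = $child->getId(); // visit sub-folder later, iteratively
            }
        }
    }
    foreach ($toRemove as $fileId) {
        $this->removeById($fileId); // hypothetical single-row delete helper
    }
}
```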
Signed-off-by: Robin Appelman <robin@icewind.nl>
Fixes #16876
Before, we'd just fetch everything from all storages we had access to,
then sort and filter in PHP. This is of course tricky if a user shared
just a file with you and then has a ton of activity.
Now we try to construct the prefix path so that the filtering can happen
right away in the database.
This will make the DB busier, but it should help overall, as in most
cases fewer queries are needed.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
* introduces a new IRootMountProvider to register mount points inside the root storage
* adds an AppdataPreviewObjectStoreStorage to handle the split between preview folders and bucket number
Ref #22033
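A hedged sketch of how such a provider could look; the interface method signature, class names and mount point are assumptions, not the shipped code:

```php
// Registers an extra mount inside the root storage (illustrative only).
class PreviewRootMountProvider implements IRootMountProvider {
    public function getRootMounts(IStorageFactory $loader): array {
        return [
            new MountPoint(
                AppdataPreviewObjectStoreStorage::class,
                '/appdata_instanceid/preview/0',   // illustrative mount point
                ['bucket' => 'preview-bucket-0'],  // illustrative arguments
                $loader
            ),
        ];
    }
}
```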
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
having the "cache rename" after the "storage move" caused the target
to get the fileid from the source file, without taking care that the object
is stored under the original file id.
By doing the "cache rename" first, we trigger the "update existing file"
logic while moving the file to the object store and the object gets stored for the
correct file id
Signed-off-by: Robin Appelman <robin@icewind.nl>
"Exception: substr() expects parameter 3 to be int, bool given" can occur on Line 378 $mimePart = substr($icon, 0, strpos($icon, '-'));
This happens, when '-' is not found and strpos returns false instead of an int.
When this occurs, e.g., Activity hangs.
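A hedged sketch of the guard; the fallback value when '-' is missing is an assumption:

```php
// Only pass an int offset to substr(); strpos() returns false when '-' is absent.
$separatorPos = strpos($icon, '-');
$mimePart = $separatorPos === false ? $icon : substr($icon, 0, $separatorPos);
```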
Signed-off-by: lui87kw <lukas.ifflaender@uni-wuerzburg.de>
The PHP ftp stream wrapper doesn't handle hashes correctly and will break when it tries to enter a path containing a hash.
By filtering out paths containing a hash we can at least stop the external storage from breaking completely.
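A hedged sketch of the filter; the method name and where it is called from are illustrative:

```php
// Reject paths containing '#' before they reach the ftp:// stream wrapper,
// which mis-handles the character.
private function isPathAllowed(string $path): bool {
    return strpos($path, '#') === false;
}
```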
Signed-off-by: Robin Appelman <robin@icewind.nl>
While some code paths do wrap the "raw" locking exception into one with a proper path, not all of them do.
By adding the proper path to the original exception we ensure that we always have the useful information in our logs.
Signed-off-by: Robin Appelman <robin@icewind.nl>
The S3 client enables CSM (client-side monitoring) by default and then tries
to read `.aws/config`. This causes `open_basedir` restriction related errors
for some setups, so this patch disables the CSM because it's most likely
unused anyway.
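A hedged sketch of the relevant client option; endpoint, region and credentials are placeholders:

```php
use Aws\S3\S3Client;

$client = new S3Client([
    'version' => 'latest',
    'region' => 'us-east-1',                                  // placeholder
    'endpoint' => 'https://s3.example.com',                   // placeholder
    'credentials' => ['key' => 'KEY', 'secret' => 'SECRET'],  // placeholders
    // Disable client-side monitoring so the SDK never reads .aws/config:
    'csm' => false,
]);
```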
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
While this scan *should* never be triggered, it's good to have some failsafe to ensure
that the user's home contents don't end up getting scanned in the root storage.
Signed-off-by: Robin Appelman <robin@icewind.nl>
Some S3 providers need a custom upload part size (a static 500 MB value in Nextcloud).
This commit makes the value configurable via the S3 configuration instead of using the S3_UPLOAD_PART_SIZE constant.
A new parameter is added for an S3 connection: uploadPartSize
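A hedged config.php example of the new parameter; every other value is a placeholder:

```php
'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
        'bucket' => 'nextcloud-data',      // placeholder
        'hostname' => 's3.example.com',    // placeholder
        'key' => 'ACCESS_KEY',             // placeholder
        'secret' => 'SECRET_KEY',          // placeholder
        // Override the default 500 MB multipart upload part size (in bytes):
        'uploadPartSize' => 104857600,     // 100 MB, example value
    ],
],
```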
Signed-off-by: Florent <florent@coppint.com>
Right now, if you want to get events via the Node API you have to have a
real instance of the Root, which in turn sets up the whole FS.
We should make sure this is done lazily. Otherwise, enabling the preview
generator, for example, makes you set up the whole FS on each and every
authenticated call.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Otherwise, if a lot of writes happen, an old stat result might be used,
resulting in a wrong file size for the file. This affects, for example,
the Text app when a lot of people edit at the same time.
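The commit message doesn't spell out the mechanism, but the usual PHP remedy for stale stat results is clearing the stat cache before re-reading; a hedged sketch:

```php
// Drop PHP's cached stat data for this file before asking for its size again,
// so recent concurrent writes are reflected.
clearstatcache(true, $localPath);
$size = filesize($localPath);
```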
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>