When a contact is moved to another address book, the contact is copied to
the second address book.
During the copy, a birthday event is created, but it gets the same UID
as the contact's birthday event in the first address book.
To prevent the resulting "Calendar object with uid already exists" error,
we need to delete the old entry before the new one is created.
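A minimal, self-contained sketch of why the order matters; a plain keyed array stands in for the birthday calendar here, which is an assumption for illustration only:

```php
<?php
// Sketch only: a keyed array plays the role of the birthday calendar.
$birthdayCalendar = [
    'contact-123-birthday' => 'BDAY event created for address book A',
];

// The copy of the contact produces a birthday event with the very same UID.
$uidFromCopy = 'contact-123-birthday';

// Without the fix, inserting a second object with an existing UID is refused
// ("Calendar object with uid already exists"). With the fix, the old entry
// is deleted first and the new one can then be created.
unset($birthdayCalendar[$uidFromCopy]);
$birthdayCalendar[$uidFromCopy] = 'BDAY event created for address book B';

var_dump($birthdayCalendar);
```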
Resolves: https://github.com/nextcloud/server/issues/20492
Signed-off-by: Christian Weiske <cweiske@cweiske.de>
Some SMB servers are very insistent on reporting that the root of the share is read-only, even if it isn't.
This works around the problem by adding a hidden option to override the permissions of the root of the share.
This can be enabled using
```bash
occ files_external:config <mount id> root_force_writable true
```
where you can find your mount id using
```bash
occ files_external:list
```
Signed-off-by: Robin Appelman <robin@icewind.nl>
Updating a user or group share now uses the correct method for the
validation of the expiration date. Instead of using the one for link
shares, it uses the one for internal shares.
To avoid future confusion, the method "validateExpirationDate" has been
renamed to "validateExpirationDateLink".
Signed-off-by: Vincent Petry <vincent@nextcloud.com>
The remote URL of a share is always stored in the database with a
trailing slash. However, when a cloud ID is generated trailing slashes
are removed.
The ID of a remote storage is generated from the cloud ID, but the
"cleanup-remote-storage" command directly used the remote URL stored in
the database. Due to this, even if the remote storage was valid, its ID
did not match the ID that the command generated for the remote share, so
the storage ended up being removed.
Now the command generates the ID of remote shares using the cloud ID
instead, just as the remote storage does, so there is no longer a
mismatch.
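A small illustration of the mismatch; the exact storage-ID format is left out and the URL is made up, the point is only the effect of the trailing slash on a hash-based ID:

```php
<?php
// Illustration only: the trailing slash alone makes any hash-based ID differ,
// so an ID built from the raw database value can never match one built from
// the cleaned cloud ID.
$remoteFromDatabase = 'https://cloud.example.com/';     // stored with a trailing slash
$remoteFromCloudId  = rtrim($remoteFromDatabase, '/');  // cloud IDs strip it

var_dump(md5($remoteFromDatabase) === md5($remoteFromCloudId)); // bool(false)
```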
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Even though we currently have no proper way of limiting the search itself, we can at least limit the construction of the result objects.
In my local testing this saves about 40% of the time spent in the search request.
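A minimal sketch of the idea with made-up names, not the actual search backend: the query still returns all rows, but only the requested page is turned into result objects.

```php
<?php
// Sketch only: slice the raw rows to the requested page before constructing
// result objects, since building the objects is the expensive part.
function buildLimitedResults(array $rows, int $limit, int $offset = 0): array {
    $page = array_slice($rows, $offset, $limit);
    return array_map(fn (array $row) => (object) $row, $page);
}

$rows = array_map(fn (int $i) => ['id' => $i, 'name' => "file$i.txt"], range(1, 1000));
$results = buildLimitedResults($rows, 25); // only 25 result objects are built
```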
Signed-off-by: Robin Appelman <robin@icewind.nl>
Streams get closed automatically when they are dropped, and in some cases the stream already seems to have been closed by the S3 library, in which case trying to close it again raises an error.
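A minimal illustration of the failure mode, not the actual object store code: the second close fails because the resource is no longer a valid stream, which is why the change drops the explicit close and relies on the stream being cleaned up when it goes out of scope.

```php
<?php
// Illustration only: closing a stream twice, as happens when the S3 library
// has already closed it, fails because the resource is no longer a valid
// stream on the second call.
$stream = fopen('php://temp', 'r+');
fclose($stream); // first close succeeds
fclose($stream); // second close raises an error
```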
Signed-off-by: Robin Appelman <robin@icewind.nl>