The "Download" item in the menu of public share pages is no longer shown
in wide (>768px) windows (although the element is in the DOM and is
shown if the window is resized to a narrow width).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
In the acceptance tests the link share menu is automatically opened if
needed before interacting with an item in the menu; if the menu is not
open it is opened by clicking on its toggle.
However, since a recent change the link share menu is automatically
opened by the regular UI after the link share is created. As a result,
sometimes after creating a link share the acceptance tests check whether
the menu is shown before it was automatically opened; as the menu is not
open yet, the acceptance tests proceed to click on the toggle, but in
the meantime the link share was created and the menu opened, so clicking
on the toggle now closes it. As the menu is closed it is not possible to
interact with its items and the test fails.
To prevent that, the acceptance tests now wait for the link share menu
to open after a link share is created before continuing with the other
steps.
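A minimal sketch of that kind of wait, assuming a hypothetical helper that
receives a callable reporting whether the link share menu is currently
shown; the timeout and polling interval are illustrative, not the actual
acceptance test code:

    <?php
    // Poll until the link share menu is reported as open, so the test never
    // clicks the toggle while the regular UI is still opening the menu.
    function waitForLinkShareMenuToOpen(callable $isMenuShown, float $timeoutSeconds = 10.0): void {
        $start = microtime(true);
        while (microtime(true) - $start < $timeoutSeconds) {
            if ($isMenuShown()) {
                return; // the menu was opened automatically, safe to continue
            }
            usleep(100000); // check again in 100ms
        }
        throw new RuntimeException('The link share menu was not opened after creating the link share');
    }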
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If the sendmail binary can't be found at all we fall back to the default
path.
It most likely is not there either, but then at least a proper error
message pops up.
Updated the tests so that they also pass.
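A hedged sketch of the fallback idea, not the actual implementation; the
lookup command and the default path shown are assumptions:

    <?php
    // Try to locate the sendmail binary; if it can't be found at all, fall back
    // to a default path so that a later failure produces a proper error message.
    function findSendmailBinary(): string {
        $path = trim((string)shell_exec('command -v sendmail 2> /dev/null'));
        if ($path === '') {
            $path = '/usr/sbin/sendmail'; // most likely not there either, but it fails with a clear error
        }
        return $path;
    }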
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The update share tests only checked that the share returned by
"update()" had the expected values. However, as "update()" returns the
same share that was given as a parameter the tests were not really
verifying that the values were updated in the database.
In a similar way, the test that checked that a password was removed did
not set a password first, so even if the database returned null it could
be simply returning the default value for the share; a password must be
set first to ensure that it is removed.
Besides that, a typo was also fixed that made the checks run on the
original share instead of on the one returned by "update()"; right now
it is the same share, so the change makes no difference, but it is how
the check should be done anyway.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Although it is now possible to create several link shares, the
acceptance tests currently handle only the first link share; this first
link share is now created by clicking an "Add new share" button instead
of a checkbox.
Besides that, the "Copy link" button has been moved from the menu to the
row, next to the menu trigger.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If a user can't authenticate normally (for example, because they have
2FA that is not available on their devices), the redirect that is
generated should have the proper format.
This means it must (see the sketch below):
1. Include the protocol
2. Include the possible subfolder
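A minimal illustration, not Nextcloud's actual code, of what a redirect of
the proper format looks like; the parameter names and example values are
made up for the example:

    <?php
    // Build an absolute redirect URL that includes the protocol and the
    // possible subfolder (web root) the instance is installed in.
    function buildRedirectUrl(string $protocol, string $host, string $webRoot, string $route): string {
        return $protocol . '://' . $host . $webRoot . $route;
    }

    // buildRedirectUrl('https', 'cloud.example.com', '/nextcloud', '/index.php/login/challenge')
    //   => 'https://cloud.example.com/nextcloud/index.php/login/challenge'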
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
This removes the need for temporary storages with some external storage backends.
The new method is added to a separate interface to maintain compatibility with
storage backends implementing the storage interface directly (without inheriting common).
Currently the interface is implemented for object storage based storages and local
storage, and is used by WebDAV uploads.
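A hedged sketch of what such a separate interface could look like; the
interface name and exact signature are inferred from the description and
should be treated as assumptions rather than the definitive API:

    <?php
    // Storages implementing this interface can write directly from a stream,
    // so callers (e.g. WebDAV uploads) don't need a temporary file first.
    interface IWriteStreamStorage {
        /**
         * Write the data from the given stream to the given path and return the
         * number of bytes written.
         *
         * @param resource $stream
         */
        public function writeStream(string $path, $stream, ?int $size = null): int;
    }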
Signed-off-by: Robin Appelman <robin@icewind.nl>
Before we'd round up all preview requests to their nearest power of two.
This still resulted in a lot of possible images, generating a lot of
server load and taking up a lot of space.
This changes previews to be powers of 4: 64, 256, 1024 and 4096.
The first two powers (4, 16) are always skipped, as it doesn't make
sense to generate previews that small.
We cache previews pretty aggressively and I feel this is a better
tradeoff.
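A small sketch of the rounding described above (illustrative, not the
actual preview code): requested sizes are rounded up to the next power of
four, and the 4 and 16 buckets are never used.

    <?php
    // Round a requested preview size up to the next power of four,
    // starting at 64 so the 4 and 16 buckets are skipped entirely.
    function roundPreviewSize(int $requested): int {
        $size = 64;
        while ($size < $requested && $size < 4096) {
            $size *= 4; // 64 -> 256 -> 1024 -> 4096
        }
        return $size;
    }

    // roundPreviewSize(100) === 256, roundPreviewSize(2000) === 4096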
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The PHP built-in server can crash when certain actions are performed in
Nextcloud (although the crash is triggered by Nextcloud it does not seem
to be a Nextcloud bug), which can lead to failures in acceptance tests
that would have otherwise passed.
A crash of the PHP built-in server during an acceptance test can be
identified by the message "sh: 1: kill: No such process" in the
acceptance tests output; as the PHP built-in server crashed, its process
no longer exists when the scenario ends and an attempt is made to kill
it.
Although the crash has been observed in other tests too, it is more
prevalent in the tests for tags and the theming app. In order to
reduce the false positives those tests are now run on Apache instead of
on the PHP built-in server. However, the rest of the tests are still run
on the PHP built-in server due to its lower resource consumption.
In order to run a feature or just a scenario using Apache it has to be
tagged with "@apache"; features or scenarios without that tag (the
default) will run on the PHP built-in server instead.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
In order to run the acceptance tests in Apache "/var/www/html" has to be
linked to the root directory of the Nextcloud server. Before, this was
automatically done when launching the acceptance tests through
"./run.sh", but an explicit command was needed when running them in
Drone. Now the linking has been moved from "run.sh" to "run-local.sh",
so it is automatically done both when run through "./run.sh" and in
Drone, including when running the tests for an app instead of for the
server.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Each time a new actor appears in a scenario the browser window of the
new actor is put in front of the browser windows of the previous actors.
Before, when acting again as a previous actor his browser window stayed
in the background; in most cases everything worked fine even if the
window was in the background but, due to a bug in the Firefox driver of
Selenium and/or maybe in Firefox itself, when the window was in the
background it was not possible to set the value of an input field that
had a range selected.
Now, when acting again as a previous actor his browser window is brought
to the foreground. This prevents the bug from manifesting, but also
reflects better how a user would interact with the browser in real life.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
As discussed in https://github.com/nextcloud/server/issues/11594, when
checking whether x-forwarded-for is working properly it's not possible
to use getRemoteAddr because the "client IP" is returned. For this check
the IP of the last hop is required.
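A hedged illustration of the distinction: getRemoteAddr() resolves the
forwarded client IP, while the last hop is the peer the web server
actually talked to, which is what the forwarded-for check needs.

    <?php
    // The address of the directly connected peer (the last hop) is always in
    // REMOTE_ADDR, regardless of any X-Forwarded-For header sent by proxies.
    function getLastHopAddress(): string {
        return $_SERVER['REMOTE_ADDR'] ?? '';
    }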
Signed-off-by: Daniel Kesselberg <mail@danielkesselberg.de>
Some related tests had to be changed because they relied on internals; see also the PHPUnit documentation:
"Exercise caution when using [the at] matcher as it can lead to brittle tests which are too closely tied to specific implementation details."
Signed-off-by: Zulan <git@zulan.net>
In 2f87fb6b45 this header was introduced. The referenced documentation says:
> When delivered with a response from https://example.com/clear, the following header will cause cookies associated with the origin https://example.com to be cleared, as well as cookies on any origin in the same registered domain (e.g. https://www.example.com/ and https://more.subdomains.example.com/).
This also applies if `https://nextcloud.example.com/` sends the `Clear-Site-Data: "cookies"` header.
This is not the behavior we want at this point!
So I removed the cookie deletion from the header. This has no effect on the logout process: the header has only recently gained browser support, and the logout works in older browsers as well.
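For illustration only (the exact remaining directives are an assumption,
not necessarily what the code sends): the header is still emitted on
logout, just without the "cookies" directive.

    <?php
    // Clear cached data on logout, but do not ask the browser to clear cookies,
    // since "cookies" would also wipe cookies for every origin in the same
    // registered domain (e.g. other services under example.com).
    header('Clear-Site-Data: "cache", "storage"');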
Signed-off-by: Patrick Conrad <conrad@iza.org>
This is IMO a bit more readable and it seems to make the code faster.
Tested it on the company instance where there are over 3k calls to this
function. It shaves off around 10ms.
The advantage here is that the pattern gets optimized by PHP itself and
cached.
Also, looking for all patterns at the same time, and especially no
longer looping for "/./" patterns, should save time.
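A hedged sketch of the idea (the pattern shown is only an example, not the
real one): let PCRE handle all "/./" occurrences in one compiled and
cached pattern instead of looping over the path.

    <?php
    // Collapse every "/./" segment in one pass; PHP compiles and caches the
    // pattern, so repeated calls don't pay the compilation cost again.
    function stripCurrentDirSegments(string $path): string {
        return preg_replace('#/(\./)+#', '/', $path);
    }

    // stripCurrentDirSegments('/a/./b/././c') === '/a/b/c'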
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Fixes #11617
The OCS routes are only absolute for now, as they are often exposed to
the outside anyway and are on a different endpoint than index.php.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Now that we allow enforcing two-factor auth it makes sense to also add
an endpoint where the clients can fetch an app password in the
background if they were configured before the login flow was present.
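A hedged sketch of how a client could use such an endpoint in the
background; the URL path, header and credential handling are assumptions
for illustration, not the exact client implementation.

    <?php
    // Fetch an app password with the credentials the client already has stored,
    // so clients configured before the login flow existed can migrate silently.
    $ch = curl_init('https://cloud.example.com/ocs/v2.php/core/getapppassword');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER => ['OCS-APIRequest: true'],
        CURLOPT_USERPWD => 'user:stored-password',
    ]);
    $response = curl_exec($ch);
    curl_close($ch);
    // $response contains the new app password on success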
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Generate a notification to generate backup codes if you enable another
2FA provider but backup codes are not yet generated.
* Add event listener
* Insert background job
* Background job, with tests, that emits a notification every 2 weeks
* If the backup codes have been generated the next run will remove the job
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Since there is no calendar release for 15 yet we should use an app that
we can quickly release for 15 as well.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Before each scenario of the acceptance tests is run the Nextcloud server
is reset to a default state. To do this the full directory of the
Nextcloud server is committed to a local Git repository and then reset to
that commit when needed.
Unfortunately, Git does not support including empty directories in a
commit. Due to this, when the default state was restored, it could
happen that the file cache listed an empty directory that did not exist
because it was not properly restored (for example,
"data/appdata_*/css/icons"), and that in turn could lead to an error
when the directory was used.
Currently the only way to force Git to include an empty directory is to
add a dummy file to the directory (so it will no longer be empty,
but that should not be a problem in the affected directories, even if
the dummy file is not included in the file cache); although the Git FAQ
suggests using a ".gitignore" file, a ".keep" file was used instead, as
it conveys its purpose better.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
We use the same logic for creating accounts without a password, and there the 12h is a bit short. Users don't expect that the signup link needs to be clicked within 12h; 7d should match expectations better.
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
When two or more users share the same email address it's not possible
to reset the password by email, even when only one account is active.
This PR removes disabled users from the list returned by getByEmail.
Signed-off-by: Daniel Kesselberg <mail@danielkesselberg.de>
Otherwise, if a preview provider is registered but not available (for
example, missing support in some external lib), it will do 💥. This way
the providers can at least do the required sanity checks.
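A hedged example of such a sanity check, using the ImageMagick/HEIC case
as illustration; the function shown is not the exact provider code.

    <?php
    // Report the HEIC preview provider as unavailable when the ImageMagick
    // extension is missing or its build has no HEIC support, instead of
    // failing later when a preview is actually requested.
    function heicPreviewProviderIsAvailable(): bool {
        if (!extension_loaded('imagick')) {
            return false;
        }
        return count(\Imagick::queryFormats('HEI*')) > 0;
    }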
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
- implement isAvailable
- run tests only if ImageMagick with HEIC support is available in the
environment
Signed-off-by: Sebastian Steinmetz <me@sebastiansteinmetz.ch>
Using "file" will overwrite the $file parameter in the template base,
leading to trying to include a file whose name is the exception message,
which will of course fail.
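A minimal illustration of the clash, assuming a template base that
extracts its parameters into local variables; the variable names mirror
the description but the snippet is not the actual template code.

    <?php
    // The template base keeps the template path in $file and extracts the
    // parameters into local variables before including the template.
    $file = '/path/to/exception-template.php';
    $params = ['file' => 'Some exception message'];

    extract($params);  // $file is now 'Some exception message'
    // include $file;  // would try to include the exception message and fail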
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Tokens will be used to give access to a share to guests in public rooms.
Although the token itself is created in the provider of room shares and
no changes are needed for that, due to the code structure it is
necessary to explicitly call the provider from the manager when getting
a room share by token.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If the 2FA provider registry has not been populated yet, we have to make
sure all available providers are loaded and queried on login. Otherwise
previously active 2FA providers aren't detected as enabled.
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
Only update the encrypted version after the write operation is finished and the stream is closed.
Signed-off-by: Bjoern Schiessle <bjoern@schiessle.org>
This is required to not break compatibility with existing consumers of that endpoint like the apps management or the client
Signed-off-by: Julius Härtl <jus@bitgrid.net>
When a password was set for a mail share an e-mail was sent to the
recipient with the password. Now the e-mail is no longer sent if the
password is meant to be sent by Talk.
However, before, the e-mail was not sent when the share was updated but
the password was not changed. Now an e-mail is sent in that case too
when switching from a password sent by Talk to a password sent by mail.
On the other hand, when switching from a password sent by mail to a
password sent by Talk it is mandatory to change the password; otherwise
the recipient would already have access to the share without having to
call the sharer to verify her identity.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The share link UI no longer uses its own layout below the other shares;
now it is shown as a share row with a menu for the actions (except
enabling it, which is shown in the row itself), just like the other
shares.
The share link is no longer shown either; now the link is obtained by
clicking on a "Copy URL" menu item, which copies the link to the
clipboard. As the clipboard is not accessible from the acceptance tests
the URL is now extracted from the attributes of that menu item (although
the menu item is clicked anyway to mimic the user behaviour).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, each section of the Files app ("All files", "Favorites"...) had
its own sidebar element. Now there is a single sidebar element for all
the sections in the Files app.
Signed-off-by: John Molakvoæ (skjnldsv) <skjnldsv@protonmail.com>
This can be caused by the code releasing more locks than it acquires;
once the lock value becomes negative it is likely that it will never be
able to change into an exclusive lock again.
Signed-off-by: Robin Appelman <robin@icewind.nl>
As "selenium.server" is a simulated variable it is not recognized by
Mink, so it must always be replaced by its value in "behat.yml" before
the file is parsed by Behat.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The "wd_host" parameter of Selenium2 sessions specifies the URL used by
the Selenium driver to connect with the Selenium server. Thus, when the
Selenium server is at a different host or port than the default one (for
example, when run on Drone) the "wd_host" parameter must be set for each
of the Selenium2 sessions defined in "behat.yml".
The "BEHAT_PARAMS" environment variable, which extends the "behat.yml"
configuration file, was used for that. However, this required adding to
the "BEHAT_PARAMS" in "run-local.sh" each new session added to
"behat.yml", including those added in the acceptance tests of apps.
To address that limitation, this commit introduces a simulated variable,
"selenium.server"; just before the acceptance tests are run the
"selenium.server" variable in the "wd_host" parameter is replaced in the
"behat.yml" file used by the acceptance tests. Note that the file that
is modified is the one inside the Docker container used to run the
acceptance tests, so the original file is not touched.
Note that a simulated variable is needed because Behat does not support
overriding or setting configuration parameters with environment
variables.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, the acceptance tests checked the header colour just once, as the
header colour was immediately changed once the new theming colour was
saved. This is no longer the case, as currently a transition is used to
change between the original colour and the new one, so now the
acceptance tests check repeatedly for the expected header colour until
it matches or the timeout expires.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
This adds persistence to the Nextcloud server 2FA logic so that the server
knows which 2FA providers are enabled for a specific user at any time, even
when the provider is not available.
The `IStatefulProvider` interface was added as a tagging interface for providers
that are compatible with this new API.
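A hedged sketch of what a tagging interface like this looks like; in
practice it would extend the existing 2FA provider interface, which is
omitted here to keep the snippet self-contained.

    <?php
    // Tagging interface: it adds no methods of its own, implementing it only
    // signals that the provider keeps its per-user enabled/disabled state in
    // sync with the new persistent registry.
    interface IStatefulProvider {
    }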
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
According to the array_merge documentation, "If the input arrays have
the same string keys, then the later value for that key will overwrite
the previous one." Thus, the default options must be the first parameter
passed to array_merge.
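A small self-contained example of the quoted behaviour: with string keys
the later array wins, so the defaults have to come first for the caller's
options to be able to override them.

    <?php
    $defaults = ['verify' => true, 'timeout' => 10];
    $options  = ['timeout' => 30];

    // correct order: defaults first, caller's options override them
    print_r(array_merge($defaults, $options)); // ['verify' => true, 'timeout' => 30]

    // wrong order: the defaults would silently override the caller's options
    print_r(array_merge($options, $defaults)); // ['verify' => true, 'timeout' => 10]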
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
* gives the admin a chance to discover the missing indexes and improve the performance of the instance without digging through the manual
* nicely integrated into the setup checks, where this kind of hint belongs
* also adds an option to integrate this from an app based on events
* fix style of setting warnings
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
This avoids having to do it at all the places we want cached responses.
We can't inject the ITimeFactory without breaking the public API.
However, we can simply overwrite the service (resulting in the same
testable effect).
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Although in the case of the acceptance tests for the server it is not
strictly needed, it was modified for consistency with the configuration
used for the acceptance tests in apps.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Due to a bug in the Mink Extension for Behat it is not possible to use
the "paths.base" parameter in the path to the custom Firefox profile.
"paths.base" is a special parameter in the Behat configuration that
refers to the directory in which "behat.yml" is stored. This comes in
very handy to set the path to custom Firefox profiles in the acceptance
tests for apps, as even if the "behat.yml" file belongs to an app its
paths are relative to the directory in which the tests are run, that is,
the "tests/acceptance" directory of the server.
Until the bug is fixed, just before the acceptance tests are run the
"paths.base" parameter in the path to the custom Firefox profile is
replaced by its value in the "behat.yml" file used by the acceptance
tests. Note that the file that is modified is the one inside the Docker
container used to run the acceptance tests, so the original file is not
touched.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The acceptance tests are currently run on Firefox 47; in that version
the CSS grid support was not enabled by default, but it could be enabled
through a setting in the Firefox profile.
By default Selenium uses a clean Firefox profile when a new session is
started, but it also allows the customization of the profile through a
zipped "user.js" file. The contents of that file have to be provided in
the "firefox_profile" capability when the Firefox session is created.
In the Mink extension for Behat several Mink sessions can be defined in
the "behat.yml" file. Each Mink session uses a different browser session
in Selenium, and each of those browser sessions is initialized with the
capabilities provided in the "behat.yml" file.
From the point of view of the acceptance tests each Mink session is an
actor, so different actors can use different browsers with different
capabilities.
Due to all this a new actor was introduced, "Rubeus", who uses a Firefox
browser that has CSS grid support; this actor is meant to be used only
in those acceptance tests that require proper support for CSS grids.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
However, due to the nature of what we store in the token (encrypted
passwords etc.), we can't just delete the tokens because that would make
the OAuth refresh useless.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>