The PHP built-in server can crash when certain actions are performed in
Nextcloud (although the crash is triggered by Nextcloud, it does not
seem to be a Nextcloud bug), which can lead to failures in acceptance
tests that would have otherwise passed.
A crash of the PHP built-in server during an acceptance test can be
identified by the message "sh: 1: kill: No such process" in the
acceptance tests output; as the PHP built-in server crashed, its process
no longer exists when the scenario ends and an attempt is made to kill
it.
Although the crash has been observed in other tests too, it is more
prevalent in the tests for tags and the theming app. In order to reduce
the false positives those tests are now run on Apache instead of on the
PHP built-in server. However, the rest of the tests are still run on the
PHP built-in server due to its lower resource consumption.
In order to run a feature or just a scenario using Apache it has to be
tagged with "@apache"; features or scenarios without that tag (the
default) will run on the PHP built-in server instead.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
In order to run the acceptance tests on Apache, "/var/www/html" has to
be linked to the root directory of the Nextcloud server. Before, this
was done automatically when launching the acceptance tests through
"./run.sh", but an explicit command was needed when run in Drone. Now
the linking has been moved from "run.sh" to "run-local.sh", so it is
done automatically both when run through "./run.sh" and in Drone,
including when running the tests for an app instead of for the server.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Each time a new actor appears in a scenario the browser window of the
new actor is put in front of the browser windows of the previous actors.
Before, when acting again as a previous actor, his browser window
stayed in the background; in most cases everything worked fine even if
the window was in the background, but due to a bug in the Firefox driver
of Selenium, and/or maybe in Firefox itself, it was not possible to set
the value of an input field that had a range selected while the window
was in the background.
Now, when acting again as a previous actor his browser window is brought
to the foreground. This prevents the bug from manifesting, but also
reflects better how a user would interact with the browser in real life.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
As discussed in https://github.com/nextcloud/server/issues/11594, when
discovering whether x-forwarded-for is working properly it is not
possible to use getRemoteAddr, because the "client ip" is returned. For
this check the IP of the last hop is required.
Signed-off-by: Daniel Kesselberg <mail@danielkesselberg.de>
Some related tests had to be changed because they relied on internals;
see also the PHPUnit documentation:
"Exercise caution when using [the at] matcher as it can lead to brittle tests which are too closely tied to specific implementation details."
Signed-off-by: Zulan <git@zulan.net>
In 2f87fb6b45 this header was introduced. The referenced documentation says:
> When delivered with a response from https://example.com/clear, the following header will cause cookies associated with the origin https://example.com to be cleared, as well as cookies on any origin in the same registered domain (e.g. https://www.example.com/ and https://more.subdomains.example.com/).
This also applies if `https://nextcloud.example.com/` sends the `Clear-Site-Data: "cookies"` header.
This is not the behavior we want at this point!
So I removed the deletion of cookies from the header. This has no
effect on the logout process, as this header is only supported by recent
browsers and the logout works in old browsers as well.
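A hedged sketch of the resulting header, assuming the logout response
exposes addHeader(); the exact set of remaining directives is an
assumption, the point is only that "cookies" is gone:

    // Keep clearing cache and storage on logout, but no longer clear cookies,
    // so cookies of sibling subdomains under the same registered domain survive.
    $response->addHeader('Clear-Site-Data', '"cache", "storage"');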
Signed-off-by: Patrick Conrad <conrad@iza.org>
This is IMO a bit more readable and it seems to make the code faster.
I tested it on the company instance, where there are over 3k calls to
this function; it shaves off around 10ms.
The advantage here is that the pattern gets optimized by PHP itself and
cached.
Looking for all patterns at the same time, and especially no longer
looping for "/./" patterns, should also save time.
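A hedged sketch of the idea (not the actual Nextcloud code): one
combined pattern is compiled and cached by PCRE, instead of looping over
the path and testing each fragment separately.

    // Matches backslashes as well as "." and ".." path segments in a single pass.
    function isForbiddenPath(string $path): bool {
        return preg_match('#\\\\|(^|/)\.\.?($|/)#', $path) === 1;
    }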
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Fixes #11617
The OCS routes are only absolute for now, as they are often exposed to
the outside and are on a different endpoint than index.php anyway.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Now that we allow enforcing 2-factor auth, it makes sense to also add
an endpoint where the clients can fetch an app password in the
background if they were configured before the login flow was present.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Generate a notification to generate backup codes if you enable another
2FA provider but backup codes are not yet generated.
* Add event listener
* Insert background job (sketched below)
* Background job tests; the job emits a notification every 2 weeks
* If the backup codes are generated, the next run will remove the job
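A hedged sketch of the registration step only; the job class name is an
assumption, and the real listener also checks the 2FA registry before
queueing:

    use OCP\BackgroundJob\IJobList;

    // Called when another 2FA provider is enabled for a user who has no backup codes yet.
    function onTwoFactorProviderEnabled(IJobList $jobList, string $uid): void {
        // The job re-sends the notification every two weeks and removes itself
        // from the job list once backup codes have been generated.
        $jobList->add(RememberBackupCodesJob::class, ['uid' => $uid]);
    }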
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Since there is no calendar release for 15 yet we should use an app that
we can quickly release for 15 as well.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Before each scenario of the acceptance tests is run, the Nextcloud
server is reset to a default state. To do this the full directory of the
Nextcloud server is committed to a local Git repository and then reset
to that commit when needed.
Unfortunately, Git does not support including empty directories in a
commit. Due to this, when the default state was restored, it could
happen that the file cache listed an empty directory that did not exist
because it was not properly restored (for example,
"data/appdata_*/css/icons"), and that in turn could lead to an error
when the directory was used.
Currently the only way to force Git to include an empty directory is to
add a dummy file to the directory (so it will no longer be empty, but
that should not be a problem in the affected directories, even if the
dummy file is not included in the file cache); although the Git FAQ
suggests using a ".gitignore" file, a ".keep" file was used instead, as
it conveys its purpose better.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
We use the same logic for creating accounts without a password, and
there the 12h is a bit short. Users don't expect that the signup link
needs to be clicked within 12h; 7d should be a more expected behavior.
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
When two or more users share the same email address it is not possible
to reset the password by email, even when only one account is active.
This PR removes disabled users from the list returned by getByEmail.
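A hedged sketch of the filtering, assuming the public
OCP\IUserManager::getByEmail() and OCP\IUser::isEnabled() methods:

    $email = 'user@example.com'; // the address entered in the reset form
    $users = \OC::$server->getUserManager()->getByEmail($email);
    // Drop disabled accounts so a unique, active user is left for the password reset.
    $users = array_filter($users, function (\OCP\IUser $user) {
        return $user->isEnabled();
    });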
Signed-off-by: Daniel Kesselberg <mail@danielkesselberg.de>
Otherwise, if a preview provider is registered but not available (for
example, due to missing support in some external lib), it will do 💥.
This way the providers can at least do the required sanity checks.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
- implement isAvailable (a sketch of such a check follows below)
- run tests only if ImageMagick with HEIC support is available in the
environment
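A hedged sketch of such an availability check, assuming the Imagick
extension's Imagick::queryFormats(); the class name is illustrative and
the real provider signature may differ slightly:

    use OCP\Files\FileInfo;

    class HeicAvailabilityCheck {
        // The provider is only usable when Imagick is loaded and reports a HEIC/HEIF format.
        public function isAvailable(FileInfo $file): bool {
            return extension_loaded('imagick')
                && count(\Imagick::queryFormats('HEI*')) > 0;
        }
    }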
Signed-off-by: Sebastian Steinmetz <me@sebastiansteinmetz.ch>
Using "file" will overwrite the $file parameter in the template base,
leading to an attempt to include a file whose path is the exception
message, which will of course fail.
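A simplified, hedged illustration of the clash (not the actual renderer
code); the failing include is the whole point:

    $file = '/templates/exceptions/exception.php'; // path the renderer wants to include
    $parameters = ['file' => 'Something went wrong']; // a template variable named "file"
    extract($parameters);                             // $file is now the exception message
    include $file;                                    // fails: tries to include the message text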
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Tokens will be used to give access to a share to guests in public rooms.
Although the token itself is created in the provider of room shares and
no changes are needed for that, due to the code structure it is
necessary to explicitly call the provider from the manager when getting
a room share by token.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If the 2FA provider registry has not been populated yet, we have to make
sure all available providers are loaded and queried on login. Otherwise
previously active 2FA providers aren't detected as enabled.
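A hedged sketch of the idea against the public
OCP\Authentication\TwoFactorAuth interfaces; the helper name and the way
the available providers are obtained are illustrative:

    use OCP\Authentication\TwoFactorAuth\IProvider;
    use OCP\Authentication\TwoFactorAuth\IRegistry;
    use OCP\IUser;

    /** @param IProvider[] $availableProviders all providers that could be loaded */
    function populateRegistryIfNeeded(IRegistry $registry, IUser $user, array $availableProviders): void {
        if (!empty($registry->getProviderStates($user))) {
            return; // the registry already knows this user's providers
        }
        foreach ($availableProviders as $provider) {
            if ($provider->isTwoFactorAuthEnabledForUser($user)) {
                $registry->enableProviderFor($provider, $user);
            }
        }
    }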
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
Only update the encrypted version after the write operation is finished
and the stream is closed
Signed-off-by: Bjoern Schiessle <bjoern@schiessle.org>
This is required to not break compatibility with existing consumers of that endpoint like the apps management or the client
Signed-off-by: Julius Härtl <jus@bitgrid.net>
When a password was set for a mail share an e-mail was sent to the
recipient with the password. Now the e-mail is no longer sent if the
password is meant to be sent by Talk.
However, before, the e-mail was not sent when the share was updated but
the password was not changed. Now an e-mail is sent in that case too
when switching from a password sent by Talk to a password sent by mail.
On the other hand, when switching from a password sent by mail to a
password sent by Talk it is mandatory to change the password; otherwise
the recipient would already have access to the share without having to
call the sharer to verify her identity.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The share link UI no longer uses its own layout below the other shares;
now it is shown as a share row with a menu for the actions (except
enabling it, which is shown in the row itself), just like the other
shares.
The share link itself is no longer shown, either; now the link is
obtained by clicking on a "Copy URL" menu item, which copies the link to
the clipboard. As the clipboard is not accessible from the acceptance
tests, the URL is now extracted from the attributes of that menu item
(although the menu item is clicked anyway to mimic the user behaviour).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, each section of the Files app ("All files", "Favorites"...) had
its own sidebar element. Now there is a single sidebar element for all
the sections in the Files app.
Signed-off-by: John Molakvoæ (skjnldsv) <skjnldsv@protonmail.com>
This can be caused by the code releasing more locks than it acquires;
once the lock value becomes negative it is likely that it will never be
able to change into an exclusive lock again.
Signed-off-by: Robin Appelman <robin@icewind.nl>
As "selenium.server" is a simulated variable it is not recognized by
Mink, so it must be always replaced by its value in "behat.yml" before
the file is parsed by Behat.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The "wd_host" parameter of Selenium2 sessions specify the URL used by
the Selenium driver to connect with the Selenium server. Thus, when the
Selenium server is at a different host or port than the default one (for
example, when run on Drone) the "wd_host" parameter must be set for each
of the Selenium2 sessions defined in "behat.yml".
The "BEHAT_PARAMS" environment variable, which extends the "behat.yml"
configuration file, was used for that. However, this required adding to
the "BEHAT_PARAMS" in "run-local.sh" each new session added to
"behat.yml", including those added in the acceptance tests of apps.
To address that limitation, this commit introduces a simulated variable,
"selenium.server"; just before the acceptance tests are run the
"selenium.server" variable in the "wd_host" parameter is replaced in the
"behat.yml" file used by the acceptance tests. Note that the file that
is modified is the one inside the Docker container used to run the
acceptance tests, so the original file is not touched.
Note that a simulated variable is needed because Behat does not support
overriding or setting configuration parameters with environment
variables.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, the acceptance tests checked the header colour just once, as
the header colour was changed immediately once the new theming colour
was saved. This is no longer the case, as currently a transition is used
to change between the original colour and the new one, so now the
acceptance tests check repeatedly for the expected header colour until
it matches or the timeout expires.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
This adds persistence to the Nextcloud server 2FA logic so that the server
knows which 2FA providers are enabled for a specific user at any time, even
when the provider is not available.
The `IStatefulProvider` interface was added as a tagging interface for
providers that are compatible with this new API.
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
According to the array_merge documentation, "If the input arrays have
the same string keys, then the later value for that key will overwrite
the previous one." Thus, the default options must be the first parameter
passed to array_merge.
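A small PHP illustration (the option keys are made up):

    $defaults = ['timeout' => 10, 'verify' => true];
    $options  = ['timeout' => 30];

    array_merge($options, $defaults); // ['timeout' => 10, 'verify' => true]: caller's value lost
    array_merge($defaults, $options); // ['timeout' => 30, 'verify' => true]: caller's value kept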
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
* gives the admin a chance to discover the missing indexes and improve
the performance of the instance without digging through the manual
* nicely integrated in the setup checks, where this kind of hint belongs
* also adds an option to integrate this from an app, based on events
* fixes the style of settings warnings
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
This avoids having to do it at all the places where we want cached
responses.
We can't inject the ITimeFactory without breaking public API. However,
we can perfectly overwrite the service (resulting in the same testable
effect).
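A hedged sketch of the test-side overwrite, assuming a PHPUnit test case
and the server container's registerService(); the fixed timestamp is
arbitrary:

    use OCP\AppFramework\Utility\ITimeFactory;

    // Replace the registered ITimeFactory so cache/expires headers become deterministic.
    $timeFactory = $this->createMock(ITimeFactory::class);
    $timeFactory->method('getTime')->willReturn(1234567);
    \OC::$server->registerService(ITimeFactory::class, function () use ($timeFactory) {
        return $timeFactory;
    });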
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Although in the case of the acceptance tests for the server it is not
strictly needed, it was modified for consistency with the configuration
used for the acceptance tests in apps.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Due to a bug in the Mink Extension for Behat it is not possible to use
the "paths.base" parameter in the path to the custom Firefox profile.
"paths.base" is a special parameter in the Behat configuration that
refers to the directory in which "behat.yml" is stored. This comes in
very handy to set the path to custom Firefox profiles in the acceptance
tests for apps, as even if the "behat.yml" file belongs to an app its
paths are relative to the directory in which the tests are run, that is,
the "tests/acceptance" directory of the server.
Until the bug is fixed, just before the acceptance tests are run the
"paths.base" parameter in the path to the custom Firefox profile is
replaced by its value in the "behat.yml" file used by the acceptance
tests. Note that the file that is modified is the one inside the Docker
container used to run the acceptance tests, so the original file is not
touched.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The acceptance tests are currently run on Firefox 47; in that version
the CSS grid support was not enabled by default, but it could be enabled
through a setting in the Firefox profile.
By default Selenium uses a clean Firefox profile when a new session is
started, but it also allows the customization of the profile through a
zipped "user.js" file. The contents of that file have to be provided in
the "firefox_profile" capability when the Firefox session is created.
In the Mink extension for Behat several Mink sessions can be defined in
the "behat.yml" file. Each Mink session uses a different browser session
in Selenium, and each of those browser sessions is initialized with the
capabilities provided in the "behat.yml" file.
From the point of view of the acceptance tests each Mink session is an
actor, so different actors can use different browsers with different
capabilities.
Due to all this a new actor was introduced, "Rubeus", who uses a Firefox
browser that has CSS grid support; this actor is meant to be used only
in those acceptance tests that require proper support for CSS grids.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
However, due to the nature of what we store in the token (encrypted
passwords etc.), we can't just delete the tokens, because that would
make the OAuth refresh useless.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Fixes #4577
Users with a quota of 0 are a special case, since they can't (ever)
create files on their own storage. Therefore it makes no real sense that
they can create folders (and possibly share those, etc.).
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The offset is based on the last known comment instead of limit-offset,
so new comments don't mess up requests which get the history of an
object.
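A hedged sketch of the resulting query using OCP's query builder; the
table and column names are illustrative and the input variables are
assumed to come from the request:

    $qb = \OC::$server->getDatabaseConnection()->getQueryBuilder();
    $qb->select('*')
        ->from('comments')
        ->where($qb->expr()->eq('object_id', $qb->createNamedParameter($objectId)))
        // page relative to the last comment the client already knows, not a numeric OFFSET
        ->andWhere($qb->expr()->lt('id', $qb->createNamedParameter($lastKnownCommentId)))
        ->orderBy('id', 'DESC')
        ->setMaxResults($limit);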
Signed-off-by: Joas Schilling <coding@schilljs.com>
Before, there was a button to "quickly" add the untrusted domain to the
config. This button often didn't work, because the generated URL was
often untrusted as well. Thus removing it and providing proper docs
seems to be the better approach to handle this rare case.
Also, the log should not be spammed by messages for the untrusted domain
accesses, because they are user related and not necessarily an
administrative issue.
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
If an app requires a specific minor or patch level server version,
the version_compare prevented the installation, as only the major
version had been compared and that check obviously returns `false`.
Now the full version is used for comparison, making it possible to
release apps for a specific minor or patch level version of Nextcloud.
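As an illustration with made-up version numbers (server 14.0.4, app
requiring at least 14.0.3):

    version_compare('14', '14.0.3', '>=');     // false: only the major version, app rejected
    version_compare('14.0.4', '14.0.3', '>='); // true: full version, app can be installed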
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
For consistency with the helper for the Apache web server the helper for
the PHP built-in web server was renamed too.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The default and only helper to run acceptance tests runs them on the
PHP built-in web server. This commit introduces a new helper that can be
used to run them on an Apache web server instead.
This helper is meant to be used by the acceptance tests of apps that
require a multi-threaded web server to run (like Talk, due to its use of
long polling). To use the helper it is only needed to set it in the
Behat configuration for the acceptance tests of the app, as explained in
the "NextcloudTestServerContext" documentation.
It is assumed that the acceptance tests are run using the default setup,
and therefore inside a Docker container based on the image for
acceptance tests from Nextcloud. Due to that the helper is expected to
have root permissions, and thus it starts and stops the Apache web
server directly using "service apache2 start/stop". In the same way it
also restores the owner and group for "apps", "config" and "data" to
"www-data", as it is the user that Apache sub-processes are run as.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, the domain was automatically added assuming that the
NextcloudTestServerContext had no parameters defined in the Behat
configuration. However, in order to use a helper for Apache it would
need to be specified in the configuration with something like:
  - NextcloudTestServerContext:
      nextcloudTestServerHelper: NextcloudTestServerLocalApacheHelper
The substitution now works both when a helper is specified and when it
is not; note, however, that providing custom parameters to the helper is
not supported, although they are not needed anyway so it is not really a
problem.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Apache sub-processes are run as the www-data user, and they need to be
able to write to the "apps", "config" and "data" directories, so they
have to belong to that user, and therefore the Nextcloud server has to
be installed and configured too as the www-data user. The PHP built-in
web server will still be run as the root user, but in that case the
owner of those directories makes no difference, so this is compatible
with both cases.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The Docker image for acceptance tests provides support for both the PHP
built-in web server and the Apache web server; the acceptance tests for
the server are run on the PHP built-in web server, but the acceptance
tests for some apps will have to be run on the Apache web server (for
example, Talk, as it uses long polling), so a Docker image to support
both cases has to be used in "run.sh". ".drone.yml" was just updated for
consistency, although it was not really needed.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
When the acceptance tests were run they were always loaded from the
"tests/acceptance" directory of the Nextcloud server. Now it is possible
to set the directory used to look for the Behat configuration and the
Nextcloud installation script, which makes it possible to run acceptance
tests for the apps too instead of only for the server (although if no
directory is explicitly given the tests for the server are the ones
run).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
In order to autoload the server context classes the "bootstrap"
directory was explicitly listed in Behat autoload configuration. This is
fine in the configuration of acceptance tests for the server, but it
would force the configuration of acceptance tests for the apps to
explicitly include the path for the server context classes to be able to
use them (for example, for the login step).
Besides its own configuration, Behat also supports autoloading classes
using Composer, so now the context classes are autoloaded using Composer
instead; thanks to this the server context classes are also autoloaded
in the acceptance tests for apps without any explicit configuration in
them.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
When on PHP 7.2 we can use the new and improved ARGON2I hashing.
This adds support for that to the hasher. When verifying an old hash
we'll rehash it to eventually move all hashes to the new hash function.
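A hedged sketch of the approach using core PHP functions only (the real
hasher adds its own bookkeeping around this):

    $password   = 'correct horse battery staple';
    $storedHash = password_hash($password, PASSWORD_BCRYPT); // e.g. an old bcrypt hash

    // Prefer ARGON2I when available (PHP 7.2+), otherwise fall back to the default.
    $algo = defined('PASSWORD_ARGON2I') ? PASSWORD_ARGON2I : PASSWORD_DEFAULT;

    if (password_verify($password, $storedHash)) {
        if (password_needs_rehash($storedHash, $algo)) {
            $storedHash = password_hash($password, $algo); // persist the upgraded hash
        }
    }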
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>