If the server is too slow, changing to a different file immediately
after sending a new comment, but without waiting for the comment to be
shown for the original file, could cause the new comment to be shown
for the current file instead.
This is, indeed, a bug in the comments. However, it is not possible to
test it reliably in the acceptance tests, as it depends on how fast the
server adds the message and how fast the client changes to a different
file; sometimes the test would fail and sometimes it would not.
Therefore, the test now waits for the comment to be added before
changing to another file, as in this case it can be reliably verified
that changing to a different file does not cause the comments from the
previous file to be shown in the current file (this was a different
bug, already fixed, due to which this test was added in the first
place).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
When the "Comments" tab is open the empty content element is always in
the DOM, although it is only shown once the message collection was
fetched and there were no messages. Due to this it is necessary to
explicitly wait for it to be shown instead of relying on the implicit
wait made to find the element; otherwise it would be found immediately
and if the collection was not fetched yet it would not be visible,
causing the test to fail.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Having both "FilesAppSharingContext" and "FilesSharingAppContext" was
confusing, so "FilesSharingAppContext" was renamed to a more descriptive
name.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
To reshare a file there must be at least three enabled users in the
system; although it would be possible to run the steps to create a
third user in the scenarios that need it, for convenience a third
enabled user besides "admin" and "user0" was added to the default
setup.
In a similar way, a new step was also added to log in with a given user
name, similar to the steps to log in as "user0" and as "admin".
Finally, another actor, "Jim", was introduced for those scenarios which
should be played by three standard actors (that is, without a special
configuration like "Rubeus").
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The "Download" item in the menu of public share pages is no longer shown
in wide (>768px) windows (although the element is in the DOM and shown
if resized to a narrow window).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
In the acceptance tests the link share menu is automatically opened if
needed before interacting with an item in the menu; if the menu is not
open it is opened by clicking on its toggle.
However, since a recent change the link share menu is automatically
opened by the regular UI after the link share is created. Because of
this, sometimes, after the creation of a link share the acceptance
tests check whether the menu is shown before the menu was automatically
opened; as the menu is not open yet the acceptance tests proceed to
click on the toggle, but in the meantime the link share was created and
the menu opened, so clicking on the toggle now closes it. As the menu
is closed it is not possible to interact with its items and the test
fails.
To prevent that, the acceptance tests now wait for the link share menu
to open after a link share is created before continuing with the other
steps.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If the sendmail binary can't be found at all we fall back to the
default path.
It most likely is not there, but then at least a proper error message
pops up.
The tests were updated to pass accordingly.
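A minimal sketch of the fallback idea (not the actual implementation;
the helper name and the lookup via `command -v` are assumptions):

```php
<?php
// Sketch: locate sendmail, falling back to the conventional default path
// when the binary can't be found at all.
function findSendmailBinary(): string {
    $path = shell_exec('command -v sendmail 2> /dev/null');
    $path = \is_string($path) ? trim($path) : '';
    // Most likely /usr/sbin/sendmail is not there either, but then at
    // least a proper error message pops up when sending fails.
    return $path !== '' ? $path : '/usr/sbin/sendmail';
}
```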
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The update share tests only checked that the share returned by
"update()" had the expected values. However, as "update()" returns the
same share that was given as a parameter, the tests were not really
verifying that the values were updated in the database.
In a similar way, the test that checked that a password was removed did
not set a password first, so even if the database returned null it
could simply be returning the default value for the share; a password
must be set first to ensure that it is removed.
Besides that, a typo was also fixed that made the checks against the
original share instead of against the one returned by "update()"; right
now it is the same share, so the change makes no difference, but it is
how the check should be done anyway.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Although it is now possible to create several link shares, the
acceptance tests currently handle only the first link share; this
first link share is now created by clicking an "Add new share" button
instead of a checkbox.
Besides that, the "Copy link" button has been moved from the menu to the
row, next to the menu trigger.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If a user can't authenticate normally (for example because they have
2FA that is not available on their devices), the redirect that is
generated should have the proper format.
This means:
1. Include the protocol
2. Include the possible subfolder
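A hedged sketch of producing such a redirect with the URL generator
(the route and parameter names are illustrative, not taken from the
actual change):

```php
<?php
// Sketch: build an absolute URL so that the protocol and a possible
// subfolder (e.g. https://cloud.example.com/nextcloud/...) are included.
/** @var \OCP\IURLGenerator $urlGenerator */
/** @var \OCP\IRequest $request */
$redirectUrl = $urlGenerator->linkToRouteAbsolute(
    'core.TwoFactorChallenge.selectChallenge',      // illustrative route
    ['redirect_url' => $request->getRequestUri()]   // illustrative parameter
);
```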
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
This removes the need for temporary storages with some external storage backends.
The new method is added to a separate interface to maintain compatibility with
storage backends implementing the storage interface directly (without inheriting
from common).
Currently the interface is implemented for objectstorage-based storages and local
storage, and is used by WebDAV uploads.
Signed-off-by: Robin Appelman <robin@icewind.nl>
Before, we'd round up all preview requests to their nearest power of
two. This still resulted in a lot of possible images, generating a lot
of server load and taking up a lot of space.
This changes previews to be powers of 4: 64, 256, 1024 and 4096.
Also, the first two powers (4, 16) are always skipped, as it doesn't
make sense to generate previews for those.
We cache previews pretty aggressively and I feel this is a better
tradeoff.
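A minimal sketch of the rounding described above (assuming a requested
size is simply clamped up to the next of the four allowed values; the
real code may differ):

```php
<?php
// Sketch: round a requested preview dimension up to the next power of 4,
// limited to 64..4096 so the first two powers (4 and 16) are skipped.
function roundPreviewSize(int $requested): int {
    foreach ([64, 256, 1024, 4096] as $allowed) {
        if ($requested <= $allowed) {
            return $allowed;
        }
    }
    return 4096; // anything bigger is capped at the largest cached size
}
```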
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The PHP built-in server can crash when certain actions are performed in
Nextcloud (but although the crash is triggered by Nextcloud it does not
seem to be a Nextcloud bug), which can lead to failures in the
acceptance tests that would have otherwise passed.
A crash of the PHP built-in server during an acceptance test can be
identified by the message "sh: 1: kill: No such process" in the
acceptance tests output; as the PHP built-in server crashed, its
process no longer exists when the test tries to kill it at the end of
the scenario.
Although the crash has been observed in other tests too it is more
prevalent in the tests for tags and the theming app. In order to
reduce the false positives those tests are now run on Apache instead of
on the PHP built-in server. However, the rest of the tests are still run on
the PHP built-in server due to its lower resource consumption.
In order to run a feature or just a scenario using Apache it has to be
tagged with "@apache"; features or scenarios without that tag (the
default) will run on the PHP built-in server instead.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
In order to run the acceptance tests in Apache "/var/www/html" has to be
linked to the root directory of the Nextcloud server. Before, this was
automatically done when launching the acceptance tests through
"./run.sh", but an explicit command was needed when running in Drone.
Now the linking has been moved from "run.sh" to "run-local.sh", so it
is automatically done both when run through "./run.sh" and in Drone,
including when running the tests for an app instead of for the server.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Each time a new actor appears in a scenario the browser window of the
new actor is put in front of the browser windows of the previous actors.
Before, when acting again as a previous actor his browser window stayed
in the background; in most cases everything worked fine even if the
window was in the background, but due to a bug in the Firefox driver of
Selenium and/or maybe in Firefox itself when the window was in the
background it was not possible to set the value of an input field that
had a range selected.
Now, when acting again as a previous actor his browser window is brought
to the foreground. This prevents the bug from manifesting, but also
reflects better how a user would interact with the browser in real life.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
As discussed in https://github.com/nextcloud/server/issues/11594, when
discovering whether x-forwarded-for is working properly it's not
possible to use getRemoteAddr because the "client ip" is returned. For
this check the IP of the last hop is required.
Signed-off-by: Daniel Kesselberg <mail@danielkesselberg.de>
Some related tests had to be changed because they relied on internals, see also from the PHPUnit documentation:
"Exercise caution when using [the at] matcher as it can lead to brittle tests which are too closely tied to specific implementation details."
Signed-off-by: Zulan <git@zulan.net>
In 2f87fb6b45 this header was introduced. The referenced documentation says:
> When delivered with a response from https://example.com/clear, the following header will cause cookies associated with the origin https://example.com to be cleared, as well as cookies on any origin in the same registered domain (e.g. https://www.example.com/ and https://more.subdomains.example.com/).
This also applies if `https://nextcloud.example.com/` sends the `Clear-Site-Data: "cookies"` header.
This is not the behavior we want at this point!
So I removed the deletion of cookies from the header. This has no effect on the logout process: the header is only supported by recent browsers anyway, and the logout works in old browsers as well.
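For illustration only (the exact directive list sent by the server is
not reproduced here), the change amounts to dropping the "cookies"
directive from the header:

```php
<?php
// Before (illustrative): cookies were cleared for the whole registered
// domain, e.g. also for example.com when nextcloud.example.com logs out.
header('Clear-Site-Data: "cache", "cookies", "storage"');

// After (illustrative): no cookie deletion via this header.
header('Clear-Site-Data: "cache", "storage"');
```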
Signed-off-by: Patrick Conrad <conrad@iza.org>
This is IMO a bit more readable and it seems to make the code faster.
I tested it on the company instance, where there are over 3k calls to
this function; it shaves off around 10ms.
The advantage here is that the pattern gets optimized by PHP itself and
cached.
Also, looking for all patterns at the same time, and especially no
longer looping for /./ patterns, should save time.
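A hedged sketch of the idea (one combined pattern that PCRE compiles
and caches; the pattern shown is illustrative, not the exact one used):

```php
<?php
// Sketch: collapse "//" runs and "/./" segments in a single pass with
// one pattern, instead of looping over the path for each case.
function collapsePathSegments(string $path): string {
    return preg_replace('#(?:/\.)+/|/{2,}#', '/', $path) ?? $path;
}

// e.g. collapsePathSegments('/foo//bar/./baz') === '/foo/bar/baz'
```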
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Fixes #11617
The OCS routes are only absolute for now, as they are often exposed to
the outside and are on a different endpoint than index.php anyway.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Now that we allow enforcing two-factor auth it makes sense to also add
an endpoint where the clients can fetch an app password in the
background if they were configured before the login flow was present.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Generate a notification asking to generate backup codes if you enable
another 2FA provider but backup codes are not yet generated.
* Add event listener
* Insert background job
* Background job emits the notification every 2 weeks (with tests)
* If the backup codes are generated the next run will remove the job
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Since there is no calendar release for 15 yet we should use an app that
we can quickly release for 15 as well.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Before each scenario of the acceptance tests is run the Nextcloud server
is reset to a default state. To do this the full directory of the
Nextcloud server is committed to a local Git repository and then reset
to that commit when needed.
Unfortunately, Git does not support including empty directories in a
commit. Due to this, when the default state was restored, it could
happen that the file cache listed an empty directory that did not exist
because it was not properly restored (for example,
"data/appdata_*/css/icons"), and that in turn could lead to an error
when the directory was used.
Currently the only way to force Git to include an empty directory is to
add a dummy file to the directory (so it will no longer be empty,
but that should not be a problem in the affected directories, even if
the dummy file is not included in the file cache); although the Git
FAQ suggests using a ".gitignore" file, a ".keep" file was used
instead, as it conveys its purpose better.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
We use the same logic for creating accounts without a password, and there the 12h is a bit short. Users don't expect that the signup link needs to be clicked within 12h; 7d should be a more expected behavior.
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
When two or more users share the same email address it's not possible
to reset the password by email, even when only one account is active.
This PR filters disabled users out of the list returned by getByEmail.
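A hedged sketch of the filtering using the public user manager API:

```php
<?php
// Sketch: only consider enabled accounts when looking users up by
// e-mail for the password reset, so a single active account is still
// found even if a disabled account uses the same address.
/** @var \OCP\IUserManager $userManager */
$users = array_filter(
    $userManager->getByEmail($email),
    static function (\OCP\IUser $user): bool {
        return $user->isEnabled();
    }
);
```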
Signed-off-by: Daniel Kesselberg <mail@danielkesselberg.de>
Otherwise, if a preview provider is registered but not available (for
example due to missing support in some external lib), it will do 💥.
This way the providers can at least do the required sanity checks.
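A minimal sketch of the guard (the helper is illustrative; it assumes
providers expose an isAvailable() check as described above):

```php
<?php
// Sketch: only ask providers that report themselves as available, e.g.
// skip the HEIC provider when ImageMagick lacks HEIC support.
function firstAvailableProvider(array $providers, $file) {
    foreach ($providers as $provider) {
        if ($provider->isAvailable($file)) {
            return $provider; // this one may be asked for a thumbnail
        }
    }
    return null; // no provider can handle this file in this environment
}
```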
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
- implement isAvailable
- run tests only if ImageMagick with HEIC support is available in the
environment
Signed-off-by: Sebastian Steinmetz <me@sebastiansteinmetz.ch>
Using "file" will overwrite the $file parameter in the template base,
leading to trying to include a file whose path is the exception
message, which will of course fail.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Tokens will be used to give access to a share to guests in public rooms.
Although the token itself is created in the provider of room shares and
no changes are needed for that, due to the code structure it is
necessary to explicitly call the provider from the manager when getting
a room share by token.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If the 2FA provider registry has not been populated yet, we have to make
sure all available providers are loaded and queried on login. Otherwise
previously active 2FA providers aren't detected as enabled.
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
only update the encrypted version after the write operation is finished and the stream is closed
Signed-off-by: Bjoern Schiessle <bjoern@schiessle.org>
This is required to not break compatibility with existing consumers of that endpoint like the apps management or the client
Signed-off-by: Julius Härtl <jus@bitgrid.net>
When a password was set for a mail share an e-mail was sent to the
recipient with the password. Now the e-mail is no longer sent if the
password is meant to be sent by Talk.
However, before, the e-mail was not sent when the share was updated if
the password was not changed. Now an e-mail is sent in that case too
when switching from a password sent by Talk to a password sent by mail.
On the other hand, when switching from a password sent by mail to a
password sent by Talk it is mandatory to change the password; otherwise
the recipient would already have access to the share without having to
call the sharer to verify her identity.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The share link UI no longer uses its own layout below the other shares;
now it is shown as a share row with a menu for the actions (except
enabling it, which is shown in the row itself), just like the other
shares.
The share link is no longer shown either; now the link is obtained by
clicking on a "Copy URL" menu item, which copies the link to the
clipboard. As the clipboard is not accessible from the acceptance tests
the URL is now extracted from the attributes of that menu item
(although the menu item is clicked anyway to mimic the user behaviour).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, each section of the Files app ("All files", "Favorites"...) had
its own sidebar element. Now there is a single sidebar element for all
the sections in the Files app.
Signed-off-by: John Molakvoæ (skjnldsv) <skjnldsv@protonmail.com>
This can be caused by the code releasing more locks than it acquires;
once the lock value becomes negative it is likely that it will never be
able to change into an exclusive lock again.
Signed-off-by: Robin Appelman <robin@icewind.nl>
As "selenium.server" is a simulated variable it is not recognized by
Mink, so it must always be replaced by its value in "behat.yml" before
the file is parsed by Behat.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The "wd_host" parameter of Selenium2 sessions specify the URL used by
the Selenium driver to connect with the Selenium server. Thus, when the
Selenium server is at a different host or port than the default one (for
example, when run on Drone) the "wd_host" parameter must be set for each
of the Selenium2 sessions defined in "behat.yml".
The "BEHAT_PARAMS" environment variable, which extends the "behat.yml"
configuration file, was used for that. However, this required adding to
the "BEHAT_PARAMS" in "run-local.sh" each new session added to
"behat.yml", including those added in the acceptance tests of apps.
To address that limitation, this commit introduces a simulated variable,
"selenium.server"; just before the acceptance tests are run the
"selenium.server" variable in the "wd_host" parameter is replaced in the
"behat.yml" file used by the acceptance tests. Note that the file that
is modified is the one inside the Docker container used to run the
acceptance tests, so the original file is not touched.
Note that a simulated variable is needed because Behat does not support
overriding or setting configuration parameters with environment
variables.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, the acceptance tests checked the header colour just once, as the
header colour was immediately changed once the new theming colour was
saved. This is no longer the case, as currently a transition is used to
change between the original colour and the new one, so now the
acceptance tests check repeatedly for the expected header colour until
it matches or the timeout expires.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
This adds persistence to the Nextcloud server 2FA logic so that the server
knows which 2FA providers are enabled for a specific user at any time, even
when the provider is not available.
The `IStatefulProvider` interface was added as a tagging interface for providers
that are compatible with this new API.
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
According to the array_merge documentation, "If the input arrays have
the same string keys, then the later value for that key will overwrite
the previous one." Thus, the default options must be the first parameter
passed to array_merge.
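Illustrated with a small example:

```php
<?php
$defaults = ['timeout' => 10, 'verify' => true];
$options  = ['timeout' => 30];

// Defaults first: the user-supplied options win.
var_dump(array_merge($defaults, $options)); // ['timeout' => 30, 'verify' => true]

// Wrong order: the defaults would silently overwrite the user options.
var_dump(array_merge($options, $defaults)); // ['timeout' => 10, 'verify' => true]
```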
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
* gives the admin a chance to discover the missing indexes and improve the performance of the instance without digging through the manual
* nicely integrated into the setup checks, where this kind of hint belongs
* also adds an option to integrate this from an app based on events
* fix style of setting warnings
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
This avoids having to do it at all the places we want cached responses.
We can't inject the ITimeFactory without breaking the public API.
However we can perfectly overwrite the service (resulting in the same
testable effect).
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Although in the case of the acceptance tests for the server it is not
strictly needed it was modified for consistency with the configuration
used for the acceptance tests in apps.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Due to a bug in the Mink Extension for Behat it is not possible to use
the "paths.base" parameter in the path to the custom Firefox profile.
"paths.base" is a special parameter in the Behat configuration that
refers to the directory in which "behat.yml" is stored. This comes in
very handy to set the path to custom Firefox profiles in the acceptance
tests for apps, as even if the "behat.yml" file belongs to an app its
paths are relative to the directory in which the tests are run, that is,
the "tests/acceptance" directory of the server.
Until the bug is fixed, just before the acceptance tests are run the
"paths.base" parameter in the path to the custom Firefox profile is
replaced by its value in the "behat.yml" file used by the acceptance
tests. Note that the file that is modified is the one inside the Docker
container used to run the acceptance tests, so the original file is not
touched.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The acceptance tests are currently run on Firefox 47; in that version
the CSS grid support was not enabled by default, but it could be enabled
through a setting in the Firefox profile.
By default Selenium uses a clean Firefox profile when a new session is
started, but it also allows the customization of the profile through a
zipped "user.js" file. The contents of that file have to be provided in
the "firefox_profile" capability when the Firefox session is created.
In the Mink extension for Behat several Mink sessions can be defined in
the "behat.yml" file. Each Mink session uses a different browser session
in Selenium, and each of those browser sessions is initialized with the
capabilities provided in the "behat.yml" file.
From the point of view of the acceptance tests each Mink session is an
actor, so different actors can use different browsers with different
capabilities.
Due to all this a new actor was introduced, "Rubeus", who uses a Firefox
browser that has CSS grid support; this actor is meant to be used only
in those acceptance tests that require proper support for CSS grids.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
However, due to the nature of what we store in the token (encrypted
passwords etc.), we can't just delete the tokens, because that would
make the oauth refresh useless.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
Fixes #4577
Users with a quota of 0 are a special case, since they can't (ever)
create files on their own storage. Therefore it makes no real sense
that they can create folders (and possibly share those, etc.).
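A minimal sketch of the rule (not the actual implementation):

```php
<?php
// Sketch: a quota of exactly 0 means the user can never store any file,
// so creating folders (and possibly sharing them) is pointless as well.
// Unlimited quotas are commonly represented by a negative value.
function canCreateFolder(int $quotaInBytes): bool {
    return $quotaInBytes !== 0;
}
```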
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The offset is based on the last known comment instead of limit-offset,
so new comments don't mess up requests which fetch the history of an
object.
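A hedged sketch of the paging idea with the query builder (table and
column names are illustrative):

```php
<?php
// Sketch: page by "last known comment" instead of a numeric offset, so
// comments added in the meantime do not shift the window.
/** @var \OCP\DB\QueryBuilder\IQueryBuilder $qb */
$qb->select('*')
    ->from('comments')
    ->where($qb->expr()->eq('object_id', $qb->createNamedParameter($objectId)))
    ->andWhere($qb->expr()->lt('id', $qb->createNamedParameter($lastKnownCommentId)))
    ->orderBy('id', 'DESC')
    ->setMaxResults($limit);
```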
Signed-off-by: Joas Schilling <coding@schilljs.com>
Before, there was a button to "quickly" add the untrusted domain to the config. This button often didn't work, because the generated URL was often untrusted as well. Thus removing it and providing proper docs seems to be the better approach to handle this rare case.
Also, the log should not be spammed by messages for the untrusted domain accesses, because they are user related and not necessarily an administrative issue.
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
If an app requires a specific minor or patch level server version,
the version_compare prevented the installation, as only the major
version had been compared and that check obviously returns `false`.
Now the full version is used for comparison, making it possible to
release apps for a specific minor or patch level version of Nextcloud.
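The difference, illustrated with version_compare:

```php
<?php
// Comparing only the major server version rejects an app requiring 14.0.4:
var_dump(version_compare('14', '14.0.4', '>='));     // bool(false)

// Comparing the full server version accepts it:
var_dump(version_compare('14.0.4', '14.0.4', '>=')); // bool(true)
```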
Signed-off-by: Christoph Wurst <christoph@winzerhof-wurst.at>
For consistency with the helper for the Apache web server the helper for
the PHP built-in web server was renamed too.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The default and only helper to run acceptance tests runs them on the PHP
built-in web server. This commit introduces a new helper that can be
used to run them on an Apache web server instead.
This helper is meant to be used by the acceptance tests of apps that
require a multi-threaded web server to run (like Talk, due to its use of
long polling). To use the helper it only needs to be set in the
Behat configuration for the acceptance tests of the app, as explained in
the "NextcloudTestServerContext" documentation.
It is assumed that the acceptance tests are run using the default setup,
and therefore inside a Docker container based on the image for
acceptance tests from Nextcloud. Due to that the helper is expected to
have root permissions, and thus it starts and stops the Apache web
server directly using "service start/stop apache2". In the same way it
also restores the owner and group for "apps", "config" and "data" to
"www-data", as it is the user that Apache sub-processes are run as.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Before, the domain was automatically added assuming that the
NextcloudTestServerContext had no parameters defined in the Behat
configuration. However, in order to use a helper for Apache it would
need to be specified in the configuration with something like:
  - NextcloudTestServerContext:
      nextcloudTestServerHelper: NextcloudTestServerLocalApacheHelper
The substitution now works both when a helper is specified and when it
is not; note, however, that providing custom parameters to the helper is
not supported, although they are not needed anyway so it is not really a
problem.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Apache sub-processes are run as the www-data user, and they need to be
able to write to the "apps", "config" and "data" directories, so they
have to belong to that user, and therefore the Nextcloud server has to
be installed and configured too as the www-data user. The PHP built-in
web server will still be run as the root user, but in that case the
owner of those directories makes no difference, so this is compatible
with both cases.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The Docker image for acceptance tests provides support for both the PHP
built-in web server and the Apache web server; the acceptance tests for
the server are run on the PHP built-in web server, but the acceptance
tests for some apps will have to be run on the Apache web server (for
example, Talk, as it uses long polling), so a Docker image to support
both cases has to be used in "run.sh". ".drone.yml" was just updated for
consistency, although it was not really needed.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
When the acceptance tests were run they were always loaded from the
"tests/acceptance" directory of the Nextcloud server. Now it is possible
to set the directory used to look for the Behat configuration and the
Nextcloud installation script, which makes it possible to run acceptance
tests for the apps too instead of only for the server (although if no
directory is explicitly given the tests for the server are the ones
run).
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
In order to autoload the server context classes the "bootstrap"
directory was explicitly listed in Behat autoload configuration. This is
fine in the configuration of acceptance tests for the server, but it
would force the configuration of acceptance tests for the apps to
explicitly include the path for the server context classes to be able to
use them (for example, for the login step).
Besides its own configuration, Behat also supports autoloading classes
using Composer, so now context classes are autoloaded using Composer
instead; thanks to this the server context classes are also autoloaded
in the acceptance tests for apps without any explicit configuration in
them.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
When on PHP 7.2 we can use the new and improved ARGON2I hashing.
This adds support for that to the hasher. When verifying an old hash
we'll rehash it, so that eventually all hashes move to the new hash
function.
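A hedged sketch of the hash/verify/rehash cycle using PHP's native
password API (the hasher wraps something along these lines; this is not
the actual class):

```php
<?php
// Sketch: prefer Argon2i on PHP 7.2+, fall back to bcrypt otherwise, and
// transparently rehash old values after a successful verification.
$algo = \defined('PASSWORD_ARGON2I') ? PASSWORD_ARGON2I : PASSWORD_BCRYPT;

$password   = 'correct horse battery staple';
$storedHash = password_hash($password, PASSWORD_BCRYPT); // pretend: read from the DB

if (password_verify($password, $storedHash) && password_needs_rehash($storedHash, $algo)) {
    $storedHash = password_hash($password, $algo); // persist the upgraded hash
}
```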
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
* Remove the HTTP Helper
* Remove it from the Server Container
* Remove legacy share tests that use it
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
The "FileListContext" provides steps to interact with and check the
behaviour of a file list. However, the "FileListContext" does not know
the right file list ancestor that has to be used by the file list steps,
so until now the file list steps were explicitly wired to the Files app
and they could be used only in that case.
Instead of duplicating the steps with a slightly different name (for
example, "I create a new folder named :folderName in the public shared
folder" instead of "I create a new folder named :folderName") the steps
were generalized; now contexts that "know" which file list ancestor
has to be used by the FileListContext steps performed by a certain
actor from that point on (until changed again) set it explicitly. For
example, when the current page is the Files app the ancestor of the
file list is the main view of the current section of the Files app, but
when the current page is a shared link the ancestor is set to null
(because there will be just one file list, and thus its ancestor is not
relevant to differentiate between instances).
A helper trait, "FileListAncestorSetter", was introduced to reduce the
boilerplate needed to set the file list ancestor from other contexts.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The file list is used in other places besides the Files app (for
example, the File sharing app); in those cases the locators for the file
list elements are the same, but not for the ancestor of the file list.
To make it possible to reuse the file list locators in those cases too,
they now receive the ancestor to use.
Note that the locators for the file actions menu were not using an
ancestor locator because it is expected that there is only one file
actions menu at a time in the whole page; that may change in the future,
but for the time being it is a valid assumption and thus the ancestor
was not added to those locators in this commit.
Although the locators were generalized the steps themselves still use
the "FilesAppContext::currentSectionMainView" locator as ancestor; the
steps will be generalized in a following commit.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Besides the extraction some minor adjustments (sorting locators for file
action menu entries to reflect the order of the menu entries in the UI,
moving parametrized locators like "createMenuItemFor" above the locators
that use them and placing "descendantOf" calls always in a new line)
were made too.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
NoSuchElement exceptions are sometimes thrown instead of
StaleElementReference exceptions. This can happen when the Selenium2
driver for Mink performs an action on an element through the WebDriver
session instead of directly through the WebDriver element. In that case,
if the element with the given ID does not exist, a NoSuchElement
exception would be thrown instead of a StaleElementReference exception,
so those cases are handled like StaleElementReference exceptions.
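A hedged sketch of the handling (the exception class names come from
the php-webdriver library used by the Selenium2 driver; the helper
itself is illustrative):

```php
<?php
use WebDriver\Exception\NoSuchElement;
use WebDriver\Exception\StaleElementReference;

// Sketch: treat NoSuchElement like StaleElementReference; both mean the
// cached element reference is no longer usable and must be found again.
function clickWithRefind($element, callable $findElementAgain): void {
    try {
        $element->click();
    } catch (StaleElementReference | NoSuchElement $exception) {
        $findElementAgain()->click(); // re-locate the element and retry once
    }
}
```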
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
MoveTargetOutOfBounds exceptions are sometimes thrown instead of
ElementNotVisible exceptions. This can happen when the Selenium2 driver
for Mink moves the cursor on an element using the "moveto" method of the
Webdriver session, for example, before clicking on an element. In that
case, if the element is not visible, "moveto" would throw a
MoveTargetOutOfBounds exception instead of an ElementNotVisible
exception, so those cases are handled like ElementNotVisible exceptions.
Note that MoveTargetOutOfBounds exceptions could be thrown too if the
element was visible but "out of reach"; there is no problem in handling
those cases as if the element was not visible, as the exception will be
thrown again anyway once it is verified that the element is indeed
visible.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
If file_get_contents fails, remove the file and traverse up the tree,
checking whether the other folders are there.
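A minimal sketch of the recovery using plain filesystem functions (the
real code operates on the appdata storage, so take the helper and the
exact traversal as assumptions):

```php
<?php
// Sketch: if reading the cached file fails, remove it and walk up the
// tree to see which parent folders still exist.
function readOrCleanup(string $path): ?string {
    $content = @file_get_contents($path);
    if ($content !== false) {
        return $content;
    }
    @unlink($path); // drop the broken entry
    for ($dir = \dirname($path); $dir !== '/' && $dir !== '.'; $dir = \dirname($dir)) {
        if (is_dir($dir)) {
            break; // this level still exists; nothing further up to check
        }
    }
    return null;
}
```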
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
This is more of a hack, but one of the nodes won't properly run this.
No sense in waiting 60 minutes.
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>
1. Local users should not be returned when searching for empty string
2. The limit of the response should be respected
Signed-off-by: Joas Schilling <coding@schilljs.com>
Instead of checking that the list contains one comment it is now checked
that a comment with a certain message is visible. This makes the step (and
the locator) more reusable in future tests and also simplifies the code.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
Depending on the previous steps the new comment field may be already
shown or not when the step to create a new comment is executed.
Therefore, the timeout was increased from 2 to the "standard" 10 seconds
used in other tests.
If the new comment field was found there is no need to use a timeout
when looking for the new comment button; it is either there or not,
and it will not appear after some time.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
The locators are moved above the step definitions for consistency with
other context files; besides that I made some minor adjustments for
consistency too in the locator descriptions and indentation, and moved
the locators for ".newCommentRow" descendants together.
Signed-off-by: Daniel Calviño Sánchez <danxuliu@gmail.com>
This caused more trouble than it had benefits, especially
when an app got disabled or was removed without being disabled.
Signed-off-by: Joas Schilling <coding@schilljs.com>
It is used by clients for formatting reasons; there is no reason not
to format the author if her handle is included in the comment body.
It is unrelated to sending out notifications.
Signed-off-by: Arthur Schiwon <blizzz@arthur-schiwon.de>
PHPDoc (of the public API) says that this method returns string, but it also returns null, which is not allowed in some method calls. This fixes that behaviour by returning an empty string, and fixes all code paths that explicitly checked for null so they remain compliant.
Found while enabling strict_types for lib/private for the PHP 7+ migration.
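Illustration of the behavioural change (simplified, not the actual method):

```php
<?php
declare(strict_types=1);

// Before (simplified): the PHPDoc promised string, but null could leak out.
function getValueBefore(array $config, string $key) {
    return $config[$key] ?? null;
}

// After (simplified): an empty string is returned, matching the documented
// return type; callers that checked for null now check for '' instead.
function getValueAfter(array $config, string $key): string {
    return $config[$key] ?? '';
}
```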
Signed-off-by: Morris Jobke <hey@morrisjobke.de>
* Add typehints
* Add return types
* Opcode opts from phpstorm
* Made strict
* Fixed tests: no need to test bogus values anymore; strict typing
fixes this
Signed-off-by: Roeland Jago Douma <roeland@famdouma.nl>