Closed Bug 1479850 Opened 2 years ago Closed 2 years ago

[wdspec] Add remaining unhandled prompt behavior tests to existent user prompt tests

Categories

(Testing :: geckodriver, enhancement, P1)
Version: 3

Tracking

(RESOLVED FIXED, mozilla63)
Tracking Status: firefox63 fixed

People

(Reporter: whimboo, Assigned: whimboo)

Attachments

(2 files)

Before I finish off bug 1473814 I want to get the remaining prompt behavior tests added to the existing commands. Maybe we can also figure out a better parametrization logic, because `accept`/`dismiss` and `accept and notify`/`dismiss and notify`/`default` have exactly the same code and differ only in their parameters.
Hi Dave and Raphael. I need some more advice from you two on a possibly better way to parametrize the following tests, or to have pytest generate them. As shown below I have a lot of tests which have basically the same code and only vary in the marker and the parametrize settings (here specifically the retval parameter). Here is an example:

> @pytest.mark.capabilities({"unhandledPromptBehavior": "accept"})
> @pytest.mark.parametrize("dialog_type, retval", [
>     ("alert", None),
>     ("confirm", True),
>     ("prompt", ""),
> ])
> def test_accept(session, create_cookie, create_dialog, dialog_type, retval):

and 

> @pytest.mark.capabilities({"unhandledPromptBehavior": "dismiss"})
> @pytest.mark.parametrize("dialog_type, retval", [
>     ("alert", None),
>     ("confirm", False),
>     ("prompt", None),
> ])
> def test_dismiss(session, create_cookie, create_dialog, dialog_type, retval):

I would like to avoid adding that much unnecessary code duplication. Can you see a way to further abstract the tests shown above, and to let pytest generate the specific tests?

The only thing I can see for now is to move all the shared code into a fixture, but I'm not sure if it is a good idea to do all the assertions there. You both have more experience with pytest, so I would appreciate your feedback. Thanks.
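
One idea that comes to mind is pytest's support for attaching marks to individual parameters via `pytest.param`, which might allow collapsing both tests into one. A rough, untested sketch (the `behavior` helper is made up, and I haven't verified that our session fixture picks up per-parameter capabilities marks):

> import pytest
>
> def behavior(value):
>     return pytest.mark.capabilities({"unhandledPromptBehavior": value})
>
> @pytest.mark.parametrize("dialog_type, retval", [
>     pytest.param("alert", None, marks=behavior("accept")),
>     pytest.param("confirm", True, marks=behavior("accept")),
>     pytest.param("prompt", "", marks=behavior("accept")),
>     pytest.param("alert", None, marks=behavior("dismiss")),
>     pytest.param("confirm", False, marks=behavior("dismiss")),
>     pytest.param("prompt", None, marks=behavior("dismiss")),
> ])
> def test_handle_prompt(session, create_cookie, create_dialog, dialog_type, retval):
>     ...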
Flags: needinfo?(rpierzina)
Flags: needinfo?(dave.hunt)
Looking at the try results, the Close Window tests are timing out fairly often for debug and QuantumRender builds:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=b9afb4aea811023b03236ae23bcda3cabc3f4435

What's strange is the timeout used before a test gets aborted with a timeout error. For opt builds it's set to 30s, and for debug builds to 80s. That's not what we have defined as the default for wdspec:

https://searchfox.org/mozilla-central/rev/196560b95f191b48ff7cba7c2ba9237bba6b5b6a/testing/web-platform/tests/tools/wptrunner/wptrunner/wpttest.py#424

Given that all the tests are marked with `# META: timeout=long` I would expect that tests can take up to 180s. I'm not even sure what the difference is between opt and debug builds, and which multiplier we make use of here, so it might even be longer.
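
If the usual rules applied, the effective timeout would be computed roughly like this (illustration only, with assumed values; this is not the wptrunner code):

> # Illustration only; values as assumed above, not taken from wptrunner.
> LONG_TIMEOUT = 180    # expected wdspec timeout for tests marked timeout=long
> DEBUG_MULTIPLIER = 3  # assumed multiplier for debug builds
>
> def effective_timeout(debug=False):
>     # 180s for opt builds, and with a debug multiplier even 540s,
>     # which is far above the observed 30s/80s.
>     return LONG_TIMEOUT * (DEBUG_MULTIPLIER if debug else 1)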

Andreas, do you have an idea what goes wrong here?
Flags: needinfo?(ato)
I pushed a try build with some debug logging added. Maybe it will give us some information:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=93990d6a8acb4a2a734b7b97344124c083462372
Depends on: 1480061
(In reply to Henrik Skupin (:whimboo) from comment #3)
> Andreas, do you have an idea what goes wrong here?

I filed bug 1480061 to get this fixed.
Flags: needinfo?(ato)
(In reply to Henrik Skupin (:whimboo) from comment #2)
> The only thing I can see for now is to have a fixture for all the code, but
> not sure if it a good idea to do all the assertions there. You both have
> more experience with pytest, so I would appreciate your feedback. Thanks.

As discussed with Dave this morning, it is fine to have assertions in fixtures. So I think that I will take that route and move all the duplicated code into fixtures. That means we will have three of them:

* closed_without_notification (accept, dismiss)
* closed_with_notification (accept and notify, dismiss and notify, default)
* not_closed (ignore)

It will not reduce the overall size of the file that much, but it definitely helps with maintainability.
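
Roughly what I have in mind for the first one (a sketch only; the assertion details, the `window.dialog_return_value` check, and the final fixture names may still change in the actual patch):

> import pytest
>
> from webdriver import error
>
> from tests.support.asserts import assert_success
>
> @pytest.fixture
> def closed_without_notification(session, create_dialog):
>     def check(command, dialog_type, retval):
>         create_dialog(dialog_type, text=dialog_type)
>         # The command has to succeed while the prompt gets
>         # accepted or dismissed without a notification.
>         assert_success(command())
>         # The prompt has to be gone afterward...
>         with pytest.raises(error.NoSuchAlertException):
>             session.alert.text
>         # ...and the dialog's return value has to match the behavior.
>         assert session.execute_script(
>             "return window.dialog_return_value") == retval
>     return check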
Sorry Henrik, I'm not going to get to this before leaving for PTO, so I'm deferring to :raphael (although he's also currently on PTO).
Flags: needinfo?(dave.hunt)
No worries. I think I'm fine for now, but I will leave the ni? set for Raphael. When he comes back and has an idea, we can follow up in a new bug.
Attachment #8996394 - Flags: review?(ato)
Comment on attachment 8996394 [details]
Bug 1479850 - [wdspec] Add remaining unhandled prompt behavior tests to existent user prompt tests.

https://reviewboard.mozilla.org/r/260508/#review268018
Attachment #8996394 - Flags: review?(ato) → review+
We're sorry, Autoland could not rebase your commits for you automatically. Please manually rebase your commits and try again.

hg error in cmd: hg rebase -s 0d05a2b103db9bb8dd9c63e8a92cefd5712646f5 -d f36f38c7ba0c: rebasing 476463:0d05a2b103db "Bug 1479850 - [wdspec] Add remaining unhandled prompt behavior tests to existent user prompt tests. r=ato" (tip)
merging testing/web-platform/meta/MANIFEST.json
warning: conflicts while merging testing/web-platform/meta/MANIFEST.json! (edit, then use 'hg resolve --mark')
unresolved conflicts (see hg resolve, then hg rebase --continue)
Pushed by hskupin@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/32a8875a1721
[wdspec] Add remaining unhandled prompt behavior tests to existent user prompt tests. r=ato
https://hg.mozilla.org/mozilla-central/rev/32a8875a1721
Status: ASSIGNED → RESOLVED
Closed: 2 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla63
Created web-platform-tests PR https://github.com/web-platform-tests/wpt/pull/12298 for changes under testing/web-platform/tests
Upstream PR merged
Flags: needinfo?(rpierzina)