Closed Bug 1484157 Opened 6 years ago Closed 6 years ago

[wdspec] Add tests for invalid float values for ‘duration’ parameter of actions

Categories: Testing :: geckodriver, enhancement, P1
Status: RESOLVED FIXED
Target Milestone: mozilla63
Tracking: firefox63 fixed
People: (Reporter: ato, Assigned: ato)
References: (Blocks 2 open bugs)
Attachments: (1 file, 1 obsolete file) 5.46 KB patch, whimboo: review+
The ‘pause’ and related actions that take the ‘duration’ parameter
should only accept positive integers, but this is untested in WPT.
It was discovered that chromedriver allows floating points whereas
geckodriver does not. We should therefore have a test to ensure
conformance.
See https://github.com/mozilla/geckodriver/issues/1355 for context.
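The validation at issue can be sketched as follows. This is illustrative only (geckodriver's actual check lives in its Rust codebase, and the helper name here is hypothetical): per the WebDriver specification, a pause action's "duration" must be a non-negative integer, so floats (even 0.0), booleans, and negative numbers are all invalid.

```python
def is_valid_duration(value):
    # Sketch of the spec's bounds check for a pause action's "duration".
    # In Python, bool is a subclass of int, so it must be rejected
    # explicitly; True would otherwise pass as the integer 1.
    if isinstance(value, bool) or not isinstance(value, int):
        return False
    return value >= 0
```

A conformance test can then walk both valid values such as `0` and `1` and invalid ones such as `-1`, `0.0`, `None`, `"foo"`, `True`, `[]`, and `{}`.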
Comment 1•6 years ago
Given that Marionette handles this correctly, and geckodriver presumably does the same, I assume you want a wdspec test created.
Component: Marionette → geckodriver
Summary: Disallow floats in ‘duration’ parameter to actions → [wdspec] Add tests for invalid floats values for ‘duration’ parameter of actions
Assignee
Updated•6 years ago
Assignee: nobody → ato
Status: NEW → ASSIGNED
Assignee
Updated•6 years ago
Blocks: webdriver, webdriver-actions
Assignee
Comment 2•6 years ago
The pause action takes an unsigned integer as input, but it was
discovered in https://github.com/mozilla/geckodriver/issues/1355
that chromedriver accepts floating point values.
To achieve conformance in this area, this will test various allowed and
disallowed values so that the pause action's bounds are properly checked.
Attachment #9001888 - Flags: review?(hskupin)
Assignee
Updated•6 years ago
Priority: -- → P1
Comment 3•6 years ago
Comment on attachment 9001888 [details] [diff] [review]
Add bounds check tests for WebDriver pause action.
Review of attachment 9001888 [details] [diff] [review]:
-----------------------------------------------------------------
::: testing/web-platform/tests/webdriver/tests/actions/bounds.py
@@ +5,5 @@
> +
> +def perform_actions(session, actions):
> + return session.transport.send(
> + "POST",
> + "/session/{session_id}/actions".format(session_id=session.session_id),
Please make this line use `vars()` as we do for all the other commands.
Note that this is the first action test which is not using the webdriver client command! We should also change all the others to do this. So thank you for the start.
@@ +11,5 @@
> +
> +
> +@pytest.mark.parametrize("action_type", ["none", "key", "pointer"])
> +def test_pause_positive_integer(session, action_type):
> + for valid_duration in [0, 1]:
Please parametrize that in a different `@pytest.mark.parametrize` line.
@@ +20,5 @@
> + "type": "pause",
> + "duration": valid_duration
> + }]
> + }]
> + response = perform_actions(session, actions)
For validation this is fine. It might be good to actually check if it was waiting. But that can be done later.
@@ +23,5 @@
> + }]
> + response = perform_actions(session, actions)
> + assert_success(response)
> +
> + for invalid_duration in [-1, 0.0, None, "foo", True, [], {}]:
Make this a separate test; it doesn't even fit into "positive_integer".
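The parametrization request above amounts to lifting the inner `for` loop into a second `@pytest.mark.parametrize` decorator. A pure-stdlib sketch of the resulting case matrix (the test name and `session` fixture come from the patch; everything else is illustrative):

```python
# Stacking two @pytest.mark.parametrize decorators, e.g.
#
#   @pytest.mark.parametrize("action_type", ["none", "key", "pointer"])
#   @pytest.mark.parametrize("duration", [0, 1])
#   def test_pause_valid_duration(session, action_type, duration): ...
#
# generates one independent test case per (duration, action_type)
# combination, so a failure for duration 0 no longer hides duration 1.
from itertools import product

action_types = ["none", "key", "pointer"]
valid_durations = [0, 1]

# The same cross-product pytest would collect: 2 x 3 = 6 cases.
cases = list(product(valid_durations, action_types))
```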
Attachment #9001888 - Flags: review?(hskupin) → review-
Assignee
Comment 4•6 years ago
(In reply to Henrik Skupin (:whimboo) from comment #3)
> ::: testing/web-platform/tests/webdriver/tests/actions/bounds.py
> @@ +5,5 @@
> > +
> > +def perform_actions(session, actions):
> > + return session.transport.send(
> > + "POST",
> > + "/session/{session_id}/actions".format(session_id=session.session_id),
>
> Please make this line using `vars()` as we do for all the other commands.
That shouldn’t be necessary; this is functionally equivalent.
> @@ +11,5 @@
> > +
> > +
> > +@pytest.mark.parametrize("action_type", ["none", "key", "pointer"])
> > +def test_pause_positive_integer(session, action_type):
> > + for valid_duration in [0, 1]:
>
> Please parametrize that in a different `@pytest.mark.parametrize` line.
I don’t think this test warrants that level of complexity.
> @@ +20,5 @@
> > + "type": "pause",
> > + "duration": valid_duration
> > + }]
> > + }]
> > + response = perform_actions(session, actions)
>
> For validation this is fine. It might be good to actually check if it was
> waiting. But that can be done later.
The response body is empty.
> @@ +23,5 @@
> > + }]
> > + response = perform_actions(session, actions)
> > + assert_success(response)
> > +
> > + for invalid_duration in [-1, 0.0, None, "foo", True, [], {}]:
>
> Make this a separate test, it even doesn't fit into "positive_integer".
Assignee
Comment 5•6 years ago
The pause action takes an unsigned integer as input, but it was
discovered in https://github.com/mozilla/geckodriver/issues/1355
that chromedriver accepts floating point values.
To achieve conformance in this area, this will test various allowed and
disallowed values so that the pause action's bounds are properly checked.
Attachment #9001923 - Flags: review?(hskupin)
Assignee
Updated•6 years ago
Attachment #9001888 - Attachment is obsolete: true
Comment 6•6 years ago
(In reply to Andreas Tolfsen ﹝:ato﹞ from comment #4)
> > Please make this line using `vars()` as we do for all the other commands.
>
> That shouldn’t be necessary, this is functionally equivalent.
We use that for each and every other test. As such I don't see why you are refusing to make this change so that we keep everything equal.
> > > +@pytest.mark.parametrize("action_type", ["none", "key", "pointer"])
> > > +def test_pause_positive_integer(session, action_type):
> > > + for valid_duration in [0, 1]:
> >
> > Please parametrize that in a different `@pytest.mark.parametrize` line.
>
> I don’t think this test warrants that level of complexity.
The complexity is what you have written. Let's really make use of the features pytest provides. Here are just two examples you might want to take into account:
1) If the test for 0 fails, we won't know about 1 because it will not be executed at all.
2) If the test for 0 would have to be disabled, you can only disable the full test with every test scenario.
And this applies to both test methods now.
Also the test for `"duration": -1` has to be part of test_pause_invalid_types.
> > > + "type": "pause",
> > > + "duration": valid_duration
> > > + }]
> > > + }]
> > > + response = perform_actions(session, actions)
> >
> > For validation this is fine. It might be good to actually check if it was
> > waiting. But that can be done later.
>
> The response body is empty.
I'm not talking about the response body here, but the time that passes while waiting for the response.
Assignee
Comment 7•6 years ago
(In reply to Henrik Skupin (:whimboo) from comment #6)
> (In reply to Andreas Tolfsen ﹝:ato﹞ from comment #4)
> > > Please make this line using `vars()` as we do for all the other commands.
> >
> > That shouldn’t be necessary, this is functionally equivalent.
>
> We use that for each and every other test. As such I don't see why you are
> refusing to make this change so that we keep everything equal.
I don’t understand why you are refusing a patch that works.
> > > > +@pytest.mark.parametrize("action_type", ["none", "key", "pointer"])
> > > > +def test_pause_positive_integer(session, action_type):
> > > > + for valid_duration in [0, 1]:
> > >
> > > Please parametrize that in a different `@pytest.mark.parametrize` line.
> >
> > I don’t think this test warrants that level of complexity.
>
> The complexity is what you have written. Let's really make use of the
> features pytest provides. Here just two examples you might want to take into
> account:
>
> 1) If the test for 0 fails, we won't know about 1 because it will not be
> executed at all.
> 2) If the test for 0 would have to be disabled, you can only disable the
> full test with every test scenario.
>
> And this applies to both test methods now.
My concern is that excessive parametrization necessarily makes the
flow of the code harder to understand. This is what I mean by
“complexity”, not that my code is complicated.
I’m not claiming that the updating of expected result data isn’t a
reasonable argument, but I question if it is strictly _necessary_
in this highly theoretical case.
I updated the patch to split the test in two functions, as requested.
One now tests the valid types and the bounds checks, and one tests
the invalid types.
> Also the test for `"duration": -1` has to be part of
> test_pause_invalid_types.
-1 is not an invalid type.
> > > For validation this is fine. It might be good to actually
> > > check if it was waiting. But that can be done later.
> >
> > The response body is empty.
>
> I'm not talking about the response body here, but the time that
> passes while waiting for the response.
Right, OK.
But this is a bounds check test, and the valid_duration is 0, so
it will return immediately. Unless you have some way to instrument
the internals of the implementation to increment a counter for the
system clock, I don’t see how you could reliably test that.
Assignee
Updated•6 years ago
Flags: needinfo?(hskupin)
Comment 8•6 years ago
(In reply to Andreas Tolfsen ﹝:ato﹞ from comment #7)
> > (In reply to Andreas Tolfsen ﹝:ato﹞ from comment #4)
> > > > Please make this line using `vars()` as we do for all the other commands.
> > >
> > > That shouldn’t be necessary, this is functionally equivalent.
> >
> > We use that for each and every other test. As such I don't see why you are
> > refusing to make this change so that we keep everything equal.
>
> I don’t understand why you are refusing a patch that works.
Do you really want to spend more time discussing this simple change? I don't see value in it, given that there are more important things on our plate.
> > > > > +@pytest.mark.parametrize("action_type", ["none", "key", "pointer"])
> > > > > +def test_pause_positive_integer(session, action_type):
> > > > > + for valid_duration in [0, 1]:
> > > >
> > > > Please parametrize that in a different `@pytest.mark.parametrize` line.
> > >
> > > I don’t think this test warrants that level of complexity.
> >
> > The complexity is what you have written. Lets really make use of the
> > features pytest provides. Here just two examples you might want to take into
> > account:
> >
> > 1) If the test for 0 fails, we won't know about 1 because it will not be
> > executed at all.
> > 2) If the test for 0 would have to be disabled, you can only disable the
> > full test with every test scenario.
> >
> > And this applies to both test methods now.
>
> My concern is that excessive parametrization necessarily causes the
> flow of code to be harder to understand. This is what I mean by
> “complexity”, not that my code is complicated.
Nothing about the flow of the code will change in a way that makes it harder to understand. You simply get an extra argument for the test method, and thereby save an extra `for` loop.
Given that you are so hesitant on this topic, I would like to get some more details about a good approach here with pytest from Dave or Raphael.
> I’m not claiming that the updating of expected result data isn’t a
> reasonable argument, but I question if it is strictly _necessary_
> in this highly theoretical case.
Not sure what you mean here.
> > Also the test for `"duration": -1` has to be part of
> > test_pause_invalid_types.
>
> -1 is not an invalid type.
Huh? Since when do we have negative durations? Please check the spec which also confirms that:
https://w3c.github.io/webdriver/#dfn-process-a-pause-action
Flags: needinfo?(rpierzina)
Flags: needinfo?(dave.hunt)
Updated•6 years ago
Flags: needinfo?(hskupin)
Comment 9•6 years ago
(In reply to Henrik Skupin (:whimboo) from comment #8)
> Given that you are so hesitant on this topic, I would like to get some
> more details about a good approach here with pytest from Dave or Raphael.
I don't have much of an opinion here. There are many ways you could write such a test (you could have an error message parameter, and when it's not None assert for failure rather than writing two test methods), and although Henrik has valid points, I suspect the gain of parameterising here is negligible. As this isn't a disagreement over test coverage, I would suggest landing it as it is and following up with a patch to parameterise it if it's considered to be important, rather than spending more time discussing it.
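Dave's error-message-parameter idea can be sketched as follows. All names here are hypothetical illustrations, not the landed test; in pytest the case list would feed `@pytest.mark.parametrize("duration,expected_error", cases)`:

```python
# One parametrized test body shared by valid and invalid durations:
# each case pairs a duration with the expected error, or None for
# success.
cases = [
    (0, None),                  # valid: lower bound
    (1, None),                  # valid
    (-1, "invalid argument"),   # out of bounds
    (0.0, "invalid argument"),  # float, even if integral
    (True, "invalid argument"), # bool is not an integer duration
]

def check_pause(duration, expected_error, perform):
    # `perform` stands in for a perform_actions call that returns an
    # error string, or None on success.
    error = perform(duration)
    if expected_error is None:
        assert error is None
    else:
        assert error == expected_error
```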
Flags: needinfo?(dave.hunt)
Assignee
Comment 10•6 years ago
(In reply to Henrik Skupin (:whimboo) from comment #8)
> Do you really want to spend more time discussing this simple
> change? I don't see value in it given that there are more
> important things on our plate.
I’m sorry, I don’t have time or will to adapt every trivial patch
to your exact specification. It would be quicker if you wrote the
patch yourself. I’m merely pointing out that this patch worked
in its initial version, worked in the second iteration where I
adopted some of your review comments, and works in its current form.
> Given that you are so hesitant on this topic, I would like to
> get some more details about a good approach here with pytest from
> Dave or Raphael.
You are looping in two staff members to affirm that double
parametrization of all the tests’ input data is a good idea? I
didn’t contest that.
My point is that this extra churn on a 43-line patch for minimal
gain is not a good use of anyone’s time for what is arguably only a
theoretical failure scenario.
> > > Also the test for `"duration": -1` has to be part of
> > > test_pause_invalid_types.
> >
> > -1 is not an invalid type.
>
> Huh? Since when do we have negative durations? Please
> check the spec which also confirms that:
> https://w3c.github.io/webdriver/#dfn-process-a-pause-action
I… wrote the specification. -1 is out of bounds, but not an invalid
type since it is not a floating point.
Assignee: ato → nobody
Status: ASSIGNED → NEW
Priority: P1 → P3
Assignee
Updated•6 years ago
Attachment #9001923 - Flags: review?(hskupin)
Comment 11•6 years ago
(In reply to Dave Hunt [:davehunt] ⌚️UTC+1 from comment #9)
> although Henrik has valid points, I suspect the gain of parameterising here
is negligible. As this isn't a disagreement over test coverage, I would
> suggest landing it as it is and following up with a patch to parameterise it
> if it's considered to be important rather than spending the time discussing
> it.
Ok, that's something we can surely do. Thanks for providing that information.
(In reply to Andreas Tolfsen ﹝:ato﹞ from comment #10)
> I’m sorry, I don’t have time or will to adapt every trivial patch
> to your exact specification. It would be quicker if you wrote the
> patch yourself. I’m merely pointing out that this patch worked
> in its initial version, worked in the second iteration where I
> adopted some of your review comments, and works in its current form.
As I discussed with Andreas on IRC, one problem here was that the remark about using `**vars()` was meant as just a comment. If MozReview could have been used, I wouldn't have filed it as an issue. But given that this distinction is missing in Splinter and also in Phabricator, we have to be more careful about how we distinguish requests for changes from plain comments.
> > > > Also the test for `"duration": -1` has to be part of
> > > > test_pause_invalid_types.
> > >
> > > -1 is not an invalid type.
> >
> > Huh? Since when do we have negative durations? Please
> > check the spec which also confirms that:
> > https://w3c.github.io/webdriver/#dfn-process-a-pause-action
>
> I… wrote the specification. -1 is out of bounds, but not an invalid
> type since it is not a floating point.
As we also noticed in our talk, I misread the test name "test_pause_positive_integer" as covering only valid values. Given that, the test as written is fine.
I will set Andreas back as assignee of this bug, given that the patch is ready to land.
Assignee: nobody → ato
Status: NEW → ASSIGNED
Flags: needinfo?(rpierzina)
Priority: P3 → P1
Updated•6 years ago
Attachment #9001923 - Flags: review+
Comment 12•6 years ago
Pushed by atolfsen@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/51acbb8dc501
Add bounds check tests for WebDriver pause action. r=whimboo
Created web-platform-tests PR https://github.com/web-platform-tests/wpt/pull/12595 for changes under testing/web-platform/tests
Can't merge web-platform-tests PR due to failing upstream checks:
Github PR https://github.com/web-platform-tests/wpt/pull/12595
* continuous-integration/appveyor/pr
* continuous-integration/travis-ci/pr (https://travis-ci.org/web-platform-tests/wpt/builds/418739134?utm_source=github_status&utm_medium=notification)
Comment 15•6 years ago
bugherder
Status: ASSIGNED → RESOLVED
Closed: 6 years ago
status-firefox63: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla63
Upstream PR merged