To investigate less frequent intermittent failures locally it would be of great help to make use of --repeat but also let the test execution stop once a failure is hit. Something which already exists for Mochitests (bug 1046515).
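The desired behaviour can be sketched as a small loop (purely illustrative; `run_once` is a hypothetical stand-in for one execution of the test set, not an actual harness API):

```python
# Hypothetical sketch of --repeat combined with stop-on-first-failure.
# run_once() stands in for a single execution of the test set and
# returns the number of failed tests in that run.
def repeat_until_failure(run_once, repeat):
    """Run the tests up to `repeat + 1` times, stopping on the first failure."""
    for iteration in range(repeat + 1):
        failures = run_once()
        if failures > 0:
            # Stop immediately so the failing state can be inspected.
            return iteration, failures
    return repeat, 0
```

Note that `repeat` counts extra runs, so a value of 30 means up to 31 executions in total.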
Maja, mind helping me figure out the correct handling of self.results for the runner in the harness unit tests? Given that we currently set 'run_test_set' as an instance of Mock, no entries get added to `self.results`. Without having to dig too deep into pytest and fixtures, how can we make sure that there are always some results available?
You could patch run_test_sets to always populate self.results with whatever results you like. You could maybe generate results from a dummy test case, as in http://searchfox.org/mozilla-central/source/testing/marionette/harness/marionette_harness/tests/harness_unit/test_marionette_test_result.py#29
(In reply to Henrik Skupin (:whimboo) from comment #2) > Given that we currently set > 'run_test_set' as an instance of Mock, there are no entries added to > `self.results()`. My answer above assumes that you won't use mock_runner directly, since it doesn't suit your needs.
Maja, the problem is that I don't want to add new tests; the failures I see are in currently existing tests. So it's unclear to me what needs to be changed. All of them make use of `mock_runner` directly. Here is the list of currently failing tests in test_marionette_runner.py: * test_add_tests (line 400) * test_initialize_test_run (line 385) * test_reset_test_stats (line 378) * test_cleanup_with_manifest[enabled_disabled] / test_cleanup_with_manifest[enabled] (line 364) So we currently set `run_test_set` as a simple Mock object. Maybe we should modify it to return a results object by default for the tests passed in?
Could you provide error logs, please? Or show a try push?
Created attachment 8876838 [details] failures for marionette_harness unit tests Attached you can find all the failures.
I would just like to voice my support for implementing something like this. In addition to mochitest, it also exists for WPT tests and I’ve found it immensely useful in finding hard-to-reproduce bugs.
(In reply to Henrik Skupin (:whimboo) from comment #5) > So we currently set `run_test_set` as simple Mock object. Maybe we should > modify it to return a results object by default for those tests as passed in? Setting a return_value won't work because the runner expects self.run_test_sets to update self.results. (Sarcastic-yay for maintaining a tonne of state in BaseMarionetteTestRunner instead of passing data around.) However, you should be able to achieve what you need by setting a side effect. In the definition of mock_runner you could add something like: ```
def make_results(*args, **kwargs):
    # TODO: define a fake result class that suits the situation
    runner.results = [FakeResult()] * len(runner.tests) * runner.repeat

runner.run_test_sets = Mock(side_effect=make_results)
```
Thanks Maja. I tried that, and while doing so I had a closer look at how results are handled and noticed that there is a `runner.failed` property available! It perfectly encapsulates all the handling of results, and increases the count whenever a test fails. By using this property for the break logic, I don't have to modify the runner mock at all! I still have to check how I can get a unit test written for it.
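A rough sketch of that break logic built on a `failed` count (the class and its attributes are illustrative stand-ins, not the real BaseMarionetteTestRunner):

```python
class FakeRunner:
    """Illustrative stand-in for a runner whose repeat loop breaks on failure."""

    def __init__(self, repeat, run_until_failure, failures_per_run):
        self.repeat = repeat
        self.run_until_failure = run_until_failure
        self._failures_per_run = iter(failures_per_run)
        self._failed = 0

    @property
    def failed(self):
        # Encapsulates result handling: the count of failed tests so far.
        return self._failed

    def run_tests(self):
        for _ in range(self.repeat + 1):
            # One test-set execution; here simulated by a canned failure count.
            self._failed += next(self._failures_per_run)
            if self.run_until_failure and self.failed > 0:
                break
```

With `run_until_failure` set, a run that records its first failure on the third iteration stops right there instead of burning through the remaining repeats.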
Attachment #8860892 - Flags: review?(mjzffr)
Comment on attachment 8860892 [details] Bug 1358978 - Implement --run-until-failure for Marionette. https://reviewboard.mozilla.org/r/132902/#review153506 r+wc ::: commit-message-6efe7:4 (Diff revision 2) > +Bug 1358978 - Implement --run-until-failure for Marionette. > + > +To help debugging intermittent failures the --repeat option can be > +used, but that doesn't stop if a failure occurres and continuous until s/occurres/occurs/ s/continuous/instead it continues/ ::: testing/marionette/harness/marionette_harness/runner/base.py:306 (Diff revision 2) > type=int, > - default=0, > help='number of times to repeat the test(s)') > + self.add_argument("--run-until-failure", > + action="store_true", > + help="Run tests repeatedly and stops on the first time a test fails. " s/stops/stop/ ::: testing/marionette/harness/marionette_harness/runner/base.py:574 (Diff revision 2) > 'browser.tabs.remote.autostart': True, > 'browser.tabs.remote.force-enable': True, > 'extensions.e10sBlocksEnabling': False, > }) > > + # If no repeat has been set, default to 30 runs I think this comment is superfluous. If you want to keep it, the wording "30 extra runs" is more accurate, since self.repeat of 30 results in 31 runs in total. ::: testing/marionette/harness/marionette_harness/tests/harness_unit/conftest.py:67 (Diff revision 2) > 'prefs_args': None, > 'prefs_files': None, > 'profile': None, > 'pydebugger': None, > - 'repeat': 0, > + 'repeat': None, > + 'run_until_failure': None, I'm pretty sure other kwargs are missing, too -- they should have been added with other recent changes to the runner. Since you're editing this dict anyway, could you also update it to match whatever we find in `vars(args)` when running `./mach marionette-test` (no extra options). Thanks! 
https://dxr.mozilla.org/mozilla-central/rev/b266a8d8fd595b84a7d6218d7b8c6b7af0b5027c/testing/marionette/mach_commands.py#59 ::: testing/marionette/harness/marionette_harness/tests/harness_unit/test_marionette_runner.py:473 (Diff revision 2) > with pytest.raises(AssertionError) as e: > mock_runner.run_tests([u'test_fake_thing.py']) > assert "configuration (self.e10s) does not match browser appinfo" in e.value.message > > + > +@pytest.mark.parametrize('repeat', (None, 42)) It just occurred to me that the runner allows negative values for repeat. We should probably reject that in the argument parser.
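Rejecting negative values at parse time could look something like this (a sketch using a plain argparse type callable; the harness's actual parser and option wiring may differ):

```python
import argparse

def non_negative_int(value):
    """argparse type callable that rejects negative repeat counts."""
    ivalue = int(value)
    if ivalue < 0:
        raise argparse.ArgumentTypeError(
            "repeat must be a non-negative integer, got %d" % ivalue)
    return ivalue

parser = argparse.ArgumentParser()
parser.add_argument("--repeat",
                    type=non_negative_int,
                    default=0,
                    help="number of times to repeat the test(s)")
```

With this in place, `--repeat -1` fails with a usage error instead of silently producing zero runs.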
Attachment #8860892 - Flags: review?(mjzffr) → review+
Comment on attachment 8860892 [details] Bug 1358978 - Implement --run-until-failure for Marionette. https://reviewboard.mozilla.org/r/132902/#review153506 > I'm pretty sure other kwargs are missing, too -- they should have been added with other recent changes to the runner. > > Since you're editing this dict anyway, could you also update it to match whatever we find in `vars(args)` when running `./mach marionette-test` (no extra options). Thanks! > > https://dxr.mozilla.org/mozilla-central/rev/b266a8d8fd595b84a7d6218d7b8c6b7af0b5027c/testing/marionette/mach_commands.py#59 I think this is really something we should do in another bug, which could be marked as mentored. I don't want to squeeze those unrelated changes into this patch. Let's keep the changes focused on what we want to achieve here. > It just occurred to me that the runner allows negative values for repeat. We should probably reject that in the argument parser. Good find. I added that to the `verify_usage` method. Sadly we cannot test this in the harness unit tests because the method is not called. If that is something you want, we have to get another bug filed.
Pushed by email@example.com: https://hg.mozilla.org/integration/autoland/rev/5ac504d5cd21 Implement --run-until-failure for Marionette. r=maja_zf
Status: ASSIGNED → RESOLVED
Last Resolved: a year ago
status-firefox56: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla56