This problem was particularly noticed with the update tests wrapper, but all automation test scripts should be verified to return exit code 0 if everything passes, and a non-zero code if there's a failure. That way we can drive these scripts from cron jobs and the like, and email on failures.
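To illustrate the convention being asked for, here is a minimal sketch of a wrapper that drives a test script and propagates its exit code, so a cron job can detect failures. The command run here is a trivial stand-in, not one of the actual Mozmill scripts.

```python
import subprocess
import sys


def run_and_report(command):
    """Run a test script and propagate a non-zero exit code on failure.

    A wrapper like this only works if the script itself exits with 0 on
    success and non-zero otherwise -- which is exactly what this bug is
    about verifying.
    """
    result = subprocess.run(command)
    if result.returncode != 0:
        # A cron job could capture this output and send it by email.
        print("Test script failed (exit code %d)" % result.returncode,
              file=sys.stderr)
    return result.returncode


if __name__ == "__main__":
    # Stand-in for a real test script invocation.
    sys.exit(run_and_report([sys.executable, "-c", "import sys; sys.exit(0)"]))
```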
This patch restores the exit codes returned from Mozmill. This is working for a simple case where tests fail during the testrun.
Assignee: nobody → dave.hunt
Attachment #551413 - Flags: feedback?(hskupin)
Comment on attachment 551413 [details] [diff] [review]
Restore exit codes from Mozmill failures. v1.0

I'm not sure if we can use that approach, but let's discuss it so we can align on a process. The try/catch has been included so that we can run the restart tests after the non-restart tests, even if the latter fail. It is even more important for add-on tests: we do not want a failure in one add-on's tests to prevent the tests for the remaining add-ons from being executed.

So the exit code should be determined by the test results, i.e. whether tests are failing (or skipped?). If we have a failure in setting up the test-run, we should clearly exit immediately with the proper exit code. Here I think we have to check how many different exit paths we have and assign a unique number to each.
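The continue-past-failures behavior described above could look roughly like this. The callables and the exit code value are illustrative, not Mozmill's actual API:

```python
def run_testruns(testruns):
    """Run each testrun, continuing past failures, then derive one exit code.

    `testruns` is assumed to be a list of callables that raise on test
    failure; the structure is a sketch of the idea, not Mozmill's code.
    """
    failures = []
    for run in testruns:
        try:
            run()
        except Exception as exc:
            # Record the failure but keep executing the remaining runs,
            # e.g. restart tests after non-restart tests, or the tests
            # for the remaining add-ons.
            failures.append(exc)
    # The exit code reflects the collected results, not the first
    # exception; 2 is a placeholder value here.
    return 2 if failures else 0
```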
Attachment #551413 - Flags: feedback?(hskupin) → feedback-
Taking bug because it's somewhat important to exit with the correct code.
Assignee: nobody → hskupin
Hardware: x86 → All
Dave, I have investigated the current situation and there is an open question we have to solve first: how do we want to handle test failures? Personally I would like to have the same behavior as pytest has for Selenium. That would allow us to easily migrate to something new later, once WebDriver is in place for Mozmill too. So I need answers for:

1. What's the exit code for a pytest command when a test failed?
2. Does the exit code reflect skipped tests?
3. Is there a difference in pytest between a failed test and e.g. an exception occurring?
4. Do we want to handle failed and skipped tests only through the JUnit report?

I would appreciate your feedback!
Status: NEW → ASSIGNED
Current version of my work in progress patch. Will continue once we have figured out the remaining open questions.
Attachment #551413 - Attachment is obsolete: true
1. 1
2. No, just failed tests as far as I can tell.
3. Yes, if an exception occurs the exit code is 3. If an interrupt occurs the exit code is 2.
4. What do you mean by handle? The JUnit output is simply a report.

Here is the relevant source for pytest:
http://projects.pyverted.com/hpk42/pytest/src/e3065c26d6f5/_pytest/main.py#cl-89
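Summarizing the pytest exit codes reported above as a small helper (the constant values are the ones stated in this comment: 0 = pass, 1 = failed tests, 2 = interrupted, 3 = internal error/exception):

```python
# pytest exit codes as reported in this comment.
EXIT_OK = 0
EXIT_TESTSFAILED = 1
EXIT_INTERRUPTED = 2
EXIT_INTERNALERROR = 3


def describe_pytest_exit(code):
    """Map a pytest exit code to a short human-readable description."""
    return {
        EXIT_OK: "all tests passed",
        EXIT_TESTSFAILED: "some tests failed",
        EXIT_INTERRUPTED: "test run interrupted",
        EXIT_INTERNALERROR: "internal error / exception",
    }.get(code, "unknown exit code")
```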
(In reply to Dave Hunt (:davehunt) from comment #7)
> 2. No, just failed as far as I can tell.

Hm. It also fails for skipped tests? That would mean that we would have broken builds until tests are marked as skipped. I don't think that this is right. I think we should not obey skipped tests for the exit code.

> 4. What do you mean by handle? The JUnit output is simply a report.

What are you using the reports for exactly? Sorry, I forgot about it. Is it only for statistics and graphs?

> Here is the relevant source for pytest:
> http://projects.pyverted.com/hpk42/pytest/src/e3065c26d6f5/_pytest/main.py#cl-89

Thanks, that helps.
(In reply to Henrik Skupin (:whimboo) from comment #8)
> (In reply to Dave Hunt (:davehunt) from comment #7)
> > 2. No, just failed as far as I can tell.
>
> Hm. It also fails for skipped tests? That would mean that we would have
> broken builds until tests are marked as skipped. I don't think that this is
> right. I think we should not obey skipped tests for the exit code.

Eh? I just said it only fails for failed tests.

> > 4. What do you mean by handle? The JUnit output is simply a report.
>
> For what are you using the reports exactly? Sorry forgot about it. Is it
> only for statistics and graphs?

It is only for Jenkins reports.
Ok, so let's go this route:
* Exit with 2 if a test failed for any of the builds under test
* Exit with 1 for any Python-related exception
* Exit with 0 if everything is ok
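The agreed scheme could be derived from collected results along these lines. The result shape (`list` of dicts with a `failed` count) and the precedence of the exception code over the failure code are assumptions for illustration, not Mozmill's actual report format:

```python
# Exit codes agreed on in this bug.
EXIT_OK = 0            # everything is ok
EXIT_EXCEPTION = 1     # a Python-related exception occurred
EXIT_TEST_FAILURE = 2  # a test failed for any of the builds under test


def exit_code_from_results(results, had_exception):
    """Derive the process exit code from collected test results.

    `results` is assumed to be a list of dicts with a 'failed' count per
    build; giving the exception code precedence over the test-failure
    code is an assumption made here for the sketch.
    """
    if had_exception:
        return EXIT_EXCEPTION
    if any(r.get("failed", 0) > 0 for r in results):
        return EXIT_TEST_FAILURE
    return EXIT_OK
```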
Attachment #585908 - Flags: review?(dave.hunt) → review+
Status: ASSIGNED → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → FIXED
Product: Mozilla QA → Mozilla QA Graveyard