Bug 1144764 (Closed) · Opened 9 years ago · Closed 9 years ago

[Raptor] Accumulate test failures until end of tests

Categories

Product/Component: Firefox OS Graveyard :: Gaia::PerformanceTest
Platform: ARM / Gonk (Firefox OS)
Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: Eli, Assigned: Eli)

Details

(Keywords: perf)

Attachments

(1 file)

Raptor's current behavior halts testing if there is a timeout or error in a test run. We attempt to retry a run when something goes wrong, but if we cannot recover, we process.exit(1) and output the error. If we are testing multiple apps, e.g. APPS="clock,settings", an error in clock will exit the process, and no tests for settings will run. Over time this means we accumulate more data for applications defined first than for those defined later.

Change the way errors are handled so that we accumulate them and exit at the end of all tests instead of right away.
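
Roughly the shape of the change, as a minimal sketch in Node; runSuite here is a hypothetical stand-in, not Raptor's actual API:

const apps = (process.env.APPS || 'clock,settings').split(',');

// Hypothetical stand-in for one Raptor suite run; rejects to simulate a failure.
const runSuite = (app) =>
  app === 'clock' ? Promise.reject(new Error('launch timeout')) : Promise.resolve();

const runAll = async () => {
  const failures = [];
  for (const app of apps) {
    try {
      await runSuite(app);
    } catch (err) {
      // Record the failure and keep going instead of exiting immediately,
      // so later apps (e.g. settings) still get their runs.
      failures.push({ app, err });
    }
  }
  // Surface every accumulated failure once all apps have been tested,
  // then exit non-zero if anything failed.
  failures.forEach(({ app, err }) => console.error(`[${app}] ${err.message}`));
  process.exit(failures.length > 0 ? 1 : 0);
};

runAll();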
I think I have all the corner cases figured out; this one was kinda tricky. Please let me know if something doesn't seem right.
Attachment #8597257 - Flags: review?(rwood)
Comment on attachment 8597257 [details] [review]
Link to Github pull-request: https://github.com/mozilla-b2g/raptor/pull/34

LGTM and works great. Tested with these cases:

- launch test with no errors, single and multiple apps & runs [PASS]
- launch test with a single app but cause an error, e.g. spell the app name wrong; should error [PASS]
- launch test with multiple apps but spell the first app name wrong; should run on the 2nd app, then report the error for the first app afterward [PASS]
- same test as above but with 4 apps, two spelled wrong [PASS]
- run with multiple apps, one app name wrong, and verify it still tries to post to the visualization db [PASS]
- set the timeout to 1000 ms so all tests time out, with multiple apps [PASS]
- multiple apps, all spelled right, but mess up the first run on purpose by launching a different app manually; verify it does a retry before erroring out, then continues on to the 2nd app [PASS]
- same as above but mess up 2nd run [PASS]
- multiple apps but have priming fail on first one [PASS]
- multiple apps but have priming fail on 2nd one [PASS]
- multiple apps but have no device attached at start [PASS]
- multiple apps but pull out the device after the first one is done [PASS]
- one app but multiple runs and manually mess up one launch/one run [PASS]
- multiple apps but multiple runs and manually mess up one launch/one run in first app [PASS]

I have one suggestion. When running the launch test with multiple apps, if one or more apps fail and the others work fine, the console output can be somewhat confusing: it is not clear which apps failed and which worked. You can look at the app order on the command line and figure it out, but IMO that's not ideal.

I would suggest clarifying the console output a bit more. For example (see the rough sketch after this list):
- '[Test] Starting run N' could have the app name appended to the end
- '[Test] Suite aborted due to error' could have the app name appended as well
- For successful runs, the name of the app could be listed in the actual Metrics table that is output
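
A rough sketch of the kind of tagging I mean; this log helper is hypothetical, not Raptor's actual logging code:

// Hypothetical helper: tag every console message with the app under test
// so interleaved output from multiple apps stays attributable.
const log = (app, message) => console.log(`[Test] ${message} (${app})`);

log('clock', 'Starting run 1');             // [Test] Starting run 1 (clock)
log('clock', 'Suite aborted due to error'); // [Test] Suite aborted due to error (clock)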

R+ (on clarifying the console output a bit more, if you agree)
Attachment #8597257 - Flags: review?(rwood) → review+
Thanks for the review! I'll see what I can do about improving the logging situation.
In master: https://github.com/mozilla-b2g/gaia/commit/a9f6218d6b79e93072677515a5fc15f226233f5c
Released in gaia-raptor@1.4.1
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED