https://treeherder.mozilla.org/ui/logviewer.html#?job_id=814772&repo=b2g-inbound

Either our log parser is missing something or we're not outputting enough information in the logs to properly flag these.
For some reason, the suite is aborting early, about halfway through, and we don't report the test summary (number of passed/failed tests). It isn't at all clear why this is happening. It could be a hard crash that isn't leaving any crash report. Any other theories, Ahal?
Sigh, this could be bug 1093296 (which was filed as an offshoot from bug 965705).
It could be, although these runs aren't timing out, and it looks like the parent process would have to terminate early to cause the suite to end this way. We could update the mozharness script to at least flag this condition more explicitly, although that won't help resolve it.
The last test that ran in this failing case was /tests/dom/canvas/test/webgl-mochitest/test_hidden_alpha.html; if that's consistent with these cases, we could consider disabling that test. Meanwhile, I'll make a mozharness patch to flag this when it happens, which should at least make this more sheriffable.
Hmm, I'm actually not sure how best to handle this in the world of structured logging. The problem is that the JS harness is dying without executing http://dxr.mozilla.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/TestRunner.js#634, and we don't have a good way to detect that sans a crash report. The 'SUITE-END' event we check for is emitted by the Python harness, not the JS side. We could update http://hg.mozilla.org/build/mozharness/file/d8d1a4283056/mozharness/mozilla/structuredlog.py#l58 to look for this explicitly...is there a better way?
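To make the idea concrete, here's a minimal sketch of what the Python-side check could look like. The class and method names are hypothetical, not the actual mozharness/structuredlog.py code: the handler just remembers whether a suite_end record ever arrived and flags the run if the log stream ends without one.

```python
# Hypothetical sketch: flag a run whose structured log ends without a
# suite_end record. Names are illustrative, not real mozharness classes.

class SummaryWatcher:
    def __init__(self):
        self.saw_suite_end = False

    def handle_line(self, data):
        # 'data' is one parsed structured-log record (a dict).
        if data.get('action') == 'suite_end':
            self.saw_suite_end = True

    def finish(self):
        # Called once the log stream is exhausted; returns an error
        # message if the suite never reported completion.
        if not self.saw_suite_end:
            return 'ERROR: suite ended without emitting suite_end'
        return None

# A run that dies after individual tests but before suite_end:
watcher = SummaryWatcher()
for record in [{'action': 'test_start'}, {'action': 'test_end'}]:
    watcher.handle_line(record)
print(watcher.finish())  # -> ERROR: suite ended without emitting suite_end
```

This only helps once the JS side reliably emits a final token on clean runs, of course; otherwise every run would be flagged.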
Looks like bug 1048855 and bug 1043428. The plumbing for it landed last week, but we aren't using structuredlog.py for mochitests yet. If the goal is to print a message that's sheriffable in the short term, I would add something to http://hg.mozilla.org/build/mozharness/file/d8d1a4283056/mozharness/mozilla/testing/unittest.py#l167 that logs something about no summary found or no checks run at the error level, and I'll put something compatible in structuredlog.py before turning it on for mochitests. This actually highlights that we'll have to emit suite_end or some other final token from the JS side that we check for to detect failures like these before doing that.
Created attachment 8521780 [details] [diff] [review]
Log a warning when no test summary is found
Comment on attachment 8521780 [details] [diff] [review]
Log a warning when no test summary is found

Review of attachment 8521780 [details] [diff] [review]:
-----------------------------------------------------------------

::: mozharness/mozilla/testing/unittest.py
@@ +171,5 @@
> +
> +        # Account for the possibility that no test summary was output.
> +        if self.pass_count <= 0 and self.fail_count <= 0 and \
> +           (self.known_fail_count is None or self.known_fail_count <= 0):
> +            self.warning('No tests run or test summary not found')

Does this need to be "error" to show up in treeherder?
(In reply to Chris Manchester [:chmanchester] from comment #10)
> Does this need to be "error" to show up in treeherder?

Hmm, I think so. Updated and pushed: https://hg.mozilla.org/build/mozharness/rev/bea2df1c0276
(In reply to TBPL Robot from comment #14)
> submit_timestamp: 2014-11-13T22:37:05
> log: https://treeherder.mozilla.org/ui/logviewer.html#?repo=b2g-inbound&job_id=839650
> repository: b2g-inbound
> who: philringnalda[at]gmail[dot]com
> machine: tst-linux64-spot-365
> buildname: b2g_emulator_vm b2g-inbound opt test mochitest-3
> revision: 69bda3e20579
>
> No tests run or test summary not found
> Return code: 1

So the mozharness patch is in place and is working. That doesn't, of course, fix the underlying problem, which at best guess is a B2G crash that produces no crash report. Do you want to continue tracking that here?
If we close this bug out, I think we should rename it to something along the lines of what ended up landing here. Otherwise, as filed, I think it makes sense to just leave it open for further starring. That said, I'm not sure the harness is really leaving us in a satisfactory place yet: I'm not sure what information we're able to give an interested developer for debugging this that we couldn't before.
Unfortunately, it isn't clear what additional information the harness could provide. We rely on crash reports being written to the profile to handle crashes, but for some reason that apparently isn't happening here. We could possibly make the harness smarter about knowing when the JS side terminated prematurely, so at least we could get the test name in the error message.
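A hedged sketch of that last idea (hypothetical helper, not actual harness code): scan the structured records for a test_start with no matching test_end, so a premature exit can at least name the test that was active.

```python
# Hypothetical sketch: report which test was running when the harness
# terminated prematurely. Record shapes are illustrative.

def find_unfinished_test(records):
    """Return the name of the last test that started but never ended,
    or None if every test_start was matched by a test_end."""
    current = None
    for rec in records:
        if rec.get('action') == 'test_start':
            current = rec.get('test')
        elif rec.get('action') == 'test_end':
            current = None
    return current

log = [
    {'action': 'test_start', 'test': 'test_a.html'},
    {'action': 'test_end', 'test': 'test_a.html'},
    {'action': 'test_start', 'test': 'test_hidden_alpha.html'},
    # harness died here: no test_end, no suite summary
]
print(find_unfinished_test(log))  # the test active when the run died
```

This wouldn't explain the crash, but it would put the likely culprit (e.g. test_hidden_alpha.html in the log from comment 0) directly into the error message instead of making sheriffs dig for it.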