Bug 847275 (Open) · Opened 11 years ago · Updated 2 years ago

make assertions cause test failures in mochitest-browser-chrome

Categories

(Testing :: Mochitest, defect)


Tracking

(Not tracked)

People

(Reporter: dbaron, Unassigned, Mentored)

References

(Blocks 2 open bugs)

Details

(Whiteboard: [leave open])

Attachments

(2 files)

Bug 404077 made NS_ASSERTIONs in debug builds cause test failures for all types of mochitests other than mochitest-browser-chrome.

This bug is a followup to do the same for mochitest-browser-chrome.

The approach should be relatively straightforward; see:
https://hg.mozilla.org/mozilla-central/rev/d1490171893b
https://hg.mozilla.org/mozilla-central/rev/02b878360c64

I suspect some of the machinery there will even be in common, but the point at which the assertions are checked is different since mochitest-browser-chrome moves between tests differently.
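The per-test delta check described above can be modeled outside the harness. The sketch below is a minimal, runnable approximation of that logic; `makeAssertionChecker` and `getAssertionCount` are hypothetical stand-ins for the harness plumbing and `nsIDebug2.assertionCount`, not real harness API:

```javascript
// Sketch of the assertion-count bookkeeping described above.
// In the real harness the running count comes from nsIDebug2.assertionCount;
// here getAssertionCount is an injected stand-in so the logic runs anywhere.
function makeAssertionChecker(getAssertionCount) {
  let lastCount = getAssertionCount();
  return function checkTest(name, expectedMin, expectedMax) {
    const newCount = getAssertionCount();
    const numAsserts = newCount - lastCount;
    lastCount = newCount;
    if (numAsserts > expectedMax) {
      // Too many assertions: a real failure.
      return "TEST-UNEXPECTED-FAIL | " + name + " | " + numAsserts +
             " assertions, expected at most " + expectedMax;
    } else if (numAsserts < expectedMin) {
      // Fewer assertions than annotated: the annotation is stale.
      return "TEST-UNEXPECTED-PASS | " + name + " | " + numAsserts +
             " assertions, expected at least " + expectedMin;
    }
    return "TEST-PASS | " + name + " | assertion count " + numAsserts +
           " within expected range";
  };
}
```

The key difference for browser-chrome, per the comment above, is only *where* this check is invoked, since that harness moves between tests differently than plain mochitest.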
Blocks: 279923
Assignee: nobody → dbaron
Status: NEW → ASSIGNED
I tested all three error messages manually by making different
SimpleTest.expectAssertions() calls in
browser/base/content/test/browser_aboutHome.js.
Attachment #784178 - Flags: review?(dao)
(Though maybe I should try to make expectAssertions be on SimpleTest but not in the global scope.  I don't see an obvious existing pattern for doing that, though; maybe I'm just getting lost in the twisty maze of browser-test.js.)
Though browser-chrome mochitests are annoyingly inconsistent with other mochitests already; e.g., in browser-chrome mochitests waitForExplicitFinish is on the global scope but not on SimpleTest; in other mochitests it's on SimpleTest but not the global scope.

I guess I'll wait to hear what dao (and maybe others familiar with browser-chrome mochitests) have to say before fiddling with the patch further.
(In reply to David Baron [:dbaron] (don't cc:, use needinfo? instead) from comment #3)
> Try push with patch 1 only (SHOULD SHOW FAILURES):
> https://tbpl.mozilla.org/?tree=Try&rev=9bd5f0d7fb09
> 
> Try push with patch 1 and patch 2:
> https://tbpl.mozilla.org/?tree=Try&rev=66ddcb780f2a

Both runs were as-expected; lots of failures in the first, all green in the second, using:

try: --build d --platform linux,macosx64,win32 --unittests mochitest-bc --talos none --no-emails
(In reply to David Baron [:dbaron] (don't cc:, use needinfo? instead) from comment #5)
> Though browser-chrome mochitests are annoyingly inconsistent with other
> mochitests already; e.g., in browser-chrome mochitests waitForExplicitFinish
> is on the global scope but not on SimpleTest; in other mochitests it's on
> SimpleTest but not the global scope.

Indeed there is an existing inconsistency. I don't think you need to address it here, and it's probably better to just go with the existing pattern (i.e. your current patch).
Comment on attachment 784178 [details] [diff] [review]
patch 1:  Make assertions cause test failures in browser-chrome mochitest.

>+      var debugsvc = Cc["@mozilla.org/xpcom/debug;1"].getService(Ci.nsIDebug2);
>+      if (debugsvc.isDebugBuild) {
>+        var newAssertionCount = debugsvc.assertionCount;
>+        var numAsserts = newAssertionCount - this.lastAssertionCount;
>+        this.lastAssertionCount = newAssertionCount;
>+
>+        var max = testScope.__expectedMaxAsserts;
>+        var min = testScope.__expectedMinAsserts;

nit: please use 'let'

>+    if (typeof(min) != "number" || typeof(max) != "number" ||
>+      min < 0 || max < min) {

nit: the second line needs more indentation (2 spaces)
Attachment #784178 - Flags: review?(dao) → review+
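For reference, the parameter validation quoted in the review above can be pulled out as a standalone function. This is a sketch only; `validateExpectedAsserts` is a hypothetical name, not harness API:

```javascript
// Sketch of the (min, max) validation quoted from the patch above.
// Returns an error string for invalid pairs, or null if the pair is usable.
function validateExpectedAsserts(min, max) {
  if (typeof(min) != "number" || typeof(max) != "number" ||
      min < 0 || max < min) {
    return "bad parameters to expectAssertions";
  }
  return null;
}
```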
Comment on attachment 784179 [details] [diff] [review]
patch 2:  Temporarily disable failures from checking of assertions in browser-chrome mochitests so that we can annotate the expected assertions before enabling.

>         } else if (numAsserts < min) {
>           let msg = "Assertion count " + numAsserts +
>                     " is less than expected range " +
>                     min + "-" + max + " assertions.";
>-          // TEST-UNEXPECTED-PASS
>-          this.currentTest.addResult(new testResult(false, msg, "", true));
>+          // TEST-UNEXPECTED-PASS (TEMPORARILY TEST-KNOWN-FAIL)
>+          //this.currentTest.addResult(new testResult(false, msg, "", true));
>+          this.currentTest.addResult(new testResult(true, msg, "", true));

Since no browser chrome test is using expectAssertions yet, the assertion count can't be less than 'min' (i.e. less than 0). So I don't see the point in temporarily making that case a known failure. I think you can drop that part of this patch.
Attachment #784179 - Flags: review?(dao) → review+
(In reply to Dão Gottwald [:dao] from comment #9)
> Since no browser chrome test is using expectAssertions yet, the assertion
> count can't be less than 'min' (i.e. less than 0). So I don't see the point
> in temporarily making that case a known failure. I think you can drop that
> part of this patch.

Except *not* taking this piece would require that we do all of the annotation and the enabling in a single push, which makes it hard to find and avoid intermittent variation before enabling.
(In reply to David Baron [:dbaron] (don't cc:, use needinfo? instead) from comment #10)
> (In reply to Dão Gottwald [:dao] from comment #9)
> > Since no browser chrome test is using expectAssertions yet, the assertion
> > count can't be less than 'min' (i.e. less than 0). So I don't see the point
> > in temporarily making that case a known failure. I think you can drop that
> > part of this patch.
> 
> Except *not* taking this piece would require that we do all of the
> annotation and the enabling in a single push

Err, why? Adding expectAssertions to a single test should still be green (i.e. numAsserts < min should still be false) as long as your 'min' value isn't off. This seems completely separate from whether you use expectAssertions in other tests and make numAsserts > max a failure in the same push.
Right, but the min value might well be off.  For example, suppose a test asserts 8 times 99% of the time, but 0 times the remaining 1%.  The initial annotation might be done using a sample without a 0, with the idea being that we'd pull a few days of logs to check the annotations before actually enabling.  I'm not sure if my log analysis scripts will handle that case correctly up front (since there's no error message in the 0 case when there's no annotation), though I think they might.

(And I'd actually like to hand off the annotation work to somebody else, and it's better if they have the chance to make some correctable mistakes in the annotations without breaking the tree.  We can then scan a day's worth of logs for the relevant KNOWN-FAIL messages before enabling the checking.)
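The annotation step described above amounts to choosing a (min, max) range wide enough to cover the observed variation across runs. A hypothetical helper (not part of the scripts referenced in bug 404077):

```javascript
// Hypothetical helper for the annotation workflow described above: given
// the assertion counts observed for one test across several log runs,
// pick a (min, max) range for SimpleTest.expectAssertions(min, max).
function rangeFromObservedCounts(counts) {
  // The 99%/1% example above (a test that usually asserts 8 times but
  // occasionally 0 times) yields the range [0, 8] only if the sample
  // happens to include a 0-assertion run.
  return [Math.min(...counts), Math.max(...counts)];
}
```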
(In reply to David Baron [:dbaron] (don't cc:, use needinfo? instead) from comment #12)
> The initial
> annotation might be done using a sample without a 0, with the idea being
> that we'd pull a few days of logs to check the annotations before actually
> enabling.  I'm not sure if my log analysis scripts will handle that case
> correctly up front (since there's no error message in the 0 case when
> there's no annotation), though I think they might.

Sorry, the logic here got confused in the process of editing.  These two sentences are actually two separate cases, both predicated on the first sentence in the paragraph.
Also, I probably won't be able to finish this up in the near future, but I'm willing to explain to someone else what needs doing.  At a high level, it requires pulling down tinderbox logs, figuring out which tests have variable number of assertions and addressing that variation in some reasonable way, and then annotating the tests, landing those annotations, and finally backing out https://hg.mozilla.org/integration/mozilla-inbound/rev/b560ee360b68 .  There are some scripts in bug 404077 that should help with a bunch of the analysis of logs, although they probably need some modifications.  The work is also similar to the work documented in bug 404077, but smaller since there's a smaller set of tests to deal with.
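The log-scanning step mentioned above (checking a day's worth of logs for the relevant KNOWN-FAIL messages before enabling) could be approximated with a small script. This is a sketch; the exact message format in real tinderbox logs may differ:

```javascript
// Hypothetical sketch of scanning a log for TEST-KNOWN-FAIL lines about
// assertion counts, as described in the comments above.
function findKnownFailAssertions(logText) {
  const hits = [];
  for (const line of logText.split("\n")) {
    if (line.includes("TEST-KNOWN-FAIL") &&
        line.toLowerCase().includes("assertion")) {
      hits.push(line.trim());
    }
  }
  return hits;
}
```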
Whiteboard: [leave open] → [leave open][mentor=dbaron]
I intend to work on this as I have time to do so.
Feel free to ping me if you need pointers, though the above comment might be enough info for you depending on how comprehensible those scripts are.
Mentor: dbaron
Whiteboard: [leave open][mentor=dbaron] → [leave open]
Blocks: 1048775
dbaron, this is assigned to you and mentored by you, I assume you plan on mentoring this bug as needed and not fixing it yourself?
Assignee: dbaron → nobody
(In reply to Joel Maher (:jmaher) from comment #19)
> dbaron, this is assigned to you and mentored by you, I assume you plan on
> mentoring this bug as needed and not fixing it yourself?

yes.
No assignee, updating the status.
Status: ASSIGNED → NEW
Severity: normal → S3