Closed Bug 987849 Opened 10 years ago Closed 10 years ago

Add fail-if support to Mochitest manifests

Categories

(Testing :: Mochitest, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: ted, Assigned: jmaher)

References

Details

Attachments

(3 files, 6 obsolete files)

Now that we're running Mochitests from manifests, we should be able to support the "fail-if" key in manifests. This will require passing the expected pass/fail result down from the Python harness to the JavaScript harness, somewhere around here:
http://hg.mozilla.org/mozilla-central/annotate/c69c55582faa/testing/mochitest/runtests.py#l475

and then handling this appropriately in the JavaScript harness. You can refer to the xpcshell harness for pointers, since it already handles this.
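For reference, here is a minimal sketch of where the expected value would come from on the Python side. It assumes manifestparser behaves the way it does for xpcshell (a matching fail-if makes active_tests() return the test with expected set to 'fail'); the manifest contents, test name, and temp-file handling are purely illustrative:

import os
import tempfile
from manifestparser import TestManifest

# Hypothetical manifest entry; test_known_broken.html is a made-up name.
ini = "[test_known_broken.html]\nfail-if = os == 'win'\n"

fd, path = tempfile.mkstemp(suffix=".ini")
with os.fdopen(fd, "w") as f:
    f.write(ini)

# strict=False so the (nonexistent) test file doesn't error at parse time.
manifest = TestManifest(manifests=(path,), strict=False)

# active_tests() evaluates skip-if/fail-if against the values we pass in;
# a test whose fail-if matches should come back with expected == 'fail'.
for test in manifest.active_tests(exists=False, os='win'):
    print("%s -> %s" % (test['name'], test.get('expected', 'pass')))

The 'expected' value for each test is what the Python harness would need to hand down to the JavaScript harness.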
Would we allow the test to run and expect it to fail, but then mark all failures as known-failure in the reporting part?

right now a lot of tests are skip-if, and the only reason to not do this is to ensure we don't have crashes, hangs, oom, etc. when a test is run even though we don't expect it to pass.  When we have a single test case (file) that has 1000+ asserts that are normally hit, but 5 of them fail, how would that work?

what about tests that cause leaking windows or docshells?

I would assume we would need random-if as well.
(In reply to Joel Maher (:jmaher) from comment #1)
> Would we allow the test to run and expect it to fail, but then mark all
> failures as known-failure in the reporting part?

Yes, I believe that's what we do for xpcshell right now.

> right now a lot of tests are skip-if, and the only reason to not do this is
> to ensure we don't have crashes, hangs, oom, etc. when a test is run even
> though we don't expect it to pass.  When we have a single test case (file)
> that has 1000+ asserts that are normally hit, but 5 of them fail, how would
> that work?

Individual subtest failures should show up as KNOWN-FAIL. The overall test status should be KNOWN-FAIL if it's marked as fail-if.

> what about tests that cause leaking windows or docshells?

This is a good question! I don't know what we currently do in xpcshell, although that's simpler since we run each test in a separate process. I guess it makes sense that a failure due to a leak from a test that's known to fail shouldn't cause an actual failure.
 
> I would assume we would need random-if as well.

I don't think we have this in xpcshell currently, so this could be separate, but it would be nice to have.
It's crazy that this doesn't work, and there's no warning I could find about it not working.

Silent failures like this lead to a bunch of wasted time.
Does this need to be fixed here somewhere?
http://mxr.mozilla.org/mozilla-central/source/testing/mochitest/runtests.py#152

This could be useful for testing failure reporting in the mochitest harness from the client side.
Blocks: 1048446
I think it would need fixing in a number of places. We'd probably have to pass the expected state down from Python to the HTML/JS harness so it could properly deal with each individual test result.

In the xpcshell harness this is not so hard to deal with, since we do all the test result interpretation in the Python harness. I don't think it's that simple in Mochitest. Maybe with structured logging we could remove most of the result interpretation logic from JS and do it in Python?
Two more engineers lost a number of hours today to this bug because there's no indication that fail-if does nothing.
To pile on comment 6, there are two problems here. First, the silent-failing behavior. Second, this missing feature is something we really need in many places, so until it's available we have to reimplement it elsewhere, leaving us with a bad solution: two totally different systems coexisting for fail-if and skip-if.
See Also: → 1052240
Assignee: nobody → vaibhavmagarwal
Attached patch Check for unknown tags (obsolete) — Splinter Review
This patch detects whether any unsupported tag is present in a manifest. I have added a unit test that checks that ParseError is raised correctly when an unknown tag is present. Since some other unit tests used unsupported tags, I also edited them to use supported tags. Tested locally; all the unit tests pass.
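Roughly, the idea is a whitelist check along these lines (a sketch only; the key set and the error type are illustrative, not the actual patch or manifestparser code):

# Illustrative whitelist; the real set of supported keys is larger and
# lives in the patch, not here.
KNOWN_KEYS = {'skip-if', 'fail-if', 'run-if', 'disabled', 'support-files'}

def check_section_keys(section_name, keys):
    """Raise if a manifest section uses a key we don't recognize."""
    unknown = set(keys) - KNOWN_KEYS
    if unknown:
        raise ValueError("unknown key(s) %s in section [%s]"
                         % (', '.join(sorted(unknown)), section_name))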
Attachment #8493960 - Flags: review?(jmaher)
Attachment #8493960 - Flags: review?(hskupin)
Attachment #8493960 - Flags: review?(ahalberstadt)
Try push for mochitest-1 and xpcshell-1: https://tbpl.mozilla.org/?tree=Try&rev=cc9b121f7b23
Comment on attachment 8493960 [details] [diff] [review]
Check for unknown tags

Review of attachment 8493960 [details] [diff] [review]:
-----------------------------------------------------------------

Some others might have more input regarding the unit tests.
Attachment #8493960 - Flags: review?(jmaher) → review+
Comment on attachment 8493960 [details] [diff] [review]
Check for unknown tags

Review of attachment 8493960 [details] [diff] [review]:
-----------------------------------------------------------------

I don't think we should land this, because the supported keys vary from harness to harness. This means that:

A) we'd need to update manifestparser anytime any test suite anywhere adds a new key.
B) there will still be silent failures; e.g. requesttimeoutfactor is only implemented by XPCShell, but this code makes it look like a universally implemented option.

It's also not really clear to me whether the outrage is because arbitrary keys can be defined in the manifests, or the specific fact that fail-if appears to be a supported key [1] yet doesn't do anything in mochitest. If it's the latter, then let's just either implement it in mochitest, or hardcode a check for it in the mochitest harness until we do. If it's the former, then let's add a mechanism for consumers of manifestparser to define which keys they do or don't support on their own (though personally, I'd still rather not do this). Either way, we shouldn't be keeping track of valid keys in manifestparser itself.

[1] http://dxr.mozilla.org/mozilla-central/source/testing/mozbase/manifestparser/manifestparser/manifestparser.py#1063
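To illustrate the difference, a consumer-side check might look something like this (a sketch only; the function name and the key set are assumptions, not an existing manifestparser API):

# Each harness declares the manifest keys it actually honors and validates
# the tests returned by active_tests() itself, instead of manifestparser
# owning one global list of valid keys.
MOCHITEST_KEYS = {'name', 'path', 'here', 'manifest', 'relpath',
                  'skip-if', 'fail-if', 'disabled', 'expected', 'support-files'}

def reject_unsupported_keys(tests, supported=MOCHITEST_KEYS):
    for test in tests:
        unknown = set(test) - supported
        if unknown:
            raise ValueError("%s uses key(s) this harness ignores: %s"
                             % (test.get('name', '<unknown>'),
                                ', '.join(sorted(unknown))))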
Attachment #8493960 - Flags: review?(ahalberstadt) → review-
Agree. Short-term we should make Mochitest error if you specify fail-if in a manifest, so that this isn't a silent footgun. Longer-term we should actually implement it. (Although we will probably need the selftests working in order to do so properly.)
Comment on attachment 8493960 [details] [diff] [review]
Check for unknown tags

Review of attachment 8493960 [details] [diff] [review]:
-----------------------------------------------------------------

I'm not really a reviewer for this patch.
Attachment #8493960 - Flags: review?(hskupin)
We actually want to implement support for fail-if in mochitests; Milan said it would be really useful for his team.
I agree! However, that's likely to take some time to get right. We should be able to very quickly land a patch that errors out on this, so that we at least aren't confusing developers and wasting their time.
This patch is a temporary fix: it raises an Exception if a 'fail-if' is encountered. I have implemented it in the mochitest harness since I am not sure whether 'fail-if' works for other harnesses. Because it is done in the mochitest harness, it triggers when a 'fail-if' applies to an "active test" (a test is active if the fail-if condition matches the current environment; e.g. with fail-if = toolkit == 'android', the exception is raised when running on Android, and the test runs normally in other environments).
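In outline, the guard is something like this (names here are illustrative; it mirrors the intent of the patch rather than quoting it):

def reject_fail_if(tests):
    """Raise until mochitest actually supports fail-if; manifestparser marks
    a matching fail-if by setting the test's expected value to 'fail'."""
    for test in tests:
        if test.get('expected') == 'fail':
            raise Exception("fail-if is not yet supported in mochitest "
                            "manifests: %s" % test.get('path', test['name']))

# e.g. reject_fail_if(manifest.active_tests(disabled=False, **mozinfo.info))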
Attachment #8493960 - Attachment is obsolete: true
Attachment #8494766 - Flags: review?(ahalberstadt)
Comment on attachment 8494766 [details] [diff] [review]
Exception raised if 'fail-if' encountered

Review of attachment 8494766 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks, this looks great for now as we work to add fail-if support.

::: testing/mochitest/runtests.py
@@ +1579,5 @@
>            tests = manifest.active_tests(disabled=disabled, options=options, **info)
>      paths = []
>  
>      for test in tests:
> +      if 'expected' in test and test['expected'] == 'fail':

nit: this could be "if test.get('expected') == 'fail':"
Attachment #8494766 - Flags: review?(ahalberstadt) → review+
Carrying forward the r+. Here is a pastebin of the output when 'fail-if' is encountered: https://pastebin.mozilla.org/6607643
Attachment #8494766 - Attachment is obsolete: true
Attachment #8494773 - Flags: review+
(In reply to Vaibhav (:vaibhav1994) from comment #9)
> Try push for mochitest-1 and xpcshell-1:
> https://tbpl.mozilla.org/?tree=Try&rev=cc9b121f7b23

This try run shows a complete bustage; could you provide a new try run? Thanks!
Keywords: checkin-needed
The new try push: https://tbpl.mozilla.org/?tree=Try&rev=980636e02087 for the current patch.
I am thinking about possible approaches for adding 'fail-if' support to Mochitests. After discussing with :jmaher on IRC, I propose the following:

Edit the test that is expected to fail and add this line: ok(false, "a call to ok") to that particular test. This makes the test fail, and we can then mark it as a known failure.

This is one possible approach; I am open to discussing the pros/cons of this method, as well as other possible ways to do it.
I have college exams for the next 10 days and will not be able to work on this bug until then.
I have seen bits and pieces of the code relevant to 'fail-if' in the Python and JavaScript harnesses, for example in SimpleTest.js: http://dxr.mozilla.org/mozilla-central/source/testing/mochitest/tests/SimpleTest/SimpleTest.js#299

and in manifestparser: http://dxr.mozilla.org/mozilla-central/source/testing/mozbase/manifestparser/manifestparser/manifestparser.py#1099

This bug mostly involves examining the existing code; writing the actual code is probably not that much work. Since I am busy for the next 10 days, feel free to take up this bug.
Uploading this so we have code in case I get context-switched out. It works locally; I still need a try run to see green on all platforms/tests.
OK, this is sort of hacky, but it is green on Linux opt tests (not on debug though):
https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=bf125c0c6f61

Some questions to resolve before completing this:
1) does fail-if mean that every single assertion is expected to fail (current code does this), or 1 or more assertions will fail?
2) does fail-if only apply to assertions from simpletest, or does it apply to other simpletest failures (leaked window, timeout, exceptions)
3) does fail-if need to apply to python issues such as leaks, timeouts, crashes
4) is the approach to bubble this up to the JS harness solid?  It is not doing anything in the python harness
5) fail-if applied to the 'default' root of a manifest applies to all tests for that manifest, is that correct?

Answering these questions, along with some comments on the code, will help us complete this task in a timely manner.
Attachment #8499107 - Attachment is obsolete: true
Attachment #8499651 - Flags: feedback?(cmanchester)
Attachment #8499651 - Flags: feedback?(ahalberstadt)
Flags: needinfo?(ted)
Great questions!  My 2 cents:

> 1) does fail-if mean that every single assertion is expected to fail (current code does this), or 1 or 
> more assertions will fail?

I think it should mean 1 or more assertions will fail.  Otherwise, we'll not be able to use fail-if with many use cases.

> 2) does fail-if only apply to assertions from simpletest, or does it apply to other simpletest 
> failures (leaked window, timeout, exceptions)

This is harder.  Some of these kinds of failures will potentially have side-effects on later tests, so I'm not sure whether we want to allow ignoring them.  For a first iteration, I think I'd say just apply this to assertions.

> 3) does fail-if need to apply to python issues such as leaks, timeouts, crashes

I'd say no, at least now.  A harness-level crash or timeout isn't recoverable (i.e., the run is aborted), so I think these should still be hard failures.

> 5) fail-if applied to the 'default' root of a manifest applies to all tests for that manifest, is 
> that correct?

That makes sense to me.
I agree with :jgriffin.
This seems like something the mozlog test results are good for, so I took a stab at doing it in runtests.py. If others agree we should do this on the JS side, that will of course work, but I think this is easier to deal with. Note that with this approach the HTML UI in the browser still turns red and reports a fail count regardless of fail-if.

An outstanding ambiguity for me is whether a todo assertion that gets an unexpected pass is a failure in the sense meant by fail-if in a manifest. In this version it is.

On try green with failing assertions in tests marked fail-if:
https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=abe6bdb03edb

On try orange with failing assertions removed from tests marked fail-if:
https://treeherder.mozilla.org/ui/#/jobs?repo=try&revision=7206a900a4d4
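For concreteness, the Python-side idea boils down to something like the sketch below. It only relies on the standard structured-log message fields ('action', 'test', 'status', 'expected'); the class name and how it would be wired into the logger are illustrative, not the attached patch:

class FailIfFilter(object):
    """Downgrade unexpected failures to known failures for tests whose
    manifest entry evaluated to fail-if."""

    def __init__(self, expected_failures):
        self.expected_failures = set(expected_failures)

    def __call__(self, data):
        # mozlog only includes 'expected' when the result differs from it,
        # so this only touches genuinely unexpected FAIL statuses.
        if (data.get('action') == 'test_status'
                and data.get('test') in self.expected_failures
                and data.get('status') == 'FAIL'
                and 'expected' in data):
            data = dict(data, expected='FAIL')  # status now matches expected
        return data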
Attachment #8500252 - Flags: feedback?(jmaher)
Attachment #8500252 - Flags: feedback?(ahalberstadt)
Comment on attachment 8500252 [details] [diff] [review]
Add fail-if to Mochitest

Review of attachment 8500252 [details] [diff] [review]:
-----------------------------------------------------------------

This looks like a simpler approach; I like the simplicity of it. We should go to the graphics guys and ask them whether they want fail-if for jobs on treeherder only, or for treeherder + local. If they want local, then this would be a no-go.

The good thing about this patch is that it solves question #1 from comment #26. It also solves question #2 (although we are probably on slippery ground with either approach here: should we add timeout-if, leak-if, throw-if, etc.?).

My biggest concern is that we are just hacking the status that gets parsed by our dashboards. I lean towards that being wrong, but I don't understand the whole set of issues surrounding the need for this.

::: testing/mochitest/runtests.py
@@ +160,5 @@
> +        """
> +        if test.startswith("/tests/"):
> +            test = test[len("/tests/"):]
> +        elif test.startswith("chrome://mochitests/content/browser/"):
> +            test = test[len("chrome://mochitests/content/browser/"):]

this overlooks the mochitest-chrome case, but we have plain + browser-chrome.

@@ +211,5 @@
> +        if 'test' in message:
> +            test = self._normalize_test_name(message['test'])
> +        if test in self.expected_failures:
> +            if action == 'test_status' and 'expected' in message:
> +                self.expected_fail_count += 1

I don't understand the use of expected_fail_count.

::: testing/mochitest/tests/browser/browser.ini
@@ +30,1 @@
>  #  browser_fail_throw.js

I would like to see what happens with this test case.
Attachment #8500252 - Flags: feedback?(jmaher) → feedback-
Comment on attachment 8500252 [details] [diff] [review]
Add fail-if to Mochitest

Review of attachment 8500252 [details] [diff] [review]:
-----------------------------------------------------------------

If KNOWN-FAILs never touched the fail count in the first place, then we wouldn't need these hacks to appease mozharness. Does that require this to be done on the JS side? Or can we do that in the mozlog handler? It seems like being able to handle KNOWN-FAIL in a shared place would be useful outside of mochitest as well.

Also sorry for being slow, but why would this be a no-go for running locally?

::: testing/mochitest/runtests.py
@@ +178,5 @@
> +            count = int(parts[1])
> +        except ValueError:
> +            return
> +        count -= self.expected_fail_count
> +        self.expected_fail_count = 0

If there are unexpected passes, count could be negative (I think?).
Attachment #8500252 - Flags: feedback?(ahalberstadt) → feedback-
My concern with running this locally is that the UI will display failures even though the developer intends for the test to fail. That seems like a way to waste half a day looking into it before realizing the failures were expected because of fail-if. But if we could find a method to communicate this better in the Firefox web UI, then I would have less concern.
Right, good point. I agree that that would be confusing. I think handling this on the JS side might be the way to go then.
(In reply to Andrew Halberstadt [:ahal] from comment #31)
> Comment on attachment 8500252 [details] [diff] [review]
> Add fail-if to Mochitest
> 
> Review of attachment 8500252 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> If KNOWN-FAILs never touched the fail count in the first place, then we
> wouldn't need these hacks to appease mozharness. Does that require this to
> be done on the JS side? Or can we do that in the mozlog handler? It seems
> like being able to handle KNOWN-FAIL in a shared place would be useful
> outside of mochitest as well..
> 
> Also sorry for being slow, but why would this be a no-go for running locally?

As it is, we parse the HTML UI, dump it from JavaScript, and then mozharness uses that to contribute to a TBPL status. So as-is this requires JS changes or something like what I have here, but the mozharness issue is going away.

> 
> ::: testing/mochitest/runtests.py
> @@ +178,5 @@
> > +            count = int(parts[1])
> > +        except ValueError:
> > +            return
> > +        count -= self.expected_fail_count
> > +        self.expected_fail_count = 0
> 
> If there are unexpected passes, count could be negative (I think?).

It depends on how we would deal with todos; that's a valid concern.
(In reply to Joel Maher (:jmaher) from comment #30)
> Comment on attachment 8500252 [details] [diff] [review]
> Add fail-if to Mochitest
> 
> Review of attachment 8500252 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> This looks like a simpler approach, I like the simplicity of it.  We should
> go to the graphics guys and ask them if they want fails-if for jobs on
> treeherder only, or for treeherder + local.  If they want local, then this
> would be a no go.
> 
> The good thing about this patch it is solves the question #1 from comment
> #26.  It also solves question #2 (although we are probably on slippery
> ground with either approach here- should we add timeout-if, leak-if,
> throw-if, etc.)
> 
> My biggest concern is that we are just hacking the status of which gets
> parsed by our dashboards.  I lean towards that being wrong, but I don't
> understand the whole set of issues surrounding the need for this.

For what it is worth that is how xpcshell works, but this is a pretty different UI.

> 
> ::: testing/mochitest/runtests.py
> @@ +160,5 @@
> > +        """
> > +        if test.startswith("/tests/"):
> > +            test = test[len("/tests/"):]
> > +        elif test.startswith("chrome://mochitests/content/browser/"):
> > +            test = test[len("chrome://mochitests/content/browser/"):]
> 
> this overlooks the mochitest-chrome case, but we have plain + browser-chrome.

OK. We should just be logging consistent names; I filed a bug on that.

> 
> @@ +211,5 @@
> > +        if 'test' in message:
> > +            test = self._normalize_test_name(message['test'])
> > +        if test in self.expected_failures:
> > +            if action == 'test_status' and 'expected' in message:
> > +                self.expected_fail_count += 1
> 
> I don't understand the use of expected_fail_count.

It's how the failure gets inverted for mozharness. Right now mozharness parses the fail count directly as JavaScript dumps it.

> 
> ::: testing/mochitest/tests/browser/browser.ini
> @@ +30,1 @@
> >  #  browser_fail_throw.js
> 
> I would like to see what happens with this test case.

I would too. Not sure about that.
If there's consensus that we need JavaScript to understand known failures, there's no need to belabor the point, but I think whoever is using this is going to need to understand a little bit about how it works, because the meaning of fail-if isn't completely obvious and we already have todo, which means something similar. I might be a little surprised if I marked a test as an expected fail in a manifest and then found, running locally, that my failure count was 0 and everything was green, as well.
(In reply to Chris Manchester [:chmanchester] from comment #36)
> If there's concensus we need javascript to understand known failures there's
> no need to belabor the point, but I think whoever is using this is going to
> need to understand a little bit about how it works, because the meaning of
> fail-if isn't completely obvious and we already have todo which means
> something similar. I might be a little surprised if I marked a test as an
> expected fail in a manifest and then found running locally that my failure
> count was 0 and everything was green, as well.

While we do have todo, WebGL at least imports a huge number of tests, and it would be impractical to modify these tests to use todo.
I am currently not looking at this bug.
Assignee: vaibhavmagarwal → nobody
Attachment #8499651 - Attachment is obsolete: true
Attachment #8499651 - Flags: feedback?(cmanchester)
Attachment #8499651 - Flags: feedback?(ahalberstadt)
Comment on attachment 8499651 [details] [diff] [review]
support fail-if in mochitests from manifest definition (0.9)

Review of attachment 8499651 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry for neglecting this. I like this approach better than the pure Python one. It's definitely more complicated, but it feels like the right thing to do. I'm not super familiar with the JS mochitest harness, but nothing stands out as horribly wrong. So if this works, then f+ from me.

::: testing/mochitest/runtests.py
@@ +1605,5 @@
>  
>        testob = {'path': tp}
>        if test.has_key('disabled'):
>          testob['disabled'] = test['disabled']
> +      if test.has_key('expected'):

nit: this should be "'expected' in test". has_key was removed in python 3.

::: testing/mochitest/server.js
@@ +417,5 @@
>        [links[key], childCount] = list(key, file, recurse);
>        count += childCount;
>      } else {
>        if (file.leafName.charAt(0) != '.') {
> +        links[key] = {'test': {'url': key, 'expected': 'pass'}};

It's possible I don't understand how this works, but couldn't this be:
links[key] = { 'expected': 'pass' }

instead? The way it is, 'key' is being stored in two places.

::: testing/mochitest/tests/SimpleTest/TestRunner.js
@@ +528,5 @@
>      TestRunner.originalTestURL = $("current-test").innerHTML;
>  
>      SpecialPowers.registerProcessCrashObservers();
> +TestRunner.structuredLogger.info("JMAHER - in testrunner.runtests");
> +TestRunner.structuredLogger.info(JSON.stringify(arguments));

I assume you'll remove these in the final patch :p

::: testing/mochitest/tests/manifesttest/browser_fail.js
@@ +4,5 @@
>    isnot(true, true, "fail isnot");
> +//TODO: JMAHER: ensure todo statements are ignored in pass/fail land
> +//  todo(true, "fail todo");
> +//  todo_is(true, true, "fail todo_is");
> +//  todo_isnot(true, false, "fail todo_isnot");

I'm not really familiar with todo; when do these get used, and how do they mess things up in pass/fail land?
Attachment #8499651 - Attachment is obsolete: false
Comment on attachment 8499651 [details] [diff] [review]
support fail-if in mochitests from manifest definition (0.9)

Heh, figures this gets obsoleted right in the middle of being reviewed :p
Attachment #8499651 - Attachment is obsolete: true
Comment on attachment 8504732 [details] [diff] [review]
support fail-if in mochitests from manifest definition (0.92)

Review of attachment 8504732 [details] [diff] [review]:
-----------------------------------------------------------------

::: testing/mochitest/tests/SimpleTest/TestRunner.js
@@ +798,5 @@
> +            TestRunner.expected = TestRunner._urls[TestRunner._currentTest]['test']['expected'];
> +        } else {
> +            url = TestRunner._urls[TestRunner._currentTest];
> +            TestRunner.expected = 'pass';
> +        }

This code is repeated from earlier. Maybe we could factor it out into a separate function and call that?
Attachment #8504732 - Flags: review+
After looking over this bug, I have incorporated all feedback and comments to date, but I didn't remove the duplication or change:
link[key] = {'url': key, 'expected': true|false}

This is interesting because we iterate a directory, and previously we would assign:
link[key] = true|false  <- true was a leaf node, false was a directory

This is great, but we interact with this code in different parts of the harness, specifically in setup.js/server.js/chrome-harness.js. What is interesting is that in all harnesses we end up getting an array of test files:
http://dxr.mozilla.org/mozilla-central/source/testing/mochitest/server.js#536

Here we need both the file name and the expected value, so we could create the JSON structure here (and in the related functions), which would then give us what we need. That can be changed; I opted for keeping the change as close to the source as possible.
Assignee: nobody → jmaher
Attachment #8504732 - Attachment is obsolete: true
Status: NEW → ASSIGNED
Attachment #8505496 - Flags: review?(ahalberstadt)
Flags: needinfo?(ted)
Comment on attachment 8505496 [details] [diff] [review]
support fail-if in mochitests from manifest definition (1.0)

Review of attachment 8505496 [details] [diff] [review]:
-----------------------------------------------------------------

::: testing/mochitest/tests/SimpleTest/SimpleTest.js
@@ +259,5 @@
> +        SimpleTest.num_failed++;
> +        test.result = !test.result;
> +      }
> +      var successInfo = {status:"PASS", expected:"PASS", message:"TEST-PASS"};
> +      var failureInfo = {status:"PASS", expected:"PASS", message:"TEST-EXPECTED-FAIL"};

I think failureInfo as {status:"FAIL", expected:"FAIL", message:"TEST-KNOWN-FAIL"} would make sense here.
Comment on attachment 8505496 [details] [diff] [review]
support fail-if in mochitests from manifest definition (1.0)

Review of attachment 8505496 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good. r- for now over the comment regarding TEST-PASS vs TEST-KNOWN-FAIL. If that wasn't a mistake please clarify.

::: testing/mochitest/browser-test.js
@@ +357,5 @@
>          }
>        };
>  
> +      if (testScope.__expected == 'fail' && testScope.__num_failed <= 0) {
> +        this.currentTest.addResult(new testResult(false, "We expected at least one assertion to fail because this test file was marked as fail-if in the manifest", false));

The call to "testResult" below passes in 4 parameters, while this only passes in 3. Was that intentional?

::: testing/mochitest/chrome/test_sanityManifest.xul
@@ +14,5 @@
> +<script type="application/javascript"><![CDATA[
> +
> +ok(false, "a call to ok");
> +
> +]]></script> 

nit: whitespace

::: testing/mochitest/chrome/test_sanityManifest_pf.xul
@@ +15,5 @@
> +
> +ok(true, "a true call to ok");
> +ok(false, "a false call to ok");
> +
> +]]></script> 

nit: whitespace

::: testing/mochitest/tests/SimpleTest/SimpleTest.js
@@ +259,5 @@
> +        SimpleTest.num_failed++;
> +        test.result = !test.result;
> +      }
> +      var successInfo = {status:"PASS", expected:"PASS", message:"TEST-PASS"};
> +      var failureInfo = {status:"PASS", expected:"PASS", message:"TEST-EXPECTED-FAIL"};

+1 to chmanchester's comment. successInfo and failureInfo are the same on both sides of the conditional; is that a mistake? Even if it wasn't, I think logging TEST-KNOWN-FAIL is better than TEST-PASS.

@@ +843,5 @@
> +    if (SimpleTest.expected == 'fail' && SimpleTest.num_failed <= 0) {
> +        msg = 'We expected at least one failure';
> +        var test = {'result': false, 'name': 'fail-if condition in manifest', 'diag': msg};
> +        var successInfo = {status:"PASS", expected:"PASS", message:"TEST-PASS"};
> +        var failureInfo = {status:"FAIL", expected:"PASS", message:"TEST-UNEXPECTED-FAIL"};

Same here.

::: testing/mochitest/tests/SimpleTest/TestRunner.js
@@ +569,5 @@
> +        url = TestRunner._urls[TestRunner._currentTest]['test']['url'];
> +        TestRunner.expected = TestRunner._urls[TestRunner._currentTest]['test']['expected'];
> +    } else {
> +        url = TestRunner._urls[TestRunner._currentTest];
> +        TestRunner.expected = 'pass';

I keep seeing if statements like this; what's the case where 'test' isn't in the object?

::: testing/mochitest/tests/browser/browser_fail.js
@@ +6,5 @@
> +  //todo statements should always fail (in default (pass) or fail-if mode)
> +  //with fail-if we do not modify the expected outcome of todo statements
> +//  todo(true, "fail todo");
> +//  todo_is(true, true, "fail todo_is");
> +//  todo_isnot(true, false, "fail todo_isnot");

Please either remove this or file a follow up to fix whatever it is that caused us to comment them out (and add the bug number here).
Attachment #8505496 - Flags: review?(ahalberstadt) → review-
addressed feedback:
* fixed whitespace nits
* adjusted test-expected-fail -> test-known-fail (verified with local unittests)
* removed browser_fail.js from running tests (issue with todo statements)
* the case where we don't have ['test'] in the object: this is the old-style manifest format, which has runtests / excludetests. These need to be in that format because we filter tests by taking the gtestlist from setup.js/server.js and then filtering (once in JS land) what to run or what to exclude. I would have liked to morph the old format into the new one, but it isn't straightforward with the way it is handled. The approach to fix it is to get everything off the old format :)
Attachment #8505496 - Attachment is obsolete: true
Attachment #8506389 - Flags: review?(cmanchester)
Attachment #8506389 - Flags: review?(ahalberstadt)
Comment on attachment 8506389 [details] [diff] [review]
support fail-if in mochitests from manifest definition (1.1)

Review of attachment 8506389 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks, lgtm!
Attachment #8506389 - Flags: review?(ahalberstadt) → review+
Is this bug closable now?
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Keywords: leave-open
Resolution: --- → FIXED
Depends on: 1095024
See Also: → 1262433