Closed Bug 959493 Opened 10 years ago Closed 8 years ago

Check for arbitrary js exceptions during B2G test runs in tbpl

Component: Testing :: General
Platform: Other / Gonk (Firefox OS)
Type: defect
Priority: Not set
Severity: normal
Tracking: Not tracked
Status: RESOLVED WONTFIX
Reporter: zcampbell
Assignee: Unassigned

During b2g startup, JavaScript errors are often raised in the logs on all platforms.

I'm told mochitest is the best place for a TBPL test to pick up these failures.

The test should start the b2g process and wait until it has finished starting up and stabilised. I'm not clear on the exact point at which the test can stop; there can be exceptions after the Homescreen has loaded, as many asynchronous processes are still finishing starting up.

I don't think this is going to work very well with a mochitest; a test is not capable of restarting the b2g process. What we could do instead is have mozharness highlight these types of errors as failures (since the mochitest harness does load everything). I think it would be pretty obvious whether a failure came from a test or from the platform. If we did this, we'd have to be careful that the JavaScript errors aren't being caused by weird test configurations, though.

We could also create a new job or something, but I don't know if this would work as well.

Jgriffin, any thoughts?
Component: Mochitest → General
Summary: B2G startup test to pick up and arbitrary js exceptions → B2G startup test to pick up any arbitrary js exceptions
(In reply to Andrew Halberstadt [:ahal] from comment #1)
> I don't think this is going to work very well with a mochitest; a test
> is not capable of restarting the b2g process. What we could do instead
> is have mozharness highlight these types of errors as failures (since
> the mochitest harness does load everything). I think it would be pretty
> obvious whether a failure came from a test or from the platform. If we
> did this, we'd have to be careful that the JavaScript errors aren't
> being caused by weird test configurations, though.

Do we have any existing run logs we could examine to see how noisy this would be? I do think it's the right approach, but it might not be as easily xfailed/disabled as a new test would be. I'm afraid if we introduce behavior that immediately calls out red flags that aren't easily findable, people will just back out the behavior.

Would it help to have a single test result for "any exceptions" in the results (even if it's synthesized by the harness after examining its own session)? That would let us mark it as a known-fail, right?
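
A minimal sketch of what such a synthesized "any exceptions" result could look like, assuming the harness collects the exceptions it saw into a hypothetical session_errors list; the TEST-* status lines follow TBPL-era log conventions, but the function and flag names are illustrative, not an existing harness API:

    # Hypothetical post-run hook: emit one synthetic result summarizing any
    # stray JS exceptions seen during the session, so the check can be
    # marked known-fail while existing errors are burned down.
    EXPECTED_FAIL = True  # flip to False once the tree is clean

    def report_session_exceptions(session_errors):
        name = "session-js-exceptions"
        if not session_errors:
            print("TEST-PASS | %s | no stray JS exceptions" % name)
        elif EXPECTED_FAIL:
            print("TEST-KNOWN-FAIL | %s | %d JS exceptions (known)"
                  % (name, len(session_errors)))
        else:
            print("TEST-UNEXPECTED-FAIL | %s | %d JS exceptions"
                  % (name, len(session_errors)))
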
(In reply to Geo Mealer [:geo] from comment #2)
> Do we have any existing run logs we could examine to see how noisy this
> would be? I do think it's the right approach, but it might not be as easily
> xfailed/disabled as a new test would be. I'm afraid if we introduce behavior
> that immediately calls out red flags that aren't easily findable, people
> will just back out the behavior.

Yeah, we'd have to enable it on a project branch like Cedar and work to get the existing failures down to zero before pushing it to other branches. This is how enabling any new suite of tests on tbpl usually goes though.
 
> Would it help to have a single test result for "any exceptions" in the
> results (even if it's synthesized by the harness after examining its own
> session)? That would let us mark it as a known-fail, right?

Yeah, I think that's the best way forward. We already do something similar for leaks. We could just have the harness scan the logcat (since I don't think these js exceptions generally show up in the regular mochitest logs) and print an "N javascript errors found" line after the summary, which would turn the job orange. Mozrunner would probably be a good place for this logic to live.
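
A rough sketch of the kind of logcat scan described here, assuming the logcat output has already been captured as text; the regex and the summary-line format are illustrative, not actual mozrunner code:

    import re

    # Gecko-reported JS errors in logcat typically carry a "JavaScript
    # error" marker; the exact pattern would need tuning against real logs.
    JS_ERROR_RE = re.compile(r"JavaScript [Ee]rror")

    def count_js_errors(logcat_text):
        """Count JS error lines in a captured logcat dump."""
        return sum(1 for line in logcat_text.splitlines()
                   if JS_ERROR_RE.search(line))

    def print_js_error_summary(logcat_text):
        n = count_js_errors(logcat_text)
        if n:
            # An unexpected-failure line after the suite summary is what
            # would turn the TBPL job orange.
            print("TEST-UNEXPECTED-FAIL | logcat | "
                  "%d javascript errors found" % n)
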
Note that I've only seen some of these exceptions on device/emulator, due to the extra time it takes to run automation against those targets. In fact, running the Gaia UI tests against the emulator throws these JavaScript exceptions in the same way the devices do, so getting bug 916368 fixed will likely take care of most of these. I do like the idea of parsing a log file, though, for the extra coverage.

If we were to do this anywhere, gaia-ui-tests probably makes more sense than mochitest, since we muck with the profile for mochitest, which could produce some false flags.

Are we interested in reporting exceptions anywhere, or just startup exceptions?  If the latter, we may want to add a startup test job, mostly to avoid having people associate the reported exceptions with gaia-ui-tests (or any other specific harness).
(In reply to Jonathan Griffin (:jgriffin) from comment #5)
> If we were to do this anywhere, gaia-ui-tests probably makes more sense than
> mochitest, since we muck with the profile for mochitest, which could produce
> some false flags.
> 
> Are we interested in reporting exceptions anywhere, or just startup
> exceptions?  If the latter, we may want to add a startup test job, mostly to
> avoid having people associate the reported exceptions with gaia-ui-tests (or
> any other specific harness).

I meant just startup exceptions here. I'm trying to move gaia-ui-tests away from being susceptible to these errors (because they make our tests look intermittent when they're not), but if I'm successful there might not be anybody left to raise these errors promptly.

As it happens, Dietrich and Kevin were talking about this yesterday too, so it's something we collectively want to do at both the dev and QA level.

The gaiatest framework can definitely do it, but the reporting doesn't really fit our model, so I agree it might be better as a separate job.

(In reply to Jonathan Griffin (:jgriffin) from comment #5)
> If we were to do this anywhere, gaia-ui-tests probably makes more sense than
> mochitest, since we muck with the profile for mochitest, which could produce
> some false flags.

That's true, though I'd argue that if we support those prefs, our code should work no matter what they are set to. It would be more or less impossible for developers to test this without using try, though, so I agree with you that gaia-ui-tests might be better.

Zac, you were worried that tacking this onto Gu would increase negative perception of it, but this proposal would work similarly to the leak tests (i.e. instead of a test failure, it would be a footnote after the summary). Would that be acceptable? If not, we could look into a new job or putting it somewhere else.

> Are we interested in reporting exceptions anywhere, or just startup
> exceptions?  If the latter, we may want to add a startup test job, mostly to
> avoid having people associate the reported exceptions with gaia-ui-tests (or
> any other specific harness).

Personally I think cracking down on these everywhere wouldn't be a bad thing and could help prevent issues down the road, but Zac or Dietrich should probably answer that question.

FYI, there is bug 920191, where it seems that a TBPL parser is also reporting on JS exceptions.

FYI, I opened this up to broader discussion on dev.b2g.

The feedback on dev.b2g was overwhelmingly positive. The easiest solution is to use mozharness, since it already has regex-based log parsing built in, but Zac (I think?) pointed out that this would make it difficult for developers to test it out locally. So I'm thinking the b2g subclass of mozrunner might be better, as it also underpins most b2g-related harnesses.

Jgriffin also suggested that we use an in-tree list of patterns to match. That way we can start with an empty list, have it turned on right away, and then add specific errors to it as they get fixed. Once all errors are gone we can replace all the existing entries with a generic "JavaScript Exception" clause.
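
A sketch of how that in-tree pattern list could be wired up; the names and layout here are hypothetical:

    import re

    # In-tree list of known-bad signatures. It starts empty so the check
    # can be enabled right away; as individual errors are fixed, their
    # signatures are added so any regression turns the job orange.
    FAILURE_PATTERNS = [
        # re.compile(r"NS_ERROR_FAILURE.*settings\.js"),  # illustrative entry
    ]

    # End state once all existing errors are gone: one generic clause.
    # FAILURE_PATTERNS = [re.compile(r"JavaScript [Ee]rror")]

    def matches_known_failure(line):
        """True if a logcat line matches any tracked failure signature."""
        return any(p.search(line) for p in FAILURE_PATTERNS)
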
(In reply to Martijn Wargers [:mwargers] (QA) from comment #8)
> FYI, there is bug 920191, where it seems that a TBPL parser is also
> reporting on JS exceptions.

That bug seems to be about JS exceptions created by the tests themselves, which appear in the actual test logs. This bug is about scanning the logcat for errors unrelated to the tests, as a sanity sweep.

Updating title to more accurately reflect the proposal and discussion that came out of the dev.b2g thread.
Summary: B2G startup test to pick up any arbitrary js exceptions → Check for arbitrary js exceptions during B2G test runs in tbpl
Status: NEW → RESOLVED
Closed: 8 years ago
Resolution: --- → WONTFIX