Closed Bug 833666 (tbpl-gaia-unit) Opened 9 years ago Closed 7 years ago
[Tracking Bug] Need Gaia Unit tests in buildbot automation
David Scravaglieri, manager of the Gaia team, would like us to get Gaia unit tests running in buildbot automation on b2g desktop builds. This is a distinct set of tests, separate from the Gaia UI tests already in progress in bug 802317. These tests utilize a Marionette JS client written by James Lal.

This will be used as a tracking bug to track overall progress.

Things needed, for which separate bugs will be filed as needed:

1 - Determine a target OS. We should use either linux64 or macosx64; we can't easily use linux32 since those test slaves are already overloaded. Vivien, do you have a preference between linux64 and macosx64? Eventually, do we want both OS's, and if so, can we focus on just one first? Would there be any benefit to running these tests on panda boards over desktop builds?

2 - Once we determine the target OS, begin building b2g desktop builds per-commit.

3 - Gaia unit tests also require a Gaia profile, which is not built as part of the b2g desktop build. We will need to build these profiles, either as part of the b2g desktop build or as a separate step.

4 - Identify the dependencies for these tests, and make sure they're installed on the target test slaves. Last time I looked, these required a particular version of node.js and some specific node packages. James, can you provide a list of requirements?

5 - Write a mozharness script to invoke the tests.

6 - Deal with any test failures. David has agreed to normal sheriffing rules for these once we get them into production; that is, test failures that can be pinned to a particular commit will be grounds for backout by the sheriffs.
Gaia unit tests and Gaia integration tests are separate things. The unit tests have a runner that runs strictly in the browser (cross-platform) and do not use Marionette directly. These tests do not run in a real application context; they are similar to what web developers in general use. The integration tests are what use the Marionette client described above.

We have a python runner for the unit tests. I wrote it as a big hack, but it could be used as the basis of the runner for these: https://github.com/lightsofapollo/js-test-agent/tree/master/python/test_agent

IMO, the biggest win would be to get the unit tests running first. We have a lot of activity here and a decent base of tests across the major apps. The gaia-ui-tests are much further ahead and provide decent integration test coverage, whereas the JS integration tests only provide some (small) tests for email, calendar and system.
Can we run these in the cloud? We're experimenting with 64-bit Ubuntu.

On another note, can this be answered (unless I missed it)? "Would there be any benefit to running these tests on panda boards over desktop builds?"
Thanks for the clarification James. Could the Python runner for these be merged into the Gaia repo? It's a bit complicated using random github repos in buildbot automation. I believe we could run these on the cloud in Ubuntu64 VM's, but we'd have to try them on one of the VM's and verify they work.
Hey Jonathan,

That code can live wherever you like. If you remember, we had a brief discussion about this a few months back, and this runner was the result of a request for a python runner for test-agent. It is far from perfect but is a good basis to build on. The node runner, which is integrated directly into our Makefile, is far more stable/tested (we have engineers using those tools daily).

Let me know how we can best help... There are a few more people across various timezones who are familiar with the unit tests.
James, thanks for the reminder. Both of the test runners have compiled module dependencies (various node modules for the node runner, and twisted for the Python runner). Neither dumps output in a format that's compatible with TBPL. I think modifying the Python runner will be easier and will mesh a little more easily with mozharness, so I suggest we start with that and modify as needed.

Armen, I don't think there's any reason why we couldn't run these on an Ubuntu64 VM. Can I get access to one and verify that I can run them there? Then we can begin working out the details of exactly how to implement all the necessary infrastructure.
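For context on what "compatible with TBPL" means here: TBPL's log parser keys off well-known plain-text result prefixes such as TEST-PASS and TEST-UNEXPECTED-FAIL, so whichever runner we adapt would need to emit one such line per test. A minimal, hypothetical sketch (the `log_result` helper is illustrative only, not part of any of the runners discussed above):

```python
# Hypothetical sketch: emit per-test results in a TBPL-parseable shape.
# TEST-PASS and TEST-UNEXPECTED-FAIL are among the prefixes TBPL's log
# parser recognizes; the helper itself is not real runner code.

def log_result(test_name, passed, message=""):
    """Print one result line in a TBPL-starrable format."""
    if passed:
        print("TEST-PASS | %s" % test_name)
    else:
        print("TEST-UNEXPECTED-FAIL | %s | %s" % (test_name, message))

log_result("apps/calendar/test/unit/app_test.js", True)
log_result("apps/email/test/unit/mail_app_test.js", False, "timed out")
```

Adapting the Python runner would then mostly be a matter of routing its existing pass/fail callbacks through a formatter like this instead of its current output.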
(In reply to Jonathan Griffin (:jgriffin) from comment #5)
> Armen, I don't think there's any reason why we couldn't run these on an
> Ubuntu64 VM. Can I get access to one and verify that I can run them there?
> Then we can begin working out the details of exactly how we can implement
> all the necessary infrastructure.

rail is working on them. If you poke him at the end of the day he will probably have it ready by then.
I've verified that the Ubuntu 64 VM's will work fine as a platform for running these tests. Will file some more specific bugs to get different pieces of this going.
(In reply to Jonathan Griffin (:jgriffin) from comment #0)
> This will be used as a tracking bug to track overall progress.
>
> Things needed, for which separate bugs will be filed as needed:
>
> 1 - Determine a target OS. We should use either linux64 or macosx64; we
> can't easily use linux32 since those test slaves are already overloaded.
> Vivien, do you have a preference between linux64 and macosx64? Eventually,
> do we want both OS's, and if so, can we focus on just one first? Would
> there be any benefit to running these tests on panda boards over desktop
> builds?

We definitely should be using macosx64. My experience with linux64 was extremely suboptimal; there were platform-specific crashes/freezes. I had difficulty getting people to help resolve the issues since most of the devs are using b2g desktop on macosx, and the problems didn't show up there. Macosx64 builds have been very stable and far less prone to crashing.
(In reply to Jonathan Griffin (:jgriffin) from comment #7)
> I've verified that the Ubuntu 64 VM's will work fine as a platform for
> running these tests. Will file some more specific bugs to get different
> pieces of this going.

Hm, if they work consistently, that would be great. I'm concerned about getting support if/when we run into the startup segfaulting I've seen before on my Ubuntu 12.04 machine and the AWS Ubuntu images we were using a while back in our homebrew automation.
We won't be able to scale :(
(In reply to Armen Zambrano G. [:armenzg] from comment #10)
> We won't be able to scale :(

Ah, okay. Since this will be on TBPL, hopefully people will be more easily swayed to fix platform issues in order to keep the tests green.
Depends on: b2g-testing
We have our first run on cedar:

14:59:11 INFO - gaia-unit-tests INFO | Passed: 2332
14:59:11 INFO - gaia-unit-tests INFO | Failed: 12
14:59:11 INFO - gaia-unit-tests INFO | Todo: 0

It's red on TBPL due to an issue with the mozharness script, which I'll fix. I'll file a bug for the 12 failing tests for Gaia developers to fix.
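For a job like this to turn red or green automatically, the harness has to pull those Passed/Failed/Todo counts back out of the log. A small sketch of how that parsing could look (the `parse_summary` helper is hypothetical, not the actual mozharness code; the log lines mirror the excerpt above):

```python
import re

# Hypothetical helper: extract the Passed/Failed/Todo counts from
# gaia-unit-tests log output so automation can treat a nonzero
# failure count as a red job. Not part of the real mozharness script.

def parse_summary(log_text):
    counts = {}
    for key in ("Passed", "Failed", "Todo"):
        m = re.search(r"gaia-unit-tests INFO \| %s: (\d+)" % key, log_text)
        counts[key] = int(m.group(1)) if m else 0
    return counts

log = """\
14:59:11 INFO - gaia-unit-tests INFO | Passed: 2332
14:59:11 INFO - gaia-unit-tests INFO | Failed: 12
14:59:11 INFO - gaia-unit-tests INFO | Todo: 0
"""
summary = parse_summary(log)
# With 12 failures, summary["Failed"] > 0 would mark the job red.
```

Defaulting a missing count to 0 is one design choice; a stricter harness might instead treat an absent summary line as an infrastructure error.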
I think the tests are green enough on b2g-inbound to unhide, so I've done that; I'll unhide on other trees as well after I verify that all the changes that are required to green them have migrated to those trees.
Rehidden per: https://wiki.mozilla.org/Sheriffing/Job_Visibility_Policy#6.29_Outputs_failures_in_a_TBPL-starrable_format

Please can the remaining requirements there be met as well (e.g. documentation; does this work on trychooser, etc.)?
Product: mozilla.org → Release Engineering
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Component: General Automation → General