Closed Bug 1039140 Opened 6 years ago Closed 2 years ago
Create manifestparser manifests for gaia-integration tests
Part of the reason for running gaia-integration over a python toolchain is to use manifestparser for the manifests. This will allow us to skip tests if a certain condition is true, rather than just disabling the test wholesale.
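As a sketch of what such a manifest could look like (the test names and condition variables here are hypothetical, not files from the tree):

```ini
; Hypothetical gaia-integration manifest in manifestparser's .ini style.
[DEFAULT]
; Keys here would apply to every test in the file.

[test_notifications.js]
; Skip only under a specific condition instead of disabling everywhere.
skip-if = debug && device == "emulator"

[test_homescreen.js]
```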
This adds manifests for all of the tests that are being run by default. This might be missing tests that are currently not run by default, but should be a good start for now. These manifests will likely become outdated quickly (if new tests are added/deleted) so we'll need to update them now and again until they are being used in tbpl/treeherder.
Attachment #8456618 - Flags: review?(mdas)
Comment on attachment 8456618 [details] [review] Adds manifests for currently running Gi tests

I added some comments about new/moved tests in the PR. The notification tests are passing in other builds. Is this an intermittent failure? Or does this change somehow affect how those tests run?
Attachment #8456618 - Flags: review?(mdas) → review+
Thanks for the review! I'll add the missing tests and remove the skip. These manifests won't be used in production at all for quite some time. I mainly want to land them so we can start testing out the python service from bug 994847. Between now and when that goes live in some sort of continuous integration, these manifests will become outdated again, so we'll need to re-update them at some point in the future.
I forgot to resolve this after merging. https://github.com/mozilla-b2g/gaia/commit/30046842cee9eb7816d0f79540fac0c0cea47591
Status: ASSIGNED → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Hey guys - I'm going to needinfo a few people and open up a discussion here. I'm not too happy with the current implementation of the manifests that I saw land in gaia. For most gaia tests we use a manifest blacklist (and we can add more conditions if needed), not a manifest in each app. I don't like having a manifest in each app because:

1 - It's extra maintenance. We shouldn't need to remember to add new tests to this when we create them. Convention > configuration.

2 - We already have tbpl-manifest.json and travis-manifest.json. We should converge on a single way to manage the tests. Arguably the python tests were there first using these .ini files, but I don't want to see us using two different techniques to manage marionette JS tests.

I'm open to options, but the current implementation is pretty poor - we need to converge on a single solution asap. My preference is to set up further blacklists if needed - as we do for travis and tbpl. It's what gaia devs are used to.
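The json blacklist style being referred to looks roughly like this (a hypothetical fragment; the actual schema and paths in tbpl-manifest.json may differ):

```json
{
  "blacklist": [
    "apps/calendar/test/marionette/day_view_test.js",
    "apps/email/test/marionette/notifications_test.js"
  ]
}
```

Everything matching the test pattern runs unless it is listed here, which is the "convention > configuration" property being argued for.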
Hey Kevin, I tried to initiate this discussion in bug https://bugzilla.mozilla.org/show_bug.cgi?id=1034501 I agree with all points and just think we should plan to accommodate devices in the manifest too.
(In reply to Zac C (:zac) from comment #7)
> Hey Kevin, I tried to initiate this discussion in bug
> https://bugzilla.mozilla.org/show_bug.cgi?id=1034501
>
> I agree with all points and just think we should plan to accommodate devices
> in the manifest too.

I suppose. I guess my thoughts are that it's easier to maintain a few different blacklists (even if there's 4 or 5) instead of creating a manifest which requires we list all tests. Though I could be wrong about that.
Also why are we running the JS tests with a Python runner? We have a JS runner - we should use or augment that for the JS tests.
With several manifests, people will enable/disable a test in one manifest but not the others, and we'll hit similar inconsistencies. Reconciling them then becomes a manual job. One manifest (blacklist or whitelist) would avoid that.
Re comment #10, the tense is wrong - we already suffer this problem!
(In reply to Kevin Grandon :kgrandon from comment #9)
> Also why are we running the JS tests with a Python runner? We have a JS
> runner - we should use or augment that for the JS tests.

Same for unit tests. IIRC the initial goal was to control the test output more tightly. But I also kind of like that the harness team does some of this work instead of us ;)

About the manifests, I share the concern about needing to add all tests to the manifest. This is how it's done in Gecko, and as a newbie I often forgot to add new files or new directories. This is painful.
(In reply to Kevin Grandon :kgrandon from comment #6)
> 1 - It's extra maintenance. We shouldn't need to remember to add new tests
> to this when we create them. Convention > configuration.

The currently checked-in ones aren't being used and I expect them to become outdated. They are there so we can start running gaia-integration tests on device on a jenkins node. For now feel free to ignore them, but eventually we want to transition all gaia-integration tests off the json style manifests and onto the ini style manifests. Yes, you'd need to remember to add tests once this happens.

> 2 - We already have tbpl-manifest.json and travis-manifest.json. We should
> converge on a single way to manage the tests. Arguably the python tests were
> there first using these .ini files, but I don't want to see us using two
> different techniques to manage marionette JS tests.

Agreed. Mochitest, xpcshell, marionette-webapi, gaia-ui-tests, +more are all using .ini. Let's converge :). The json blacklists don't work well because you need a new manifest for every possible build configuration. How do you disable a Gi test on debug emulators only when hardware acceleration is disabled? How do you mark something as KNOWN-FAIL only on b2g-desktop, but not on mulet? How do you mark something as RANDOM so we can still run it and collect data on it, without turning all the jobs orange? These are things we need to be able to do in production that the json syntax isn't expressive enough for.

> I'm open to options, but the current implementation is pretty poor - we need
> to converge on a single solution asap. My preference is to setup further
> blacklists if needed - as we do for travis and tbpl. It's what gaia devs are
> used to.
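The conditional annotations described above could look something like this in the .ini syntax (test names are made up, and the condition variables depend on what values the harness passes to manifestparser, so they are assumptions here):

```ini
[test_gallery.js]
; Skip only on debug emulator builds without hardware acceleration.
skip-if = debug && device == "emulator" && !hwc

[test_browser.js]
; Expected to fail on b2g-desktop but not on mulet; still runs,
; so we keep collecting data without turning the job orange.
fail-if = buildapp == "desktop"
```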
(In reply to Kevin Grandon :kgrandon from comment #9)
> Also why are we running the JS tests with a Python runner? We have a JS
> runner - we should use or augment that for the JS tests.

We are still using the JS runner. What I'm doing is replacing marionette-b2gdesktop-host with a host that sends commands over a socket rather than doing all the work of stopping/starting gecko itself. There is a lot of automation in python for managing gecko processes on devices and emulators that doesn't exist in node. Rather than write and maintain an entirely separate automation stack, we are re-using the same basic libraries that every other test harness uses. This means less maintenance for the gaia team. It also means gaia-integration automatically benefits from automation fixes that span multiple test harnesses.
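To make the socket-host idea concrete, here is a minimal toy sketch of that kind of request/response flow. The command name and the newline-delimited JSON framing are invented for illustration; the real runner_service protocol may look quite different.

```python
import json
import socket
import threading

def serve(conn):
    """Toy python host: read one newline-delimited JSON command,
    pretend to act on it, and acknowledge."""
    request = json.loads(conn.makefile("r").readline())
    reply = {"command": request["command"], "result": "ok"}
    conn.sendall((json.dumps(reply) + "\n").encode())
    conn.close()

def send_command(conn, command):
    """Client side (the JS runner's role): ask the host to manage gecko."""
    conn.sendall((json.dumps({"command": command}) + "\n").encode())
    return json.loads(conn.makefile("r").readline())

client, host = socket.socketpair()
thread = threading.Thread(target=serve, args=(host,))
thread.start()
response = send_command(client, "restart_b2g")  # hypothetical command name
thread.join()
print(response["result"])
```

The point of the design is that the JS runner keeps orchestrating tests while process management stays in python, behind a narrow socket interface.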
(Fwiw we used to use json whitelist/blacklist manifests for b2g mochitests too and it became a massive pain to maintain. All the gecko devs hated it and we spent a long time transitioning them all to .ini)
I hate JSON for this task because you can't put comments into it :)
For anyone interested, here's the documentation on the manifestparser which outlines some of its abilities: http://mozbase.readthedocs.org/en/latest/manifestparser.html#manifestparser-create-and-manage-test-manifests
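For a feel of how condition-based test selection works, here is a stdlib-only toy version of the idea. This is not the manifestparser API: the real library evaluates full skip-if expressions against a dict of build/run-time values, while this sketch only handles a single `key == "value"` comparison.

```python
import configparser

# A toy manifest in manifestparser's .ini style (one section per test).
MANIFEST = """
[test_notifications.js]
skip-if = device == "emulator"

[test_homescreen.js]
"""

def active_tests(manifest_text, values):
    """Return test names whose skip-if condition (if any) is false.

    Toy evaluator: only supports one `key == "value"` comparison,
    unlike the real manifestparser expression language.
    """
    parser = configparser.ConfigParser()
    parser.read_string(manifest_text)
    active = []
    for test in parser.sections():
        condition = parser[test].get("skip-if")
        if condition:
            key, _, expected = condition.partition(" == ")
            if values.get(key) == expected.strip('"'):
                continue  # condition holds -> skip this test
        active.append(test)
    return active

print(active_tests(MANIFEST, {"device": "emulator"}))  # -> ['test_homescreen.js']
print(active_tests(MANIFEST, {"device": "desktop"}))
```

The same manifest thus yields different active test sets per configuration, which is exactly what a static json blacklist cannot express.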
Ok, thanks for the info guys. Unpinging some people - but feel free to chime in if you want. I think I am ok going the manifest route, but I will still propose that we auto-run any tests matching our pattern of test/marionette/*_test.js, and only apply special rules to tests found inside the manifest. I'll file a bug once we are closer to moving over. In the meantime it would be good to file bugs to move tbpl-manifest and travis-manifest over to the .ini files.
(In reply to Kevin Grandon :kgrandon from comment #18)
> In the meantime it would be good to file bugs to move tbpl-manifest and
> travis-manifest over to the .ini files.

Yes, good idea. I filed bug 1045082.

Sorry, I wasn't trying to blindside anyone. The .ini manifests aren't ready for production yet and will only be used by an experimental jenkins job that will be running Gi on device. So for now, there is no expectation that they will stay in sync with the tests being added in-tree. We'll need to go through and update them before they start running in a more production-like environment.
FYI, Bug 1072582 has been raised to remove these .ini files. The runner_service code was parsing manifests and passing that to gaia-marionette, but Bug 1056194 will change that flow, so that the runner_service is only responsible for being a layer between js and python. If we want to move to manifestparser, then we should change what TEST_MANIFEST is expected to be for gaia-marionette. It currently consumes .json files.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
I'm not sure if this is still something wanted or not, but I likely won't be doing the work. Feel free to mark WONTFIX.
Assignee: ahalberstadt → nobody
Firefox OS is not being worked on
Status: REOPENED → RESOLVED
Closed: 6 years ago → 2 years ago
Resolution: --- → WONTFIX