Open Bug 1017588 Opened 5 years ago Updated 5 years ago
Create a python tool for dumping the state of test manifests for a given revision
Background: Currently most of our unittests use the same manifest format. The format allows us to mark tests as skipped, expected-fail, random, etc. But there is no easy way to pull the data out of these manifests to examine which tests have changed over time, or to get an idea of which platforms lack coverage in certain areas.

Goal: Create a tool that collects all the data about tests in the manifests and provides a way to filter and drill down on it based on various criteria. This bug is only concerned with "phase 1", but I'll list the other phases to give a general idea of where this is going.

Phase 1: Create a python library that can collect the raw data and print basic information about it.
* It should be usable both from the command line and as a library for another tool.
* It should accept a <branch> and a <revision> parameter. Branch can be either a path to a local repository (in which case the tool needs to check out the proper revision) or the name of a branch, e.g. mozilla-aurora (in which case the tool needs to find and download the proper tests.zip from ftp.m.o; this latter method depends on Armen's work in bug 989583).
* It should be possible to specify the test suite, platform, and subdirectories the user cares about.
* Raw results should be JSON, with top-level keys denoting platform, then test suite, then subdirectory. There should also be top-level metadata like branch and revision.
* It should provide some default formatters that make the data readable when printing to stdout, including a "summary" style formatter.

Phase 2:
* Add some methods to analyze the data, e.g. sort subdirectories by highest percentage of skipped tests; find subdirectories where tests are enabled on one platform but mostly disabled on another; etc.
* Add support for reftest-style manifests.
* Create a cron job/webservice that either runs this periodically or listens to pulse for new builds, and publishes the results online.
Phase 3:
* Add the ability to subscribe to e-mail notifications when tests are enabled/disabled for given suites and subdirectories.
* Add ways to visualize the data in the webservice. My dream end-state here includes a graph that shows how many tests are enabled or not over time (with the ability to drill down) and a heatmap of all the subdirectories (also with the ability to drill down into further subdirectories).

I think this covers most of what we need. Feel free to add suggestions or provide feedback.

http://mozbase.readthedocs.org/en/latest/manifestdestiny.html#manifestdestiny-create-and-manage-test-manifests
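To make the phase 1 output concrete, here is a minimal sketch of the proposed raw JSON layout: top-level metadata (branch, revision) plus nested platform -> test suite -> subdirectory data. All platform, suite, and subdirectory names, the counts, and the revision string are hypothetical, purely to illustrate the nesting; this is not real tool output.

```python
import json

# Hypothetical sketch of the raw JSON report from phase 1. Every name and
# number below is illustrative, not taken from a real manifest dump.
def make_report(branch, revision):
    return {
        "branch": branch,
        "revision": revision,
        "platforms": {
            "linux64": {
                "mochitest-plain": {
                    "dom/base": {
                        "tests": 120,        # total tests listed in manifests
                        "skipped": 4,        # skip-if matched for this platform
                        "expected-fail": 2,  # fail-if annotations
                    },
                },
            },
        },
    }

print(json.dumps(make_report("mozilla-aurora", "0123456789ab"), indent=2))
```

A "summary" formatter would then just walk this structure and print per-subdirectory totals instead of dumping the JSON.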
After phase 1 is finished, we can create a mach target for running this on the current source tree.
:jgraham pointed out that this tool probably shouldn't know anything about the vcs, and I agree. If someone wants to run this on a specific revision, they can just check out that revision before running it (so ignore the part above about needing to check out the proper revision). I still think it would be useful to be able to download a tests.zip for a given revision, though this should probably use an external module like mozdownload (or something) to do it. At the very least, the tool should be capable of operating on a tests.zip in addition to a full source tree.
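Accepting either a source tree or a tests.zip could look something like the sketch below. Recognizing manifests purely by a ".ini" suffix is an assumption to keep the example short; real trees contain several manifest flavors.

```python
import os
import zipfile

def iter_manifests(path):
    """Yield manifest paths from either an unpacked source tree or a tests.zip.

    Assumes manifests are identifiable by a .ini suffix (a simplification).
    """
    if os.path.isdir(path):
        # Source tree: walk the filesystem.
        for root, _dirs, files in os.walk(path):
            for f in files:
                if f.endswith(".ini"):
                    yield os.path.join(root, f)
    elif zipfile.is_zipfile(path):
        # tests.zip: read member names without extracting.
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith(".ini"):
                    yield name
    else:
        raise ValueError("expected a directory or a zip file: %s" % path)
```

The rest of the tool would only ever see manifest paths, so it wouldn't care which input form was used.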
Note that we produce an all-tests.json during moz.build traversal. This is effectively a snapshot of the active test configuration extracted from parsed manifests. Not sure if that would help.
Eventually, I think we'd like to extend this to non-gecko trees as well, e.g. gaia-ui-tests, so I think having it make as few assumptions as possible about the environment makes sense. We'd probably want the manifest handler to be pluggable, so that we could later add a manifest handler for gaia-unit-tests, for example, which are totally unlike mochitests or reftests. Also nice would be the ability to diff reports made at different times, so we could generate lists of tests added/enabled/disabled between two dates.
(In reply to Jonathan Griffin (:jgriffin) from comment #4)
> Also nice would be the ability to diff reports made at different times, so
> we could generate lists of tests added/enabled/disabled between two dates.

This. Pretty much every time I push a build system patch I worry about accidentally disabling whole swaths of tests. It would be great to have something like Talos Compare that allowed me to "diff" any two automation runs, from the high-level jobs that executed, to the tests that executed, to the output of each test.
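The diffing idea could start as simply as comparing two snapshots keyed by test path. The flat {path: state} shape and the sample paths below are assumptions for illustration; a real report would be nested per platform and suite as described in phase 1.

```python
# Sketch of diffing two report snapshots to list tests added, removed, or
# changed (e.g. enabled -> skipped) between two dates or two pushes.
def diff_reports(old, new):
    added = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(t for t in set(old) & set(new) if old[t] != new[t])
    return {"added": added, "removed": removed, "changed": changed}

old = {"dom/base/test_a.html": "enabled", "dom/base/test_b.html": "enabled"}
new = {"dom/base/test_a.html": "skipped", "dom/base/test_c.html": "enabled"}
print(diff_reports(old, new))
```

Running two snapshots through this before and after landing a build system patch would directly answer the "did I accidentally disable anything?" question.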
(In reply to Gregory Szorc [:gps] from comment #3)
> Note that we produce an all-tests.json during moz.build traversal. This is
> effectively a snapshot of the active test configuration extracted from
> parsed manifests. Not sure if that would help.

That would require an objdir though. I think we want something that can just operate on a srcdir and dump results for all platforms at the same time.
(In reply to Andrew Halberstadt [:ahal] from comment #6)
> (In reply to Gregory Szorc [:gps] from comment #3)
> > Note that we produce an all-tests.json during moz.build traversal. This is
> > effectively a snapshot of the active test configuration extracted from
> > parsed manifests. Not sure if that would help.
>
> That would require an objdir though. I think we want something that can just
> operate on a srcdir and dump results for all platforms at the same time.

Technically, it only requires a configure run + a limited config.status. Getting a dump without involving the build system would require consistently named .ini files. We have just enough one-offs to make that annoying.
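To illustrate the naming problem: a build-system-free walk of the srcdir has to hard-code which file names count as test manifests. The set below is purely illustrative and not exhaustive; the one-offs mentioned above are exactly what such a list would miss.

```python
import os

# Illustrative (not exhaustive) set of conventional manifest file names.
KNOWN_MANIFEST_NAMES = {
    "mochitest.ini",
    "browser.ini",
    "chrome.ini",
    "xpcshell.ini",
    "a11y.ini",
}

def find_manifests(srcdir):
    """Walk a source tree and yield files whose names match the known set.

    Manifests with one-off names would be silently skipped, which is why
    this approach is fragile without build system involvement.
    """
    for root, _dirs, files in os.walk(srcdir):
        for f in files:
            if f in KNOWN_MANIFEST_NAMES:
                yield os.path.join(root, f)
```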
Maybe I'm wrong, but I believe that in a post-bug 989583 world, only manifests that are explicitly included from a root (or intermediary) manifest will be run (like how reftest currently works).
I talked to :ted, and apparently I may be wrong. Though as :gps mentioned, we shouldn't need to build anything to get the info we need. I guess that's good enough.
Also, I thought we were trying to autogenerate the master manifests via moz.build metadata. Also, there has been talk of creating a moz.build execution mode that walked the filesystem for moz.build files and looked at the low-level Python AST to extract things like lists of test manifest paths (as opposed to actually executing moz.build files as Python). Don't hold your breath.
(In reply to Gregory Szorc [:gps] from comment #10)
> Also, I thought we were trying to autogenerate the master manifests via
> moz.build metadata.

Yeah, I think you are right; I discovered I was wrong. Another tweak to the initial proposal Ahmed and I talked about is to drop the concept of "platform" and replace it with a "query" containing a subset of the keys found in mozinfo.json (or the reftest sandbox). The mach target and/or webservice can be the piece responsible for building the equivalent query for the current objdir or for the platforms running on tbpl.
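A rough sketch of the "query" idea: instead of a hard-coded platform name, the caller passes a dict of mozinfo-style keys (os, bits, debug, ...) and each test's skip condition is evaluated against it. Using eval() on a trusted expression is a deliberate simplification here; the real manifests use manifestparser's own expression parser, and the keys and conditions below are illustrative.

```python
# Simplified evaluation of a skip-if condition against a mozinfo-style query.
# eval() stands in for manifestparser's expression language; conditions here
# are assumed to be trusted Python-compatible expressions.
def is_skipped(skip_if, query):
    if not skip_if:
        return False
    # Restrict builtins; resolve names (os, bits, debug, ...) from the query.
    return bool(eval(skip_if, {"__builtins__": {}}, dict(query)))

query = {"os": "linux", "bits": 64, "debug": False}
print(is_skipped("os == 'win'", query))                   # False
print(is_skipped("os == 'linux' and bits == 64", query))  # True
```

With this shape, "dump results for all platforms" just means running the same manifest data through a list of queries, one per platform configuration.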