Bug 978091 (Open) · Opened 7 years ago · Updated 7 years ago
Would like dev.tree-management to get alerted when global test runtime decreases/increases significantly
Obviously in some cases this is expected, but it'd be good to keep track of, I think, especially as we have issues with some of the tests (cough mochitest-browser cough) taking as long as they do.
First, what issues will this expose? What problems are you trying to address?

I don't have data off-hand, but I suspect this could be a little challenging because we often run automation jobs on different hardware configurations, and execution times can vary significantly. Any solution would need to take the hardware configuration into account when comparing runtimes. Catlee should be able to provide more context.

Playing devil's advocate a bit: do you think our time is better spent looking at high-level differences or implementing lower-level (per-test) results? Either way, it seems like we should be able to produce a JSON blob describing a test suite's execution and have that uploaded via blobber. jgriffin can answer whether that's on ATeam's roadmap. I /think/ parts of it are (via structured logging).

As an aside, I have a dream that one day we'll have a "compare builds" web site/service that gives a breakdown of differences in files built, compiler flags used, tests executed, significantly changed test times, etc. Every time the build system changes drastically, I worry that we accidentally disable tests or some such. Having machine-readable output for everything that happens in automation (not log parsing) will enable this "compare builds" functionality.
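To make the "JSON blob describing a test suite's execution" idea concrete, here is a minimal sketch of what a harness might emit. The schema (field names, units, the `build_suite_summary` helper) is entirely hypothetical — this bug does not define the actual blob format that would go through blobber.

```python
import json

def build_suite_summary(suite_name, test_durations):
    """Build a machine-readable summary of one test suite run.

    Illustrative schema only; the real blobber payload is not
    specified in this bug. `test_durations` maps test names to
    runtimes in seconds.
    """
    total = sum(test_durations.values())
    return {
        "suite": suite_name,
        "test_count": len(test_durations),
        "total_runtime_s": round(total, 3),
        "tests": [
            {"name": name, "runtime_s": round(secs, 3)}
            for name, secs in sorted(test_durations.items())
        ],
    }

summary = build_suite_summary(
    "mochitest-browser-chrome",
    {"browser_test_a.js": 1.25, "browser_test_b.js": 4.5},
)
blob = json.dumps(summary, indent=2)  # what a harness might hand to blobber
```

Per-test durations in the blob would also serve the lower-level (per-test) comparison mentioned above, not just the suite-level total.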
We've been talking about adding some tooling to allow us to track added/enabled/disabled tests, since we're also concerned that tests are being disabled for other reasons without much visibility. We are slowly adding structured logging to harnesses, which will also help us answer questions related to changes in test durations.

But, I agree with gps that alerting when the durations of our non-performance test suites change is a difficult problem, and it seems to me of dubious value. There's quite a lot of variability in things like mochitest-browser-chrome already -- those tests are not designed around the idea of execution time consistency -- so making a robust algorithm to detect "significant" execution time deltas would be hard. I'm also wondering what problem you're trying to solve...maybe there's a more practical approach.
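For a sense of what a "robust" delta check could look like despite the variability described above, here is one possible sketch. It is not anything proposed in this bug: it compares a new suite runtime against the median and median absolute deviation (MAD) of recent runs from the same hardware configuration, which tolerates noisy, non-performance-oriented runtimes better than a plain mean/stddev test would. The function name and threshold are made up for illustration.

```python
import statistics

def is_significant_delta(history, new_runtime, threshold=3.0):
    """Return True if `new_runtime` departs sharply from recent history.

    Hypothetical heuristic: a median/MAD outlier test. `history` is a
    list of recent total runtimes (seconds) for the same suite on the
    same hardware configuration; `threshold` is in MAD units.
    """
    med = statistics.median(history)
    # Median absolute deviation: a robust estimate of spread that a
    # few slow, noisy runs won't inflate the way a stddev would.
    mad = statistics.median(abs(x - med) for x in history)
    if mad == 0:
        # Degenerate history (all runs identical); fall back to a
        # simple 20% relative-change check.
        return abs(new_runtime - med) > 0.2 * med
    return abs(new_runtime - med) / mad > threshold
```

For example, with a history of runs around 600 s, a 700 s run would trip the alert while a 610 s run would not. Any real version would still need per-platform history, as noted in comment 2, since runtimes are only comparable within one hardware configuration.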
For me, this kind of monitoring/alerting would help us respond more quickly to bugs like https://bugzilla.mozilla.org/show_bug.cgi?id=864085 and https://bugzilla.mozilla.org/show_bug.cgi?id=865549. Tests are generally run on the same hardware configuration for a given platform, so we should be able to compare numbers between runs.
The direct reason for filing this was bug 978068, where a refactor broke where we set a mostly-test-only attribute, which will have affected how fast tests run. Keeping track of test runtimes would likely have warned us of something like this on the push that broke it. Similarly, other bustage like tests not being included because of a refactor (guilty!) would be caught by something like this. And, as Chris pointed out, it'd help us figure out when tests are written in a way that dramatically increases runtime, which would help keep overall runtime (which is depressingly high for debug mochitest-bc) under control.
I think the preferred end state would be to have alerts on the post-treeherder rewrite of gofaster; email alerts to dev.tree-management on the whole go ignored or are annoying to correlate. That said, I completely understand if it's deemed best not to block on treeherder completion and the gofaster rewrite for now, if a dev.tree-management solution would be quick to implement as a stopgap in the meantime.