Until now, we have relied on manual inspection to notice abnormal variance in results from Talos machines: a human looks at the graph server to see whether the three machines reporting perf results for a given project branch are all within some "gutcheck" percentage deviation of each other. Now, with auto-rebooting, results seem more stable and the variance much reduced.
Is there any way to check for possible variance programmatically?
Apart from being more human-friendly, this might also let us automatically warn people by email/tinderbox of large changes in results.
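As a rough illustration of the kind of check being asked for, here is a minimal sketch of a "gutcheck" test. The function name, threshold, and structure are hypothetical, not taken from the actual graph server code: it simply flags a set of per-machine results when any one of them deviates from the group mean by more than a given percentage.

```python
def exceeds_gutcheck(results, threshold_pct=10.0):
    """Return True if any result deviates from the mean of `results`
    by more than threshold_pct percent.

    `results` is a list of perf numbers, one per machine reporting
    for the same project branch. Names and threshold are illustrative.
    """
    mean = sum(results) / len(results)
    if mean == 0:
        return False
    return any(abs(r - mean) / mean * 100 > threshold_pct for r in results)

# Example: three machines, one noticeably off from the other two
print(exceeds_gutcheck([100.0, 102.0, 130.0]))  # flagged: one machine is well off the mean
print(exceeds_gutcheck([100.0, 101.0, 99.0]))   # not flagged: all within ~1% of the mean
```

A real implementation would presumably pull the numbers from the graph server and send the email/tinderbox notification when the check trips; this sketch only shows the deviation test itself.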
Moving into catlee's queue as he's working on the tool that automates this analysis.
The analysis code is checked into the graph server repo at:
It is currently running via a cronjob on cruncher.build.mozilla.org.