Implement a try selector that *just does the right thing*
Categories
(Developer Infrastructure :: Try, enhancement)
Tracking
(Not tracked)
People
(Reporter: jrmuizel, Unassigned)
References
Details
Currently there doesn't seem to be an easy way to run all the jobs needed to avoid getting backed out from autoland. The best approximation seems to be to just run all tests on all platforms.
Reporter
Comment 1 • 5 years ago
It seems like selecting "ALL" platforms and "ALL" tests doesn't even include some jobs that run on autoland, like the QuantumRender Wrench jobs.
Comment 2 • 5 years ago
Since you can get backed out from a job that isn't initially run on your push (but runs on a subsequent push, or even on mozilla-central), this bug as worded doesn't have much meaning. The only way to 100% guarantee not being backed out is to run every task.
There's a balance between using resources and the risk of getting backed out that we need to meet. Right now the burden of figuring out where that balance lies falls on developers. I think your ask here is really about shifting that burden from developers to tooling by supporting a simple mach try without arguments that does the right thing.
This has long been a goal of ours, but it hasn't been a priority until recently (well, scheduling in general, not necessarily for try, but it's all related). In the past this ask wasn't very feasible, but we're starting to work on code coverage (and even machine learning) based schedulers that could be repurposed for something like this. I won't give an ETA or anything, but we're steadily getting there.
For some bonus context, the reason it's not worth having mach try use what our current scheduler does is that the current scheduler is extremely naive. It doesn't even factor the files changed into its decision. It turns out to work surprisingly well in the context of autoland, where sheriffs can monitor, backfill, and back out. It would be next to useless for testing isolated changes on try, and would probably cause even more frustration, as developers would be led to think they should be good to go when in reality they just ran a random sampling of tasks.
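To make the contrast concrete, here is a minimal sketch of the two approaches described above: a random-sampling scheduler (roughly the naive behavior) versus one that selects tasks based on the files changed. All task names, path prefixes, and function names here are hypothetical illustrations, not the actual Gecko scheduler.

```python
import random

# Hypothetical mapping from task names to source paths they cover.
# An empty prefix means the task is relevant to every change (e.g. a build).
TASK_RELEVANT_PATHS = {
    "wrench-qr": ["gfx/wr/"],
    "mochitest-gfx": ["gfx/"],
    "xpcshell-netwerk": ["netwerk/"],
    "build-linux64": [""],
}

def naive_schedule(tasks, sample_rate=0.5, seed=0):
    """Random sampling: ignores what was actually changed."""
    rng = random.Random(seed)
    return sorted(t for t in tasks if rng.random() < sample_rate)

def file_based_schedule(tasks, changed_files):
    """Select only tasks whose relevant path prefixes overlap the changed files."""
    selected = set()
    for task in tasks:
        for prefix in TASK_RELEVANT_PATHS.get(task, []):
            if any(f.startswith(prefix) for f in changed_files):
                selected.add(task)
    return sorted(selected)

# A WebRender change selects the Wrench and gfx tasks (plus the build),
# while the naive scheduler picks tasks with no regard to the change.
changed = ["gfx/wr/wrench/src/main.rs"]
print(file_based_schedule(TASK_RELEVANT_PATHS, changed))
print(naive_schedule(TASK_RELEVANT_PATHS))
```

The point of the sketch is that random sampling only works when someone downstream (sheriffs) can backfill the gaps; for an isolated try push, only the file-aware selection gives the developer a meaningful signal.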