Open Bug 1445946 Opened 2 years ago Updated 2 years ago
evaluate our leaf-tasks in the release graph to add more checks to our configurations
We've recently had two incidents with release automation tasks that are leaves in the graph, i.e. they have no downstream dependencies checking the results of their actions:
* e.g. in bug 1444391 we accidentally updated the Google Play Store app name to contain "Beta"
* e.g. in bug 1445672 we accidentally updated release aliases from a beta release

To improve things in the future, we should guard these sensitive changes with more checks. Catching configuration errors is particularly difficult because the tasks go green, everything looks as expected, and the non-releng automation that could catch them runs with latency. For this reason, it would be good to assess the leaf tasks in the release graphs, for all products. This includes but is not limited to:
* bouncer aliases
* mark as shipped
* version bump
* pushapk (for Fennec, obviously)

Something like balrog scheduling is also a leaf task, but it has a signoff that depends on a human, so I suppose that's out of scope for now.

To start off with an example, for bouncer aliases we could:
a) add more configuration checks to make sure we don't mix up the wrong aliases
b) run checks before we even make the API calls
c) add a separate task that handles just the verification, depending on the original bouncer aliases task
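As a rough illustration of option (b), a pre-flight check could refuse to update an alias that crosses channels before any API call is made. This is only a sketch: the mapping, function names, and the product-name heuristic are assumptions for illustration, not bouncerscript's actual API.

```python
# Hypothetical sketch: reject alias updates that would cross release channels.
# ALIAS_CHANNELS and the helpers below are illustrative, not real bouncerscript code.

ALIAS_CHANNELS = {
    "firefox-latest": "release",
    "firefox-beta-latest": "beta",
    "firefox-esr-latest": "esr",
}

def product_channel(product_name):
    """Guess the channel from a bouncer product name, e.g. 'Firefox-60.0b4' -> 'beta'."""
    version = product_name.split("-")[-1]
    if "esr" in version.lower():
        return "esr"
    if "b" in version:
        return "beta"
    return "release"

def check_aliases(alias_updates):
    """Raise before any API call if an alias would point at the wrong channel."""
    for alias, product in alias_updates.items():
        expected = ALIAS_CHANNELS.get(alias)
        if expected is None:
            raise ValueError("Unknown alias: {}".format(alias))
        actual = product_channel(product)
        if actual != expected:
            raise ValueError(
                "Alias {} expects channel {}, but {} looks like {}".format(
                    alias, expected, product, actual
                )
            )
```

With a check like this, the bug 1445672 scenario (a beta product behind a release alias) fails loudly in the task instead of silently succeeding.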
I think the issues are twofold:
- Scriptworker scripts should all have checks that run before we perform any changes. These should verify things like "don't ship devedition to firefox" or "don't ship beta to release". We can add to this test suite over time as we discover new issues, since it can never cover everything. But ideally all new *scripts that perform sensitive user-facing tasks (most of them) should have some of these checks before going live. And ideally the rules for these checks should be configurable via puppet, so we can push changes faster than having to bump and repackage the scripts. For bouncer aliases specifically, we could have seen the issue on maple by looking at the task definition. And if bouncerscript had had no-crossing-channels checks on maple, we would have seen busted bouncer tasks there.
- AIUI we don't run anything for pushapk in staging, or we ignore the output. We should have actual tests in staging for all tasks if at all possible, and they should give us real results: no more perma-green or perma-ignorable-red tasks. This is more difficult than, say, balrog, since we don't control the other side. At the very least we should run the above checks on the APK and strings, so we know the upstream tasks and inputs are all sane.
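To make the rules puppet-configurable as suggested above, the forbidden combinations could live in a config file the script reads at startup, so adding a rule doesn't require bumping and repackaging the script. A minimal sketch, assuming a JSON rule format of my own invention:

```python
import json

# Hypothetical sketch: channel-crossing rules loaded from configuration,
# so puppet can push new rules without a new script release.
RULES_EXAMPLE = json.dumps({
    "forbidden_pairs": [
        ["devedition", "firefox"],
        ["beta", "release"],
    ]
})

def load_rules(raw_json):
    """Parse the rules config (in production this would come from a puppet-managed file)."""
    return json.loads(raw_json)

def check_no_channel_crossing(rules, source_channel, target_channel):
    """Raise before going live if this source/target combination is forbidden."""
    for src, dst in rules["forbidden_pairs"]:
        if source_channel == src and target_channel == dst:
            raise ValueError(
                "Refusing to ship {} bits to the {} channel".format(src, dst)
            )
```

The script would call this check at the start of its work, before any external API call, so a bad task definition fails fast.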
Before I forget, bouncer submission should have tests similar to the ones added for bouncer aliases in https://github.com/mozilla-releng/bouncerscript/pull/16.
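For submission, the analogous check would be validating the product name against the version being shipped before submitting. A sketch under assumed naming conventions (the regex and function are illustrative, not the checks from that PR):

```python
import re

# Hypothetical sketch: verify a bouncer product name embeds the expected
# version string before submitting it. Pattern and names are assumptions.
PRODUCT_PATTERN = re.compile(
    r"^(Firefox|Devedition|Fennec)-(\d+\.\d+(?:\.\d+)?(?:b\d+)?(?:esr)?)"
)

def check_product_matches_version(product_name, version):
    """Raise if the product name is unrecognized or embeds a different version."""
    match = PRODUCT_PATTERN.match(product_name)
    if not match:
        raise ValueError("Unrecognized product name: {}".format(product_name))
    if match.group(2) != version:
        raise ValueError(
            "Product {} does not match version {}".format(product_name, version)
        )
```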