tl;dr: slow processing of l10n nightly builds can lead to strange partial updates being generated.

Scenario:
* we have an existing en-US nightly (A)
* we do an en-US nightly (B); a set of nightly l10n repacks is triggered
* we do another nightly (C); another set of nightly l10n repacks is triggered

We don't allow those repack requests to be merged, so there will always be two jobs for every locale+platform combo. The partial updates we get depend on when the repacks start:
* before C ends: we'll eventually get A->B and B->C
* after C ends: we'll get A->C and C->C

The latter happens because we pull the 'old' complete.mar from nightly/latest-<branch>-l10n/, and that's already C by the time the first repack happens.

catlee suggests we could allow the requests to merge by adjusting http://hg.mozilla.org/build/buildbotcustom/file/027fcab1897f/misc.py#l431, e.g. a special case that allows merges if the builder name is the same and the requests' locale properties are the same too.
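A minimal sketch of what catlee's suggestion might look like as a Buildbot merge policy. This is not the actual buildbotcustom code; the `locale` property name and the helper classes are assumptions based on the comment above, and a real implementation would live in misc.py's existing request-merging logic.

```python
# Hypothetical sketch: a Buildbot-style mergeRequests policy that refuses
# to merge l10n repack requests by default, but allows a merge when both
# requests are on the same builder *and* carry the same 'locale' property.
# (Merging two repacks of the same locale is safe: they produce the same
# output, and it avoids a stale A->C / C->C partial when repacks lag.)
def merge_l10n_requests(builder, req1, req2):
    # The underlying sources must be mergeable at all (branch/revision).
    if not req1.source.canBeMergedWith(req2.source):
        return False
    # Special case from the comment above: same builder is implied by
    # Buildbot handing us two requests for one builder, so only the
    # locale needs checking.
    locale1 = req1.properties.getProperty('locale')
    locale2 = req2.properties.getProperty('locale')
    if locale1 is not None and locale1 == locale2:
        return True
    # Otherwise keep the current behaviour: never merge l10n repacks, so
    # each nightly (B, C) gets its own repack job per locale+platform.
    return False
```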
We could also mitigate this by figuring out why the l10n repack jobs aren't happening promptly.
(In reply to Nick Thomas [:nthomas] from comment #1) > We could also mitigate this by figuring out why the l10n repack jobs aren't > happening promptly. we have limited capacity for these on some platforms. we also don't start AWS instances for l10n jobs.
Today it has also started happening for localized Nightly builds: http://mozmill-ci.blargon7.com/#/update/report/72e15dc943833b8fcba70aeb5103e4e9 This is blocking our Mozmill update tests from working correctly. Adding our automation-blocked whiteboard entry.
Interestingly it also happens for en-US builds: http://mozmill-ci.blargon7.com/#/update/report/5c0f7d6413fad39e026e38664d0337fa
Nick, does this bug as filed also apply to en-US? We hit a problem with the original builds for m-c over the weekend: http://mm-ci-master.qa.scl3.mozilla.com:8080/job/mozilla-central_update/1230/ Here the same build id, 20130309030841, was listed in the pulse message for the given nightly build. No update was available for it, given that this was the latest buildid for March 9th. Most likely this is also fallout from multiple respins of nightly builds throughout that day.
Whiteboard: [automation-blocked] → [qa-automation-blocked]
Product: mozilla.org → Release Engineering
Lowering our priority to wanted, given that we can run the tests. Sometimes we simply get those failures; actually, I'm not sure when it last happened.
Whiteboard: [qa-automation-blocked] → [qa-automation-wanted]
rail, is this still a "thing" with funsize present now?
Yup, Funsize queries Balrog and generates partials to latest builds, so C->C is not possible.
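To illustrate why the Funsize approach avoids the bug: because the "from" build is looked up in Balrog rather than read from a latest-* directory, the generator can compare build IDs and skip degenerate partials. A minimal sketch (not Funsize's actual code; the function and its parameters are hypothetical):

```python
# Hypothetical sketch of the guard that makes a C->C partial impossible
# when the "from" build comes from a Balrog query instead of the
# nightly/latest-<branch>-l10n/ directory: if both sides of the partial
# are the same build, there is nothing to generate.
def plan_partial(from_build_id, to_build_id):
    """Return the (from, to) pair to generate a partial for, or None if
    it would be a no-op partial like C->C."""
    if from_build_id == to_build_id:
        return None  # same build on both sides: skip generation
    return (from_build_id, to_build_id)
```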
Status: NEW → RESOLVED
Last Resolved: 2 years ago
Resolution: --- → WORKSFORME
Component: General Automation → General