Closed Bug 553300 — Opened 15 years ago, Closed 14 years ago

Get release builders working with schedulerdb

Categories

(Release Engineering :: General, defect, P4)

x86
Linux
defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: catlee, Assigned: salbiz)

References

Details

(Whiteboard: [buildmasters][automation])

Attachments

(5 files, 17 obsolete files)

2.41 KB, patch
catlee
: review+
Details | Diff | Splinter Review
32.64 KB, patch
catlee
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
59.87 KB, patch
bhearsum
: review+
catlee
: feedback+
bhearsum
: feedback+
bhearsum
: checked-in+
Details | Diff | Splinter Review
8.97 KB, patch
catlee
: review+
bhearsum
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
13.64 KB, patch
catlee
: review+
bhearsum
: review+
bhearsum
: checked-in+
Details | Diff | Splinter Review
Having multiple sets of release builders/schedulers for different branches won't work with schedulerdb, since the builders and schedulers for different branches all use the same name. A quick fix is to have the scheduler and builder names include the branch name so we get uniqueness. A better solution is to have the release builders be more data-driven...sort of related to bug 502876 and bug 505318.
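The quick fix described above can be sketched as follows. This is a hypothetical illustration, not the actual buildbotcustom code; the helper and branch names are made up:

```python
# Hypothetical sketch of the quick fix: include the branch name in each
# builder/scheduler name so names stay unique in the shared schedulerdb.

def unique_name(base, branch):
    """Make a builder/scheduler name unique by appending the branch name."""
    return '%s_%s' % (base, branch)

# With two branches, the same base name no longer collides:
builders = [unique_name('linux_release_build', b)
            for b in ('mozilla-1.9.2', 'mozilla-central')]
```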
Priority: -- → P5
Whiteboard: [buildmasters][automation]
Taking to test out a hypothetical fix in my head (releaseBuilderSuffix)
Assignee: nobody → aki
Attached patch releaseBuilder*Prefix* patch (obsolete) — Splinter Review
Ok, went the releaseBuilderPrefix route so they sort together. Changing the directories may involve some clobberer/purge_builds changes as well. This checkconfigs ok; submitting for feedback before I go through a long test cycle.
Attachment #436589 - Flags: feedback?(catlee)
Attachment #436589 - Flags: feedback?(bhearsum)
Comment on attachment 436589 [details] [diff] [review] releaseBuilder*Prefix* patch Most of the details here seem ok. A couple of things: I'm going back and forth on whether 3.6.x makes sense or using the full, exact version. What do you guys think about that? If we go with the former, I don't think it belongs in the configs -- it's easy enough to calculate in release_master.py.
The reason I'd put it in the configs is so we can have things like Lorentz with special names. Also to put mobile in the mobile name, if that's desired. But not really feeling strongly either way.
(In reply to comment #4)
> The reason I'd put it in the configs is so we can have things like Lorentz with special names. Also to put mobile in the mobile name, if that's desired. But not really feeling strongly either way.

Things like Lorentz also have special version numbers, so it's not necessary because of them IMHO. If names are preferred to version numbers, maybe we could use the last part of sourceRepoPath as the prefix. As for Mobile, those builders are already prefixed with "mobile", so my suggestion of using 'version' would work without the possibility of mobile/desktop colliding. Either way, I don't think there's any reason to introduce yet another variable here -- there are already at least two or three pieces of information in the configs that could be used for this.
Assignee: aki → nobody
Comment on attachment 436589 [details] [diff] [review] releaseBuilder*Prefix* patch So, I'm marking this feedback- because I don't see any reason why we need to introduce yet another config variable. Any approach that creates the prefix based on existing data would be fine with me, in principle.
Attachment #436589 - Flags: feedback?(bhearsum) → feedback-
Attachment #436589 - Flags: feedback?(catlee) → feedback+
Comment on attachment 436589 [details] [diff] [review] releaseBuilder*Prefix* patch I like using the 'releaseBuilderPrefix', but I'd have it use some other variable to start with. maybe releaseBuilderPrefix = 'release_%s' % sourceRepoName is a good starting point? The release clobberer code will have to be updated as well.
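catlee's suggestion, combined with the earlier idea of reusing the last part of sourceRepoPath, might look like the sketch below. Deriving sourceRepoName from sourceRepoPath is an assumption for illustration, not the reviewed patch:

```python
# Sketch of deriving releaseBuilderPrefix from existing config data rather
# than introducing a new variable, per the review comments above.

def release_builder_prefix(sourceRepoPath):
    # Assumed: the last path component of sourceRepoPath is the repo name.
    sourceRepoName = sourceRepoPath.rstrip('/').split('/')[-1]
    return 'release_%s' % sourceRepoName
```

For example, `release_builder_prefix('releases/mozilla-1.9.2')` yields `'release_mozilla-1.9.2'`.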
Summary: Multiple release builders/schedulers won't work with schedulerdb → Get release builders working with schedulerdb
Blocks: 528475
Assignee: nobody → catlee
Priority: P5 → P3
Blocks: 567035
Priority: P3 → P4
Adding release_config skeletons and release portions of master.cfg to the mozilla bucket in buildbot-configs.
Attachment #479473 - Flags: review?(catlee)
Necessary changes to make the buildbot-080 branch work with release jobs. In order to make this work, the following changesets from the default branch were rolled in: 889, 907, 910, 940, 941.
Attachment #479479 - Flags: review?(catlee)
Applied patch 6e717ac10e28059b0315 from upstream to fix an issue with the sourcestamp revision being dropped in buildbot-080.
Attachment #479481 - Flags: review?(catlee)
Comment on attachment 479479 [details] [diff] [review] [tested]buildbotcustom-release-080 Looks good to me. Needs some extra eyes though.
Attachment #479479 - Flags: review?(catlee)
Attachment #479479 - Flags: review?(bhearsum)
Attachment #479479 - Flags: review+
Attachment #479481 - Flags: review?(catlee) → review+
Comment on attachment 479473 [details] [diff] [review] [tested]buildbot-configs_release-080 We need to get rid of the "if True:" clause. I think putting some variable in master_localconfig.py which lets us turn the release stuff on/off per master would be the best way to go right now.
Attachment #479473 - Flags: review?(catlee) → review-
Attached patch buildbot-configs_release-080 (obsolete) — Splinter Review
Something like this? Seems a bit clunky to put these in master_localconfig. Perhaps it might be more intuitive to put them in release_config (but not in the releaseConfig dict), since 'staging' doesn't seem release-specific.
Attachment #479473 - Attachment is obsolete: true
Attachment #479895 - Flags: feedback?(catlee)
Comment on attachment 479895 [details] [diff] [review] buildbot-configs_release-080 Yeah, that's better... We can't put it in release_config, since we don't know if we _should_ be loading release_config yet. Maybe just change the names to make it clearer that these are 'special' variables? ENABLE_RELEASES and ENABLE_STAGING_RELEASES (or maybe ENABLE_PRODUCTION_RELEASES instead?). Initially all the localconfigs, except maybe the staging ones, should have releases disabled.
Attachment #479895 - Flags: feedback?(catlee) → feedback+
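The gating being discussed might look roughly like this in master.cfg. The function and flag handling below are illustrative assumptions that follow the comment above, not the checked-in code:

```python
# Hypothetical sketch: master_localconfig.py defines ENABLE_RELEASES and
# ENABLE_STAGING_RELEASES, and master.cfg consults them before loading any
# release schedulers/builders.

def should_load_releases(localconfig, staging_master):
    """Return True if this master should set up release builders."""
    if staging_master:
        return localconfig.get('ENABLE_STAGING_RELEASES', False)
    return localconfig.get('ENABLE_RELEASES', False)

# Initially every localconfig (except maybe the staging ones) ships with
# both flags off, so releases stay disabled by default.
```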
Comment on attachment 479479 [details] [diff] [review] [tested]buildbotcustom-release-080 Tagging is going to change really soon, when I land bug 508896, but I'll take care of porting that to 0.8.0. Looks good though!
Attachment #479479 - Flags: review?(bhearsum) → review+
Comment on attachment 479895 [details] [diff] [review] buildbot-configs_release-080 I see staging=True in a bunch of production localconfigs, definitely don't want that! Just to make sure I understand correctly: the schedulers and change sources for a release will live on builder masters -- not scheduler masters, so that the release stays on a single master? Why do the scheduler master localconfigs need to be touched at all? It doesn't look like the scheduler masters are going to be aware of releases at all.
Attachment #479895 - Attachment is obsolete: true
Attachment #480168 - Flags: review?(catlee)
Attachment #480168 - Attachment is patch: true
Attachment #480168 - Attachment mime type: application/octet-stream → text/plain
(In reply to comment #17)
> I see staging=True in a bunch of production localconfig's, definitely don't want that!

Yup, removed. ENABLE_RELEASES and ENABLE_STAGING_RELEASES, which control that functionality, default to false in all localconfigs now. I assume we don't want release builders to be on by default in staging?

> Just to make sure I understand correctly: the schedulers and change sources for a release will live on builder masters -- not scheduler masters, so that the release stays on a single master?
>
> Why do the scheduler master localconfigs need to be touched at all? It doesn't look like the scheduler masters are going to be aware at all of releases.

I didn't look too closely at that before, since having those declarations in there doesn't do anything by itself, hence can't break anything. You're right though, no need for it to be there if it isn't needed.
Attachment #480168 - Attachment is obsolete: true
Attachment #480624 - Flags: review?(bhearsum)
Attachment #480168 - Flags: review?(catlee)
Blocks: 398494
(In reply to comment #19)
> Created attachment 480624 [details] [diff] [review]
> [tested]buildbot-configs_release-080
>
> yup, removed. ENABLE_RELEASES and ENABLE_STAGING_RELEASES which control that functionality default to false in all localconfigs now. I assume we don't want release builders to be on by default in staging?

Actually, I think we do.

> I didn't look too closely at that before, since having those declarations in there doesn't do anything by themselves, hence can't break anything. You're right though, no need for it to be there if it isn't needed.

Alright, let's remove them, then.
Comment on attachment 480624 [details] [diff] [review] [tested]buildbot-configs_release-080 We definitely want releases enabled on all of the builder masters. Here's what I think you should do:
- drop ENABLE_RELEASES altogether and always add the release stuff
- rename ENABLE_STAGING_RELEASES to 'STAGING'
Does that make sense?
Attachment #480624 - Flags: review?(bhearsum) → review-
Ok, replaced ENABLE_STAGING_RELEASES with just STAGING, added release configs for the production builders, enabled production releases, and enabled staging releases on the staging masters. I didn't drop ENABLE_RELEASES, since it needs to be off on try.
Attachment #480624 - Attachment is obsolete: true
Attachment #481942 - Flags: review?(bhearsum)
Attachment #481942 - Flags: review?(bhearsum) → review+
I'm going to try a staging run today.
Assignee: catlee → salbiz
Attachment #479479 - Attachment is obsolete: true
Attachment #483481 - Flags: review?(bhearsum)
Comment on attachment 481942 [details] [diff] [review] [tested]buildbot-configs_release-080 Hmmm, I just noticed that the release configs aren't named properly. release_config_[spb]m0?.py should all be symlinks that point to the real configs, which should be named according to branch like in mozilla2/. Gotta switch this to r-.
Attachment #481942 - Flags: review+ → review-
Attached patch reshuffle release_configs (obsolete) — Splinter Review
Hmm, good point. I didn't notice the branch-specific configs before, since those symlinks were not built by setup-master. I've added the per-master symlinks to be tracked in hg as well.
Attachment #481942 - Attachment is obsolete: true
Attachment #483569 - Flags: feedback?(catlee)
Attachment #483569 - Flags: feedback?(bhearsum)
Adding a version of the buildbot-configs patch which has releases disabled on production masters by default. Staging releases are still enabled on staging masters in this patch
Attachment #483968 - Flags: review?(catlee)
(Going to start dropping notes from my staging runs.)
- Can't hardcode "'slavenames': branchConfig['platforms']['macosx']['slaves']" for l10n verify anymore. Probably best to change it to macosx + macosx64.
- 'updates' builder doesn't have release prefix.
(In reply to comment #28)
> 'updates' builder doesn't have release prefix

Oops, this was my fault.
Depends on: 605267
Comment on attachment 483569 [details] [diff] [review] reshuffle release_configs Per IRC, we may flip off ENABLE_RELEASES in production for now. The release configs are out of date, too, but I guess we can fix that at check-in time -- I don't think it's worth the effort trying to keep them up to date before that.
Attachment #483569 - Flags: feedback?(bhearsum) → feedback+
Comment on attachment 483481 [details] [diff] [review] fixing bitrot in buildbotcustom patch Need to fix the slave lists for l10n verify and partner repacks -- I think both should be using macosx + macosx64 slaves.
Attachment #483481 - Flags: review?(bhearsum) → review-
Attachment #483968 - Flags: review?(catlee) → review+
Fixed: l10n_verify and partner-repacks now get slavenames for macosx + macosx64.
Attachment #483481 - Attachment is obsolete: true
Attachment #484498 - Flags: feedback?(catlee)
Attachment #484498 - Flags: feedback?(bhearsum)
Comment on attachment 483569 [details] [diff] [review] reshuffle release_configs Yeah, looks good as long as ENABLE_RELEASES are off in production for now.
Attachment #483569 - Flags: feedback?(catlee) → feedback+
Attachment #484498 - Flags: review+
Attachment #484498 - Flags: feedback?(bhearsum)
Attachment #484498 - Flags: feedback+
Attachment #483569 - Attachment is obsolete: true
Attachment #436589 - Attachment is obsolete: true
Comment on attachment 484498 [details] [diff] [review] extend slave lists to include 64-bit slaves for l10n-verify and partner-repack changeset: 1088:1501598455c8 toot toot, here comes the 0.8.0 release train!
Attachment #484498 - Flags: checked-in+
Comment on attachment 483968 [details] [diff] [review] disable releases on production masters for now changeset: 3198:420ddf8f7adc
Attachment #483968 - Flags: checked-in+
Attachment #484498 - Flags: feedback?(catlee) → feedback+
Switch to using reserved slaves for release-builds, reading the number of slaves to reserve from `reserved_slaves_${master_name}`.
Attachment #485774 - Flags: feedback?(catlee)
Attachment #485774 - Flags: feedback?(bhearsum)
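The file-driven reservation might be read with a minimal helper like this sketch. The single-integer file format and the parsing are assumptions based on the comment above; returning 0 for a missing or unset file matches the behaviour described later in this bug:

```python
# Sketch of reading the reserved-slave count from a per-master file such as
# reserved_slaves_bm01 (name pattern from the comment above; contents assumed
# to be a single integer).
import os

def read_reserved(filename):
    """Return the number of slaves to reserve, or 0 if unavailable."""
    if not filename or not os.path.exists(filename):
        return 0
    try:
        with open(filename) as f:
            return int(f.read().strip())
    except ValueError:
        return 0
```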
Attachment #485775 - Flags: feedback?(catlee)
Attachment #485775 - Flags: feedback?(bhearsum)
Moving watchReservedFile() call into the config file, based on IRC conversation with catlee.
Attachment #485775 - Attachment is obsolete: true
Attachment #485797 - Flags: feedback?(catlee)
Attachment #485797 - Flags: feedback?(bhearsum)
Attachment #485775 - Flags: feedback?(catlee)
Attachment #485775 - Flags: feedback?(bhearsum)
Attachment #485774 - Attachment is obsolete: true
Attachment #485798 - Flags: feedback?(catlee)
Attachment #485798 - Flags: feedback?(bhearsum)
Attachment #485774 - Flags: feedback?(catlee)
Attachment #485774 - Flags: feedback?(bhearsum)
Comment on attachment 485797 [details] [diff] [review] set up reservedSlaves files for release masters I think you need to protect the watchReservedFile call behind an 'if RESERVED_SLAVES:'. Looks fine to me otherwise.
Attachment #485797 - Flags: feedback?(bhearsum) → feedback-
Attachment #485798 - Flags: feedback?(bhearsum) → feedback+
Good point. I was letting this slide since _readReservedFile just returns 0 if the file doesn't exist, but it's a waste of time to call it every minute if the filename isn't even set.
Attachment #485797 - Attachment is obsolete: true
Attachment #485890 - Flags: feedback?(catlee)
Attachment #485890 - Flags: feedback?(bhearsum)
Attachment #485797 - Flags: feedback?(catlee)
Attachment #485798 - Flags: feedback?(catlee) → feedback+
Attachment #485890 - Flags: feedback?(catlee) → feedback+
I think watchReservedFile may need some work to be reconfig-safe (or testing at least)
Attachment #485890 - Flags: feedback?(bhearsum) → feedback+
The global LoopingCall was getting overwritten on reconfig, but the reactor still had a reference to it and kept it running. This version of the patch forgoes a looping call and opportunistically re-reads the file when _nextReservedSlaves is called, if the last read was more than 60 seconds ago.
Attachment #485798 - Attachment is obsolete: true
Attachment #486184 - Flags: review?(catlee)
Attachment #486184 - Flags: review?(bhearsum)
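The reconfig-safe approach described above can be sketched like this. The class and its shape are illustrative assumptions; only the method names (_readReservedFile, _nextReservedSlaves-style lazy read) and the 60-second window come from the comments in this bug:

```python
# Sketch of the reconfig-safe pattern: no global LoopingCall (which survives
# reconfig via the reactor and ends up duplicated), just a lazy re-read of
# the reserved-slaves file when the count is next needed and the cached
# value is older than max_age seconds.
import os
import time

class ReservedSlaveCount(object):
    def __init__(self, filename, max_age=60):
        self.filename = filename
        self.max_age = max_age
        self._count = 0
        self._last_read = 0

    def _readReservedFile(self):
        # Guard against filename being None, per the later review comment.
        if not self.filename or not os.path.exists(self.filename):
            return 0
        try:
            with open(self.filename) as f:
                return int(f.read().strip())
        except ValueError:
            return 0

    def get(self, now=None):
        """Refresh the cached count if it is stale, then return it."""
        if now is None:
            now = time.time()
        if now - self._last_read > self.max_age:
            self._count = self._readReservedFile()
            self._last_read = now
        return self._count
```

Because the state lives on an object rebuilt from the config, a reconfig simply replaces it; nothing keeps running in the background.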
Attachment #485890 - Attachment is obsolete: true
Attachment #486185 - Flags: review?(catlee)
Attachment #486185 - Flags: review?(bhearsum)
Comment on attachment 486184 [details] [diff] [review] make readReservedSlaves reconfig-safe Where are all the changes to ftppoller from?
Attachment #486184 - Flags: review?(catlee) → review-
Comment on attachment 486185 [details] [diff] [review] use reconfig-safe method of polling reserved-slaves file unrelated changes in this patch too?
Attachment #486185 - Flags: review?(catlee) → review-
sorry for the error, previous patch got mangled with other work
Attachment #486185 - Attachment is obsolete: true
Attachment #486344 - Flags: review?(catlee)
Attachment #486344 - Flags: review?(bhearsum)
Attachment #486185 - Flags: review?(bhearsum)
sorry, accidentally mangled this patch with other work
Attachment #486184 - Attachment is obsolete: true
Attachment #486347 - Flags: review?(catlee)
Attachment #486347 - Flags: review?(bhearsum)
Attachment #486184 - Flags: review?(bhearsum)
Comment on attachment 486347 [details] [diff] [review] un-mangle buildbotcustom patch to add reserved slaves I think we need some code to handle the case where _reservedFileName is None. Inside _readReservedFile should work.
Attachment #486347 - Flags: review?(catlee) → review-
Attachment #486344 - Flags: review?(catlee) → review+
Comment on attachment 486347 [details] [diff] [review] un-mangle buildbotcustom patch to add reserved slaves Waiting on new patch, removing review.
Attachment #486347 - Flags: review?(bhearsum)
Attachment #486344 - Flags: review?(bhearsum) → review+
Right, I mistakenly assumed the os.path.exists() call would catch that case, and that the guard that sets RESERVED_SLAVES would catch it otherwise.
Attachment #486347 - Attachment is obsolete: true
Attachment #486478 - Flags: review?(catlee)
Attachment #486478 - Flags: review?(bhearsum)
Attachment #486478 - Flags: review?(bhearsum) → review+
Attachment #486478 - Flags: review?(catlee) → review+
Blocks: 609887
Blocks: 505318
Landing the reserved-slaves patch should only require a reconfig.
Flags: needs-reconfig?
Comment on attachment 486478 [details] [diff] [review] guard _readReservedFile for case where a Nonetype is passed changeset: 1197:996956fdbe83
Attachment #486478 - Flags: checked-in+
Comment on attachment 486344 [details] [diff] [review] un-mangle configs patch changeset: 3333:2bc23626e94e
Attachment #486344 - Flags: checked-in+
All done here. Turning on in production will be tracked elsewhere.
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering