Closed Bug 773057 · Opened 12 years ago · Closed 11 years ago

Point slaves to the new slavealloc URL

Categories: Infrastructure & Operations Graveyard :: CIDuty, task
Tracking: (Not tracked)
Status: RESOLVED FIXED
People: (Reporter: dustin, Assigned: coop)
References
Details
Attachments (1 file, 1 obsolete file)
8.75 KB, patch | catlee: review+ | dustin: checked-in+ | Details | Diff | Splinter Review
Build a new slavealloc server with PuppetAgain. This is mainly a concrete goal that will result in setting up the generic toplevel::server machinery. Slavealloc isn't currently built with puppet (of any sort), so this will be an improvement in reproducibility! The testing of and transition to the new host should be trivially easy.
Reporter
Comment 1•12 years ago
Docs at https://wiki.mozilla.org/ReleaseEngineering/PuppetAgain/Modules/shared
Assignee: server-ops-releng → dustin
Attachment #641523 - Flags: review?(bugspam.Callek)
Reporter
Comment 2•12 years ago
Comment on attachment 641523 [details] [diff] [review]
bug773286.patch

(wrong bug)
Attachment #641523 - Attachment is obsolete: true
Attachment #641523 - Flags: review?(bugspam.Callek)
Reporter
Comment 3•12 years ago
Actually, the smarter solution here is to put this on the new releng cluster. I'll work on that.
No longer blocks: 753071
Reporter
Comment 4•12 years ago
catlee, rail-buildduty said you had lots of free time and were looking for something to do ;)

This adds a base_url config to slavealloc that it uses to construct URLs. This is necessary for this bug, where we want to host slavealloc on https://secure.pub.build.mozilla.org/slavealloc, so it needs to know to produce that URL for references.

I'll have another patch in a bit to also record the LDAP username of people making changes in the UI. That should help diagnose weirdnesses a little.
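The base_url idea can be sketched as follows. This is a hypothetical illustration, not the actual slavealloc code: the class and method names are invented, and only the URL comes from this bug.

```python
# Hypothetical sketch of a base_url setting used to build absolute URLs.
# The class and method names are illustrative; only the URL below is the
# real deployment target discussed in this bug.
from urllib.parse import urljoin

class AllocatorConfig:
    def __init__(self, base_url):
        # Normalize to a trailing slash so urljoin preserves the path
        # prefix ("/slavealloc") instead of replacing it.
        self.base_url = base_url.rstrip("/") + "/"

    def url_for(self, path):
        return urljoin(self.base_url, path.lstrip("/"))

cfg = AllocatorConfig("https://secure.pub.build.mozilla.org/slavealloc")
print(cfg.url_for("api/slaves"))
# → https://secure.pub.build.mozilla.org/slavealloc/api/slaves
```

The trailing-slash normalization matters because `urljoin("https://host/slavealloc", "api")` would drop the `/slavealloc` prefix — exactly the kind of reference bug a hardcoded URL scheme hides until the app moves behind a path prefix.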
Attachment #652747 - Flags: review?(catlee)
Updated•12 years ago
Attachment #652747 - Flags: review?(catlee) → review+
Reporter
Comment 5•12 years ago
Comment on attachment 652747 [details] [diff] [review]
bug773057-slavealloc-base_url.patch

Checked in and pushed on the existing server.
Attachment #652747 - Flags: checked-in+
Reporter
Comment 6•12 years ago
OK, on the new web cluster, we now have:

http://slavealloc.pvt.build.mozilla.org/ - allocator
https://secure.pub.build.mozilla.org/slavealloc - UI + API

Authentication is limited to the RelEngWiki group (c.f. bug 783563). I need to fix up backups and add monitoring. Once that's done, we can turn this bug over to releng to change the URLs in runslave.py.
Reporter
Comment 7•12 years ago
OK, bug 783665 for monitoring, and the backups are in place. This new service has the same backend as the old, so it's interchangeable. Here's what I suggest:

* briefly test out http://slavealloc.pvt.build.mozilla.org/ to make sure it's allocating correctly
* commit puppet & puppetagain changes to point puppety slaves to this host

and, once bug 774354 is complete:

* relengers start using https://secure.pub.build.mozilla.org/slavealloc for the UI
* I make slavealloc.build.mozilla.org an alias for slavealloc.pvt.build.mozilla.org, thereby moving windows slaves over to that service without having to reconfigure them
** add a 301 redirect from http://slavealloc.build.mozilla.org/ to https://secure.pub.build.mozilla.org/slavealloc for ease of awesomebar transitions
* after a week or so, decommission the old slavealloc VM

The priority on this is relatively low, but the effort is also pretty minor. It gets us away from the soft SPOF of the single-hosted slavealloc system in scl1, and puts the application in a context that SREs and ops are very familiar with (webapp), so overall it's a stability and maintainability win.
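The runslave.py side of the migration amounts to swapping the allocator base URL. A minimal sketch, assuming a `gettac`-style endpoint for fetching a slave's buildbot.tac — the hostnames are the real ones from this bug, but the function name, the endpoint path, and the sample slave names are assumptions for illustration:

```python
# Hypothetical sketch: repoint runslave.py from the old single-host
# allocator to the new cluster-hosted one. Hostnames are from this bug;
# gettac_url() and the /gettac/<slave> path are illustrative assumptions.
OLD_ALLOCATOR = "http://slavealloc.build.mozilla.org"
NEW_ALLOCATOR = "http://slavealloc.pvt.build.mozilla.org"

def gettac_url(slave_name, base=NEW_ALLOCATOR):
    # runslave.py asks the allocator for this slave's buildbot.tac
    return "%s/gettac/%s" % (base, slave_name)

print(gettac_url("talos-r4-snow-001"))
# → http://slavealloc.pvt.build.mozilla.org/gettac/talos-r4-snow-001
```

Because the alias step above makes the old hostname resolve to the new service, slaves still configured with `OLD_ALLOCATOR` keep working during the transition without being reconfigured.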
Assignee: dustin → nobody
Component: Server Operations: RelEng → Release Engineering: Machine Management
QA Contact: arich → armenzg
Reporter
Comment 9•11 years ago
Comment 7 is still accurate here (which is impressive in itself, 10 months later!). Bug 774354 has been split out quite a bit in an effort to get traction, but that should be settled next Wednesday. As I mentioned in comment 7, slavealloc.pvt.b.m.o is working already, if you want to test it out and/or start pointing slaves at it.

Coop, I'm assigning this to you on the basis of your being "motivated" in bug 864455 :)
Reporter
Updated•11 years ago
Assignee: nobody → coop
Reporter
Comment 10•11 years ago
This is a nice, easy change that will let us kill slavealloc01.build.scl1.mozilla.com.
Assignee
Comment 11•11 years ago
(In reply to Dustin J. Mitchell [:dustin] from comment #9)
> Coop, I'm assigning this to you on the basis of your being "motivated" in
> bug 864455 :)

I *am* motivated, but despite the fact that this would help me do buildduty more effectively, I likely won't have time to tackle this until next week when I'm off of buildduty.
Assignee
Comment 12•11 years ago
Wading back into this after a hiatus...

Reading comment #7, and indeed poking around https://secure.pub.build.mozilla.org/slavealloc (which slave_health is already using) and http://slavealloc.pvt.build.mozilla.org/, makes me guess that the setup work here is already done.

Some questions for you, Dustin:

1) Is the above accurate, i.e. the scope of work here is just to get slaves pointed to http://slavealloc.pvt.build.mozilla.org/ via runslave.py?
2) If I want to update the currently running instance of slavealloc at https://secure.pub.build.mozilla.org/slavealloc, where do I do that?
3) Is there a staging instance of slavealloc in the new location? If not, can I do that trivially in the new location? I'm familiar with the setup script/process from the old slavealloc server.

If #1 is true, I can start pointing slaves at the new target this week, starting with a couple of lion/snow slaves that are already unmanaged that I would need to update manually anyway.
Flags: needinfo?(dustin)
Reporter
Comment 13•11 years ago
(In reply to Chris Cooper [:coop] from comment #12)
> 1) Is the above accurate, i.e. the scope of work here is just to get slaves
> pointed to http://slavealloc.pvt.build.mozilla.org/ via runslave.py?

Correct.

> 2) If I want to update the currently running instance of slavealloc at
> https://secure.pub.build.mozilla.org/slavealloc, where do I do that?

That's the same kind of webops push as for trychooser updates. Basically, you land the changes and then either run a deploy script yourself, or file a webops bug to do so. For the procedural details I'll point you to webops - I'm not sure what the current recommendation is.

> 3) Is there a staging instance of slavealloc in the new location? If not,
> can I do that trivially in the new location? I'm familiar with the setup
> script/process from the old slavealloc server.

There's not, but now that we've got everything on the new cluster, it's pretty easy to set up staging envs - bug 841345 is the starting point.

> If #1 is true, I can start pointing slaves at the new target this week,
> starting with a couple of lion/snow slaves that are already unmanaged that I
> would need to update manually anyway.

Sounds good!
Flags: needinfo?(dustin)
Reporter
Updated•11 years ago
Summary: build a new slavealloc server → Point slaves to the new slavealloc URL
Updated•11 years ago
Product: mozilla.org → Release Engineering
Reporter
Comment 14•11 years ago
We took care of this in bug 899085.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Updated•6 years ago
Product: Release Engineering → Infrastructure & Operations
Updated•4 years ago
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard