Bug 637973 (Closed) — Opened 13 years ago, Closed 13 years ago

Re-add mv-moz2-linux-ix-slave22 to production_config.py and get it back in service

Categories

(Release Engineering :: General, defect, P3)

x86
Linux

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: joduinn, Unassigned)

Details

(Whiteboard: [buildduty])

We scavenged a few unused win64 ix machines for use as buildbot masters in SCL1. This means we currently have no w64-ix-slave{01...06}. 

Now that we're getting closer to the win64 ref image being ready to roll out, we should fill out the win64 rack. Zandr and I talked, and propose to:
* take 6 of the least used IX machines in 650castro that are due to be moved soon anyway (exact machines TBD with RelEng buildduty).
* re-image & rename them to w64-ix-slave{01...06}
* move these into the empty slots in rack in SCL1, beside w64-ix-slave07.b.m.o
Whiteboard: [subject to embargo]
These machines are not connected to production systems, so feel free to tackle them. There is no way they could affect any production systems. Sorry if that wasn't explicit.

RC1 may be due tomorrow, so another embargo could start then.
these are from the set of currently-disabled slaves in mtv:

mv-moz2-linux-ix-slave20
mv-moz2-linux-ix-slave21
mv-moz2-linux-ix-slave22
mw32-ix-slave19
mw32-ix-slave20
mw32-ix-slave21
zandr thinks he'll be in scl later today, in which case he will move these as they are already down.
Do I need to do anything special when I turn these back on in scl1, or are they down in such a way that they're "safe"?
they are down in such a way that they're "safe"
The move isn't going to happen today; unscrewing the AFK rat's nest took a long time.
Blocks: 645024
Can't actually do this until bug 645024 is resolved. Reversing the dependency.
No longer blocks: 645024
Depends on: 645024
Assignee: server-ops-releng → zandr
It looks like all six of these have been pressed back into their original purposes: the three w32 boxes are running in try, and the linux boxes are doing linux boxy things.

So, I'm officially aborting this operation.  If we come to need more w64 builders later, we can open up a new bug.

However, of the six hosts, only mv-moz2-linux-ix-slave22 is somehow not in production_config.py. So this goes back into the releng pool to get it re-added and the reconfig done.
Assignee: zandr → nobody
No longer blocks: support-win64
Component: Server Operations: RelEng → Release Engineering
No longer depends on: 645024
QA Contact: zandr → release
Summary: grab 6 ix machines from 650castro and convert them into w64-ix-slave{37-42}.b.m.o in SCL1 → (cancelled) grab 6 ix machines from 650castro and convert them into w64-ix-slave{37-42}.b.m.o in SCL1
Whiteboard: [subject to embargo]
OS: Windows Server 2008 → Linux
Priority: -- → P3
Summary: (cancelled) grab 6 ix machines from 650castro and convert them into w64-ix-slave{37-42}.b.m.o in SCL1 → Re-add mv-moz2-linux-ix-slave22 to production_config.py and get it back in service
Whiteboard: [slaveduty]
production_config.py changes aren't a slaveduty thing
Whiteboard: [slaveduty] → [buildduty]
mv-moz2-linux-ix-slave22 is actually already in the list for the try IX machines:
 TRY_LINUX_IXS  = ['mv-moz2-linux-ix-slave%02i' % x for x in range(22,24)] + \
but in slavealloc was set to Trust: core, Pool: build-sjc1. Given the existing config, history of try jobs, and try ssh keys, I've corrected those fields (try, try-mtv1) and rebooted it. Puppet went fine.
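For reference, the comprehension quoted above expands as follows. This is a minimal standalone sketch: the trailing `+ \` continuation in the quoted line appends further host lists that are not shown here, so only the first term of `TRY_LINUX_IXS` is reproduced.

```python
# First term of TRY_LINUX_IXS as quoted from production_config.py;
# the "+ \" continuation (additional hosts) is omitted here.
TRY_LINUX_IXS = ['mv-moz2-linux-ix-slave%02i' % x for x in range(22, 24)]

print(TRY_LINUX_IXS)
# ['mv-moz2-linux-ix-slave22', 'mv-moz2-linux-ix-slave23']
```

Since `range(22, 24)` yields 22 and 23, slave22 is indeed covered by this entry; only its slavealloc fields needed correcting.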
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Product: mozilla.org → Release Engineering