Closed
Bug 566026
Opened 14 years ago
Closed 14 years ago
increase disk space on moz2-linux64-slaveNN VMs
Categories
(mozilla.org Graveyard :: Server Operations, task, P2)
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: bhearsum, Assigned: phong)
References
Details
(Whiteboard: [linux64][buildslaves])
These machines keep running out of disk space when doing builds. Their /builds drive is only 15G; we should bump it up to at least 30G.
Updated•14 years ago
Whiteboard: [linux64][buildslaves]
Updated•14 years ago
Comment 2•14 years ago
Is there any reason we haven't punted this over to Server Ops yet?
Comment 3•14 years ago
NB, the staging slave moz2-linux64-slave07 already has a 39G partition at /builds.
Comment 4•14 years ago
We are going to use these slaves for the upcoming 3.7a5 release (the build should start tomorrow, June 9). Moving to IT (somehow still assigned to releng) and raising the severity to blocker. Known build slaves: moz2-linux64-slave01, moz2-linux64-slave02, moz2-linux64-slave03, moz2-linux64-slave04, moz2-linux64-slave05, moz2-linux64-slave06, moz2-linux64-slave08, moz2-linux64-slave09, moz2-linux64-slave10, moz2-linux64-slave11, moz2-linux64-slave12
Assignee: nobody → server-ops
Severity: normal → blocker
Component: Release Engineering → Server Operations
QA Contact: release → mrz
Updated•14 years ago
Assignee: server-ops → phong
Assignee
Comment 5•14 years ago
Can I just blow away the current 15 GB drive and add a 40 GB drive?
Comment 6•14 years ago
Yes, blow them away. Could you leave one of the slaves (let's say moz2-linux64-slave12) untouched? We'll deal with it after we've tested the rest of the slaves.
Assignee
Comment 7•14 years ago
These are done: moz2-linux64-slave01, moz2-linux64-slave02, moz2-linux64-slave03, moz2-linux64-slave04, moz2-linux64-slave05, moz2-linux64-slave06
Assignee
Comment 8•14 years ago
The rest are done, except for slave12 per comment 6.
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Comment 9•14 years ago
Thanks a lot for the prompt action! All of the slaves (except 09, which complains about its puppet certificate) are up and running. Could you do the same thing with slave12? It was idle, and I shut it down gracefully from production-master03.build.mozilla.org:8010.
Comment 10•14 years ago
Notes: I had to pull the slaves out of production before giving the go-ahead. To bring the slaves up I ran the following via cssh:

# fdisk /dev/sdb   (allocate all space to primary partition #1)
# mkfs.ext3 /dev/sdb1
# mount -a
# cd /builds
# mkdir slave logs && chown cltbld: slave logs
# rsync -av moz2-linux64-slave12:/builds/slave/buildbot.tac slave/
# sed -i "s/moz2-linux64-slave12/`hostname -s`/g" slave/buildbot.tac
# reboot
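For reference, the buildbot.tac fixup step above can be sketched against a throwaway file; the slave name moz2-linux64-slave03 and the /tmp path below are illustrative stand-ins (on a real slave the name would come from `hostname -s`):

```shell
# Hypothetical sketch of the tac-file hostname substitution, run against a
# scratch copy rather than a live slave's /builds/slave/buildbot.tac.
printf 'slavename = "moz2-linux64-slave12"\n' > /tmp/buildbot.tac
# Stand-in for `hostname -s` on the slave being set up.
HOST=moz2-linux64-slave03
sed -i "s/moz2-linux64-slave12/${HOST}/g" /tmp/buildbot.tac
cat /tmp/buildbot.tac
```

After the substitution the tac file names the local slave instead of slave12, which is what keeps each slave from connecting as slave12 (see comment 12 for what happens otherwise).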
Assignee
Comment 11•14 years ago
slave12 is also done.
Comment 12•14 years ago
Copying the tac file from slave12 pushed all these slaves onto pm03. I've:
* moved 1,2,3,4,6 to pm for m-c/m-1.9.2 (and 5 was already there)
* moved 8 to pm02 for 1.9.1
* left 9,10,11 on pm03 for addonsmgr,places,electrolysis
12 should probably go to pm03 when set up again.
Comment 13•14 years ago
(In reply to comment #12)
> Copying the tac file from slave12 pushed all these slaves onto pm03. I've
> * moved 1,2,3,4,6 to pm for m-c/m-1.9.2 (and 5 was already there)
> * moved 8 to pm02 for 1.9.1
> * left 9,10,11 on pm03 for addonsmgr,places,electrolysis

Thanks a lot for doing this.

> 12 should probably go to pm03 when set up again.

Set up and attached to pm03.
Updated•9 years ago
Product: mozilla.org → mozilla.org Graveyard