Bug 701179 (Closed) — Opened 13 years ago, Closed 13 years ago

moz2-linux-slave10 is down

Categories

(Release Engineering :: General, defect, P2)

x86
Linux
defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: armenzg, Assigned: coop)

Details

(Whiteboard: [buildduty])

We should be able to bring this machine back up via vCenter.
Whiteboard: [buildduty]
Assignee: nobody → coop
Status: NEW → ASSIGNED
OS: Mac OS X → Linux
Priority: -- → P2
I was already connected to admin.build.m.o, so I reset this VM.
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Double-checked this slave this morning when I noticed it still wasn't connected, and saw it was having problems running puppet. It looks like it got into an indeterminate state WRT scratchbox (die, scratchbox, die). Swapping in the old scratchbox.deleteme/ dir briefly allowed it to complete its puppet run and get back into service.
Both moz2-linux-slave10 and moz2-linux-slave17 seem to be down.
Shall we file a different bug, or reuse this one?
It is also unable to free up enough space:
"Error: unable to free 6.00 GB of space. Free space only 2.86 GB"
We should probably allow releases to be clobbered on staging slaves after X amount of time.
(In reply to Aki Sasaki [:aki] from comment #6)
> We should probably allow releases to be clobbered on staging slaves after X
> amount of time.

Let's file a new bug for that then.

As I mentioned to Armen in IRC, the extreme lack of space is likely why the initial puppet setup was failing for scratchbox on moz2-linux-slave10. That's a pretty large package IIRC.
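The age-based clobbering proposed in comment #6 could look roughly like the sketch below: walk the release directories on a staging slave and delete any older than some cutoff. Everything here is hypothetical (the directory layout, the helper name, and the 14-day placeholder for "X"); the real clobberer tooling works differently.

```python
import os
import shutil
import time

MAX_AGE_DAYS = 14  # placeholder for the undecided "X amount of time"


def clobber_old_releases(base_dir, max_age_days=MAX_AGE_DAYS, now=None):
    """Delete subdirectories of base_dir whose mtime is older than
    max_age_days, returning the names removed.

    Hypothetical sketch of age-based clobbering on a staging slave.
    """
    if now is None:
        now = time.time()
    cutoff = now - max_age_days * 86400
    removed = []
    for name in os.listdir(base_dir):
        path = os.path.join(base_dir, name)
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

Freeing old release dirs this way would have avoided the 2.86 GB situation that blocked the scratchbox puppet install here.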
I had to manually clear space on this slave yesterday to get it back into the cycle.
Product: mozilla.org → Release Engineering