Bug 722001 (tegra-118) - tegra-118 problem tracking
Status: RESOLVED FIXED
Opened 13 years ago • Closed 11 years ago
Categories: Infrastructure & Operations Graveyard :: CIDuty (task, P3)
Tracking: Not tracked
Reporter: philor • Assignee: Unassigned
Whiteboard: [badslave][buildduty]

Description
Since the evening of January 13th, it has done over 200 jobs without ever hitting anything but exception and retry.
Comment 1•13 years ago
I called './stop_cp.sh tegra-118' on foopy14 to (hopefully) prevent it from doing more jobs.
Updated•13 years ago
Priority: -- → P3
Updated•13 years ago
Assignee: nobody → bhearsum
Comment 2•13 years ago
With the dependent bug fixed, I restarted this tegra.
Comment 3•13 years ago
This tegra is back in production again. The first job it ran was purple, but the next two were green.
Alias: tegra-118
Assignee: bhearsum → nobody
Status: NEW → RESOLVED
Closed: 13 years ago
Component: Release Engineering → Release Engineering: Machine Management
QA Contact: release → armenzg
Resolution: --- → FIXED
Summary: tegra-118 needs help → tegra-118 problem tracking
Updated•13 years ago
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Updated•13 years ago
Status: REOPENED → RESOLVED
Closed: 13 years ago → 13 years ago
Resolution: --- → FIXED
Comment 6•12 years ago
Ran stop_cp.sh on this tegra.
Comment 7•12 years ago
Went offline mid-job; trying a PDU reboot.
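A PDU reboot here means power-cycling the device's outlet on its managed power distribution unit, rather than touching the device itself. As a minimal sketch only, assuming an APC-style PDU reachable over SNMP (the PDU hostname, community string, and outlet number below are placeholders, not values recorded in this bug), such a reboot can be scripted like this:

import subprocess

# APC PowerNet MIB, sPDUOutletCtl: writing 3 to an outlet's row
# requests an immediate reboot (power cycle) of that outlet.
SPDU_OUTLET_CTL = ".1.3.6.1.4.1.318.1.1.4.4.2.1.3"
OUTLET_REBOOT = "3"

def pdu_reboot(pdu_host, outlet, community="private"):
    # pdu_host, outlet, and community are assumptions; tegra-118's
    # real PDU-to-outlet mapping is not recorded in this bug.
    oid = "%s.%d" % (SPDU_OUTLET_CTL, outlet)
    subprocess.check_call(
        ["snmpset", "-v1", "-c", community, pdu_host, oid, "i", OUTLET_REBOOT])

pdu_reboot("pdu1.example.releng.scl3.mozilla.com", 4)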
Comment 8•12 years ago
Back in production.
Status: REOPENED → RESOLVED
Closed: 13 years ago → 12 years ago
Resolution: --- → FIXED
Reporter
Comment 9•12 years ago
21 timeouts in verify.py in a row.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Reporter
Updated•12 years ago
Summary: tegra-118 problem tracking → [disable me] tegra-118 problem tracking
Updated•12 years ago
Summary: [disable me] tegra-118 problem tracking → tegra-118 problem tracking
Reporter
Comment 10•12 years ago
Reporter
Comment 11•12 years ago
Reporter
Comment 12•12 years ago
Reporter
Comment 13•12 years ago
Reporter
Comment 14•12 years ago
Reporter
Comment 15•12 years ago
Reporter
Comment 16•12 years ago
Comment 17•12 years ago
This tegra was doing OK, but now it's having a hard time again. It probably needs recovery; I ran stop_cp.sh on it.
Blocks: 806950
Updated•12 years ago
Comment 18•12 years ago
Back in production.
Status: REOPENED → RESOLVED
Closed: 12 years ago → 12 years ago
Resolution: --- → FIXED
Updated•12 years ago
Comment 21•12 years ago
Brought back to life.
Status: REOPENED → RESOLVED
Closed: 12 years ago → 12 years ago
Resolution: --- → FIXED
Comment 22•12 years ago
...and dying again; please reimage.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Updated•12 years ago
Status: REOPENED → RESOLVED
Closed: 12 years ago → 12 years ago
Resolution: --- → FIXED
Comment 23•12 years ago
No jobs taken on this device for >= 7 weeks.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 24•12 years ago
(mass change: filter on tegraCallek02reboot2013)
I just rebooted this device, hoping that many of the ones I'm doing tonight come back automatically. I'll check back tomorrow to see if this one did; if it did not, I'll triage the next step manually on a per-device basis.
---
The command I used (with a manual patch to the fabric script to allow it):
(fabric)[jwood@dev-master01 fabric]$ python manage_foopies.py -j15 -f devices.json `for i in 021 032 036 039 046 048 061 064 066 067 071 074 079 081 082 083 084 088 093 104 106 108 115 116 118 129 152 154 164 168 169 174 179 182 184 187 189 200 207 217 223 228 234 248 255 264 270 277 285 290 294 295 297 298 300 302 304 305 306 307 308 309 310 311 312 314 315 316 319 320 321 322 323 324 325 326 328 329 330 331 332 333 335 336 337 338 339 340 341 342 343 345 346 347 348 349 350 354 355 356 358 359 360 361 362 363 364 365 367 368 369; do echo '-D' tegra-$i; done` reboot_tegra
The command does the reboots one at a time, from the foopy each device is connected to, with one ssh connection per foopy.
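For reference, the behavior described above (one ssh connection per foopy, devices rebooted one at a time) maps to a fabric task shaped roughly like the sketch below. This is not the real manage_foopies.py: the devices.json schema, the 'foopy' key, and the reboot_tegra.sh helper are assumptions made for illustration.

import json
from fabric.api import run, settings  # fabric 1.x API

def load_foopy_map(path="devices.json"):
    # Group device names by the foopy host they are attached to.
    # Assumes each entry in devices.json carries a 'foopy' key.
    with open(path) as f:
        devices = json.load(f)
    by_foopy = {}
    for name, info in devices.items():
        by_foopy.setdefault(info["foopy"], []).append(name)
    return by_foopy

def reboot_tegras(wanted, path="devices.json"):
    # One ssh connection per foopy; that foopy's tegras are then
    # rebooted one at a time over the same connection.
    for foopy, tegras in sorted(load_foopy_map(path).items()):
        targets = sorted(set(tegras) & set(wanted))
        if not targets:
            continue
        with settings(host_string=foopy):
            for tegra in targets:
                # reboot_tegra.sh is a hypothetical per-device helper.
                run("./reboot_tegra.sh %s" % tegra)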
Updated•12 years ago
Comment 25•12 years ago
Had to cycle clientproxy to bring this back.
Status: REOPENED → RESOLVED
Closed: 12 years ago → 12 years ago
Resolution: --- → FIXED
Updated•12 years ago
Assignee
Updated•12 years ago
Product: mozilla.org → Release Engineering
Comment 26•11 years ago
PDU reboot didn't fix this one.
Comment 27•11 years ago
Recovery didn't help; not sure what to do next.
Updated•11 years ago
Status: REOPENED → RESOLVED
Closed: 12 years ago → 11 years ago
Resolution: --- → FIXED
Comment 28•11 years ago
2014-01-16 10:35:14 tegra-118 p online active OFFLINE :: error.flg [Automation Error: Unable to connect to device after 5 attempts]
PDU reboot didn't help.
Comment 29•11 years ago
SD card replaced & reimaged/flashed.
Comment 30•11 years ago
(In reply to Eric Ramirez [:Eric] from comment #29)
> SD card replaced & reimaged/flashed.
Can we try again?
Depends on: 974917
Comment 31•11 years ago
SD card formatted, tegra reimaged and flashed.
[vle@admin1a.private.scl3 ~]$ fping tegra-118.tegra.releng.scl3.mozilla.com
tegra-118.tegra.releng.scl3.mozilla.com is alive
Reporter
Updated•11 years ago
Status: REOPENED → RESOLVED
Closed: 11 years ago → 11 years ago
QA Contact: armenzg → bugspam.Callek
Resolution: --- → FIXED
Updated•7 years ago
Product: Release Engineering → Infrastructure & Operations
Updated•5 years ago
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard