Closed
Bug 819724
(tegra-347)
Opened 12 years ago
Closed 11 years ago
tegra-347 problem tracking
Categories
(Infrastructure & Operations Graveyard :: CIDuty, task, P3)
Tracking
(Not tracked)
RESOLVED
FIXED
People
(Reporter: Callek, Unassigned)
References
Details
(Whiteboard: [buildduty][buildslaves][capacity])
Not connecting to sut port after rebooting (multiple times)
Reporter
Updated•12 years ago
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Reporter
Comment 1•12 years ago
No jobs taken on this device for >= 7 weeks
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Reporter
Comment 2•12 years ago
(mass change: filter on tegraCallek02reboot2013)
I just rebooted this device, hoping that many of the ones I'm doing tonight come back automatically. I'll check back tomorrow to see if it did; if it did not, I'll triage the next step manually on a per-device basis.
---
Command I used (with a manual patch to the fabric script to allow this command)
(fabric)[jwood@dev-master01 fabric]$ python manage_foopies.py -j15 -f devices.json `for i in 021 032 036 039 046 048 061 064 066 067 071 074 079 081 082 083 084 088 093 104 106 108 115 116 118 129 152 154 164 168 169 174 179 182 184 187 189 200 207 217 223 228 234 248 255 264 270 277 285 290 294 295 297 298 300 302 304 305 306 307 308 309 310 311 312 314 315 316 319 320 321 322 323 324 325 326 328 329 330 331 332 333 335 336 337 338 339 340 341 342 343 345 346 347 348 349 350 354 355 356 358 359 360 361 362 363 364 365 367 368 369; do echo '-D' tegra-$i; done` reboot_tegra
The command does the reboots one at a time, from the foopy each device is connected to, with one SSH connection per foopy.
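For illustration only, a minimal sketch of that behaviour, assuming a hypothetical device-to-foopy mapping and a hypothetical reboot_tegra remote command (the real script reads devices.json and drives the foopies via fabric):

    # Hypothetical sketch, not the actual manage_foopies.py: reboot each tegra
    # one at a time from the foopy it is attached to, one SSH session per foopy.
    import subprocess
    from collections import defaultdict

    # Assumed mapping for the example; the real data comes from devices.json.
    DEVICES = {
        "tegra-347": "foopy110",
        "tegra-348": "foopy110",
        "tegra-021": "foopy105",
    }

    def reboot_tegras(devices):
        # Group the requested devices by the foopy that hosts them.
        by_foopy = defaultdict(list)
        for tegra, foopy in devices.items():
            by_foopy[foopy].append(tegra)

        for foopy, tegras in sorted(by_foopy.items()):
            # Hypothetical remote command; the real reboot mechanism on the
            # foopies is not shown in this bug. Joining with ";" keeps the
            # reboots serial within the single SSH connection to this foopy.
            remote = "; ".join("reboot_tegra %s" % t for t in sorted(tegras))
            subprocess.check_call(["ssh", foopy, remote])

    if __name__ == "__main__":
        reboot_tegras(DEVICES)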
Reporter
Comment 3•12 years ago
Back from recovery
Status: REOPENED → RESOLVED
Closed: 12 years ago → 12 years ago
Resolution: --- → FIXED
Comment 4•12 years ago
9 days, 4:59:28 since last job
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 5•12 years ago
recovery didn't help, dunno what to do
Reporter
Updated•12 years ago
Status: REOPENED → RESOLVED
Closed: 12 years ago → 12 years ago
Resolution: --- → FIXED
Comment 6•11 years ago
Passing fewer than 50% of the jobs it takes - recover, smash, I don't care beyond stop_cp
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Reporter
Comment 7•11 years ago
Trying again -- last job was in May (the 23rd).
A manual PDU reboot brought it back to life.
Status: REOPENED → RESOLVED
Closed: 12 years ago → 11 years ago
Resolution: --- → FIXED
Comment 8•11 years ago
agent check failing, try pdu reboot
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 9•11 years ago
back in production
Status: REOPENED → RESOLVED
Closed: 11 years ago → 11 years ago
Resolution: --- → FIXED
Reporter
Updated•11 years ago
Assignee
Updated•11 years ago
Product: mozilla.org → Release Engineering
Comment 10•11 years ago
agent check failing; pdu reboot didn't help
Comment 11•11 years ago
Just came back on its own.
Status: REOPENED → RESOLVED
Closed: 11 years ago → 11 years ago
No longer depends on: 922822
Resolution: --- → FIXED
Reporter
Comment 12•11 years ago
can't seem to reliably keep this one up
Comment 13•11 years ago
decommed
Status: REOPENED → RESOLVED
Closed: 11 years ago → 11 years ago
Resolution: --- → FIXED
Updated•7 years ago
Product: Release Engineering → Infrastructure & Operations
Updated•5 years ago
Product: Infrastructure & Operations → Infrastructure & Operations Graveyard