Closed
Bug 746042
Opened 12 years ago
Closed 11 years ago
Clientproxy should ensure buildbot stops after every job run on a tegra.
Categories
(Release Engineering :: General, defect)
Tracking
(Not tracked)
RESOLVED
DUPLICATE
of bug 875822
People
(Reporter: Callek, Assigned: Callek)
References
Details
Attachments
(2 files, 1 obsolete file)
7.49 KB, patch — bear: review+
3.14 KB, patch
Many times when a buildbot job on our tegras ends, we just start the next job without ensuring that the tegra itself restarted, or even re-running verify.py. We should change that. Attached is my WIP 1; it's currently being tested on tegra-011 and includes a patch for another [to be filed] bug about making sut_lib actually log things properly for our needs. This certainly needs cleanup, but I suspect we're most of the way there with this bug.
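A minimal sketch of the per-job flow the description asks for: reboot the tegra after every job and re-verify it before the next job starts. The function names (`reboot_tegra`, `run_verify`, `start_next_job`) are illustrative placeholders, not the actual clientproxy/sut_lib API.

```python
def post_job_cycle(reboot_tegra, run_verify, start_next_job):
    """After every buildbot job: reboot the tegra, re-run verify-style
    checks, and only then allow the next job to start."""
    reboot_tegra()            # always reboot, regardless of job result
    if not run_verify():      # re-run verify.py-style health checks
        return False          # device unhealthy: do NOT start the next job
    start_next_job()
    return True

if __name__ == "__main__":
    events = []
    ok = post_job_cycle(
        reboot_tegra=lambda: events.append("reboot"),
        run_verify=lambda: (events.append("verify") or True),
        start_next_job=lambda: events.append("next_job"),
    )
    print(ok, events)
```

The point of the ordering is that `start_next_job` is unreachable unless both the reboot and the verify step have happened, which is exactly the guarantee the existing flow lacks.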
Assignee | ||
Comment 1•12 years ago
I thought I had this patch up days ago, sorry.
Attachment #615593 -
Attachment is obsolete: true
Attachment #617065 -
Flags: review?(bear)
Comment 2•12 years ago
Comment on attachment 617065 [details] [diff] [review]
[tools] v1

At the end of the forceReboot case, how do you get to the next state? Or is that why you have the forceRebootTS check clearing the flag?
Attachment #617065 -
Flags: review?(bear) → review+
Assignee | ||
Comment 3•12 years ago
(In reply to Mike Taylor [:bear] from comment #2)
> Comment on attachment 617065 [details] [diff] [review]
> [tools] v1
>
> At the end of the forceReboot case, how do you get to the next state?
> Or is that why you have the forceRebootTS check clearing the flag?

Yeah, I have it clearing the flag so we don't end up queuing multiple forceReboot states, and so that we can rely on the other state-setting code (dialback/active) to bring us back into the normal startup flow.
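An illustrative sketch of the flag-clearing behavior discussed in comments 2 and 3: the forceReboot timestamp is cleared as soon as it is acted on, so repeated requests cannot stack, and the normal dialback/active transitions restore the startup flow. The class, state names, and `tick` loop are assumptions for illustration, not the real clientproxy code.

```python
class DeviceMonitor:
    def __init__(self):
        self.force_reboot_ts = None   # timestamp requesting a forced reboot
        self.state = "active"

    def request_force_reboot(self, now):
        # Only record a reboot request if one isn't already pending,
        # so multiple forceReboot states can't queue up.
        if self.force_reboot_ts is None:
            self.force_reboot_ts = now

    def tick(self, now):
        if self.force_reboot_ts is not None:
            # Clear the flag immediately on entering the reboot case; the
            # usual state-setting code (dialback/active) brings us back
            # into the normal startup flow afterwards.
            self.force_reboot_ts = None
            self.state = "rebooting"
        elif self.state == "rebooting":
            self.state = "active"     # e.g. dialback received after reboot
        return self.state

if __name__ == "__main__":
    m = DeviceMonitor()
    m.request_force_reboot(now=100)
    m.request_force_reboot(now=101)   # ignored: a reboot is already pending
    print(m.tick(102), m.tick(103))
```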
Assignee | ||
Comment 4•12 years ago
http://hg.mozilla.org/build/tools/rev/c0924656a648
Assignee | ||
Comment 5•12 years ago
This was deployed, but had to be backed out because it overloaded the buildbot masters. We're currently rebalancing foopies and will bring up bm22 as planned to decrease load on bm19/20, hopefully letting us redeploy this without causing issues.
Assignee | ||
Comment 6•12 years ago
So I've decided to reverse my approach from [tools] v1; there are just too many hard-to-work-around issues in there, and we need the custom buildbotcustom step anyway (to avoid exceptions when a device disconnects). And since we're hacking in buildbotcustom and this step has some magic to support it anyway, we might as well _not_ have clientproxy try to do a graceful shutdown via buildbot's webstatus sockets.
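A generic illustration (not the actual buildbotcustom code) of the idea behind the custom step: treat a device disconnect as a normal, retryable result instead of letting the exception propagate. `DisconnectedError` and the wrapper are hypothetical stand-ins for whatever exception a real master-side step would see.

```python
class DisconnectedError(Exception):
    """Stand-in for the slave-disconnect exception a real step would see."""

def run_step_tolerating_disconnect(step_fn):
    """Run a step callable; convert a disconnect into a retryable result."""
    try:
        return ("success", step_fn())
    except DisconnectedError:
        # Swallow the disconnect: report a retryable status rather than
        # raising, so the job can be rescheduled after the tegra reboots.
        return ("retry", None)

if __name__ == "__main__":
    print(run_step_tolerating_disconnect(lambda: 42))

    def boom():
        raise DisconnectedError()

    print(run_step_tolerating_disconnect(boom))
```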
Attachment #626898 -
Flags: review?(bear)
Assignee | ||
Updated•12 years ago
Attachment #626898 -
Attachment is patch: true
Assignee | ||
Comment 7•12 years ago
Comment on attachment 626898 [details] [diff] [review]
[buildbotcustom] v1

Clearing the review request. This looked fine in testing on foopy5/staging, but at bear's direction I deployed to bm19 so we could see if it helped avoid the issues we've been seeing in prod. It did not help, so back to the drawing board for now.
Attachment #626898 -
Flags: review?(bear)
Assignee | ||
Comment 8•12 years ago
Theory is that the patch by catlee in https://bug772467.bugzilla.mozilla.org/attachment.cgi?id=640910 will help my sanity here when I dive back in.
Assignee | ||
Comment 9•11 years ago
Clientproxy is dead, and this approach is now superseded by my work in bug 875822.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → DUPLICATE
Updated•11 years ago
Product: mozilla.org → Release Engineering
Updated•6 years ago
Component: General Automation → General