
Slaverebooter should handle its jobs in a Queue rather than kickoff a single thread per slave.

RESOLVED FIXED

Status

Product: Release Engineering
Component: Tools
Reported: 3 years ago
Last modified: 3 years ago

People

(Reporter: Callek, Assigned: Callek)

Tracking

Firefox Tracking Flags

(Not tracked)

Details

Attachments

(1 attachment)

Description (Assignee)

3 years ago
Created attachment 8463802 [details] [diff] [review]
slaverebooter

So, in order to support returning early and then kicking off new work (especially if we want to re-check a particular slave after its worker returns, within the same slaverebooter run), we'll need some way to communicate that job queue.

Not seeing any (sane) way of returning data from a Thread (such as with a generator), I elected to switch slaverebooter to use a Queue.

This tests fine (with --dryrun) on the same master that slaverebooter is running on now. I should note that this bug, as is, does not change any behavior in a significant way: it just front-loads the queue and then churns through it at the same pace as before, with some rearranged logic to deal with the queue.
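The front-loaded-queue model described above could look roughly like the following sketch. This is not the actual slaverebooter code; all names (MAX_WORKERS, process_slave, worker) are illustrative, and it uses Python 3's `queue` module (spelled `Queue` in the Python 2 of that era):

```python
# Sketch only: a bounded pool of worker threads draining a front-loaded FIFO
# queue, instead of kicking off one thread per slave.
import queue
import threading

MAX_WORKERS = 4
slave_queue = queue.Queue()

def process_slave(slave, results):
    # Stand-in for the real per-slave work (graceful shutdown, reboot, ...).
    results.append(slave)

def worker(results):
    while True:
        try:
            slave = slave_queue.get_nowait()
        except queue.Empty:
            return  # queue drained; this worker exits
        process_slave(slave, results)
        slave_queue.task_done()

# Front-load the queue, then churn through it with a bounded pool.
slaves = ["slave-%02d" % i for i in range(10)]
for name in slaves:
    slave_queue.put_nowait(name)

results = []
threads = [threading.Thread(target=worker, args=(results,))
           for _ in range(MAX_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The pool size stays fixed regardless of how many slaves are queued, which is the "limited number of threads" constraint mentioned below.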

---

Once this lands it will not immediately deploy: slaverebooter uses a static buildtools package, and this patch does not presume to bump it.
Attachment #8463802 - Flags: review?(nthomas)

Updated

3 years ago
Attachment #8463802 - Flags: review?(nthomas) → review?(bhearsum)
Comment 1

3 years ago
I'm not quite sure I understand why this change is necessary. Can you expand on what "return early and kick new work" means?

(In reply to Justin Wood (:Callek) from comment #0)
> Not seeing any (sane) way of returning data from a Thread (such as a
> generator)

I'm not sure what this means either. Did you mean "unlike a generator"? I don't think Threads and generators are the same thing...
Comment 2 (Assignee)

3 years ago
(In reply to Ben Hearsum [:bhearsum] from comment #1)
> I'm not quite sure I understand why this change is necessary, can you expand
> on what "return early and kick new work" means?

Basically, I'm going to short-circuit the graceful loop so we can exit the script far earlier than five hours in the case of failed gracefuls.

In order to do that, I'm planning on "moving on" to the next piece of work right after every graceful is issued (so we can issue gracefuls faster/earlier).

Since I'm not waiting even a minute for a graceful to finish with that new model, I also want to give the script a chance to go back and see if any gracefuls have finished before I exit the script.

It's that "go back and re-check the ones we started" piece that I'll need this for.

(Of course, I'll also have logic that can skip the graceful step if necessary on the next iteration of the script.)

> (In reply to Justin Wood (:Callek) from comment #0)
> > Not seeing any (sane) way of returning data from a Thread (such as a
> > generator)
> 
> I'm not sure what this means either. Did you mean "unlike a generator"? I
> don't think Threads and generators are the same thing...

I meant that I had considered using yield instead, but since this was in a Thread() that we stop/kill and have a limited number of threads, *and* since you can't easily return data to the parent thread/proc from a Thread (without some hacks), this was the best model I could come up with.
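To illustrate the "can't easily return data from a Thread" point: a `threading.Thread` target's return value is simply discarded, so the usual workaround is exactly a shared queue. A minimal sketch with made-up names:

```python
# A Thread's target cannot `return` a value to the parent thread; pushing
# results onto a shared queue is the standard workaround.
import queue
import threading

results = queue.Queue()

def check_slave(slave):
    # Any `return` here would be discarded, so push the result instead.
    results.put((slave, "graceful-finished"))

t = threading.Thread(target=check_slave, args=("slave-01",))
t.start()
t.join()
item = results.get_nowait()
print(item)  # ('slave-01', 'graceful-finished')
```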

The idea here is that in process_slave, we'll SLAVE_QUEUE.put_nowait(slave) if we want to bail so other slaves can get work, and since it's FIFO we'll finish the entire list before we get back to *this* slave. The logic will allow that to work.
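A single-threaded sketch of that FIFO re-check ordering (hypothetical data; in slaverebooter the queue is drained by worker threads):

```python
# Sketch of the re-check idea: if a slave isn't ready yet, put it back on the
# FIFO queue; every other queued slave is handled before we see it again.
import queue

SLAVE_QUEUE = queue.Ueue() if False else queue.Queue()
for name in ["a", "b", "c"]:
    SLAVE_QUEUE.put_nowait(name)

needs_recheck = {"a"}  # pretend slave "a" has an unfinished graceful
order = []

while not SLAVE_QUEUE.empty():
    slave = SLAVE_QUEUE.get_nowait()
    if slave in needs_recheck:
        needs_recheck.discard(slave)
        SLAVE_QUEUE.put_nowait(slave)  # bail; revisit after the rest
        order.append(slave + ":requeued")
        continue
    order.append(slave + ":done")

print(order)  # ['a:requeued', 'b:done', 'c:done', 'a:done']
```

Because the queue is FIFO, "a" is only revisited after "b" and "c" have each had their turn.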


As I said, the benefits this patch is needed for are not reflected in the patch itself, but I wanted to keep the diffs as sane as possible.

Comment 3

3 years ago
I think I understand what you're getting at now, but this is hard to review without the follow-up patch that actually makes the changes you're talking about. Unless you really need this reviewed first, I'm going to wait for that patch.
Attachment #8463802 - Flags: review?(bhearsum) → review+
Comment 4 (Assignee)

3 years ago
Landed, but not yet deployed (need to bump the buildtools package first).
Status: NEW → RESOLVED
Last Resolved: 3 years ago
Resolution: --- → FIXED