Closed Bug 1105453 Opened 10 years ago Closed 2 years ago

Consider giving users of SliceBudget a way to query how much of the budget is left.

Categories

(Core :: JavaScript: GC, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED WONTFIX

People

(Reporter: ehoogeveen, Unassigned)

Details

Attachments

(1 file, 1 obsolete file)

Currently when you have a SliceBudget, there's no real way to tell how much of the budget is left (you could reach into its guts, but this seems ill-advised). You're either over budget or you're not. Knowing how much of a budget is left could be used to make decisions like whether or not to start the next stage of GC or CC, which might not all be equally incremental. This came up in #content today, and I thought it might be a worthwhile addition (although it won't have any current users).

The minimal way to do this would be to add a fractionRemaining() function taking a budget parameter, indicating the budget that the SliceBudget was created with. This could work with both time- and work-based budgets, since the SliceBudget knows which variant it is. However, this would require users to know the budget that the SliceBudget was created with, which seems pretty inconvenient.

Instead I'd like to propose that we just store the budget in each SliceBudget. This makes them a little bigger, but these objects are only used in a handful of places and it wouldn't exactly make them heavyweight. This field also wouldn't be queried on the hot paths (SliceBudget::step and SliceBudget::isOverBudget), so I don't think it would do any harm.
This adds a 'budget' field to SliceBudget to track what its budget is, and adds SliceBudget::fractionRemaining(), which returns a value between 0.0 and 1.0 indicating how much of the budget is left. For unlimited budgets, this function always returns 1.0, and for 0-time or 0-work budgets it always returns 0.0.

I also added SliceBudget::unlimitedBudget for completeness. This has the same value as SliceBudget::unlimitedDeadline, but they're different conceptually.
Attachment #8529238 - Flags: review?(wmccloskey)
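
For illustration, here is a minimal sketch of what the proposed query could look like. Field and constant names are assumptions made up for this example; they do not reflect the actual js::SliceBudget definition or the attached patch.

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical, simplified SliceBudget; only the pieces needed to show the idea.
struct SliceBudget {
    static const int64_t UnlimitedBudget = INT64_MAX;  // illustrative sentinel

    int64_t budget;     // the budget this SliceBudget was created with
    int64_t remaining;  // time or work units left, in the same units as 'budget'

    // Returns 1.0 for unlimited budgets, 0.0 for zero budgets or once the
    // budget is exhausted, otherwise the fraction of the budget still left.
    double fractionRemaining() const {
        if (budget == UnlimitedBudget)
            return 1.0;
        if (budget == 0)
            return 0.0;
        double fraction = double(remaining) / double(budget);
        return std::max(0.0, std::min(1.0, fraction));
    }
};
```

A caller could then, for example, skip a not-very-incremental phase whenever fractionRemaining() drops below some threshold.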
(CCing some more GC folks so they know of this change / can chime in)
(In reply to Emanuel Hoogeveen [:ehoogeveen] from comment #3)
What's the use case for this?  We probably shouldn't add this unless it's actually going to be used for something.

I can think of one possible use - when doing incremental sweeping we could opt to not start sweeping a new zone group if we're nearly at the end of our slice.
Right, that's basically the (potential) use case that came up with regards to the CC: being able to answer the question "should we start the next phase, which may not be very incremental, or wait until we have a full slice?"

It could also potentially be used for stats or profiling, and I think it fills a strange gap in the API (but maybe it's not a gap that needs filling). It occurs to me that it might be more useful if it did not cap at 0.0, so you could use it to see by how much you went over budget - though I don't know if the proposed function is the clearest way to express that.
Comment on attachment 8529238 [details] [diff] [review]
Let users query how much of a SliceBudget's original budget is remaining.

Jon is a better reviewer for this.
Attachment #8529238 - Flags: review?(wmccloskey) → review?(jcoppeard)
Comment on attachment 8529238 [details] [diff] [review]
Let users query how much of a SliceBudget's original budget is remaining.

Review of attachment 8529238 [details] [diff] [review]:
-----------------------------------------------------------------

I don't think we should take this change until we have a patch that uses the new functionality, so I'm cancelling the review for now.

I like the idea of not starting new work if it will probably push us over the slice budget.  For this, though, I'm not sure the fraction of the budget remaining is the most useful information.  Being able to get the absolute time remaining is more concrete, although at the end of the day there are still several unknowns when deciding whether to yield early.

Emanuel, do you want to look at doing this?  You could investigate how long beginSweepingZoneGroup() takes on average (maybe we have telemetry that will help) and then come up with a heuristic for when to start a new group and when to yield.
Attachment #8529238 - Flags: review?(jcoppeard)
Hmm. Aside from TimeBudgets, there are also WorkBudgets, and it's not clear how to map one to the other; there might not be a consistent conversion factor right now. I'd like to add a more-deterministic mode for fuzzing where all time budgets are converted to work budgets - I'd have to come up with such a conversion factor in the process, but I haven't gotten around to it yet.

It's also not wholly clear what to do for unlimited budgets - just return INT64_MAX? Perhaps we could treat work budgets as unlimited for the purposes of deciding whether or not to start the next zone group.

Aside from that, the average time probably varies a lot depending on hardware and workloads. We could perhaps track a rolling mean and decide based on that (using mozilla::RollingMean). That doesn't give any information about the variance though (but then, anything using the variance would probably get complicated).
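
A rough sketch of the kind of heuristic being discussed here, using mozilla::RollingMean. The function names, the time units, and the window size are assumptions for illustration, not the contents of the attached patch.

```cpp
#include <cstdint>
#include "mozilla/RollingMean.h"

// Rolling mean of recent beginSweepingZoneGroup() durations (microseconds);
// the window size is arbitrary for the sketch.
static mozilla::RollingMean<int64_t, int64_t> sweepGroupTimes(1000);

// Record how long starting a zone group took, after the fact.
void RecordSweepGroupTime(int64_t durationUs) {
    sweepGroupTimes.insert(durationUs);
}

// Decide whether to start sweeping the next zone group now or yield until the
// next slice: only start if the time remaining in the slice exceeds the mean
// observed cost (optionally scaled by a tunable multiplier).
bool ShouldStartNextZoneGroup(int64_t nowUs, int64_t sliceDeadlineUs,
                              double multiplier = 1.0) {
    if (sweepGroupTimes.empty())
        return true;
    return double(sliceDeadlineUs - nowUs) >
           multiplier * double(sweepGroupTimes.mean());
}
```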
This turned out pretty simple. Seems to work (tested on http://gregor-wagner.com/tmp/mem), but I haven't checked if it actually reduces the max pause time. Need to check with GC stats enabled, but I'm not sure how (haven't had to do it before).
Attachment #8529238 - Attachment is obsolete: true
Attachment #8533717 - Flags: feedback?(jcoppeard)
I added some counters to see how often this helps. This is for a run of MemBench, but note that my CPU is busy doing some calculations (so there are occasional random spikes in latency skewing these slightly):

True positive:  143
False positive:  45
True negative: 2546
False negative:  49

Here,
1) 'true positive' means we held off until the next slice, and beginSweepZoneGroup took long enough that we *would* have gone over if we hadn't.
2) 'false positive' means we held off until the next slice, but there was enough time left in the slice for beginSweepZoneGroup to finish in time.
3) 'true negative' means we did beginSweepZoneGroup right away, and we didn't finish over budget.
4) 'false negative' means we did beginSweepZoneGroup right away, but we finished over budget. This number may be worse because my system is so busy.

So this shows that when we do hold off, we make the right choice about 3/4 of the time (143 of 188 cases), and it helps about 5.5% of the time. Note also that the random spikes in latency are probably increasing the mean, so the number of false positives might also be lower with an idle system.

Beyond that, we could 1) introduce a multiplier to increase or reduce the dynamic limit, and 2) play with the maximum number of values in the rolling mean to make it more or less responsive to changes. I'll probably play around with it a bit more once I can interrupt these calculations.

Finally, I don't know how much sense it makes to have a shared mean for all zone groups. Perhaps it would make more sense to have one for each? That would make for a bigger patch though (and I'm not sure how to store it).
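
For reference, a tiny sketch of instrumentation that would produce the four counters above. Here 'heldOff' is whether the heuristic decided to wait for the next slice and 'wouldHaveOverrun' is whether doing the work immediately would have exceeded the remaining slice time; both names are hypothetical.

```cpp
// Hypothetical counters matching the classification described above.
struct SweepHeuristicCounters {
    unsigned truePositive = 0;   // held off, and we would have gone over budget
    unsigned falsePositive = 0;  // held off, but there was enough time left
    unsigned trueNegative = 0;   // started right away and stayed within budget
    unsigned falseNegative = 0;  // started right away but went over budget

    void record(bool heldOff, bool wouldHaveOverrun) {
        if (heldOff) {
            if (wouldHaveOverrun)
                ++truePositive;
            else
                ++falsePositive;
        } else {
            if (wouldHaveOverrun)
                ++falseNegative;
            else
                ++trueNegative;
        }
    }
};
```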
I played around with it a bit now that my computer is otherwise idle. Somewhat surprisingly, the picture didn't change at all; it's about as good as before. Multiplying the mean by different factors (note that the totals aren't the same each time, since the pages MemBench opens basically keep causing GCs forever):

                 0.5   1.0   2.0
True positive    192   171   105
False positive     2    59   245
True negative   3260  3334  3608
False negative    82    58    59

Clearly, waiting for a new slice only when the time left is less than half the mean cuts the false positives by a lot, at the cost of more false negatives. Conversely, using double the mean increases the false positives by a lot but doesn't really help the false negatives at all (the remaining ones are presumably large spikes). The mean will be bigger than the median because it includes the spikes, so it's probably large enough to cover most of the variance; I'm guessing that's why it works well.

I also played with the maximum number of values used in the rolling mean, but the results were very inconclusive. 16 and 1024 gave basically the same result (i.e. the above), but 64 and 256 both had a worse true to false positive ratio. I don't know if that was just a statistical fluke (Membench takes a while, so I'm just doing a single run), or if these numbers really just work well with the workload of Membench, but as it is I guess 1000 is an okay choice.
Tried out some more values for science (each column shows percentages of all decisions):

                 0.625     0.75   0.875 (1)   0.875 (2)
True positive    4.6907   4.8437      5.8525      5.4270
False positive   0.4104   0.4787      1.7638      0.9843
True negative   93.0519  92.9597     90.5665     91.9127
False negative   1.8470   1.7178      1.8172      1.6760

I don't know how well these numbers can be trusted - I did two runs with 0.875 and you can see that it's not as consistent as the tiny difference between 0.625 and 0.75 would make it appear. Still, 0.75 might be significantly better than 1.0 on my system (but I'm not sure how much I trust it to hold in general).
Comment on attachment 8533717 [details] [diff] [review]
Don't start sweeping the next zone group if it'll put us over budget.

Review of attachment 8533717 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry it's taken me so long to get to this.

I'm in two minds about this approach.  On the one hand it seems like a problem we should try to address, but on the other hand it looks like we don't really have enough information to address it properly.

So I don't think we can really land something like this unless we can show some measurable improvements from it.

> I don't know how much sense it makes to have a shared mean for all zone groups

Yeah, I'm not sure it does - the zones are independent after all.  A better way might be to look at the amount of work that needs to be done in the first sweep slice and relate that to the time taken... but that's just making the situation more and more complicated.
Attachment #8533717 - Flags: feedback?(jcoppeard)
(In reply to Jon Coppeard (:jonco) (PTO until 29th Dec) from comment #13)
> So I don't think we can really land something like this unless we can show
> some measurable improvements from it.

To expand on the earlier data, during a run of MemBench (with all 150 tabs open) I saw a mean of about 1.5ms - but that might well be more than double the median time, given the spikes I saw. So this might become relevant if we're striving for 5ms slices, but right now I think there are bigger fish to fry.

If we can eliminate those spikes and use per-zone(-group?) data, we might be looking at something closer to a normal distribution. At that point using the mean would only catch 50% of cases, so we'd need something more complicated. It wouldn't be hard to encapsulate the standard deviation in something like RollingMeanAndSD (since it just requires storing the squares as well; see the sketch after this comment), but that only works if we *do* get normally distributed data.

The best solution would be if we could incrementalize these bits somehow. I don't know enough about these functions to know if adding a bit of state and resuming later would leave things in an inconsistent state though (making the above patch didn't really involve any digging).
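
A sketch of the hypothetical RollingMeanAndSD mentioned above: a fixed-size window that also tracks the sum of squares, so the mean and standard deviation can be read off in O(1). This is not an existing mozilla class; it only illustrates the "store the squares as well" idea.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

class RollingMeanAndSD {
    std::vector<double> values_;
    size_t maxValues_;
    size_t next_ = 0;  // index of the oldest value once the window is full
    double sum_ = 0.0;
    double sumSquares_ = 0.0;

public:
    explicit RollingMeanAndSD(size_t maxValues) : maxValues_(maxValues) {}

    void insert(double v) {
        if (values_.size() < maxValues_) {
            values_.push_back(v);
        } else {
            double old = values_[next_];
            sum_ -= old;
            sumSquares_ -= old * old;
            values_[next_] = v;
            next_ = (next_ + 1) % maxValues_;
        }
        sum_ += v;
        sumSquares_ += v * v;
    }

    // Callers should check empty() before asking for mean() or sd().
    bool empty() const { return values_.empty(); }
    double mean() const { return sum_ / values_.size(); }

    // Population standard deviation over the current window; clamped at zero
    // to guard against small negative values from floating-point error.
    double sd() const {
        double m = mean();
        double variance = sumSquares_ / values_.size() - m * m;
        return variance > 0.0 ? std::sqrt(variance) : 0.0;
    }
};
```

With something like this, the "don't start a new group" check could use mean() + k * sd() instead of a bare multiplier - but, as noted above, that only buys anything if the per-group times really are roughly normally distributed.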

The bug assignee is inactive on Bugzilla, so the assignee is being reset.

Assignee: emanuel.hoogeveen → nobody
Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → WONTFIX