[meta] Investigate balancing GC heap sizes using compositional limit rule
Categories
(Core :: JavaScript: GC, task, P2)
Tracking
()
People
(Reporter: jonco, Unassigned)
References
(Depends on 1 open bug, Blocks 1 open bug)
Details
(Keywords: meta)
Attachments
(1 file)
148.89 KB,
image/png
An approach to optimally allocating memory between multiple GC heaps is described here: https://arxiv.org/abs/2204.10455
Current heap limits are set to W * c, where W is the live memory use and c is a constant factor. To avoid collecting too often for heaps that are growing quickly, we adjust the value of c based on the heap size when collections are happening frequently.
The idea is for heap limits to be set to W + c * sqrt(W * g / s), where g is the allocation rate and s is the garbage collection rate. This rule is compositional in that it allocates memory optimally when used by multiple heaps, even without communication between them.
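As a concrete but hypothetical illustration, here is a minimal sketch of the rule in C++, assuming the heap tracks W, g and s; the function name and the tuning constant c are placeholders, not the actual SpiderMonkey code:

```cpp
// A minimal sketch of the compositional limit rule, assuming the heap
// tracks its live size and its allocation and collection rates. The
// function name and the tuning constant c are illustrative, not actual
// SpiderMonkey parameters.
#include <cmath>
#include <cstddef>

// W: live bytes after the last collection.
// g: bytes allocated per second by this heap.
// s: bytes the collector can process per second of GC work.
// c: a tuning constant shared by all heaps.
size_t ComputeHeapLimit(double W, double g, double s, double c) {
  // Headroom grows with the square root of W * g / s, so heaps that
  // allocate quickly relative to their collection cost get more of the
  // shared memory budget.
  double headroom = c * std::sqrt(W * g / s);
  return static_cast<size_t>(W + headroom);
}
```

Because each heap applies the same rule with the same constant c, the total headroom is divided between heaps in proportion to sqrt(W * g / s), which is what lets them reach a good split of the memory budget without communicating.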
The goal is to reduce overall memory usage and GC time by allocating memory optimally between heaps, and to simplify the heap sizing heuristics. For maximum effect, the heap limits must be recalculated periodically as the allocation rate changes, rather than only at the end of GC as currently happens.
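A sketch of what that periodic recalculation could look like, assuming the allocation rate is smoothed with an exponentially weighted moving average and the limit is refreshed on a timer; the names, constants and smoothing scheme are assumptions for illustration, not the existing implementation:

```cpp
// A hypothetical sketch of recalculating the limit on a timer rather
// than only at the end of GC. It assumes the allocation rate is
// smoothed with an exponentially weighted moving average; the names,
// constants and smoothing scheme are assumptions for illustration.
#include <cmath>

struct HeapLimitState {
  double liveBytes = 0;    // W, measured at the end of the last GC.
  double allocRate = 0;    // g, smoothed bytes allocated per second.
  double collectRate = 1;  // s, smoothed bytes collected per second of GC.
  double limit = 0;
};

constexpr double kSmoothing = 0.5;   // assumed EWMA weight
constexpr double kTuningC = 1000.0;  // assumed tuning constant

// Called periodically (say once a second) with the bytes allocated on
// this heap since the last call and the elapsed time. liveBytes and
// collectRate would be updated separately at the end of each GC.
void UpdateHeapLimit(HeapLimitState& st, double bytesAllocated, double seconds) {
  double instantRate = bytesAllocated / seconds;
  st.allocRate = kSmoothing * instantRate + (1 - kSmoothing) * st.allocRate;
  st.limit = st.liveBytes +
             kTuningC * std::sqrt(st.liveBytes * st.allocRate / st.collectRate);
}
```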
Reporter
Comment 1•8 months ago
The graph shows the current heap limits compared with the optimal heap limit approach described in the paper. (Optimal heap limit graph is shown for allocation rate / collection rate = 0.03).
Currently, we choose either the low frequency or high frequency limit based on how often GCs are occurring, using the high frequency version if GCs happen more than once a second. There is a hard cutover between the two.
The high frequency limit (which ends up being used for GC-heavy workloads) uses a variable multiplicative factor based on the size of the heap. The factor is linearly interpolated between two values for mid-sized heaps, and this results in the 'bump' in the graph.
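For illustration, a sketch of roughly how the current scheme behaves, with placeholder thresholds and factors rather than the real SpiderMonkey parameters:

```cpp
// A rough sketch of the shape of the current heuristic described above,
// to show where the hard cutover and the 'bump' come from. The factors
// and thresholds are placeholders, not the real SpiderMonkey values.
#include <cstddef>

constexpr double kLowFreqFactor = 1.5;          // assumed
constexpr double kHighFreqFactorSmall = 3.0;    // assumed, small heaps
constexpr double kHighFreqFactorLarge = 2.0;    // assumed, large heaps
constexpr size_t kSmallHeapBytes = 100u << 20;  // assumed interpolation range
constexpr size_t kLargeHeapBytes = 500u << 20;

size_t CurrentHeapLimit(size_t liveBytes, bool gcMoreThanOncePerSecond) {
  double factor;
  if (!gcMoreThanOncePerSecond) {
    // Low frequency regime: a fixed multiplier. Switching regimes is the
    // hard cutover in the limit.
    factor = kLowFreqFactor;
  } else if (liveBytes <= kSmallHeapBytes) {
    factor = kHighFreqFactorSmall;
  } else if (liveBytes >= kLargeHeapBytes) {
    factor = kHighFreqFactorLarge;
  } else {
    // Linear interpolation between the two factors for mid-sized heaps;
    // multiplying a shrinking factor by a growing heap size is what
    // produces the 'bump' in the graph.
    double t = double(liveBytes - kSmallHeapBytes) /
               double(kLargeHeapBytes - kSmallHeapBytes);
    factor = kHighFreqFactorSmall + t * (kHighFreqFactorLarge - kHighFreqFactorSmall);
  }
  return size_t(liveBytes * factor);
}
```

Plotting liveBytes * factor against liveBytes shows both artefacts: a discontinuity where the regime changes and a non-linear bump over the interpolated range.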
The hard cutover between the two regimes and the bump in the latter one mean that similar workloads (or even the same workload) can end up with very different GC behaviour. I believe this is one reason we see noisy or bimodal behaviour in our tests.
The optimal heap limit calculation should not have this behaviour. It takes the allocation rate into account directly, so it doesn't need the cutover to boost heap sizes for GC-heavy workloads, and it changes smoothly with heap size.
Comment 2•7 months ago
> (In reply to Jon Coppeard (:jonco) from comment #1)
> The hard cutover between the two regimes and the bump in the latter one mean that similar workloads (or even the same workload) can end up with very different GC behaviour. I believe this is one reason we see noisy or bimodal behaviour in our tests.
This is a reasonable hypothesis, and fixing this problem would be really valuable.