(In reply to [:fabrice] Fabrice Desré from comment #4)
Thanks, that does explain what's going on. gc_allocation_threshold_mb is very low and combined with the recently-added urgent threshold (default 16MB) this means that we're going to end up finishing a lot of GCs non-incrementally.
The key when setting these prefs is to test representative workloads and find a tradeoff between the amount of time spent doing GC and the maximum heap size used. Setting these values very low will result in a lot of GC activity but won't improve memory use beyond a certain point.
Looking at the prefs themselves:
// Increase mark slice time from 10ms to 30ms
We use 5ms in the browser nowadays. Reducing this could reduce jank, but test first if you run on lower-performance devices. It looks like this was intentionally increased from the original 10ms.
These parameters all got renamed, so I'm not sure what effect setting them has, if any. The new names/defaults are:
The growth values are factors that are multiplied by 100 to get an integer, so here 150 means a factor of 1.5.
The purpose of these prefs is to make the high frequency growth factor vary based on the heap size (this is used to determine the GC trigger when GC is happening frequently). The heap is classified into small/medium/large by comparing its size to gc_small_heap_size_max_mb and gc_large_heap_size_min_mb (previously gc_high_frequency_low_limit_mb and gc_high_frequency_high_limit_mb). If the heap is small then gc_high_frequency_small_heap_growth is used, if it is large then gc_high_frequency_large_heap_growth is used, and otherwise the value is interpolated between the two.
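To make the classification and interpolation concrete, here is a minimal sketch of the calculation as described above. This is not the SpiderMonkey implementation, and the default values in the signature are illustrative assumptions, not necessarily the current browser defaults:

```python
def high_frequency_growth(heap_size_mb,
                          small_heap_max_mb=100,   # gc_small_heap_size_max_mb (assumed value)
                          large_heap_min_mb=500,   # gc_large_heap_size_min_mb (assumed value)
                          small_heap_growth=300,   # gc_high_frequency_small_heap_growth (assumed)
                          large_heap_growth=150):  # gc_high_frequency_large_heap_growth (assumed)
    """Return the growth factor; the prefs store the factor multiplied by 100."""
    if heap_size_mb <= small_heap_max_mb:
        growth = small_heap_growth
    elif heap_size_mb >= large_heap_min_mb:
        growth = large_heap_growth
    else:
        # Linearly interpolate between the small and large heap growth values.
        t = (heap_size_mb - small_heap_max_mb) / (large_heap_min_mb - small_heap_max_mb)
        growth = small_heap_growth + t * (large_heap_growth - small_heap_growth)
    return growth / 100.0
```

With these assumed values, a 300MB heap sits halfway between the two limits and gets a factor halfway between 3.0 and 1.5, i.e. 2.25.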
This heap size classification is also used to determine the non-incremental limit, which is the size at which the GC will finish an ongoing incremental GC non-incrementally. In the browser we have:
This is 150 in the browser, but 120 is not unreasonable: it will trigger more GCs as the heap grows but keep the maximum heap size smaller.
This pref has been removed.
This is an important one for the purpose of this bug and is used to calculate the threshold size at which GC is triggered. This is calculated as max(heap_size, gc_allocation_threshold_mb) * growth_factor. The upshot is that we don't trigger GC on allocation if the heap size is less than gc_allocation_threshold_mb * growth_factor. Other parts of the system do still trigger GCs for other reasons.
In the browser this is 27MB.
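As a sketch of that threshold calculation (illustrative only, with the 27MB browser value for the allocation threshold and an assumed growth factor of 1.5):

```python
def gc_trigger_threshold_mb(heap_size_mb,
                            allocation_threshold_mb=27,  # browser value per above
                            growth_factor=1.5):          # assumed growth factor
    # Trigger = max(heap_size, gc_allocation_threshold_mb) * growth_factor,
    # so small heaps never reach the trigger through allocation alone.
    return max(heap_size_mb, allocation_threshold_mb) * growth_factor

print(gc_trigger_threshold_mb(10))   # 40.5 -- a 10MB heap is far below the trigger
print(gc_trigger_threshold_mb(100))  # 150.0
```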
Finally, a new pref has been added:
This controls the point at which an incremental GC becomes 'urgent' and the GC increases the budget of GC slices in an effort to finish it before we reach the non-incremental limit (at which point we will finish it synchronously).
Writing all this up makes me realise that this system is too complicated to configure easily and we should try and simplify it.
The problem in this bug is that we are hitting the urgent threshold immediately and increasing the slice time leading to very long slices. I'm not sure why the GC is running repeatedly though.
You roughly want to arrange for growth_factor * heap_size * (non_incremental_limit - 1) to be more than 2 * urgent_threshold.
- removing the renamed parameters and replacing them with the browser defaults from modules/libpref/init/all.js
Hopefully that fixes this problem and you can then tune the parameters further from there.