Too much time spent in ulock_wait() on 10.15
Categories
(Core :: Memory Allocator, defect)
Tracking
Performance Impact: low

| Tracking | Status |
|---|---|
| firefox-esr91 | unaffected |
| firefox-esr102 | unaffected |
| firefox102 | unaffected |
| firefox103 | wontfix |
| firefox104 | wontfix |
| firefox105 | fix-optional |
People
(Reporter: jrmuizel, Unassigned)
References
(Regression)
Details
(Keywords: perf:resource-use, regression)
https://share.firefox.dev/3c58vbJ
We're spending 59% of the renderer's time in ulock_wait()
Comment 1•3 years ago
Set release status flags based on info from the regressing bug 1670885
Comment 2•3 years ago
Hey Jeff, to properly triage this bug we need some more information:
- Does this reproduce easily?
- On which websites?
- On which types of workloads? Our understanding is that it's affecting animations; is that right?
Thanks!
Comment 3•3 years ago
I don't know how easy it is to get into this state; I don't think I'm seeing it now. It didn't seem to be website-specific, and it affected everything.
Comment 4•3 years ago
Setting 103 to Won't Fix.
Is there anything else that could be done to investigate, or are there any perf metrics/telemetry around this?
Comment 5•3 years ago
Is this something that's happening only on 10.15? If it isn't affecting macOS 11+ on x86-64, then we could keep using the old spinlocks on 10.15.

Also, did you measure a significant performance difference? The reason I'm asking is that with the old locks you wouldn't see a contended lock actually being contended: the implementation would keep yielding to another thread if the lock was taken, which could land on another thread that also didn't own the lock and would yield in turn, and so on. So the time spent waiting for the lock to be released would be spread over a bunch of threads continuously waking up and yielding. That is, it's possible that we already had contention here, but it wasn't visible.
Comment 6•3 years ago
(In reply to Donal Meehan [:dmeehan] from comment #4)
> Setting 103 to Won't Fix.
> Is there anything else that could be done to investigate, or are there any perf metrics/telemetry around this?
I'm setting the performance priority for this bug using Jeff's answer above and the performance calculator, but otherwise leaving this up to Gabriele to decide.
Comment 7•3 years ago
Set release status flags based on info from the regressing bug 1670885
Comment 8•3 years ago
The severity field is not set for this bug.
:glandium, could you have a look please?
For more information, please visit auto_nag documentation.
Comment 9•3 years ago
Maybe Gab knows more (currently on PTO).
Comment 10•3 years ago
I've left some info in comment 5; I'd be happy to look at the profile if there's a reproducible test case. In general, if we're spending too much time in memory allocation locks, we might have to further profile and optimize our memory allocator (or use a new one, but that's a tall order in terms of complexity).
Comment 11•2 years ago
I'm going to assume the recent-ish improvements around locking made things better here. If not, please reopen with a new profile or file a new bug.