Atom pinning state is duplicated in AtomStateEntry and JSAtom flag
Categories
(Core :: JavaScript Engine, task, P3)
People
(Reporter: jonco, Assigned: jonco)
Attachments
(1 file, 1 obsolete file)
14.51 KB, patch — RyanVM: approval-mozilla-esr68+
The pinning state in AtomStateEntry is redundant. We should be able to remove it and store a plain JSAtom* in the atoms table.
The entry class does perform a read barrier, but it fires in only one place; we can execute the barrier explicitly there instead.
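For context, a minimal sketch of the kind of entry being discussed: the pinned state is duplicated into the table entry by tagging the low bit of the (aligned) atom pointer. The names here are illustrative stand-ins, not the actual SpiderMonkey types.

```cpp
#include <cassert>
#include <cstdint>

// Illustrative stand-in for JSAtom; GC cells are well-aligned, so the
// low pointer bit is free to use as a tag.
struct alignas(8) FakeAtom {
    bool pinnedFlag = false;
};

// Sketch of a table entry that packs a "pinned" bit into the low bit
// of the pointer, so the flag can be tested without dereferencing.
class EntrySketch {
    uintptr_t bits_;

  public:
    EntrySketch(FakeAtom* atom, bool pinned)
        : bits_(reinterpret_cast<uintptr_t>(atom) | uintptr_t(pinned)) {}

    bool isPinned() const { return bits_ & 1; }
    FakeAtom* atom() const {
        return reinterpret_cast<FakeAtom*>(bits_ & ~uintptr_t(1));
    }
};
```

Removing the duplicate bit would shrink the entry to a bare pointer, at the cost of having to consult the atom itself for the pinned state.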
Assignee — Updated 6 years ago
Comment 1 (Assignee) • 6 years ago
(In reply to Jon Coppeard (:jonco) from comment #0)
I think I'll go with ReadBarriered<JSAtom*> instead of the raw pointer for documentation purposes.
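A minimal sketch of what a read-barriered wrapper adds over a raw pointer (the real js::ReadBarriered is more involved; the barrier body here is just a counter for illustration):

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for the GC read barrier; the real barrier marks the target
// during incremental GC so it stays alive after being read.
static size_t gReadBarrierHits = 0;

template <typename T>
class ReadBarrieredSketch {
    T* ptr_;

  public:
    explicit ReadBarrieredSketch(T* p) : ptr_(p) {}

    // Every read goes through the barrier. Using the wrapper instead of
    // a raw T* also documents, at the type level, that reads of this
    // field are barrier-significant.
    T* get() const {
        ++gReadBarrierHits;
        return ptr_;
    }
};
```

This is the "documentation purposes" point above: even when the barrier is executed explicitly in only one place, the wrapper type records that intent for future readers.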
Comment 2 (Assignee) • 6 years ago
Comment 4 (bugherder) • 6 years ago
Comment 5 (Assignee) • 6 years ago
(In reply to Jon Coppeard (:jonco) from comment #0)
In the description I complained that:
The pinning state in AtomStateEntry is redundant
But then I found my own comment as to why we have this redundant state in bug 1445196:
This keeps the state in the atoms table entry as well so it doesn't require touching all the atoms during marking, but I'm not sure how important this is. I'm a bit wary of doing anything that could affect marking speed because the atoms table can be pretty large.
Looking at telemetry, there was a jump from ~2% to ~6% in the GC_SLOW_PHASE telemetry for the phase where this happens (MARK_RUNTIME_DATA) when this change landed. Also there was an unexplained jump in GC_MARK_ROOTS_MS a few days after this, although the way that histogram is set up means the data is not accurate.
I think I'm going to back this out and add a comment as to why we duplicate this state.
There's also a crash associated with this change but I think that's just because we're touching more memory so it's more likely to crash on heap corruption.
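The marking-speed concern above can be illustrated with a sketch (illustrative types, not the real marking code): with the pinned bit duplicated in the table entry, marking pinned atoms reads only the table's own words; without it, every atom must be dereferenced just to test the flag, touching far more memory when the atoms table is large.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct alignas(8) FakeAtom2 {
    bool pinnedFlag = false;
    bool marked = false;
};

// Entry with the pinned bit duplicated in the low pointer bit.
struct TaggedEntry {
    uintptr_t bits;
    bool isPinned() const { return bits & 1; }
    FakeAtom2* atom() const {
        return reinterpret_cast<FakeAtom2*>(bits & ~uintptr_t(1));
    }
};

// With the bit in the entry: unpinned atoms are never dereferenced,
// so scanning a mostly-unpinned table only touches the table itself.
void markPinnedViaEntries(std::vector<TaggedEntry>& table) {
    for (auto& e : table)
        if (e.isPinned())
            e.atom()->marked = true;
}

// Without the duplicate bit: every atom is dereferenced just to test
// the flag, touching every atom's memory during root marking.
void markPinnedViaAtoms(std::vector<TaggedEntry>& table) {
    for (auto& e : table)
        if (e.atom()->pinnedFlag)
            e.atom()->marked = true;
}
```

Both loops mark the same atoms; the difference is purely in how much memory the scan touches, which is what showed up in the MARK_RUNTIME_DATA telemetry.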
Comment 6 (Assignee) • 6 years ago
Updated 6 years ago
Comment 7 • 6 years ago
This landed in 68, and the backout in 69. Should we do something about the perf hit from this in 68 ESR?
Comment 8 (Assignee) • 6 years ago
(In reply to Julien Cristau [:jcristau] from comment #7)
Might as well. I'll make a patch.
Comment 9 (Assignee) • 6 years ago
ESR Uplift Approval Request
- If this is not a sec:{high,crit} bug, please state case for ESR consideration: Patch to back this change out of esr68 due to a perf regression.
- User impact if declined: Perf regression, although probably fairly minor.
- Fix Landed on Version: 69
- Risk to taking this patch: Low
- Why is the change risky/not risky? (and alternatives if risky): The patch reverts to the previous state. The backout has been on central for a week with no issues.
- String or UUID changes made by this patch: None
Comment 10 • 6 years ago
Comment 11 (uplift) • 6 years ago