Closed
Bug 1351790
Opened 6 years ago
Closed 5 years ago
Add hook to allow browser to interrupt GC
Categories
(Core :: JavaScript: GC, enhancement, P1)
RESOLVED
WONTFIX
Performance Impact: high
People
(Reporter: djvj, Assigned: sfink)
References
(Blocks 2 open bugs)
Details
(Keywords: perf, stale-bug, Whiteboard: [necko-active])
Attachments
(1 file)
Currently, we have a way to schedule incremental GCs with timeslices. This allows for prediction-based incrementality, but does not allow us to interrupt long-running operations once they have begun. Billm suggested that it would be useful for the browser, especially in interactivity-sensitive areas, to inform the garbage collector to cancel an operation and try again later because of urgent user-initiated work to be done.
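The requested hook could look roughly like the following sketch (hypothetical names, not the actual SpiderMonkey API): the incremental slice loop polls a browser-supplied callback alongside its existing budget checks, and yields early when the embedder reports urgent user-initiated work.

```cpp
#include <functional>

// Hypothetical sketch, not real SpiderMonkey code: an incremental collector
// that, in addition to its time budget, polls an embedder-supplied callback
// and abandons the current slice when the callback asks it to yield.
struct InterruptibleGC {
    std::function<bool()> interruptCallback;  // returns true => yield now
    int checkInterval = 1000;                 // poll every N objects marked

    // Returns the number of objects processed before yielding.
    int runSlice(int workItems) {
        int processed = 0;
        while (processed < workItems) {
            ++processed;  // stand-in for marking one object
            if (processed % checkInterval == 0 &&
                interruptCallback && interruptCallback()) {
                break;    // urgent browser work: abandon the slice, retry later
            }
        }
        return processed;
    }
};
```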
Updated•6 years ago
Whiteboard: [qf]
Updated•6 years ago
Whiteboard: [qf] → [qf:p1]
Updated•6 years ago
Whiteboard: [qf:p1]
Assignee
Updated•6 years ago
Whiteboard: [qf]
Reporter
Updated•6 years ago
Whiteboard: [qf] → [qf:p1]
Assignee
Comment 1•6 years ago
This looks like it's two different things. For chrome process GCs, we would want to register a callback that would poll for pending events or whatever. Then we could double-duty the budget checks -- right now, we check every 1000 objects marked. Depending on the overhead of polling, we might check every N>1000 objects. For content process GCs, we already have an IPC mechanism where the chrome process ships all of the input events over to the content process. If that arrives on another thread, it could set an atomic, which I believe is what was intended in comment 0. billm, we could implement this, but we're wondering how likely you think it is to be used and useful?
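A minimal sketch of the content-process idea above (hypothetical names, not actual Gecko/SpiderMonkey code): the IPC/input thread sets an atomic flag when an urgent event arrives, and the marking loop reuses its every-1000-objects budget check as the poll site.

```cpp
#include <atomic>

// Hypothetical sketch: the thread receiving input events over IPC sets an
// atomic, and the GC's existing per-1000-objects budget check doubles as
// the place where that atomic is polled.
std::atomic<bool> gInterruptRequested{false};

// Called on the IPC/input thread when an urgent event arrives.
void RequestGCInterrupt() {
    gInterruptRequested.store(true, std::memory_order_relaxed);
}

// Marking loop: returns the number of objects marked before an interrupt.
int MarkObjects(int total) {
    int marked = 0;
    while (marked < total) {
        ++marked;  // stand-in for marking one object
        if (marked % 1000 == 0 &&
            gInterruptRequested.exchange(false, std::memory_order_relaxed)) {
            break;  // interrupt observed (and cleared): yield to the browser
        }
    }
    return marked;
}
```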
Flags: needinfo?(wmccloskey)
Comment 2•6 years ago
Well, it would give us a lot more flexibility when scheduling GCs. The content process part seems relatively easy to implement, so I think we should at least do that one. I've also wondered whether we should use a separate thread to decide when GCs should stop. It could just call nanosleep or something and then set a flag asking the GC to stop. We might have to boost the priority of the timer thread, but it's probably a lot faster than calling PRMJ_Now() a lot.
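The timer-thread idea could be sketched as follows (hypothetical names; a plain std::thread stands in for the proposed boosted-priority timer thread): the helper sleeps for the slice budget and then sets a flag, so the GC's inner loop pays only for a relaxed atomic load instead of repeated PRMJ_Now() calls.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical sketch: rather than the GC calling PRMJ_Now() at every budget
// check, a helper thread sleeps for the slice budget and then sets a flag
// asking the GC to stop.
std::atomic<bool> gStopRequested{false};

std::thread StartSliceTimer(std::chrono::milliseconds budget) {
    gStopRequested.store(false, std::memory_order_relaxed);
    return std::thread([budget] {
        std::this_thread::sleep_for(budget);
        gStopRequested.store(true, std::memory_order_relaxed);
    });
}

// Marking loop: checks the flag every 1000 objects instead of reading a clock.
int MarkUntilStopped(int total) {
    int marked = 0;
    while (marked < total) {
        ++marked;  // stand-in for marking one object
        if (marked % 1000 == 0 &&
            gStopRequested.load(std::memory_order_relaxed)) {
            break;
        }
    }
    return marked;
}
```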
Flags: needinfo?(wmccloskey)
Comment 3•6 years ago
Based on an IRC conversation with Kannan: he confirmed that we'll keep this as P1 for now so it's on the QF radar, but we may move it to P2 at a later date.
Assignee
Comment 4•6 years ago
If it helps, this is the patch I was playing around with to implement the GC side of this.
Assignee
Updated•6 years ago
Assignee: nobody → sphink
Status: NEW → ASSIGNED
Updated•6 years ago
Whiteboard: [qf:p1] → [qf:p2][necko-active]
Comment 5•6 years ago
Bulk priority update: https://bugzilla.mozilla.org/show_bug.cgi?id=1399258
Priority: -- → P1
Keywords: stale-bug
Updated•5 years ago
Whiteboard: [qf:p2][necko-active] → [qf:p1][necko-active]
Updated•5 years ago
Whiteboard: [qf:p1][necko-active] → [qf:i60][qf:p1][necko-active]
Updated•5 years ago
Whiteboard: [qf:i60][qf:p1][necko-active] → [qf:f60][qf:p1][necko-active]
Updated•5 years ago
Whiteboard: [qf:f60][qf:p1][necko-active] → [qf:f61][qf:p1][necko-active]
Reporter
Comment 7•5 years ago
Olli noted that the way GC and CC scheduling works has already changed since this bug was filed. Even when the bug was filed, there was some question of whether it qualified as a P1 based on the expected wins. Given that uncertainty, and the changes in GC/CC scheduling since then, this is likely a P3, not a P1.
Updated•5 years ago
Whiteboard: [qf:f61][qf:p1][necko-active] → [qf:f64][qf:p1][necko-active]
Updated•5 years ago
Whiteboard: [qf:f64][qf:p1][necko-active] → [qf:p1:f64][necko-active]
Comment 8•5 years ago
Given comment 7, closing. Feel free to reopen or just file a new one.
Status: ASSIGNED → RESOLVED
Closed: 5 years ago
Resolution: --- → WONTFIX
Updated•1 year ago
Performance Impact: --- → P1
Whiteboard: [qf:p1:f64][necko-active] → [necko-active]