SessionWorker uses massive amounts of memory
Categories
(Firefox :: Session Restore, defect, P2)
People
(Reporter: kael, Unassigned)
References
(Blocks 1 open bug)
Attachments
(1 file, 1 obsolete file)
335.86 KB, application/x-gzip
Session restore's worker uses a large amount of memory the whole time the browser is open (probably because my session is relatively big, which is ANOTHER problem), and the usage goes up over time and seemingly never goes down (unless I force GC+CC in about:memory).
I've attached an anonymized memory report where you can see it's using hundreds of MB of memory. After GC+CC it drops by 50-100 MB but is still in excess of 150 MB.
In comparison, my sessionstore.js is 4.3 MB (because for some reason session restore keeps something like a 100-entry history for every tab, and those history objects are huge), so it makes sense for the SessionWorker to use a hefty amount of RAM, but a 50-100x ratio is not especially sensible.
Comment 1•6 years ago
Something that may help here is to clear out the variables explicitly rather than relying on the GC to do it implicitly.
GC + CC still doesn't work properly in workers, so it's worth trying.
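For reference, a minimal sketch of that pattern, with hypothetical names (this is not the actual SessionWorker code):

    // Hypothetical sketch of the "clear it explicitly" pattern; names are
    // illustrative, not the real SessionWorker implementation.
    let pendingState = null;

    self.onmessage = function (msg) {
      pendingState = msg.data.stateString;   // large serialized session state
      try {
        writeSessionFile(pendingState);      // hypothetical write helper
      } finally {
        // Drop the reference as soon as the write is done so the string does
        // not stay reachable until the next message overwrites it.
        pendingState = null;
      }
    };

The idea is just to shorten how long the big string stays reachable; whether the GC actually runs soon afterwards is a separate question.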
Comment 3•6 years ago
See also bug 925343, bug 941794.
Comment 5•6 years ago
There are two possibilities here: either this is leaking memory, or there is a problem with GC in workers (or both). We know the latter is true, but the fact that manually triggering GC (as per the description) doesn't fix things suggests there may be a leak too.
To check whether GC is being triggered you can capture a profile. GC doesn't happen immediately, but it should happen on workers when they go idle after they finish running. You'll have to make sure the profiler captures worker threads to see this.
To check for leaks you can capture CC logs and analyse them as described at the link below (maybe there's a better way of doing this nowadays):
https://developer.mozilla.org/en-US/docs/Mozilla/Performance/GC_and_CC_logs
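The full CC-log workflow is documented at that link. As a quicker sanity check (roughly what the about:memory buttons mentioned in comment 0 do), GC and CC can also be forced by hand from the Browser Console; a sketch, assuming privileged chrome code where Cu and Services are in scope:

    // Privileged chrome code (e.g. the Browser Console).
    Cu.forceGC();   // force a garbage collection in the main-thread runtime
    Cu.forceCC();   // force a cycle collection
    // Broadcast a memory-pressure notification asking observers (workers
    // included, as far as I know) to minimize memory usage:
    Services.obs.notifyObservers(null, "memory-pressure", "heap-minimize");

If the worker's numbers in about:memory drop after this but never drop on their own, that points at scheduling; if they don't drop at all, that points at a leak.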
Comment 6•6 years ago
Jean, I'm moving the discussion from Phabricator over here, if you don't mind!
@jorendorff said in a follow-up Phabricator comment:
Two related goals are mentioned in the commit message:
"Trigger GC as soon as possible", which sounds like it means: cause a new GC cycle to happen.
@jonco's comment addresses this goal. Setting variables to null definitely has no impact on GC timing. If you know there is worthwhile garbage to collect, and you want to tell Gecko it's a good time to do GC, there may be a way to do that; I don't know.
"Ensure these objects are collected 'at the next GC pass', that is, during the next GC cycle."
But each GC cycle collects local variables that already went out of scope. There's no use nulling them out if you're about to throw or return anyway.
...Except if the variable is captured, i.e. used by a nested function or method, or potentially used by eval. Then the garbage collector keeps the variable alive as long as it might yet be used. But that isn't happening here. backups is captured (by a nested function), but that nested function is itself garbage by the time the method returns, so it won't keep backups alive. And of course backups isn't the variable you're worried about anyway.
Hope this helps!
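To make the capture point concrete, a tiny illustration (names are made up; this is not the worker code):

    // Illustrative only.
    function save(state) {
      let serialized = JSON.stringify(state);   // large string
      writeSessionFile(serialized);             // hypothetical helper
      // `serialized` goes out of scope on return; nulling it here would not
      // make it collectable any sooner.
    }

    function saveWithRetry(state) {
      let serialized = JSON.stringify(state);
      writeSessionFile(serialized);
      // The returned closure captures `serialized`, so the GC has to keep the
      // large string alive for as long as the closure itself stays reachable.
      return () => writeSessionFile(serialized);
    }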
So what would be something we can do here, then? Is there a way to explicitly invoke CC for a ChromeWorker script, or is that something to consider implementing?
That would basically be me answering 'yes' to Jason's statement:
If you know there is worthwhile garbage to collect, and you want to tell Gecko it's a good time to do GC, there may be a way to do that; I don't know.
Yes, I'd like to tell Gecko it's a good time to do GC. Can I do that?
Comment 7•6 years ago
(In reply to Mike de Boer [:mikedeboer] from comment #6)
Yes, I'd like to tell Gecko it's a good time to do GC. Can I do that?
This is not really the right way to address this kind of problem. It has a tendency to change a problem where the browser takes up too much memory into a performance problem where the browser is spending too much time collecting.
What should happen is that GC or CC is scheduled automatically and frees up this memory. In this case it seems that this is not happening. We should investigate why that is and fix it.
To start with, do we have a test case that reproduces this problem?