Bug 1073499 (Open, UNCONFIRMED)
Opened 10 years ago · Updated 8 months ago
Memory is not quickly released after repeatedly running a memory-intensive JS benchmark
Component: Core :: JavaScript: GC (defect)
Reporter: inscription; Unassigned; NeedInfo
Blocks 1 open bug
Attachments: 1 file (910 bytes, text/html)
User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35.0) Gecko/20100101 Firefox/35.0
Build ID: 20140925030203
Steps to reproduce:
I've just read this blog post about the new GGC in Firefox: https://hacks.mozilla.org/2014/09/generational-garbage-collection-in-firefox/, and when I tried the benchmarking script I noticed that the memory used is not released.
To reproduce:
Firefox Nightly, Windows 7, e10s disabled.
Run the script: http://jsbin.com/yipihi/1/edit
Actual results:
After each run of the script, more than 300 MB of memory is taken by Firefox, and this memory is not released after the script finishes.
Expected results:
After the execution, Firefox should release the memory.
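The actual jsbin script is not reproduced in this bug, but based on the blog post's description it was an allocation-heavy microbenchmark whose results stay reachable from a global. A minimal sketch of that pattern follows; the names (`H`, `allocate`), object shapes, and counts are illustrative assumptions, not the actual script, and the real benchmark allocated on the order of 300 MB:

```javascript
// Illustrative sketch only -- not the actual jsbin script.
// The key property: H is a global, so every object pushed into it
// remains reachable after the script finishes running.
var H = [];

function allocate(count) {
  // Allocate many small objects and keep them alive via the global H.
  for (var i = 0; i < count; i++) {
    H.push({ x: i, y: i * 2, s: "item" + i });
  }
  return H.length;
}

console.log(allocate(100000));
```

Because `H` is a property of the global object, none of these objects can be collected until the global itself dies (e.g. the page is closed), which is the behavior the rest of this bug discusses.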
Updated•10 years ago (Reporter)
Component: Untriaged → JavaScript: GC
Product: Firefox → Core
Updated•10 years ago (Reporter)
Comment 1•10 years ago
I don't see this on win8.1 Nightly 35.0a1 (2014-09-25)
I run the jsbin and memory goes up. I then force a GC by going to about:memory and clicking the minimize memory button. The memory then drops back down.
If you don't force the GC, then you need to wait some amount of time to allow a GC to occur normally.
Comment 2•10 years ago (Reporter)
I've finally seen that the memory is eventually released. The problem is that it takes way too long (more than 2 minutes) to occur. Basically, if you run this script 3-4 times in a row (figures depending on your memory) the browser crashes...
IMHO, in a context of heavy native games being ported to the browser with Emscripten, we could face serious trouble if the garbage collector is too lazy.
Updated•10 years ago
Flags: needinfo?(terrence)
Comment 3•10 years ago
I've been told we should have memory pressure events on desktop platforms, so if this test case can cause an OOM then it's likely a bug.
Comment 4•10 years ago
I ran the jsbin repeatedly. While doing this I definitely saw memory dropping within a few seconds once usage was over 1.5 GB. If I ran the jsbin quickly enough I could get it up to 3+ GB, at which point the browser was basically hung. I assume it was constantly GC'ing in the background or something.
This seems different from what the reporter saw. I'm on win8.1 vs win7. What else could be different?
Inscription, if you go to About Firefox, what date do you have for your Nightly?
Flags: needinfo?(inscription)
Updated•10 years ago
Blocks: GC.stability
Comment 5•10 years ago (Reporter)
@bkelly : 2014-09-26 (today)
I've no idea what else could be different...
Do you have e10s enabled? (I don't)
How much memory do you have? (12 GB here)
5 minutes after running the script several times, the memory dropped from 2.9GB to 2.1GB, and a manual GC released memory down to 980MB (in a few seconds, which proves that the GC was not running before).
Updated•10 years ago (Reporter)
Flags: needinfo?(inscription)
Updated•10 years ago (Reporter)
Flags: needinfo?(bkelly)
Comment 6•10 years ago
I think I have less memory free than you do. I only have 6GB free before adding in Firefox. So I probably am getting memory pressure events sooner.
Flags: needinfo?(bkelly)
Comment 7•10 years ago
I've been trying to reproduce this and have observed two issues, although I'm not sure either is exactly the problem initially reported:
1) If you load this page and run the script multiple times, Firefox's memory usage increases dramatically (up to ~3.5GB) and then Firefox hangs completely. Looking in the debugger shows that it's inside NtAllocateVirtualMemory while trying to expand a hash table while building the cycle collector graph. My guess is that we are exhausting virtual address space, although I don't know why this would cause that function to never return.
2) Aside from that, every time the script runs we allocate about 300MB of objects. This gets released on the order of 30 seconds later. Looking at about:memory after running this a couple of times shows an active window and possibly several detached windows that each contain 300MB of objects. I observed that the detached windows persist after triggering GC manually but not after minimizing memory usage.
Slightly more strangely, the memory use reported by task manager goes up by another ~300MB 15 seconds after initially running the script, then returns to baseline.
Comment 8•10 years ago
I refactored the test script so that the main code was inside the body of a function, with H a variable rather than a global property (this is the array containing 300MB of objects). That fixes the problem - you can run the script as many times as you like and memory usage goes up but then comes back down.
So I think the problem is that the global is staying alive and that is keeping the 300MB of objects the script creates alive as well. This seems to need a complete GC/CC/GC cycle to collect everything which is why I guess it's taking 30s to die by itself.
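The refactor described above can be sketched as follows. The function and variable names are assumed for illustration; the point is only that `H` becomes a local binding rather than a global property, so nothing keeps the array alive once the call returns:

```javascript
// Sketch of the refactor (assumed shape): H is a local variable
// instead of a property of the global object, so the whole array
// becomes garbage as soon as runBenchmark() returns.
function runBenchmark(count) {
  var H = [];
  for (var i = 0; i < count; i++) {
    H.push({ x: i, y: i * 2 });
  }
  return H.length;  // only the length escapes; the objects do not
}

console.log(runBenchmark(100000));
```

With this shape, repeated runs let each batch of objects die as plain garbage, without needing the global (and the GC/CC/GC dance it requires) to be torn down first.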
Andrew, how does CC get scheduled - is it timer based or are there memory triggers too? Also, any idea about the strange extra increase in memory at the end - are we putting /all/ JS objects into the CC graph?
Flags: needinfo?(continuation)
Comment 9•10 years ago
(In reply to Jon Coppeard (:jonco) from comment #8)
> So I think the problem is that the global is staying alive and that is
> keeping the 300MB of objects the script creates alive as well. This seems
> to need a complete GC/CC/GC cycle to collect everything which is why I guess
> it's taking 30s to die by itself.
If you are repeatedly running the test by reloading the page, the problem may be that running the GC suppresses the CC, so stuff can be held alive. We don't have any kind of fallback like "hey, I see a huge amount of memory, just stop running any JS until I can deal with it."
For the specific case of asm.js, there is (I think) some code to immediately dump the giant array of memory when the page is reloaded, to avoid exactly this problem.
> Andrew, how does CC get scheduled - is it timer based or are there memory
> triggers too?
There is a timer, and also the GC will schedule a CC most of the time when it finishes. But, as I said, if we just start immediately running a GC again then the CC doesn't get a chance to run.
> Also, any idea about the strange extra increase in memory at
> the end - are we putting /all/ JS objects into the CC graph?
The lower bound on the size of the CC graph is the number of garbage objects, so if you have a bunch of JS garbage to collect, it will all get added to the graph. Ideally, there's some code that merges all objects in a single zone into a single node in the CC graph, but there's code to keep it from running too often, so if you are CCing a bunch it may not trigger.
Flags: needinfo?(continuation)
Updated•10 years ago
Summary: GGC memory leak → Memory is not quickly released after repeatedly running a memory-intensive JS benchmark
Comment 10•8 years ago
Hopefully, bug 1296484 will have fixed this by going around the timeout. Hard to tell though.
Flags: needinfo?(terrence)
Comment 11•6 years ago
It's been a while since this was last looked at; does anyone know if it's still a problem?
Flags: needinfo?(terrence.d.cole)
Flags: needinfo?(jcoppeard)
Updated•2 years ago
Severity: normal → S3
Comment 12•2 years ago
Clearing a needinfo that is pending on an inactive user.
Inactive users most likely will not respond; if the missing information is essential and cannot be collected another way, the bug should perhaps be closed as INCOMPLETE.
For more information, please visit the auto_nag documentation.
Flags: needinfo?(terrence.d.cole)
Comment 13•2 years ago (Reporter)
I just got an email about this old bug of mine because of the “Release mgmt bot”, so I tried it again (on Linux this time). The behavior is still roughly the same, but at least now only the current tab crashes instead of the whole browser. It's not a critical bug, but that behavior is still a bit fishy, especially since Chromium handles it fairly well.
Comment 14•8 months ago