Closed Bug 630932 (opened 13 years ago, closed 13 years ago)

High memory usage by the JS Engine even with no tabs open for > 12 hours

Categories

(Core :: JavaScript Engine, defect)

Platform: x86_64 macOS
Priority: Not set
Severity: normal

Tracking


Status: RESOLVED FIXED
Tracking Status: blocking2.0 --- final+

People

(Reporter: jhatax, Assigned: gal)

References

Details

(Keywords: memory-footprint, Whiteboard: hardblocker [has patch][needs 614347], fixed-in-tracemonkey)

Attachments

(7 files, 7 obsolete files)

User-Agent:       Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b11pre) Gecko/20110201 Firefox/4.0b11pre
Build Identifier: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b11pre) Gecko/20110201 Firefox/4.0b11pre

Firefox memory usage was steadily climbing during a browsing session, but memory wasn't being released upon closing tabs. Approximately 12 hours ago, I closed all tabs and left things idle; this morning, I looked at about:memory to find the following heavy users of system memory:

Memory mapped: 360,148,992
Memory in use: 215,426,416

Description  Value
malloc/allocated 215,430,032
malloc/mapped  360,148,992

js/gc-heap  104,857,600
js/string-data  14,990,562
js/mjit-code  5,101,906

With no pages open, this memory should be surrendered back to the operating system.

Reproducible: Always

Steps to Reproduce:
1. Start Firefox. Open a number of tabs and windows
2. Close all tabs and windows
3. Wait a couple hours (I waited 12) and look at about:memory
Actual Results:  
The JS engine isn't releasing memory that it should no longer be using.

Expected Results:  
Memory usage should be about as low as when Firefox first started, plus some new artifacts in the page cache.
Status: UNCONFIRMED → NEW
Ever confirmed: true
What confuses me here is the mjit-code bit, since that's off for chrome and no tabs are open....
Don't rule out an accounting error; worth someone taking a close look at the instrumentation I added there.
Is there a flag I can turn on using about:config (or another mechanism) that would output the actual memory usage, Mike?
Looks like the chunk expiration stuff? But if a chunk is partially used, we can't free it anyway ...
With no tabs open for the duration of quiesce, should any chunks be in use at all?
We put stuff into the back/forward cache so it will still be alive. One object is enough to keep a 2MB chunk alive.
> We put stuff into the back/forward cache so it will still be alive.

Those tabs are closed.  Those bfcaches should have been gced.
Alright, let's measure this. No way around it.
(In reply to comment #2)
> Don't rule out an accounting error; worth someone taking a close look at the
> instrumentation I added there.

I think you don't count the size of IC code, but that's pretty small.  Nor fragmentation in the executable allocator, which is also hopefully relatively small (I did my best with bug 616310).  Other than that, looks ok to me.
I did a similar experiment:  I opened a bunch of tabs (techcrunch.com, nytimes.com, theage.com.au, www.aec.gov.au, some others), then closed everything except about:memory, then waited two hours.

Memory mapped peaked at 422.5 MB, then only dropped to 384.8 MB after closing everything.  Memory used went from 397.8 MB to 91.4 MB, however.  The main individual measurement at the end was the js/gc-heap, which was 47.2 MB;  all the other individual measurements were less than 2 MB.

I did all this under Massif, but it didn't take a detailed snapshot after I closed all the tabs :(  Tomorrow I'll try to convince it to take a detailed snapshot that will help.
I'd expect nontrivial GC heap to be around because we have JS components and so forth that stick around even if all the windows are closed.  But there really shouldn't be mjit code around at that point, unless we're mjitting those...
The tracer's chrome/content distinction captured components in the content flag, just due to the way it relied on chrome: URLs.  MJIT might be the same way.
I had a tiny bit of mjit code at the end, around 0.3 MB.

> I'd expect nontrivial GC heap to be around because we have JS components and so
> forth that stick around even if all the windows are closed.

Really?  That is surprising.  People do this experiment all the time and they always get furious when Firefox is holding onto memory but displaying no tabs.  Can we change it?
The JS components are part of the browser, not part of the content.  That said, we don't release GC heap (I don't know if we even try), which we could improve.  We'd need a compacting collector to really win there, but in due time...
Right; the JS components are things like services.  The update service doesn't stop running because you close all your tabs.  ;)
This will be clearer when we have content processes, by the way.
(In reply to comment #15)
> Right; the JS components are things like services.  The update service doesn't
> stop running because you close all your tabs.  ;)

Well, yes, but when I start up the GC heap is like 6 MB, and after this experiment it's 40+ MB.  That's what I don't understand.
(In reply to comment #17)
> Well, yes, but when I start up the GC heap is like 6 MB, and after this
> experiment it's 40+ MB.  That's what I don't understand.

Imagine my surprise when the GC heap was at 104,857,600 (100 MBytes). If Eclipse and Minefield are open simultaneously on my Mac, I run out of 4GBytes of RAM very quickly; the resulting degradation in performance due to swapping is reason enough to request that this memory creep be investigated.

Another data point: I never seem to experience this kind of memory bloat with Minefield on Vista. With all my Minefield profiles synced, I often pick up my browsing session on Vista exactly where I left it on my Mac. Could this be Mac-specific?
Attached file Massif profile
This time I had more luck with Massif.  I did a similar run.  about:memory peaked at 418/399 mapped/used with all the tabs open;  after closing them all and waiting maybe 45 minutes it had flattened out at 396/87.

Snapshot 52 in the profile is the interesting one, right near the end.  Here's a sample of the high-level results:

-> 05.44% (56,213,504B) 0x649D417: NewOrRecycledNode(JSTreeContext*) (jsparse.cpp:670)
-> 05.08% (52,430,848B) 0x644FE6F: js_NewFunction(JSContext*, JSObject*, int (*)(JSContext*, unsigned int, js::Value*), unsigned int, unsigned int, JSObject*, JSAtom*) (jsgcinlines.h:127)
-> 03.31% (34,230,272B) 0x64B8D3E: js::PropertyTree::newShape(JSContext*, bool) (jspropertytree.cpp:97)
-> 02.61% (26,959,872B) 0x62B3878: GCGraphBuilder::NoteScriptChild(unsigned int, void*) (mozalloc.h:229)
-> 01.65% (17,073,120B) 0x6561A5E: js::mjit::Compiler::finishThisUp(js::mjit::JITScript**) (jsutil.h:214)
-> 01.57% (16,243,036B) 0x625A1E2: ChangeTable (pldhash.c:563)
-> 01.50% (15,478,784B) 0xA399191: g_mapped_file_new (in /lib/libglib-2.0.so.0.2400.1)
-> 01.22% (12,582,912B) 0x6454FBA: js::NewNativeClassInstance(JSContext*, js::Class*, JSObject*, JSObject*) (jsgcinlines.h:127)
-> 01.12% (11,577,780B) 0x64D647C: JSScript::NewScript(JSContext*, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned short, unsigned short, JSVersion) (jsutil.h:210)
-> 00.95% (9,846,784B) 0x64A029B: js::Parser::markFunArgs(JSFunctionBox*) (jsutil.h:210)
-> 00.84% (8,675,404B) 0x6411B3D: js::Vector<unsigned short, 32ul, js::ContextAllocPolicy>::growStorageBy(unsigned long) (jsutil.h:218)
-> 00.84% (8,650,752B) 0x6278577: nsRecyclingAllocator::Malloc(unsigned long, int) (nsRecyclingAllocator.cpp:170)
-> 00.82% (8,420,812B) 0x62C023B: nsStringBuffer::Alloc(unsigned long) (nsSubstring.cpp:209)
-> 00.81% (8,388,608B) 0x6423FF1: js::StackSpace::init() (jscntxt.cpp:164)
-> 00.75% (7,753,928B) 0x54E5378: nsTArray_base<nsTArrayDefaultAllocator>::EnsureCapacity(unsigned int, unsigned int) (nsTArray.h:88)
-> 00.67% (6,887,708B) 0x60F6B32: HashMgr::add_word(char const*, int, int, unsigned short*, int, char const*, bool) (hashmgr.cpp:176)
-> 00.66% (6,848,512B) 0x641D2C7: js_alloc_temp_entry(void*, void const*) (jsatom.cpp:721)
-> 00.50% (5,117,196B) 0x625A4B1: PL_DHashTableInit (pldhash.c:268)

It looks a lot like... a profile with all the tabs still open.  A couple of things that caught my eye:

- NewOrRecycledNode() -- WTF?  Those allocations are from the cx->tempPool JSArenaPool, which is supposedly only live during JS compilation.  56 MB is down from 74 MB at the peak, i.e. not down by that much.  I'm *really* suspicious about JSArenaPool, whether it's correct and/or being used properly.  I hate custom allocators, and this is a really complicated one, the marking stuff in particular.

- finishThisUp() is part of the js/mjit-code tally.  Massif says 17 MB.  about:memory said js/mjit-code was 0.3 MB just before I quit.  WTF?  Fragmentation?  Something else?

It's all rather puzzling, not to mention disturbing.
For the JSParseNodes allocated by NewOrRecycledNode() I have some evidence that JSArenaPool is freeing the memory as you'd expect, but jemalloc is holding onto the mmap'd pages -- if I run Massif with --pages-as-heap=no, so it only measures malloc/new/new[] allocations, I don't see NewOrRecycledNode() except relatively early on in the run, as you'd expect for something that's only live during JS compilation.

BTW, about:memory is rather misleading -- it only covers malloc/free memory, not direct calls to mmap().  At least, the four malloc/* numbers come from jemalloc, and so only pertain to malloc/free.  But js/mjit-code includes a combination of malloc'd memory and mmap'd memory.
(In reply to comment #20)
> jemalloc is holding onto the mmap'd pages

I also instrumented the munmap calls in jemalloc.c, and when I close all the tabs not much memory gets munmapped -- maybe 20 MB, which seems on the low side.
(In reply to comment #21)
> > jemalloc is holding onto the mmap'd pages

That would also explain the discrepancy between Massif's high numbers for finishThisUp() and about:memory's tiny numbers for js/mjit-code.

Memory profiling is a PITA.
If you mallctl("arenas.purge", NULL, NULL, NULL, NULL) from somewhere (maybe just attach gdb and call it from there), does it help?
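(For illustration, a rough sketch of what such a purge could look like if called from C++ rather than gdb. This assumes a jemalloc build that actually exports mallctl(); a later comment notes the in-tree jemalloc may not. The helper name is made up for the example.)

    // Assumption: linking against a jemalloc that exposes mallctl().
    extern "C" int mallctl(const char *name, void *oldp, size_t *oldlenp,
                           void *newp, size_t newlen);

    static void PurgeJemallocArenas()
    {
        // "arenas.purge" asks jemalloc to madvise() away unused dirty pages in
        // all arenas; resident memory should drop if jemalloc is holding onto
        // pages that have already been freed by the application.
        mallctl("arenas.purge", NULL, NULL, NULL, 0);
    }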
(In reply to comment #19)
> 
> This time I had more luck with Massif.  I did a similar run.  about:memory
> peaked at 418/399 mapped/used with all the tabs open;  after closing them all
> and waiting maybe 45 minutes it had flattened out at 396/87.

Hmm, those "Memory mapped" and "Memory in use" numbers are coming straight from jemalloc.  The fact that the "mapped" number is so much higher than the "in use" number after closing all tabs and waiting a while is more evidence that jemalloc is holding onto lots of pages.

It also means that Massif's --pages-as-heap=yes results aren't useful for anything other than the peak, because jemalloc's tardiness in releasing pages causes Massif to blame innocent bits of the code for holding onto memory.  So comment 19 can be ignored.

So the next question is:  which figure ("mapped" or "in use") do we care about?  "in use" is probably going to affect performance more.  If we decide to focus on just "in use", there's still the question of why it's substantially higher after closing many tabs than it is at start-up, as in comment 0.
(In reply to comment #23)
> If you mallctl("arenas.purge", NULL, NULL, NULL, NULL) from somewhere (maybe
> just attach gdb and call it from there), does it help?

I didn't have time to try that today;  I'll try it on Monday (it's Friday night right now in Melbourne).
I redid a malloc-level profile and got some detailed info for the tail.  Here's the graph; you can see how it started low (around 32 MB), went up to 209, then dropped down to 68 once I closed everything:

    MB
209.1^                    ##                                                  
     |                    #                                                   
     |                    #        :                                          
     |                    # ::::::::::                                        
     |                    # : :::: :: :                                       
     |                    # : :::: :: :                                       
     |                    # : :::: :: :::::::                                 
     |                  @@# : :::: :: ::: :::                                 
     |                  @ # : :::: :: ::: ::::                                
     |                  @ # : :::: :: ::: ::::                                
     |                @@@ # : :::: :: ::: ::::                                
     |                @ @ # : :::: :: ::: ::::                                
     |               :@ @ # : :::: :: ::: ::::                                
     |               :@ @ # : :::: :: ::: :::::                               
     |      @@       :@ @ # : :::: :: ::: ::::::::::::::::::::@:::::::::::@:::
     |      @       ::@ @ # : :::: :: ::: :::::: : ::: :: :: :@: :: ::::::@:::
     |      @       ::@ @ # : :::: :: ::: :::::: : ::: :: :: :@: :: ::::::@:::
     |     :@ :@::::::@ @ # : :::: :: ::: :::::: : ::: :: :: :@: :: ::::::@:::
     | :@@::@ :@:: :::@ @ # : :::: :: ::: :::::: : ::: :: :: :@: :: ::::::@:::
     | :@ ::@ :@:: :::@ @ # : :::: :: ::: :::::: : ::: :: :: :@: :: ::::::@:::
   0 +----------------------------------------------------------------------->h
     0                                                                   1.047

I'll attach the detailed snapshots for the first and last detailed profiles (corresponding to the first and last '@' bars in the graph; don't you love ASCII art?)

Here are the locations responsible for > 1MB from the first profile:

-> 33.52% (10,985,172B) in 3324 places, all below massif's threshold (01.00%)
-> 08.61% (2,822,144B) 0x66BDD3E: js::PropertyTree::newShape(JSContext*, bool) (jspropertytree.cpp:97)
-> 06.74% (2,207,848B) 0x66DB47C: JSScript::NewScript(JSContext*, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned short, unsigned short, JSVersion) (jsutil.h:210)
-> 06.40% (2,097,152B) 0x6654E6F: js_NewFunction(JSContext*, JSObject*, int (*)(JSContext*, unsigned int, js::Value*), unsigned int, unsigned int, JSObject*, JSAtom*) (jsgcinlines.h:127)
-> 06.09% (1,996,824B) 0xB6ABF11: pcache1Alloc (sqlite3.c:33491)
-> 04.66% (1,528,648B) 0x64BF2CB: XPT_ArenaMalloc (xpt_arena.c:221)
-> 03.39% (1,111,544B) 0x645F1E2: ChangeTable (pldhash.c:563)
-> 03.20% (1,048,576B) 0x6656AA0: js::NewCallObject(JSContext*, js::Bindings*, JSObject&, JSObject*) (jsgcinlines.h:127)
-> 03.20% (1,048,576B) 0x6659FBA: js::NewNativeClassInstance(JSContext*, js::Class*, JSObject*, JSObject*) (jsgcinlines.h:127)
-> 03.20% (1,048,576B) 0x66EC0DE: js_NewStringCopyN(JSContext*, unsigned short const*, unsigned long) (jsgcinlines.h:127)
-> 03.18% (1,042,560B) 0x645F4B1: PL_DHashTableInit (pldhash.c:268)

And here they are from the last profile:

-> 25.34% (17,313,087B) in 4005 places, all below massif's threshold (01.00%)
-> 24.56% (16,777,216B) 0x6654E6F: js_NewFunction(JSContext*, JSObject*, int (*)(JSContext*, unsigned int, js::Value*), unsigned int, unsigned int, JSObject*, JSAtom*) (jsgcinlines.h:127)
-> 08.00% (5,468,160B) 0x66BDD3E: js::PropertyTree::newShape(JSContext*, bool) (jspropertytree.cpp:97)
-> 06.14% (4,195,328B) 0x6659FBA: js::NewNativeClassInstance(JSContext*, js::Class*, JSObject*, JSObject*) (jsgcinlines.h:127)
-> 04.60% (3,141,704B) 0xB6ABF11: pcache1Alloc (sqlite3.c:33491)
-> 03.53% (2,410,324B) 0x66DB47C: JSScript::NewScript(JSContext*, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned short, unsigned short, JSVersion) (jsutil.h:210)
-> 03.07% (2,097,152B) 0x66ED36E: js_NewGCString(JSContext*) (jsgcinlines.h:127)
-> 02.24% (1,528,648B) 0x64BF2CB: XPT_ArenaMalloc (xpt_arena.c:221)
-> 02.07% (1,417,216B) 0x68328C0: _cairo_scaled_glyph_lookup (cairo-scaled-font.c:2552)
-> 01.91% (1,303,160B) 0x645F1E2: ChangeTable (pldhash.c:563)
-> 01.53% (1,048,576B) 0x66EC0DE: js_NewStringCopyN(JSContext*, unsigned short const*, unsigned long) (jsgcinlines.h:127)
-> 01.53% (1,048,576B) 0x6623DE6: js::detail::HashTable<unsigned long const, js::HashSet<unsigned long, js::AtomHasher, js::SystemAllocPolicy>::SetOps, js::SystemAllocPolicy>::add(js::detail::HashTable<unsigned long const, js::HashSet<unsigned long, js::AtomHasher, js::SystemAllocPolicy>::SetOps, js::SystemAllocPolicy>::AddPtr&, unsigned long const&) (jsutil.h:210)
-> 01.53% (1,048,576B) 0x6656AA0: js::NewCallObject(JSContext*, js::Bindings*, JSObject&, JSObject*) (jsgcinlines.h:127)
-> 01.53% (1,048,576B) 0x66D0B92: js_CloneRegExpObject(JSContext*, JSObject*, JSObject*) (jsgcinlines.h:127)
-> 01.53% (1,048,576B) 0x660D5CE: JS_NewExternalString (jsgcinlines.h:127)
-> 01.53% (1,048,576B) 0x664607C: JSParseNode::getConstantValue(JSContext*, bool, js::Value*) (jsgcinlines.h:127)
-> 01.53% (1,048,576B) 0x67FFAA1: T.2401 (jsgcinlines.h:127)
-> 01.53% (1,047,368B) 0x645F4B1: PL_DHashTableInit (pldhash.c:268)

The obvious thing is all those JSFunctions hanging around (js_NewFunction).  Looks like they're not being GC'd?  Looking at the detailed profile of the last snapshot, more than half of them look like xpc/DOM quickstubs, whatever they are.
Attached file first snapshot
I managed to delete the profile before posting it.  Sigh.  I redid the measurement; this time the final memory usage was strangely much higher:

    MB
225.2^                      #                                                 
     |                     @#:                                                
     |                     @#::                                               
     |                   @@@#::                                               
     |                   @@@#::                                               
     |                 ::@@@#::                                               
     |               @:: @@@#::                                               
     |               @:: @@@#::                                               
     |              :@:: @@@#::                                               
     |              :@:: @@@#:::::::::::::::::@::::::::::::::::::@::::::@:::::
     |             ::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     |           ::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     |       ::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     |       ::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     |     ::::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     |   @@: ::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     |   @ : ::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     |  :@ : ::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     | @:@ : ::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
     | @:@ : ::::::::@:: @@@#::::: :::::::::::@::: :::: ::: :::::@::::::@:::::
   0 +----------------------------------------------------------------------->s
     0                                                                   1316.1

Attached are the details of the first snapshot.
Attached file last snapshot
Details of the last snapshot.  Although newShape() is the highest one here, I still think js_NewFunction() is the real problem.  I reckon JSFunctions just aren't getting GC'd the way they should be;  that would explain a lot of the entries in the last profile.
Attachment #510190 - Attachment is patch: false
Something's leaking. And we have known mem leak bugs still blocking, IIRC.

Our XPCOM leak hunting tools may help.

/be
njn, quickstubs are what implements the DOM, for the most part.  So these are getter/setter functions on DOM prototypes.  As long as that prototype object is alive, they'll obviously stay alive.  So presumably there are live DOM prototype objects...  I wonder whether there are also live DOM nodes or some such from those documents.
How often did we run the JS GC? Given that we've mostly crippled the cycle collector with our recent GC scheduling changes (see bug 630947 comment 18) I expect accumulation of garbage over time. This should probably be retested with the patches for bug 630947 and bug 614347, there might be real leaks but it's going to be hard to track them down without those changes.
(In reply to comment #23)
> If you mallctl("arenas.purge", NULL, NULL, NULL, NULL) from somewhere (maybe
> just attach gdb and call it from there), does it help?

I was going to try this, but I think the version of jemalloc we have in the tree doesn't have mallctl()?  I couldn't find any reference to it in jemalloc.{c,h}.

Given how many other things there are to do, I'm inclined to give jemalloc the benefit of the doubt in terms of how it's handling things! :)
(In reply to comment #31)
> How often did we run the JS GC?
This is an interesting question. If the problem is that JS services cause all the garbage, we may not run the GC often enough, I think. In a DOMWindow context we call JS_MaybeGC quite often because of nsJSContext::ScriptEvaluated.

A simple hack for cases like this bug could be to run full GC+CC every now and 
then (though, still rather seldom).
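(A minimal sketch of that hack, for illustration only: a repeating nsITimer that pokes a full GC and runs the cycle collector. PokeGC()/CycleCollectNow() are the static helpers added by the patch attached later in this bug; wiring them to a timer like this is an assumption, not something any attachment here does.)

    // Sketch: periodically force a full GC + CC, even when the browser is idle.
    static void PeriodicCollect(nsITimer *aTimer, void *aClosure)
    {
      nsJSContext::PokeGC();                 // schedule a full GC soon
      nsJSContext::CycleCollectNow(nsnull);  // and run a cycle collection now
    }

    static nsresult StartPeriodicCollector(nsCOMPtr<nsITimer> &aTimer)
    {
      nsresult rv;
      aTimer = do_CreateInstance("@mozilla.org/timer;1", &rv);
      NS_ENSURE_SUCCESS(rv, rv);
      // "Every now and then (though still rather seldom)", e.g. every 5 minutes.
      return aTimer->InitWithFuncCallback(PeriodicCollect, nsnull,
                                          5 * 60 * 1000,
                                          nsITimer::TYPE_REPEATING_SLACK);
    }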
(In reply to comment #33)
> 
> A simple hack for cases like this bug could be to run full GC+CC every now and 
> then (though, still rather seldom).

There's a bug open for that, though I can't find it right now.

But I think the problem goes beyond that: there is a real leak.  I added a printf to js_GC(), and even after global GCs occurred the numbers were high.
(In reply to comment #29)
> Something's leaking. And we have known mem leak bugs still blocking, IIRC.

Is there a meta-memory management bug that this bug can be associated with? Alternatively, does this audience believe that this should be a blocking bug? It would make for a good customer experience, especially for Mac users who use Firefox for more than a few hours every day, to have this bug fixed in 4.0.
> Is there a meta-memory management bug that this bug can be associated with?

It's already blocking meta bug 632234.  I added a blocking2.0? flag.
blocking2.0: --- → ?
We don't block on metabugs, but the release is clearly blocked by getting both leaks and memory use in better shape.

/be
I confirmed this on Windows 7 as well. I opened 5 tabs of techcrunch and then closed them. mjit memory usage went down to minimal levels after a minute or two, but JS GC heap stayed at 66 MB for an hour, at which point I stopped.

We should find out whether bug 630947 and bug 614347 fix this right away. "CC basically disabled" seems to be the best hypothesis right now.
blocking2.0: ? → final+
Depends on: 630947, 614347
Whiteboard: hardblocker
It's a shame we can't use the throw-away-periodically-and-regenerate-if-necessary approach for JS objects.  It appears to be a lot simpler and more reliable than GC :(
There is definitely a bad bug here. If I open and close many DOM windows, GCs happen based on activity, but when I stop, we might still have a few hundred windows that are unreachable but not collected, and no GCs occur. Something must be wrong with the inactivity GC scheduling.
Assignee: general → gal
(In reply to comment #40)
> There is definitely a bad bug here. If I open and close many dom windows, GCs
> happen based on activity but when I stop, we might still have a few hundred
> windows unreachable but not collected and no GCs occur. Something must be wrong
> with the inactivity GC scheduling.

Repeating comment 34:  I think it's not just a problem of the GC failing to run often enough;  even when it does run there is lots of stuff left behind, so it seems like a leak of some kind.
Attached patch patch (obsolete) — Splinter Review
Experimental patch.
Nick, could you try out this patch? After peak memory use (i.e. opening a lot of tabs), memory use should return to normal after 20 seconds or so.
Attached patch patch (obsolete) — Splinter Review
This patch contains a hidden easter egg. If you set javascript.options.gcnotify to true in about:config, GC and CC events will be posted to the error console.
Attachment #510955 - Attachment is obsolete: true
Attached patch patch (obsolete) — Splinter Review
The previous upload was incomplete.
Attachment #510959 - Attachment is obsolete: true
Attached patch patch (obsolete) — Splinter Review
Schedule the CC from the GC callback, and only if we didn't run a compartment GC. In the case of a compartment GC, delay/reschedule the full GC, since we are likely running heavy computation.
Attachment #510962 - Attachment is obsolete: true
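(A minimal sketch of the scheduling idea just described, not the actual patch: a GC callback that schedules a CC at the end of a full GC, and after a compartment GC pushes the full GC out instead. The sLastGCWasCompartmental flag is a hypothetical stand-in for however the real patch tracks this, and PokeGC()/PokeCC() are the helpers the patch adds.)

    // Hypothetical flag, set by the GC trigger path when only a single
    // compartment is being collected.
    static bool sLastGCWasCompartmental = false;

    static JSBool
    DOMGCCallback(JSContext *cx, JSGCStatus status)
    {
      if (status == JSGC_END) {
        if (sLastGCWasCompartmental) {
          // Content is busy allocating; delay/reschedule the full GC instead
          // of piling a CC on top of the compartment GC.
          nsJSContext::PokeGC();
        } else {
          // A full GC just finished, so now is a good time to schedule a CC.
          nsJSContext::PokeCC();
        }
      }
      return JS_TRUE;
    }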
(In reply to comment #41)
> Repeating comment 34:  I think it's not just a problem of the GC failing to run
> often enough

How did you determine that? You're seeing this even after multiple global GCs and CCs happen? Given how the cycle collector and GC interact, one global GC won't collect much; you need at least two global GCs with a CC in between to collect anything that cycles through XPCOM and JS.
Comment on attachment 510969 [details] [diff] [review]
patch

So we would run the CC only if there was a global GC?
What if there is only a tiny bit of JS garbage, or none at all, but lots of cycle-collectable objects?
Something like https://bug624549.bugzilla.mozilla.org/attachment.cgi?id=502773
Seems like the patch can somehow handle
https://bug624549.bugzilla.mozilla.org/attachment.cgi?id=502773
At least we don't OOM, although max mem usage doubles.
(In reply to comment #38)
> I confirmed this on Windows 7 as well. I opened 5 tabs of techcrunch and then
> closed them. mjit memory usage went down to minimal levels after a minute or
> two, but JS GC heap stayed at 66 MB for an hour, at which point I stopped.
If there isn't anything happening in the browser, CC won't be called.

But usually something is happening even when the browser is 'idle', so
https://bug632357.bugzilla.mozilla.org/attachment.cgi?id=510998 could help
with that.
If you watch the console with the patch and javascript.options.gcnotify set, you can see that comment 47 is absolutely right. There is often a CC with a large number of freed objects, then a GC, then another smaller CC, then a GC, and then we won't collect anything any more and don't schedule a new CC.

Comment 48 is right. We also need a good point to trigger the CC. Only closing or freezing global windows causes a CC right now (and the GC). I will look around for more trigger points.
Attached patch patch (obsolete) — Splinter Review
Attachment #510969 - Attachment is obsolete: true
Attached patch patch (obsolete) — Splinter Review
Attachment #511121 - Attachment is obsolete: true
Comment on attachment 511167 [details] [diff] [review]
patch

I still wouldn't like to lose the user active/inactive heuristics. When the user is active, we really should try to postpone pauses caused by GC/CC, IMO.
I tried gal's patch.  It certainly makes the GC run more often;  previously when idle I don't think I saw it run at all, but now it does so every couple of seconds.  And the CC runs pretty often too.  (Re comment 47: I didn't have a printf in the CC, so I don't know if it was running previously.)

And these collections that occur after shutting down all the tabs (except about:memory) keep saying "collected: 0".

But the "memory in use" number doesn't get below about 64MB, in comparison to ~40MB at startup.  And the js/gc-heap number is still about 24MB, compared to ~5MB at startup.  So it doesn't look much different to the results I got in comment 26.

So... gal's patch ensures that insufficient GCs/CCs won't be a problem, which is a step forward.  Does anyone know why 20+MB of JS heap might be held onto after closing all tabs except about:memory, when loading about:memory at startup only results in 5MB of JS heap?
gcing every few seconds is probably too often for mobile, due to the wakeups, right?
I opened 50 mozilla.org tabs and CC's took 50-100 ms on average. Seems good to me.
Attached patch patchSplinter Review
Attachment #511167 - Attachment is obsolete: true
I bumped the CC trigger to 1000. 1000 C++ objects can hold on to a bit of memory, but in practice this is less than 1MB of waste. CCs seem to be really fast in general. GCs are what slows us down right now.
(In reply to comment #58)
> I opened 50 mozilla.org tabs and CC's took 50-100 ms on average. Seems good to
> me.
For FOTN?
Possibly the easiest thing to do here is to back out bug 624549 and bug 630947.
Has anyone tried the patch with FOTN? *If* I read the code correctly, it would cause quite a few pauses.
blocking2.0: final+ → ---
smaug, did you mean to remove the blocking+ flag?
blocking2.0: --- → ?
(In reply to comment #63)
> Has anyone tried the patch with FOTN? *If* I read the code correctly, it would
> cause quite a few pauses.

The patch improves FOTN significantly. There are many CC's, but they're short and spread out evenly over FOTN, which is exactly what we want.
That sounds good. If we could still keep the user-active/inactive
checks somehow...
Comment on attachment 511287 [details] [diff] [review]
patch


> void
> nsGlobalWindow::PageHidden()
> {
>   FORWARD_TO_INNER_VOID(PageHidden, ());
> 
>-  // the window is being hidden, so tell the focus manager that the frame is
>+  // The window is being hidden, so tell the focus manager that the frame is
>   // no longer valid. Use the persisted field to determine if the document
>   // is being destroyed.
> 
>   nsIFocusManager* fm = nsFocusManager::GetFocusManager();
>   if (fm)
>     fm->WindowHidden(this);
> 
>   mNeedsFocus = PR_TRUE;
>+
>+  // Poke the GC. This window might be collectable garbage now.
>+  nsJSContext::PokeGC();
Could you perhaps move this to DocumentViewerImpl::PageHide, and only do it if aIsUnload is true? That way fast back/forward doesn't trigger an unnecessary GC/CC.


>+void
>+nsJSContext::CycleCollectNow(nsICycleCollectorListener *aListener)
>+{
>+  if (!NS_IsMainThread())
>+    return;
Why is the !NS_IsMainThread() check needed?
And in any case, 
if (expr) {
  ...
}


>+  if (collected > 0)
>+    PokeGC();
if (expr) {
  ...
}

>+    NS_NAMED_LITERAL_STRING(kFmt, "CC timestamp: %lu collected: %lu "
>+                                  "suspected: %lu duration: %lu ms.");
>+    PRUnichar* msg = nsTextFormatter::smprintf(kFmt.get(), now,
>+                                               collected, suspected,
>+                                               (now - start) / PR_USEC_PER_MSEC);
>+    nsCOMPtr<nsIConsoleService> cs =
>+      do_GetService(NS_CONSOLESERVICE_CONTRACTID);
>+    if (cs)
>+      cs->LogStringMessage(msg);
if (expr) {
  ...
}

Also, could you perhaps do something like
nsString msg;
msg.Adopt(nsTextFormatter::smprintf(...));

cs->LogStringMessage(msg.get());

Manual NS_Free is kind of ugly.


> nsJSContext::LoadEnd()
> {
>+  if (!sLoadingInProgress)
>+    return;
if (expr) {
  ...
}

>+      PRTime now = PR_Now();
>+      NS_NAMED_LITERAL_STRING(kFmt, "GC timestamp: %lu duration: %lu ms.");
>+      PRUnichar* msg = nsTextFormatter::smprintf(kFmt.get(), now,
>+                                                 (now - start) / PR_USEC_PER_MSEC);
>+      nsCOMPtr<nsIConsoleService> cs = do_GetService(NS_CONSOLESERVICE_CONTRACTID);
>+      if (cs)
>+        cs->LogStringMessage(msg);
>+      NS_Free(msg);  
Manual NS_Free...


>+    void MaybeTriggerCollection()
>+    {
>+        if (mCount == 1000)
>+            NS_CycleCollectSoon();
>+    }
>+
I certainly don't like this, though mCount == 1000 shouldn't be very slow. The method is just very hot.
I still wonder if it was ok to check the number of suspected objects in nsXPConnect::AfterProcessNextEvent.

I posted this patch to tryserver.
Attachment #511287 - Flags: feedback+
Comment on attachment 511287 [details] [diff] [review]
patch


>+void
>+NS_CycleCollectSoon()
>+{
>+  nsJSContext::PokeCC();
>+}

Oh, this is called by xpcom/. That is a rather strange dependency.
Again, using nsXPConnect::AfterProcessNextEvent might work well enough.
Attached patch gal's patch + small tweaks (obsolete) — Splinter Review
This still uses nsXPConnect::AfterProcessNextEvent.
Comment on attachment 511287 [details] [diff] [review]
patch

This doesn't compile on Windows:
dom/base/nsJSEnvironment.cpp(3303) : error C2308: concatenating mismatched strings
        Concatenating wide "CC timestamp: %lu collected: %lu " with narrow "suspected: %lu duration: %lu ms."
Er... why is NS_NAMED_LITERAL_STRING using NS_LL instead of NS_L?  That's what causes the compile failure...
blocking2.0: ? → final+
Whiteboard: hardblocker → hardblocker [has patch]
Attachment #511437 - Flags: review+
Attached patch patchSplinter Review
Comment on attachment 511641 [details] [diff] [review]
patch

smaug, this should be unchanged from your last version. It passes try, and we tested it against FOTN and a bunch of websites. Can you review and land? Thanks a ton for all the help.
Attachment #511641 - Flags: review?(Olli.Pettay)
Attachment #511641 - Flags: review?(Olli.Pettay) → review+
The patch triggers tearoff problems:
http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1297441059.1297441278.1358.gz&fulltext=1

We really need the gray/black handling fixed before landing the patch.
Whiteboard: hardblocker [has patch] → hardblocker [has patch][needs 614347]
Attached patch fix loggingSplinter Review
Logging doesn't produce the right numbers on 32-bit OS X.
Attachment #512853 - Flags: review?(gal)
Comment on attachment 512853 [details] [diff] [review]
fix logging

Thanks smaug.
Attachment #512853 - Flags: review?(gal) → review+
Keywords: footprint
http://hg.mozilla.org/tracemonkey/rev/123057f16677
Whiteboard: hardblocker [has patch][needs 614347] → hardblocker [has patch][needs 614347], fixed-in-tracemonkey
Thanks for fixing this bug, gal@ and smaug@.

+Manoj
Landed on mozilla-central:

http://hg.mozilla.org/mozilla-central/rev/88799325812a
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Depends on: 634916
I just downloaded the latest nightly to verify that this bug has been fixed, and I think I am missing something. I could be looking at the wrong metrics, so it's best I ask; gal@, how did you verify that this bug was fixed? With no tabs other than about:memory open, here is the memory usage breakdown on my Mac OS X 10.6 machine. Build identifier: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b12pre) Gecko/20110217 Firefox/4.0b12pre:

malloc/allocated: 75,832,384
malloc/mapped: 129,810,432
js/gc-heap: 29,360,128
js/string-data: 1,942,032
js/mjit-code: 379,329
storage/sqlite/pagecache: 11,324,368

The numbers have hovered around these values for the last 20-odd minutes.
(In reply to comment #86)

Forgot to ask the question! Is this the expected footprint for Firefox 4.0 with no tabs open on the Mac?
Depends on: 658849
No longer blocks: mlk2.0