I'm seeing memory usage on asm.js demos that persists long after a tab is closed or refreshed. Upwards of 20 seconds. This is leading to OOM crashes when refreshing demos that use reasonable amounts of memory. You can duplicate this by loading up http://beta.unity3d.com/jonas/DT2/ and hitting refresh multiple times while monitoring memory.
We had a bunch of bugs about this already. I don't remember how much we ended up actually doing, and I can't find the bug number at the moment. Do you remember, Luke?
Yeah, so the 20 seconds is still expected, sometimes even longer, because of the BF Cache; that's just the time until we kick off a GC that frees up the window. What we added was: for a few large-allocation sites (viz., the ArrayBuffer ctor), if the allocation fails, we fire a low-memory event (which evicts the BF Cache and does GC/CC/GC) and try again. In my local experiments, that fixed all the simple OOM cases I was seeing.

Martin, are these hard OOM crashes and, if so, could you paste your about:crashes links? Perhaps the failure is at some other memory-allocation site, so we don't fire the low-memory event.

Andrew, do you know when we NukeCrossCompartmentWrappers? I assume it's when a page gets kicked out of the BF Cache. We could set a flag when a page allocates a huge ArrayBuffer that makes the page uncacheable (so it doesn't go into the BF Cache and gets Nuke'd earlier), and then change Nuke to also immediately release ArrayBuffer memory (safely, in case it is accessed again).

Really, though, these are all just mitigations; e10s with process-per-tab (at least game-tab) is what we really need here.
The bfcache is not involved when hitting reload, if I recall correctly from last time, only when you navigate away. I think for the reload case, the WindowDestroyedEvent stuff is triggered immediately. I wrote a prototype patch to do something like that, if I recall correctly. Process-per-tab will only help if you destroy the process when you reload or navigate, which is not really what we do. I guess it helps mitigate address-space problems.
(In reply to Andrew McCreight [:mccr8] from comment #3)
> Process-per-tab will only help if you destroy the process when you reload
> and navigate, which is not really what we do. I guess it helps mitigate
> address space problems.

What I've talked about with billm before is finding some semantic signal to Gecko that allows these big-heap-using pages to guarantee(-ish) their own process. E.g., maybe the page manually reloads itself using the magic that severs the referrer link: http://blog.chromium.org/2009/12/links-that-open-in-new-processes.html
So what I'm seeing affects content more acutely, but I can't link to it here. Luke, I'll drop you a link for you to try.
I looked at this again. For reference, the app in question has a 1gb heap (definitely not something we expect to reliably work on 32-bit, due to fragmentation), 300mb of in-memory assets, and a bunch of other stuff taking almost 2gb explicit. Thus, definitely ripe for OOM on reload on Win64 and on single load on Win32.

But still, I was wondering why the "trigger a low-memory event on OOM" mechanism we added in bug 936236 wasn't working. So I instrumented the alloc/free/OOM-retry sites and observed that, when we were OOMing, everything was actually working as intended: we'd OOM the first time, do a GC/CC/GC, release the old window's heap and asset ArrayBuffers, then try again... and still fail. So instead I replaced the malloc/free calls (for large allocations) with VirtualAlloc/VirtualFree calls, and this actually stopped these ArrayBuffer OOMs from happening. So I'm guessing the issue has to do with jemalloc.

Unfortunately, even with this "fix", while a greater % of refreshes succeed, the OOM just moves somewhere else (presumably outside the JS engine) that silently kills the page (and sometimes the browser). So perhaps it's worth using VirtualAlloc/mmap for (all, not just asm.js) large ArrayBuffers, but it won't really change the high degree of OOMiness here. To do significantly better, I think we need a reliable way for each load to get a fresh process (so e10s++) (or just 64-bit :).