I started an email thread on dev-platform a while back about handling memory pressure events more consistently and reliably:
The most useful messages in the thread were the ones below; the rest were about low-level technical details.
When a memory pressure event is triggered, sometimes it's not that urgent -- it'd be nice to release memory soon. But sometimes it is urgent, and we can't afford to allocate any memory while freeing other memory. (For example, running GC and CC requires allocating some memory, possibly several MB, so it isn't suitable for the urgent case.) We should probably distinguish these two cases.
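The two cases could be modeled as distinct pressure levels that handlers dispatch on. A minimal sketch of that idea (the names and actions here are hypothetical illustrations, not an existing Gecko API):

```python
from enum import Enum

class MemoryPressure(Enum):
    LOW = "low"            # not urgent: release memory soon
    CRITICAL = "critical"  # urgent: must not allocate while freeing

def handle_pressure(level):
    """Pick a freeing strategy appropriate to the urgency (sketch)."""
    if level is MemoryPressure.CRITICAL:
        # Must not allocate: skip GC/CC (which can need several MB)
        # and only drop caches and pre-allocated buffers.
        return "drop-caches-only"
    # Non-urgent: a full GC/CC pass is acceptable even though it
    # allocates some memory along the way.
    return "gc-and-cc"

print(handle_pressure(MemoryPressure.LOW))       # gc-and-cc
print(handle_pressure(MemoryPressure.CRITICAL))  # drop-caches-only
```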
I wonder whether, rather than having separate memory-pressure events, we should have one event and then check how much free space we have.
On platforms where we have swap, this is just a virtual memory space check. On platforms where we don't have swap, this is just an RSS check.
> On platforms where we don't have swap, this is just an RSS check.
Or rather, it's a matter of calling the equivalent of $ free.
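For illustration, the numbers `free` reports on Linux come from /proc/meminfo, which is easy to parse directly. A sketch using an inline sample rather than a real system reading:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(':')
        fields = rest.split()
        if fields:
            info[key] = int(fields[0])  # values are reported in kB
    return info

# Sample data in /proc/meminfo format (illustrative values):
sample = "MemTotal: 2048000 kB\nMemFree: 512000 kB\nSwapFree: 1024000 kB"
info = parse_meminfo(sample)
print(info["MemFree"] + info["SwapFree"])  # 1536000 (kB)
```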
(In reply to Justin Lebar [:jlebar] from comment #2)
> On platforms where we have swap, this is just a virtual memory space check.
Well, min([available RAM + swap], [available address space]), I assume, to handle both systems with low RAM and swap and systems simply running 64-bit Firefox.
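That min() captures the point that the effective headroom is capped by whichever limit bites first. A small sketch with illustrative numbers (no real system values are queried):

```python
GiB = 1 << 30

def memory_headroom(avail_ram, avail_swap, avail_address_space):
    """Effective allocatable memory: capped both by physical
    RAM + swap and by the process's free address space."""
    return min(avail_ram + avail_swap, avail_address_space)

# A 32-bit process on a big machine is limited by address space:
print(memory_headroom(8 * GiB, 8 * GiB, 2 * GiB) // GiB)    # 2
# A 64-bit process on a small machine is limited by RAM + swap:
print(memory_headroom(1 * GiB, 0, 128 * GiB) // GiB)        # 1
```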
You're right, although this is a bit complicated. Can't Windows dynamically expand the swap file? There's a limit to how big it can get, certainly...
Yes, the limit on Windows 2000/XP/2003 for a single paging file appears to be 4095 MiB (), though it is possible (through the registry) to assign several separate paging files. On Windows 7 the limit is considerably higher, though I haven't been able to find out what it is: my paging file is 64 GiB for the occasional long rendering task. Unfortunately I haven't found any way to query the maximum paging file size programmatically.
If the user has customized his or her paging file, the default/minimum and maximum sizes can be found in the PagingFiles REG_MULTI_SZ value under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Session Manager\Memory Management. The first column of each row is the paging file location, the second is the default/minimum size, and the third is the maximum size (there can be multiple rows for multiple paging files). If the paging file is set to a system-managed size, the second and third columns will unfortunately both be 0.
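Assuming the row layout described above ("path min max", sizes in MiB), parsing the value once it has been read from the registry could look like this (the rows below are made-up examples, not real registry data):

```python
def parse_paging_files(rows):
    """Parse PagingFiles-style rows of the form 'path min max'.
    rsplit keeps paths containing spaces intact; min == max == 0
    means the size is system-managed (sketch, sizes in MiB)."""
    result = []
    for row in rows:
        path, min_mb, max_mb = row.rsplit(' ', 2)
        result.append((path, int(min_mb), int(max_mb)))
    return result

rows = [r"C:\pagefile.sys 2048 4095", r"D:\pagefile.sys 0 0"]
for path, lo, hi in parse_paging_files(rows):
    managed = (lo == 0 and hi == 0)
    print(path, "system-managed" if managed else f"{lo}-{hi} MiB")
```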
The main reason I brought it up is that the default paging file size is set to 1.5 times the system RAM or 2 GiB, whichever is smaller (at least on Windows 2000/XP/2003), which could be pretty small for some users. However, I'm not sure how it sets the default maximum paging file size - that might simply be the full 4 GiB, assuming enough hard drive space is left.
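Restating that default rule as a one-liner, with the caveat that this is only the behavior as described in this comment (Windows 2000/XP/2003), not a verified Windows formula:

```python
GiB = 1 << 30

def default_pagefile_size(ram_bytes):
    """Default initial paging-file size per the rule above:
    min(1.5 x RAM, 2 GiB)."""
    return min(int(1.5 * ram_bytes), 2 * GiB)

print(default_pagefile_size(1 * GiB) / GiB)  # 1.5
print(default_pagefile_size(4 * GiB) / GiB)  # 2.0
```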
Alternatively, I suppose it could be argued that any paging is a bad thing, so it would make more sense to fire a memory pressure event as soon as available physical RAM reaches a critical level. That would avoid this whole mess, but it wouldn't cover the strict out-of-memory case.
Sorry, I forgot to link to  in my post above:
Thanks for that investigation, Emanuel.
I am, in general, wary of the idea that we should aggressively drop memory when the system starts running out of available physical memory -- it's not necessarily Firefox's fault! A user could be on a 16 GB system with Firefox using 200 MB of RAM; if the system starts to page, should Firefox really take drastic action?
But preventing OOMs by monitoring available virtual address space is an easy place to start.
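The virtual-address-space monitoring could be as simple as a periodic check against a low-water mark. A sketch of that loop body (the threshold and notification name are illustrative, not the actual Gecko observer topic or tuning):

```python
MiB = 1 << 20
LOW_VM_THRESHOLD = 256 * MiB  # hypothetical low-water mark

def check_memory_pressure(available_vm, notify):
    """Fire a memory-pressure notification when free virtual address
    space drops below the threshold. Returns True if it fired."""
    if available_vm < LOW_VM_THRESHOLD:
        notify("memory-pressure")
        return True
    return False

events = []
check_memory_pressure(128 * MiB, events.append)
print(events)  # ['memory-pressure']
```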
Inconsistent use of "memory" vs. "mem"? jlebar, can you please do a follow-up fix to make them all use "memory"? Thanks.
Whoops, comment 9 was supposed to go into bug 670967.
We talked about this bug in memshrink a bit today.
It looks like, unless we can reduce JS usage, we're not going to be able to make a big dent here. And JS usage is hard to reduce without effectively unloading tabs, either by session restoring them, or by somehow serializing the compartment to disk.
JS is big, but it isn't everything. In my current session layout is 15%, and when the style sheet reporter lands (bug 671299) that'll probably jump to over 20%.
And dropping bfcache will allow some JS compartments to be freed. So I think we can make reasonable progress without having to do really big changes.
We already drop bfcache on a timer and on memory pressure. This bug would unify the mechanism with the other things we drop, which is virtuous, but that doesn't change the amount of memory which is eventually freed.
I guess dropping 30% of our memory (layout plus SQLite) is good. To some degree, the question is: what's our goal?
My first goal is to drop stuff that's easy to drop, and see how far that takes us. Doing that will require the unification of mechanisms, as you mentioned, which is a good thing.