Closed Bug 664291 Opened 13 years ago Closed 2 years ago

Generate better low-memory events on Linux and Mac (and Android?) by tracking hard page faults

Categories: Core :: General (defect)
Version: Other Branch
Priority: Not set
Severity: normal
Status: RESOLVED INCOMPLETE
Reporter: justin.lebar+bug
Assignee: Unassigned
References: Blocks 1 open bug
Whiteboard: [MemShrink:P2]
Attachments: 1 file, 4 obsolete files

Right now we don't have a good way of notifying code when we're running low on memory.  We try to sound warning bells when an allocation fails, but by then it's probably way too late.

A better approach would be to notice when the browser is being paged out by the OS.  If we catch it early enough, we may be able to reduce our footprint by running a GC/CC or by dropping caches, etc.  This was suggested in bug 661304 comment 32 and at [1].

A paper [2, from 1] suggests using the simple heuristic of "since the last time I checked, have we had 10 or more hard page faults, or has our resident size decreased?"  (In the paper's context, the RSS never decreases except due to paging.)

I presume we can tune the heuristic without much difficulty, although we'll need to figure out how to determine whether an RSS decrease is due to paging.  The harder problem, I think, will be determining the hard page fault count on Windows [3].  (Unix has getrusage, which makes it easy.)  It looks like we may be able to use ETW on Windows, although it's not clear whether that carries a performance penalty.  Chrome does something with ETW and page faults [4], although I don't know if they use it in production or only for debugging.
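
For reference, a minimal standalone sketch of reading the hard page fault count via getrusage (ru_majflt is the cumulative major-fault count for the calling process); this is just an illustration, not code from any attachment here:

    #include <sys/resource.h>
    #include <cstdio>

    // Returns the cumulative number of hard (major) page faults for this
    // process, or -1 on error.  A detector would call this periodically and
    // look at the delta since the last check.
    long GetHardPageFaults()
    {
      struct rusage usage;
      if (getrusage(RUSAGE_SELF, &usage) != 0) {
        return -1;
      }
      return usage.ru_majflt;
    }

    int main()
    {
      printf("hard page faults so far: %ld\n", GetHardPageFaults());
      return 0;
    }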

[1] https://groups.google.com/forum/#!topic/mozilla.dev.platform/DMqQ_HKEMp4
[2] http://www.cs.rochester.edu/~xiaoming/publications/ismm11-b.pdf
[3] http://glandium.org/blog/?p=1963
[4] http://code.google.com/p/sawbuck/source/browse/trunk/sawbuck/py/etw/etw/descriptors/pagefault.py
Assignee: nobody → justin.lebar+bug
(In reply to comment #0)
> 
> A paper [2, from 1] suggests using the simple heuristic of "since the last
> time I checked, have we had 10 or more hard page faults, or has our resident
> size decreased?"  (In the paper's context, the RSS never decreases except
> due to paging.)

The other part of this heuristic is how often it's run.  You start off doing it every N events (where "events" needs to be chosen, it could even be time-based).  Every time you don't have high memory pressure, you increase N by 1.  Every time you do have high memory pressure, you decrease N by a factor of 10.  That gives fast response when memory becomes tight, but slowly decreasing overhead when memory is plentiful.

If this is done well it might be enough to mitigate any overhead of using ETW on Windows?
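
A tiny sketch of that interval adjustment (the starting value and types are illustrative, not from the paper):

    // Check-interval heuristic from comment 1: grow the interval slowly while
    // memory is plentiful, shrink it sharply when pressure is detected.
    struct CheckInterval
    {
      int n = 10;  // illustrative starting interval, in "events" or seconds

      void Update(bool hadMemoryPressure)
      {
        if (hadMemoryPressure) {
          n = n / 10;
          if (n < 1) {
            n = 1;
          }
        } else {
          n = n + 1;
        }
      }
    };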
Whiteboard: [MemShrink:P1]
Depends on: 664486
I blogged about running Firefox with a reduced max-RSS so as to force it to page [1].  But since this is more persistent than my blog, here's the short version.  On Ubuntu 11.04:

* Install the cgroup-bin package.

* Edit /etc/cgconfig.config and create a group with limited memory.  For
  instance, I added:

      group limited {
        memory {
          memory.limit_in_bytes = 50M;
        }
      }

* Run

    # restart cgconfig
    # chown jlebar /sys/fs/cgroup/memory/limited
    # chown jlebar /sys/fs/cgroup/memory/limited/*
    $ cgexec -g memory:limited dist/bin/firefox

And have at it.  I observed FF holding to a 93M RSS when I asked it to have a
50M limit, but that's no problem for me.  It did page, spectacularly.

cgclassify theoretically lets you attach restrictions to a running process, but
it didn't appear to do anything when used in conjunction with the RSS limit.

[1] http://jlebar.com/2011/6/15/Limiting_the_amount_of_RAM_a_program_can_use.html
Blocks: 664758
No longer blocks: 664758
Depends on: 664758
Blocks: 655455
On Windows, it appears we can get a systemwide hard page fault rate with performance counters [1].  That might be good enough for this bug -- If the hard page fault rate is high, either because FF is faulting pages in or some other program is, dump our caches.

Anyone have thoughts on this?

[1] http://msdn.microsoft.com/en-us/library/aa373083%28v=vs.85%29.aspx
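
For anyone who wants to poke at this, a hedged standalone sketch of reading a counter like "\Memory\Pages Output/sec" with the PDH API (counter paths are localized on non-English Windows, so the hard-coded path is an assumption; error handling omitted):

    #include <windows.h>
    #include <pdh.h>
    #include <cstdio>
    // Link with pdh.lib.

    int main()
    {
      PDH_HQUERY query;
      PDH_HCOUNTER counter;
      PdhOpenQuery(NULL, 0, &query);
      // Systemwide page-out rate; a high sustained value suggests the machine
      // is paging.  (Counter name assumes an English-language Windows.)
      PdhAddCounterW(query, L"\\Memory\\Pages Output/sec", 0, &counter);

      PdhCollectQueryData(query);   // first sample establishes a baseline
      Sleep(1000);
      PdhCollectQueryData(query);

      PDH_FMT_COUNTERVALUE value;
      PdhGetFormattedCounterValue(counter, PDH_FMT_DOUBLE, NULL, &value);
      printf("pages output/sec: %f\n", value.doubleValue);

      PdhCloseQuery(query);
      return 0;
    }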
Attached patch WIP v1, Linux/Mac only. (obsolete) — Splinter Review
Comment on attachment 540647 [details] [diff] [review]
WIP v1, Linux/Mac only.

Review of attachment 540647 [details] [diff] [review]:
-----------------------------------------------------------------

A couple of drive-by comments from a quick skim...

::: xpcom/base/LowMemoryDetector.cpp
@@ +48,5 @@
> +static PRLogModuleInfo* gLogModule = PR_LOG_DEFINE("LowMemoryDetector");
> +#define DEBUG(format) PR_LOG(gLogModule, PR_LOG_DEBUG, format)
> +#define INFO(format) PR_LOG(gLogModule, PR_LOG_WARNING, format)
> +
> +class LowMemoryNotificationReporter : public nsIMemoryReporter

I think you can use the NS_MEMORY_REPORTER_IMPLEMENT macro to avoid some boilerplate code here.  See js/src/xpconnect/src/xpcjsruntime.cpp for some examples of its use.

@@ +70,5 @@
> +
> +  nsresult GetKind(PRInt32 *aKind)
> +  {
> +    NS_ENSURE_ARG_POINTER(aKind);
> +    *aKind = MR_COUNT;

Does this patch compile?  MR_COUNT doesn't exist.  aboutMemory.js will also need modification in order to print the appropriate units -- it currently assumes all measurements are in bytes and prints "B" or "MB" as the unit.
This patch depends on the patch in bug 664486, which adds page faults to about:memory for Linux/Mac and implements MR_COUNT (with the appropriate changes to aboutMemory.js).
(In reply to comment #1)
> The other part of this heuristic is how often it's run.  You start off doing
> it every N events (where "events" needs to be chosen, it could even be
> time-based).  Every time you don't have high memory pressure, you increase N
> by 1.  Every time you do have high memory pressure, you decrease N by a
> factor of 10.  That gives fast response when memory becomes tight, but
> slowly decreasing overhead when memory is plentiful

I don't think this heuristic makes much sense in our context.  We lose unless we react quickly to low memory.  If we don't check for low memory often while we think everything is fine, then it's likely that by the time we notice that memory is low, the OS has already paged us out (that's what we were trying to avoid in the first place!).

Similarly, it doesn't make much sense to check for low memory more often right after we notice that memory is low.  That might make sense if we were able to dramatically decrease our footprint in response to memory pressure -- if we didn't release enough stuff on the first low memory event, maybe the second one will do it.  But in our case, there's relatively little I expect us to be able to release, so I don't expect that firing the notification many times in succession will do much good.

If we want any kind of backoff, I think we might want to back off when we hit a low memory notification (i.e. the opposite of what the paper suggests).  If we keep observing low memory, then clearly our notification isn't doing much good.  If a low-memory notification triggers, say, a GC, we wouldn't want to do that over and over while we're paging.
> We lose unless we react quickly to low memory.

On the other hand, I'm also kind of worried about checking all the time when it's not necessary.

On Windows, I can detect when the system as a whole starts paging.  When the system starts paging, we want to drop caches even if we haven't been doing anything lately, both to be a good citizen and to keep more important parts of FF from being paged out.  But to detect that, we have to check even while we're idle.

It remains to be seen how much CPU checking once a second will use.
bz or anyone else: Do you have an opinion on whether this should run in a separate thread or in the main event loop?  Right now it's only checking once a second.
Doug, it looks like you did something like this for Android and then backed it out.  Do you have any comments about the approach here?  Do you think we should turn it off for Android, since AIUI Android doesn't have swap?
From what I understand, once-per-second wakeups would be a serious problem for mobile.  Can we do whatever the JS folks did for periodic GC that doesn't run if nothing has happened here?

What's the practical difference between separate thread and main event loop?  Aren't you proxying to the main thread anyway?
(In reply to comment #11)
> From what I understand, once-per-second wakeups would be a serious problem
> for mobile.  Can we do whatever the JS folks did for periodic GC that
> doesn't run if nothing has happened here?

Sure, although it's not clear whether it should even be on for mobile in its current form.  Right now, on *nix, it warns about low memory when it sees that we're paging in, but AIUI there's no paging on Android at all.

We might want to do the same thing as the periodic GC to avoid wakeups on desktop, too.  I'll look into it.

Maybe we can watch something else on Android.  There's [1], but that seems to have been ill-fated (it was backed out [2]).

> What's the practical difference between separate thread and main event loop?
> Aren't you proxying to the main thread anyway?

It only proxies to the main thread if it needs to fire a low-memory event.  It's not clear how fast the syscall to get the memory info runs on Windows...

[1] http://hg.mozilla.org/mozilla-central/rev/37e4ab3abc44
[2] http://hg.mozilla.org/mozilla-central/rev/99af9d6485bb
If you _can_ stay off the main thread, that seems clearly superior to me.
justin - right, no swapping on most devices.  the n900/maemo had an option to enable swap to sdcard, but that is generally the exception.

Regarding the Android patches to detect low memory -- there was a system release for the Nexus S in which, when low memory was hit, the kernel would panic.  This was fixed with a point release.  In order to work around this, we attempted to watch for low memory (MemFree, etc.).  However, we found this to be a poor indicator of the memory actually available: on Linux, free memory typically gets used for fs caches and is given back to user-space processes when needed.
 
Android does send a 'system is low on memory' notification, but typically we see this event very late.  By the time it fires, most other processes have already been killed.  However, this might be a good enough signal for what you are trying to do.
I think the JS GC trigger waits until allocations, which won't work as well for general memory watching, as there are all sorts of allocations all over the place, and I don't know if you'd be able to instrument them all.
Attached patch WIP v2 (obsolete) — Splinter Review
This is almost ready for review.  The main thing that's left to do is tune how often we check for memory pressure (currently every second when there's no memory pressure, backing off to once every 20s when there is pressure) and what page fault / page out rate we declare is indicative of pressure (right now it's 10 faults or page outs in one second).

I'm tempted to change the 10 to 1000 or something, so we can have some confidence that we're not firing these notifications when there's plenty of available memory.  But that might not let us react quickly enough to memory pressure.  Suggestions on this point are welcome!

I also need to check that writing to an mmap'ed file on Windows doesn't cause the page out counter to increase.
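
For context, roughly the shape of that polling loop as a sketch (the helper hooks are placeholders, not names from the patch; the intervals mirror the values quoted above):

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Placeholder hooks -- in the real patch these would query getrusage /
    // perf counters and dispatch an observer notification to the main thread.
    static std::atomic<bool> gShuttingDown{false};
    static bool HasMemoryPressure() { return false; }
    static void FireLowMemoryNotification() {}

    // Sketch of the detector thread's loop: poll every second while things
    // look fine, back off to every 20 seconds once pressure has been seen.
    void DetectorLoop()
    {
      const int kNormalIntervalMs = 1000;
      const int kBackoffIntervalMs = 20000;
      int intervalMs = kNormalIntervalMs;

      while (!gShuttingDown) {
        std::this_thread::sleep_for(std::chrono::milliseconds(intervalMs));
        if (HasMemoryPressure()) {
          FireLowMemoryNotification();
          intervalMs = kBackoffIntervalMs;
        } else {
          intervalMs = kNormalIntervalMs;
        }
      }
    }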
Attachment #540647 - Attachment is obsolete: true
> I also need to check that writing to an mmap'ed file on Windows doesn't cause 
> the page out counter to increase.

Verified this with a Python script which writes 1MB to an mmap'ed file on a Windows 7 VM.  I'll test on Windows XP once I finish downloading the ISO (nobody uses Vista, right?).

The pages output / sec counter stays firmly pinned at 0 as I use my VM to load programs and browse around, but fires away once I start approaching the machine's memory limit.  That's great to see.
I'm seeing periodic spikes up to 32 pages output / sec on Win XP, so we should set the threshold above that.

If you have Windows and want to see what your pages output / sec looks like, run the performance monitor.  It's in the start menu on Win 7 (I think with that name), and on WinXP, you can start it through start, run, perfmon.msc.  Then add the "pages output / sec" counter under "memory".
With some hammering on my WinXP VM, I can get the pages output / sec number to creep up to 100 or so before there's actually memory pressure.  So maybe this is the wrong metric to use.

Windows does provide an "Available MBytes" metric which looks pretty good.  When it hits zero, we start paging like crazy.  Maybe we should just watch that.  I need to test it on Win7 now...

I think we should also be watching our virtual address space, on all platforms.  If we're almost out of address space, we should run out of the room crying for our mother.  Or at least fire a low-memory notification.
Summary: Generate better low-memory events by tracking RSS, hard page faults → Generate better low-memory events by tracking RSS, hard page faults, unused address space
Would it be going too far to keep a running mean and variance for the number of page counts per second? Then you could impose a dynamic limit like 'more than 5 standard deviations outside the normal range for two measurements in a row' (5 standard deviations being a 1 in ~2 million occurrence for a normal distribution).

A running mean and variance are cheap to compute ('mean = sum / n' and 'variance = mean_of_squares - square_of_mean') as long as you can do it in integers, but it does require keeping some memory around to evict the oldest entry for every new measurement.
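
A small sketch of that windowed mean/variance bookkeeping (window size, names, and integer types are illustrative):

    #include <cstdint>
    #include <cstddef>

    // Fixed-size window of per-second counts with O(1) mean/variance updates:
    // keep running sums of x and x^2 and evict the oldest sample on insert.
    class FaultWindow
    {
    public:
      void Add(int64_t faults)
      {
        mSumSq -= mSamples[mNext] * mSamples[mNext];
        mSum -= mSamples[mNext];
        mSamples[mNext] = faults;
        mSum += faults;
        mSumSq += faults * faults;
        mNext = (mNext + 1) % kWindow;
        if (mCount < kWindow) {
          mCount++;
        }
      }

      double Mean() const { return mCount ? double(mSum) / mCount : 0; }

      // variance = mean of squares - square of mean
      double Variance() const
      {
        if (!mCount) return 0;
        double m = Mean();
        return double(mSumSq) / mCount - m * m;
      }

    private:
      static const size_t kWindow = 60;   // illustrative: last 60 samples
      int64_t mSamples[kWindow] = {0};
      int64_t mSum = 0;
      int64_t mSumSq = 0;
      size_t mNext = 0;
      size_t mCount = 0;
    };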
> Would it be going too far to keep a running mean and variance for the number of 
> page counts per second?

I don't think that would be infeasible or overkill, but I'm also not sure it's better than the alternatives.

On Windows, there are large spikes in the page fault count when programs load, so we just don't want to use that metric, at least not by itself.  And on *nix, at least in my testing, we see very few page faults until we run out of memory, so I think the mean/variance measurement would be itself pretty noisy.

We'll hopefully have page fault counting in about:memory soon so others can have a look on *nix.
(In reply to comment #18)
> I'm seeing periodic spikes up to 32 pages output / sec on Win XP, so we
> should set the threshold above that.

I see something similar to this on Win7 except that it happens in bursts of 256 instead of 32. I'm not positive, but I think what we're seeing is Windows preemptively writing infrequently accessed pages to the pagefile so the memory can be reclaimed quickly.

(In reply to comment #19)
> With some hammering on my WinXP VM, I can get the pages output / sec number
> to creep up to 100 or so before there's actually memory pressure.  So maybe
> this is the wrong metric to use.

What sort of numbers do you see when there actually is memory pressure?

On my Win7 box, pages output / sec usually jumps straight from zero to thousands, although the spikes are occasionally in the 700-900 range. I wonder if using a threshold of something like 500 would be reasonable...
For the running-out-of-virtual-space case, we need to figure out how much of the virtual address space is available to userspace programs.

If you want to follow along at home, I've attached a Python script at the bottom of this comment which tries to mmap until it can't anymore.

Windows's limit is 2GB, unless you boot with some special switch.  I think I'm just going to ignore this and say that the VM limit is 2G, unless anyone objects.

The Linux-32 box I tested on apparently has a 3GB limit.  I can't find anything authoritative on this, but I think this is hardcoded.

Still looking for a mac-32 box...

    from itertools import count
    from mmap import mmap
     
    maps = []
    for i in count(1):
        maps.append(mmap(-1, 1024 * 1024))
        print('mapped %dmb' % i)
(In reply to comment #23)
> 
> The Linux-32 box I tested on apparently has a 3GB limit.  I can't find
> anything authoritative on this, but I think this is hardcoded.

Nope.  IIRC you can have various set-ups.  3GB is the most common, but you can have 1GB, 2GB, even 4GB somehow.  So I'm uncomfortable about using a hard-coded value.  I wonder if you can get the number from /proc.
(In reply to comment #24)
> Nope.  IIRC you can have various set-ups.  3GB is the most common, but you
> can have 1GB, 2GB, even 4GB somehow.  So I'm uncomfortable about using a
> hard-coded value.  I wonder if you can get the number from /proc.

Hm.  How much of a hack would it be to launch a separate process which does a search to find out how much data it can map?  We'd only have to run it once on that machine, and it's not like mmap'ing 4gb is expensive, if you're not going to use it.
That *seems* a bit much to me, and I don't know that we can cache that, at least not in a foolproof way. What if the profile is moved from one computer to another, or what about university-type installs where the home directory is on NFS and you run Firefox from various computers? Not sure how much we care, but it seems caching could cause problems...
Yeah, just sounds like it's asking for trouble :(
Figuring out how much virtual address space you have to play with should *not* be this difficult.
It would be nice to say that we usually or always run out of physical memory before we run out of virtual address space, but I don't actually know that's true.  My vmem seems to be about 2 * RSS (on Linux, anyway; we don't report vmem on Windows?), so on a Windows box with 3G of RAM and a 2G per-process vmem limit, it seems that we could hit the vmem limit first.
Digging through the syscalls, it looks like maybe we can't get the vsize on Windows?  (Seriously?)  If so, I guess we can scrap that for now.
...well, process explorer somehow gets something it calls "virtual size", so maybe all hope is not lost.
Aha.  GlobalMemoryStatusEx.ullAvailVirtual tells you how much virtual memory is available in the current process.

http://msdn.microsoft.com/en-us/library/aa366589%28v=vs.85%29.aspx

(How many different ways are there of querying the memory management system in Windows?  It's nuts...)
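
A minimal standalone sketch of that call; the fields of interest are ullAvailVirtual (address space left in this process) and ullAvailPhys (physical memory left systemwide):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
      MEMORYSTATUSEX stat;
      stat.dwLength = sizeof(stat);
      if (!GlobalMemoryStatusEx(&stat)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed\n");
        return 1;
      }
      printf("avail virtual:  %llu MB\n",
             (unsigned long long)(stat.ullAvailVirtual / (1024 * 1024)));
      printf("avail physical: %llu MB\n",
             (unsigned long long)(stat.ullAvailPhys / (1024 * 1024)));
      return 0;
    }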
Depends on: 668137
For Linux, doesn't /proc/meminfo contain what you want? (probably VmallocTotal, but my kernel and system is 64 bits, so the value here is just huge)
I just booted up a 32-bit Linux VM -- I think it's CommitLimit in /proc/meminfo.

Two down, one to go!
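
A small sketch of reading that value (with the caveat, per the later comments, that CommitLimit reflects RAM + swap overcommit accounting and may not track the address-space limit at all):

    #include <cstdio>

    // Returns CommitLimit from /proc/meminfo in kB, or -1 if not found.
    long long GetCommitLimitKB()
    {
      FILE* f = fopen("/proc/meminfo", "r");
      if (!f) {
        return -1;
      }
      char line[256];
      long long result = -1;
      while (fgets(line, sizeof(line), f)) {
        long long kb;
        if (sscanf(line, "CommitLimit: %lld kB", &kb) == 1) {
          result = kb;
          break;
        }
      }
      fclose(f);
      return result;
    }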
(In reply to comment #34)
> I just booted up a 32-bit Linux VM -- I think it's CommitLimit in
> /proc/meminfo.
> 
> Two down, one to go!

On my x64 system with 16GB RAM, it reads:
CommitLimit:    10182392 kB

which is much less than the available RAM.

I could happily map 65470mb of virtual address space with your python script.
Hm.

On my 32-bit VM, CommitLimit is the only value near 3G (which is where the Python script bails).

MemTotal:        2060396 kB
MemFree:          434900 kB
Buffers:          155676 kB
Cached:          1239436 kB
SwapCached:            0 kB
Active:           566296 kB
Inactive:         950720 kB
Active(anon):     122656 kB
Inactive(anon):     2304 kB
Active(file):     443640 kB
Inactive(file):   948416 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:       1187784 kB
HighFree:          32724 kB
LowTotal:         872612 kB
LowFree:          402176 kB
SwapTotal:       2095100 kB
SwapFree:        2095100 kB
Dirty:               116 kB
Writeback:             0 kB
AnonPages:        121824 kB
Mapped:            43744 kB
Shmem:              3052 kB
Slab:              90240 kB
SReclaimable:      80508 kB
SUnreclaim:         9732 kB
KernelStack:        2200 kB
PageTables:         4320 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     3125296 kB
Committed_AS:    1063988 kB
VmallocTotal:     122880 kB
VmallocUsed:       32560 kB
VmallocChunk:      85492 kB
HardwareCorrupted:     0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       4096 kB
DirectMap4k:       12280 kB
DirectMap4M:      897024 kB
On my 64-bits system, there is no value near 65GB.
Actually, I had just hit another kind of limit: the maximum number of distinct mappings a process can have. If I change your script to allocate 1GB at a time instead of 1MB, I get up to 65TB mapped...
I just tested on office.mozilla.org, a 32-bit system with CommitLimit 2,553MB, and I could map 3068MB.
On the 32-bit system with a measured mapping limit of ~3G, there don't appear to be any values immediately in /proc which are 3 followed by at least 6 digits, aside from meminfo:CommitLimit.

I come up similarly empty-handed when I look in /proc/1.

  find /proc -maxdepth 1 | xargs egrep '\<3[0-9]{6}' 2>/dev/null
FWIW I was able to get vsize to 1600MB with explicit/private/rss of ~900MB on a Windows 7 x86-32 VM.  Since Windows has a (default) max vsize of 2G, I think it's worth monitoring both explicit and vsize here.
Removing the dependency on bug 668137 (add vsize on Windows) -- as I have the patch written, that bug isn't a strict dependency for firing low vmem events.  I think we want it anyway.
No longer depends on: 668137
Attached patch Patch v1 (obsolete) — Splinter Review
Attachment #542934 - Flags: review?(nnethercote)
Attachment #541401 - Attachment is obsolete: true
I don't think we want to check this in without first auditing all the low-memory listeners and making sure that either:

 * it's OK to run them once a second when there's memory pressure, or
 * they back off when they're called too often.

I considered making the LowMemoryDetector back off and not fire lots of events when there's memory pressure, but I think there's an end-to-end argument in favor of pushing the decision about how often to do things like GC down to the code which understands the GC.

I'm going to do this audit next, and I'll report back.
There are actually more places that listen to memory-pressure than I thought!

The one I worry about most is gc/cc, especially since if we're swapping, gc/cc might be more expensive than usual.

It seems like the right thing to do is trigger a gc/cc immediately when we see the first notification in some period of time, but from then on only gc/cc a bit more aggressively, until memory pressure subsides.

njn, do you have any thoughts on how we should do this?
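
For illustration, one possible shape of that throttling policy as a sketch (the names and the 10-second window are made up here, not taken from any patch):

    #include <ctime>

    // Placeholder for whatever actually triggers the GC/CC.
    static void RunGCAndCC() {}

    // React to the first memory-pressure notification immediately, then at
    // most once every kMinSecondsBetweenGCs while notifications keep arriving.
    static const time_t kMinSecondsBetweenGCs = 10;  // illustrative
    static time_t gLastPressureGC = 0;

    void OnMemoryPressure()
    {
      time_t now = time(nullptr);
      if (gLastPressureGC == 0 ||
          now - gLastPressureGC >= kMinSecondsBetweenGCs) {
        RunGCAndCC();
        gLastPressureGC = now;
      }
      // Otherwise do only cheaper cleanup (drop caches, etc.).
    }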
Testing done:

* On Windows, I tested that vsize notifications are fired when vmem usage gets too high.  I just realized I haven't tested the low physical memory notifications in this latest iteration of the patch, and I'll do that in the morning.

* On Linux, I tested that memory notifications are fired when RAM is constrained, using the process at [1].

My mac has 8G of RAM, and the process at [1] doesn't work on Mac, so I haven't tested there.  But the code is exactly the same as on Linux.

I haven't done any testing to ensure that the memory notifications actually do something useful.  I think we might as well do that in separate bugs.  But of course we don't want to land this until we know that it's at least not deleterious to fire so many low-memory events.

[1] http://jlebar.com/2011/6/15/Limiting_the_amount_of_RAM_a_program_can_use.html
Comment on attachment 542934 [details] [diff] [review]
Patch v1

Review of attachment 542934 [details] [diff] [review]:
-----------------------------------------------------------------

Let me double check how this works (forgive me if I'm duplicating anything you've written).  The default behaviour:

- On Linux/Mac
  - If the number of hard page faults since the last check (1 second ago) exceeds 100, fire a PHYSICAL notification.

- On Windows:
  - If the available virtual memory is less than 256MB, fire a VIRTUAL notification.
  - If the available physical memory is less than 32MB, fire a PHYSICAL notification.

It'd be good to have comments like that in HasMemoryPressure(), even though the code isn't complicated, to make things super clear.

So the code looks pretty good, ie. it does what you intend it to.  However, I'm a Gecko newbie, so I strongly recommend you get someone else to review this (and then someone else to super-review).  I looked closely at the memory pressure stuff, but I don't know that much about observers and mutexes and initializing children and all that stuff.  So I've given feedback+.

The big questions are all on the heuristic side:  are these the right measurements to be taking, are they taken often enough... ie. does it actually work?  As you say, that'll require checking individual listeners, and it shouldn't land until that has happened.  And then, this will need some time to bake, I suggest landing it early in the release cycle if possible.

Another question: does checking every 1 second cause problems with mobile devices?

Also, it's a concern that there are no tests, but it's hard to know what tests for this code would look like.  Maybe the prefs could be changed so that the notifications are fired very frequently?  Not sure.

::: toolkit/components/telemetry/TelemetryHistograms.h
@@ +50,5 @@
>  HISTOGRAM(MEMORY_JS_GC_HEAP, 1024, 512 * 1024, 10, EXPONENTIAL, "Memory used by the garbage-collected JavaScript heap (KB)")
>  HISTOGRAM(MEMORY_RESIDENT, 32 * 1024, 1024 * 1024, 10, EXPONENTIAL, "Resident memory size (KB)")
>  HISTOGRAM(MEMORY_LAYOUT_ALL, 1024, 64 * 1024, 10, EXPONENTIAL, "Memory used by layout (KB)")
> +HISTOGRAM(LOW_MEMORY_EVENTS, 1, 256, 4, EXPONENTIAL, "Number of low-memory events fired by LowMemoryDetector (since last telemetry ping)")
> +HISTOGRAM(LOW_VMEMORY_EVENTS, 1, 256, 4, EXPONENTIAL, "Number of low virtual memory events fired by LowMemoryDetector (since last telemetry ping)")

Nit: Inconsistent hyphenation, "low-memory" vs. "low virtual memory".

More substantial point:  you have an enum with
MEMORY_PRESSURE_PHYSICAL, MEMORY_PRESSURE_VIRTUAL.  So you should talk about "low physical memory" and "low virtual memory" and avoid phrases like "low memory"... I'd like you to scrupulously distinguish physical and virtual memory throughout the whole patch, it makes things much clearer.

::: xpcom/base/LowMemoryDetector.cpp
@@ +81,5 @@
> + *    check_interval_ms do we need to observe to declare that memory is low?
> + *
> + * - low_memory_threshold_mb (megabytes, Windows only)
> + *    If we're detecting memory pressure by monitoring the amount of available
> + *    memory, what amount of available memory do we consider to be "low"?

Similar: virtual or physical memory?  Presumably the former.

@@ +111,5 @@
> +  nsnull
> +};
> +
> +// Mark these as volatile since they may be read and written on different
> +// threads.  Volatile keeps the compiler from transforming this

This scares me.  Is it a standard Mozilla idiom?

@@ +132,5 @@
> +
> +void
> +ReloadPrefs()
> +{
> +  // You should probably keep these defaults sync'ed with all.js.

It kinda sucks that the defaults are specified twice, is that unavoidable?

@@ +169,5 @@
> +  UNITS_COUNT,
> +  GetNumLowMemoryEvents,
> +  "Number of times the process detected that the system was low on available "
> +  "physical memory and tried to reduce its footprint.  On Linux and Mac, we "
> +  "detect low memory by observing page hard page faults in the Firefox "

"page hard page"

Also, other reporters have avoided mentioning "Firefox", using "the application" or similar.  Seems a good idea in case this code ends up in some other product.  "The application" could replace "we", too.  (Oh, I see you used "the process" for low-vmemory-events, that's fine too.)

@@ +402,5 @@
> +#include <inttypes.h>
> +
> +// Initialize mLastNumPageFaults to -1 so we don't fire a low-memory
> +// notification the first time we read this value -- cold startup produces many
> +// hard page faults.

Won't the "don't send any notifications for the first N seconds" feature protect against that?  Then we could initialize it to 0, and then HasMemoryPressure() wouldn't need to check for -1.

@@ +506,5 @@
> +    NS_WARNING("GlobalMemoryStatusEx call failed.");
> +    return MEMORY_PRESSURE_NONE;
> +  }
> +
> +  // It's no extra work to check for vmemory presssure on 64-bit processes, so

presssure!

::: xpcom/base/LowMemoryDetector.h
@@ +116,5 @@
> +  /**
> +   * Increment our count of the number of memory pressure events we've fired
> +   * because we're running out of available physical memory.
> +   */
> +  void IncrementNumLowMemoryEvents();

As above:  it would be clearer if |Memory| was changed to |PMemory| or |PhysicalMemory|.
Attachment #542934 - Flags: review?(nnethercote) → feedback+
Summary: Generate better low-memory events by tracking RSS, hard page faults, unused address space → Generate better low-memory events by tracking hard page faults (Linux/Mac) or available physical & virtual memory (Windows)
Attached patch Patch v2Splinter Review
bsmedberg, are you interested in taking a look at this?
Attachment #543150 - Flags: review?(benjamin)
Attachment #542934 - Attachment is obsolete: true
(In reply to comment 47)
> Let me double check how this works (forgive me if I'm duplicating anything you've written).  The default behaviour:
> 
> - On Linux/Mac
>   - If the number of hard page faults since the last check (1 second ago) exceeds 100, fire a PHYSICAL notification.
> 
> - On Windows:
>   - If the available virtual memory is less than 256MB, fire a VIRTUAL notification.
>   - If the available physical memory is less than 32MB, fire a PHYSICAL notification.

Yes, that's right.  I've added comments to this effect.
> 
> The big questions are all on the heuristic side:  are these the right
> measurements to be taking, are they taken often enough... ie. does it
> actually work?  As you say, that'll require checking individual listeners,
> and it shouldn't land until that has happened.  And then, this will need some
> time to bake, I suggest landing it early in the release cycle if possible.

I agree.  To be clear, there are two (mostly) separate "does it work?" questions:

 * Do the memory pressure notifications get fired at the right time?  It has
   to be before we're totally out of memory, since things like the CC allocate
   memory, and since walking memory in a GC is going to be bad if we're paged
   out.  But it shouldn't be too much before we're out of memory, because we
   don't want to drop caches and whatnot unnecessarily.

 * Do the memory-pressure observers react well to the memory pressure
   notifications?  If it's expensive to run the observer (e.g. the CC), does it
   back off appropriately upon getting one memory-pressure report a second?

> Another question: does checking every 1 second cause problems with mobile devices?

It's currently disabled on mobile:

  // The UnixLowMemoryDetector shouldn't break on Android/Maemo, but Android
  // has no swap and swap is optional on Maemo.  The Unix detector works by
  // noticing when we swap, so is unlikely to be useful on these platforms.

> Also, it's a concern that there are no tests, but it's hard to know what tests for this code would look like.  Maybe the prefs could be changed so that the notifications are fired very frequently?  Not sure.

I have no idea how to test this.  The problem isn't the frequency of the notifications so much as the fact that we're observing system events.

I guess I could create some JS objects which use a lot of memory and then check that the notifications are fired.  But I'm not sure I could do this in a way which consistently doesn't cause us to run out of virtual address space...

> @@ +111,5 @@
> > +  nsnull
> > +};
> > +
> > +// Mark these as volatile since they may be read and written on different
> > +// threads.  Volatile keeps the compiler from transforming this
> 
> This scares me.  Is it a standard Mozilla idiom?

I could guard these with a lock -- is that what you were asking? -- but it just
felt unnecessary.  (I made that call before there were any locking operations in
the loop; since then I've had to add a mutex for the sleep call.)

I think it's safe as it is, because aiui, if atomic operation A happens-before
atomic operation B, all writes which occur before A are visible after B.  So
the changes to the prefs are visible as soon as we, for instance, take a lock
on each thread, which surely will happen within one iteration of the detector
loop.

Whether we should be relying on this behavior is another question entirely.
Let's see what the next reviewer thinks.

> > +void
> > +ReloadPrefs()
> > +{
> > +  // You should probably keep these defaults sync'ed with all.js.
> 
> It kinda sucks that the defaults are specified twice, is that unavoidable?

It is as far as I know.

> @@ +402,5 @@
> > +#include <inttypes.h>
> > +
> > +// Initialize mLastNumPageFaults to -1 so we don't fire a low-memory
> > +// notification the first time we read this value -- cold startup produces many
> > +// hard page faults.
> 
> Won't the "don't send any notifications for the first N seconds" feature protect against that?  Then we could initialize it to 0, and then HasMemoryPressure() wouldn't need to check for -1.

Suppose cold startup causes 100 page faults.  We initialize mLastNumPageFaults
to 0 and wait N seconds before checking.  When we check, we discover that there
are 100 more page faults than mLastNumPageFaults, so we fire a memory-pressure
notification.
Version: unspecified → Other Branch
Comment on attachment 543150 [details] [diff] [review]
Patch v2

Taras, can you take a look at the Telemetry changes here?
Attachment #543150 - Flags: review?(tglek)
Comment on attachment 543150 [details] [diff] [review]
Patch v2

It doesn't make sense to me to poll for OOM on a timer. I think a better way would be to devise an active timer: ie the opposite of nsIIdleService.

Stick some time tracking into the event loop, i.e. every time an event passes and the elapsed time exceeds some interval, fire some callback.

Telemetry would benefit from this too.
Attachment #543150 - Flags: review?(tglek) → review-
In this specific case piggyback onto the cycle collector would work too.
(In reply to comment #51)
> It doesn't make sense to me to poll for OOM on a timer. I think a better way
> would be to devise an active timer: ie the opposite of nsIIdleService.

We can become low on physical memory even if we're idle.  Suppose the user minimizes Firefox and loads up Photoshop.  The hope is that we can catch this early and free up a whole bunch of memory, potentially keeping the system from swapping FF out and making us load back up more quickly.

The *nix code won't catch this case, since it's looking for page faults in the FF process, but the Windows code will, since it looks at overall available memory on the system.  I'm looking into doing something similar on *nix, if only since mobile can't swap.

That said, it would probably make sense that if we're idle and have fired one low memory notification, we stop checking until we're no longer idle.  Do you think this is reasonable, Taras?

Of course, this whole bug doesn't even make a lot of sense unless we can drop some serious RAM when we notify.  Dropping all of bfcache might help (it currently does this on memory-pressure), but maybe there's more we can do.  (Now that images are discarded on a 10s timer, they're kind of out of the picture.)
(In reply to comment #53)
> (In reply to comment #51)
> > It doesn't make sense to me to poll for OOM on a timer. I think a better way
> > would be to devise an active timer: ie the opposite of nsIIdleService.
> 
> We can become low on physical memory even if we're idle.  Suppose the user
> minimizes Firefox and loads up Photoshop.  The hope is that we can catch
> this early and free up a whole bunch of memory, potentially keeping the
> system from swapping FF out and making us load back up more quickly.
> 

Aside: personally I would drop memory when the user minimizes the browser anyway. Android needs to do it (not sure if it does yet); dunno about desktop.

I seriously doubt that Firefox would sit without spinning the event loop for too long. I object to adding yet another reason to spin it.
Note Linux will fault on Android, since memory-mapped files (aka libraries) act as swap.
(In reply to comment #54)
> I seriously doubt that Firefox would sit without spinning the event loop for
> too long. I object to adding yet another reason to spin it.

Could you rephrase this?
(In reply to comment #56)
> Could you rephrase this?

<jlebar> taras, You're saying that the "idle" state is rare in practice.
<taras> yes
<jlebar> taras, But that it's important that when we're in this state, we don't wake up and check to see whether we're out of memory.
<taras> yes
<jlebar> taras, Well, why is it so important if that state doesn't happen so often?
<taras> it is important to not wake up too much to save power, unless you are asking something else
<taras> so while we wake up too often as is, we should move towards waking up less often, not more
<jlebar> taras, Okay, I buy that.  :)

(In reply to comment #55)
> Note Linux will fault on Android, since memory-mapped files (aka libraries)
> act as swap.

This contradicts dougt in comment 14:

> justin - right, no swapping on most devices.  the n900/maemo had an option
> to enable swap to sdcard, but that is generally the exception.

Can you guys duke it out?
(In reply to comment #57)

> > justin - right, no swapping on most devices.  the n900/maemo had an option
> > to enable swap to sdcard, but that is generally the exception.
> 
> Can you guys duke it out?

Doug said no swapping, not no paging :)
Minor suggestion: The threshold prefs are OS-specific so it might be a good idea to #ifdef them out of all.js when not applicable so people don't see non-functioning prefs in about:config if they try to tweak these settings. (the Preferences::GetInt() calls have defaults so the rest of the usages don't really need cluttering up with #ifdefs, though)
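
For example, something along these lines in all.js (the pref names here are hypothetical; the defaults mirror the Windows thresholds mentioned in the review above):

    #ifdef XP_WIN
    // Hypothetical pref names -- use whatever the patch actually calls them.
    pref("memory.low_memory_detector.low_memory_threshold_mb", 32);
    pref("memory.low_memory_detector.low_vmemory_threshold_mb", 256);
    #endif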
Comment on attachment 543150 [details] [diff] [review]
Patch v2

I'll see if I can make something which wakes us up less-often.
Attachment #543150 - Flags: review?(benjamin)
Jesse pointed out:

> The Mac "Activity Monitor" shows a system-wide value called "page
> outs". That might be useful to record along with the number of hard
> page faults Firefox encounters. For example, if "page outs" is 0 (like
> it is for me right now), then we know that none of the hard page
> faults encountered by Firefox are the result of swapping.

It's something to look into, although at least on my computer, we don't page much after startup except when there's memory pressure.

I'm starting to think that the current Windows approach (look at available bytes) makes more sense than the current *nix approach (look at page faults), since on *nix if the user

 1. minimizes FF,
 2. loads Photoshop, which eats all RAM and causes FF to be paged out,
 3. closes Photoshop, freeing a bunch of RAM,
 4. reopens FF

we'll only notice memory pressure during the last step, when we get faulted back in.  But that's not the time to free up memory!
See bug 669120 for a simpler approach which doesn't involve timer threads or heuristics.  I'm beginning to think that bug may be a better place to start, at least in terms of FF being a good citizen and not obstructing other applications' use of memory.
I think the way forward here is:

 * Bug 669120 - Fire a memory-pressure event when we lose focus for X seconds.  This way, we don't have to check for low memory when we're idle.

 * Check for low memory using the same or a similar heuristic to the new periodic GC code.

The assumption here is that it's likely that if some other program starts using lots of memory, it'll be while we're not focused.  And this strategy doesn't involve any new timer threads.
The GC timer is bug 656120.
Note that, now that I think of it, there are ways, at least on Linux, to know if the system is swapping or not, e.g. the taskstats API, which we're probably going to use to get a more accurate process startup time (and which is quite painful to set up, so that part could be shared).
I filed a new bug just for tracking Windows vmem: bug 670967.  It seems to me that running out of address space on Windows is the worst manifestation of this issue, and I think I may be able to address that without adding a timer thread.
Depends on: 670967
For those following along at home, I've morphed bug 670967 into tracking both virtual and physical memory on Windows by wrapping VirtualAlloc and the other virtual allocation syscalls.
Blocks: 660577
For Windows, did you try using the CreateMemoryResourceNotification function for the LowMemoryDetector? From the MSDN description it looks to be exactly what you are looking for here. Its advantage over the currently used solution is that it avoids time-based polling: you can just include the returned handle in the main event loop and wait for it to become signaled.
Ah, the hidden gems of the WinAPI.  CreateMemoryResourceNotification looks like it does exactly what I'd want; thanks for pointing it out!

My current approach in bug 670967 is to watch only for low virtual memory, for reasons explained in the bug.  It doesn't look like CreateMemoryResourceNotification helps here.  I'm currently watching the amount of available virtual memory by wrapping calls to VirtualAlloc and a few other functions.
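
For reference, a minimal standalone sketch of that API (illustrative only, not from any patch here):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
      // Signaled by the OS when available physical memory is low; the handle
      // can also be passed to WaitForSingleObject / the main event loop wait.
      HANDLE lowMem =
          CreateMemoryResourceNotification(LowMemoryResourceNotification);
      if (!lowMem) {
        fprintf(stderr, "CreateMemoryResourceNotification failed\n");
        return 1;
      }

      BOOL state = FALSE;
      if (QueryMemoryResourceNotification(lowMem, &state)) {
        printf("low-memory condition: %s\n", state ? "yes" : "no");
      }

      CloseHandle(lowMem);
      return 0;
    }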
Comment on attachment 581179 [details] [diff] [review]
Rev 2, part 3 v1: Pass env var when running PGO-instrumented build

Oops; wrong bug.
Attachment #581179 - Attachment is obsolete: true
Attachment #581179 - Flags: review?(ted.mielczarek)
jlebar, can you remind me how this bug relates to bug 670967?  Is this bug now only for Mac and Linux?
> Is this bug now only for Mac and Linux?

Right.
Summary: Generate better low-memory events by tracking hard page faults (Linux/Mac) or available physical & virtual memory (Windows) → Generate better low-memory events on Linux and Mac by tracking hard page faults
Summary: Generate better low-memory events on Linux and Mac by tracking hard page faults → Generate better low-memory events on Linux and Mac (and Android?) by tracking hard page faults
Whiteboard: [MemShrink:P1] → [MemShrink:P2]

The bug assignee hasn't logged in to Bugzilla in the last 7 months.
:overholt, could you have a look please?
For more information, please visit auto_nag documentation.

Assignee: justin.lebar+bug → nobody
Flags: needinfo?(overholt)
Flags: needinfo?(overholt)

I assume the recent tab unloading work has superseded this.

Status: NEW → RESOLVED
Closed: 2 years ago
Resolution: --- → INCOMPLETE
See Also: → 1587762