Closed Bug 623428 Opened 13 years ago Closed 13 years ago

TM: avoid bloat caused by multiple mReserve arrays in VMAllocator

Categories: Core :: JavaScript Engine, defect
Status: RESOLVED FIXED
Tracking: blocking2.0 --- betaN+
People: (Reporter: n.nethercote, Assigned: n.nethercote)
Keywords: memory-footprint
Whiteboard: [cib-memory][hardblocker], fixed-in-nanojit, fixed-in-tracemonkey
Attachments: (4 files, 1 obsolete file)

Bug 584860 changed TraceMonitors from per-thread to per-compartment.  This potentially creates a lot more of them.  The trouble with this is that each TraceMonitor creates three VMAllocators, and VMAllocator is big -- 256KB on 32-bit platforms, 512KB on 64-bit platforms.

Why so big?  Because of VMAllocator::mReserve:

    /*
     * FIXME: Area the LIR spills into if we encounter an OOM mid-way
     * through compilation; we must check mOutOfMemory before we run out
     * of mReserve, otherwise we're in undefined territory. This area
     * used to be one page, now 16 to be "safer". This is a temporary
     * and quite unsatisfactory approach to handling OOM in Nanojit.
     */
    uintptr_t mReserve[0x10000];

(Nb: the comment is wrong: 0x10000 uintptr_t entries is 256KB or 512KB, which is 64 or 128 pages assuming 4KB pages, not 16.)

In a session where I open 20 comics from www.cad-comics.com/cad/ I get a peak of 26 compartments live at once, which is about 40MB of mReserve space!  And bug 620833 comment 4 has a link to some graphs showing an increase in space usage in one of the Talos tests.

The mReserve hack is really awful, for two reasons.

First, it's not guaranteed to be big enough.  In debug builds we'll assert if it's not big enough, but in optimized builds we'll just hand out the reserve even though it's too small.

Second, if multiple allocations occur in Nanojit after OOM occurs but before TM detects it, each of those allocations gets handed the *same* pointer -- |&mReserve[0]|!  So different pieces of code will be reading and writing a potentially shared chunk of memory without realizing it's shared!  This is a great recipe for a crash or buggy behaviour.

Judging from the comment above -- the "before we run out of mReserve" phrase -- this second property wasn't intended.  But it's been this way since at least 1.9.2, and probably 1.9.1.  Bug 504462 was the one where it was merged in from Adobe's version of Nanojit.  Bug 504462 comment 15 has this quote from Graydon:

>  this will die as soon as we figure out which of the saner forms of OOM
> handling we want to try (my vote is still with longjmp). This is just the 
> "work the same as currently" stopgap.

Sigh.  (See also bug 559707 for more OOM-in-Nanojit crappiness.)

Graydon, do you remember if the second property (all post-OOM allocations get the same block) was deliberate?  If so, the obvious and easy solution to this bloat is to make that mReserve section global and shared by all VMAllocators.

A better solution would be to do something properly with OOM in Nanojit, but that's a lot harder.  Double sigh.
Blocks: 598466
No longer blocks: 598466
blocking2.0: --- → ?
Whiteboard: [cib-memory]
Blocks: 598466
Oh dear. This is something I'd hoped had been dealt with. The meaningless panacea of the FIXME :(

(Ah, right, uintptr_ts vs. bytes. Yes, of course, it's 4x or 8x as big as intended..)

I agree there's no guarantee. Don't like this at all. But the fact is that NJ's "new" pool-based allocation system was built with the assumption that a single LIR-emitting event can be made an infallible action.

IMO the correct fix is to remove mReserve, bite the bullet and abort() in the mid-emit OOM case. Better Firefox falls over than the user gets owned by a memory error. I argued that at the time and argue it now.

If we think that our space-checking logic is so flaky that we're "only within 16 pages" accurate on our polling of the mOutOfMemory flag, then ... why not just change the mOutOfMemory check to be sure that it's more than 16 pages from the end of the pool? 

As far as the second property, I don't know if it's intentional; any rationale for it I may have convinced myself of would have long since vanished from my head. But it certainly seems like a lousy idea. Maybe the rationale was that nobody reads LIR back during a write, so it's safe to just keep recycling the front part. I guess you could change that.

But I really think doing that would just be adding kludge atop kludge. We only picked this mechanism because, for reasons lost to my fuzzy brain, someone was holding out insisting We Must Never Call abort(), even when the process is entering the land of undefined territory and memory errors.

Which, as any right-thinking person can see, is crazy talk. I don't know why I accepted that argument back then. Possibly I was at the end of my rope..
Also note that, as these pools periodically overflow and get reset anyways, aborting on this event should not be a way to DoS us. If it is, it'd only be due to wrong overflow-polling logic. In theory, any user-provided "giant DoS-attempting input" should just cause repeated filling-and-flushing of the pools.

IOW, this should not be thought of using the neurons you usually use to decide if "crash-on-OOM" is appropriate behavior. More like "crash on array bounds-check failure". It means the logic's wrong. If user input can provoke that, you have bigger fish to fry.
(In reply to comment #1)
> 
> IMO the correct fix is to remove mReserve, bite the bullet and abort() in the
> mid-emit OOM case. Better firefox falls over than the user gets owned by a
> memory error. I argued that at the time and argue it now.

You mean abort() as in halt the Firefox process, right?

> As far as the second property, I don't know if it's intentional; any rationale
> for it I may have convinced myself of would have long since vanished from my
> head. But it certainly seems like a lousy idea. Maybe the rationale was that
> nobody reads LIR back during a write, so it's safe to just keep recycling the
> front part. I guess you could change that.

But those allocators are used not just for LIR, but all kinds of data structures and other stuff :(

> But I really think doing that would just adding kludge atop kludge. We only
> picked this mechanism because, for reasons lost to my fuzzy brain, someone was
> holding out insisting We Must Never Call abort(), even when the process is
> entering the land of undefined territory and memory errors.
> 
> Which, as any right-thinking person can see, is crazy talk. I don't know why I
> accepted that argument back then. Possibly I was at the end of my rope..

The Mozilla/Adobe NJ merge was an unpleasant process! :)


(In reply to comment #2)
> also note that, as these pools periodically overflow and get reset anyways,
> aborting on this event should not be a way to DoS us. If it is, it'd only be
> due to wrong overflow-polling logic. In theory, any user-provided "giant
> DoS-attempting input" should just cause repeated filling-and-flushing of the
> pools.

You've confused me now -- again, does "abort" mean halt the Firefox process?  I also don't understand the overflow-polling stuff... are you still assuming these allocators are only used for LIR?


The proper fix is to use setjmp/longjmp, IMO.  I'm going to try to do that.  Just aborting recording on an OOM seems like the right thing to do.
I looked at what Tamarin does.  AIUI, if its allocator fails, it tries to do a GC and then redoes the allocation;  if the GC doesn't free up enough memory, it does a hard abort.  No longjmp/setjmp.
Graydon, we found out the hard way that it is way too easy to hit OOM without being really out of memory. See bug 605752 and bug 603077 at least.

Can we make a C++ auto helper and declare it in the entry points, or do something even tighter to decorate those methods? From past bugs I thought that the nanojit design supported this by not relying on dtors or other things that require unwind protection.

/be
(In reply to comment #3)

> You mean abort() as in halt the Firefox process, right?

Yeah. I mean, the original argument that failure was "safe" rested on returning null, and the belief that deref-null is a "safe crash". Whatever. Execute *((int*)0) if that feels better. "Get out of the situation".

> But those allocators are used not just for LIR, but all kinds of data
> structures and other stuff :(

Doesn't change the analysis. But the relationship between pools and hard-OOM does. I was mis-remembering. See below.

> The Mozilla/Adobe NJ merge was an unpleasant process! :)

Yeah. I feel bad for letting this sort of thing through though. Not an excuse.

> You've confused me now -- again, does "abort" mean halt the Firefox process?  

Yes. If I say 'abort' I mean process-abort, hard error. But the situation is more complex than I'd hoped. I forgot a lot of this context. Reloading.

> also don't understand the overflow-polling stuff... are you still assuming
> these allocators are only used for LIR?

No, but I was assuming we were talking about pool-drained events, not hard OS-level OOMs. There's a comment in OverfullJITCache that talks about this in some detail.

To clarify: in comment 1 I was discussing VMAllocator size-exceeded events, thinking that we shot into mReserve in those cases. I hadn't understood that you're talking about OS-level OOMs. So .. we do need to use our normal OOM-neurons for those. Sadly.

> The proper fix is to use setjmp/longjmp, IMO.  I'm going to try to do that. 
> Just aborting recording on an OOM seems like the right thing to do.

Figuring out how to get setjmp/longjmp in here is going to be a lot of work :(

(In reply to comment #5)

> Graydon we found out the hard way that it is way too easy to hit OOM without
> being really out of memory. See bug 605752 and bug 603077 at least.

Oh, I know OOMs at the OS level can be provoked; I thought we were talking about the oversized-pool case, which is .. a logic error, shouldn't be user-caused. But I forgot how this stuff fit together. The pools actually do permit endless growth. We just "try to rewind them when we realize we've grown too much". That's different than I was remembering.

> Can we make a C++ auto helper and declare it in the entry points, or do
> something even tighter to decorate those methods? From past bugs I thought that
> the nanojit design supported this by not relying on dtors or other things that
> require unwind protection.

NJ does avoid dtors, yes. But I have no idea if they tested that much, and I'm pretty sure our stuff hasn't been tested like that. Switching to longjmp here strikes me as a serious last minute swerve.

We *talked* about this, and I *think* everything's allocated within the VMAllocators. So longjmp'ing out to a place where you can just reset those and try again later might be *possible*. But you have to be very certain nothing else needs destructing. This is a big bet. Maybe call that plan A, maybe plan B.

You'll notice that all the places that check OverfullJITCache() also check outOfMemory(). Since at least the former case happens frequently in the field, we are presently resetting-the-JIT-and-recovering thousands of times a day with nobody complaining. The former is the cache-size-exceeded check, the latter is OS-level OOM; it happens that we already have lovely conditional branches that back out handling the former, but if the latter is true it will trigger the same conditional recovery: back out, reset the JIT. 

So here is plan B (or A), which doesn't involve introducing longjmp in a post-beta-8 world: fix the mReserve logic to be "better":

  - Make a single reserve-region per thread, not per VMAlloc, fixing the initial concern about memory bloat.

  - Advance a reserve-alloc pointer through the reserve buffer, solving njn's second concern about everything being allocated at &mReserve[0].

  - Hope that, in the period of time between the OS signalling OOM and the time when we hit the end of mReserve, that one of those existing checks to localtm->outOfMemory() notices, and backs out.

  - In the worst case, if we've already used up mReserve and we're being asked for *more*, abort() then.

  - Keep turning up the size of the per-thread mReserve until you see the abort() calls related to this drop to zero.

Keep in mind that the user provoking an OS OOM will not directly cause an abort() in this scenario. The abort() only fires on the *combination* of the user provoking an OS OOM as well as managing to convince the JIT to chew up more than mReserve bytes in a VMAllocator between one check to outOfMemory and the next. I *hope* that there is a size for mReserve where that is actually impossible; there ought to be an upper bound to the amount of memory TM will grow the VMAllocators by between outOfMemory checks.

I mean ... it checks after every single jsop it traces. You just have to make the reserve big enough to handle that-which-will-be-VMAllocated within the worst single jsop's compilation.
A question about testing:  SpiderMonkey has scads of OOM-handling code -- all those null/false checks.  Do we actually test it at all?  Other than the occasional bit of manual fault-injection a la bug 603077?  I suspect not, and that scares me a little.
(In reply to comment #5)
> Graydon we found out the hard way that it is way too easy to hit OOM without
> being really out of memory. See bug 605752 and bug 603077 at least.

That's because the method JIT allocates like a drunken sailor, see bug 615199.  I found the VMAllocator problem precisely because I was doing some space profiling in an attempt to reduce Firefox's excessive memory consumption.


> Can we make a C++ auto helper and declare it in the entry points, or do
> something even tighter to decorate those methods? From past bugs I thought that
> the nanojit design supported this by not relying on dtors or other things that
> require unwind protection.

I'm not sure what you mean by an "auto helper" there.  You're right that NJ requires little (if any) unwind protection because (I think, I haven't checked closely) it does all its allocation through these VMAllocators... except for the allocated native code chunks, they probably need extra handling.

But there are a couple of problems.  First, there isn't a small set of entry points.  jstracer calls into NJ a lot -- in particular, all those LIR insertion functions, which are everywhere.  Second, jstracer uses those VMAllocators for a lot of its own allocations.  (To give you an idea, there's only one js_malloc() call in all of jstracer.cpp, and it should probably be a VMAllocator allocation instead;  in other words, all the memory allocations in jstracer use VMAllocator.)  So even if NJ is designed to not need unwinding protection, jstracer isn't. 

In principle it seems like it shouldn't be hard -- everything in jstracer should have the lifetime of the TraceMonitor at most, so allocating everything in a small number of arenas and aborting/flushing them on OOM should work.  But jstracer is too complicated to do that;  if it was rewritten from scratch with this allocation strategy in mind it might work.


(In reply to comment #6)
> 
> > The proper fix is to use setjmp/longjmp, IMO.  I'm going to try to do that. 
> > Just aborting recording on an OOM seems like the right thing to do.
> 
> Figuring out how to get setjmp/longjmp in here is going to be a lot of work :(

Yeah, I started a patch, bleh.  I was hoping a single setjmp() on the |new TraceRecorder()|, would suffice, but I don't think it will because jstracer uses the VMAllocators for its own purposes so extensively.

> So here is plan B (or A), which doesn't involve introducing longjmp in a
> post-beta-8 world: fix the mReserve logic to be "better":

That sounds good.  It's depressing that we need this crappy reserve strategy, it's all so unclean.

> I mean ... it checks after every single jsop it traces. You just have to make
> the reserve big enough to handle that-which-will-be-VMAllocated within the
> worst single jsop's compilation.

There are some other cases to worry about, eg. the entire assembly phase is done in a single call to assm->compile(), with OOM checks on either side.  But I don't think that call will allocate too much, it's mostly allocating native code chunks, and uses a different allocator to do so (one which also is assumed to be fallible but isn't, sigh, bug 559707).

I'll write some instrumentation code that prints out how much memory is allocated between each outOfMemory check, and browse for a while to see what numbers come up on real websites, and then I'll double it, or something.
Just adding what little history I can.  At the time we introduced nanojit::Allocator and CodeAlloc, we rejected the idea of allocation checking everywhere in the code because we thought it would be unreliable and hard to get right.  (worst of two evils, the other being the current one).

The design intentionally defers OOM handling to the vm -- Allocator and CodeAlloc both call TM/TR api's to get memory, and expects them to succeed or not return, either via setjmp, exit, thread stoppage, whatever can be made to work.  

The various utility classes in nanojit all assume destructors may not run, and we do take pains to never call malloc or default-new, although (hmm) we could audit this better. (dehydra?).

Sounds like the core issue is that nanojit is designed for out-of-memory to be handled by some kind of nonlocal jump, and further that if such a jump occurs, data structures will be in unknown states and the only reasonable action is to free them en masse (CodeAlloc or Allocator).

I'm open to changing this aspect of nanojit, or collaborating on other design changes, even if it's hefty work.
(In reply to comment #8)

> That's because the method JIT allocates like a drunker sailor, see bug 615199. 
> I found the VMAllocator problem precisely because I was doing some space
> profiling in an attempt to reduce Firefox's excessive memory consumption.

Once again, I cannot overstate how pleased I am to see said space-profiling occurring. It's long overdue. Likewise (sigh) this renewed attention to OOM cases. I just wish that'd happened earlier.

> I'm not sure what you mean by an "auto helper" there.

I believe he means a transaction-like guard-object that resets the JIT in its dtor if it hasn't been "committed" on the successful code path. IOW just a helper for functions that have a lot of returns. Cheap way of avoiding refactoring control paths.

> You're right that NJ
> requires little (if any) unwind protection because (I think, I haven't checked
> closely) it does all its allocation through these VMAllocators... except for
> the allocated native code chunks, they probably need extra handling.

Those are allocated in CodeAllocs, which are also reset-able. We should ("should", knock-on-wood) be limiting allocations to the VMAllocator + CodeAlloc pools. I'm nervous about this change not in theory, just in practice: it's a lot to change, late in the game.

> But there are a couple of problems.  First, there isn't a small set of entry
> points.

Yes, there are. You don't want to look at the jstracer-calls-NJ entrypoints. You want to look at the jsinterp-calls-jstracer entrypoints. Of which there are few. See below.

> jstracer calls into NJ a lot -- in particular, all those LIR insertion
> functions, which are everywhere.  Second, jstracer uses those VMAllocators for
> a lot of its own allocations.  

That's ok. The point would be to longjmp out to the jstracer entrypoint and flush everything NJ *and* jstracer put in the VMAllocators and CodeAlloc. The paths that check outOfMemory already handle backing out in those cases. You want your setjmp to be in the same frame, and following the same logic, as the root-of-call-tree function that checks outOfMemory.

> (To give you an idea, there's only one
> js_malloc() call in all of jstracer.cpp, and it should probably be a
> VMAllocator allocation instead;  in other words, all the memory allocations in
> jstracer use VMAllocator.)  So even if NJ is designed to not need unwinding
> protection, jstracer isn't.

You'd need to audit for any dtors in jstracer that do anything other than reset VMAllocators or CodeAllocs. I'm not sure there are any. I think that may be the only thing we use dtors for in jstracer.

If you longjmp over those dtors but then proceed to reset the allocators anyways, they're safe to skip.

(Of course, you'd have to mop up that one rogue js_malloc(). Who put that in?!)

> In principle it seems like it shouldn't be hard -- everything in jstracer
> should have the lifetime of the TraceMonitor at most, so allocating everything
> in a small number of arenas and aborting/flushing them on OOM should work.  But
> jstracer is too complicated to do that;  if it was rewritten from scratch with
> this allocation strategy in mind it might work.

It *was* written with -- or, uh, evolved-toward having -- this allocation strategy in mind. It just never had the code path implementing the strategy (longjmp-and-reset-allocators) implemented. So in *theory* I think it can work. Theory and practice though. You know what they say.

If you have the time, try making such a patch. But personally I'd think of it more like plan B. Maybe plan A? I dunno. I'm a little shocked that we're still (I think?) not at a consensus point on the main browser code handling OOMs either.

> Yeah, I started a patch, bleh.  I was hoping a single setjmp() on the |new
> TraceRecorder()|, would suffice, but I don't think it will because jstracer
> uses the VMAllocators for its own purposes so extensively.

It's (supposedly) comfortable having those allocators reset at the places it currently calls ResetJIT(). Like, it just backs out, doesn't look at any of the VMAllocator-allocated structures while returning. In theory.

Putting your setjmp in recordTree is too deep though. That's an interior node in the call tree you're interested in. The roots of that call tree, to the best of my memory, are:

  - MonitorLoopEdge (interp -> record event: entrypoint for activating recorder)
  - TraceRecorder::monitorRecording (entrypoint for record-one-jsop)

Both of these are already fallible in the outOfMemory-check sense and, if they fail, leave the JIT (including everything in jstracer and NJ) "reset", meaning the VMAllocators rewound. So I think you could setjmp in them. I just don't know what other Magical Mystery Bugs you'll uncover.

There appears to be a new root from JaegerMonkey (since last I looked) called MonitorTracepoint, but it looks to me like it re-enters the interpreter in order to drive the above two, so probably accepts a recorder failing inside there. I hope. Talk to dvander and gal about the current set of call-tree roots. It's been over a year and a half since I was in this code, so even if there wasn't a shiny second JIT bolted on I wouldn't trust my memory. As it is, I'm sort of "feeling around" hoping I remember anything useful.

> > So here is plan B (or A), which doesn't involve introducing longjmp in a
> > post-beta-8 world: fix the mReserve logic to be "better":
> 
> That sounds good.  It's depressing that we need this crappy reserve strategy,
> it's all so unclean.

It sounds awful! But it's a plan. One of two. If you have time, do both. If you only have time for one, personally I ... would probably try to salvage the mReserve.

> There are some other cases to worry about, eg. the entire assembly phase is
> done in a single call to assm->compile(), with OOM checks on either side.  But
> I don't think that call will allocate too much, it's mostly allocating native
> code chunks, and uses a different allocator to do so (one which also is assumed
> to be fallible but isn't, sigh, bug 559707).

Yeah. That's still bounded by a single (loop-closing) jsop. But you're right that it could be a fair bit of codeAlloc pressure. I'd forgotten that. I dunno. Maybe sj/lj really is preferable.

> I'll write some instrumentation code that prints out how much memory is
> allocated between each outOfMemory check, and browse for a while to see what
> numbers come up on real websites, and then I'll double it, or something.

Yeah. Check the CodeAlloc numbers too. sj/lj might well be a smaller patch. I'm only hesitant in the abstract "ripping a hole in space is always a bit risky" sense. No concrete feelings about what will go wrong. Just the spidey sense.
Graydon's Plan B (or A), "reserve per JSThread" seems like the most direct path to a fix to me. Is there any clear reason not to do that?
blocking2.0: ? → betaN+
Whiteboard: [cib-memory] → [cib-memory][hardblocker]
(In reply to comment #11)
> Graydon's Plan B (or A), "reserve per JSThread" seems like the most direct path
> to a fix to me. Is there any clear reason not to do that?

It relies on picking a reserve size that you feel comfortable with as the threshold between "recoverable via reserve" and "unrecoverable". And having a backup-backup plan for when you hit the "unrecoverable" case. Like abort().

Otherwise you're leaving a known memory-error in a product. If I have to pick between potential-DoS and potential-wild-write, I'll go with DoS.
(In reply to comment #12)
> (In reply to comment #11)
> > Graydon's Plan B (or A), "reserve per JSThread" seems like the most direct path
> > to a fix to me. Is there any clear reason not to do that?
> 
> It relies on picking a reserve size that you feel comfortable with as the
> threshold between "recoverable via reserve" and "unrecoverable". 

That seems acceptable, in the sense that the plan is still a strict improvement from what we have.

> And having a backup-backup plan for when you hit the "unrecoverable" case. 
> Like abort().

Seems OK to me. Hopefully it can be very rare. Is that the purpose of the data-gathering Nick mentioned above?

> Otherwise you're leaving a known memory-error in a product. If I have to pick
> between potential-DoS and potential-wild-write, I'll go with DoS.

I agree.
(In reply to comment #10)
> 
> > But there are a couple of problems.  First, there isn't a small set of entry
> > points.
> 
> Yes, there are. You don't want to look at the jstracer-calls-NJ entrypoints.
> You want to look at the jsinterp-calls-jstracer entrypoints. Of which there are
> few. See below.

Ah, gotcha.

> It sounds awful! But it's a plan. One of two. If you have time, do both. If you
> only have time for one, personally I ... would probably try to salvage the
> mReserve.

Yeah, I'll start with that, and look at setjmp/longjmp if I have time.  As dmandelin said, fixing mReserve will be a strict improvement in what we have -- in both memory consumption and memory safety.
Keywords: footprint
I've been investigating the maximum amount of memory allocated between OOM checks.  Here are the allocation sizes of note (32/64 bit):

2004/2008:   the minimum chunk size;  most allocations are smaller than this
             (tempAlloc, traceAlloc, dataAlloc)
8004/8008:   the chunk size used for LIR buffers (tempAlloc)
2052/2056:   a common table size seen in CseFilter (can go higher:
             4100/4104, 8196/8200, 16388/16392, 32768/32776, etc, but the
             higher numbers are increasingly unlikely) (tempAlloc)
16692/33440: the size of Assembler, which is mostly the AR (debug builds; for 
             opt builds it is slightly smaller) (dataAlloc)

One thing to note is that each VMAllocator needs its own mReserve, because they aren't always reset() at the same time.  So I need to track the max between-OOM allocation sizes for each of the VMAllocators.

On 64-bit, I haven't seen numbers higher than this:
- traceAlloc: 2008
- tempAlloc:  65544
- dataAlloc:  33440

We could roughly double each of those, and use 4KB/128KB/64KB as the mReserve sizes on 64-bit, and half that on 32-bit (most of the data structures involve pointers, so halving is reasonable).  That would reduce the mReserve size by a factor of (512 * 3 / (4 + 128 + 64)) = 7.8.

An additional possibility is that we could introduce to Nanojit the notion of fallible allocations.  In particular, large CseFilter tables account for the largest individual allocations.  We could make those allocations fallible and if they fail, we just wouldn't grow the hash table.  This might result in missing some CSE opportunities, but that's no reason to abort.  This would reduce the likelihood of an abort(), or we could reduce tempAlloc's mReserve further.
For posterity, here's the instrumentation I added to get these measurements.
(In reply to comment #0)
>
> Second, if multiple allocations occur in Nanojit after OOM occurs but before TM
> detects it, each of those allocations get handed the *same* pointer --
> |&mReserve[0]|!  So different pieces of code will be reading and writing a
> potentially shared chunk of memory without realizing it's shared!  This is a
> great recipe for a crash or buggy behaviour.

Some more info about this... this assertion in allocChunk():

    JS_ASSERT(!vma->outOfMemory());

indicates that the author thought that the mReserve would only be needed to satisfy one chunk allocation.  But that's not the case, and when I inject occasional js_malloc() failures this assertion fails quickly.
(In reply to comment #10)
> 
> (Of course, you'd have to mop up that one rogue js_malloc(). Who put that in?!)

It's only used in debug builds and it's almost immediately freed -- it's just
used to briefly store the result of an sprintf() call.  So not a big deal.

> > There are some other cases to worry about, eg. the entire assembly phase is
> > done in a single call to assm->compile(), with OOM checks on either side.  But
> > I don't think that call will allocate too much, it's mostly allocating native
> > code chunks, and uses a different allocator to do so (one which also is assumed
> > to be fallible but isn't, sigh, bug 559707).
> 
> Yeah. That's still bounded by a single (loop-closing) jsop. But you're right
> that it could be a fair bit of codeAlloc pressure. I'd forgotten that. I dunno.
> Maybe sj/lj really is preferable.

It's not codeAlloc pressure that's the problem, it's that the Assembler pass is
a lot of code and might do a lot of VMAllocator allocation.  In fact, I found
that can be the case today.  traceAlloc is easy, it's just used for SideExits,
GuardRecords, and a couple of other small things, and is very small.  tempAlloc
is easy after I make CseFilter allocations fallible, leaving LirBuffer chunks
and some other small things.  But dataAlloc can have arbitrary amounts of
allocation between OOM checks because of the assm->compile() call -- there's a
table called the LabelStateMap which gets augmented when labels are crossed,
and I saw in v8-regexp this caused 62,248 bytes to be allocated on 64-bit
machines, and you could increase that arbitrarily.  Maybe I should make that
allocation fallible as well, and add an OOM value to the AssmError
enumeration... IIRC there was one there once upon a time.

> It relies on picking a reserve size that you feel comfortable with as the
> threshold between "recoverable via reserve" and "unrecoverable". And having a
> backup-backup plan for when you hit the "unrecoverable" case. Like abort().

Yeah;  if mReserve is too big you waste space (which can be a big problem, as
this bug shows), if it's too small you can abort, and working out the right
size is difficult.

setjmp/longjmp is a better approach in the long run if it can be made to work,
but it is a bit risky for Firefox 4.0.  (Having said that, AFAICT if anything
goes wrong with it we'll just end up with some memory leaks?)  So I'll stick
with the mReserve plan -- it'll reduce the memory use by at least 10x and fix
the current correctness problems with mReserve, at the cost of cluttering up
Nanojit with fallible allocations in CseFilter and Assembler.
Blocks: 624590
I opened bug 624590 as a post-4.0 follow-up to implement the setjmp/longjmp approach and fix various other warts.
Blocks: 622291
This patch:
- Adds fallible allocation to Nanojit's Allocator class, and uses it in
  CseFilter's constructor and in growNL() and growL().

- Makes MIN_CHUNK_SZB non-local and public.  I used it in TM for a bit, then
  removed the use, but it seems like a good change to keep.

I tested the new paths in CseFilter by injecting some failures manually.
Bug 624590 should allow for pretty much all of these changes to be undone
(sigh).  I hope you don't mind adding this crud in the short-term to help
us get Firefox 4.0 out the door.
Attachment #502710 - Flags: review?(edwsmith)
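The fallible-allocation idea in the NJ patch can be sketched in isolation. This is a minimal illustration, not Nanojit's actual Allocator API; `FallibleArena`, `allocFallible`, and `initOptionalTable` are made-up names:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstddef>

// Sketch of the two-mode allocator idea: the default alloc() path never
// returns NULL (here it aborts on failure; Nanojit hands out reserve
// space instead), while allocFallible() may return NULL so that optional
// consumers like CseFilter can switch themselves off instead of crashing.
struct FallibleArena {
    // Infallible path: callers never see NULL.
    void* alloc(size_t nbytes) {
        void* p = std::malloc(nbytes);
        if (!p)
            std::abort();       // OOM is fatal on this path
        return p;
    }
    // Fallible path: the caller must check for NULL and degrade gracefully.
    void* allocFallible(size_t nbytes) {
        return std::malloc(nbytes);
    }
};

// How a CseFilter-style consumer might use the fallible path: a false
// return means "skip this optimization, keep compiling".
bool initOptionalTable(FallibleArena& arena, void** table) {
    *table = arena.allocFallible(1024);
    return *table != nullptr;
}
```

The point of keeping both entry points on one arena is that only the few call sites audited for NULL handling (CseFilter's constructor, growNL(), growL()) opt in to fallibility; everything else keeps the old contract.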
This patch:
- Removes REHashFn, REHashKey, REFragment, REHashMap and reFragments, which
  were all dead (well, reFragments was created but never used meaningfully,
  and once I removed that the others became dead).

- Makes OutOfMemoryAbort() more informative.

- Shrinks the reserve space used in each of the three VMAllocators to
  something more reasonable, based on values I saw.  This will reduce the
  amount of space taken up by these reserve spaces by roughly a factor of
  10, reducing the 64-bit 20-tab cad-comic case from ~100MB to ~10MB.

- Updates allocChunk() et al to:
  - Allow for fallible requests;
  - March through the reserve space correctly;
  - Abort if the reserve is exhausted.

- Removes an assertion in finishAbort() that can be hit if an OOM occurs
  during the final patching phase of the Assembler pass -- fragment->code()
  can be non-NULL but we still end up aborting.

- Checks a bunch of allocations in TraceRecorder's constructor that weren't
  being checked, and would cause crashes if they failed.

- Skips CseFilter if an OOM occurs during its startup.

I tested the abort() case in allocChunk() by injecting some failures
manually.

For a 64-bit 20-tab cad-comic run, /proc/pid/status is giving me a ~40MB
drop in the VmPeak measurement with the patch applied.  Massif tells me that
~100MB of InitJIT() allocations are gone, but ~100MB of other allocations
came from elsewhere to make up for it;  I'm not sure what to make of that,
especially because /proc/pid/status and Massif don't agree on the Peak
measurements.  But I figure it's gotta be an improvement.
Attachment #502711 - Flags: review?(gal)
Attachment #502711 - Flags: feedback?(graydon)
Attachment #502711 - Flags: review?(gal) → review+
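The "march through the reserve space correctly" bullet is the fix for the shared-pointer bug described at the top of this report: after OOM, each infallible request must get a distinct slice of the reserve, not `&mReserve[0]` over and over. A standalone sketch of that shape (class name, sizes, and the `simulateOOM` testing hook are all illustrative, not the patch's actual code):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstdint>
#include <cstddef>

// Sketch of a reserve-backed chunk allocator in the spirit of the patched
// VMAllocator::allocChunk. ReserveWords is a made-up size.
class ReserveAlloc {
    static const size_t ReserveWords = 256;
    uintptr_t mReserve[ReserveWords];
    size_t mReserveCurr;        // bump pointer marching through the reserve
    bool mOutOfMemory;
    bool mSimulateOOM;          // testing hook: pretend malloc always fails

public:
    explicit ReserveAlloc(bool simulateOOM = false)
      : mReserveCurr(0), mOutOfMemory(false), mSimulateOOM(simulateOOM) {}

    bool outOfMemory() const { return mOutOfMemory; }

    void* allocChunk(size_t nbytes, bool fallible) {
        void* p = mSimulateOOM ? nullptr : std::malloc(nbytes);
        if (p)
            return p;
        if (fallible)
            return nullptr;     // caller promised to check
        // Infallible request failed: record OOM and hand out *distinct*
        // pieces of the reserve until the OOM flag is noticed.
        mOutOfMemory = true;
        size_t nwords = (nbytes + sizeof(uintptr_t) - 1) / sizeof(uintptr_t);
        if (mReserveCurr + nwords > ReserveWords)
            std::abort();       // reserve exhausted: got the numbers wrong
        void* r = &mReserve[mReserveCurr];
        mReserveCurr += nwords;
        return r;
    }
};
```

The `simulateOOM` flag mirrors the manual fault injection mentioned above: with it set, two consecutive infallible requests come back with different addresses, which is exactly the property the old single-pointer hack violated.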
(In reply to comment #22)
> drop in the VmPeak measurement with the patch applied.  Massif tells me that
> ~100MB of InitJIT() allocations are gone, but ~100MB of other allocations
> came from elsewhere to make up for it;  I'm not sure what to make of that,

Are you doing the two runs as repeatably as possible?  I run up to 60
billion insns with no user input, then quit, and take numbers from the
"steady state after idling for a while, just before quit" point.  This
is on the basis that that factors out all the time-dependent GC-style
activities and reflects the underlying minimum achievable residency.

I'm seeing a drop of about 52MB for 20 cad-comic tabs.  That is, a
reduction in per-tab marginal cost from 27.5MB to 24.9MB, which is not
bad.  (FTR, 1.9.2 comes in at about 19MB/tab on this test).

The peak use falls from 1189.9MB to 1139.7MB, a drop of 50.2MB, which
seems consistent with the steady state drop.
>+ CHECK_ALLOC(tm->dataAlloc, new VMAllocator(dataReserve, DataReserveSize));

operator new throws rather than returning null, as seen in bug 624645, so I think this check won't work.  Can these be changed to use malloc?
Comment on attachment 502711 [details] [diff] [review]
TM patch, v1 (against TM 59803:9df93a2a40e5)

Careful, precise, minimal. No complaints here (except possibly multiplying the sizes by sizeof(uintptr_t) rather than ifdef-guarding on NANOJIT_64BIT, but that's up to you). I think this is probably the best that can be done for the problem faced, in such a short time period.

You did some fault injection on larger sites? I imagine the numbers might need revising after it bakes in nightlies.

Will calling abort() in the "got the numbers wrong" case wind up firing up the breakpad reporter, such that we get crash-stats signatures telling us about the crazy website that eats 10x as much mReserve as our worst estimate?
Attachment #502711 - Flags: feedback?(graydon) → feedback+
(In reply to comment #25)
> >+ CHECK_ALLOC(tm->dataAlloc, new VMAllocator(dataReserve, DataReserveSize));
> 
> operator new throws rather than returning null, as seen in bug 624645, so I
> think this check won't work.  Can these be changed to use malloc?

Ugh, you're right;  I thought we overrode operator new with a NULL-returning version, but I was wrong.  (Well, we do within TraceRecorder, but we're outside that here.)  

I opened bug 624645 for this.  I won't let that problem block this bug, which is fixing other problems, but I'll add a comment that explains the checks don't do anything, and point to bug 624645.
(In reply to comment #26)
> 
> No complaints here (except possibly multiplying the
> sizes by sizeof(uintptr_t) rather than ifdef-guarding on NANOJIT_64BIT, but
> that's up to you).

I changed it to do that.
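Sizing the reserves in words and scaling by sizeof(uintptr_t) means one set of constants works on both platforms, with no NANOJIT_64BIT ifdef: the same word count yields twice the bytes on 64-bit. A sketch with made-up sizes (these are not the values in the patch):

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>

// Reserve sizes expressed in words rather than bytes, so a single set of
// constants covers 32- and 64-bit builds. Values here are illustrative.
const size_t DataReserveWords  = 12500;
const size_t TraceReserveWords = 5000;
const size_t TempReserveWords  = 1000;

size_t reserveBytes(size_t nwords) {
    // 4 bytes/word on 32-bit platforms, 8 on 64-bit -- no ifdef needed.
    return nwords * sizeof(uintptr_t);
}
```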

> You did some fault injection on larger sites? I imagine the numbers might need
> revising after it bakes in nightlies.

I did the fault injection *testing* on small programs -- that was just to check that the right paths were followed on allocation failures.

I did the mReserve sizing independently by measuring all the allocations between each OOM check (see attachment 502712 [details] for the code).  I did measure that on various big sites, though I got the largest numbers for a shell 'js -j' run on v8bench, probably because the tracer doesn't get used as much in the browser thanks to JaegerMonkey.

> Will calling abort() in the "got the numbers wrong" case wind up firing up the
> breakpad reporter, such that we get crash-stats signatures telling us about the
> crazy website that eats 10x as much mReserve we our worst estimate?

Good question.  JS_Assert() looks carefully crafted, but maybe it's aimed at being catchable in debuggers.  jimb:  do you know how to abort in a manner that is maximally catchable in breakpad?  It would be great if we could somehow capture the allocation size that caused the abort, too.
abort() is annotated __noreturn__ on some systems, so the compiler is allowed to corrupt the stack around the call, making it absolutely unhelpful for anyone.
jimb says that abort() should be handled fine by breakpad.
jimb now says that JS_Assert() would be better :)
(In reply to comment #29)
> abort() is annotated in some systems so that the app can corrupt its stack
> because it's __noreturn__ making it absolutely unhelpful for anyone.

Yeah, when a function is annotated __noreturn__, the compiler would be within its rights not even to push a return address, and just jump directly to the function. However, since the most popular reason to mark a function __noreturn__ is that it's an error-reporting function, this would totally hork debugging for everyone. I've never seen this happen.

If you call JS_Assert, though, that should be just fine.
Here's the one-line patch needed for TR.
Attachment #502998 - Flags: review?(edwsmith)
(In reply to comment #31)
> jimb now says that JS_Assert() would be better :)

JS_CRASH_UNLESS might be even better.
Comment on attachment 502710 [details] [diff] [review]
NJ patch, v1 (against TM 59803:9df93a2a40e5)

Looks fine to me, adding Rick for a second set of eyes.
Attachment #502710 - Flags: review?(edwsmith)
Attachment #502710 - Flags: review+
Attachment #502710 - Flags: feedback?(rreitmai)
Attachment #502998 - Flags: review?(edwsmith) → review+
Will TM have sufficient safeguards against other large allocations that might not be from CSE?  For example, if allocChunk() is called with fallible=false and the allocation request is over some threshold related to mReserve?
(In reply to comment #36)
> Will TM have sufficient safeguards against other large allocations that might
> not be from CSE?  For example, if allocChunk() is called with fallible=false
> and the allocation request is over some threshold related to mReserve?

Sufficient safeguards, yes -- it'll abort the process if mReserve overflows.

I'm pretty confident that mReserve will be big enough in all cases except the 
LabelStateMap one I described in comment 19.  But for an abort to occur we'll need to OOM just before assembling a very unusual fragment.  Pretty unlikely.
Comment on attachment 502710 [details] [diff] [review]
NJ patch, v1 (against TM 59803:9df93a2a40e5)

r+ looks fine
Attachment #502710 - Flags: feedback?(rreitmai) → feedback+
http://hg.mozilla.org/projects/nanojit-central/rev/f6016c7c7cd4
Whiteboard: [cib-memory][hardblocker] → [cib-memory][hardblocker], fixed-in-nanojit
http://hg.mozilla.org/tracemonkey/rev/f2c9c558e51e
http://hg.mozilla.org/tracemonkey/rev/d90128361cb8
Whiteboard: [cib-memory][hardblocker], fixed-in-nanojit → [cib-memory][hardblocker], fixed-in-nanojit, fixed-in-tracemonkey
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
*Applause*

Thanks so much. I will do penance for this bug some day, promise.