Closed Bug 606198 Opened 14 years ago Closed 5 years ago

Remove checks for memory allocation failure

Categories

(Core :: JavaScript Engine, defect)

x86
macOS
defect
Not set
normal

Tracking


RESOLVED WONTFIX

People

(Reporter: paul.biggar, Unassigned)

References

Details

(Keywords: sec-want)

Our API code is filled with checks for failure of the memory allocator. Most memory allocation functions can return NULL, and the API returns JS_FALSE in that case. Removing the NULL-check code would improve icache usage and reduce instructions executed.
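For context, the pattern under discussion looks roughly like this (a minimal sketch with hypothetical names, not actual SpiderMonkey code):

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Hypothetical stand-ins for the JSAPI types under discussion.
typedef bool JSBool;
static const JSBool JS_TRUE = true;
static const JSBool JS_FALSE = false;

// Typical fallible-allocation pattern: every allocation is checked, and
// failure propagates to the caller as JS_FALSE. These branches are the
// icache/instruction overhead the bug proposes to remove.
JSBool CopyChars(const char* src, char** out) {
    size_t len = std::strlen(src) + 1;
    char* buf = static_cast<char*>(std::malloc(len));
    if (!buf)             // the NULL check this bug wants to delete
        return JS_FALSE;  // propagate OOM to the caller
    std::memcpy(buf, src, len);
    *out = buf;
    return JS_TRUE;
}
```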

Challenges:

We don't currently use infallible malloc. Fixing this requires making sure malicious scripts can't force an OOM condition. How about making js_malloc infallible, but not using the mozalloc infallible malloc?


We must support embedders. I'm not sure of how this should look. There is already a js_reportmemoryerror function, why do we return JS_FALSE too?

Are we happy to abort in the shell?
Oooh, there's lots of history here:

https://bugzilla.mozilla.org/show_bug.cgi?id=599791#c19 talks about infallible malloc. I'm using a different definition of infallible, in that we don't actually need to abort the process, just the script (in some manner).

Bug 602935 goes in the opposite direction.

https://bugzilla.mozilla.org/show_bug.cgi?id=471528#c9 has a view on the futility of trying to catch OOM.
I'm going to try this the sciency-way: remove as many NULL checks as I can, and check the improvement. We can then give up if it goes nowhere.
Let's try to build on one another's work, not out-Lone-Ranger each other. There has been work on infallible malloc for a while, but it is not going to make Fx4. On this basis alone, I think there are higher-priority bugs to fix this month and next, and I think we are a bit short-handed. We have crash bugs aplenty (I can't run m-c nightlies since last Thursday without crashing promptly), e.g.:

http://crash-stats.mozilla.com/report/index/bp-e5bf6656-916c-40a0-933f-974ac2101021
http://crash-stats.mozilla.com/report/index/bp-1c486560-602a-441a-813c-242c32101021
http://crash-stats.mozilla.com/report/index/bp-e5e2fbc2-621a-4c2a-9f41-d520b2101021
http://crash-stats.mozilla.com/report/index/bp-bdea77ad-76f2-4007-bb54-661852101020

We have fairly-low-hanging pref fruit to pick too, still.

There is a better world where small-enough, fixed-size allocations can be done via infallible new or malloc, but we're not there yet and trying to jump to it will leave devs and users with random crashes, even when there's a ton of free memory latent in caches, GC'ed heaps, and VM.

/be
> We have fairly-low-hanging pref fruit to pick too, still.

s/pref/perf/ of course.

/be
(In reply to comment #3)
>  There has been work on infallible malloc for a while, but it is not going to make
> Fx4. 

I really wasn't clear when I said:

> How about making js_malloc infallible, but not using the mozalloc infallible malloc?

What I meant was that we could make js_malloc infallible, without assuming abort on OOM. That is, we would just abort the script, same as a syntax error. I think this is considerably less work than going through infallible mozalloc, and could land in very short order.


> On this basis alone, I think there are higher priority bugs to fix this
> month and next, and I think we are a bit short-handed.

OK, this point is well taken. My intention was to quickly see what the performance benefit of doing something small is here.
Aborting the script is no good so long as we fail to purge caches and GC, because without that effort, people will get bogus OOM script errors.

Anyway, stopping the script requires propagating failure. We can't longjmp. We have not yet enough unwind-protect RAII in our native code paths (and we would need static analysis to prove we had enough).

/be
> We can't longjmp. We
> have not yet enough unwind-protect RAII in our native code paths (and we would
> need static analysis to prove we had enough).

Furthermore, AFAIK, longjmp doesn't call destructors ;-)
(In reply to comment #6)
> Aborting the script is no good so long as we fail to purge caches and GC,
> because without that effort, people will get bogus OOM script errors.

Obviously I don't know the engine well enough, but I had thought it would be straightforward to clean up an entire compartment or context at once.


> Anyway, stopping the script requires propagating failure. We can't longjmp. We
> have not yet enough unwind-protect RAII in our native code paths (and we would
> need static analysis to prove we had enough).

We know the endpoints (the OOM endpoint and the JS API entrypoint). Why do we need to propagate failure through the middle? As I understand it, we need to clean up at the macro level, and we need to return JS_FALSE to callers.


(In reply to comment #7)
> Furthermore, AFAIK, longjmp doesn't call destructors ;-)

That's the point. The overhead of throwing a C++ exception (both in terms of codebase and run-time) is too high. I was thinking:

  OOM -> longjmp to outermost API caller -> cleanup script -> return JS_FALSE

rather than unwinding through all the callers.
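The proposed flow could be sketched as follows (all names hypothetical; as comments 6 and 7 point out, the catch is that destructors and unlock code in the skipped frames never run):

```cpp
#include <cassert>
#include <csetjmp>
#include <cstdlib>

// Sketch of the proposal: the outermost API entry installs a jump
// target; on OOM the allocator leaps straight back to it, skipping
// every intermediate frame (and, dangerously, their cleanup code).
static std::jmp_buf apiEntryPoint;

void* OomLongjmpAlloc(size_t n) {
    void* p = std::malloc(n);
    if (!p)
        std::longjmp(apiEntryPoint, 1);  // leap over all the callers
    return p;
}

bool ApiEntry(bool simulateOom) {
    if (setjmp(apiEntryPoint)) {
        // Landed here from OomLongjmpAlloc: clean up the script, then
        // return JS_FALSE-style failure to the embedder.
        return false;
    }
    void* p = OomLongjmpAlloc(simulateOom ? (size_t)-1 : 16);
    std::free(p);
    return true;
}
```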
Paul, see comment 6, second paragraph. You'll leak and leave stuck locks all over. longjmp is not an option (I'll ignore setjmp overhead on API entry).

Also, it is not enough to GC one compartment. The OOM can be due to a global shortfall requiring full GC.

A serious approach to infallible malloc is under way, that's why I cc'ed Chris Jones. It's for after Fx4.

/be
(In reply to comment #8)
> The overhead of throwing a C++ exception (both in terms of
> codebase and run-time) is too high.

I don't understand the reasoning behind this statement. Zero-cost exceptions should have no runtime overhead until we actually OOM (rare, and okay to be slow in) and the exception tables should be small (and near-always swapped out) if we only throw from the point of OOM to handlers at the API boundary.

It does add the additional headache of making *really* sure that exceptions don't get thrown beyond the API boundary, though.
We've been over this a lot, going back to http://brendaneich.com/2006/11/oink-based-patch-generation/ and http://brendaneich.com/2006/10/mozilla-2/, which called for investigating switching to C++ exceptions.

The biggest problem: until Microsoft changes its ABI or we switch to MinGW for our Windows builds (this is not at all likely for many reasons), we are not going to start using C++ exceptions. Please read:

http://groups.google.com/group/mozilla.dev.platform/msg/e6ebaa5d055c93a4

and the parent thread.

Beyond this little problem, we'd have to take the RTTI hit (which last time we checked was too big), and we'd have to get exception-safe with RAII and static analyses.

In total, switching to C++ exceptions is a huge and difficult, multi-real-year job. My personal opinion is that we would be better off working on other and more urgent hard problems, including Rust -- which has a better failure model among other advantages.

/be
In the meantime, if it is good enough for the kernel, it's good enough for us: I mean propagating failure carefully, where possible poisoning a phase-structured process like compilation so you don't have to over-check and twist all signatures to have failed-state return types/value-encodings (dmandelin did this recently in bug 605274; nanojit does it too IIRC).

/be
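The poisoning pattern /be describes could be sketched like this (hypothetical names; the "huge request" trigger stands in for a real allocation failure so the sketch is deterministic):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Sketch of a "poisoned phase": instead of checking every allocation,
// the allocator latches an out-of-memory flag and hands back a safe
// scratch buffer, and the phase (e.g. compilation) checks the flag
// once when it finishes.
class PhaseAllocator {
    std::vector<void*> chunks;
    bool oom = false;
    char scratch[64];  // returned once poisoned; for this sketch,
                       // requests bigger than it stand in for real OOM
  public:
    void* alloc(size_t n) {
        if (oom || n > sizeof(scratch)) {
            oom = true;  // poison: later allocations short-circuit
            return scratch;
        }
        void* p = std::malloc(n);
        if (!p) {
            oom = true;
            return scratch;
        }
        chunks.push_back(p);
        return p;
    }
    bool outOfMemory() const { return oom; }  // one check per phase
    ~PhaseAllocator() {
        for (void* p : chunks)
            std::free(p);
    }
};
```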
(In reply to comment #11)
> In total, switching to C++ exceptions is a huge and difficult, multi-real-year

This bug was never about that, nor RTTI, nor infallible allocation, though I think that my awkward phrasing may have led people to misunderstand.

This bug says 4 things:

1. There is a lot of overhead to our OOM, mostly in maintainability, but I would be surprised if not in speed too.

2. We know that the OOM paths go straight from NewObject to a JSAPI boundary, and don't really care about the middle.

3. C++ exception handling is a pain in the ass, and no-one wants to get involved in it.

4. What if instead of returning FALSE on OOM, we simply called a handler which freed all the resources used in the context/compartment/thread/something, then made a giant leap up the stack to the JSAPI boundary? That leap could be longjmp, or it could be fiddling with the stack pointer, or it could be doing some stack scan, it doesn't really matter right now.


Writing out part 4 made me think: we actually don't do real resource cleanup anyway, we just propagate the error codes. So what resources does a compartment/context/thread/something really have that is important to clean up, since we don't do it now anyway?
@be: Thanks for the background info!

(In reply to comment #13)
> we actually don't do real resource cleanup
> anyway, we just propagate the error codes. So what resources does a
> compartment/context/thread/something really have that is important to clean up,
> since we don't do it now anyway?

Certainly bits of malloc'd memory are released by destructors (and cleanup code that doesn't expect to be longjump'd over) -- not all resources are accounted for in the compartment/context/thread data structures. Relying on scopes to manage resource lifetimes is the C-but-even-moreso-C++ way, so we'd be leaking a bunch of memory when we're already OOMing.
(In reply to comment #14)
> Certainly bits of malloc'd memory are released by destructors (and cleanup code
> that doesn't expect to be longjump'd over) -- not all resources are accounted
> for in the compartment/context/thread data structures. Relying on scopes to
> manage resource lifetimes is the C-but-even-moreso-C++ way, so we'd be leaking
> a bunch of memory when we're already OOMing.

I'm not sure what you're getting at; are you saying we do leak resources currently, or that the scope takes care of them all via existing C++ destructors?
(In reply to comment #13)
> 4. What if instead of returning FALSE on OOM, we simply called a handler which
> freed all the resources used in the context/compartment/thread/something, then
> made a giant leap up the stack to the JSAPI boundary? That leap could be
> longjmp, or it could be fiddling with the stack pointer, or it could be doing
> some stack scan, it doesn't really matter right now.

cdleary points out that I've still not explained myself well. The proposal is as follows:

- keep track of "resources" (locks, memory, etc) in a central place

- on "real" OOM, call a handler to free these resources, then "leap" to the JSAPI caller, and return FALSE. No resource leaks of any kind.

- this allows us to delete all OOM handling code.



I don't know the exact solution for the "fake OOM" problem, which is where the caller can release memory and try again, but I speculate that it isn't difficult. I also don't know the cost of bookkeeping these resources, but speculate that it's both a lower maintenance burden and a lower performance cost than the existing OOM error propagation.
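The central-bookkeeping idea in this comment could be sketched as follows (hypothetical names; real resources would include locks and file descriptors, not just heap memory):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Sketch of the proposal: every acquired resource is registered in a
// central place, so a single handler can release everything at once on
// OOM instead of each caller propagating failure up the stack.
class ResourceRegistry {
    std::vector<void*> memory;  // could also track locks, fds, ...
  public:
    void* trackedAlloc(size_t n) {
        void* p = std::malloc(n);
        if (p)
            memory.push_back(p);
        return p;
    }
    // The "real OOM" handler: free everything this script/compartment
    // owns, then the engine can leap to the JSAPI boundary leak-free.
    void releaseAll() {
        for (void* p : memory)
            std::free(p);
        memory.clear();
    }
    size_t trackedCount() const { return memory.size(); }
    ~ResourceRegistry() { releaseAll(); }
};
```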
The way this will resolve itself is to get on the same page as Gecko's infallible malloc work.  This involves using infallible malloc for small, fixed-size allocations, and having good backup strategies for handling OOMs when they happen (eg. do GC, drop optional caches, do memory pressure monitoring so this stuff can be done before we're thrashing to death).  Big allocations will still be fallible, and loops that can do lots of small allocations that add up to big amounts will also be fallible.

So it'll be a mix of fallible and infallible, but should still be a lot better (many fewer NULL/false checks) than what we currently have.
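The mixed policy described in this comment could look roughly like this (names and the size threshold are hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Sketch: small fixed-size allocations take an infallible path with
// backup strategies before a last-resort abort, while large
// allocations stay fallible and must still be NULL-checked.
static const size_t kSmallMax = 256;

void* InfallibleSmallAlloc(size_t n) {
    void* p = std::malloc(n);
    if (!p) {
        // Backup strategies would go here: GC, drop optional caches,
        // release a ballast, respond to memory-pressure monitoring...
        p = std::malloc(n);
        if (!p)
            std::abort();  // last resort: small allocations may not fail
    }
    return p;
}

void* FallibleLargeAlloc(size_t n) {
    return std::malloc(n);  // caller must still check for NULL
}

void* Alloc(size_t n) {
    return n <= kSmallMax ? InfallibleSmallAlloc(n) : FallibleLargeAlloc(n);
}
```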
Comment 16 has some bold speculations, which would need an awful lot of effort to implement and measure before you could have a bake-off (microbenchmarks will not cut it).

Other projects have done things like this, the war stories I've heard are not pretty. Four or so years ago we were hoping for C++ exceptions and static-analysis-based refactoring of our code to use RAII patterns. Now I think that is pie-in-the-sky. Static analysis is good but refactoring is hard and C++ exceptions at zero-cost in the no-throw case, cross-platform, are nowhere.

+1 on comment 17.

/be
(In reply to comment #18)
> Comment 16 has some bold speculations, which would need an awful lot of effort
> to implement and measure before you could have a bake-off (microbenchmarks will
> not cut it).

Agreed.

> Other projects have done things like this, the war stories I've heard are not

I was suggesting this approach as it didn't seem as hard as getting our C-exception-handling/RAII/whatever approach right. Obviously you disagree, and I just have a hunch, so I'm definitely not pushing for this.


However, from working on bug 624094, I count dozens of places we get OOM error conditions wrong (and I've only analyzed tens of test files). Almost none of them are straightforward to understand, to say nothing of fixing them. My current preference (suggested in the comments on nnethercote's blog I think), is to wait for electrolysis and just fail on OOM (preferably after taking the steps in comment 17).
(In reply to comment #19)
> > Other projects have done things like this, the war stories I've heard are not
> 
> I was suggesting this approach as it didn't seem as hard as getting our
> C-exception-handling/RAII/whatever approach right. Obviously you disagree, and
> I just have a hunch, so I'm definitely not pushing for this.

I was not comparing to C++ exceptions; those would be preferable to SEH-like setjmp/longjmp, and in both cases we would be leaning on RAII for the dealloc-and-unlock-on-all-exit-paths coverage.

C++ exceptions seem to need RTTI in general for the unwinding, in case of dynamic types. This seems avoidable, and I heard Luke had found some RTTI-free exception option, at least for some compilers.

But C++ exceptions are not happening, because while zero-(runtime-)cost is coming on GCC if not already here (let's ignore the Mac until Clang is up, and hope for the best), MSVC won't break its ABI, so it sticks all users with costly exception handling. See Graydon's post at:

http://groups.google.com/group/mozilla.dev.platform/msg/e6ebaa5d055c93a4?hl=en

and the surrounding thread.

> However, from working on bug 624094, I count dozens of places we get OOM error
> conditions wrong (and I've only analyzed tens of test files).

First, so what? Gavin Reaney used to cover us and declared our older versions "good". This is a matter of testing and (old school human, new school sixgill) discipline.

Unix kernel code has similar obligations, with negative int return codes conveying failure with the negated errno. Kernel coders have to check.

Second, the general problem is not all about OOM. JS has errors as exceptions, they must propagate to a catcher or default reporter, and not be ignored. While some functions and methods can fail only due to OOM, many can fail due to OOM, a slow-script abort, or a catchable error-as-exception.

Failure or abnormal completion could ideally be implemented in SpiderMonkey via C++ exceptions. It cannot always map to a longjmp to an API entry point, even assuming we unwound-protected and could stand the setjmp overhead (I don't think we can). There could be intervening interpreter or JITted code with exception handlers.

> Almost none of
> them are straightforward to understand, to say nothing of fixing them.

I haven't looked at all of them. The ones I did look at seemed easy, but the C++ infallible constructor design can indeed make trouble.

Nick dove into the nanojit/tracer area and patched, so I do not see how any of this can be as hard as C++ exceptions. Can you cite a particularly hard case?

> My
> current preference (suggested in the comments on nnethercote's blog I think),
> is to wait for electrolysis and just fail on OOM (preferably after taking the
> steps in comment 17).

Infallible short/fixed allocations are coming, that's a goal already. Do you want to morph this bug into a metabug for that?

Again infallible operator new (let's say; malloc/calloc/realloc suggest variable and potentially large/web-content-induced size, requiring fallibility as well as size_t overflow checking) is not at issue, but the many fallible methods and functions that can fail due to OOM, slow script abort, or a catchable exception must still propagate.

Finally, waiting for infallible-new may leave some easy OOM-ignoring bug fixes unfixed, and sometimes such bugs are exploitable. We should test and consider fixes, like the kernel hackers have to, and not let bad bugs slide until some future wonderland.

/be
(Need a new keyboard -- sorry for the dropped chars, figure you can error-correct like an iPhone would ;-)
The Linux negated errno return convention reminds me of something gal mentioned: v8 using reserved values as errors or exceptions. I'm not sure of the details, and v8 uses thread-local storage too.

In contrast, our old C-based API and codebase have explicit cx params, bool returns, js::Value *vp and other out params. We could possibly make some of these implicit, or combine them, and win, while still requiring callers to check (efficiently) and propagate.
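The kernel-style convention mentioned above could be sketched like this (hypothetical names; this is an illustration of the negated-errno encoding, not a proposal for actual JSAPI signatures):

```cpp
#include <cassert>
#include <cerrno>

// Sketch: a single return value encodes both success and the error
// code, so there is no separate bool return plus out-param to keep in
// sync, and callers still check efficiently with one comparison.
long LookupSlot(bool present) {
    if (!present)
        return -ENOENT;  // negative value carries the negated errno
    return 42;           // non-negative value carries the result
}

bool IsError(long r) { return r < 0; }
```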

Not this bug, but I thought it worth a mention.

/be
(In reply to comment #20)
> > I just have a hunch, so I'm definitely not pushing for this.
> 
> I was not comparing to C++ exceptions, that would be preferable to SEH-like
> ... <snip/> ...

I wasn't proposing a specific implementation. More of a 'what if we registered resources as we got them, and then cleaned them up together out-of-line if we see an error - rather than propagating'. You've said specific implementations won't work, I agree, but it doesn't seem impossible.


> > However, from working on bug 624094, I count dozens of places we get OOM error
> > conditions wrong (and I've only analyzed tens of test files).
> First, so what? Gavin Reaney used to cover us and declared our older versions
> "good". This is a matter of testing and (old school human, new school sixgill)
> discipline.

So this is hard to get right, and we haven't been getting it right, and it causes the browser to crash (and other less bad things like memory leaks, assertion failures, etc).

We currently don't test this. I have a plan to get it tested (bug 624094 via bug 626706) but bug 609413 is a prereq, and it's been P4ed by RelEng.

 
> Second, the general problem is not all about OOM. JS has errors as exceptions,
> they must propagate to a catcher or default reporter, and not be ignored. 

Good point. I think those problems are not as pervasive throughout the engine - stop me if I'm wrong - and therefore less hard to get right.



> Failure or abnormal completion could ideally be implemented in SpiderMonkey via
> C++ exceptions. It cannot always map to a longjmp to an API entry point, even
> assuming we unwound-protected and could stand the setjmp overhead (I don't think
> we can). There could be intervening interpreter or JITted code with exception
> handlers.

Again, I was suggesting a strategy, rather than a specific implementation. I agree that C++ exceptions, longjmps, etc. could well be too expensive, and that refactoring our code to RAII is a massive job.


> > Almost none of
> > them are straightforward to understand, to say nothing of fixing them.
> 
> I haven't looked at all of them. The ones I did look at seemed easy, but the
> C++ infallible constructor design can indeed make trouble.

I'm triaging them before filing, and I have a long list remaining.

 
> Nick dove into the nanojit/tracer area and patched, so I do not see how any of
> this can be as hard as C++ exceptions. Can you cite a particularly hard case?

The parts that Nick left as too hard are a good example. Grep for OUT_OF_MEMORY_ABORT. Basically, our strategy of returning NULL/false/etc doesn't work for C++ constructors. If the body of the constructor calls a function which OOMs, we have no (good/clean) way of passing that back to the parent.
(Oh, I think this is what you meant by 'C++ infallible constructor' above. I'll leave this in anyway for context.)


> > My
> > current preference (suggested in the comments on nnethercote's blog I think),
> > is to wait for electrolysis and just fail on OOM (preferably after taking the
> > steps in comment 17).
> 
> Infallible short/fixed allocations are coming, that's a goal already. Do you
> want to morph this bug into a metabug for that?

Probably better to make a new bug and close this one. But... I guess I've never understood infallible allocations. They still cause a crash right? They're just less likely to cause a crash than a large allocation?

My main goal here is to remove our error-propagation code, which is costly in run-time (I suspect) and in developer time. I guess I don't know if infallible allocations allow that.


> Finally, waiting for infallible-new may leave some easy OOM-ignoring bug fixes
> unfixed, and sometimes such bugs are exploitable. We should test and consider
> fixes, like the kernel hackers have to, and not let bad bugs slide until some
> future wonderland.

Agreed. I'm still working on bug 624094, and focusing on the low-hanging fruit for now. I agree that we should get bug 624094 into the tree. Help on bug 609413 appreciated!
(But, electrolysis is coming right?)
(In reply to comment #23)
> I wasn't proposing a specific implementation. More of a 'what if we registered
> resources as we got them, and then cleaned them up together out-of-line if we
> see an error - rather than propagating'. You've said specific implementations
> won't work, I agree, but it doesn't seem impossible.

Definitely not. We should have C++ RAII and some static analysis backstopping. It's a coding and tooling discipline to aim at. But that could also be said of OOM-checking!

> > First, so what? Gavin Reaney used to cover us and declared out older versions
> > "good". This is a matter of testing and (old school human, new school sixgill)
> > discipline.
> 
> So this is hard to get right, and we haven't been getting it right, and it
> causes the browser to crash (and other less bad things like memory leaks,
> assertion failures, etc).

There's no short-term rearchitecture, so we should test and patch. Automate the test and turn the tree orange, even.

When Gavin was at picsel.com (and there were fewer JS hackers and less code change per time unit), we had our hands around this. We can do it again. njn's nanojit/tm work shows this.

> We currently don't test this. I have a plan to get it tested (bug 624094 via
> bug 626706) but bug 609413 is a prereq, and it's been P4ed by RelEng.

Sayrer, can you help?

> Good point. I think those problems are not as pervasive throughout the engine -
> stop me if I'm wrong - and therefore less hard to get right.

They are strictly more numerous by any count. Quick and dirty lower bounds:

$ grep -w new *.h *cpp */*.{h,cpp} | wc -l
     575
$ grep 'malloc(' *.h *cpp */*.{h,cpp} | wc -l
     149
$ grep 'calloc(' *.h *cpp */*.{h,cpp} | wc -l
      27
$ grep 'realloc(' *.h *cpp */*.{h,cpp} | wc -l
      52
$ grep 'if (!ok' *.h *cpp */*.{h,cpp} | wc -l
     141
$ grep 'if (ok' *.h *cpp */*.{h,cpp} | wc -l
      45
$ grep 'if (js_[A-Za-z_0-9]*(' *.h *cpp */*.{h,cpp} | wc -l
     193
$ grep 'if (!js_[A-Za-z_0-9]*(' *.h *cpp */*.{h,cpp} | wc -l 
     460
$ grep 'if (.*->[a-z_][A-Za-z_0-9]*(' *.h *cpp */*.{h,cpp} | wc -l
    2269
$ grep 'if (.*\.[a-z_][A-Za-z_0-9]*(' *.h *cpp */*.{h,cpp} | wc -l
    1895

Any failure to propagate failure is a bug, easily as bad as failing to cope with fallible-new.

> I'm triaging them before filing, and I have a long list remaining.

What's the tracking bug?

> > Nick dove into the nanojit/tracer area and patched, so I do not see how any of
> > this can be as hard as C++ exceptions. Can you cite a particularly hard case?
> 
> The parts that Nick left as too hard are a good example. Grep for
> OUT_OF_MEMORY_ABORT.

I know about those, but they are minimized per the endgame schedule, and fixable (and unlikely to be hit compared to false-OOMs we took from JSC code and users actually hit).

> Basically, our strategy of returning NULL/false/etc
> doesn't work for C++ constructors.

"The ones I did look at seemed easy, but the C++ infallible constructor design can indeed make trouble."

This is no excuse. We have init methods on purpose in much new C++ code (Luke did the work) to cope with constructor infallibility without C++ exceptions. Why are we circling around this point?

> If the body of the constructor calls a
> function which OOMs, we have no (good/clean) way of passing that back to the
> parent.
> (Oh, I think this is what you meant by 'C++ infallible constructor' above. I'll
> leave this in anyway for context.)

Sorry, I didn't read ahead -- but again this is something we've already dealt with via init methods. It is nowhere near a hard case in general.
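The init-method pattern /be refers to could be sketched as follows (hypothetical names; the real SpiderMonkey classes differ, but the shape is the same):

```cpp
#include <cassert>
#include <cstdlib>

// Sketch: the constructor does nothing that can fail, and a separate
// fallible init() performs the allocation, returning false on OOM so
// the caller can propagate failure without C++ exceptions.
class SlotArray {
    int* slots = nullptr;
    size_t length = 0;
  public:
    SlotArray() {}         // infallible: cannot OOM
    bool init(size_t n) {  // fallible: caller must check the result
        slots = static_cast<int*>(std::calloc(n, sizeof(int)));
        if (!slots)
            return false;  // OOM propagated the usual bool way
        length = n;
        return true;
    }
    size_t count() const { return length; }
    ~SlotArray() { std::free(slots); }
};
```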

> > Infallible short/fixed allocations are coming, that's a goal already. Do you
> > want to morph this bug into a metabug for that?
> 
> Probably better to make a new bug and close this one. But..... I guess I've
> never understood infallible allocations. They still cause a crash right?
> They're just less likely to cause a crash than a large allocation?

No, OS memory pressure monitoring and a ballast to drop on OOM should mean small new cannot fail. Using jemalloc helps too, since fragmentation could otherwise kill this in the worst case.

Also, system libraries mishandle OOM, and we sometimes can't dodge those bullets (some of which are exploitable). The longer term plan is lowered-rights renderer processes a la Chrome -- but this does not mean false-OOM should be tolerated.

Web developers can stress memory allocation, and quality of implementation rises when we handle OOM due to low/fragmented transients well.

> My main goal here is to remove our error-propagation code, which is costly in
> run-time (I suspect) and in developer time. I guess I don't know if infallible
> allocations allow that.

For allocation sites and callers that can fail only due to OOM, they help. For failure propagation in general, which dominates by call site count, they do not. Failure can happen in JS due to several causes, and OOM is only one. Our code must handle failure.

Given this, I think we must be willing and able to look at OOM failure handling in the pre-e10s period, and fix bugs.

> > Finally, waiting for infallible-new may leave some easy OOM-ignoring bug fixes
> > unfixed, and sometimes such bugs are exploitable. We should test and consider
> > fixes, like the kernel hackers have to, and not let bad bugs slide until some
> > future wonderland.
> 
> Agreed. I'm still working on bug 624094, and focusing on the low-hanging fruit
> for now. I agree that we should get bug 624094 into the tree. Help on bug
> 609413 appreciated!
> (But, electrolysis is coming right?)

I don't know when. The full monty is Firefox 7, end of year. The infallible_t stuff is really up to us, with cjones: we need to absorb or use mozalloc.

Checking for OOM-mishandling security bugs should happen now, not wait for e10s.

/be
Now that the browser is mostly infallible-malloc, is moving the JS engine in this direction more feasible or desirable? Personally, my concern is about security problems arising from rarely-tested, rarely-encountered codepaths rather than performance.
Blocks: 427099, 611123
Keywords: sec-want
Blocks: 806026
Assignee: general → nobody
Iain, would this still be relevant in today's OOM world?
Flags: needinfo?(iireland)
All of this discussion took place in a very different context than our current OOM world. It's historically interesting, but not directly relevant. 

Comment 17 is well put, though:

> The way this will resolve itself is to get on the same page as Gecko's
> infallible malloc work.  This involves using infallible malloc for small,
> fixed-size allocations, and having good backup strategies for handling OOMs
> when they happen (eg. do GC, drop optional caches, do memory pressure
> monitoring so this stuff can be done before we're thrashing to death).  Big
> allocations will still be fallible, and loops that can do lots of small
> allocations that add up to big amounts will also be fallible.
> 
> So it'll be a mix of fallible and infallible, but should still be a lot
> better (many fewer NULL/false checks) than what we currently have.

As it stands, the prevailing opinion is that the performance improvement of making more allocations infallible (using mechanisms like LifoAlloc ballast, for example) is generally not worth the complexity/risk of having to reason about states where we continue executing after an allocation fails.

We're definitely not going to implement the proposed design. We might move in the direction of failing on more OOMs instead of trying to bubble them, but we can open a new bug for that if it ever happens. I don't think there's any real value in leaving this bug open. 

Closing as WONTFIX (for the proposed setjmp/longjmp mechanism).
Status: NEW → RESOLVED
Closed: 5 years ago
Flags: needinfo?(iireland)
Resolution: --- → WONTFIX