Closed Bug 1435532 (js-large-allocs) Opened 2 years ago Closed 5 months ago

Detect/Prevent/Crash on large allocations in JS engine

Categories

(Core :: JavaScript Engine, enhancement, P3)

All
Linux
enhancement

Tracking


RESOLVED FIXED
mozilla69
Tracking Status
firefox60 --- wontfix
firefox69 --- fixed

People

(Reporter: decoder, Assigned: decoder)

Details

(Keywords: sec-want, Whiteboard: [adv-main69-])

Attachments

(1 file)

I've encountered various cases by now where values from untrusted content more or less directly flow into allocators, e.g. bug 1434573 and bug 1435525. These allocations not only hurt fuzzing (the fuzzer can abort when it exceeds its allowed memory usage, and ASan doesn't seem to cope well with allocations of around 16 GB or larger) but are also undesirable in production (we probably don't want a wasm or binjs file to be able to allocate 4 GB of memory with a single allocation).

In order to detect these cases, I'm proposing to add code (only active in FUZZING builds) to the JS allocators, similar to the OOM testing code. If a large allocation is detected, we could silently make it fail by returning NULL or even MOZ_CRASH, depending on e.g. an environment variable for configuration. One open question is what counts as an "undesirable" size here, and whether that should be configurable as well. It probably depends a bit on the target, too. E.g., do we want BinAST to make 2 GB allocations?
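
Roughly, a minimal sketch of what that hook could look like (the function name, the 2 GB limit, and the environment variable name here are made up for illustration, not taken from any landed patch):

```cpp
#include <cstddef>
#include <cstdlib>

#include "mozilla/Assertions.h"  // MOZ_CRASH

#ifdef FUZZING
// Illustrative limit; whether this should be configurable is an open
// question (see below).
static const size_t LargeAllocLimit = size_t(2) << 30;  // 2 GB

// Hypothetical hook on the malloc-like paths, analogous to the OOM
// testing code; real allocator entry points would call this instead of
// malloc directly.
static void* FuzzingCheckedMalloc(size_t nbytes) {
  if (nbytes >= LargeAllocLimit) {
    // The environment variable name is made up for this sketch.
    if (getenv("MOZ_FUZZ_CRASH_ON_LARGE_ALLOC")) {
      MOZ_CRASH("large allocation detected");
    }
    return nullptr;  // silently fail, exercising the engine's OOM paths
  }
  return malloc(nbytes);
}
#endif
```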

Jan, what's your opinion on this?
Flags: needinfo?(jdemooij)
(In reply to Christian Holler (:decoder) from comment #0)

> (we probably don't want a wasm or binjs file to be able to
> allocate 4 GB of memory with a single allocation).

Why not? The SpiderMonkey ArrayBuffer length limit is 2 GB, but Jukka's been complaining about that; it limits ambitious use of wasm.

> In order to detect these cases, I'm proposing to add code (only active in
> FUZZING builds) to the JS allocators, similar to the OOM testing code. If a
> large allocation is detected, we could silently make it fail by returning
> NULL or even MOZ_CRASH, depending on e.g. an environment variable for
> configuration.

That seems reasonable, because large allocations are more likely to fail (on 32-bit systems, at least).

> One open question is what counts as an "undesirable" size here, and
> whether that should be configurable as well. It probably depends a bit on
> the target, too. E.g., do we want BinAST to make 2 GB allocations?

I think it needs to be configurable, so that you can test for "mobile" situations on desktop systems.
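
For example (a sketch; the environment variable name is made up), the limit could be read from the environment once at startup, so a desktop fuzzing run can emulate a mobile memory budget:

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical: read the limit (in MB) from the environment once.
static size_t LargeAllocLimit() {
  static const size_t limit = []() -> size_t {
    if (const char* env = getenv("MOZ_FUZZ_LARGE_ALLOC_LIMIT_MB")) {
      unsigned long long mb = strtoull(env, nullptr, 10);
      if (mb) {
        return size_t(mb) << 20;  // MB -> bytes
      }
    }
    return size_t(2) << 30;  // default: 2 GB
  }();
  return limit;
}
```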
(In reply to Christian Holler (:decoder) from comment #0)
> Jan, what's your opinion on this?

I think it's great you're working on this and you already found some bad/unexpected large allocations :)

Ideally we would have a limit of 1 or 2 GB or so, and places where large allocations are okay/safe/etc would have to be annotated somehow. Lars mentions the ArrayBuffer constructor; hopefully there aren't too many of those "large allocation" sites.
Flags: needinfo?(jdemooij)
Priority: -- → P1
Priority: P1 → P3
(In reply to Jan de Mooij [:jandem] from comment #2)
> Ideally we would have a limit of 1 or 2 GB or so, and places where large
> allocations are okay/safe/etc would have to be annotated somehow. Lars
> mentions the ArrayBuffer constructor; hopefully there aren't too many of
> those "large allocation" sites.

Jan, can you be more specific? Is this the idea?

- in all the common malloc-like allocation functions, MOZ_ASSERT(nbytes < 2MB)
- callers must not pass an unbounded value to malloc, doing this is a bug
- add a separate allocation function for the "large allocation" sites where it's OK.

This sounds great, but maybe we would just discover that there are a lot of ways for web sites to chew up all your memory. If it's OK for `new ArrayBuffer(BIG)` to allocate unbounded amounts of memory, what problem does it solve to constrain everything else scripts can do? Does it make sense to add checks in Vector and HashMap?

(To be clear, I think web site memory usage *should* be sandboxed, and therefore we should do this. But it is a lot of work, and if we're not going to limit ArrayBuffers...)
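
A minimal sketch of the three-point scheme above (js_malloc_checked and js_malloc_large are illustrative names, not real SpiderMonkey functions, and 2 MB is the strawman limit from the list):

```cpp
#include <cstddef>
#include <cstdlib>

#include "mozilla/Assertions.h"  // MOZ_ASSERT

static const size_t CommonAllocLimit = size_t(2) << 20;  // 2 MB strawman

// Common path: callers must pass a bounded size; an unbounded value
// reaching this function is a bug.
static void* js_malloc_checked(size_t nbytes) {
  MOZ_ASSERT(nbytes < CommonAllocLimit,
             "unbounded size on the common allocation path; use "
             "js_malloc_large at an audited call site");
  return malloc(nbytes);
}

// Annotated escape hatch for the audited "large allocation" sites,
// e.g. the ArrayBuffer constructor.
static void* js_malloc_large(size_t nbytes) {
  return malloc(nbytes);
}
```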
Flags: needinfo?(jdemooij)
(In reply to Jason Orendorff [:jorendorff] from comment #3)
> If it's OK for `new
> ArrayBuffer(BIG)` to allocate unbounded amounts of memory, what problem does
> it solve to constrain everything else scripts can do? Does it make sense to
> add checks in Vector and HashMap?

It would make it very clear where "large allocations" can happen and these are the places we want to audit carefully for integer overflow and "does this caller really need to allocate so much memory?". There might be some interesting results.

You're right, though, that it's a lot of work... I don't have a strong opinion on this.

(Also, 2 MB seems a bit small; maybe 20 or 50 MB would require fewer big-alloc annotations.)
Flags: needinfo?(jdemooij)
Pushed by choller@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/7be0577e5318
Prevent/crash on large allocations in fuzzing JS shell. r=jandem
Status: NEW → RESOLVED
Closed: 5 months ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla69
Whiteboard: [adv-main69-]