Closed Bug 1497813 Opened 7 years ago Closed 7 years ago

Crash [@ ??] with stack space exhaustion

Categories

(Core :: JavaScript Engine: JIT, defect)

Platform: x86 Linux
Type: defect
Priority: Not set
Severity: critical

Tracking


RESOLVED DUPLICATE of bug 909094
Tracking Status
firefox64 --- fix-optional

People

(Reporter: decoder, Unassigned)

Details

(4 keywords, Whiteboard: [jsbugmon:update,bisect])

Crash Data

Attachments

(2 files)

The following testcase crashes on mozilla-central revision a9616aaeff87 (build with --enable-posix-nspr-emulation --enable-valgrind --enable-gczeal --disable-tests --disable-profiling --enable-debug --without-intl-api --enable-optimize --target=i686-pc-linux-gnu, run with --fuzzing-safe --cpu-count=2 --ion-offthread-compile=off --disable-oom-functions):

See attachment.

Backtrace:

received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xf5dffb40 (LWP 21981)]
0x4128124f in ?? ()
#0  0x4128124f in ?? ()
#1  0xf49590c0 in ?? ()
eax            0xb47f0000   -1266745344
ebx            0xdeadbeef   -559038737
ecx            0x40000000   1073741824
edx            0xfffd       65533
esi            0xf49590c0   -191524672
edi            0xf5dfd608   -169880056
ebp            0xf5dfd4c4   4125086916
esp            0xf5dfd4c4   4125086916
eip            0x4128124f   1093145167
=> 0x4128124f:  mov    (%eax,%edx,1),%eax
   0x41281252:  pop    %ebp

This test is slightly intermittent and does not cleanly reduce. Reducing it makes it highly intermittent, so the full test is likely easier to diagnose. Note that this only reproduces on 32-bit for me.
Attached file Testcase
Here is a partly reduced version that is more intermittent, but still reproduces fairly often.
Attached file Testcase for comment 2
I spot some wasm in the reduced version of the testcase, but I can't tell if it is a wasm bug or not, because it isn't fully reduced. Needinfo for :bbouvier to investigate.
Flags: needinfo?(bbouvier)
I observe segfaults that are out-of-bounds reads in the wasm memory; these get caught by the wasm signal handlers, which properly redirect control flow to error handling and so on. The thing is, several threads seem to run this code at the same time, as a run within gdb shows: many threads interleaving, some of them trying to re-enter the handler while it is already entered, etc. I get apparent iloops (or very, very slow code) when I run this under rr --chaos mode, and since this is intermittent, it might be a threading issue.
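For illustration only (this is not the attached testcase, and it assumes the shell's wasmTextToBinary helper is available): a minimal sketch of the benign case described above, where an out-of-bounds read in wasm memory traps and gets redirected to ordinary error handling instead of crashing the process.

  const bytes = wasmTextToBinary(`
    (module
      (memory 1)                          ;; a single 64 KiB page
      (func (export "f") (result i32)
        (i32.load (i32.const 0x10000))))  ;; 4-byte load starting one byte past the page
  `);
  const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
  try {
    exports.f();
  } catch (e) {
    // The signal handler turns the faulting access into a catchable error.
    print(e instanceof WebAssembly.RuntimeError);  // true
  }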
So as it turns out, the crash trace in comment 0 is not correct because the fuzzing toolchain for triaging uses GDB directly to run the tests, rather than using core dumps (like the rest of fuzzing automation). I reran this with forced core dumping instead, and here is the correct trace:

Program terminated with signal SIGSEGV, Segmentation fault.
#0  js::NativeObject::slotInRange (this=0xf6185bc0, slot=0, sentinel=js::NativeObject::SENTINEL_NOT_ALLOWED) at /js/src/vm/NativeObject.cpp:305
#1  0x56931376 in js::NativeObject::getSlot (slot=0, this=0xf6185bc0) at /js/src/vm/NativeObject.h:984
#2  js::NativeObject::getReservedSlot (index=0, this=0xf6185bc0) at /js/src/vm/NativeObject.h:1098
#3  js::EnvironmentObject::enclosingEnvironment (this=<optimized out>) at /js/src/vm/EnvironmentObject.h:281
#4  JSObject::enclosingEnvironment (this=0xf6185bc0) at /js/src/vm/EnvironmentObject-inl.h:76
#5  AssertInnerizedEnvironmentChain (cx=<optimized out>, env=...) at /js/src/builtin/Eval.cpp:40
#6  0x56941026 in EvalKernel (cx=0xf661b800, v=v@entry=..., evalType=evalType@entry=DIRECT_EVAL, caller=..., env=..., pc=0x58f5bd29 "{", vp=...) at /js/src/builtin/Eval.cpp:235
#7  0x56941ed4 in js::DirectEval (cx=<optimized out>, v=..., vp=...) at /js/src/builtin/Eval.cpp:452
#8  0x56a5e459 in js::jit::DoCallFallback (cx=<optimized out>, frame=0xff8c17d8, stub_=0x58bf30a0, argc=0, vp=0xff8c17a0, res=...) at /js/src/jit/BaselineIC.cpp:3793
#9  0x5969976a in ?? ()
[...]
#127 0x59904af2 in ?? ()
eax            0x2          2
ebx            0x5794aff4   1469362164
ecx            0x57925a00   1469209088
edx            0x57925bfc   1469209596
esi            0x5794aff4   1469362164
edi            0xf6185bc0   -166175808
ebp            0xff8c1018   4287369240
esp            0xff8c0fe4   4287369188
eip            0x56f69ae7   <js::NativeObject::slotInRange(unsigned int, js::NativeObject::SentinelAllowed) const+23>
=> 0x56f69ae7 <js::NativeObject::slotInRange(unsigned int, js::NativeObject::SentinelAllowed) const+23>:  push   %edi
   0x56f69ae8 <js::NativeObject::slotInRange(unsigned int, js::NativeObject::SentinelAllowed) const+24>:  call   0x5682a0b0 <js::NativeObject::lastProperty() const>

So this is not s-s with an invalid read but a stack space exhaustion. It still reproduces fairly reliably for me and is causing lots of crashes in fuzzing.
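For context, here is a hypothetical sketch (not the attached testcase) of the shape this trace suggests: recursion through direct eval, where every JS-level frame drags EvalKernel/DoCallFallback native frames along with it. Whether this ends in the over-recursion check or actually exhausts the C stack depends on how the quota compares with the stack the OS really gives the thread.

  function f(n) {
    if (n === 0)
      return 0;
    // Direct eval: each level re-enters the engine through js::DirectEval
    // and EvalKernel, burning native stack for every JS frame.
    return eval("f(n - 1)");
  }
  try {
    f(1e6);
  } catch (e) {
    print(e);  // typically "InternalError: too much recursion" when the quota check wins
  }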
Group: javascript-core-security
Component: JavaScript Engine → Javascript: Web Assembly
Summary: Crash [@ ??] with invalid memory read → Crash [@ ??] with wasm and stack space exhaustion
I can reproduce with the first test case (couldn't with the second one). However, wasm code isn't on the stack when the crash occurs, and I'm not sure what to do here, so I'm leaving this to the JIT team.
Component: Javascript: Web Assembly → JavaScript Engine: JIT
Flags: needinfo?(bbouvier)
Summary: Crash [@ ??] with wasm and stack space exhaustion → Crash [@ ??] with stack space exhaustion
Flags: needinfo?(jdemooij)
Iain, maybe this is an interesting one for you to look at. You'll have to compile the shell on Linux with the build flags decoder provided. It's possible our stack quota in the shell needs a bigger buffer. We should also check how big the stack actually is when we crash ("info proc mappings" in gdb might help); on 32-bit Linux it's possible the stack can't grow any larger once the heap gets too big.
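A rough shell-side probe (just a sketch, not a fix) to go with the suggestion above: count how many plain JS frames fit before the over-recursion check fires, and compare that across runs or against what "info proc mappings" reports for the stack segment.

  let depth = 0;
  function probe() {
    depth++;
    probe();
  }
  try {
    probe();
  } catch (e) {
    // If the quota is sized for a bigger stack than the OS actually grants,
    // we would crash here instead of reaching this catch block.
    print("over-recursion at depth " + depth + ": " + e);
  }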
Flags: needinfo?(jdemooij) → needinfo?(iireland)
This is a duplicate of bug 909094. Further details (including a reduced testcase) on that bug.
Flags: needinfo?(iireland)
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → DUPLICATE