Bug 1076872 (Closed) · Opened 11 years ago · Closed 11 years ago

Browser app crashes when running Massive benchmark on Flame

Categories

Product: Core
Component: JavaScript Engine: JIT
Type: defect
Hardware: All
OS: Gonk (Firefox OS)
Priority: Not set
Severity: normal

Tracking


RESOLVED WORKSFORME
blocking-b2g -
Tracking Status
b2g-v2.1 --- unaffected
b2g-v2.2 --- affected

People

(Reporter: dminor, Unassigned)

References

Details

(Keywords: crash, regression, reproducible)

Attachments

(1 file)

Running the Massive benchmark [1] on recent (2014-10-02) mozilla-central eng builds with default memory settings on the Flame causes the Gaia browser to crash. This is occurring for both me and :rwood. I asked :mdas to check on her phone last week, and she did not see a crash with a build that was crashing for me. Aurora builds on the same phone are fine. The crash occurs a few seconds after the "Run the benchmark now >" button is pressed.

[1] http://kripken.github.io/Massive/
This reproducible crash is a regression in B2G 2.2.
blocking-b2g: --- → 2.2?
QA Contact: ckreinbring
Regression window (mozilla-central)

Last working
BuildID: 20140924130254
Gaia: 03d7bcad57ea281869976a9aed0a38849f7c8bc5
Gecko: 435732392989
Platform Version: 35.0a1
Firmware Version: V180
User Agent: Mozilla/5.0 (Mobile; rv:35.0) Gecko/35.0 Firefox/35.0

First broken
BuildID: 20140924141054
Gaia: 03d7bcad57ea281869976a9aed0a38849f7c8bc5
Gecko: 1735ff2bb23e
Platform Version: 35.0a1
Firmware Version: V180
User Agent: Mozilla/5.0 (Mobile; rv:35.0) Gecko/35.0 Firefox/35.0

Working Gaia / Broken Gecko = Repro
Gaia: 03d7bcad57ea281869976a9aed0a38849f7c8bc5
Gecko: 1735ff2bb23e

Broken Gaia / Working Gecko = No repro
Gaia: 03d7bcad57ea281869976a9aed0a38849f7c8bc5
Gecko: 435732392989

Gecko pushlog: http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=435732392989&tochange=1735ff2bb23e

Regression window (Mozilla Inbound)

Last working
BuildID: 20140923145822
Gaia: 37b8a812c642ca616bf9457cb9b71e45261cdfa8
Gecko: d6d504a48a9e
Platform Version: 35.0a1
Firmware Version: V180
User Agent: Mozilla/5.0 (Mobile; rv:35.0) Gecko/35.0 Firefox/35.0

First broken
BuildID: 20140923150820
Gaia: 37b8a812c642ca616bf9457cb9b71e45261cdfa8
Gecko: 883232b3192d
Platform Version: 35.0a1
Firmware Version: V180
User Agent: Mozilla/5.0 (Mobile; rv:35.0) Gecko/35.0 Firefox/35.0

Working Gaia / Broken Gecko = Repro
Gaia: 37b8a812c642ca616bf9457cb9b71e45261cdfa8
Gecko: 883232b3192d

Broken Gaia / Working Gecko = No repro
Gaia: 37b8a812c642ca616bf9457cb9b71e45261cdfa8
Gecko: d6d504a48a9e

Gecko pushlog: http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=d6d504a48a9e&tochange=883232b3192d
QA Whiteboard: [QAnalyst-Triage?]
Flags: needinfo?(jmitchell)
Jeff - this pushlog contains only 2 of your patches - can you take a look?
QA Whiteboard: [QAnalyst-Triage?] → [QAnalyst-Triage+]
Flags: needinfo?(jmitchell) → needinfo?(jwalden+bmo)
Can do, hopefully. If this is Flame- or b2g-specific, that might complicate things. If it reproduces in a browser, I should be able to figure out what's up, though. (I tried yesterday but screwed up testing on my first try, and when I went to do it again I was on a plane and didn't have Internet access to load the test page -- fail.) Investigating that now.
Running on desktop, happily/thankfully I hit an assertion fairly quickly:

#0  js::jit::ToRegister (a=...) at /home/jwalden/moz/slots/js/src/jit/shared/CodeGenerator-shared-inl.h:34
34          MOZ_ASSERT(a.isGeneralReg());

And stepping upward from that, I hit this:

#2  0x00007ffff275ea3b in js::jit::CodeGeneratorX64::visitAsmJSStoreHeap (this=0x7fffb9502000, ins=0x7fffc0b3c8d0)
    at /home/jwalden/moz/slots/js/src/jit/x64/CodeGenerator-x64.cpp:329
329         CodeOffsetLabel cmp = masm.cmplWithPatch(ToRegister(ptr), Imm32(0));
(gdb) list
324     }
325
326         Label rejoin;
327         uint32_t maybeCmpOffset = AsmJSHeapAccess::NoLengthCheck;
328         if (!mir->skipBoundsCheck()) {
329             CodeOffsetLabel cmp = masm.cmplWithPatch(ToRegister(ptr), Imm32(0));
330             masm.j(Assembler::AboveOrEqual, &rejoin);
331             maybeCmpOffset = cmp.offset();
332         }
333

Now, earlier in the method we have this:

310 bool
311 CodeGeneratorX64::visitAsmJSStoreHeap(LAsmJSStoreHeap *ins)
312 {
313     MAsmJSStoreHeap *mir = ins->mir();
314     Scalar::Type vt = mir->viewType();
315     const LAllocation *ptr = ins->ptr();
316     Operand dstAddr(HeapReg);
317
318     if (ptr->isConstant()) {
319         int32_t ptrImm = ptr->toConstant()->toInt32();
320         MOZ_ASSERT(ptrImm >= 0);
321         dstAddr = Operand(HeapReg, ptrImm);
322     } else {
323         dstAddr = Operand(HeapReg, ToRegister(ptr), TimesOne);
324     }

which reads, to me, as if |ptr| can be either constant or non-constant. And we're seemingly not "handling" the constant case when |!mir->skipBoundsCheck()| (where "handling" may be as simple as not doing anything at all; not sure exactly).
Of course, that's x86-64-only, and none of the other backends appear to have this problem. So obviously that's not the cause of the issue here, just a yak to shave along the way.
sunfish provided me with this hack-fix in order to continue debugging. With it in place, I don't crash or fail at all, on a desktop build. So it sounds like a mobile-specific problem (not super-surprisingly), sigh. I'll probably have to get some hardware to start investigating this.
Waldo, any update here?
Component: Gaia::Browser → JavaScript Engine: JIT
Product: Firefox OS → Core
Sorry, got distracted by other things. :-( That said, I wonder if this isn't a duplicate of bug 1085746. Showed up around the same time, right platform, in code probably called here. Any chance at someone retesting with that fixed?
Flags: needinfo?(jwalden+bmo) → needinfo?(dminor)
Retested this on a flame-kk flashed with the latest mozilla-central engineering build and it still crashes after a few seconds :( I can't see bug 1085746 so I don't know if the patch there is landed or still in progress.
Flags: needinfo?(dminor)
The patch landed in inbound and presumably went to central long ago: https://hg.mozilla.org/integration/mozilla-inbound/rev/46e4b6ef0bac The other possibility is that that patch needs to generalize to *all* possible target element types; not sure why it wasn't. We should do that and retest.
We generalized volatility to all element types long ago, at this point. Search for JS_VOLATILE_ARM to confirm. Does this still reproduce?
I checked again today and it is running fine now. The Massive benchmark has also undergone some changes in the meantime but chances are the underlying problem was fixed.
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → WORKSFORME
blocking-b2g: 2.2? → -
Blocks: 1231676
