Bug 852838
Opened 11 years ago
Closed 11 years ago
WebAudio abort with infallible array [@mozilla::dom::AudioBuffer::InitializeBuffers]
Categories
(Core :: Web Audio, defect)
RESOLVED
FIXED
mozilla22
People
(Reporter: posidron, Assigned: ehsan.akhgari)
References
Details
(Keywords: crash, testcase)
Attachments
(4 files)
201 bytes, text/html | Details
2.26 KB, patch | roc: review+ | Details | Diff | Splinter Review
611 bytes, patch | ehsan.akhgari: review- | Details | Diff | Splinter Review
40.58 KB, text/plain | Details
#0 0x10386e40e in mozalloc_abort mozalloc_abort.cpp:30
#1 0x10399d4a4 in nsTArrayInfallibleAllocator::SizeTooBig nsTArray.h:118
#2 0x10399cfb0 in nsTArray_base<nsTArrayInfallibleAllocator>::EnsureCapacity nsTArray-inl.h:112
#3 0x10548993d in nsTArray_Impl<JSObject*, nsTArrayInfallibleAllocator>::SetCapacity nsTArray.h:1099
#4 0x105488910 in mozilla::dom::AudioBuffer::InitializeBuffers AudioBuffer.cpp:73
#5 0x10548e0ca in mozilla::dom::AudioContext::CreateBuffer AudioContext.cpp:93
#6 0x10665de6a in mozilla::dom::AudioContextBinding::createBuffer AudioContextBinding.cpp:221
#7 0x10665c4a4 in mozilla::dom::AudioContextBinding::genericMethod AudioContextBinding.cpp:502

Tested with m-i changeset: 125476:cad5306d569e
Assignee
Comment 1•11 years ago
Attachment #727483 -
Flags: review?(roc)
Attachment #727483 -
Flags: review?(roc) → review+
Assignee
Comment 2•11 years ago
https://hg.mozilla.org/integration/mozilla-inbound/rev/e07c6617533d
Comment 3•11 years ago
https://hg.mozilla.org/mozilla-central/rev/e07c6617533d
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla22
Comment 4•11 years ago
this test case causes an OOM in fennec on the tegras. Any way to fix this, or should we disable it?
Comment 5•11 years ago
Just in case we determine that disabling on Android makes the most sense. FWIW, we don't run the crashtest 1 suite on Android because it crashed too much in the past.
Assignee
Comment 6•11 years ago
(In reply to Joel Maher (:jmaher) from comment #4)
> this test case causes an OOM in fennec on the tegras. Any way to fix this,
> or should we disable it?

Can you give us a stack trace, please? That is exactly the sort of thing this bug was meant to prevent. :-)
Assignee
Comment 7•11 years ago
Comment on attachment 744252 [details] [diff] [review]
disable the test case on android

We're definitely not going to disable this test -- it indicates that the bug is not completely fixed yet.
Attachment #744252 -
Flags: review-
Comment 8•11 years ago
Well, I don't see anything other than:

01-01 14:56:25.736 E/GeckoConsole( 3944): [JavaScript Error: "NS_ERROR_NOT_AVAILABLE: Component is not available" {file: "data:text/html,<body%20onUnload="location%20=%20'http://www.mozilla.org/'">This%20frame's%20onunload%20tries%20to%20load%20another%20page." line: 1}]

I see nothing else in the logcat folders, nor a crash dump file created.
Comment 9•11 years ago
I ran a debug build on Android here. Testing just the content/media folder (only 13 tests) yielded no OOM, but running the entire suite fails early on due to a crash dump.
Assignee
Comment 10•11 years ago
(In reply to comment #8)
> well, I don't see anything other than:
> 01-01 14:56:25.736 E/GeckoConsole( 3944): [JavaScript Error:
> "NS_ERROR_NOT_AVAILABLE: Component is not available" {file:
> "data:text/html,<body%20onUnload="location%20=%20'http://www.mozilla.org/'">This%20frame's%20onunload%20tries%20to%20load%20another%20page."
> line: 1}]
>
> I see nothing else in the logcat folders, nor a crash dump file created.

This is not an OOM! :-) Are you sure that this is caused by the patch landed here?
Comment 11•11 years ago
Oops, I had the wrong log in there. Too many Fennec automation bugs to work on at once. Here is what I see in the log:

01-02 04:49:50.509 E/GeckoConsole( 9914): [JavaScript Error: "NS_ERROR_OUT_OF_MEMORY: Out of Memory" {file: "http://192.168.1.80:8888/tests/content/media/test/crashtests/852838.html" line: 6}]

See the attached log file for the entire output from adb logcat.
Assignee
Updated•11 years ago
Attachment #744587 -
Attachment mime type: text/x-log → text/plain
Assignee
Comment 12•11 years ago
(In reply to Joel Maher (:jmaher) from comment #11)
> Created attachment 744587 [details]
> logcat from OOM reftest run
>
> oops, I had the wrong log in there. Too many fennec automation bugs to work
> on at once. Here is what I see in the log:
> 01-02 04:49:50.509 E/GeckoConsole( 9914): [JavaScript Error:
> "NS_ERROR_OUT_OF_MEMORY: Out of Memory" {file:
> "http://192.168.1.80:8888/tests/content/media/test/crashtests/852838.html"
> line: 6}]
>
> see attached log file for entire output from adb logcat.

OK, now that's what I had expected to see. But again, this is not an OOM. This is just a JS exception that we raise when createBuffer fails because we don't have enough memory to create a buffer as large as requested. I still don't see where the problem is. Given the adb log, it seems like the test suite is just moving on and running the tests after this, which is exactly what I would expect.
Comment 13•11 years ago
The browser shuts down unexpectedly, so only 19% of the tests run. If I skip the one test via the patch I attached to this bug, we can run 100% of the tests. Right now this single test case is blocking us from turning on 800 crashtests.
Assignee
Comment 14•11 years ago
Yes, but the question is: is this test the reason? I see the following line in the log:

01-02 04:49:52.119 E/GeckoConsole( 9914): [JavaScript Error: "NS_ERROR_NOT_AVAILABLE: Component is not available" {file: "data:text/html,<body%20onUnload="location%20=%20'http://www.mozilla.org/'">This%20frame's%20onunload%20tries%20to%20load%20another%20page." line: 1}]

That suggests there is another test running after this one... Do you have a log from the test output? Any information on when and why the crash happens would be really helpful.

Another question: do you get the same crash if you navigate to this page in Firefox on one of these devices?
Comment 15•11 years ago
I had overlooked the obvious; thanks for pointing it out. After a series of runs locally, I find the last test run is a different one each time, somewhere in docshell/base/crashtests/4*.

I wonder why this specific test case causes the browser to crash later on in the test suite. If we had been running C1 in automation in March when this crashtest landed, it would have been backed out for turning C1 orange.

Any tips for figuring out why this test case is causing so much pain downstream for the other crashtests?
Assignee
Comment 16•11 years ago
(In reply to comment #15)
> I had overlooked the obvious, thanks for pointing it out. After running a
> series of runs locally, I find the last test ran is a different one each time
> somewhere in docshell/base/crashtests/4*.
>
> I wonder why this specific test case causes the browser to crash later on in
> the test suite. If we were running C1 in automation in March and this specific
> crashtest was landed, it would have been backed out for causing C1 to go
> orange.
>
> Any tips for figuring out why this test cause is causing so much pain
> downstream for other crashtests?

I don't know enough about the rest of the media code to be able to answer that. Sorry!
Assignee
Comment 17•11 years ago
Mass moving Web Audio bugs to the Web Audio component. Filter on duckityduck.
Component: Video/Audio → Web Audio