Closed Bug 852838 Opened 11 years ago Closed 11 years ago

WebAudio abort with infallible array [@mozilla::dom::AudioBuffer::InitializeBuffers]

Categories

(Core :: Web Audio, defect)

Platform: x86_64 macOS
Priority: Not set
Severity: critical

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla22

People

(Reporter: posidron, Assigned: ehsan.akhgari)

References

Details

(Keywords: crash, testcase)

Attachments

(4 files)

Attached file testcase
#0 0x10386e40e in mozalloc_abort mozalloc_abort.cpp:30
#1 0x10399d4a4 in nsTArrayInfallibleAllocator::SizeTooBig nsTArray.h:118
#2 0x10399cfb0 in nsTArray_base<nsTArrayInfallibleAllocator>::EnsureCapacity nsTArray-inl.h:112
#3 0x10548993d in nsTArray_Impl<JSObject*, nsTArrayInfallibleAllocator>::SetCapacity nsTArray.h:1099
#4 0x105488910 in mozilla::dom::AudioBuffer::InitializeBuffers AudioBuffer.cpp:73
#5 0x10548e0ca in mozilla::dom::AudioContext::CreateBuffer AudioContext.cpp:93
#6 0x10665de6a in mozilla::dom::AudioContextBinding::createBuffer AudioContextBinding.cpp:221
#7 0x10665c4a4 in mozilla::dom::AudioContextBinding::genericMethod AudioContextBinding.cpp:502

Tested with m-i changeset: 125476:cad5306d569e
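
For context on frames #0 through #3 above: the infallible nsTArray allocator has no way to report failure to its caller, so an oversized capacity request takes the process down via SizeTooBig() and mozalloc_abort(). A minimal sketch of that failure mode, assuming the present-day nsTArray spelling (exact signatures at the time of this bug may have differed):

#include "nsTArray.h"

class JSObject; // forward declaration; real code includes the JS engine headers

// Sketch of the failure mode in the stack above, not the shipped code:
// an infallible nsTArray treats an oversized capacity request as fatal.
void SketchAbortPath(uint32_t aRequestedChannels)
{
  nsTArray<JSObject*> channels;

  // With the infallible allocator there is no error return: a request
  // beyond the allocator's limit calls SizeTooBig(), which in turn
  // calls mozalloc_abort() -- frames #1 and #0 in the trace.
  channels.SetCapacity(aRequestedChannels);
}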
Assignee: nobody → ehsan
Blocks: webaudio
Attached patch Patch (v1) (Splinter Review)
Attachment #727483 - Flags: review?(roc)
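
The patch body is not inlined in this page, so here is a hypothetical, simplified sketch of what a fix of this kind typically looks like: allocate the channel array fallibly, and report failure through the ErrorResult that WebIDL bindings thread through, which surfaces to script as the catchable NS_ERROR_OUT_OF_MEMORY exception discussed in the comments below. The function name and signature here are illustrative only, not Ehsan's actual change.

#include "nsTArray.h"
#include "mozilla/ErrorResult.h"

class JSObject; // forward declaration; real code includes the JS engine headers

// Hypothetical simplification of the post-fix control flow: allocate
// fallibly, and turn a failed allocation into a JS exception instead
// of an abort.
static void
InitializeChannelStorage(nsTArray<JSObject*>& aChannels,
                         uint32_t aChannelCount,
                         mozilla::ErrorResult& aRv)
{
  if (!aChannels.SetCapacity(aChannelCount, mozilla::fallible)) {
    // Propagates to script as a catchable exception: the
    // NS_ERROR_OUT_OF_MEMORY seen later in the logcat output.
    aRv.Throw(NS_ERROR_OUT_OF_MEMORY);
    return;
  }
}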
https://hg.mozilla.org/mozilla-central/rev/e07c6617533d
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla22
This test case causes an OOM in Fennec on the Tegras.  Any way to fix this, or should we disable it?
Just in case we determine that disabling on Android makes the most sense.  FWIW, we don't run the crashtest-1 suite on Android due to it crashing too much in the past.
(In reply to Joel Maher (:jmaher) from comment #4)
> This test case causes an OOM in Fennec on the Tegras.  Any way to fix this,
> or should we disable it?

Can you give us a stack trace please?  That's sort of what this bug was meant to prevent.  :-)
Comment on attachment 744252 [details] [diff] [review]
disable the test case on android

We're definitely not going to disable this test, as it indicates that the bug is not completely fixed yet.
Attachment #744252 - Flags: review-
Well, I don't see anything other than:
01-01 14:56:25.736 E/GeckoConsole( 3944): [JavaScript Error: "NS_ERROR_NOT_AVAILABLE: Component is not available" {file: "data:text/html,<body%20onUnload="location%20=%20'http://www.mozilla.org/'">This%20frame's%20onunload%20tries%20to%20load%20another%20page." line: 1}]

I see nothing else in the logcat folders, nor a crash dump file created.
I ran a debug build on Android here; testing just the content/media folder yielded no OOM since it was only 13 tests, but running the entire suite fails early on due to a crash dump.
(In reply to comment #8)
> Well, I don't see anything other than:
> 01-01 14:56:25.736 E/GeckoConsole( 3944): [JavaScript Error:
> "NS_ERROR_NOT_AVAILABLE: Component is not available" {file:
> "data:text/html,<body%20onUnload="location%20=%20'http://www.mozilla.org/'">This%20frame's%20onunload%20tries%20to%20load%20another%20page."
> line: 1}]
> 
> I see nothing else in the logcat folders, nor a crash dump file created.

This is not an OOM!  :-)

Are you sure that this is caused by the patch landed here?
Oops, I had the wrong log in there.  Too many Fennec automation bugs to work on at once.  Here is what I see in the log:
01-02 04:49:50.509 E/GeckoConsole( 9914): [JavaScript Error: "NS_ERROR_OUT_OF_MEMORY: Out of Memory" {file: "http://192.168.1.80:8888/tests/content/media/test/crashtests/852838.html" line: 6}]

see attached log file for entire output from adb logcat.
Attachment #744587 - Attachment mime type: text/x-log → text/plain
(In reply to Joel Maher (:jmaher) from comment #11)
> Created attachment 744587 [details]
> logcat from OOM reftest run
> 
> Oops, I had the wrong log in there.  Too many Fennec automation bugs to work
> on at once.  Here is what I see in the log:
> 01-02 04:49:50.509 E/GeckoConsole( 9914): [JavaScript Error:
> "NS_ERROR_OUT_OF_MEMORY: Out of Memory" {file:
> "http://192.168.1.80:8888/tests/content/media/test/crashtests/852838.html"
> line: 6}]
> 
> see attached log file for entire output from adb logcat.

OK, now that's what I had expected to see.  But again, this is not an OOM crash.  This is just a JS exception that we raise when createBuffer fails because we don't have enough memory to create a buffer as large as requested.

I still don't see where the problem is.  Given the adb log, it seems like the test suite is just moving on and running the tests after this, which is exactly what I would expect.
The browser shuts down unexpectedly, causing only 19% of the tests to run.  If I skip the one test in the patch I attached to this bug, we can run 100% of the tests.  Right now this single test case is blocking us from turning on 800 crashtests.
Yes, but the question is, is this test the reason?  I see the following line in the log:

01-02 04:49:52.119 E/GeckoConsole( 9914): [JavaScript Error: "NS_ERROR_NOT_AVAILABLE: Component is not available" {file: "data:text/html,<body%20onUnload="location%20=%20'http://www.mozilla.org/'">This%20frame's%20onunload%20tries%20to%20load%20another%20page." line: 1}]

Which suggests that there is another test running after this one...  Do you have a log from the test output?  Any information on when and why the crash happens would be really helpful...

Another question: do you get the same crash if you navigate to this web page in Firefox on one of these devices?
I had overlooked the obvious, thanks for pointing it out.  After a series of runs locally, I find the last test run is a different one each time, somewhere in docshell/base/crashtests/4*.

I wonder why this specific test case causes the browser to crash later on in the test suite.  If we had been running C1 in automation in March when this specific crashtest landed, it would have been backed out for causing C1 to go orange.

Any tips for figuring out why this test case is causing so much pain downstream for other crashtests?
(In reply to comment #15)
> I had overlooked the obvious, thanks for pointing it out.  After a series
> of runs locally, I find the last test run is a different one each time,
> somewhere in docshell/base/crashtests/4*.
> 
> I wonder why this specific test case causes the browser to crash later on
> in the test suite.  If we had been running C1 in automation in March when
> this specific crashtest landed, it would have been backed out for causing
> C1 to go orange.
> 
> Any tips for figuring out why this test case is causing so much pain
> downstream for other crashtests?

I don't know enough about the rest of the media code to be able to answer that.  Sorry!
Mass moving Web Audio bugs to the Web Audio component.  Filter on duckityduck.
Component: Video/Audio → Web Audio