Chunked FileAPI upload with XMLHttpRequest of large files (>=2GB) causes a huge memory leak and crashes Firefox




Reported 3 years ago, last modified 3 days ago


(Reporter: franz.hose1234, Unassigned)


42 Branch
Windows 7

Firefox Tracking Flags: (Not tracked)




(1 attachment)



3 years ago
Created attachment 8697266

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:42.0) Gecko/20100101 Firefox/42.0
Build ID: 20151029151421

Steps to reproduce:

1. Read the next file chunk with the FileReader API.
2. When the chunk has been read, upload it with an XMLHttpRequest (method PUT).
3. Read the next chunk, until the file has been fully sent.
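The three steps above can be sketched as follows (a minimal illustration; the function names and the chunk size are my assumptions, not taken from the attached test case). `chunkRanges()` is the pure slicing math; `uploadFile()` wires it to `FileReader` and `XMLHttpRequest` and only runs in a browser:

```javascript
// Assumed chunk size; the real test case may use a different value.
const CHUNK_SIZE = 8 * 1024 * 1024; // 8 MiB

// Pure helper: the [start, end) byte ranges passed to File.slice().
function chunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

// Browser-only sketch: read one slice with FileReader (step 1), PUT it
// with XMLHttpRequest (step 2), and start the next chunk from the XHR
// onload callback until the file has been fully sent (step 3).
function uploadFile(file, url) {
  const ranges = chunkRanges(file.size, CHUNK_SIZE);
  function sendChunk(i) {
    if (i >= ranges.length) return; // file fully sent
    const [start, end] = ranges[i];
    const reader = new FileReader();
    reader.onload = function () {
      const xhr = new XMLHttpRequest();
      xhr.open("PUT", url + "?offset=" + start);
      xhr.onload = function () { sendChunk(i + 1); };
      xhr.send(reader.result);
    };
    reader.readAsArrayBuffer(file.slice(start, end));
  }
  sendChunk(0);
}
```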

(My example code to reproduce the crash is in the zip file. Put it on a webserver on localhost (e.g. with Python), then open http://localhost:8000/upload_test.html and try to upload a large file.)

Tested on Windows 7 x64, with Firefox 42.0.

Actual results:

Firefox consumes more and more memory; after a few seconds (reading the file from an SSD) it reaches 2GB, and Firefox crashes.

The leak appears only when using XMLHttpRequest to send the data I've read -- I've included a second example ("read_from_disk.html"), which just reads the file and works flawlessly.

Expected results:

Proper garbage collection, not crashing the browser.

This code works in Chrome, for example; memory consumption bounces between ~200MB and 1GB there (when it reaches 1GB, the garbage collector kicks in and it goes down again).

Am I doing something wrong in my code?

I've tried many variations, e.g. not calling "upload_next_chunk()" from the XMLHttpRequest callback, but instead setting a global "chunk upload finished" flag and periodically checking that flag in an interval to start the next chunk. It didn't help.
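The flag-polling variant just described might look like this (a hypothetical reconstruction; `makePollingUploader` and its callbacks are my names, not the reporter's). The upload-completion handler only sets a flag, and a separate poll function, normally driven by `setInterval`, starts the next chunk:

```javascript
// Sketch of the polling variant: no chunk is started from inside the
// previous chunk's callback; the callback only flips chunkDone.
function makePollingUploader(totalChunks, sendChunk) {
  const state = { next: 0, chunkDone: true, finished: false };
  function poll() {
    if (state.finished || !state.chunkDone) return; // chunk still in flight
    if (state.next >= totalChunks) { state.finished = true; return; }
    state.chunkDone = false;
    const index = state.next++;
    // sendChunk is expected to call done() from its XHR onload handler.
    sendChunk(index, function done() { state.chunkDone = true; });
  }
  return { poll, state };
}
```

In the browser the poll loop would run via `setInterval(uploader.poll, 100)`; for illustration it can also be stepped manually with repeated `poll()` calls.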

Also, even if this is a Firefox bug, how can I make this backwards compatible to older versions of Firefox?

Thanks a lot!

Comment 1

3 years ago
Is it possible to upload a version without a Python script (or to provide a live testcase on a server)?
Flags: needinfo?(franz.hose1234)

Comment 2

3 years ago
I can't really do this right now, but the script should work on any webserver: just put the HTML file there and change the URL in the JS code from http://localhost:8000 to the server URL.
Flags: needinfo?(franz.hose1234)

Comment 3

3 years ago
I guess it's possible to host the script on a free webhost like

Comment 4

3 years ago
I think this test case reproduces better on localhost, or on a server that has a gigabit connection to the computer running Firefox for the test.

The script will make a LOT of requests to the server, so putting it on another host will
* generate many HTTP requests (similar to a DoS attack; not nice to do to a free hoster!)
* slow down the memory-leak effect due to the network latency of each request.

Please test it with a local webserver (`python -m SimpleHTTPServer` is enough!) for the reasons above.
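As a side note, `SimpleHTTPServer` is the Python 2 module name; on Python 3 the equivalent local server is started like this:

```shell
# Python 2 (as in the comment above)
python -m SimpleHTTPServer 8000

# Python 3 equivalent: serves the current directory on port 8000
python3 -m http.server 8000
```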

Comment 5

3 years ago

If you type about:crashes in the URL bar you will see a report ID; please send it.

Also, please download Firefox Nightly from here: and retest the problem.

If you still have the issue, please create a new profile; the steps are here:

Please also test whether the issue can be reproduced in Firefox's Safe Mode:
Flags: needinfo?(franz.hose1234)

Comment 6

3 years ago

I can't send a crash report to Firefox, because my company does not allow it. So about:crashes is empty.

However, I've tested the new Firefox 43 and Nightly now.

Firefox 43:
Still crashes

Firefox Nightly:
Does not crash! The new plugin-container.exe process takes up the large amounts of RAM, from ~200MB up to 2GB, but then, instead of crashing, it drops down to 200MB again. So the garbage collector seems to be working there! (Same result in Safe Mode.)

When will the code from the current Nightly build reach the stable release?

Sorry, but that's all the info I can provide for now. I recommend creating an automated test for gigabit uploads with the FileReader/XMLHttpRequest APIs, so this can be optimized (memory and CPU usage, as well as network efficiency), but I really can't do this myself right now. (BTW: Chrome doesn't crash in this test, but it has even lower network efficiency.)

Thanks for processing my bug report!
Flags: needinfo?(franz.hose1234)

Comment 7

3 years ago
One more thing: I initially tested the cases from the attached zip file on two Windows computers with Firefox 42.

As I've said before, I will not be able to provide more information any time soon.


3 years ago
Component: Untriaged → XML
OS: Unspecified → Windows 7
Product: Firefox → Core
Hardware: Unspecified → x86_64

Comment 8

3 years ago
Sorry for the change, but I think this component is more appropriate for this problem. If someone thinks this is not the right component, please change it.
Thank you
Component: XML → DOM

Comment 9

3 years ago
Hello, I can reproduce the same problem on Firefox version 44.

When sending many file chunks (from a big file, >2GB) via XHR, Firefox's memory usage rises critically and it ends up in an out-of-memory crash.

Here is the crash report:

Thanks a lot!


3 years ago
Crash Signature: [@ OOM | large | NS_ABORT_OOM | nsACString_internal::SetCapacity ]

Comment 11

2 years ago
Any news about this? We have exactly the same problem with chunked upload. The uploaded blobs are never garbage collected in Firefox 48.0.2. Aurora doesn't have the problem.

Comment 12

2 years ago
I wasn't able to reproduce this, but from comment 11 I see that it can't be reproduced with Aurora; that may be because this issue was fixed on that channel. I can send you the release calendar to see when the Aurora 50.0a2 version will arrive on the release channel:
Release 50 will be around 2016-11-07.

Comment 13

2 years ago
Does Firefox crash when the uploaded amount reaches 2GB, or immediately?
Closing because no crashes have been reported for 12 weeks.
Last Resolved: 3 days ago
Resolution: --- → WONTFIX