Why does Firefox consume a lot of memory when posting a multipart/form-data request with a total size greater than ~5GB?
Categories: Core :: Performance, defect
People: Reporter: filip.lindby; Unassigned; NeedInfo
Attachments: 2 files
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:136.0) Gecko/20100101 Firefox/136.0
Steps to reproduce:
I have a Vue.js application where I upload large files with FormData (multipart/form-data) to an ASP.NET backend API. When the total size of the files I upload is about 5GB or less, Firefox consumes around 300MB; when I instead add files that sum up to around 8GB, Firefox consumes ~11-16GB. Is this a bug or something else? Any ideas on how to approach this issue?
I have tried both fetch and axios, same problem.
Each file is around 800MB
It works fine in Microsoft Edge
Using ESR version of Firefox.
Actual results:
Consumes a lot of memory
Expected results:
Basically no increase in memory usage
Comment 1•7 days ago
The Bugbug bot thinks this bug should belong to the 'Core::Performance' component, and is moving the bug to that component. Please correct in case you think the bot is wrong.
Comment 2•7 days ago
Can you share a testcase or a publicly available website?
Comment 3•7 days ago
And when you see the huge memory use, please type about:memory in a new tab and click on save memory report. A report will be generated, please share here.
Saved memory report from "about:memory"
(In reply to Mayank Bansal from comment #3)
And when you see the huge memory use, please type about:memory in a new tab and click on save memory report. A report will be generated, please share here.
I added it as an attachment, or do you want it as text in a comment?
(In reply to Mayank Bansal from comment #2)
Can you share a testcase or a publicly available website?
I have no public website. I can share a small snippet where the magic happens:
public async postFormDataWithProgress(
  endpoint: string,
  formData: FormData,
  onProgress: (percentage: number, loaded: number, total: number | undefined) => void,
  onFailed: () => void,
  cancelToken: CancelToken,
): Promise<AxiosResponse> {
  let headers = this.defaultHeaders;
  headers = { ...headers, 'Content-Type': 'multipart/form-data' };
  const response: AxiosResponse = await axios
    .post(`${this.apiUrl}${endpoint}`, formData, {
      cancelToken: cancelToken,
      maxRedirects: 0,
      maxBodyLength: Infinity,
      maxContentLength: Infinity,
      onUploadProgress: (progressEvent) => {
        const { loaded, total } = progressEvent;
        const percentage = Math.floor((loaded * 100) / (total ?? 1));
        onProgress(percentage, loaded, total);
      },
      headers: headers as AxiosHeaders,
    })
    .catch((error: any) => {
      onFailed();
      console.log('Upload post failed:', error);
      return error.response;
    });
  return response;
}
const formData = new FormData();
// Append each file as a separate section in the form data object.
files.forEach((file: any, index: number) => {
  formData.append(index.toString(), file, file.name);
});
await postFormDataWithProgress("url", formData, ...);
I sometimes see that the POST request fails with NS_ERROR_NET_RESET
Comment 8•6 days ago
Your memory report shows 10GB used in the parent process, with almost all of it in heap-unclassified, so something is definitely going wrong somewhere.
Flindby, is it possible to create a standalone testcase and share it here? That would make diagnosis much easier.
Additionally, did your application work as expected in an older version of Firefox? If yes, can you please do a bisection to find the exact change that caused this using mozregression (https://mozilla.github.io/mozregression/)?
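(For illustration only, and as a hypothetical invocation rather than exact instructions: something like "mozregression --good 91 --bad 136" asks mozregression to bisect between those two releases, downloading intermediate builds and prompting after each run whether it was good or bad.)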
Comment 9•6 days ago
Can you clarify where the Blob/Files are coming from? (ex: <input type=file>, fetch(), XHR, blob created via new Blob/File, blob/file retrieved from IDB, etc.)
Also:
- How much RAM does the computer have / is there swapfile use happening? I'm asking mainly in the context of maybe there always is a transient memory spike but if swap starts getting used it gets more obvious because the time during which a ton of memory is being used ends up being longer.
- Is it possible there are any redirects happening with the POST?
Reporter
Comment 10•6 days ago
Small repro application: a Node.js backend server with a vanilla JS/HTML frontend.
cd file-upload-app
npm install
node server.js
- Open http://localhost:3000
- Select 10 files with size around 800MB
- Upload
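For reference, a minimal sketch of what a server.js for this kind of repro could look like. This is hypothetical and the actual file in the attached zip may differ; it assumes Express and multer and the 'files[]' field name used by the frontend:

// server.js -- hypothetical sketch of the repro backend (the real attachment may differ).
// Assumes: npm install express multer
const express = require('express');
const multer = require('multer');

const app = express();
// Disk storage streams incoming files to the uploads/ directory instead of buffering them in memory.
const upload = multer({ dest: 'uploads/' });

// Serve the index.html / upload page from the project directory.
app.use(express.static(__dirname));

// Accept the multipart/form-data POST; 'files[]' is the field name the page appends.
app.post('/upload', upload.array('files[]'), (req, res) => {
  res.json({ received: req.files.length });
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));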
Reporter
Comment 11•6 days ago
(In reply to Mayank Bansal from comment #8)
Your memory report shows 10GB used in the parent process, with almost all of it in heap-unclassified, so something is definitely going wrong somewhere.
Flindby, is it possible to create a standalone testcase and share it here? That would make diagnosis much easier.
Additionally, did your application work as expected in an older version of Firefox? If yes, can you please do a bisection to find the exact change that caused this using mozregression (https://mozilla.github.io/mozregression/)?
Added small repro application.
We just noticed the large memory usage so I do not know if it has worked before.
Reporter
Comment 12•6 days ago
(In reply to Andrew Sutherland [:asuth] (he/him) from comment #9)
Can you clarify where the Blob/Files are coming from? (ex: <input type=file>, fetch(), XHR, blob created via new Blob/File, blob/file retrieved from IDB, etc.)
Also:
- How much RAM does the computer have / is there swapfile use happening? I'm asking mainly in the context of maybe there always is a transient memory spike but if swap starts getting used it gets more obvious because the time during which a ton of memory is being used ends up being longer.
- Is it possible there are any redirects happening with the POST?
Yes, <input type="file" id="fileInput" multiple>
The computer has 64GB of RAM. If the swapfile were being used, wouldn't that happen in Microsoft Edge as well?
I can't see any redirects; it is a simple POST to the backend. See the small repro app.
Comment 13•6 days ago
Profile with all threads + IO + IPC: https://share.firefox.dev/3DKfftJ
Smaller profile with all threads +IO + IPC + Memory tracking : https://share.firefox.dev/41FRv1T
Asuth, is this something you can take a look at?
Comment 14•6 days ago
Thank you for going out of your way to provide a reproducing test case and in particular posing it in terms of pure fetch, Flindby! And thank you for running it and gathering profiles Mayank!
The tl;dr here is that, just going by the smaller profile with memory tracking (and noting that the links I provide are showing filtered stacks which decouples the flame graph from the timeline; there are far fewer samples than that):
- We've decided to send the contents of the File/Blob to the parent process via a DataPipe instead of just serializing the underlying file descriptor. The parent then receives the data as it is read from the files in the content process via a stream copier.
- And in the parent process I do see StorageStream writes flowing from a DataPipeReceiver, which suggests that when normalizing the upload stream in HttpBaseChannel we decided to put it in a storage stream. This is part of the problem around our need to know the content-length we'll be sending so we can tell the server, and the ability to replay the stream, plus the known issues with the nsIInputStream contracts where we don't have a characteristic of "I can tell you the exact stream size even though I implement nsIAsyncInputStream". A notable potential factor here is that StorageStream has a max limit of 4 GiB and will start returning NS_ERROR_OUT_OF_MEMORY when it hits that limit. But NormalizeUploadStream piercing multiple input streams and recursively normalizing them could sidestep that limit, since if each individual file gets normalized, they could all end up in storage streams.
I believe we've tried to avoid this exact situation from happening, but it's possible there have been regressions or we didn't quite get the plumbing completed because of the nsIInputStream limitations. Bug 1643280 is in this area, for example, where we have tried to propagate stream length metadata.
I'm going to leave the needinfo up for now on me to get a pernosco trace since I think we have test coverage for the correctness of this situation already, but we lack performance-related checks or structural checks like mozilla::dom::BaseBlobImpl::GetBlobImplType enables for Blobs, so it shouldn't be too hard to see what the emergent thing happening here is. (And it wouldn't be surprising to see that the code explicitly says that this happens/may happen. But we also can/should fix this.)
edit: I'd meant to include the test case JS source to help people avoid needing to download the zip:
// Get elements
const fileInput = document.getElementById('fileInput');
const uploadBtn = document.getElementById('uploadBtn');

// Handle button click to upload files
uploadBtn.addEventListener('click', async () => {
  const files = fileInput.files;
  if (files.length === 0) {
    alert("Please select at least one file to upload.");
    return;
  }

  // Create FormData object
  const formData = new FormData();

  // Append each file to the FormData object
  for (let i = 0; i < files.length; i++) {
    formData.append('files[]', files[i]);
  }

  try {
    // Send the request with fetch and headers
    const response = await fetch('http://localhost:3000/upload', {
      method: 'POST',
      body: formData,
    });

    // Handle the response
    if (response.ok) {
      const result = await response.json();
      console.log('Upload Success:', result);
      alert('Files uploaded successfully!');
    } else {
      console.error('Upload failed:', response);
      alert('Upload failed.');
    }
  } catch (error) {
    console.error('Error:', error);
    alert('An error occurred during the upload.');
  }
});
Reporter
Comment 15•5 days ago
(In reply to Andrew Sutherland [:asuth] (he/him) from comment #14)
I'm glad I could help. So the short answer is that there is currently nothing I can do in my application to handle this? :)
Note: I tested in Firefox version 91.9.0esr (64-bit) on Windows; I do not see any memory problems there.
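As for whether anything can be done in the application while this is investigated: purely as a hedged sketch, and not something confirmed in this bug, sending each file in its own POST keeps every individual request body well below the size where the problem shows up. Assuming the backend can accept repeated single-file uploads on the same endpoint, it could look roughly like this:

// Hypothetical mitigation sketch: one POST per file instead of one huge multipart body.
// Assumes the backend accepts repeated single-file uploads on the same endpoint.
async function uploadFilesIndividually(files) {
  for (const file of files) {
    const formData = new FormData();
    formData.append('files[]', file, file.name);

    const response = await fetch('http://localhost:3000/upload', {
      method: 'POST',
      body: formData,
    });

    if (!response.ok) {
      throw new Error(`Upload failed for ${file.name}: ${response.status}`);
    }
  }
}

This does not address the underlying Firefox behaviour, and whether it is acceptable depends on the backend's API.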
Comment 16•5 days ago
This bug was moved into the Performance component.
:filip.lindby, could you make sure the following information is on this bug?
- ✅ For slowness or high CPU usage, capture a profile with http://profiler.firefox.com/, upload it and share the link here.
- For memory usage issues, capture a memory dump from about:memory and attach it to this bug.
- Troubleshooting information: Go to about:support, click "Copy raw data to clipboard", paste it into a file, save it, and attach the file here.
If the requested information is already in the bug, please confirm it is recent.
Thank you.