Bug 1953185 Comment 14 Edit History
Thank you for going out of your way to provide a reproducing test case, and in particular for posing it in terms of pure fetch, Flindby! And thank you for running it and gathering profiles, Mayank!

The tl;dr here, just going by the smaller profile with memory tracking (and noting that the links I provide show filtered stacks, which decouples the flame graph from the timeline; there are far fewer samples than that suggests):

- We've decided to send the contents of the File/Blob to the parent process via a DataPipe instead of just serializing the underlying file descriptor ([based on the file reads in the content process via a stream copier](https://share.firefox.dev/3FqYJQ3)).
- And in the parent process I do [see StorageStream writes flowing from a DataPipeReceiver](https://share.firefox.dev/3FCMT5a), which suggests that [when normalizing the upload stream in HttpBaseChannel we decided to put it in a storage stream](https://searchfox.org/mozilla-central/rev/b0e8e4ceb46cb3339cdcb90310fcc161ef4b9e3e/netwerk/protocol/http/HttpBaseChannel.cpp#1113-1131). This is part of the problem around our need to know the content-length we'll be sending in order to tell the server, and the ability to replay the stream, plus the sort-of-known issues with the nsIInputStream contracts where we don't have a characteristic of "I can tell you the exact stream size even though I implement nsIAsyncInputStream".

A notable potential factor here is that the StorageStream has a max size of 4 GiB and will [start returning NS_ERROR_OUT_OF_MEMORY](https://searchfox.org/mozilla-central/rev/b0e8e4ceb46cb3339cdcb90310fcc161ef4b9e3e/xpcom/io/nsStorageStream.cpp#169-171) when it hits the top. But [NormalizeUploadStream piercing multiple input streams and recursively normalizing those streams](https://searchfox.org/mozilla-central/rev/b0e8e4ceb46cb3339cdcb90310fcc161ef4b9e3e/netwerk/protocol/http/HttpBaseChannel.cpp#1046-1073) could sidestep that limit: if each individual file gets normalized, they could all end up in their own storage streams (a simplified sketch of how that adds up follows below). I believe we've tried to avoid this exact situation from happening, but it's possible there have been regressions or we didn't quite get the plumbing completed because of the nsIInputStream limitations. Bug 1643280 is in this area, for example, where we have tried to propagate stream length metadata.

I'm going to leave the needinfo on me up for now to get a Pernosco trace, since I think we already have test coverage for the correctness of this situation, but we lack performance-related checks or structural checks like the ones [mozilla::dom::BaseBlobImpl::GetBlobImplType](https://searchfox.org/mozilla-central/rev/b0e8e4ceb46cb3339cdcb90310fcc161ef4b9e3e/dom/file/BaseBlobImpl.h#114-116) enables for Blobs, so it shouldn't be too hard to see what the emergent behavior here is. (And it wouldn't be surprising to find that the code explicitly says this happens/may happen. But we also can/should fix this.)
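To make the per-part concern above concrete, here's a deliberately simplified sketch. The names (`Part`, `NormalizePart`, `kPerStreamCap`) are hypothetical and this is not the actual HttpBaseChannel/nsStorageStream code; it just illustrates why normalizing each part separately keeps every buffer under the 4 GiB per-stream cap while the total buffered in the parent process keeps growing with the size of the upload:

```cpp
// Deliberately simplified, hypothetical model -- NOT the actual
// HttpBaseChannel / nsStorageStream code. Each part that can't report an exact
// length gets buffered into its own in-memory "storage stream", so every
// buffer stays under the per-stream 4 GiB cap while the total buffered in the
// parent grows with the upload.
#include <cstdint>
#include <stdexcept>
#include <vector>

constexpr uint64_t kPerStreamCap = 4ULL * 1024 * 1024 * 1024;  // 4 GiB, like nsStorageStream

struct Part {
  uint64_t size;     // bytes this part of the multipart body will produce
  bool knownLength;  // analogue of a stream that can report its exact length
};

struct NormalizedPart {
  uint64_t bufferedBytes;  // bytes copied into an in-memory buffer
};

// Hypothetical "normalize one part": if the part can't tell us its length,
// copy the whole thing into memory so Content-Length can be computed and the
// body can be replayed.
NormalizedPart NormalizePart(const Part& part) {
  if (part.knownLength) {
    return NormalizedPart{0};  // length is known; no buffering needed
  }
  if (part.size > kPerStreamCap) {
    // analogue of nsStorageStream returning NS_ERROR_OUT_OF_MEMORY at the cap
    throw std::runtime_error("per-stream cap exceeded");
  }
  return NormalizedPart{part.size};  // whole part buffered in memory
}

int main() {
  // Ten 2 GiB files whose streams don't report a length: no single buffer ever
  // hits the 4 GiB cap, but ~20 GiB ends up buffered in total.
  std::vector<Part> upload(10, Part{2ULL * 1024 * 1024 * 1024, false});

  uint64_t totalBuffered = 0;
  for (const Part& p : upload) {
    totalBuffered += NormalizePart(p).bufferedBytes;
  }
  return totalBuffered > kPerStreamCap ? 0 : 1;  // 0 here: the total far exceeds the cap
}
```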
edit: I'd meant to include the test case JS source to help people avoid needing to download the zip:

```js
// Get elements
const fileInput = document.getElementById('fileInput');
const uploadBtn = document.getElementById('uploadBtn');

// Handle button click to upload files
uploadBtn.addEventListener('click', async () => {
  const files = fileInput.files;

  if (files.length === 0) {
    alert("Please select at least one file to upload.");
    return;
  }

  // Create FormData object
  const formData = new FormData();

  // Append each file to the FormData object
  for (let i = 0; i < files.length; i++) {
    formData.append('files[]', files[i]);
  }

  try {
    // Send the request with fetch and headers
    const response = await fetch('http://localhost:3000/upload', {
      method: 'POST',
      body: formData,
    });

    // Handle the response
    if (response.ok) {
      const result = await response.json();
      console.log('Upload Success:', result);
      alert('Files uploaded successfully!');
    } else {
      console.error('Upload failed:', response);
      alert('Upload failed.');
    }
  } catch (error) {
    console.error('Error:', error);
    alert('An error occurred during the upload.');
  }
});
```