Bug 1421094: Memory leaks while uploading file
Opened 7 years ago, closed 7 years ago
Categories: Core :: General, defect
Status: RESOLVED FIXED
Target Milestone: mozilla59
Tracking: firefox59 status fixed
Reporter: trufanovan; Assigned: baku
Keywords: memory-footprint; Whiteboard: [MemShrink:P1]
Attachments: 2 files, 2 obsolete files
5.86 KB, patch; smaug: review+
5.46 KB, patch; smaug: review+
User Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0
Build ID: 20171123171603

Steps to reproduce:
1. Register at mail.ru.
2. Go to the Mail.ru Cloud service: https://cloud.mail.ru
3. Try to upload 4 files of ~2 GB each (about the maximum; the default mail.ru cloud quota is 8 GB).

Actual results:
RAM usage jumps to ~2 GB, as apparently the whole file is loaded into RAM instead of being read in parts. Also, after the 3rd file, RAM usage exceeds 4 GB, which leads to an OS freeze on my machine (Kubuntu 17.10).

Expected results:
Files are uploaded smoothly, with reasonable RAM usage and without cumulative effects.
Reporter
Comment 1 • 7 years ago
57.0 (64-bit), package 57.0+build4-0ubuntu0.17.10.6
$ uname -a
Linux lg-laptop 4.13.0-16-generic #19-Ubuntu SMP Wed Oct 11 18:35:14 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Comment 2 • 7 years ago
Andrea, any thoughts here?
Flags: needinfo?(amarchesini)
Whiteboard: [MemShrink]
Assignee
Comment 3 • 7 years ago
> Register at mail.ru
I need help with the language; I don't know how to create an account.
Could it be that the website uses the FileReader API? In that case, the whole file is kept in memory.
Hard to say without a proper test.
Flags: needinfo?(amarchesini)
Reporter
Comment 4 • 7 years ago
> Could it be that the website uses FileReader API? In this case, the whole file is kept in memory.

Probably. That may be the reason for the <2 GB per-file limit, though they allow uploading bigger files if the native Windows client app is used. But the problem is memory usage, which grows when bulk upload is used, e.g. if several files are selected for upload at once. In my case I have 8 GB RAM in total and start experiencing problems on the 3rd file (all files are 1.7-1.8 GB).

I've created a test account. Credentials can be found here: https://pastebin.com/BbRGWcyg
It's switched to English, but it looks like Mail.ru's Cloud service lacks localization, so I've screenshotted it and added English labels: http://i97.fastpic.ru/big/2017/1128/0d/5d769e5839626a954229b1d511d5610d.png
Assignee
Comment 5 • 7 years ago
Thanks so much for helping me debug this issue. I confirm that they are using FileReader to read the full content of the file. I guess we could contact them and ask them to use a different approach, but nothing else can be done on the browser side.
Reporter
Comment 6 • 7 years ago
Are you sure there are no leaks? I would expect the browser to use the same amount of memory for a 10-file bulk upload as for uploading each of them one by one. Is this a FileReader API disadvantage?

I've tried Chromium 62.0.3202.94: it takes < 100 MB of RAM to upload all the files and apparently doesn't read the whole file into RAM (I keep seeing disk I/O until the file upload ends). Its overall RAM usage is dramatically lower, and it behaves much faster than FF 57.
Assignee
Comment 7 • 7 years ago
The issue is here:
https://searchfox.org/mozilla-central/source/netwerk/protocol/http/HttpBaseChannel.cpp#908-913

When we send a POST, we end up creating a multiplexInputStream composed of stringStream + BufferedInputStream(IPCBlobInputStream) + stringStream. This is not cloneable because nsBufferedInputStream doesn't implement the nsICloneableInputStream interface. This patch implements it. Of course, we expose it only if the underlying stream is cloneable as well.
Attachment #8932546 - Flags: review?(bugs)
Assignee
Updated • 7 years ago
Assignee: nobody → amarchesini
Comment 8 • 7 years ago
Comment on attachment 8932546 [details] [diff] [review]
buffer.patch

>+NS_IMETHODIMP
>+nsBufferedInputStream::GetCloneable(bool* aCloneable)
>+{
>+  nsCOMPtr<nsICloneableInputStream> stream = do_QueryInterface(mStream);
>+  NS_ENSURE_TRUE(stream, NS_ERROR_FAILURE);
>+  return stream->GetCloneable(aCloneable);

I wonder, should this just first set *aCloneable = false and, if the QI fails, return NS_OK? Especially given that the .idl has [infallible].

>+NS_IMETHODIMP
>+nsBufferedInputStream::Clone(nsIInputStream** aResult)
>+{
>+  nsCOMPtr<nsICloneableInputStream> stream = do_QueryInterface(mStream);
>+  NS_ENSURE_TRUE(stream, NS_ERROR_FAILURE);
>+
>+  nsCOMPtr<nsIInputStream> clonedStream;
>+  nsresult rv = stream->Clone(getter_AddRefs(clonedStream));
>+  NS_ENSURE_SUCCESS(rv, rv);
>+
>+  nsCOMPtr<nsIBufferedInputStream> bis = new nsBufferedInputStream();
>+  rv = bis->Init(clonedStream, mBufferSize);

Why don't we need to clone other state from 'this'? Without that, the patch doesn't seem to follow the documentation we have about clone() in the .idl. Please explain why this is enough, or fix it and ask for review again.

Can we also get some tests for this, including cases where cloning happens after some data has already been read?
Attachment #8932546 - Flags: review?(bugs) → review-
Assignee
Comment 9 • 7 years ago
For what is needed here, we want nsBufferedInputStream to be cloneable only if unread and unclosed.
Attachment #8932546 - Attachment is obsolete: true
Attachment #8932810 - Flags: review?(bugs)
Assignee
Comment 10 • 7 years ago
Attachment #8932810 - Attachment is obsolete: true
Attachment #8932810 - Flags: review?(bugs)
Attachment #8932815 - Flags: review?(bugs)
Comment 11 • 7 years ago
Comment on attachment 8932815 [details] [diff] [review]
buffer.patch

>+NS_IMETHODIMP
>+nsBufferedInputStream::Clone(nsIInputStream** aResult)
>+{

I think we should also check here:

if (!mBuffer || mBufferStartOffset) {
  return NS_ERROR_FAILURE;
}
Attachment #8932815 - Flags: review?(bugs) → review+
Updated • 7 years ago
Whiteboard: [MemShrink] → [MemShrink:P1]
Assignee
Comment 12 • 7 years ago
The first patch is not enough. It fixes the issue, but it breaks a ServiceWorker test. Here's why: the first patch makes nsBufferedInputStream cloneable, which makes necko avoid creating a copy of the stream here:
https://searchfox.org/mozilla-central/source/netwerk/protocol/http/HttpBaseChannel.cpp#901

If this channel is handled by a ServiceWorker, the inputStream is cloned when a ServiceWorkerFetchEvent is dispatched:
https://searchfox.org/mozilla-central/source/dom/workers/ServiceWorkerPrivate.cpp#1498

The cloned stream is stored in the InternalRequest:
https://searchfox.org/mozilla-central/source/dom/workers/ServiceWorkerPrivate.cpp#1608

and that is used by fetch to create a new HttpChannel:
https://searchfox.org/mozilla-central/source/dom/fetch/FetchDriver.cpp#642,646-648

As you can see, we don't store the body size, so we end up passing -1 to ExplicitSetUploadStream(). BaseHttpChannel then calls ::Available() to learn the stream size, and we get the wrong value, because nsBufferedInputStream could be dealing with an async inputStream.

This patch propagates the buffer size into Fetch and Request. I also have a set of patches to remove the use of Available() in necko, but that will be a separate bug.
Attachment #8936325 - Flags: review?(bugs)
Comment 13 • 7 years ago
Comment on attachment 8936325 [details] [diff] [review]
part 2 - CloneUploadStream

>   bool mIsReload;
>   bool mMarkLaunchServiceWorkerEnd;
>   RequestCache mCacheMode;
>   RequestMode mRequestMode;
>   RequestRedirect mRequestRedirect;
>   RequestCredentials mRequestCredentials;
>   nsContentPolicyType mContentPolicyType;
>   nsCOMPtr<nsIInputStream> mUploadStream;
>+  int64_t mContentLength;

Could you call this mUploadStreamContentLength, to make it a bit easier to understand what it is about?

But we need to do something about our current stream handling. We keep fixing more and more of these kinds of issues.
Attachment #8936325 - Flags: review?(bugs) → review+
Assignee
Comment 14 • 7 years ago
> But we need to do something to our current streams handling. We keep fixing
> more and more this kind of issues.
You are right. I'm almost done removing the use of ::Available() to calculate the size of a stream in necko.
Note that, because of IPCBlobInputStream, the number of places that have to deal with nsIAsyncInputStream has increased, and our code base was not really ready for this step. It's taking time to do the proper cleanup but, I hope, we are almost there.
Comment 15 • 7 years ago
Pushed by amarchesini@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/8705cd6a540d
Make nsBufferedStream cloneable, r=smaug
https://hg.mozilla.org/integration/mozilla-inbound/rev/dc71ec3a971b
nsIUploadChannel2.cloneUploadStream returns the length of the stream, r=smaug
Comment 16 • 7 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/8705cd6a540d
https://hg.mozilla.org/mozilla-central/rev/dc71ec3a971b
Status: UNCONFIRMED → RESOLVED
Closed: 7 years ago
status-firefox59: --- → fixed
Resolution: --- → FIXED
Target Milestone: --- → mozilla59