Closed Bug 293732 (opened 20 years ago, closed 15 years ago)

FTP upload consumes a lot of RAM

Categories

(Core Graveyard :: Embedding: APIs, defect)

Hardware: x86
OS: All
Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: mozilla, Assigned: neil)

References

Details

Attachments

(1 file)

User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8b2) Gecko/20050510

This problem was reported to me for an OS/2 build I created, but it also happens to me on Linux. When uploading a large file to an FTP server, available RAM is consumed very quickly, in quantities of at least the size of the file. I cannot trace the network interface, but it seems that before the real transfer to the FTP server begins, the file is first loaded completely into RAM; the FTP server does not show even a partial upload.

On machines with more restricted RAM this means that either the machine slows down a lot (due to swapping) or even hangs completely, or Mozilla gets killed as it tries to allocate too much RAM.

Strange that I didn't find another bug about this; probably everybody doing large uploads is using a proper FTP client...
hm... the code just does an AsyncCopy...
oh. the WBP code sort of looks like it reads the entire file into memory? (StartUpload takes an nsIStorageStream, which I believe comes from MakeOutputStreamFromURI.) (What's the right component for webbrowserpersist bugs?)
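If I'm reading it right, the path is roughly this (a sketch of what I think the output-stream side ends up doing, not the actual nsWebBrowserPersist source; the function name, segment size, and max size here are guesses):

    #include "nsCOMPtr.h"
    #include "nsIOutputStream.h"
    #include "nsIStorageStream.h"
    #include "nsStorageStream.h"  // NS_NewStorageStream
    #include "prtypes.h"          // PR_UINT32_MAX

    // Sketch of the suspected memory-hungry path: everything written to a
    // storage stream's output stream is kept in heap segments, so the whole
    // source file sits in RAM before StartUpload sends a single byte over FTP.
    static nsresult
    MakeMemoryBackedOutputStream(nsIOutputStream **aResult)
    {
        nsCOMPtr<nsIStorageStream> storage;
        nsresult rv = NS_NewStorageStream(4096, PR_UINT32_MAX,
                                          getter_AddRefs(storage));
        if (NS_FAILED(rv))
            return rv;
        // Writing N bytes through this stream allocates roughly N bytes
        // of segments, which matches the reported symptom.
        return storage->GetOutputStream(0, aResult);
    }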
Assignee: dougt → adamlock
Component: Networking: FTP → Embedding: APIs
QA Contact: benc
Probably APIs or File Handling...
WBP should use a temp file for large uploads. (I thought it was smart about this... guess not... maybe that was something that was planned but fell on the floor when Netscape was disbanded.)
Why use a copy of the data at all? If the file is large, copying it somewhere else on disk may create similar problems to copying it into RAM. Can it not be streamed directly from the original file?
Sorry, I was referring to what happens when a large file is uploaded from, say, an http:// link to an ftp:// link. In the case of a file already on the filesystem, WBP should just read the file directly without having to load it all into memory. It should wrap the file input stream in a buffered input stream so that it can be processed by the network library, but that buffered input stream need not grow so large. I'm sure something minor is causing this bug then, because it definitely has a codepath for reading directly from disk.
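Roughly like this (a sketch only; the function name and the 8 KB buffer size are illustrative, not code from the tree):

    #include "nsCOMPtr.h"
    #include "nsIInputStream.h"
    #include "nsNetUtil.h"  // NS_NewBufferedInputStream

    // Wrap an arbitrary input stream in a fixed-size read buffer so necko
    // can consume it in chunks; memory use stays at the buffer size (8 KB
    // here) no matter how large the file is.
    static nsresult
    WrapForUpload(nsIInputStream *aSource, nsIInputStream **aBuffered)
    {
        return NS_NewBufferedInputStream(aBuffered, aSource, 8192);
    }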
hm... I'm unable to find anything creating a file input stream or a buffered stream in nsWebBrowserPersist.cpp... where is that codepath?
Yes, you are correct; I was imagining things. The WBP code should look at the source URI and try to QI it to nsIFileURL. If the QI succeeds, it should get the nsIFile from it and upload that directly, instead of "downloading" the URI to a memory buffer as is done currently.
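Something along these lines (a rough sketch of the idea, not the eventual patch; the function name and the fallback behavior are made up):

    #include "nsCOMPtr.h"
    #include "nsIFile.h"
    #include "nsIFileURL.h"
    #include "nsIInputStream.h"
    #include "nsIURI.h"
    #include "nsNetUtil.h"  // NS_NewLocalFileInputStream

    // If the source URI is a local file, open it directly from disk instead
    // of "downloading" it into a memory buffer first.
    static nsresult
    GetUploadStream(nsIURI *aSourceURI, nsIInputStream **aStream)
    {
        nsCOMPtr<nsIFileURL> fileURL = do_QueryInterface(aSourceURI);
        if (!fileURL)
            return NS_ERROR_NOT_IMPLEMENTED;  // non-file source: must fetch it first

        nsCOMPtr<nsIFile> file;
        nsresult rv = fileURL->GetFile(getter_AddRefs(file));
        if (NS_FAILED(rv))
            return rv;
        // necko reads this stream in small chunks during the FTP upload,
        // so memory use no longer scales with the file size.
        return NS_NewLocalFileInputStream(aStream, file);
    }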
QA Contact: apis
Attached patch: Possible patch
I don't know if this is the right idea, or whether we should try to create a buffered input stream for all non-file uploads.
Assignee: adamlock → neil
Status: NEW → ASSIGNED
Attachment #432130 - Flags: review?(cbiesinger)
Attachment #432130 - Flags: review?(cbiesinger) → review+
Pushed changeset 41b1df0b2262 to mozilla-central. Sadly I mistyped the bug number as 239372...
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Product: Core → Core Graveyard