Closed Bug 293732
Opened 20 years ago, Closed 15 years ago

FTP upload consumes a lot of RAM

Categories: Core Graveyard :: Embedding: APIs, defect
Tracking: Not tracked
Status: RESOLVED
Resolution: FIXED
People: (Reporter: mozilla, Assigned: neil)

Attachments (1 file): patch, 6.22 KB, Biesinger: review+
Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8b2) Gecko/20050510
This problem was reported to me for an OS/2 build I created, but it also
happens to me on Linux.
When uploading a large file to an FTP server, available RAM is observed to be
consumed very quickly, in quantities of at least the size of the file. I cannot
trace the network interface, but it seems that before the real transfer to the
FTP server begins, the file is first completely loaded into RAM. The FTP server
does not show even a partial upload.
On machines with more limited RAM this means that the machine slows down a lot
(due to swapping) or even hangs completely, or that Mozilla gets killed as it
tries to allocate too much RAM.
Strange that I didn't find another bug about this; probably everybody doing
large uploads uses a proper FTP client...
Comment 1•20 years ago
hm... the code just does an AsyncCopy...
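(For illustration only, not the actual FTP code: a minimal sketch of what "just does an AsyncCopy" looks like, assuming the NS_AsyncCopy helper from nsStreamUtils.h of that era; the wrapper function and parameter names here are hypothetical. The point is that NS_AsyncCopy pumps whatever source stream it is handed, so if that source is an in-memory copy of the file, the RAM is already spent before the copy even starts.)

    #include "nsStreamUtils.h"    // NS_AsyncCopy
    #include "nsIInputStream.h"
    #include "nsIOutputStream.h"
    #include "nsIEventTarget.h"

    // Hypothetical wrapper: asynchronously copy a source stream into the
    // upload's output stream in 4 KB chunks on the given event target.
    static nsresult
    StartUploadCopy(nsIInputStream *aSource, nsIOutputStream *aUploadSink,
                    nsIEventTarget *aTarget)
    {
        return NS_AsyncCopy(aSource, aUploadSink, aTarget,
                            NS_ASYNCCOPY_VIA_READSEGMENTS, 4096);
    }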
Comment 2•20 years ago
oh. the wbp code sorta looks like it reads the entire file into memory?
(StartUpload takes an nsIStorageStream, which I believe comes from
MakeOutputStreamFromURI)
(what's the right component for webbrowserpersist bugs?)
Assignee: dougt → adamlock
Component: Networking: FTP → Embedding: APIs
QA Contact: benc
Comment 3•20 years ago
Probably apis or file handling....
Comment 4•20 years ago
WBP should use a temp file for large uploads. (I thought it was smart about
this... guess not... maybe that was something that was planned but fell on the
floor when NS was disbanded.)
Comment 5 (Reporter)•20 years ago
Why use a copy of the data at all? If the file is large, copying it somewhere
else on disk may create problems similar to copying it into RAM. Can it not be
streamed directly from the original file?
Comment 6•20 years ago
Sorry, I was referring to what happens when a large file is uploaded from, say,
an http:// link to an ftp:// link. In the case of a file already on the
filesystem, WBP should just read the file directly without loading it all into
memory. It should wrap the file input stream in a buffered input stream so that
the network library can process it, but that buffered input stream need not
grow very large. I'm sure something minor is causing this bug, then, because it
definitely has a codepath for reading directly from disk.
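(To illustrate the approach this comment suggests: a minimal sketch, assuming the nsNetUtil.h stream helpers of that era (NS_NewLocalFileInputStream, NS_NewBufferedInputStream); the helper function here is hypothetical, not the actual nsWebBrowserPersist code.)

    #include "nsNetUtil.h"    // NS_NewLocalFileInputStream, NS_NewBufferedInputStream
    #include "nsCOMPtr.h"
    #include "nsIFile.h"
    #include "nsIInputStream.h"

    // Hypothetical helper: open the local file and wrap it in a small
    // buffered stream so necko reads it in chunks; memory use stays
    // bounded by the buffer size, not the file size.
    static nsresult
    MakeUploadStreamFromFile(nsIFile *aFile, nsIInputStream **aResult)
    {
        nsCOMPtr<nsIInputStream> fileStream;
        nsresult rv = NS_NewLocalFileInputStream(getter_AddRefs(fileStream), aFile);
        if (NS_FAILED(rv))
            return rv;

        // 8 KB buffer; the buffered stream never needs to grow beyond this.
        return NS_NewBufferedInputStream(aResult, fileStream, 8192);
    }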
Comment 7•20 years ago
hm... I'm unable to find anything creating a file input stream or a buffered
stream in nsWebBrowserPersist.cpp... where is that codepath?
Comment 8•20 years ago
Yes, you are correct. I was imagining things. The WBP code should look at the
source URI and try to QI it to nsIFileURL. If the QI succeeds, it should get
the nsIFile from it and upload that directly, instead of "downloading" the URI
into a memory buffer as is done currently.
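(A minimal sketch of the QI check this comment describes, assuming the standard nsIFileURL/nsIFile interfaces; the surrounding function is hypothetical, not the shipped patch.)

    #include "nsCOMPtr.h"
    #include "nsIURI.h"
    #include "nsIFileURL.h"
    #include "nsIFile.h"

    // Hypothetical check: if the source URI is really a local file, hand back
    // its nsIFile so the caller can upload from disk instead of from a memory
    // buffer; otherwise report that no local file is available.
    static nsresult
    GetLocalFileFromURI(nsIURI *aSourceURI, nsIFile **aResult)
    {
        *aResult = nsnull;
        nsCOMPtr<nsIFileURL> fileURL = do_QueryInterface(aSourceURI);
        if (!fileURL)
            return NS_ERROR_NOT_AVAILABLE;   // not a file:// URI; fall back
        return fileURL->GetFile(aResult);    // returns an addrefed nsIFile
    }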
Updated•16 years ago
QA Contact: apis
Comment 10 (Assignee)•15 years ago
I don't know if this is the right idea, or whether we should try to create a buffered input stream for all non-file uploads.
Updated•15 years ago
Attachment #432130 - Flags: review?(cbiesinger) → review+
Comment 11 (Assignee)•15 years ago
Pushed changeset 41b1df0b2262 to mozilla-central.
Sadly I mistyped the bug number as 239372...
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Updated•6 years ago
Product: Core → Core Graveyard