Closed Bug 91795 Opened 23 years ago Closed 22 years ago

large downloads causes mozilla to consume too much memory (results possibly in an kill because of out of memory)

Categories

(Core :: Networking: Cache, defect, P1)

x86
Linux
defect

Tracking

()

VERIFIED FIXED
mozilla1.0

People

(Reporter: vb, Assigned: gordon)

References

(Blocks 1 open bug)

Details

(Keywords: topembed+, Whiteboard: [ADT1 RTM] ready to go? [driver:shaver])

Attachments

(3 files)

When downloading really large files (>400 MB), Mozilla's memory consumption
grows and grows, often resulting in an out-of-memory condition and thus
in Mozilla being killed. (My computer has 256 MB RAM and 350 MB of swap space.)

This is Mozilla's fault, because after a crash (or exit) the swap and memory
usage is back to normal.

And Opera handles it fine.

The problem exists in Netscape 4.7, too.
->necko
Assignee: asa → neeti
Component: Browser-General → Networking
QA Contact: doronr → benc
Vincenz, could you take a look at bug 87249 and see if that bug is the same as
yours?

They look very similar...
Answer to Christopher Aillon:
No, I think these are two different bugs. The memory usage starts at
about 10%, but I observed it increasing until there is no more memory,
resulting in continuous swapping and a kill, regardless of whether I
use a proxy or not (my disk cache is set to 0).

I have the same problem with Netscape 4.7x, so this should not be
a GUI-related bug, as 87249 is supposed to be.

I tried to download the 9i database from oracle.
gagan+neeti: can we triage this now?
*** Bug 103271 has been marked as a duplicate of this bug. ***
marking NEW (based on the Dupe)
Status: UNCONFIRMED → NEW
Ever confirmed: true
My disk cache was set to 0 as well when I reported the dupe. This is a fairly
severe problem, especially for low-memory systems. I wasted four hours last night
getting 80% through a download of StarOffice 6.0 beta, at which point my system
was thrashing so badly there wasn't any CPU available to process network traffic.
The download was being written to both /tmp/foo and the swap. Not pretty.
Severity: normal → major
darin
Assignee: neeti → darin
cc'ing gordon.
gordon:

this is another example of why we want to limit the size of the objects we allow
in the memory cache. The solution is twofold: 1) HTTP needs to look at the
Content-Length header to determine if it should even bother caching, and 2) the
cache needs to start refusing to buffer any more data once a limit is reached.

-> 0.9.8 for the http changes.
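The two-part fix described above can be sketched as follows. This is an illustrative sketch only: the function names and the 5 MB cap are assumptions for the example, not the actual Necko identifiers or values.

```cpp
#include <cstdint>

// Hypothetical cap on how large a single entry the cache will buffer.
const int64_t kMaxCacheEntrySize = 5 * 1024 * 1024;

// Part 1: HTTP consults the Content-Length header up front and skips
// caching entirely when the response is already known to be too large.
bool ShouldCacheResponse(int64_t contentLength) {
    // A negative value means the server sent no Content-Length; the size
    // is unknown, so we cannot reject up front (part 2 handles that case).
    if (contentLength < 0)
        return true;
    return contentLength <= kMaxCacheEntrySize;
}

// Part 2: the cache tracks how much it has buffered for an entry and
// refuses further writes once accepting them would exceed the cap.
bool CacheAcceptsWrite(int64_t bytesBufferedSoFar, int64_t incoming) {
    return bytesBufferedSoFar + incoming <= kMaxCacheEntrySize;
}
```

Part 1 avoids wasted work for responses that declare their size; part 2 is the backstop for chunked or unsized responses, which is why both halves are needed.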
Status: NEW → ASSIGNED
Priority: -- → P2
Target Milestone: --- → mozilla0.9.8
gordon: we actually have the same problem with the disk cache... imagine a user
downloading an iso :(
So, do you think the cache stream wrapper should fail when attempting to write
data that would go over some limit?
yup... I think that is what we need to do. I recall once thinking otherwise,
but now I'm sold on the idea.
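The "stream wrapper that fails over-limit writes" idea might look something like the sketch below. The class name is hypothetical, and the simple `nsresult` typedef and `NS_ERROR_OUT_OF_MEMORY` constant stand in for the real XPCOM definitions.

```cpp
#include <cstdint>

// Stand-ins for the XPCOM nsresult machinery (illustrative only).
typedef uint32_t nsresult;
const nsresult NS_OK = 0;
const nsresult NS_ERROR_OUT_OF_MEMORY = 0x8007000e;

// A cache output-stream wrapper that refuses any write which would push
// the entry past the cache capacity. The download itself is unaffected;
// only the cached copy is abandoned.
class LimitedCacheStream {
public:
    explicit LimitedCacheStream(int64_t capacity)
        : mCapacity(capacity), mWritten(0) {}

    nsresult Write(const char* /*buf*/, int64_t count) {
        if (mWritten + count > mCapacity)
            return NS_ERROR_OUT_OF_MEMORY;  // caller stops caching this entry
        mWritten += count;
        return NS_OK;
    }

private:
    int64_t mCapacity;  // maximum bytes the cache will hold for this entry
    int64_t mWritten;   // bytes accepted so far
};
```

The key design point is that the failure is reported to the caching layer, not to the consumer of the download, so a 650 MB ISO still arrives on disk even though the cache gives up on it.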
  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 7981 poelzi    20   0 95776  85M 16300 R    69.3 34.2  15:17 mozilla-bin

and I just saved a 70 MB file :)
bumping up priority on this one.
Severity: major → critical
Priority: P2 → P1
Keywords: mozilla0.9.8
this patch makes nsInputStreamTee drop its mSink when it gets an error of
NS_BASE_STREAM_CLOSED. I haven't tested this patch very much, but it should be
correct.

It also fixes nsInputStreamTee::Close() to allow multiple calls (part of the
requirements of nsIInputStream::Close()).
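The behavior the patch comment describes can be modeled in miniature as below. This is a simplified illustration, not the real nsInputStreamTee; the `Sink` struct and the simple `nsresult` constants are assumptions for the example.

```cpp
#include <cstdint>

// Stand-ins for the real nsresult codes (illustrative only).
typedef uint32_t nsresult;
const nsresult NS_OK = 0;
const nsresult NS_BASE_STREAM_CLOSED = 0x80470002;

// Minimal stand-in for the cache-side sink stream.
struct Sink {
    bool closed = false;
    nsresult Write(int64_t /*count*/) {
        return closed ? NS_BASE_STREAM_CLOSED : NS_OK;
    }
};

class StreamTee {
public:
    explicit StreamTee(Sink* sink) : mSink(sink) {}

    // Read from the source, teeing a copy into the sink. If the sink
    // reports NS_BASE_STREAM_CLOSED, drop it and keep serving the source:
    // the consumer must not fail just because caching stopped.
    nsresult Read(int64_t count) {
        if (mSink && mSink->Write(count) == NS_BASE_STREAM_CLOSED)
            mSink = nullptr;  // drop the sink, keep the source alive
        return NS_OK;
    }

    // nsIInputStream::Close() permits multiple calls, so the second and
    // later calls must be harmless no-ops.
    nsresult Close() {
        mSink = nullptr;
        return NS_OK;
    }

    bool HasSink() const { return mSink != nullptr; }

private:
    Sink* mSink;
};
```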
-> gordon
Assignee: darin → gordon
Status: ASSIGNED → NEW
Target Milestone: mozilla0.9.8 → mozilla0.9.9
*** Bug 73272 has been marked as a duplicate of this bug. ***
Keywords: nsbeta1+
Target Milestone: mozilla0.9.9 → mozilla1.0
Blocks: 129923
this seems to be fixed by the landing of the download manager.

wfm on linux 2002031721
I'm still seeing this on today's nightly (2002032503) on Windows ME. While
downloading a 28 MB file, I can see my free memory decreasing steadily, even
when I have downloaded past 8 MB, which is the size of my memory cache.
Whiteboard: [adt1]
ADT3 per ADT triage, unless someone can tell us that this affects a large number
of users.
Whiteboard: [adt1] → [adt3]
Sorry for the spam, but this will affect any and all users who attempt to
download large files with Mozilla (ISO images for example).

Anyone who downloads an ISO will start the download, notice it's going well, and
leave it. It seems to work until the user runs out of RAM, but a few tens of
minutes later (or a few hours later) his computer will crash or will start
thrashing so badly as to be unusable.

The user will have to reboot or cancel the download and *start again* with
another program. This does not make for a good user experience.
If anyone could update the URL field with a pointer to a really big (400-500
MB+) file, that would be great.
Adding 664338 kbyte URL. Crashy crashy.
*** Bug 131439 has been marked as a duplicate of this bug. ***
I was just downloading 5 ISOs at once, each over 300 MB and most over 600 MB.
Just as others have said: you think it is going fine, you walk away, and you come
back to a computer thrashing itself. I have 768 MB of real memory and 1.5 GB of
swap, and Mozilla had gotten up to 84% memory usage before I killed it. This is
using Mozilla 0.9.9 (4-8-2002) from Red Hat skipjack2 beta. I think this bug
should definitely be fixed before 1.0 comes out.
Changing nsbeta1+ [adt3] bugs to nsbeta1- on behalf of the adt.  If you have any
questions about this, please email adt@netscape.com.  You can search for
"changing adt3 bugs" to quickly find and delete these bug mails.
Keywords: nsbeta1-
Changing nsbeta1+ [adt3] bugs to nsbeta1- on behalf of the adt.  If you have any
questions about this, please email adt@netscape.com.  You can search for
"changing adt3 bugs" to quickly find and delete these bug mails.
Keywords: nsbeta1+
*** Bug 142669 has been marked as a duplicate of this bug. ***
*** Bug 142842 has been marked as a duplicate of this bug. ***
Confirmed with 2002050508 on Win2k.  Downloaded 2 Red Hat v7.3 ISOs via HTTP and
ended up with 245 MB real memory and 1.5 GB of swap consumed.  I left it running
for an hour or so and it never returned the memory.

Using HTTP is necessary with the Red Hat Network, as priority downloads of ISOs
are only offered via HTTP.

Nasty leak for common functionality.

In other bugs marked as duplicates, this bug was also reported on Windows 2000,
so the OS field should be changed to All.

Another thing to note is that one reporter saw it happen while downloading
700 MB over HTTPS.
Setting OS to all as per comments and personal experience (happens to me on
WinME as well).
Attachment #83119 - Flags: superreview+
Comment on attachment 83119 [details] [diff] [review]
proposed patch for disk/memory cache devices to abort when entry size exceeds cache capacity

sr=darin (looks good)
Comment on attachment 83119 [details] [diff] [review]
proposed patch for disk/memory cache devices to abort when entry size exceeds cache capacity

r=dougt
Attachment #83119 - Flags: review+
trying to get this patch some attention...
Keywords: mozilla1.0, topembed
Whiteboard: [adt3] → [ADT1 RTM]
I hope that the reasons in comment 22, comment 26, and comment 31 sway ADT into
considering this bug for 1.0. It could make mozilla look pretty bad if it stays in.
Summary: large downloads causes mozilla to consume too much memory (results possibly in an kill because of ou of memory) → large downloads causes mozilla to consume too much memory (results possibly in an kill because of out of memory)
-> cache
Component: Networking → Networking: Cache
QA Contact: benc → tever
patch to cache checked into trunk.  nsInputStreamTee.cpp patch checked in
previously as part of another bug.

I'll leave this open for now, until we decide if we want to move these to any
branches.
tever/benc/other, can you shake on this today?  if we think this is ready for
the branch we should get it in tonight or tomorrow.
Whiteboard: [ADT1 RTM] → [ADT1 RTM] ready to go?
I spoke to tever earlier in the day. He said he would test.
If the tests work out, this needs to be checked in weds, or
thursday night at the latest if we want to get it in 1.0.
Yes, I've been looking at it. Unfortunately, I can't reproduce this on an older
build so I can't really confirm that it's fixed on the trunk.  I also had a few
other people try and they couldn't repro this either.  So it's possible that I
don't have the proper conditions set up yet.  Not sure. 

Here is what we tried:
1) Either disabled disk cache or set the disk cache size to 0
2) hit ftp://redhat.newaol.com/redhat/redhat-7.3-en/iso/i386/ and downloaded one
of the large .iso files.
3) Using 'top', monitored the process size.  (On Windows, used Task Manager.)

I used an older trunk (5/6) build and the Mozilla RC2 on Linux for testing.  The
size remained pretty stable through all my testing and I was able to
successfully download the file.

Could someone who can reproduce this test the nightly mozilla?  Thanks.
I spoke with Gordon earlier and he suggested this fix could affect other areas
that use cache such as plugins, IMAP, and sound (I guess plugins too).  So we
might want to get some additional people looking into these areas once the fix
is confirmed.

cc'ing shrirang, gayatri
Confirmed with RC2 Win2k.  Note that Cache settings are at 4096 mem & 50000 disk
with comparison at "Every time...".

I again downloaded a RedHat v7.3 ISO via HTTPS from Red Hat's RHN service. 
After downloading core memory was at ~165MB and virtual memory at ~700MB. 
Though there is a wrinkle: I tried downloading again to verify the test result
but rather than waiting for the download to complete I canceled it 3/4 of the
way through.  While there was a mess of swapping action for 4 minutes or so the
virtual memory allocated for the download did appear to have been returned.

I'm not familiar with the Mozilla architecture/source, author of comment #43, so
forgive my ignorance but perhaps FTP isn't cached similarly to HTTP?

Mozilla 1.0 Release Candidate 2
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0rc2) Gecko/20020510
ok, thanks.  The original url listed an ftp link, I didn't catch the morphing. 
But now I can see this happening using http.  
This is working on trunk build 2002051421 (linux).  Memory consumption is not
growing out of control as it did on the 5/6 build when http downloading.
Confirm success w/build 2002051408; the outrageous memory usage on HTTP/S
downloads is not occurring.  Reiterate that it remains a problem with RC2.

Mozilla 1.0.0+
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0.0+) Gecko/20020514
If someone can test this with a branch+patch build (all 3 platforms, please
please please), it'll make my driving-life a lot easier.
Whiteboard: [ADT1 RTM] ready to go? → [ADT1 RTM] ready to go? [driver:shaver]
Keywords: topembedtopembed+
shaver: FWIW, the branch and trunk do not differ in the areas affected by the
patch to fix this bug.  I have branch builds that I can use to test this, but
it will take me some time to finish testing, plus other things are getting in
my way... if anyone else can help test on one or more of the platforms that
would be great!
how did the branch testing go?
dawn: I have not fully tested this patch yet.  I am behind on that :(
*** Bug 144253 has been marked as a duplicate of this bug. ***
Comment on attachment 84123 [details] [diff] [review]
patch for 1.0 branch

gordon's part of this patch has r=dougt and sr=darin

my part of this patch (taken from bug 143311) has r=gordon,dougt and sr=rpotts
Attachment #84123 - Flags: superreview+
Attachment #84123 - Flags: review+
Comment on attachment 84123 [details] [diff] [review]
patch for 1.0 branch

a=brendan@mozilla.org on behalf of drivers for 1.0 branch checkin.

(Who spells 1023 0x399? a hex-talking hakmem hacker after my own heart :-)

/be
Attachment #84123 - Flags: approval+
ok, fixed-on-branch
Status: NEW → RESOLVED
Closed: 22 years ago
Keywords: fixed1.0.0
Resolution: --- → FIXED
If 0x399 is meant to be 1023, instead of 0x3ff, this is not fixed.
No longer blocks: 143200
Yes, 0x399 is 921, not 1023. It should be 0x3ff. gordon, darin, brendan, what do
we do? I can submit a new patch for the trunk if you like...
gordon has this fixed along with another patch that he's working on right now.
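For context on why the transposed digits matter: a size-rounding mask must be a power of two minus one, so that all low bits are set. 0x3ff (1023) has its ten low bits set; 0x399 (921) does not, so using it as a mask silently corrupts the rounding. A small illustration follows; the helper name and the 1 KiB block size are assumptions for the example, not taken from the patch.

```cpp
#include <cstdint>

// Round a byte count up to the next 1024-byte block using the classic
// power-of-two mask trick: add (block - 1), then clear the low bits.
// This only works because 0x3ff == 0b1111111111; a value like 0x399
// (0b1110011001) would leave stray low bits and give wrong results.
uint32_t RoundUpTo1k(uint32_t bytes) {
    return (bytes + 0x3ff) & ~uint32_t(0x3ff);
}
```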
verified branch - 05/23 builds - winNT4, Linux rh6, mac osX
Status: RESOLVED → VERIFIED
Keywords: verified1.0.0
*** Bug 143055 has been marked as a duplicate of this bug. ***
*** Bug 69027 has been marked as a duplicate of this bug. ***
forgot to remove fixed1.0.0 keyword so doing so now
Keywords: fixed1.0.0