Large downloads cause Mozilla to consume too much memory (possibly resulting in a kill because of out of memory)

VERIFIED FIXED in mozilla1.0

Status


P1
critical
VERIFIED FIXED
18 years ago
11 years ago

People

(Reporter: vb, Assigned: gordon)

Tracking

(Blocks: 1 bug, {topembed+})

Trunk
mozilla1.0
x86
Linux
topembed+
Points:
---

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [ADT1 RTM] ready to go? [driver:shaver], URL)

Attachments

(3 attachments)

(Reporter)

Description

18 years ago
When downloading really large files (>400MB), Mozilla's memory consumption
grows and grows, often resulting in an out-of-memory condition and thus in
Mozilla being killed. (My computer has 256MB RAM and 350MB swap space.)

This is Mozilla's fault, because after a crash (or exit) the swap and memory
usage are back to normal.

And Opera handles it fine.

The problem exists in Netscape 4.7, too.

Comment 1

18 years ago
->necko
Assignee: asa → neeti
Component: Browser-General → Networking
QA Contact: doronr → benc

Comment 2

18 years ago
Vincenz, could you take a look at bug 87249 and see if that bug is the same as
yours?

They look very similar...
(Reporter)

Comment 3

18 years ago
Answer to Christopher Aillon:
No, I think these are two different bugs. The memory usage is
about 10%. I observed increasing memory consumption until
there is no more memory, resulting in continuous swapping
and a kill, regardless of whether I use a proxy or not
(my disk cache is set to 0).

I have the same problem with Netscape 4.7x, so this should not be
a GUI-related bug, as 87249 is supposed to be.

I tried to download the 9i database from Oracle.

Comment 4

18 years ago
gagan+neeti: can we triage this now?

Comment 5

18 years ago
*** Bug 103271 has been marked as a duplicate of this bug. ***

Comment 6

18 years ago
Marking NEW (based on the dupe).
Status: UNCONFIRMED → NEW
Ever confirmed: true

Comment 7

18 years ago
My disk cache is set to 0 as well, from when I reported the dupe. This is a fairly
severe problem, especially for low-memory systems. I wasted four hours last night
getting 80% through a download of StarOffice 6.0 beta, at which point my system
was thrashing so badly there wasn't any CPU available to process network traffic.
The download was being written to both /tmp/foo and the swap. Not pretty.
Severity: normal → major

Comment 8

18 years ago
darin
Assignee: neeti → darin

Comment 9

18 years ago
cc'ing gordon.

Comment 10

18 years ago
gordon:

this is another example of why we want to limit the size of the objects we allow
in the memory cache.  the solution is twofold: 1) HTTP needs to look at the
Content-Length header to determine whether it is even going to bother caching, and
2) the cache needs to start refusing to buffer any more data.

-> 0.9.8 for the http changes.
Status: NEW → ASSIGNED
Priority: -- → P2
Target Milestone: --- → mozilla0.9.8
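
The twofold plan above can be sketched roughly like this. This is an illustrative sketch only: ShouldCache, AcceptWrite, and kMaxCacheEntrySize are made-up names standing in for Necko's real cache interfaces, and the 5 MB cap is an assumed value.

```cpp
#include <cstdint>

// Assumed cap on a single cache entry; the real limit comes from cache prefs.
const int64_t kMaxCacheEntrySize = 5 * 1024 * 1024;

// 1) HTTP side: skip caching when Content-Length already exceeds the cap.
//    A negative length means "unknown", so let the cache enforce the cap later.
bool ShouldCache(int64_t contentLength, int64_t maxEntrySize) {
    if (contentLength < 0)
        return true;
    return contentLength <= maxEntrySize;
}

// 2) Cache side: refuse to buffer a chunk that would push the entry past the cap.
bool AcceptWrite(int64_t entrySize, int64_t chunkSize, int64_t maxEntrySize) {
    return entrySize + chunkSize <= maxEntrySize;
}
```

With both checks in place, a 400MB ISO is never admitted to the cache, and an entry of unknown length is cut off once it outgrows the cap instead of buffering without bound.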

Comment 11

18 years ago
gordon: we actually have the same problem with the disk cache... imagine a user
downloading an iso :(
(Assignee)

Comment 12

18 years ago
So, do you think the cache stream wrapper should fail when attempting to write
data that would go over some limit?

Comment 13

18 years ago
yup... i think that is what we need to do.  i recall once thinking otherwise,
but now i'm sold on the idea.
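
A minimal sketch of what such a failing cache stream wrapper could look like. SizeLimitedWriter and the two-value nsresult enum are invented here for illustration; the real wrapper implements nsIOutputStream and would return a proper Necko error code.

```cpp
#include <cstddef>
#include <cstdint>

// Stand-in result codes; the real code would use Necko's nsresult values.
enum nsresult { NS_OK = 0, NS_ERROR_FILE_TOO_BIG = 1 };

// Hypothetical cache stream wrapper: writes fail once the entry would grow
// past the configured limit, instead of buffering without bound.
class SizeLimitedWriter {
    int64_t mLimit;
    int64_t mWritten;
public:
    explicit SizeLimitedWriter(int64_t limit) : mLimit(limit), mWritten(0) {}

    nsresult Write(const char* /*buf*/, size_t count, size_t* written) {
        if (mWritten + static_cast<int64_t>(count) > mLimit) {
            *written = 0;
            return NS_ERROR_FILE_TOO_BIG;  // caller stops caching this entry
        }
        mWritten += count;
        *written = count;
        return NS_OK;
    }

    int64_t BytesWritten() const { return mWritten; }
};
```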

Comment 14

18 years ago
  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME COMMAND
 7981 poelzi    20   0 95776  85M 16300 R    69.3 34.2  15:17 mozilla-bin

and I just saved a 70 MB file :)

Comment 15

17 years ago
bumping up priority on this one.
Severity: major → critical
Priority: P2 → P1

Updated

17 years ago
Keywords: mozilla0.9.8

Comment 16

17 years ago
this patch makes nsInputStreamTee drop its mSink when it gets an error of
NS_BASE_STREAM_CLOSED.  i haven't tested this patch very much, but it should be
correct.

it also fixes nsInputStreamTee::Close() to allow multiple calls (part of the
requirements of nsIInputStream::Close())
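
A toy model of the behavior this patch describes: a read mirrors data to a sink, and when the sink reports it is closed the tee drops it and keeps serving the reader, while Close() tolerates repeated calls. InputStreamTee here is a simplified stand-in (a std::function plays the sink), not the real XPCOM nsInputStreamTee.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <string>

class InputStreamTee {
    std::string mSource;   // stand-in for the wrapped source stream
    size_t mPos = 0;
    std::function<bool(const char*, size_t)> mSink;  // false == stream closed
    bool mClosed = false;
public:
    InputStreamTee(std::string src, std::function<bool(const char*, size_t)> sink)
        : mSource(std::move(src)), mSink(std::move(sink)) {}

    size_t Read(char* buf, size_t count) {
        size_t n = std::min(count, mSource.size() - mPos);
        std::copy_n(mSource.data() + mPos, n, buf);
        mPos += n;
        if (mSink && n > 0 && !mSink(buf, n))
            mSink = nullptr;   // drop the sink instead of failing the read
        return n;
    }

    void Close() { mClosed = true; }   // safe to call more than once
    bool HasSink() const { return static_cast<bool>(mSink); }
};
```

The point is that the download (the reader) keeps making progress even after the cache-side sink bails out, which is exactly what a capped cache entry needs.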

Comment 17

17 years ago
-> gordon
Assignee: darin → gordon
Status: ASSIGNED → NEW
(Assignee)

Updated

17 years ago
Target Milestone: mozilla0.9.8 → mozilla0.9.9

Comment 18

17 years ago
*** Bug 73272 has been marked as a duplicate of this bug. ***

Updated

17 years ago
Keywords: nsbeta1+
(Assignee)

Updated

17 years ago
Target Milestone: mozilla0.9.9 → mozilla1.0

Updated

17 years ago
Blocks: 129923

Comment 19

17 years ago
this seems to be fixed by the landing of the download manager.

wfm on linux 2002031721

Comment 20

17 years ago
I'm still seeing this on today's nightly (2002032503) on Windows ME. While
downloading a 28 MB file, I can see my free memory decreasing steadily, even
when I have downloaded past 8 MB, which is the size of my memory cache.

Updated

17 years ago
Whiteboard: [adt1]

Comment 21

17 years ago
ADT3 per ADT triage, unless someone can tell us that this affects a large number
of users.
Whiteboard: [adt1] → [adt3]

Comment 22

17 years ago
Sorry for the spam, but this will affect any and all users who attempt to
download large files with Mozilla (ISO images for example).

Anyone who downloads an ISO will start the download, notice it's going well, and
leave it. It seems to work until the user runs out of RAM, but a few tens of
minutes later (or a few hours later) his computer will crash or will start
thrashing so badly as to be unusable.

The user will have to reboot or cancel the download and *start again* with
another program. This does not make for a good user experience.

Comment 23

17 years ago
If anyone could update the URL field with a pointer to a really big (400-500
MB+) file, that would be great.

Updated

17 years ago
Keywords: mozilla0.9.8

Comment 24

17 years ago
Adding 664338 kbyte URL. Crashy crashy.

Comment 25

17 years ago
*** Bug 131439 has been marked as a duplicate of this bug. ***

Comment 26

17 years ago
I was just downloading 5 ISOs at once. Each was over 300MB and most over 600MB.
Just as others have said, you think it is going fine, you walk away, and you come
back to a computer thrashing itself. I have 768MB of real memory and 1.5GB of
swap, and Mozilla had gotten up to 84% memory usage before I killed it. This is
using Mozilla 0.9.9 (4-8-2002) from Red Hat skipjack2 beta. I think this bug
should definitely be fixed before 1.0 comes out.

Comment 27

17 years ago
Changing nsbeta1+ [adt3] bugs to nsbeta1- on behalf of the adt.  If you have any
questions about this, please email adt@netscape.com.  You can search for
"changing adt3 bugs" to quickly find and delete these bug mails.
Keywords: nsbeta1-

Comment 28

17 years ago
Changing nsbeta1+ [adt3] bugs to nsbeta1- on behalf of the adt.  If you have any
questions about this, please email adt@netscape.com.  You can search for
"changing adt3 bugs" to quickly find and delete these bug mails.
Keywords: nsbeta1+

Comment 29

17 years ago
*** Bug 142669 has been marked as a duplicate of this bug. ***

Comment 30

17 years ago
*** Bug 142842 has been marked as a duplicate of this bug. ***

Comment 31

17 years ago
Confirmed with 2002050508 on Win2k.  Downloaded 2 RedHat v7.3 ISOs via HTTP and
ended up with 245MB real memory and 1.5GB of swap consumed.  I left it running
for an hour or so and it never returned the memory.

Using HTTP is necessary with the RedHat Network, as priority downloads of ISOs
are only offered via HTTP.

Nasty leak for common functionality.

Comment 32

17 years ago
In other bugs marked as duplicates, this bug was reported for Windows 2000. This
bug's OS should be changed to All.

Another thing to note is that one reporter saw it happen while downloading a
700MB file over HTTPS.

Comment 33

17 years ago
Setting OS to all as per comments and personal experience (happens to me on
WinME as well).

Updated

17 years ago
Attachment #83119 - Flags: superreview+

Comment 35

17 years ago
Comment on attachment 83119 [details] [diff] [review]
proposed patch for disk/memory cache devices to abort when entry size exceeds cache capacity

sr=darin (looks good)

Comment 36

17 years ago
Comment on attachment 83119 [details] [diff] [review]
proposed patch for disk/memory cache devices to abort when entry size exceeds cache capacity

r=dougt
Attachment #83119 - Flags: review+

Comment 37

17 years ago
trying to get this patch some attention...
Keywords: mozilla1.0, topembed
Whiteboard: [adt3] → [ADT1 RTM]

Comment 38

17 years ago
I hope that the reasons in comment 22, comment 26, and comment 31 sway ADT into
considering this bug for 1.0. It could make mozilla look pretty bad if it stays in.

Updated

17 years ago
Summary: large downloads causes mozilla to consume too much memory (results possibly in an kill because of ou of memory) → large downloads causes mozilla to consume too much memory (results possibly in an kill because of out of memory)

Comment 39

17 years ago
-> cache
Component: Networking → Networking: Cache
QA Contact: benc → tever
(Assignee)

Comment 40

17 years ago
patch to cache checked into trunk.  nsInputStreamTee.cpp patch checked in
previously as part of another bug.

I'll leave this open for now, until we decide if we want to move these to any
branches.

Comment 41

17 years ago
tever/benc/other, can you shake on this today?  if we think this is ready for
the branch we should get it in tonight or tomorrow.
Whiteboard: [ADT1 RTM] → [ADT1 RTM] ready to go?

Comment 42

17 years ago
I spoke to tever earlier in the day. He said he would test.
If the tests work out, this needs to be checked in weds, or
thursday night at the latest if we want to get it in 1.0.

Comment 43

17 years ago
Yes, I've been looking at it. Unfortunately, I can't reproduce this on an older
build so I can't really confirm that it's fixed on the trunk.  I also had a few
other people try and they couldn't repro this either.  So it's possible that I
don't have the proper conditions set up yet.  Not sure. 

Here is what we tried:
1) Either disabled disk cache or set the disk cache size to 0
2) hit ftp://redhat.newaol.com/redhat/redhat-7.3-en/iso/i386/ and downloaded one
of the large .iso files.
3) Using 'top', monitored the process size.  (On Windows, used Task Manager.)

I used an older trunk (5/6) build and the Mozilla RC2 on Linux for testing.  The
size remained pretty stable through all my testing and I was able to
successfully download the file.

Could someone who can reproduce this test the nightly mozilla?  Thanks.

Comment 44

17 years ago
I spoke with Gordon earlier and he suggested this fix could affect other areas
that use cache such as plugins, IMAP, and sound (I guess plugins too).  So we
might want to get some additional people looking into these areas once the fix
is confirmed.

cc'ing shrirang, gayatri

Comment 45

17 years ago
Confirmed with RC2 Win2k.  Note that Cache settings are at 4096 mem & 50000 disk
with comparison at "Every time...".

I again downloaded a RedHat v7.3 ISO via HTTPS from Red Hat's RHN service. 
After downloading core memory was at ~165MB and virtual memory at ~700MB. 
Though there is a wrinkle: I tried downloading again to verify the test result
but rather than waiting for the download to complete I canceled it 3/4 of the
way through.  While there was a mess of swapping action for 4 minutes or so the
virtual memory allocated for the download did appear to have been returned.

I'm not familiar with the Mozilla architecture/source, author of comment #43, so
forgive my ignorance, but perhaps FTP isn't cached the same way HTTP is?

Mozilla 1.0 Release Candidate 2
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0rc2) Gecko/20020510

Comment 46

17 years ago
ok, thanks.  The original url listed an ftp link, I didn't catch the morphing. 
But now I can see this happening using http.  

Comment 47

17 years ago
This is working on trunk build 2002051421 (linux).  Memory consumption is not
growing out of control as it did on the 5/6 build when http downloading.

Comment 48

17 years ago
Confirm success w/build 2002051408; the outrageous memory usage on HTTP/S
downloads is not occurring.  Reiterate that it remains a problem with RC2.

Mozilla 1.0.0+
Mozilla/5.0 (Windows; U; Windows NT 5.0; en-US; rv:1.0.0+) Gecko/20020514

Comment 49

17 years ago
If someone can test this with a branch+patch build (all 3 platforms, please
please please), it'll make my driving-life a lot easier.
Whiteboard: [ADT1 RTM] ready to go? → [ADT1 RTM] ready to go? [driver:shaver]

Updated

17 years ago
Keywords: topembed → topembed+

Comment 50

17 years ago
shaver: fwiw, the branch and trunk do not differ in the areas affected by the
patch to fix this bug.  i have branch builds that i can use to test this... but
it will take me some time to finish testing, plus other things are getting in
my way... if anyone else can help test on one or more of the platforms, that
would be great!

Comment 51

17 years ago
how did the branch testing go?

Comment 52

17 years ago
dawn: i have not fully tested this patch yet.  i am behind on that :(

Comment 53

17 years ago
*** Bug 144253 has been marked as a duplicate of this bug. ***

Comment 55

17 years ago
Comment on attachment 84123 [details] [diff] [review]
patch for 1.0 branch

gordon's part of this patch has r=dougt and sr=darin

my part of this patch (taken from bug 143311) has r=gordon,dougt and sr=rpotts
Attachment #84123 - Flags: superreview+
Attachment #84123 - Flags: review+

Comment 56

17 years ago
Comment on attachment 84123 [details] [diff] [review]
patch for 1.0 branch

a=brendan@mozilla.org on behalf of drivers for 1.0 branch checkin.

(Who spells 1023 0x399? a hex-talking hakmem hacker after my own heart :-)

/be
Attachment #84123 - Flags: approval+

Comment 57

17 years ago
ok, fixed-on-branch
Status: NEW → RESOLVED
Last Resolved: 17 years ago
Keywords: fixed1.0.0
Resolution: --- → FIXED

Comment 58

17 years ago
If 0x399 is meant to be 1023, it should have been 0x3ff, so this is not fixed.

Updated

17 years ago
No longer blocks: 143200

Comment 59

17 years ago
Yes, 0x399 is 921, not 1023. It should be 0x3ff. gordon, darin, brendan, what do
we do? I can submit a new patch for the trunk if you like...
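
For reference, the arithmetic at issue, as a quick illustration (kTypo, kMask, and IsLowBitsMask are names invented here):

```cpp
// 0x399 looks like "1023" at a glance but is actually 921; the intended
// low-ten-bits mask is 1023, which is 0x3FF (i.e. 2^10 - 1).
constexpr int kTypo = 0x399;   // 3*256 + 9*16 + 9 = 921
constexpr int kMask = 0x3FF;   // 1023

// A valid all-ones mask m satisfies (m & (m + 1)) == 0.
constexpr bool IsLowBitsMask(int m) { return (m & (m + 1)) == 0; }
```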

Comment 60

17 years ago
gordon has this fixed along with another patch that he's working on right now.

Comment 61

17 years ago
verified branch - 05/23 builds - winNT4, Linux rh6, mac osX
Status: RESOLVED → VERIFIED
Keywords: verified1.0.0

Comment 62

17 years ago
*** Bug 143055 has been marked as a duplicate of this bug. ***

Comment 63

17 years ago
*** Bug 69027 has been marked as a duplicate of this bug. ***

Comment 64

17 years ago
forgot to remove fixed1.0.0 keyword so doing so now
Keywords: fixed1.0.0