Closed Bug 97012 Opened 23 years ago Closed 23 years ago

Some webpages make Mozilla unstable and cause memory leaks and high CPU use

Categories

(Core :: Layout, defect)

Hardware: x86
OS: Linux
Type: defect
Priority: Not set
Severity: critical

Tracking

()

RESOLVED WORKSFORME

People

(Reporter: fleona, Assigned: n.nethercote)

References

()

Details

(Keywords: memory-leak)

Attachments

(2 obsolete files)

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.3+) Gecko/20010824
BuildID:    2001082408

Go to the above URL.
On the top right there is a box that says xchatbox 1.1; click on "Chatta anche tu", which means roughly "you chat too" (i.e. "chat now").

After clicking on that, Mozilla becomes unstable: new windows do not open, the stop button disappears, and memory and CPU use sometimes rise.

The only way to recover is to close Mozilla and reopen it.

Other webpages that trigger this include
http://ve.loquesea.com/linkeados/

When clicking on some people in "gente online" ("people online") and viewing their profiles, this sometimes happens there too.


Reproducible: Always
Steps to Reproduce:
As described above

Actual Results:  Unstable: new windows do not open, links do not work, the stop
button on the toolbar disappears, and CPU and memory use are high.
Blocks: 92580
I have seen this bug in the last week, but
I can't reproduce it with the current Linux build (2001091121). Please retest.
The nukeitalia.com problem is fixed.
The ve.loquesea.com problem is not: I still get unstable behavior, and the stop
button disappears when I browse lots of people's profiles ("linkeados").
This is on the 09/13 trunk build.
Reporter: Is this still a problem with a more recent build?
Yes, it is, and it may have gotten worse.
At http://ve.loquesea.com/linkeados/ there are lots of people's profiles. After
viewing many of them or interacting with the pages, the problem appears. For
example, on any profile you will see "enviame mensajes" ("send me messages"),
which should open a pager to send a message. Half of the time that page just
does not open, and the buttons in the toolbar disappear. After that, Mozilla
cannot browse any pages or open new windows.
This is kind of hard to reproduce, but very serious.
Try out http://ve.loquesea.com/linkeados/profile/index.php?user=1053243 and
click on the yellow "enviame mensajes" (note: you may need to subscribe before
doing that; registration is free). That sometimes makes the bug appear. Using
the Modern theme and build 2001100408.
No longer blocks: 92580
I don't see this problem at all. I've been playing around with the page in
Windows and Linux 0.9.7+ builds for a while without incident.
Assignee: asa → attinasi
Component: Browser-General → Layout
QA Contact: doronr → petersen
Thankfully, this bug WFM with those URLs since the 0.9.6+ builds.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → WORKSFORME
This patch makes HashKnownLength() and HashUntilZero() only hash the first 1000
chars. This is about a 1.01--1.02x speed-up for SunSpider on my desktop machine.
regexp-dna is typically 1.05--1.08x faster, and string-tagcloud and
string-unpack-code are typically 1.02--1.04x faster. Unsurprisingly, these are
the tests with very long strings.

If I time just the parsing of SunSpider (parsemark has the whole suite
concatenated into a single file), I get about a 1.10--1.12x speed-up.

If you're wondering how I chose 1000 chars, here are instruction counts for
parsing SunSpider with different limits.

- no limit  74.461
- 100000    73.439
- 10000     71.960
- 1000      71.622
- 100       71.539
- 10        71.975
- 8,9       72.384
- 6,7       73.498 
- 4,5       76.603
- 2,3       95.338
- 0,1      604.685

100 was the best result, but 1000 is almost as good and will be more robust
against bad cases. The instruction count reduction going from no limit to 1000
is 1.04x; I assume the remainder of the 1.10--1.12x speed-up I saw is due to
fewer memory accesses. (During parsing, we tokenize the string and then
immediately atomize it; two consecutive iterations over a huge string will
trash the caches.) ARM devices typically have slower memory subsystems, so
improvements there might be bigger.
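
To make the idea concrete, here is a minimal sketch of a prefix-capped string hash. This is not the actual mfbt/HashFunctions.h code; the function names, the mixing step, and where the cap is applied are assumptions made purely for illustration.

#include <algorithm>
#include <cstddef>
#include <cstdint>

// Only the first kMaxCharsToHash characters feed the hash, so hashing cost
// is O(min(length, cap)) instead of O(length) for very long strings.
static const size_t kMaxCharsToHash = 1000;

// Hypothetical mixing step (rotate-and-multiply with a golden-ratio
// constant); the real code may mix differently.
static uint32_t addToHash(uint32_t hash, uint32_t value) {
  const uint32_t kGoldenRatio = 0x9E3779B9U;
  return kGoldenRatio * (((hash << 5) | (hash >> 27)) ^ value);
}

// Hypothetical capped hash: identical to an uncapped loop except that it
// stops after kMaxCharsToHash characters.
uint32_t hashStringCapped(const char* chars, size_t length) {
  uint32_t hash = 0;
  const size_t limit = std::min(length, kMaxCharsToHash);
  for (size_t i = 0; i < limit; i++) {
    hash = addToHash(hash, static_cast<unsigned char>(chars[i]));
  }
  return hash;
}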

Speaking of bad cases: for this to hurt performance, we'd need to be hashing
*many* strings that are identical for the first 1000 chars but differ after
that, and yet aren't much longer than 1000 chars. That seems *extremely*
unlikely. It's encouraging that I had to reduce the limit to 4 before the
results got worse than the baseline.
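
To spell out why a capped hash can only cost time, never correctness: two strings that agree on their first 1000 chars simply collide, and the table falls back to full equality comparison. A minimal sketch using std::unordered_map with a hypothetical capped hasher (not Mozilla's actual hash table), just to show the behavior:

#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical hasher that ignores everything past the first 1000 chars.
struct CappedHash {
  size_t operator()(const std::string& s) const {
    size_t hash = 0;
    const size_t limit = s.size() < 1000 ? s.size() : 1000;
    for (size_t i = 0; i < limit; i++) {
      hash = hash * 33 + static_cast<unsigned char>(s[i]);
    }
    return hash;
  }
};

int main() {
  std::unordered_map<std::string, int, CappedHash> table;
  std::string a(2000, 'x');
  std::string b(2000, 'x');
  b[1500] = 'y';   // differs only after the 1000-char cap
  table[a] = 1;
  table[b] = 2;    // same hash as 'a', but operator== keeps the keys distinct
  assert(table.size() == 2);
  assert(table[a] == 1 && table[b] == 2);
  return 0;
}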

My only remaining concern is this: these functions are only used for hash
tables, right? I.e. nobody's using these for any kind of cryptographic/security
purposes? I sure hope not. If this is a concern, I could keep the existing
HashString() functions and instead add some HashStringLimited() functions, and
replace existing HashString() calls where appropriate.
Attachment #8373104 - Flags: review?(jwalden+bmo)
Assignee: attinasi → n.nethercote
This patch makes HashKnownLength() and HashUntilZero() only hash the first 1000
chars. This is about a 1.01x speed-up for SunSpider on my desktop machine.
regexp-dna is typically 1.05--1.08x faster, and string-tagcloud and
string-unpack-code are typically 1.02--1.04x faster. Unsurprisingly, these are
the tests with very long strings.

If I time just the parsing of SunSpider (parsemark has the whole suite
concatenated into a single file), I get about a 1.10--1.12x speed-up.

If you're wondering how I chose 1000 chars, here are instruction counts for
parsing SunSpider with different limits.

- no limit  74.461
- 100000    73.439
- 10000     71.960
- 1000      71.622
- 100       71.539
- 10        71.975
- 8,9       72.384
- 6,7       73.498 
- 4,5       76.603
- 2,3       95.338
- 0,1      604.685

100 was the best result, but 1000 is almost as good and will be more robust
against bad cases. The instruction count reduction going from no limit to 1000
is 1.04x; I assume the remainder of the 1.10--1.12x speed-up I saw is due to
fewer memory accesses. (During parsing, we tokenize the string and then
immediately atomize it; two consecutive iterations over a huge string will
trash the caches.) ARM devices typically have slower memory subsystems, so
improvements there might be bigger.

Speaking of bad cases: for this to hurt performance, we'd need to be hashing
*many* strings that are identical for the first 1000 chars but differ after
that, and yet aren't much longer than 1000 chars. That seems *extremely*
unlikely.

My only other concern is this: these functions are only used for hash tables,
right? I.e. nobody's using these for any kind of cryptographic/security
purposes? I sure hope so. If this is a concern, I could keep the existing
HashString() functions and instead add some HashStringLimited() functions, and
replace existing HashString() calls where appropriate.
Attachment #8373108 - Flags: review?(jwalden+bmo)
Attachment #8373104 - Attachment is obsolete: true
Attachment #8373104 - Flags: review?(jwalden+bmo)
Comment on attachment 8373108 [details] [diff] [review]
Don't hash past the first 1000 bytes of a string.

Sorry, I bzexported this patch to the wrong bug. It's meant to be bug 970152.
Attachment #8373108 - Attachment is obsolete: true
Attachment #8373108 - Flags: review?(jwalden+bmo)