Closed Bug 296818
Opened 19 years ago, Closed 17 years ago
Too much memory is used due to holding on to decoded image data
Categories: Core :: Graphics: ImageLib, defect, P1
Status: VERIFIED FIXED
People: Reporter: bsenf; Assigned: federico
Keywords: memory-footprint
Attachments
(2 files, 5 obsolete files)
53.75 KB, patch | vlad: review+
3.72 KB, patch | vlad: review+
User-Agent: Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.7.8) Gecko/20050512
Build Identifier: Mozilla/5.0 (X11; U; SunOS sun4u; en-US; rv:1.7.8) Gecko/20050512
I'm using an X terminal with limited resources, and I can check the allocated
memory with some hotkeys. I can see that the available memory shrinks step by
step with every image I view in the foreground, and also with the number of
cached images.
When I clear the cache, the memory allocated on my X terminal's X11 server is
freed, but as soon as the next image loads the same bad behavior begins.
--------
Old versions like Netscape 4.78, Netscape 7 (Gecko 2002xxxx), and Opera are not
affected by this bug.
--------
Reproducible: Always
Steps to Reproduce:
1. Visit this or any other webpage with many images.
2. View the images in any order.
3. You can see only one image at a time.
4. But by the 20th or 30th image, Mozilla can't create any new image, e.g. it crashes.
Actual Results:
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 53 (X_CreatePixmap)
Serial number of failed request: 118408
Current serial number in output stream: 118587
X Error of failed request: BadDrawable (invalid Pixmap or Window parameter)
Major opcode of failed request: 66 (X_PolySegment)
Resource id in failed request: 0x8001194
Serial number of failed request: 118478
Current serial number in output stream: 118591
Expected Results:
The cached images may be held in memory for performance, but the X11 pixmap has
to be released, or XDestroyImage has to be called, before the data is moved to
client memory.
Assignee: general → blizzard
Component: General → GFX: Gtk
Product: Mozilla Application Suite → Core
QA Contact: general → gtk
Version: unspecified → 1.7 Branch
Comment 1•19 years ago
Confirming... The thing is, the x11 pixmap is the _only_ place we're storing our
image data. I seem to recall some discussion about catching BadAlloc and
dragging all the image data back over to the client at that point before
retrying, but I can't recall what bug that was in....
Status: UNCONFIRMED → NEW
Ever confirmed: true
Bug 210931 is about trapping errors, but I don't think it was planning on doing this (it sounds like a good idea, but first we have to deal with certain other problems...)
I have the same bug while using Firefox on an X terminal with 64 MB of RAM under Ubuntu Dapper. When I load several pictures, the X terminal gets slower and slower until it freezes. The mouse does not respond, and neither does the keyboard (except SysRq to reboot).
While attempting to reproduce the bug with gnome-system-monitor open, the amount of X memory used by Firefox increases each time a picture is loaded, until it reaches 60.5 MB, the amount of memory use that makes the X terminal freeze.
You can reproduce this bug by loading a weather website at http://www.meteociel.fr/modeles/gfs/vent-moyen/3h.htm and rolling the mouse over the left side of the weather pictures under the word "Échéance" (a JavaScript script loads a picture each time you roll over a link, and eats about 3 MB of X memory on our X terminal).
I tried to reproduce this bug under Epiphany (the "epiphany-browser" Debian package), which is based on Firefox >= 1.5 (according to the "aptitude show" command), and the result is exactly the same: a freeze.
Comment 5•18 years ago
Bug 259672 seems related to this one; Firefox seems to abuse the X memory...
This bug probably applies to all OSes with X11, not just Solaris... As there isn't an "any OS with X11" option, I would suggest changing it to All; at least that wouldn't exclude all the other X11-enabled OSes.
Assignee
Comment 6•17 years ago
I'm working on a patch to fix this. You can see the initial version of the patch here:
http://primates.ximian.com/~federico/news-2007-09.html#firefox-memory-1
The patch makes Firefox release the pixmaps used for JPEG images after a few seconds. It seems to work pretty well on my machine; my next step is to add support for this to the PNG and GIF decoders, and to take better measurements of how much memory we can save.
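The approach described here, rearming a discard timeout each time an image is used and freeing the decoded data once the deadline passes, can be sketched abstractly. This is a minimal illustration only; the struct and function names are assumptions, not the patch's actual API:

```cpp
#include <cstdint>

// Abstract sketch of the discard-timer approach described above: each
// decoded image keeps a deadline that is pushed back every time the image
// is drawn, and a periodic sweep frees decoded data past its deadline.
// All names here are illustrative assumptions.
struct DiscardableImage {
    uint64_t discardDeadlineMs;
    bool     decodedDataPresent;
};

static void TouchImage(DiscardableImage& img, uint64_t nowMs,
                       uint64_t timeoutMs)
{
    img.discardDeadlineMs = nowMs + timeoutMs;  // rearm on every use
}

static void SweepImage(DiscardableImage& img, uint64_t nowMs)
{
    if (img.decodedDataPresent && nowMs >= img.discardDeadlineMs)
        img.decodedDataPresent = false;  // drop decoded pixels; the
                                         // compressed data can be redecoded
}
```

With a 5-second timeout, an image painted at t=1s survives a sweep at t=3s but is discarded by a sweep at t=6s or later.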
Federico, could you post your latest patch from http://primates.ximian.com/~federico/news-2007-09.html#06 here?
Assignee
Comment 8•17 years ago
Updated patch.
The PNG loader works fine with this one. I just discovered some problems with the JPEG loader; JPEGs "disappear" when they get loaded in very small chunks. I think I know what's going on; hope to have a fix today.
Attachment #279598 -
Attachment is obsolete: true
Assignee
Comment 9•17 years ago
With this patch, both the JPEG and PNG decoders seem to work perfectly. I'd call this ready for inclusion, and leave GIF as an exercise for the reader :)
Attachment #280070 -
Attachment is obsolete: true
Assignee
Comment 10•17 years ago
BTW, I'm pretty sure that this bug is present on all platforms, not just X11. Could someone with proper access please change the OS to "all"?
Updated•17 years ago
OS: Solaris → All
Hardware: Sun → All
Comment 11•17 years ago
This stuff should probably have gone into a new bug, but since we're here...
Assignee: blizzard → nobody
Component: GFX: Gtk → ImageLib
Flags: blocking1.9+
QA Contact: gtk → imagelib
Summary: cached images do not free associated x11 pixmap - memory on x-terminal shrink to zero → Too much memory is used due to holding on to decoded image data
Version: 1.7 Branch → Trunk
Updated•17 years ago
Assignee: nobody → federico
Comment 12•17 years ago
Hello, I had recently filed bug 395260 to find a way to disable pixmap caching to X. See below:
https://bugzilla.mozilla.org/show_bug.cgi?id=395260
There was a post in the above bug referencing this bug. I see the link to the diff and have downloaded the source from Mozilla. I would like to test the diff to see whether it fixes the pixmap issue, but I do not know how to go about this. I have been trying to use patch but get many errors. If anyone could post some quick steps to do so, I would love to do some testing and report back.
The pixmap problem makes firefox on thin clients very unstable (and firefox causes the clients to freeze hard after sucking up all available memory).
Thanks,
Jim Kronebusch
jim@winonacotter.org
Assignee
Comment 13•17 years ago
Can someone with sufficient permissions please mark bug #395260 as a duplicate of this one?
Status: NEW → ASSIGNED
Assignee
Comment 14•17 years ago
(In reply to comment #12)
> There was a post the above bug to reference this bug. I see the link to the
> diff and have downloaded source from mozilla. I would like to test the diff to
> see if this fixes the pixmap issue or not, but do not know how to go about
> this.
Download the Firefox sources here:
ftp://ftp.mozilla.org/pub/mozilla.org/firefox/releases/granparadiso/alpha7/source
Download the patch from attachment #280146 [details] [diff] [review]
Unpack the source; cd into its toplevel directory.
Run this command:
patch -p1 < patchfile
(where "patchfile" is the patch you downloaded).
Now compile Firefox as usual.
Comment 16•17 years ago
Well, I successfully built the granparadiso package with the suggested patch applied. Performance improved, but the pixmap cache was still allowed to spike and cause thin clients to freeze, though less often. Gavin McCullagh then suggested tweaking this section of the patch to decrease the amount of time the pixmaps are cached even further:
+static int
+get_discard_timer_ms (void)
+{
+ /* FIXME: don't hardcode this */
+ return 5000; /* 5 seconds */
+}
I went to the extreme and reduced this to 0.2 seconds (return 200). Now every site that would have crashed the client in the past... did not. Cool. However, performance on those sites became horrible. But extremely slow loading of the offending sites is far better than hard freezes. Possibly increasing the cache time to 1 second may help. The other 95% of average browsing seems to take no performance hit at all and works quite nicely.
However, non-Gecko browsers such as Opera and Konqueror do not seem to cache pixmaps in this fashion at all, and are still able to load the offending sites with no performance hit. I have no idea how this is done, but if Firefox could incorporate such methods, this would be a much better fix.
Can the portions of the patch that reduce the amount of time pixmaps remain in system memory be patched into current releases such as 2.0.0.6? It looks like it will be some time until FF 3 is released, and the ability to build a current release with this portion of the patch could solve a lot of headaches in the thin-client world until then. If this is possible, please let me know.
Federico, thanks for your work on this.
For others following this thread here is the method I was able to use on Ubuntu with success to build the patch into Firefox (from Gavin McCullagh):
1. Add gutsy source line to /etc/apt/sources.list if they do not already exist
deb-src http://ie.archive.ubuntu.com/ubuntu/ gutsy main restricted universe multiverse
2. sudo apt-get build-dep firefox-granparadiso
3. apt-get source firefox-granparadiso
4. cd firefox-granparadiso-3.0~alpha7
a. cd debian/patches/
b. wget http://primates.ximian.com/~federico/misc/moz-images-20070907.diff
c. edit moz-images-20070907.diff if desired to increase/decrease discard timer
d. echo "moz-images-20070907.diff" >> series
e. cd ../../
5. debuild -uc -us
6. Remove the source
Jim
Comment 17•17 years ago
Comment 18•17 years ago
Not sure what the heck happened there :) If anyone can remove my double post feel free.
Comment 19•17 years ago
I was just thinking (which isn't always a good thing). Would it be possible to modify this portion of the patch:
+static int
+get_discard_timer_ms (void)
+{
+ /* FIXME: don't hardcode this */
+ return 5000; /* 5 seconds */
+}
and add some intelligence to it? Could this involve some sort of formula that looked at what percentage of RAM is available and set the time accordingly? Say I have a system with 128MB RAM total, and 15% of it is in use, then the formula would set the discard time to 20 seconds. Then on the next request it would look at the percentage used again and see that 50% was in use, now it sets the discard time to 5 seconds. Next request RAM is almost full at 80%, discard set to .1 seconds. Does that make any sense? This would still involve some hard coded thresholds but it would add a little intelligence to the discard time. This could help increase performance when RAM is available, but stay safe when it is used up.
But this would also require a check on every request for pixmap storage which may add overhead. Also I don't know if the discard timer is set on every request or if it is only set on startup of Firefox. And I don't know if the discard time is set per individual storage request or if it affects all items currently in storage.
So feel free to tell me I'm nuts. It was just a thought :-)
Jim
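The adaptive-timer idea in the comment above can be sketched as a simple threshold lookup. This is a hypothetical illustration, assuming the browser can query the fraction of RAM in use; the thresholds and timeouts are made up for the example, not taken from any patch:

```cpp
// Hypothetical sketch of the adaptive discard timer proposed in comment 19.
// The thresholds and timeout values below are illustrative assumptions.
static int GetDiscardTimerMs(double fractionOfRamInUse)
{
    if (fractionOfRamInUse < 0.25)
        return 20000;  // plenty of free RAM: keep decoded data for 20 s
    if (fractionOfRamInUse < 0.60)
        return 5000;   // moderate pressure: the patch's 5 s default
    if (fractionOfRamInUse < 0.80)
        return 1000;   // high pressure: 1 s
    return 100;        // nearly full: discard almost immediately
}
```

Whether this would be consulted once per storage request or once at startup is exactly the open question raised in the comment; the sketch only shows the threshold formula itself.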
Comment 20•17 years ago
So won't this approach mean that any repaint after the image has timed out has to redecode the image? We should be pretty careful in tuning the timeout so that reading some text and then paging down doesn't mean you have to redecode the images on the page (e.g. the background) on every pagedown...
Comment 21•17 years ago
I've tried this patch, and for general browsing it works fantastically; however, it seems to either crash or grind Firefox to a halt if I try to open huge images, an example being NASA's Blue Marble:
http://earthobservatory.nasa.gov/Newsroom/BlueMarble/Images/BlueMarble_2005_SAm_09_4096.jpg
others here: http://earthobservatory.nasa.gov/Newsroom/BlueMarble/
It seems the timeout is shorter than the time it takes for the image to load, so it ends up fighting against itself. It will eventually load if I download the image to disk and open it, but it's still slow.
Comment 22•17 years ago
(In reply to comment #20)
> So won't this approach mean that any repaint after the image has timed out has
> to redecode the image?
I think this is preferable to Firefox filling up all available physical memory; swapping to disk hurts the performance of the whole operating system more than having to redecode images (although I won't say it can't be annoying).
(In reply to comment #19)
> Could this involve some sort of formula that
> looked at what percentage of RAM is available and set the time accordingly?
There are cache memory limits (derived from the total RAM available) that should be used in such calculations, not really the physical RAM size. See bug 213391 (for those interested, there's also a testcase there that makes FF's memory usage jump really high almost instantly).
Comment 23•17 years ago
> I think this is preferable to Firefox filling up all available physical memory
Sure. All I'm saying is that it might be a good idea for the timeout to be on the order of minutes rather than seconds. That would still prevent filling up physical memory and evict images in truly inactive tabs, while not causing issues with common user activity.
Ideally, the timeout would be based on some evaluation of the speed of the processor and amount of available RAM, but that sounds hard and complicated.
If we really wanted to overengineer this, we would time how long the first decode takes (just the decode, so time spent while calling libpng/jpeg/etc), and then use that as input in when to evict that image, along with the amount of memory taken up by the decoded image. But that's probably a future addon patch.
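The decode-cost-weighted eviction idea sketched above could look roughly like the following. The struct, the names, and the scoring formula are all illustrative assumptions, not anything from the patch:

```cpp
#include <cstddef>

// Rough sketch of the "overengineered" idea from comment 23: prefer
// evicting images that are cheap to redecode, large in memory, and long
// idle. All names and the formula are illustrative assumptions.
struct DecodedImage {
    double decodeMs;      // measured duration of the first decode
    size_t decodedBytes;  // size of the decoded pixel data
    double idleSeconds;   // time since the image was last painted
};

// A higher score marks a better eviction candidate.
static double EvictionScore(const DecodedImage& img)
{
    // The +1 ms guards against division by zero for trivially fast decodes.
    return (img.idleSeconds * static_cast<double>(img.decodedBytes))
           / (img.decodeMs + 1.0);
}
```

Under this scoring, a large image that decoded quickly is evicted before a small image that took half a second to decode, which matches the intuition in the comment.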
Comment 25•17 years ago
Is it possible to make the discard timer a user-settable option in about:config? Maybe an option to enable the discard timer at all, such as browser.enable.image_discard (true), and an option such as browser.image_discard.time (5000, in milliseconds)? This wouldn't necessarily be something everyone wants, so the ability to turn it on/off wouldn't be a bad thing, and since most users are on high-memory desktops it could even default to off.
I still like the idea that, if a discard timer is enabled, the time should be set based on current memory usage.
I still like the idea of if a discard timer is enabled to set the time based on
current memory usage.
Also it is probably worth looking into how Opera and Konqueror handle this as they do not seem to have the same problems.
Jim
Comment 26•17 years ago
Boris, this problem actually allows memory to be filled up within seconds! Install an application similar to xrestop and run it, then hit this website and watch how quickly RAM usage for pixmap storage climbs:
http://www.carteretcountyschools.org/bms/teacherwebs/sdavenport/artgallery6.htm
This is unacceptable and will crash a lower-memory client in seconds (Linux thin clients are very unstable because of this). It should also be noted that this is not just Firefox's fault. Although Firefox is the offending application in this case, the X server should be better equipped to keep RAM usage from reaching the point where the X server crashes. Firefox is also not the only offender in this area; other apps such as OpenOffice can be a problem as well.
To help deal with this problem farther up the line I have started a thread on the xorg mailing list:
http://lists.freedesktop.org/archives/xorg/2007-September/thread.html#28452
If there are any developers for Firefox who would like to help find a way to get the Xclients and Xserver to better communicate usage and set limits, feel free to join the discussion.
Jim
Comment 27•17 years ago
> then hit this website and watch how quickly RAM usage for pixmap storage is
> allowed to climb:
I don't see how timing out images within seconds will help that problem. A solid-color PNG's file size scales basically as the log of the decoded image size, so a fairly small actual image can easily fill up however much memory you happen to have once decoded.
Frankly, I don't think bug 395260 is a duplicate of this bug. What's needed to fix this bug, in my opinion, is to evict "unused" decoded images, for some definition of "unused". A good one might be an image that has not had its actual image data accessed (via the DOM or by being painted or whatever) for some period of time. The "unused" qualifier is almost certainly needed to prevent unacceptable impact on DHTML, pageload, and scrolling performance.
What's needed to fix bug 395260, again in my opinion, is a hard cap on the amount of pixmap storage we'll allocate. That should be a separate configuration option from the fix for this bug, and someone running Gecko in an environment where server memory is constrained could use that option. In such an environment, as you have said, a severe performance hit, or even not seeing the image at all, is much more acceptable than crashing the X server.
I do think that we should consider defaulting this hard cap to some fraction of the total physical memory in the machine. Not rendering images at all seems preferable to having to swap to paint them. But again, I think that would best be handled in bug 395260.
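The two mechanisms described here, discarding decoded data that has gone "unused" for some period and a hard cap on pixmap storage, can be sketched as two independent checks. All names, types, and the cap check are assumptions for illustration, not code from any patch:

```cpp
#include <cstddef>
#include <cstdint>

// Two independent policies sketched from comment 27; everything here is
// an illustrative assumption.
struct CachedImage {
    uint64_t lastAccessMs;  // refreshed whenever the image is painted or
                            // its pixel data is touched via the DOM
    size_t   decodedBytes;
};

// Policy 1 (this bug): evict decoded data not accessed recently.
static bool ShouldDiscard(const CachedImage& img, uint64_t nowMs,
                          uint64_t unusedTimeoutMs)
{
    return nowMs - img.lastAccessMs >= unusedTimeoutMs;
}

// Policy 2 (the bug 395260 side): a hard cap on total pixmap storage.
static bool AllocationWouldExceedCap(size_t currentTotalBytes,
                                     size_t newBytes, size_t hardCapBytes)
{
    return currentTotalBytes + newBytes > hardCapBytes;
}
```

Keeping the two checks separate matches the comment's point: the timeout tunes performance, while the cap is a correctness guard for memory-constrained X servers.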
Comment 28•17 years ago
Boris, if there is a way to do what you suggest above that would be great. Federico's patch to discard images after a specified amount of time does increase stability on the thin clients, but it still does not prevent abuse of pixmap storage that can possibly crash the Xserver. A cap on the maximum amount able to be used would be a better fix. But if Federico's patch could be backported to a current release of Firefox we would at least have a solution for the interim.
I think bug 395260, which I filed, is definitely different from the reason this bug was originally started. But a resolution to this bug could also help bug 395260. I would, however, still love to see a solution that provides a user-settable option in about:config to limit the allocation for pixmap storage.
I am not sure who the actual Firefox developers are and whether they know about this problem. If anyone knows, please help make sure they are aware of it.
Thanks,
Jim
Comment 29•17 years ago
Jim, Boris happens to be a big one ;)
Comment 30•17 years ago
(These patches should have gone into a new bug, not this one, but here we are.)
Capping the amount of pixmap memory allocated is trivial (assuming we do proper accounting for our pixmaps and don't care about pixmaps that other apps may have created). That should be separate from this bug. If someone cares, feel free to reopen that bug and I'll be happy to explain what needs to be done in that code.
This bug is about discarding uncompressed image data. For doing that it doesn't matter how that data is stored be it in a system object or on the heap. Please separate out the issues. Pixmaps are X11 specific and this is a very cross-platform bug.
As for how long we should wait before discarding data I think that is up in the air. I tend to agree with Boris that we probably want to wait about a minute to purge things not in use. This should have the least impact on performance.
There are many additional things that could be done once this patch is finished and landed, but they really aren't directly related to this bug and should probably live elsewhere.
Comment 31•17 years ago
this updates things to the trunk and cleans up several things.
We'll want to do several follow-on patches including adding support for this to the GIF decoder. We'll probably want to tweak the timeout (currently set to 45 seconds), possibly making it dynamic or make us hold on to images on the current page longer. There are a lot of options.
Attachment #280146 -
Attachment is obsolete: true
Attachment #284696 -
Flags: review?
Comment 32•17 years ago
This removes the temp buffer accrual in the JPEG decoder and makes us create the image container object earlier on.
Attachment #284696 -
Attachment is obsolete: true
Attachment #284708 -
Flags: review?(vladimir)
Attachment #284696 -
Flags: review?
Comment 33•17 years ago
(In reply to comment #31)
> We'll probably want to tweak the timeout (currently set to 45 seconds)
Is it possible that any future patches that incorporate a discard timer also allow the time to be set by the user in about:config? Then the time can be a general number that should be good for most circumstances, but allow tweaking by users for a particular need.
Thanks,
Jim
Comment on attachment 284708 [details] [diff] [review]
better fix
Looks fine, only issue I have is that AddToRestoreData is using realloc() for every chunk of data that comes in, with just the length of the new piece added -- so we'll end up realloc'ing every time we get data from the network, which sucks a little. It would be better to keep increasing this buffer -- start it at 4k and keep doubling up to 128k or something, and then increase by 128k at a time... when you're done reading you can always realloc it back down if there's too much slop.
Attachment #284708 -
Flags: review?(vladimir) → review+
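The growth policy suggested in the review (start around 4 KB, double up to 128 KB, then grow in 128 KB steps) can be sketched as follows; the function name and exact thresholds are illustrative, not code from the patch:

```cpp
#include <cstddef>

// Sketch of the buffer-growth policy suggested in the review: geometric
// growth for small buffers, then linear 128 KB steps for large ones.
// The name and thresholds are illustrative assumptions.
static size_t NextCapacity(size_t current, size_t needed)
{
    const size_t kStep = 128 * 1024;
    size_t cap = current ? current : 4096;  // start at 4 KB
    while (cap < needed) {
        if (cap < kStep)
            cap *= 2;      // double while under 128 KB
        else
            cap += kStep;  // then grow linearly
    }
    return cap;
}
```

When the stream ends, the buffer can be shrunk back down to the exact data size to trim any slop, as the review suggests.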
Comment 35•17 years ago
this will fix the reallocs by using nsTArray to manage the buffer and then compacting it at the end.
Attachment #284708 -
Attachment is obsolete: true
Attachment #285005 -
Flags: review?(vladimir)
Attachment #285005 -
Flags: review?(vladimir) → review+
Comment 36•17 years ago
I just filed several followup bugs for this one, including adding GIF support, making the timeout a pref, and adding Xlib logging. If there are other followup issues you would like to see, I suggest filing additional bugs and we'll go from there.
checked in -- marking this FIXED.
Status: ASSIGNED → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
Comment 37•17 years ago
Regression from 20071015_1517_firefox-3.0a9pre.en-US.win32.zip: the check-in for this bug or bug 399863 causes a crash.
http://crash-stats.mozilla.com/report/index/6c1ddbca-7b80-11dc-901b-001a4bd43ef6
(URL is wrong, correct URL is http://blog.livedoor.jp/ayase_shiori/)
Comment 38•17 years ago
This moves the creation of the image container for the PNG decoder so that it is lazily initialized; this code is basically the same as that in the JPEG decoder.
Attachment #285026 -
Flags: review?(vladimir)
Attachment #285026 -
Flags: review?(vladimir) → review+
Comment 39•17 years ago
fixed, thanks.
Comment 40•17 years ago
We should get a test at least for the crash from comment 38 and for any other stuff mentioned earlier in the bug (too lazy to check now to see if anything else got mentioned).
Flags: in-testsuite?
Comment 41•17 years ago
I've backed this out in the ongoing investigation into the current Tp regression
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Updated•17 years ago
Priority: -- → P1
Comment 42•17 years ago
This appears to have re-landed so I'm not sure why it isn't fixed. And it also appears to have caused around a 1% Tp regression on the Linux box and some on one of the talos boxes. I recall the suggestion that this patch was worth it but don't see any confirmation of this anywhere.
Comment 44•17 years ago
This is certainly not really proven technology...
As far as I understand, this patch makes it so that the 'undecoded' data, as read from the stream, is also stored in memory. So this will initially use more memory, until the timeouts kick in and start destroying the 'decoded' data (for JPEGs/PNGs, this is the image data itself).
I think this storage of 'undecoded' data needs to be coordinated with the normal caching (disk/memory), as it could be better to just reload from the disk cache and not allocate additional internal memory.
Comment 45•17 years ago
It certainly could be better, maybe, to read from the disk cache; it could also be significantly slower. Worth trying, though, in a separate bug.
Status: REOPENED → RESOLVED
Closed: 17 years ago → 17 years ago
Resolution: --- → FIXED
Comment 46•17 years ago
Did this help memory use according to task manager or similar?
/be
Comment 47•17 years ago
I see a definite reduction in both memory usage and the number of GDI objects 45 seconds after the page is loaded. And when you force the images back in (not by refreshing, but by redisplaying them after they've scrolled away and been purged by the timer), both numbers go back up to their peak, and will then go back down (45 seconds later) to their minimum again.
Comment 48•17 years ago
There is some definite memory reduction, but since most of the images on the web (especially forums) are GIFs, bug 399925 needs to be fixed for an even greater memory use reduction.
Comment 49•17 years ago
Bug 51028 has some numbers after running the stress test there; on OS X my VM size dropped by about 400MB.
Comment 50•17 years ago
Has anyone been able to successfully apply a version of this patch to a current flavor of the FF 2.x codebase? The current patch and all earlier versions appear to be only for the latest trunk or an earlier version of FF3-alpha, which doesn't do much for the rest of the world that still needs to run FF2 in a production environment.
Comment 51•17 years ago
The only things landing on the 2.0 branch are stability, security or regression fixes. This patch falls way outside what would be accepted for the branch.
Comment 52•17 years ago
Well, although invasive, I'd classify this as a stability fix myself.
Regardless, even if it's not acceptable for mainline, a patch that could be applied to 2.x would be extremely useful. I think the problem was that it relied on some infrastructure that didn't exist in FF2.
Comment 53•17 years ago
In general, the image decoders have changed a lot between Fx2 and Fx3 due to the change to the Cairo rendering backend. I'm willing to bet that the patch in this bug wouldn't even have a prayer of working in the non-Cairo world. The rendering backends at this point in time are essentially apples and oranges.
Comment 54•17 years ago
> Well, although invasive, I'd classify this as a stability fix myself.
"stability" means crashes and hangs. Particularly large memory leaks _may_ qualify. This but just doesn't fit those criteria.
Comment 55•17 years ago
Note that this improves the testcase from bug 213391 quite a bit (if you wait 45 s for the caches to clear). Now that we have this, is it easy to detect when we are blowing through our in-memory cache, and to keep only the images that are being painted in RAM?
Comment 56•17 years ago
> "stability" means crashes and hangs. Particularly large memory leaks _may_
> qualify. This but just doesn't fit those criteria.
Currently, Firefox crashes on me every 4 or 5 days under heavy usage (generally 4+ windows showing a rotating set of 50+ tabs, which I admit is not typical of 'normal' usage). Generally, all the Firefox windows disappear, and then firefox-bin pegs the CPU for hours until I kill -9 the process (which might be a separate bug). I have experimented with clean profiles and removing the Flash plugin, to no avail.
I've been running a cvs build of minefield/3.0a9pre for just under two weeks, and haven't had a single crash yet, and xrestop is showing mem usages that are 20-30 times smaller (16-20mb instead of half a gig of pxm mem).
This is why I'd classify it as a stability fix: it causes crashes. That said, I know it's not an issue for most people under normal use (although I'm not the only person who browses like this)
Comment 57•17 years ago
For what it's worth, I usually have 30+ windows open, with 10-20 tabs in each. And I don't generally run into the sort of memory or crash issues you're describing.
Perhaps it's worth figuring out what exact sites are involved and seeing whether there's something going on other than the image memory usage?
Comment 58•17 years ago
I'll try to reproduce it again (although I've grown somewhat attached to 3.0a9pre). Suggestions as to what logs I should be keeping?
Comment 59•17 years ago
Some idea of what sites in particular cause the memory usage, and better yet the crashes...
If you end up with the browser in a loop, kill it with -SEGV, not -KILL. That should generate a talkback report that might help understand that issue.
Comment 60•17 years ago
What originally put this issue on my radar was a site that included a 2 MB JPEG image (over 41 megapixels, poster size) that had been scaled down to thumbnail size. The pixmap decompression for display caused the resources for this image to balloon to 160 MB of RAM. Couple this with the fact that pixmap resources were not being freed, and you can count on one hand how many times this page could be reloaded before it blew out all available memory, including swap.
Unfortunately, I can't provide a link because it came from an inventory page which required an account login.
Depends on: 414259
Comment 62•16 years ago
This patch causes a crash. Please see bug 441563.
Sorry, I cannot modify the dependency list.
Comment 63•16 years ago
This patch changed imgIContainer without changing the iid. See bug 475897.
Updated•15 years ago
Depends on: CVE-2010-1201
See Also: → https://launchpad.net/bugs/137764
See Also: → https://launchpad.net/bugs/111299