Closed Bug 637782 Opened 10 years ago Closed 10 years ago
Memory not being released after visiting image heavy sites
I wouldn't expect malloc/mapped and malloc/zone0/allocated to go down, necessarily. It depends on the malloc implementation.... Note that malloc/mapped doesn't mean we're using the memory. We could have called free() on it but the malloc impl just didn't munmap the mappings, on the assumption that we might want them again. This is not uncommon for malloc to do.
Any reason why it couldn't reuse the memory that is allocated? malloc/mapped and malloc/zone0/allocated will grow as soon as you visit any new page. This continues until you run out of RAM.
Ah, it _should_ be reusing it, if that's mapped but freed memory. Over to imagelib, to see what's going on here.
Status: UNCONFIRMED → NEW
Component: General → ImageLib
Ever confirmed: true
QA Contact: general → imagelib
image.mem.min_discard_timeout_ms was changed from 10 seconds to 120 seconds a few weeks ago (bug 593426), which means that in the worst case you might have to wait 240 seconds (twice the timeout, since the discard timer only fires at that granularity) before an image is cleaned up. This can create problems on certain NSFW websites that use hundreds of megabytes of images per page. If you keep visiting different pages on such a site, you can easily use 1 GB or more, and it takes a while before the images are cleaned up. Note that on Windows those images aren't counted in malloc/*, but they are part of win32/workingset. I don't know about Mac.
Also note that you have to close the tab, or navigate to another URL, before the images are really unlocked and released (tab switching or page scrolling don't do that). I think that's at least one of the reasons why Chrome seems to use less memory for images: it can unload images that aren't visible. There's a bug about that somewhere.
Do they still reside in memory after you clear all history and cache? From my testing I can see that malloc/mapped and malloc/zone0/allocated never go down, and they are the only values that show any significant amount of memory used. images/*, layout/*, storage/* and gfx/surface/image show at most a couple of MB of usage. Firefox never seems to be able to reuse the allocated memory; perhaps that's why the memory grows so much? While testing for bug 634855 and bug 631494 I didn't notice this problem, and looking at the bugs depending on bug 632234, something happened around Jan/Feb that caused several memory leaks.
I'm not sure if something has changed since my last post, but now I see that gfx/surface/image is never really cleared. It currently stands at a whopping 1.65 GB. It had been up to 2.43 GB, but that was after swap usage reached 1.7+ GB. Firefox itself topped out at about 2.4 GB, leaving no free memory available to the system.
A little further testing: in my previous comment I had been browsing for a few hours before following the STR in comment 1. After restarting the browser and only doing the STR in comment 1, gfx/surface/image did go down to <1 MB after closing the tabs. I had basically only browsed these sites before doing the steps in comment 1: youtube.com, reddit.com, imgur.com (originating from reddit), autosport.com, macrumors.com, engadget.com, uk.gizmodo.com. Without actually browsing around for a couple of hours I can't seem to reproduce the ever-growing gfx/surface/image. I've been browsing around on the above pages and redid the test, and gfx/surface/image is now at 35 MB. malloc/mapped and malloc/zone0/allocated still show increased memory usage, though.
I created a Mozmill endurance test to attempt to replicate this issue. The test opens http://www.pixdaus.com/, http://boston.com/bigpicture/, and http://www.theatlantic.com/infocus/ in separate tabs twice (so six tabs in total), and then closes all of the tabs. This is repeated for 10 iterations. With a 1 second delay between the iterations the allocated memory rarely drops, however with a 240 second delay (as suggested in comment 4) the allocated memory drops for every iteration. 1 second delay between iterations: http://mozmill.blargon7.com/#/endurance/report/88b31ff45aa5a2c9f30fa1462d00b737 240 second delay between iterations: http://mozmill.blargon7.com/#/endurance/report/88b31ff45aa5a2c9f30fa1462d00a942
Whiteboard: [mozmill-test-needed][mozmill-endurance] → [mozmill-endurance]
Going back to the previous setting of 10 seconds for image.mem.min_discard_timeout_ms helps clear the memory much faster, as expected, and setting image.mem.decodeondraw to true also prevents Firefox's memory usage from growing too much with a few tabs in the background. I think bug 593426 exacerbated the underlying issue of how Firefox stores images in memory. No other browser seems to use as much memory when viewing image-heavy sites as Firefox does. Is there any bug that deals with memory usage concerning images?
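For anyone who wants to try the workaround above, it can be captured in a user.js file in the profile directory. This is a minimal sketch using the values suggested in this thread (not official recommendations); check in about:config first that both prefs exist in your build, since pref names and defaults vary across versions.

```javascript
// user.js sketch of the workaround described above.

// Discard decoded image data after ~10 s again (the pre-bug-593426 value)
// instead of the current 120 s default.
user_pref("image.mem.min_discard_timeout_ms", 10000);

// Only decode images when they are actually drawn, so images in background
// tabs don't keep decoded copies alive.
user_pref("image.mem.decodeondraw", true);
```

user.js is read at startup, so the values take effect on the next browser restart; the same prefs can also be changed interactively through about:config.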
d.a, what's your extension list? There are a few extensions that make Firefox hang onto more memory than usual when browsing image-heavy pages.
I reported bug 643651, which looks to be the same issue. The test site is http://www.mapcrunch.com/gallery This is far more image heavy than the examples given, and in FF4 it renders the browser almost unusable through massive memory use. FF 3.x, Chrome, Safari, IE 6+ and Opera have no such problems with this site, and use much less memory.
No one has mentioned http://www.npi-news.dk/ yet.
(In reply to comment #11)
> d.a, what's your extension list?
>
> There are a few extensions that make Firefox hang onto more memory than usual
> when browsing image-heavy pages.

I can reproduce this without any extensions, but the memory usage is slightly lower. I wonder if this is related to bug 642902?
I just wanted to comment that I see the same thing happen on Windows, so this isn't just a Mac problem, and IMO the "Platform" field should be tweaked to reflect that. When I ran http://www.mapcrunch.com/gallery in Safe Mode on Windows 7 x64, Firefox hit a peak of just over 1,800,000 K of memory usage (per Task Manager) on that site.
Using the Instruments tool included in Xcode, I checked the Firefox process with the Leaks template while visiting this page: http://www.theatlantic.com/infocus/2011/03/earth-hour-2011/100035/

Call Tree -> Leaked Blocks shows these as the top 3:
js::PropertyTree::insertChild(JSContext*, js::Shape*, js::Shape*), 46.1% of total, # of leaks: 594
moz_xmalloc (libmozalloc.dylib), 23.9% of total, # of leaks: 820
nsAString_internal::MutatePrep(unsigned int, unsigned short**, unsigned int*), 12.6% of total, # of leaks: 166
d.a., the OSX leaks tool has some known issues. In particular, it can't handle pointers into the middle of allocated regions or tagged pointers (the latter is certainly extensively used by the JS engine) and treats them as leaks....
A commenter on my blog (see http://blog.mozilla.com/nnethercote/2011/05/24/leak-reports-triage-may-24-2011-2/#comments) says the same problem occurs with http://chan4chan.com/archive/ (warning: NSFW site).
To see the problem on MapCrunch you'll now need to add ?webgl=1 to the URL: http://www.mapcrunch.com/gallery?webgl=1 The Google Street View API uses WebGL to display the panoramas if the browser is capable. I found a legacy option in the API to disable this behaviour; the query string above forces it to use WebGL mode as before.
I've consolidated a number of bugs about image-heavy sites, including this one, into bug 660577. Please CC yourself on that bug if you want to follow along. Thanks.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → DUPLICATE
Duplicate of bug: 660577