
Improve page loading and memory footprint by deferring decoding for images not yet in view

RESOLVED DUPLICATE of bug 847223

Status

Product: Core
Component: ImageLib
Priority: --
Severity: enhancement
RESOLVED DUPLICATE of bug 847223
Reported: 7 years ago
Modified: 4 years ago

People

(Reporter: IU, Unassigned)

Tracking

(Blocks: 1 bug, {footprint, perf})

Trunk
footprint, perf
Points:
---
Dependency tree / graph

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [parity-webkit][parity-ie9][parity-opera][MemShrink:P1][Snappy:p1])

(Reporter)

Description

7 years ago
User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.3a1pre) Gecko/20100125 BetterPrivacy-1.47 Minefield/3.7a1pre
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.3a1pre) Gecko/20100125 BetterPrivacy-1.47 Minefield/3.7a1pre ID:20100125052940

I would like to request that Gecko's image handling be improved by deferring decoding of loaded images until they are first in view.  I might be misunderstanding Gecko's mechanics, so please bear with me as I try to explain.

Firefox has atrocious loading times and a large memory footprint when it comes to pages containing many medium- to high-resolution images.  This is in stark contrast to Chrome/WebKit.  Compare loading the linked URL in the two browsers: the difference is enormous (my comparison is at the bottom).

If my assumption is correct, the problem is that Firefox (like IE) decodes images upon loading, instead of deferring decoding until the first time a given image needs to be displayed (as Chrome seems to do).  The result for Firefox is very poor performance on many image-heavy sites -- especially when memory is already being swapped.

I believe the WebKit approach makes a whole lot more sense.  That a person visits a particular page does not mean the person is going to view everything on that page.  Maybe the person is interested in only a few links or articles near the top.  If the page is rife with images, all the memory and processing power Firefox expended was for naught, and only served to frustrate the user.  The possible scrolling lag from deferring decoding until first view is, by comparison, quite low (compare Chrome).
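For illustration, the decode-on-draw behavior being requested can be sketched as a toy model (Python used purely as pseudocode; this is not Gecko's or WebKit's actual implementation, and all names are hypothetical): the compressed bytes are kept at load time, and decoding only happens the first time the image is painted.

```python
class DeferredImage:
    """Toy model of decode-on-draw: keep only the compressed bytes
    at load time, and decode lazily on first paint."""

    def __init__(self, compressed_bytes):
        self.compressed = compressed_bytes   # always kept after load
        self.decoded = None                  # filled in on first draw

    def decode(self):
        # Stand-in for a real decoder (JPEG/PNG/etc.).
        return b"RGBA:" + self.compressed

    def draw(self):
        if self.decoded is None:             # first time in view
            self.decoded = self.decode()
        return self.decoded


img = DeferredImage(b"\x89PNG...")
assert img.decoded is None      # loading the page costs no decode memory
img.draw()
assert img.decoded is not None  # decoding happened only when painted
```

The point of the sketch: page load populates only `compressed`, so a page full of never-scrolled-to images never pays the decode cost in CPU or memory.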

MEMORY CONSUMPTION COMPARISON (loading the linked URL):

Minefield/3.7a1pre Gecko/20100125 (page load)
---------------------------------------------
Process     | PID |Pvt Memory |Working Set
firefox.exe, 1516, 560.83 MB,  9.75 MB


Chromium 4.0.299.0 (36331) (page load)
--------------------------------------
Process    | PID |Pvt Memory |Working Set
chrome.exe, 376,  19.37 MB,   32.52 MB
chrome.exe, 3488, 6.54 MB,    12.04 MB
chrome.exe, 596,  8.78 MB,    14.72 MB
chrome.exe, 3088, 86.8 MB,    86.18 MB
chrome.exe, 772,  3.55 MB,    7.23 MB
-------
TOTALS:           125.04 MB   152.69 MB

Chrome 4.0.299.0 (36331)(after scrolling page up/down a couple of times)
------------------------------------------------------------------------
Process    | PID |Pvt Memory |Working Set
chrome.exe, 376,  19.59 MB,   7.24 MB
chrome.exe, 3488, 6.54 MB,    980 kB
chrome.exe, 596,  8.78 MB,    968 kB
chrome.exe, 3088, 358.6 MB,   37.16 MB
chrome.exe, 772,  3.55 MB,    1,008 kB
-------
TOTALS:           397.06 MB   47.356 MB


As can be seen, the difference is quite stark.  Firefox, like IE, sucks up all the memory it can, decoding every image into memory.  Chrome uses a more efficient process.

For all the memory management improvements made to Firefox thus far, this remains one of Firefox's areas of great weakness, and it is no doubt why many users complain of Firefox's "memory leak" (especially when coupled with the default fastback cache setting, which holds on to all that memory and lets the problem grow out of hand with each new page loaded).

With many blogs following this media/image-board pattern, this is not an isolated issue.  I run into this problem a lot, and so far Chrome is the only other browser I use that can handle such websites well (I don't use Opera, so I can't comment, though legend has it...).

Thus, it is really easy to get Firefox to consume over 1 GB of RAM with as few as two or three such blog pages, so improvements would be greatly appreciated.  As I'm not versed in browser architecture, feel free to correct my ignorance. :-)

Thanks

Reproducible: Always

Steps to Reproduce:
1. Launch Firefox and...
2. Load the linked URL: http://prawnkingold.blogspot.com/2008_04_01_archive.html
3. Compare with Google Chrome
(Reporter)

Updated

7 years ago
Keywords: footprint, perf
Version: unspecified → Trunk
This is what we've been calling "decode-on-draw", and it's what I spent all last summer implementing. Glad you're as excited about it as I am. ;-) 
 
In fact, it's already landed in mozilla-central, but it hasn't been turned on yet, because I didn't have time to get to the last 5% of work before I went back to school. I've been trying to chug through it, but progress is much slower now that I'm not working on it full time.

I wasn't aware that chrome does this already - I'll have to investigate it further.

Keep tabs on http://bholley.wordpress.com. I'll be blogging about it once I switch it on, and then I'd love to have your feedback. To see what's holding things up, search for the bugzilla whiteboard item "decode-on-draw".
Status: UNCONFIRMED → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → DUPLICATE
Duplicate of bug: 435296
(Reporter)

Comment 2

7 years ago
Ooo.  Very nice.  Good to see there's much progress on this.  Hopefully all the regressions noted for that bug can be ironed out in due time.

Thanks.
Status: RESOLVED → VERIFIED
(Reporter)

Comment 3

7 years ago
For completeness, here are the Opera 10.5 Beta numbers:

Opera 10.5 Build 3257 (page load)
---------------------------------
Process     | PID |Pvt Memory |Working Set
opera.exe,   3400, 148.69 MB,  112.7 MB


Opera 10.5 Build 3257 (after scrolling page up/down a couple of times)
----------------------------------------------------------------------
Process     | PID |Pvt Memory |Working Set
opera.exe,   3400, 138.13 MB,  105.63 MB


As you can see, Opera is incredibly efficient and fast.
(Reporter)

Comment 4

6 years ago
I'm reopening this bug because bug 435296 does not do what this bug was requesting.  The request is that images in the active tab not be decoded at all if they are not in view.  This is how WebKit, Opera, and now IE 9 do it.  Sure, Firefox 4 has image discarding, but it does not always work properly, as many people have encountered and reported on the MozillaZine.org forums.  Add to that the fact that the images have to be decoded first and then discarded -- hardly efficient.

As bug 573583 comment 3 stated, bug 435296 is concerned only with not decoding images in background tabs.  But what about active/foreground tabs containing pages with many high-resolution images?

And now that even Microsoft, with IE 9, handles image decoding efficiently, on par with the other major browsers, I believe this is worth considering.
Status: VERIFIED → REOPENED
Ever confirmed: true
Resolution: DUPLICATE → ---
Whiteboard: [parity-webkit][parity-ie9][parity-opera]

Updated

6 years ago
Blocks: 658790
Blocks: 682230
No longer blocks: 682230
Depends on: 689623
No longer blocks: 658790
Blocks: 683284
Whiteboard: [parity-webkit][parity-ie9][parity-opera] → [parity-webkit][parity-ie9][parity-opera][MemShrink][Snappy]
Whiteboard: [parity-webkit][parity-ie9][parity-opera][MemShrink][Snappy] → [parity-webkit][parity-ie9][parity-opera][MemShrink:P1][Snappy]

Updated

5 years ago
Whiteboard: [parity-webkit][parity-ie9][parity-opera][MemShrink:P1][Snappy] → [parity-webkit][parity-ie9][parity-opera][MemShrink:P1][Snappy:p1]
Note that if all you care about is decoding when the image comes into view, then bug 715308 will get you most of the way there.

But we probably want to *stop* decoding images when they leave the viewport, in order to handle fast scrolling.  In order to do this, we'd need bug 689623.
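For illustration, the combination described above -- decode on entering the viewport, discard on leaving it -- could be sketched like this (a toy Python model under assumed names, not the actual imagelib/layout code):

```python
class Viewport:
    """Toy model of visibility-driven decoding: images are decoded when
    they scroll into view and their decoded data is discarded when they
    scroll out, so fast scrolling never accumulates offscreen bitmaps."""

    def __init__(self, images, height):
        self.images = images        # {name: (top, bottom)} document positions
        self.height = height
        self.decoded = set()        # names currently holding decoded data

    def scroll_to(self, y):
        for name, (top, bottom) in self.images.items():
            visible = top < y + self.height and bottom > y
            if visible:
                self.decoded.add(name)       # decode on entering view
            else:
                self.decoded.discard(name)   # stop keeping offscreen bitmaps


vp = Viewport({"a": (0, 500), "b": (2000, 2500)}, height=800)
vp.scroll_to(0)
assert vp.decoded == {"a"}
vp.scroll_to(1900)
assert vp.decoded == {"b"}   # "a" was discarded on the way down
```

The hard part in the real engine is computing the "visible" test cheaply and keeping the list up to date as layout and scrolling change, which is what bug 689623 is about.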
One interesting use case relating to this:  imagine you have a bunch of image-heavy tabs that you want to close.  One way to do this is to hit ctrl-w repeatedly, or click on the foreground tab's 'X' repeatedly.  Unfortunately, doing this briefly brings every one of those tabs into the foreground, so for a short time we start decoding all images in each tab again, which is a CPU and memory hit.

This might not sound important, but in beta versions of areweslimyet.com (bug 704646) we're seeing that resident memory measurements taken just after closing 30 tabs are ~25% higher than they were just before closing them.  This counter-intuitive result is largely due to image memory spiking.  The usage disappears within 30 seconds and we drop to a much lower number, but it's still bad -- lots of CPU and memory churn, and there's a risk of OOMing.
It may be that we don't immediately drop images in closed tabs, or something nuts like that...
We don't do anything special about closed tabs. We should!
(In reply to Joe Drew (:JOEDREW!) from comment #8)
> We don't do anything special about closed tabs. We should!

Presumably we stop any decoding that's still in progress when the tab is closed, but it sounds like any already-decoded images just expire on the usual 10--20 second timer?

Comment 10

5 years ago
How about leaving the in-view images of the last x viewed (and not yet closed) tabs decoded, to speed up tab switching?
(In reply to Nicholas Nethercote [:njn] from comment #9)
> Presumably we stop any decoding that's still in progress when the tab is
> closed

I don't even know if that's the case. I'd be unsurprised either way.

>, but it sounds like any already-decoded images just expire on the
> usual 10--20 second timer?

Yep.

(In reply to DB Cooper from comment #10)
> How about leaving the in view images of the last x viewed and not closed
> tabs decoded to speed up tab switching?

We don't know what images are in view. Fixing that is bug 689623.
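For illustration, the timer-based expiry described in this exchange (decoded images being discarded after roughly 10--20 seconds without a paint) might look something like this toy sketch (hypothetical names; a single configurable timeout stands in for Gecko's actual discard timer):

```python
import time

class DiscardTracker:
    """Toy model of timer-based image discarding: decoded images that
    haven't been painted recently get their bitmaps dropped."""

    def __init__(self, timeout_seconds=20.0):
        self.timeout = timeout_seconds
        self.last_drawn = {}     # image name -> timestamp of last paint

    def note_draw(self, name, now=None):
        self.last_drawn[name] = time.monotonic() if now is None else now

    def expire(self, now=None):
        now = time.monotonic() if now is None else now
        stale = [n for n, t in self.last_drawn.items()
                 if now - t > self.timeout]
        for n in stale:
            del self.last_drawn[n]   # stand-in for freeing decoded data
        return stale


tracker = DiscardTracker(timeout_seconds=20.0)
tracker.note_draw("hero.jpg", now=0.0)
tracker.note_draw("footer.png", now=15.0)
assert tracker.expire(now=21.0) == ["hero.jpg"]   # only the stale image is freed
```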
>> Presumably we stop any decoding that's still in progress when the tab is
>> closed
>
> I don't even know if that's the case. I'd be unsurprised either way.

I think bug 672578 might handle this, but I'm not sure.
This has now been implemented!  See bug 689623 comment 155 for details.

But it might need some tweaking.  E.g. see bug 658790, which shows that pretty much all images on an image-heavy page can be decoded during loading.
Whiteboard: [parity-webkit][parity-ie9][parity-opera][MemShrink:P1][Snappy:p1] → [parity-webkit][parity-ie9][parity-opera][MemShrink][Snappy:p1]
Depends on: 266111
I'm going to dup some other bugs to this one:

- Bug 658790.  It lists the following sites:

  http://www.militaryphotos.net/forums/showthread.php?99988-Russian-Photos-%28updated-on-regular-basis%29/page2737
  http://www.mapcrunch.com/gallery
  http://www.9-eyes.com/
  http://www.theatlantic.com/infocus/2011/03/air-strikes-on-libya/100031/

  I tested the militaryphotos.com one and found it quite a good test, giving ~250 MiB of decoded images at peak (see bug 658790 comment 3).

- Bug 266111, which lists this site:

  http://web.archive.org/web/20060428001051/http://api.openoffice.org/docs/DevelopersGuide/Spreadsheet/Spreadsheet.htm

  I tested this too, and it also has ~250 MiB of decoded images at peak (see bug 266111 comment 11).

FWIW, I found http://prawnkingold.blogspot.com/2008_04_01_archive.html to not be a very good test case.  It doesn't have nearly as many images as the ones listed above.
Depends on: 266111
Duplicate of this bug: 266111
Duplicate of this bug: 658790
No longer depends on: 266111
>   I tested the militaryphotos.com one and found it quite a good test, giving
> ~250 MiB of decoded images at peak (see bug 658790 comment 3).
> 
>   I tested this too, and it also has ~250 MiB of decoded images at peak (see
> bug 266111 comment 11).

Ah, bug 791865 comment 5 says:

> 250mb is the default maximum amount of decoded image data we'll willingly
> keep around.  Sometimes we're forced to keep around more than that, but
> you're seeing here us hitting our max.
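For illustration, the soft cap described in that quote -- a default maximum amount of decoded image data, with least-recently-used bitmaps dropped once it is exceeded -- can be modeled as a toy sketch (hypothetical names, not the actual imagelib cache):

```python
from collections import OrderedDict

class DecodedImageCache:
    """Toy model of a soft cap on total decoded image data:
    least-recently-used bitmaps are dropped once the cap is exceeded."""

    def __init__(self, cap_bytes=250 * 1024 * 1024):
        self.cap = cap_bytes
        self.entries = OrderedDict()   # name -> decoded size in bytes
        self.total = 0

    def add(self, name, size):
        if name in self.entries:           # re-decoding replaces the old entry
            self.total -= self.entries[name]
        self.entries[name] = size
        self.entries.move_to_end(name)     # mark as most recently used
        self.total += size
        while self.total > self.cap and len(self.entries) > 1:
            old_name, old_size = self.entries.popitem(last=False)
            self.total -= old_size         # evict the LRU bitmap


cache = DecodedImageCache(cap_bytes=100)
cache.add("a", 60)
cache.add("b", 30)
cache.add("c", 40)          # pushes the total to 130 -> "a" is evicted
assert list(cache.entries) == ["b", "c"]
assert cache.total == 70
```

This also explains why the earlier ~250 MiB measurements all plateaued at the same number: they were hitting the cap, not measuring the pages' true decoded footprint.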
Another good test case:
http://congressoamericano.blogspot.com/p/fotos-do-congresso-americano-iii.html

With a pre-689623 build on Mac my resident climbs to 3 GiB, with images accounting for about 2.7 GiB.  With a post-689623 build my resident climbed to 2.8 GiB before settling down to 650 MiB.  So bug 689623 helped a bit in reducing the peak memory consumption at load time, but not a lot.


> >   I tested the militaryphotos.com one and found it quite a good test, giving
> > ~250 MiB of decoded images at peak (see bug 658790 comment 3).
> > 
> >   I tested this too, and it also has ~250 MiB of decoded images at peak (see
> > bug 266111 comment 11).

I was wrong about this -- I didn't realize that we discard images above that 256 MiB limit immediately upon leaving the tab.  So my measurements were flawed, because I was viewing about:memory in another tab.  jlebar suggested opening about:memory in another window instead.  After doing that, I see the militaryphotos.com site has 608 MiB of decoded images (in a pre-689623 build), and the openoffice.org one has 306 MiB.

Comment 19

4 years ago
Hey Nicholas, when I opened your first link, Nightly never released memory until I actually scrolled the page.  The page took almost 1 minute to load, and I scrolled 30 seconds after that.  Memory did get released when I reloaded the page and the images were served from the cache.
(In reply to Nicholas Nethercote [:njn] from comment #18)
> Another good test case:
> http://congressoamericano.blogspot.com/p/fotos-do-congresso-americano-iii.
> html

I looked into this a little closer. The list of visible images starts out good, containing only a handful of images. The problem is that as the network requests for the remaining images come in we create the image frames for them, and when we create the frames we assume the images are visible until we next update the list of visible images.
(In reply to Timothy Nikkel (:tn) from comment #20)
> (In reply to Nicholas Nethercote [:njn] from comment #18)
> > Another good test case:
> > http://congressoamericano.blogspot.com/p/fotos-do-congresso-americano-iii.
> > html
> 
> I looked into this a little closer. The list of visible images starts out
> good, containing only a handful of images. The problem is that as the
> network requests for the remaining images come in we create the image frames
> for them, and when we create the frames we assume the images are visible
> until we next update the list of visible images.

Before there is any data for the images, they still have frames, but they aren't image frames -- just placeholder loading frames.
Should I dupe bug 847223 here, or do you want to make this depend on that bug?
Duplicate of this bug: 847223
(Reporter)

Updated

4 years ago
Depends on: 847223
Whiteboard: [parity-webkit][parity-ie9][parity-opera][MemShrink][Snappy:p1] → [parity-webkit][parity-ie9][parity-opera][MemShrink:P1][Snappy:p1]
(Reporter)

Comment 24

4 years ago
With bug 847223 landed, I've performed various tests using image-heavy sites, including those of comment 14 and bug 847223 comment 13.

As far as I can tell, the fix is working very well -- both for initial loading and for decoding/discarding on scroll.

Tested on Windows 8 with m-c cset 9366ee039645

Thanks for fixing this! \o/
Status: REOPENED → RESOLVED
Last Resolved: 7 years ago → 4 years ago
Resolution: --- → FIXED
Thanks for checking, IU!  It took over 3.5 years, but we got there :)
Resolution: FIXED → DUPLICATE
Duplicate of bug: 847223