Closed Bug 205893 Opened 21 years ago Closed 20 years ago

ImageLib uses GDI handles inappropriately; malicious page can crash mozilla, windows

Categories

(Core :: Graphics: ImageLib, defect)

x86
Windows 98
defect
Not set
critical

Tracking


RESOLVED FIXED

People

(Reporter: bugzilla, Assigned: jdunn)

References

Details

(Keywords: crash, fixed1.4.1, Whiteboard: fixed 1.5b)

Attachments

(7 files, 3 obsolete files)

User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.4b) Gecko/20030507
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.4b) Gecko/20030507

For some reason, objects stored in the memory cache use a GDI handle. The
smallest size the memory cache can be is one mebibyte. The smallest size an
image can be is 34 bytes. Even with the overhead of indices, a
maliciously-crafted webpage can make the cache store 10000 'web bugs',
sufficient to make mozilla crash the operating system.

Reproducible: Always

Steps to Reproduce:
1. enable memory cache
2. put bomb.php or bomb.php.html on a convenient webserver
3. put webbug.gif on a convenient webserver
4. open in mozilla

Actual Results:  
Mozilla loads all 10000 images into the cache, using a GDI handle and 50k of
memory for each one. Assuming the system is still usable, these handles are not
freed until the webbugs are displaced from the cache.


Expected Results:  
Mozilla loads all 10000 images into the cache, using 50k of memory for each one.
The system is hit for 1Mb of ram, but no GDI handles. System does not crash.


Note that this entire exploit, in flat HTML form, takes up 2k on a webserver, but
transfers around 500 bytes per file. Around 5Mb of data is needed to lock
windows XP, but far less for 9x. This makes it unlikely to affect modem users,
but it could easily crash a machine with DSL or faster, and can easily be
disguised by making the webbugs the same colour as the background.
Attached image 'web bug' - a 1x1 gif.
If the attacker can guess a local file (%mozilladir%/res/loading-image.gif for
instance), the download required drops to 2k or so. Using dynamic HTML it could
be even smaller. If %mozilladir% can be found out using javascript, then a 2k
webpage can crash any windows machine running mozilla.
While this is similar to bug 199443, in bug 199443 the memory cache loses track
of handles it has allocated, whereas in this bug the memory cache, working as
intended, is sufficient to crash the system.

Workarounds:
disable images
disable memory cache
ImageLib uses GDI handles, not the memory cache.  Changing component.
Assignee: gordon → jdunn
Component: Networking: Cache → ImageLib
QA Contact: cacheqa → tpreston
Summary: memory cache uses GDI handles inappropriately; maliciously-crafted webpage can crash mozilla, windows (exploit available) → ImageLib uses GDI handles inappropriately; maliciously-crafted webpage can crash mozilla, windows (exploit available)
I believe this is real though I have not tested it.  Much more of an
architectural issue than a 'bug'.  Confirming.  Marking as 'crash', though it's
more severe in that it can take your entire system down.

Also - this is quite related to the memory cache implementation; see Darin's and
Jim Dunn's comments in bug 199443.

While not a solution to this bug per se (an attacker could still cause the
crash, though with considerably more pain, by having 10000 different 1x1 colored
images), an interesting feature to the cache (as Lewis indicated in bug 199443)
would be to not store duplicate copies of identical images in the memory cache -
or at least don't hold GDI's for duplicate images.  As usual, the big offender
is transparent 1x1 bitmaps; next biggest is 1x1 colored bitmaps, all others are
relatively minor.  In fact, a significant win in GDI and memory cache usage
could be made just dealing with those (or just 1x1 transparents).  Note that we
have optimizations in the rendering code to not render them anyways, so storing
a GDI for a 1x1 transparent image doesn't do a lot to help us.

Making a separate GDI cache might be a smart way to handle this; it would
require memory cache accesses to make sure that a GDI resource is grabbed and
any associated actions (re)done (like making the HBITMAP).  In normal operation,
most entries in the cache would have GDI's, but you'd avoid edge cases like this
attack at the cost of slowing things down when an excessive number of resources
are in use.  Alternatively (and probably much simpler), you could have a
separate limit to the number of GDI's the cache can have in use, and start
flushing items (with GDI's) from the cache when the limit is hit, instead of
just when the cache goes over on size.  The GDI limit can be OS-dependent
(higher on 2000/XP).  I don't think it would need to be exposed in the
preferences UI; perhaps a hidden pref to override the default.  The default
should probably be a % of the possible total, so it'd adjust automatically on
2000/XP.  Just a guess.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: crash
i think this could be done entirely within GFX.  we can keep a list of the
most-recently-used nsImageWin objects, and only those objects would have their
data stored using a HBITMAP.  other nsImageWin objects would use DIBs.  as we
push nsImageWin objects onto the MRU list, we push a less frequently used
nsImageWin off the list.  this wouldn't take much work to implement.  and
imagelib wouldn't need to change at all.
Sounds good to me.  Also, quite unlikely to affect things outside of graphics. 
All we need to do (other than coding this) is come up with reasonable defaults
for different win versions (or based on total GDI resources available), and
perhaps a non-UI pref to tweak.
As far as limits go, it's probably not possible to set a percentage. Windows NT
officially has no resource limit (and therefore there's nothing to take a
percentage of), while in 9x all applications share the same resource area, so if
mozilla kept to a percentage and a resource hog came along, the OS would still
crash.

In windows NTs, I've seen mozilla run at 8000 or so GDI handles with no ill
effects. 10000 seems to crash the system. 8000 would seem to be a good choice
for a hard limit.

In windows 9x, all applications share the same GDI heap, 2MB for 32 bit apps,
and 64k for 16 bit apps. The 2MB heap must store every application's graphical
handles (http://www.apptools.com/rants/resources.php). For windows 9x, it would
probably make more sense to specify an amount of resources that mozilla will
keep free, so that the system can operate properly (If too many handles are
consumed, windows can't display the 'close program' box, and resorts to the 'The
system is dangerously low on resources' box). There are system calls to ask how
much free space is available in the GDI heap. 

Off the top of my head, I'd suggest 300k should be kept free if at all possible
("When Resources get below the 15% range, it is time to think about shutting
down some programs" is quoted all over the web, but I think is originally from
the 98 resource kit.)
Blocks: 204374
Basically only allow 100 (BITMAP_CACHE_SIZE) images to use gdi hbitmaps.
All others must rely on non :: Optimized way.

This patch uses a cache to keep track of objects holding gdi hbitmaps.
The cache is hardcoded at 100; when we have 100 objects in this cache,
the oldest image gets bumped and we convert its hbitmap back to a dib.

Have run some tests and the patch clears up most (if not all) of
the gdi leaks.  Now trying to make sure I am not impacting
performance too much.

Have played around with a cache size of 200.
Have thought about making this a preference.
There might be some portability issues with having a static object with
ctor/dtor.  Given that this code lives in a shared library, you might run into
trouble in some cases.  See:
http://mozilla.org/hacking/portable-cpp.html#dont_use_static_constructors
Hmmm, I assume this is referring to the fact that I can't count on
gImageBitmapCache.mCache[1..100] & mIndexOldest being initialized.
I specifically initialized them in the constructor.  Note: this is also
win-only code (so is limited to gcc on windows & msvc), so am assuming
we won't run into this problem.

However, I can change this (have a check in nsImageWin::nsImageWin
if gImageBitmapCache is nsnull and if so... new an ImageBitmapCache object).
If I do this, should I also fix
http://lxr.mozilla.org/seamonkey/source/gfx/src/windows/nsRenderingContextWin.cpp#2675
(which is the code I stole from and is doing the same)?
Like the other exploit, but with javascript
Attachment #127654 - Attachment mime type: text/html → text/plain
I have tested Mozilla 1.4 with Jim Dunn's patch applied.
It seems to fix both bug 204374 and this bug (tested with the javascript exploit).

Without the patch Mozilla (and other applications) crash, with the patch nothing 
crashes and the gdi handle usage doesn't climb much.

There is one (speed) problem:
It is extremely slow, mozilla hangs for over 20 seconds displaying the 3000
images while other browsers display them after 1 or 2 seconds.
The second time the page is displayed (and the images are already in memory
cache) it is much faster, but still slower than IE.
So this is probably because the initial loading of the 3000 images into memory
cache takes so long.
"3000 images takes 20 seconds". That's not really overly bad. Well, OK, it is,
but considering the speed that most people connect to the Internet, I don't
think it is that bad.

I'm on ADSL, and would be overjoyed if I could load 3000 images in 20 seconds on
any browser...never mind those on 56K dialup.

Maybe a separate bug should be opened for a more complete, total-rewrite-type fix.
Depends - It's probably a bad idea to assume this would never be done on an
intranet. On a LAN, using a decent webserver, I'd be pretty surprised if it took
more than a few seconds to transfer.

Any webserver worth its salt would know that the image files are not executable,
so the different GET data could be ignored. If the webserver isn't serving the
1x1 gif out of its memory cache, there's something seriously wrong with it. We
can rule out disk load on the webserver.

HTTP overhead for 3000 files is about 1.5 MB, or 12 Mb. Even at 100% ethernet
encapsulation overhead, that's still only 24 Mb. 100Mb ethernet would transfer
this in a fifth of a second. 10Mb would take a whopping two seconds. 20 seconds
to transfer this data would need 1.2 Mbps, which is SDSL territory. Academic
and corporate connections are much faster. Webserver overhead can be ruled out
by using file URLs, allowing the bug to be exploited even over a modem (a little
guesswork is required, but "file:///C:/WINDOWS/BACKGRND.GIF" or
"file:///C:/WINDOWS/FOREST.BMP" would get 90% of cases). 
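[Editorial sketch] The transfer-time arithmetic above can be checked mechanically. The figures are the comment's assumptions (roughly 500 bytes of HTTP overhead per file, doubled to 1000 bytes for 100% encapsulation overhead), not measurements:

```cpp
#include <cassert>
#include <cmath>

// Time to move `files` files of `bytesPerFile` bytes over a link of
// `linkMbps` megabits per second (1 Mb = 1e6 bits, as in the comment).
double transferSeconds(int files, double bytesPerFile, double linkMbps) {
    double megabits = files * bytesPerFile * 8.0 / 1e6;
    return megabits / linkMbps;
}
```

With 3000 files at 1000 bytes each (24 Mb total), 100 Mbps gives 0.24 seconds, 10 Mbps gives 2.4 seconds, and it takes a 1.2 Mbps link to stretch the transfer to 20 seconds, matching the figures above.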

I'd suggest the following bugs be filed:

- file:// and chrome:// URLs should be keyed in the cache ignoring GET data
- some form of 1x1 gif GDI factory would improve GDI cache performance
(suggested in comment 6)
Summary: ImageLib uses GDI handles inappropriately; maliciously-crafted webpage can crash mozilla, windows (exploit available) → ImageLib uses GDI handles inappropriately; malicious page can crash mozilla, windows
Attachment #127547 - Flags: superreview?(darin)
Attachment #127547 - Flags: review?(kmcclusk)
We should try to get this in for 1.4.1 after it's reviewed and landed on the trunk. 
Flags: blocking1.4.x+
So, are BMPs that much faster than DIBs?
This has nothing to do with the difference between BMP and DIB image formats.
The difference is not in the file format, but in the location of the image.

Windows has always had a 'client/server' rendering model - programs that want to
render to the screen send brushes, lines, bitmaps, menus etc. to the GDI, which
gives them back a handle. Rather than saying 'draw this bitmap', the program
says 'draw the bitmap that I've given you earlier that has this handle'. X11
works in much the same way.

This separation means that the program doesn't need to know what it's drawing to
- most programs run with no modification on Terminal Server, despite the fact
that the part of the GDI that communicates with the program, and the part of the
GDI that draws to the screen are on different computers. IIRC, advanced graphics
drivers can store uploaded resources on the card rather than memory, speeding up
rendering.

On outdated OSes such as windows 3.x and windows 9x, there is a fixed-size area,
shared between all the processes running on the machine for them to upload
images, widgets, etc. into. Even on NTs, there is a limit of ~10000 handles that
each process can leave uploaded on the GDI system. 

What mozilla's memory cache strives to do is keep a mapping of the bitmaps to
URLs, so that displaying ten copies of an image in the cache uses 1 handle ten
times, rather than ten handles one time. Unfortunately, URLs are not a perfect
way of keying - images that are identical, but have different URLs get different
handles.
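[Editorial sketch] The URL-keyed handle sharing described above, and its weakness, can be illustrated with a toy cache. All names here are hypothetical; an `int` stands in for a GDI handle:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Toy illustration of keying handles by URL: ten uses of one URL share a
// single handle, but identical image bytes served under different URLs
// each get their own handle.
struct HandleCache {
    std::unordered_map<std::string, int> mByUrl;
    int mNextHandle = 1;

    int HandleFor(const std::string& url) {
        auto it = mByUrl.find(url);
        if (it != mByUrl.end())
            return it->second;               // same URL: reuse the handle
        return mByUrl[url] = mNextHandle++;  // new URL: new handle, even if
                                             // the pixels are identical
    }
};
```

This is why the testcase works: 10000 URLs that differ only in GET data produce 10000 handles for what is really one 1x1 image.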

What the (pathological) testcase for this bug does is create tens of thousands
of images that could (potentially) be rendered using one handle with no
performance impact. Mozilla currently uses tens of thousands of handles, and
overloads the GDI system, especially on 'toy operating systems'.

The fix isn't as simple as I'm suggesting - seeing if images are the same is
very computationally intensive, and hard to do (if it was as easy as it sounds,
the GDI would do it for us :). 

patch 127547 minimises the impact of this problem by establishing an upper limit
on the number of handles imglib is allowed to use - once it reaches that upper
limit, imglib will remove the least-used bitmaps from the GDI, freeing up their
handles. The performance hit on pathological pages like this comes from
constantly uploading and freeing bitmaps.
answer to #19
After some more tests I came to the conclusion that caching only 100 GDI handles
has NO significant impact on speed.
I tested with the javascript example and unpatched Moz 1.4 vs. Moz 1.4 with
patch 127547.
With the max counter value set to 1000, displaying the page takes a little bit
more than 4 seconds with BOTH versions of Mozilla - patched and unpatched: their
speed is exactly the same.
(Note that on unpatched Mozilla, GDI resources are down to 13% after this)

Then I changed the value back to 3000. Unpatched Mozilla crashes, so only the
results for patched Moz.: 42 seconds (the "over 20" seconds I reported above
were a little bit optimistic :-) ).

IMHO these results show that the slow speed has nothing to do with not caching
GDI handles.
Agreed.  There are other N**2 issues in mozilla that may come into play.  perf
test would be interesting.

I agree general img matching is a pain.  Doing it for 1x1's is easy.

You could even take some sort of hash of size, flags, and some of the data
words (from the middle probably), put it in a hashtable, and avoid fully
checking things that aren't likely matches.  That might make the overhead for
arbitrary images reasonable.  Still, not a huge issue.

Looks like this patch is a good one...
Comment on attachment 127547 [details] [diff] [review]
patch to fix problem

>+PRBool ImageBitmapCache::Add(nsImageWin *entry)
>+{
>+  int     i;
>+  HBITMAP result = NULL;
>+
>+  // See if it's already in the cache
>+  for (i = 0; ((i < BITMAP_CACHE_SIZE) && (mCache[i].mImage != nsnull)); i++) {
>+    if (entry == mCache[i].mImage) {
>+      return PR_TRUE;                 // Image's is already optimized
>+    }
>+  }

I'm not happy about iterating through the whole array every time we want to add
a new image to the cache. Maybe we could add an mIsCached flag to nsImageWin?
If you are worried about bloat, we could always combine the existing boolean
flags into a bit field.
blocking 1.4.x, so probably should block 1.5b
Flags: blocking1.5b?
Flags: blocking1.5b? → blocking1.5b+
This is an emergency fix, till an approach similar to 127547 can be
made to work for the problem reported by bug 184933
Comment on attachment 129188 [details] [diff] [review]
Temporary fix: remove all Windows GDI optimizations

>+  // With the increase memory cache size, we hold to many images.

'too'?
Attachment #129188 - Flags: superreview?(tor)
Attachment #129188 - Flags: review?(smontagu)
Attachment #129188 - Flags: approval1.4.x?
Possibly 'hurts system performance' would be better phrased 'hurts system
stability' - performance stays much the same until the resources run out, at
which point the system becomes unusable.
Comment on attachment 129188 [details] [diff] [review]
Temporary fix: remove all Windows GDI optimizations

r=smontagu.
I agree that this should go in as a temp fix.

From looking at the code (is this the first time I've looked, or is this segment
just easier to read?), it looks like "oldbits" (i.e. HBITMAP oldbits) may be one
of the GDI resources that isn't being deleted. However, I didn't look beyond the
function that the patch applies to, so maybe it's being deleted later on.
Comment on attachment 129188 [details] [diff] [review]
Temporary fix: remove all Windows GDI optimizations

Do we have any idea of how much this will hurt performance?

sr=tor
Attachment #129188 - Flags: superreview?(tor) → superreview+
Attachment #129188 - Flags: review?(smontagu) → review+
Comment on attachment 129188 [details] [diff] [review]
Temporary fix: remove all Windows GDI optimizations

Let's get this landed on the trunk and try to get some testing across the
various windows platforms to see if anyone can shake out any problems.
Attachment #129188 - Flags: approval1.5b+
The patch was apparently checked in yesterday (8/7 12:46 Pacific Time) by
smontagu:

"As a temporary stopgap fix for issues with GDI handles on Windows, don't
optimize images at all. Patch by jdunn@maine.rr.com, r=smontagu, sr=tor, a=asa."

I made a fresh CVS build and was able to open about 20 windows full of graphics
(which overwhelmed all recent Mozilla builds) even with
browser.cache.memory.capacity set to -1 (meaning "auto"). Today's (8/8) morning
builds will include this patch. We need to do some testing before 1.5b is out.
Yes, I did check in attachment 129188 [details] [diff] [review]. It seems I forgot to reference any bug
number in the check-in comment, but at least the comment in the patch says 

 // see bug 205893 & bug 204374
Tor,
I beat on both this and the other patch (that used 100 limit cache).
I noticed some performance hits, but nothing major.  Depending on
the type of performance test, there was either small or almost no
degradation.  My concern now that the fix is in is that, because
it solves such a HUGE problem, that performance testing and patch
tweaking might be put on the back burner by mozilla as a whole.
(btw I am awaiting a PC which is on order before I can do any more
 analysis of this).
Did you notice the rather large memory footprint increases?  This patch should
not have gone in on the trunk.  Perhaps on the branch as a last minute attempt
to stop windows95/98/me from dying as quickly.
Comment on attachment 127547 [details] [diff] [review]
patch to fix problem

this needs work.  Trying to fix this by using a fixed number of GDI objects is
pretty suboptimal.  We need to come up with a clean way of solving this that
doesn't sacrifice speed or memory footprint.  I've been looking in to this a
bit and have a few ideas that I'll post once I'm sure that they'll work.
Attachment #127547 - Flags: review?(kmcclusk) → review-
Re: comment #34 & #35 (pavlov's)

I disagree. New Mozilla builds (and releases) were for too long practically
useless on Win98 and WinME. I believe this "fix" should stay in the tree at
least until 1.5beta is released. This way there will be at least one recent
release which works with Win98/me. It should probably go also into 1.4 tree
before mozilla1.4.1 is released. Only after that, if a better solution is found,
this hack should be replaced in the tree by a real fix.
Comment on attachment 129188 [details] [diff] [review]
Temporary fix: remove all Windows GDI optimizations

a=mkaply for temp fix on 1.4.1
Attachment #129188 - Flags: approval1.4.x? → approval1.4.x+
down to the wire on shipping 1.5beta.  want to do it this week.  what
alternatives do we have here?  can we refine the patch to not do the
optimizations on win9.x/ME where most of the problems are exposed?   who could
do that work...
chofmann,
I am confused.  This patch is in the trunk and hence will make it for 1.5beta.
Where it is NOT checked in is the 1.4 branch (to make 1.4.1).

Actually while this bug deals mainly with win98, all windows platforms
can be affected by this if mozilla is run for long periods of time and/or
the user has lots of tabs.  If you make this win98, heavy mozilla users
will complain that we took away an important fix (IMHO).
I frequently see this problem manifest in Windows XP. Prior to the
temporary fix I could not use Firebird or Mozilla for long periods of
time without getting severe redrawing errors and sometimes even "out
of memory" type dialog boxes. I would not suggest limiting this to one
OS.
chofmann: this problem does occur on NT systems, especially those with lots of
ram (e.g. my w2k system which has 512mb of ram). Also when people use terminal
services and connect their 9x (e.g. my w98se laptop) to an nt system (as I do)
then this problem will happen.

We should *not* restrict this hack to 9x.
Keywords: fixed1.4.1
ok,  thanks for the updates.  
here is the plan for 1.5b and this bug:
-pav's not ready with an alternate way to do this.  maybe for final.
-we will use patch in its current form to ship 1.5beta
-jdunn says the patch has landed so 1.5b should be ready to go w/respect to this
problem.
Flags: blocking1.5b+
ok, also sounds like from http://bugzilla.mozilla.org/show_bug.cgi?id=205893#c39
we still need to get the patch landed on the branch for a 1.4.1 release.  jdunn,
can you do that, or update on status?
It landed on the 1.4 branch on 08/18/2003 15:38 (2 hours after comment 39),
thanks to timeless. 

"Bug 205893 ImageLib uses GDI handles inappropriately; malicious page can crash
mozilla, windows patch by jdunn@maine.rr.com r=smontagu sr=tor a=mkaply"
Is this a duplicate of bug 204374
("GDI Resources are used till the UI/website displays faulty")?
It seems awfully closely related.
This is not a duplicate of bug 204374 - 

bug 204374 is about GDI problems in general, mostly GDI leaks where GDI
resources are allocated, but not later freed. Bug 205893 is a specific problem,
where mozilla can be asked to allocate tens of thousands of GDIs by a malicious
web page, and it does. The resources will be freed later, as they are displaced
by other images, but in the interim, nothing on the system is able to display
properly.

bug 204374 is in many ways a meta-bug, concerning general resource problems,
while bug 205893 concerns a specific issue, not addressed in bug 204374. (bug
204374, in fact has a dependency on bug 205893.)

Though limiting the maximum number of GDIs that can be allocated, or disabling
all Windows GDI optimizations does work around both bugs, saying that they are
duplicates because of this would be like saying that two PSM bugs are duplicates
because removing PSM works around both of them.
Whiteboard: fixed 1.5b
*** Bug 211111 has been marked as a duplicate of this bug. ***
SIZE DOES MATTER

A page with some large images still lets the cache size go far beyond what is
specified in browser.cache.memory.capacity (with mozilla 1.5rc1)

At least the nvidia driver on w2k (detonator 44.03) seems to keep those images
in video memory, so exhaustion arrives quickly.
Size only matters on 9x. For OSes based on NT, the only thing that matters is
the number of images. So limiting GDIs based on an arbitrary number works fine
for NT, but not for 9x (which is why the original patch did not help for 9x).

It is my understanding that mozillas 1.4.1, 1.5b onwards have speculative GDI
acceleration entirely disabled; they only upload to the GDI images that are
currently present in the visible window. This means that the size of the
committed memory cache is not representative of the GDI allocation, as the vast
majority of images in the memory cache are not 'uploaded' to the GDI.

This being the case, the only way to overload the GDI is to have sufficient
images of sufficient size present on a single page to fill up the entire GDI
memory. There is no way that any browser (or application of any kind) could
display such a page on such an operating system. Bug 51028 details one such
page, bug 122581 details another.

If the memory cache allocated is larger than the memory cache maximum, you may
have discovered a new GDI leak. Please download attachment 121792 [details] (9x) or
attachment 123324 [details] (NT) and have a look for suspicious identical allocations, or
suspicious over-allocations (something like 1000 or so allocated regions or
images is suspicious, as are any allocations of the same object that keep
happening when you do something in mozilla). At any rate, please don't scream
'GDI LEAK!' until you've confirmed it with an actual GDI tool (and no, 'system
resource meter' does not count - it's about as useful as a light that says 'car
broken').

For the record, Video memory and GDI memory are two different things. Video
memory is typically used for the framebuffer (though (I think, I may be way
wrong here) there's nothing to stop the graphics driver using free RAM on the
video card to cache uploaded GDI objects). GDI memory is a logical area used by
the operating system to store drawing objects. In 9x, this area is fixed, and in
all operating systems, the size of this area is independent of the amount of
memory on the graphics card.
Comment #49:

This is not true. As you can see in my comment, I am using W2K. You are right in
saying that the memory cache can be overcommitted by displaying a single page
with large images on it. As soon as the memory cache size (as reported by
about:cache) approaches 32MB (which is how much graphics memory I have), all the
known repainting errors start cropping up.

And there definitely is a way to display those pages, as IE has no problem at all
doing so. I can throw a dozen times the image sizes at IE and it won't hiccup.
I've created a program to support my statement that size matters. This is for
2000/XP users only.

Start up the program and enter the path to a directory with many (large) jpegs.
 The first image from the directory will appear in the window. There are two
noteworthy buttons in the toolbar. Use the plus to add another image to the
bottom. Go try this a few times. Note the memory usage display in the status
bar. Now press the plus button repeatedly until memory usage surpasses your
amount of graphics RAM. Verify that you can scroll all the way up and down and
no distortion happens. Now press the m button. Try scrolling again and note that
it does not work as it should (the bottom-most images just disappeared). Now
leave the window open and play with some other applications (esp. Mozilla). Note
that they can't properly repaint themselves. Switch mozilla mode off, and
everything goes back to normal.

This is what happens on my system and I assert that this is normal behaviour
with the detonator driver at least.

If you don't experience the described behaviour, please look at the task manager
while adding images (with mozilla mode turned on). If the driver places the
HBITMAPs in video ram, the program memory usage should be close to the value
shown in the task bar (this is what happens for me). Otherwise I would expect
the task manager value to be at least twice as high.

Source code is available here (VC6):
http://stud3.tuwien.ac.at/~e9725446/mozimgSrc.zip

(includes libjpeg, build it first with nmake /f makefile.vc nodebug=1)

Implementation note: mozilla mode keeps all the bitmaps as HBITMAP objects and
BitBlts them while normal mode uses SetDIBitsToDevice.
Attachment #131771 - Attachment description: Test program that mimcs mozilla's behaviour when showing many images → Test program that mimics mozilla's behaviour when showing many images
Attachment #131771 - Attachment is obsolete: true
Some jpegs did not work as expected and crashed the program. This should be
fixed now.
Attached patch Patch to use DIB sections (obsolete) — Splinter Review
This little patch fixes all my problems. It uses DIB sections instead of DDBs
to keep images in memory. DIB sections are just as fast as DDBs if the pixel
format matches and reasonably fast in any case. Maybe 24 bit images should be
promoted to 32 bit if the screen mode is 32 bit.
*** Bug 220040 has been marked as a duplicate of this bug. ***
Stefan - please use the Edit link on your latest patch here and request reviews
for it.  (See the other patches as to whom to ask for which review type.)
Attachment #131947 - Flags: review?(smontagu)
Blocks: 224532
Blocks: 64401
Attachment #131947 - Flags: review?(smontagu) → review?(jdunn)
I am currently on vacation and will get to this sometime in Jan... 
however, I remember trying this and it seemed to be ok.
But I don't have r= authority.
Pav & Darin (or someone who really knows DIB .vs. DDB junk) should probably review.
Attachment #131947 - Flags: review?(jdunn) → review?(smontagu)
Attachment #131947 - Flags: review?(smontagu) → review?(darin)
Attachment #131947 - Flags: review?(darin) → review?(ere)
Comment on attachment 131947 [details] [diff] [review]
Patch to use DIB sections

You said "This little patch fixes all my problems. It uses DIB sections instead
of DDBs to keep images in memory.  DIB sections are just as fast as DDBs if the
pixel format matches and reasonably fast in any case."	But it looks like the
patch replaces a DIB, not a DDB, with a DIBSection.  What are the performance
implications in that case?

I see that Mozilla uses DIBSections elsewhere, so I guess they're safe to use
-- they won't crash for users of some particular video card, etc :)

nsImageWin.cpp uses DIBitmap in two places.  Should both uses be replaced by
DIBSections, or just that one?
No, it's not a DIB being replaced. CreateDIBitmap actually creates a DDB.

I didn't notice any difference performance-wise, but given a couple of days I
can investigate a bit.

I don't know how CreateDDB (the other occurrence of CreateDIBitmap) is used. My
experience shows that my patch is sufficient. However, it doesn't look like it
would hurt to change this as well. Comments on this, please?
Flags: blocking1.7b?
Comment on attachment 131947 [details] [diff] [review]
Patch to use DIB sections

If the tabs are removed, r=ere
Attachment #131947 - Flags: review?(ere) → review+
Attachment #131947 - Flags: superreview?(tor)
Comment on attachment 131947 [details] [diff] [review]
Patch to use DIB sections

As ere mentioned, the indentation needs work.

I'm not sure how cut&paste works on win32, but the manual for
CreateDIBSection indicates that DIB sections cannot be pasted
to another application.  Will this cause us any problems?

We already store the size of the image in mSizeImage.
Indentation fixed, mSizeImage used

The clipboard issue does not apply. This would only affect the Windows Metafile
format which is not used in Mozilla.

It seems that CreateDDB (the other place where CreateDIBitmap is used) is never
actually called, so it's best left untouched...
Attachment #131947 - Attachment is obsolete: true
Attachment #142279 - Flags: superreview?(tor)
I'm not a reviewer, so ignore me if you want....

However, I notice in the patch that you do a test on mHBitmap. Is it possible to
merge in the next line of code that tests against mHBitmap for setting the value
of mIsOptimized (which is just a boolean anyway)? This would reduce the need to
test mHBitmap twice....
It is possible but every decent compiler will do it for us.
Attachment #142279 - Flags: review?(ere)
(In reply to comment #62)
> It seems that CreateDDB (the other place where CreateDIBitmap is used) is never
> actually called, so it's best left untouched...

If you ask me, functions that are never called are best removed...
(unless they are on an interface and meant to be used by extensions or something)
I'm not sure. The function was not called during browsing, but maybe it's used
for the toolbars or such things.
In regards to the if mHBitmap, it should probably be at least changed 
to something of the form:

      mHBitmap = ::CreateDIBSection(TheHDC, (LPBITMAPINFO)mBHead,
        256 == mNumPaletteColors ? DIB_PAL_COLORS : DIB_RGB_COLORS,
        &mBits, NULL, 0);

      if (mHBitmap) {
        memcpy(mBits, mImageBits, mSizeImage);
        mIsOptimized = PR_TRUE;
      } else {
        mIsOptimized = PR_FALSE;
      }

Why trust the compiler to be smart when it is better if we are.

As for the CreateDDB, I remember having a discussion with Pav/Darin about
this and I "think" that it can be removed.  Maybe Darin will speak up here.
However, if it is, then we probably should coordinate this with mkaply so 
that we remove it from the OS2 version as well.
Comment on attachment 142279 [details] [diff] [review]
Patch to use DIB sections, updated

Yep, what Jim said would be smarter from us.
Attachment #142279 - Flags: superreview?(tor)
Attachment #142279 - Flags: review?(ere)
Attachment #142279 - Flags: review-
Ok, let's do it this way.
Attachment #142279 - Attachment is obsolete: true
Attachment #142439 - Flags: review?(ere)
(In reply to comment #66)
> I'm not sure. The function was not called during browsing, but maybe it's used
> for the toolbars or such things.

CreateDDB has no caller:
http://lxr.mozilla.org/mozilla/search?string=CreateDDB
Comment on attachment 142439 [details] [diff] [review]
DIB sections patch, updated again

sr=ere
Attachment #142439 - Flags: review?(ere) → superreview+
Attachment #142439 - Flags: superreview+ → review+
Comment on attachment 142439 [details] [diff] [review]
DIB sections patch, updated again

r=ere, not sr. Sorry.
Attachment #131947 - Flags: superreview?(tor)
Attachment #127547 - Flags: superreview?(darin)
Comment on attachment 142439 [details] [diff] [review]
DIB sections patch, updated again

It seems that the DIB created by CreateDIBSection will be placed in
main memory (unless windows starts messing with the VM mappings and
requires that drivers support BGR->native blits).  Thus this change
will cause nsImageWin::Optimize to create a second full-depth BGR
copy of the image for what very little benefit over our existing
unoptimized image path.  Were there problems encountered when we
just turned off optimization?
Attachment #142439 - Flags: superreview-
tor... yes, there were problems with backgrounds; can't remember
the bug number, but it was why we turned optimization back on.

I did have a previous patch to create a "cache" of hbitmap's
(http://bugzilla.mozilla.org/attachment.cgi?id=127547&action=view)
that I was experimenting with last summer.  It seemed to work ok
but I didn't really get a chance to make sure that the "size" was
optimal and junk.  However, I am not really pushing for this patch
(just wanted to let you know that I had worked on it with Darin watching).

I think a better idea (if this DIB thing doesn't get put in) would be
to have ImgLoader or something indicate whether we want to optimize
the image or not.  "Someone" should know which images should be fast
and which should be slow.
Jim Dunn: I think the perf bug with backgrounds you referred to was bug 216430.
Mook made a build with this patch several days ago:
http://forums.mozillazine.org/viewtopic.php?p=418370
(In reply to comment #76)
> Mook made a build with this patch several days ago:
> http://forums.mozillazine.org/viewtopic.php?p=418370

Works great!   I can once again view Fark PhotoShop threads.
(In reply to comment #76)
> Mook made a build with this patch several days ago:
> http://forums.mozillazine.org/viewtopic.php?p=418370

Please see my comment at
http://bugzilla.mozilla.org/show_bug.cgi?id=204374#c355
which is a bug that depends on this one.

Looks to me that there is something wrong with at least one of: 
  the build, the patch, or my test.
(In reply to comment #73)
> It seems that the DIB created by CreateDIBSection will be placed in
> main memory (unless windows starts messing with the VM mappings and
> requires that drivers support BGR->native blits).

Yes, likely.

> Thus this change
> will cause nsImageWin::Optimize to create a second full-depth BGR
> copy of the image for what very little benefit over our existing
> unoptimized image path.

There is a substantial speed benefit on most cards. And if you look a few lines
farther down in the code, you will notice that the extra memory is immediately
freed, so we trade a slightly higher set up time for (possibly much) higher
drawing speed.
Attachment #142439 - Flags: superreview- → superreview?(tor)
Comment on attachment 142439 [details] [diff] [review]
DIB sections patch, updated again

Worth a shot to see what problems this way might present, I suppose.
Attachment #142439 - Flags: superreview?(tor) → superreview+
Comment on attachment 142439 [details] [diff] [review]
DIB sections patch, updated again

Might be a good idea to finally make 1.7 usable again with large image threads.
At least the patch doesn't worsen the situation; that much I can say with
confidence.
Attachment #142439 - Flags: approval1.7b?
Comment on attachment 142439 [details] [diff] [review]
DIB sections patch, updated again

You probably want to use the name |bits| rather than |mBits| since the m prefix
is usually for member variables.
Comment on attachment 142439 [details] [diff] [review]
DIB sections patch, updated again

update with dbaron's comments and a=chofmann for 1.7b
Attachment #142439 - Flags: approval1.7b? → approval1.7b+
Checked in.
Status: NEW → RESOLVED
Closed: 20 years ago
Resolution: --- → FIXED
Flags: blocking1.7b?
*** Bug 204374 has been marked as a duplicate of this bug. ***
*** Bug 237892 has been marked as a duplicate of this bug. ***
Possible regression resulting in no image display using Javascript clip: rect
http://bugzilla.mozilla.org/show_bug.cgi?id=237447
refers to an image rendering problem that showed up in all Thunderbird builds
just after this patch was checked in.
I am using win98 with an ATI radeon 64m ddr display.
Sorry for the spam if this does not apply, but I have no current Moz trunk build
to check it out.
No regression was caused by this fix;
http://bugzilla.mozilla.org/show_bug.cgi?id=237426
fixed the clip: rect problem.
Could the DIB sections patch be the cause of some performance regressions?
See bug 244555 and bug 244582.
In my debug build, after I backed out the DIB sections patch, the performance of
Mozilla with the testcases in those bugs seems quite improved.
*** Bug 217174 has been marked as a duplicate of this bug. ***
Blocks: 244555
Blocks: 244582
backing this patch out improves performance (win2k/nvidia) as per bug 277762 comment 30
Bug 204374 (which was marked as a duplicate of this bug) is back on WinXP
because of a performance fix in bug 284716 :(

Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8b2) Gecko/20050305
Firefox/1.0+.
I filed bug 284978 for the regression instead of reopening this bug.
Blocks: majorbugs
No longer blocks: majorbugs
FYI: I recently upgraded to Mozilla 1.7.11 from 1.7.7 (Win2K), and suddenly I'm
getting the graphics errors again, on the second day of using it. I thought
"WTF?" and went to Bonsai to verify whether my patch had been taken out. And indeed,
it's no longer there. I won't evangelize on this any further, just wanted to let
you know that the problem my patch fixed back then has not mysteriously
disappeared in the meantime. For the sake of fairness, I have to say that it's
not as bad as it was at the time I did the patch (2 years ago) but that might be
attributed to my graphics card having more video RAM these days.
(In reply to comment #94)
> And indeed,
> it's no longer there. I won't evangelize on this any further, just wanted to let
> you know that the problem my patch fixed back then has not mysteriously
> disappeared in the meantime.

Sadly some people think that a tiny performance improvement under some circumstances is more important than having a usable browser under other circumstances. The priority should be to make the browser work, without using up all the GDI resources and crashing all manner of other programs, so your patch should be put back unless another fix is made.

Blocks: 334359