Closed Bug 98971 Opened 19 years ago Closed 13 years ago

Prettier image resizing & scaling (Bilinear, Bicubic, anything better than Nearest Neighbor)


(Core Graveyard :: Image: Painting, enhancement)

Not set


(Not tracked)



(Reporter: shelby, Assigned: paper)



(Whiteboard: [parity-opera] [parity-IE])


(8 files, 2 obsolete files)

Here is a chance for Mozilla to get a leg up on IE.

There are many valid cases where the IMG width and height attributes do not 
equal the IMG src's actual width and height.  Thus image scaling occurs in the 
browsers.  Unfortunately no browser I know of uses bilinear scaling to achieve 
a decent results.  In the case of line art, bilinear scaling can be the 
difference between seeing something instead of nothing at all!  In other words, 
we could characterize this as a bug.  Imagine an image with vertical 1 pixel 
wide lines that is scaled horizontally by 50%-- the lines on even numbered 
columns will disappear with current browser scaling algorithms.  Given the fast 
CPUs in current day computers, and an algorithm I will donate to this project, 
there is no excuse not to improve the browser.

I will attach an example which illustrates one of many valid uses for image 
scaling.  Many people might argue that if you want good image quality then 
designer should scale the image used in the SRC attribute.  However, there are 
cases where this isn't possible.  In the attached example, the IMG tag uses 
width=50% and height=50% attributes to create a web page that scales with the 
browser window.  Imagine it is also possible to use Javascript to create web 
pages which scale in the same proportion for height and width to fit the width 
of the window.  In fact, with absolute positioning introduced by CSS, this 
becomes a necessity for web pages which contain 100% absolutely positioned 
content.  This is especially important to us, the developers of Cool Page, 
because our product (and other pixel-accurate drag+drop tools such as 
Dreamweaver and Fusion) can use 100% absolute positioning.  We intend, in the 
next release, to be the first product to create absolutely positioned web pages 
which scale to the width of the browser window (i.e. to overcome one of the 
major weaknesses of absolutely positioned content versus flowed content).  You 
can do a links search at AltaVista and find 15,000+ web pages made with our 
product.

However, I hope my generic example proves that this is not simply a feature 
request to support our product.  Also you will often find on the web that naive 
designers will use the same SRC for the thumbnail.  Although this is poor 
design, it is quite common in my experience, and thus these thumbnails look 
horrible in current browsers.

Also the strongest argument for this improvement would probably be Mozilla's 
own use of its ImageLib.  If you are creating a GUI with the layout engine, then 
bilinear scaling is a no brainer IMO.

I hope I made a strong case for fixing this borderline bug.
The ResizeSmaller.cpp code illustrates a fast bilinear downsample using 
precomputed row and column weights.

I am not sure of the legal status of this algorithm.  I am willing to 
donate the algorithm and code to the Mozilla project under the Mozilla license, 
as long as we retain our 100% right (All Rights Reserved) to the code.  In 
other words, everyone else is bound to the Mozilla license with regard to this 
code, but we are not.

The algorithm may already be in the public domain, since it is a fairly obvious 
way to do it.  I am not sure.

I never optimized the upsample case.  I use an image doubler and then downsample 
to fractional scale.  The bug at the transparency edge is probably in the image 

As implied, this algorithm handles transparency!

The algorithm works in 24 bit mode.  I did not investigate what requirements 
this would place on integration with ImageLib.

Good luck!
Ever confirmed: true
I forgot to mention this algorithm is very fast!

It supports real-time scaling in Cool Page on a decent Pentium!
As long as this algorithm can properly handle the scaling of images with
transparency, it sounds like a great idea to me.

That is to say, if an image has a transparent pixel next to a black pixel, the
half-sized version should be a 50% transparent black pixel.
The algorithm detects the transparency color, so it can either substitute the 
background color in the bilinear calculation, or output an alpha channel (at 
least 2-bit) for the compositing stage.  I am not versed in the way images are 
composited in Mozilla, so I am not sure which method is more appropriate in 
this case.
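As a rough illustration of the alpha-channel option (this is a hypothetical sketch, not code from the attachment): weighting each color channel by its pixel's alpha before averaging gives the "50% transparent black" result asked for above, instead of bleeding in the color stored behind a fully transparent neighbor.

```cpp
#include <cassert>
#include <cstdint>

struct Rgba { uint8_t r, g, b, a; };

// Average two RGBA pixels with alpha weighting.  A fully transparent
// pixel contributes nothing to the color, only to the coverage.
static Rgba AverageAlphaWeighted(Rgba p, Rgba q) {
    int a = (p.a + q.a + 1) / 2;          // averaged coverage
    if (p.a + q.a == 0) return {0, 0, 0, 0};
    // Weight each channel by its pixel's alpha, then renormalize.
    auto mix = [&](int pc, int qc) {
        return static_cast<uint8_t>((pc * p.a + qc * q.a) / (p.a + q.a));
    };
    return {mix(p.r, q.r), mix(p.g, q.g), mix(p.b, q.b),
            static_cast<uint8_t>(a)};
}
```

With a transparent white pixel next to an opaque black one, this yields black at roughly 50% alpha, which is the behavior the earlier comment describes as correct.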

If any one needs me to do some tweaks to the algorithm, please give me a 
succinct description of inputs and outputs.  I do not have time to become 
versed in the ImageLib, and/or do testing.

The algorithm is well commented, so it might be faster for someone versed in 
ImageLib to just carry the ball from here.
Blocks: pagezoom
Whiteboard: has code to be incorporated!
No longer blocks: pagezoom
*** Bug 105932 has been marked as a duplicate of this bug. ***
To anybody who might accept this bug:

While working on this bug, you might want to test your code against the
test case I filed for bug 105932.
Target Milestone: --- → Future
i would say this bug and bug 75077 are closely related, so i am marking that as
a dep to this. if anyone disagrees, feel free to remove.
Depends on: 75077
No longer depends on: 75077
Blocks: 75077
I'll vote for this one. If this also affects tab icons, I'll vote for it twice. ;)

See Bug 152506 for an example of tab icon scaling.
IIRC I saw an article on a fast image scaling algorithm in Dr. Dobb's recently. 
Just thought I'd toss that in. :)
We already have an algorithm.  We even have an implementation.  It's attached
to this bug.

What needs to happen is integration of this implementation with the existing
image library.
Keywords: helpwanted
Voting for this enhancement for reasons of HTML/CSS evangelism: it would let 
authors make web pages whose content is independent of screen resolution.  We 
could use relative CSS units to size a picture (e.g. <img ...
style="width:20em;height:10em;" />) and see nice results.
See also bug 73322 about auto-scaling images.
See also bug 4821 about full zooming.

Although these bugs don't depend on this one, this bug would be handy if those
were implemented.
FWIW I hacked-up antialiased down-scaling in mozilla about
three years ago, but only in the X-direction, and only with
a 1-bit transparency mask (I don't think moz supported anything
with full-alpha at the time).  I had to give up on a full
(X,Y) solution because of the funky way that mozilla accumulated
scanlines; the mozilla-guts side of things was getting nontrivial.

Just a warning that (although the codebase may have changed
to make this much easier), having an algorithm and an implementation
is the easiest part.  (Also not to rain on anyone's parade but
pre-cached fragment weights have been used in graphics programs
since about 1996, probably a lot earlier.  On the code-rights
side, IANAL but allowing mozilla to use your code doesn't take
any rights away from *you*, the original code is still your code.)

*** Bug 179564 has been marked as a duplicate of this bug. ***
Has anyone looked into licensing the appropriate bits of ImageMagick perhaps?

Also, since I reported a duplicate bug, here's my samples page:
*** Bug 131473 has been marked as a duplicate of this bug. ***
Summary: Bilinear scaling need for line art and scalable web pages → [RFE]Bilinear scaling need for line art and scalable web pages
I thought "[RFE]" in summary was deprecated in favor of
Severity: enhancement.
Summary: [RFE]Bilinear scaling need for line art and scalable web pages → Bilinear scaling need for line art and scalable web pages
hm, on most platforms, mozilla just uses the operating system's functions to
scale the image
This would be even more useful now with the automatic image resizing, IMO.  It
appears that the implementation wouldn't be too hard.
Adam, the bug is helpwanted.  Was your comment an offer to help?  ;)
biesi: I know you have quite a bit of experience with libpr0n.  I am not asking
you to implement this, but I was wondering if you or stuart could take a look at
what needs to be done and make a list of it for anyone who wants to implement it.
I would think that this would need to be done in each platform's
gfx/src/*/nsImage*.cpp, or maybe in gfx/src/imgScaler.cpp (in which case the
platform nsImage* classes would need to be modified to use imgScaler; most just
use the platform's drawing routines for this).

note that this is Image:GFX, not core imagelib.

Also, I would think that this costs quite a bit of performance... but feel free
to try to implement this.
Component: ImageLib → Image: GFX
oh right, the function that would need to be changed is, I think, nsImage*::Draw
(both, unless one calls the other anyway) and DrawToImage.
Please note that IIRC GTKfe (and X11fe I presume) do their scaling on the
server-side now, so implementing bilinear scaling for these front-ends will
require a bit of rearchitecturing (I don't know if things have changed too
much by now for the old old scaling code to be a useful reference).
Also note that said rearchitecturing will lead Mozilla to take up a lot more RAM
and to be a _lot_ slower at scaling (which is why it was moved server-side to
start with).
Being slower or faster shouldn't be a problem -- the fact that it should be
configurable or not fixes that issue.  People who want the better looking images
will want this feature and those who don't care may turn it off or even turn off
images altogether (as I often browse).

That said, I have no ability that I know of to help on this bug -- I would
simply love to see it done.  I'm perfectly willing to go hunt down routines and
source authors for licensing, etc. though.  Like I said, ImageMagick does some
fairly good scaling and the GIMP does an even better job (both OSS; licenses may
need to be arranged for code segments).  

I currently use Intel's performance primitives DLLs for all my image scaling in
Windows C++ software; I understand that these libraries are cross-platform
(exist on Linux, *BSD, Windows, maybe more?) but may not be the route the
project wants to go.  Again, perhaps some form of licensing arrangement could be
made with such a company as a 'sponsor' of this feature?
BTW, bilinear scaling for an image might be applied almost instantly (in a few
milliseconds) via the OpenGL interface.  Almost every contemporary computer has
an OpenGL-enabled video card; practically every contemporary OS has
corresponding APIs (moreover, MacOS X Jaguar uses OpenGL as a core for its
drawing functions).

Maybe OpenGL is the cheapest way to solve this bug?
I thought about that a while yesterday, manko.  You can't just
OpenGL-draw onto an arbitrary surface though, basically I think
an OpenGLfe would have to be written.  There are complications
to efficiency there.  Not insurmountable though.
> the fact that it should be configurable or not fixes that issue

Um.  No.  We are _not_ having a pref to control the friggin' image scaling
algorithm.  At best, it would be a build-time option and even then the code will
be much more complex as a result; we should weigh the costs of such complexity
_very_ carefully.

See bug 170553 for OpenGL.
Since bilinear scaling would only be done when images are resized, it wouldn't 
slow things down in most cases, right?  Therefore, if no scaling is 
occurring, we obviously shouldn't do bilinear scaling.  Also, we might want to 
decide upon other cases where it's not necessary.

My question is whether we should rely on hardware/etc or whether we should do 
the scaling ourselves. We are capable of writing fast and attractive scaling 
routines ourselves. Inconsistencies between APIs and toolkits across platforms 
won't apply then.

If OpenGL is guaranteed to work equally on every system that Mozilla is ported 
to, then this is fine. Let's deal with that when we move our graphics code to 
OpenGL (if it ever happens), AFTER we have tested exhaustively and _if_ it's 
truly a better way to do it.  It's experimental and won't be available 
immediately for bug 73322 and bug 4821 and others.  And those bugs rely on 
this one being fixed to be attractive.  Going to OpenGL now for this fix, when 
that hasn't been our policy in the past, isn't the way to go about it, IMHO. 
It would be experimental for a long time and wouldn't result in this bug 
getting fixed.

Can't we use a modified version of the algorithm in attachment 48784 [details] within 
the imagelib code to make a cross-platform version of bilinear scaling? Is 
relying on hardware acceleration or OpenGL working identically for bit blitting 
on every platform really necessary at this time?  Won't that just make things 
more complicated and slow down the fixing of this bug?
Bilinear interpolation (or higher order interpolation) will be measurably
slower than nearest neighbour unless it is hardware accelerated. 
As to how often this occurs, I was under the impression that it occurs much more
often than one might think with all the img size modifiers in html/css.

If we are going to do it in software, people must be able to
"self-inflict" the perf cost... because most users will just complain that
everything is slower for no "noticeable" (in most cases) benefit.  If we can't, we
shouldn't bother, OR we should treat it as a side case for a small subset of
keeners (as is done for MathML and SVG now).  The latter point is (I think)
what comment #34 suggests.

My take is that we really want to take advantage of hardware acceleration for
any graphics algorithm improvements. New algorithms are the easy part.

Options that take advantage of hardware:
- OpenGl  (x-platform -- big changes required... some issues too)
- DirectX (win32 only -- big changes required)
- GDI+    (win32 only -- smaller changes required)

- other (free) libs?

> We are capable of writing fast and attractive scaling routines ourselves.

You must be kidding.  Scaling ourselves is easily an order of magnitude slower
than doing it in hardware.  In the case of X, it's as much as two orders of
magnitude slower than doing it server-side (depends on the latency and bandwidth).

> it wouldn't slow things down in most cases, right?

As I recall, a large fraction of images on the net _are_ resized.  This is
trivially testable, of course....
> Scaling ourselves is easily an order of magnitude slower than doing it in hardware

This may be true, but that doesn't necessarily mean the slowdown is
unacceptable. If it would take 10 ms in hardware (random assumption), and 100ms
in software, I'm fine with it. Though I had no real hard evidence, I just tried
resizing a 2MB image in Photoshop using bicubic scaling and it took well under 1
second (thus those 100ms don't seem that unreasonable). Then, surely, bilinear
filtering will not give us much of a performance hit, especially because most
images won't be 2 meg in size, and for the ones that are, the scaling time will
be pretty insignificant compared to the download time. 

Moreover, it seems to me the scaling can be done incrementally, just as the
rendering of the image is done incrementally while it downloads? In that case,
the performance hit will be all but 0. Surely we have some cycles to spare while
we're just downloading an image to do some scaling?

Also, I don't think most webpages use scaled images anyway (though a lot may add
redundant width/height attributes/CSS rules that specify the size the image
already has), so this shouldn't even be a problem for most webpages.

Most webpages that do (extensively) use scaled images are very badly
designed image galleries that take ages to load anyway, because people put 20+
large images on one page and let the browser resize them.  For these sites, the
performance hit is insignificant compared to the download time, and spending a
few more cycles on making the image look okay would be very much what I want.
Arnoud, your whole comment misses one important thing -- Mozilla scales images
every time it paints them.  It's not a one-time cost, but a cost for every
redraw.  The alternative is a cost in memory that was found prohibitive (keeping
both the scaled and unscaled version of the image around, etc).

And instead of just guessing at how often scaled images happened, I'd suggest
you actually test it... you may be surprised (or I may be, but either way
arguing over something that's easily testable is silly).
Arnoud --- what bz said :-).

Brian points out that we can't just say "we'll use OpenGL", and he is absolutely
right. OpenGL cannot help us with this anytime soon.

Platform-specific patches to the existing platform GFX implementations,
preferably ones that help us by using accelerated platform APIs to avoid the
speed or space hit, are the way to go here. However even if someone produces a
patch to use GDI+ (for example) in GFX/Win32, I'd still like to see us somehow
detect whether the operation is accelerated and turn off bilinear scaling if it
is not.
gah, ok, give me a day or two to dig up my old GDIPlus patch.  I'll attach it
here and people (on Windows) can decide for themselves if hardware acceleration
is good enough yet to use it.  I'll even build a new .dll for 1.3final when it
comes out, so non-'source having' people can see.

My opinion is, no, at least for the way Mozilla does things.

Mind you, my implementation is a hack, and is only about 6 lines long.
Just for clarification, my comment was not a direct reply to roc.  I actually
wrote that 10 minutes earlier, but didn't notice I had a bugzilla mid air collision.
Re: comment #40: 

Interesting idea... I am not aware of an API call in GDI+ which lets you
query which operations are hardware accelerated.  I have done a cursory search
of the MSDN docs without finding anything like this. 

Even if we can't detect which operations are hardware accelerated, we can
measure the time taken for bilinear scaling operations and disable them on the
fly if the average time gets too high.
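That disable-on-the-fly idea could look something like the following hypothetical sketch (nothing like this exists in the tree; the class name, the budget, and the moving-average constants are all made up for illustration). The caller times each high-quality scaling pass and feeds the result in; once the running average exceeds the budget, drawing falls back to the cheap path.

```cpp
#include <cassert>

// Hypothetical governor: track an exponential moving average of the
// per-draw cost of high-quality scaling, and report whether it is
// still cheap enough to keep enabled.
class ScalingGovernor {
public:
    explicit ScalingGovernor(double budgetMs) : mBudgetMs(budgetMs) {}

    // Feed in the measured duration of one high-quality scaling pass.
    void RecordDraw(double elapsedMs) {
        mAvgMs = mAvgMs < 0 ? elapsedMs
                            : 0.8 * mAvgMs + 0.2 * elapsedMs;
    }

    // True while the average cost stays under budget (or no data yet).
    bool UseHighQuality() const {
        return mAvgMs < 0 || mAvgMs <= mBudgetMs;
    }

private:
    double mBudgetMs;
    double mAvgMs = -1.0;   // -1 means "no samples yet"
};
```

A moving average rather than a single sample keeps one slow frame (e.g. during page load) from permanently disabling the feature.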
As hinted at by comment #38, there is a lot of time (compared to current CPU
speeds) spent downloading an image, during which software scaling could run, and
there's little need for a high-quality algorithm, just one that looks a lot
better than the current one.  On the assumption that images are drawn
incrementally and scaling can be done in a vertical-draw optimized fashion,
cycles can be spent doing the scaling while drawing.

Better yet, for user experience (not necessarily for the coding of it) would be
to actually draw it the fast way and do the scaling in a separate low-priority
thread.  In the case of a dial-up user on a P350, the images will probably still
end up being bilinearly scaled and filtered faster than download, but if they
resize the window, the image quality would be degraded until the resize thread
catches up.

In the case of the broadband user with an Athlon 2100+ though, scaling would
probably keep up with both download speeds and redraws (again, the filtering
isn't _that_ processor intensive, as pointed out earlier, even if it's a lot
slower than hardware scaling).

The only case that might need special consideration is scrolling; this
might be part of what comment #39 is referring to.  If the images are scaled
'live' on each redraw, then scrolling needs to be taken into account and the
scaled image should be cached in memory.  However, this would require a separate
bug to be filed for the caching of images with their specific rendered
resolution, as I doubt the current cache code tracks anything more than the fqdn,
path and name of the image.
BTW, Windows 2000 uses bilinear scaling to display 24-bit BMP images as stretched
desktop wallpaper.  So, this might be used to estimate the CPU cost of this algorithm.

I did a little experiment.
My hardware: Pentium III 600/Riva TNT2 M64/17" monitor 1152x864@85Hz

1. To rule out possible hardware acceleration, I went to
Control Panel/Display/Settings/Advanced/Troubleshooting
and moved the slider to the leftmost position.

2. I created some 24-bit BMPs sized from 64x64 to 1024x768 and placed them
in the \WINNT directory.

3. I went to Control Panel/Display/Background and chose Picture Display: Stretch.

4. For each 24-bit image, I clicked on it in the list, then clicked the Apply
button.  The image appeared as the desktop background stretched to 1152x864, and
I tried to estimate the delay between clicking the button and the end of painting.

In all cases the delay, estimated by eye, did not exceed 0.1-0.2 seconds.

I think that demonstrates that even in this quite heavy case (scaling from
64x64 to 1152x864) bilinear scaling does not require that many CPU resources.
Michael, the other case (in addition to scrolling) that we need to worry about
is "DHTML" animations... those cause things to be redrawn over and over again
as they are moved around; often the page designer expects this to happen at
intervals of 10-15ms (and bugs get filed when it does not, because it is
_very_ noticeable).  Needless to say, a 0.2-second response would be unacceptable.

Keeping scaled images cached has already been considered (and rejected), as I
said in comment 39.

One other thing to keep in mind that people seem to be forgetting is that on
Linux the limiting factor is likely to be not only CPU power but also I/O
latency and bandwidth.   This is especially a problem with a "remote" server
(where by "remote" I mean anything that can't use memory mapping and has to
actually go over TCP/IP, even if it's a loopback device).  This is why
server-side scaling, which keeps all the data on the server at all times, is far
preferable to client-side scaling....

Just tossing out items to consider,
Boris, 0.2 seconds is the gross time to draw the whole 1152x864 desktop (3 MB of
video data!).  It includes the time to send data to the video adapter, the time
to handle the button click, etc.  When I compared bilinear drawing with rough
scaling (which Windows does for 256-color images), the difference was no more
than 0.05-0.1 seconds.

If we paint the image in pieces as the download progresses, the time will be
significantly smaller.

About further scrolling etc.: we could keep the resized image data in a memory
buffer.  I don't know how it is in fact, but I'm practically convinced that the
image displaying module keeps bitmap data in raw format (parsed from
gif/jpeg/png/etc.) the whole time the page is actually displayed.

We could simply change this raw data from the original image to the resized one,
so bilinear resizing would be needed only once per page.

Anyway, we could let the user turn off bilinear scaling via prefs
if he considers it to slow down his computer.
manko ... let's think about this for a minute.
in x11, which is one of the ways mozilla can be run, the entire application can
live on the other side of a network connection.

suppose the network connection is 28.8kbps (it's called dialup, and i've done
this), and suppose that you load a DHTML page which moves scaled images around
as fast as it can.

if you're suggesting that we need to send 3mb of data over a 28.8kbps pipe in
<0.2 seconds then we have a serious problem.

     internet       / end-user@localhost
     mozilla  <-dialup->  XServer
       /|\       /
        |       /
       \|/     /
    webserver /

Remember, in the current world we send the image data from mozilla to the
XServer and then tell it to scale the image as necessary.  So we send the image
data over the slow link *once* and then do some relatively cheap albeit perhaps
not pretty operations on it many many times (DHTML).

In your world, we do lots of pretty operations on the image and after each
operation we have to send the whole image over the slow link.
Sorry, but you may have missed a point.

If you want to see a 24-bit image sized to 1152x864, you need to send 3 MB of
data to the screen regardless of the scaling algorithm, or even without scaling
(if the original image size is equal to 1152x864).  Scaling images to reasonable
dimensions (for example, 200x200 pixels) will require a 120K buffer - and,
again, regardless of the scaling algorithm.

I don't know the details of the X11 implementation, but it seems to me that the
scaling algorithm and the data exchange between server and terminal are very
weakly linked.

About DHTML and scrolling issues, see my suggestion at comment #48, paragraph 3.
It's highly probable that we don't need to resize the image again and again
while one page is still being displayed.

Finally, if we have a pref, you could simply turn off bilinear scaling if you
consider it to be too slow.  As I understand it, if you connect from a remote
terminal to an X server via dialup, you would prefer to turn off images completely.
Yet another consideration: if we can scale a 3 MB image in 0.2s (at most,
probably faster), a 120K image will be scaled in about 8ms.

And if we take into account that 120K will fit completely in the CPU cache,
the actual time will be significantly smaller.
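As a quick sanity check of that arithmetic (under the stated assumption that scaling cost is proportional to the amount of output data, which ignores per-image overhead):

```cpp
#include <cassert>

// Scale a measured time linearly from one buffer size to another.
// 3 MB = 3072 KB scaled in 200 ms implies 120 KB in ~7.8 ms.
static double EstimateMs(double measuredMs, double measuredKb,
                         double targetKb) {
    return measuredMs * targetKb / measuredKb;
}
```

So the "about 8ms" figure above follows directly from the 0.2-second desktop measurement.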
manko, we used to cache images after scaling, but we stopped doing that for
various reasons, including that it used too much memory, and we don't want to
start doing it again. Boris clearly said this in comment #47. Please don't keep
bringing it up.
It was discussed in comment #50, but I'm going to slam comment #49 as well (and
all previous times server/client-side rendering has come up).  If an image is
going to be sent over the X protocol at a given resolution, it will be sent at
that resolution no matter _what_ it looks like or _how_ it was rendered.  The
CPU costs will be moved to the X client (remote machine) and the X server will
display  and receive the _same_ amount of data as it would no matter whether
bilinear resizing is done or not.

Also, the fact that rendered image caching was rejected once in another context
does not mean that it doesn't bear discussing in this case.  Perhaps it's now a
worthwhile tradeoff.
do you have a way to tell an xserver to do bilinear scaling?

the proposal is that mozilla (xclient) do bilinear scaling.

old world:
mozilla loads image
mozilla sends image to xserver (not cheap)
mozilla tells xserver scale image to <...> and put at <...> (cheap)
mozilla tells xserver scale image to <...> and put at <...> (cheap)
mozilla tells xserver scale image to <...> and put at <...> (cheap)
mozilla tells xserver scale image to <...> and put at <...> (cheap)

your world:
mozilla loads image
mozilla scales image (not free)
mozilla sends image to xserver (not cheap, possibly bigger than original)
mozilla tells xserver scale image to <...> and put at <...> (cheap)
mozilla scales image (not free)
mozilla sends image to xserver (not cheap, possibly bigger than original)
mozilla tells xserver scale image to <...> and put at <...> (cheap)
mozilla scales image (not free)
mozilla sends image to xserver (not cheap, possibly bigger than original)
mozilla tells xserver scale image to <...> and put at <...> (cheap)
mozilla scales image (not free)
mozilla sends image to xserver (not cheap, possibly bigger than original)
mozilla tells xserver scale image to <...> and put at <...> (cheap)

There's nothing wrong with sending an image across the wire once, we all know
you have to do that if you want the image to appear.

The problem is that the bilinear concept being proposed is a mozilla (xclient)
coded feature which means that we have to send a larger image across the wire
many more times.
I *think* that XIE has the ability to bilinearly (or otherwise) scale the image
at the server-side.  We even used to have code in Mozilla to use XIE for
server-side scaling.  HOWEVER, XIE is dying by the minute, and while I recall it
being a bit faster than the neat server-side scaling trick that Mozilla now
uses, XIE's use is generally deprecated... hence its dropping from the codebase.
Yeah, XFree86 is dropping XIE support, last I checked.  So let's not go there.

Further, I'm not saying the X issues should prevent this being implemented.  I'm
just saying that we should keep them in mind.  Since, as roc says, this would
need to be done in platform-specific code anyway, we could implement it on
Windows first, as a test, for example.
You need a fairly recent Platform SDK for the includes, and you need the
gdiplus.dll, which is at

Personally, I dislike Microsoft's (or video card's?) bilinear quality.. it's
way too blurry.  Their bicubic is much nicer to look at.  Considering both of
them are slow, I'd choose bicubic over bilinear if I only had a choice of one.
The other non-high quality modes (commented out) just plain sucked for me..
they weren't much better than what we do now.  Your mileage may vary, since in
theory this could depend on the video card.

Here's a good sample page to try out stuff at:

Note: The images on the screen will not look good at first!  You'll have to
scroll or refresh the page somehow (hide it and then show it again).  This is
because of our progressive displaying.

Other example:
data:text/html, <IMG

Play with the size of your screen and scream in agony at the slowness (compare
this to the speed of what we do now, which is virtually instantaneous).  Some
of this slowness may be caused by initializing GDIPlus every friggen time draw
is called.

I'll make a gkgfxwin.dll for 1.3b once it comes out, so non-builders can play.
Oh, I suppose you could click on that image link and use the automatic image
resizing thing, but that seems to have some sort of delay in it already, so
you would get the full effect.  Definitely try bicubic though.. it looks much
better, and the text is readable at a much smaller size than bilinear.
Attached patch Bicubic GDIPlus hack for Windows (obsolete) — Splinter Review
ack, previous patch had an incredibly stupid logic error in it.  Anyway, here's
the correct patch, and this time it has bicubic on.

Actually, despite the speed, I'm almost tempted to leave this on for "Auto
Image Resizing" only.  It really makes a big difference in viewing those
digicamera images that people put on their websites.  But that's just me, and I
can handle slowness when I know it was me who caused the slowness :)
Attachment #112482 - Attachment is obsolete: true
> Their bicubic is much nicer to look at.

Bicubic interpolation is bug #75077.
No, Bug 75077 talks about using an interpolation mode (preferably bicubic, but
bilinear is even mentioned in the bug) during decoding of a X pass interlaced
PNG, which may be a different issue if we decide to do it in libpng. 

There's no need for a bug for each interpolation mode.  Updating Summary.
Summary: Bilinear scaling need for line art and scalable web pages → Prettier image scaling (Bilinear, Bicubic, anything better than Nearest Color)
This is not very relevant, but it's very interesting ... how to do *cubic*
interpolation with commodity OpenGL hardware.
I have not tried applying the patch from comment #59 yet, but I did a few quick
benchmark tests using GDI+ on my machine with some interesting results. I am
running on an AMD 850GhZ machine, with a GeForce2 card running under Win2K pro.

First I did a loop of 1000 GdiplusStartup(); GdiplusShutdown(); calls.  This took
3 seconds (in debug mode), so I don't think this is the overhead mentioned in
comment #57.  The overhead may still be in the Graphics/Bitmap object
instantiation or in the drawing speed itself.

Next, I loaded a JPG image which was 300x255 pixels in size and performed the
following test which scales the image in a loop 500 times. My screen resolution
is 1280x1024 so this image does not get clipped as it scales (because I maximize
the window before I start)... Also the image grows to be about 800 x 755 by the
last drawing:

 for (j = 0, k = 0; k < 500; ++k)
   graphics.DrawImage(&bitmap, j, j / 2, width + k, height + k);

I did this same test for the following interpolation modes, and received the
following time results (running a debug build but not from within debugger):

Nearest Neighbour: 9 seconds
Bilinear: 11 seconds
High Quality Bilinear: 24 seconds
Bicubic: 1 min 48 seconds
HighQuality Bicubic: 20 seconds

Observations: basic bilinear, which is noticeably better than nearest neighbour,
is less than 20% slower (suggesting it is hardware accelerated?).
High Quality Bicubic is faster than High Quality Bilinear!???  This I don't
understand.

Most likely your video card accelerates the HQ Bicubic, but not normal Bicubic.
 I find that amusing. :)
Re: comment #63
oops ... that was an 850MHz computer (not 850GHz)

Also, I would be very surprised if my GeForce 2 along with associated drivers
had hardware-accelerated HQ BiCubic interpolation... but hey, stranger things
have been known to happen. 

Finally, my times used the defaults for Compositing Mode, Compositing Quality
and Smoothing Mode.
I queried the graphics object and in my case (GDI+ defaults?): 
Compositing Mode was set to "CompositingModeSourceOver" 
Compositing Quality was set to "default"
Smoothing Mode (anti-aliasing) was set to none.

If I set CompositingMode to "CompositingModeSourceCopy", my times all get reduced
by 1 second. I guess the point is that if we use GDI+ we need to set the
appropriate settings to match what Mozilla is doing in the appropriate spot. 

Isn't there a way to query through a device context, etc., to see what the
hardware accelerator is capable of in terms of scaling? I know DirectX has this
capability, but since we aren't moving to DirectX yet, does GDI+ have something
akin to that? Can you try those same tests with hardware acceleration
turned off? (I think you can do this through the control panel.)

18-48 milliseconds an image doesn't sound bad. 216 milliseconds gives me a bit
of pause.

You can do a GetDeviceCaps(), but this is a GDI function... I am not sure how it
relates to what GDI+ is doing. I can't find anything in the GDI+ docs that is
analogous or comparable (doesn't mean it's not doable, just that I don't know how).

Doing the same tests as in comment #63 after setting my acceleration slider to 5
(0 - 5 scale): Where 5 means: "No hardware accelerations are allowed. The driver
will only be called to do bit-block transfers from a system memory surface to
the screen."

Nearest Neighbour: 22 seconds
Bilinear: 25 seconds
High Quality Bilinear: 38 seconds
Bicubic: 123 seconds
HighQuality Bicubic: 35 seconds

Hmmm, a relatively constant delta of around 13-15 seconds for each mode compared
to acceleration on. 
Some more info on GDI+:

I did a quick search on the Microsoft GDI newsgroups and
came across a couple of interesting items:

1) There is an MS Knowledgebase article regarding mixing and matching GDI/GDI+.
It is titled "INFO: Interoperability Between GDI and GDI+" (Q311221). It
explains some things to watch out for if you want to use both (which sounds

The link is:;EN-US;Q311221

2) In a different newsgroup article (dated May 2002), one of the MS support guys
made a comment about performance of gdi versus gdi plus as follows:

-- start excerpt --
"GDI+ is not GDI. GDI+ v1 has more features and less hardware acceleration.
This can result in a perception of "slower" execution. Actually, your
test compares apples with oranges - GDI+ code could be doing antialiasing,
can handle non-integer coordinates, and doesn't have it's own DDI.

If you need hardware acceleration and speed, and don't need the new
features in GDI+, then continue to use GDI."
-- end excerpt --

The link to this article (so you can get full context) is:

So in spite of what the docs say, true hardware acceleration (like you might get
with DirectX) may not be here yet. If performance is good enough for the
operations of interest, given the fact that you get their implementation "free of
charge"... then go for it.  His comment also explains why I could not find any
references to GDI+ in my Windows DDK docs.

Here's the dll

It *only* works for 1.3b release (and who knows, it may not work on YOUR 1.3b,
don't blame me if it doesn't).  It will not work for a nightly a day after, a
day before, etc.  (unless you get lucky).  
Move the old gkgfxwin.dll someplace safe (don't just rename it to another name
with a .dll extension and keep it in the components dir, Mozilla will find it!),
and unzip this one.

Again, the GDIPlus download may be needed if you are on an older OS (95/98/ME/2k)

Its default is set to mode 7 (InterpolationModeHighQualityBicubic)
You can change this by adding a line to your user.js, _and_ restarting mozilla: 
user_pref("image.interpolationMode", v);

Where v is one of the following _numbers_:
// None (BitBlt)                       -1
// InterpolationModeDefault             0
// InterpolationModeLowQuality          1
// InterpolationModeHighQuality         2
// InterpolationModeBilinear            3
// InterpolationModeBicubic             4
// InterpolationModeNearestNeighbor     5
// InterpolationModeHighQualityBilinear 6
// InterpolationModeHighQualityBicubic  7 (Default)

Here's a page to test performance on:
Follow the instructions, write down your PER values and compare the modes.  Or
just try resizing your window, or swishing another window overtop the image, or
pressing the down arrow 10 times and then the up arrow 10 times.

For those who don't want to download and try, you can see the quality
differences in the modes here:
Open each one up on a new tab, and you can Ctrl-PGUP, Ctrl-PGDN between them and
get a good idea which modes suck.

My speed results are:
Mode  'PER' Speed
-1     102ms      BitBlt
 0     645ms      Default
 1     643ms      LQ
 2    1027ms      HQ
 3     645ms      Bilinear
 4     843ms      Bicubic
 5     619ms      NearestNeighbour
 6    1067ms      HQ Bilinear
 7    1072ms      HQ Bicubic

putting -1, 4, and 7 on 3 tabs and flipping between them gives you a cool "bad
to good" look :)
Anyway, the fastest of the Interpolation Modes is still 6 times slower than our
crappy BitBlt.  There may be ways to get that down... but I'm not holding my
breath on getting it down by 6x, or even to 2x the speed of BitBlt.  Without
using a cache, that is.
With the new gkgfxwin.dll I ran the tests from comment #68 on my machine (AMD
K7-850MHz, Win2K, GeForce2, 1280x1024x32) with the following results:

Mode  'PER' Speed
-1     60ms      BitBlt
 0     790ms      Default
 1     788ms      LQ
 2     1270ms     HQ
 3     790ms      Bilinear
 4     1046ms     Bicubic
 5     774ms      NearestNeighbour
 6    1363ms      HQ Bilinear
 7    1282ms      HQ Bicubic

Sigh... an order of magnitude slower:
- Do we know if the GDI+ modes being used match the internal BitBlt modes (for
example, that we are not alpha blending and anti-aliasing in one case but not the
other)? I suspect this is not the issue, but it should be asked.

It doesn't really appear that GDI+ is "currently" doing any hardware
acceleration. I would think that the BitBlt would compare to mode 5 (Nearest
Neighbour) if it were.

So what would we gain from using GDI+? Probably just some saved implementation
time, with hope of future acceleration. With the above numbers it would seem
that we would probably only want to consider using it in certain constrained
user-preference situations.
I did what you said on 1.3b, restarted Mozilla, had the pref on -1, and whenever
I hit F5, it would show mode 7. If anyone has the same problem and figures out
why, please let me know.
You probably have to force a profile with the -P command line.  I made the
mistake of reading the pref only once, and if the profile manager pops up first,
default prefs will be read and the profile prefs will never get read.
*** Bug 198828 has been marked as a duplicate of this bug. ***
I would like to stress that I think this bug has risen in terms of severity,
since it is not a pure enhancement anymore: Some images, especially maps ("how
to find us"), which are downscaled by the "automatic image resize" feature
become unreadable because of the bad algorithm used. IMO this makes "auto image
resize" look like a half-hearted feature, so this needs to be fixed, if only for
"auto image resize".

Especially if you look at IE 6.0, which auto-shrinks the image; if you don't
want that, you can display the image at native resolution with one of the
hovering buttons. 

Mozilla either should have a button like that, or a better scaling algorithm, or
both (ideally).
>Mozilla either should have a button like that

it has. click on the image.
Using these algorithms for auto-image scaling would not have any significant
performance impact, so it is a good match for the best algorithm. Using it for
scaling in web pages should be user choice.

Maarten: We already have that. Just click on the image when you see the funky
move cursor (needs to be changed).

The most annoying feature on IE is that image toolbar that appears above images
on regular web pages. It blocks my dad's stock graphs. Please don't do anything
like that.
Since Benjamin seems to be willing to do the work....
Assignee: pavlov → benjamin
Hey, you're a funny guy, Boris... 

I can't program worth sh*t (see So I'm not able
to help that actively -- being in charge of bugs is WAY beyond me. I consider
myself a user, tester and web designer.

If in the future you would like me to stay out of your way and not try to help
Mozilla stay the best browser out there, then say so, and I will take my (I
hope) constructive criticism somewhere else.
Assignee: benjamin → bzbarsky
Reassigning to Paper since he has created a patch that actually does something.

Benjamin: That fractal generator looks impressive from the screen shots. I once
wrote a fractal program myself when I was about 16, but it only did Mandelbrot.
Fractals rock! :-)
Assignee: bzbarsky → paper
*** Bug 201017 has been marked as a duplicate of this bug. ***
*** Bug 203526 has been marked as a duplicate of this bug. ***
Like I said in the most recent dupe (sorry), I think the proper way to do this
is like the viewer in Mac OS X does it: first draw the nearest-neighbour (BitBlt)
resized image, then, when done resampling using a slower algorithm, redraw over
it with the smooth image. Performance an order of magnitude slower, with loss of
interactivity, warrants this.
Summary: Prettier image scaling (Bilinear, Bicubic, anything better than Nearest Color) → Prettier image resizing & scaling (Bilinear, Bicubic, anything better than Nearest Color)
how about a css property for this?
You mean a moz_* property for XUL?
I too would like to see bicubic scaling in Mozilla (Firebird).
When making an HTML list of my music album cover scans I ran into images that
were totally unrecognizable with the current resizing. Bicubic scaling would
therefore be a very nice feature; a feature Opera has with its "Smooth zooming of
images" option... and the result is worlds apart from what Mozilla (and
most other programs) produces.

The problem is only noticeable when you scale a lot (huge image --> small image),
so wouldn't it be possible to only use bicubic scaling when a lot of scaling is
needed?
You can have a look at the huge difference it makes here:
Isn't bicubic resizing only advantageous over bilinear resizing when you're
making images bigger?
*** Bug 213089 has been marked as a duplicate of this bug. ***
*** Bug 215782 has been marked as a duplicate of this bug. ***
So, with the bug pushing two years and the solution being about 3x10 lines of
code or so, when will we see this?
One argument against implementing this is that webmasters should size images
correctly for their use; if they don't and they look bad because mozilla can't
draw resized images well, that's just incentive for Getting It Right in the
first place.

I suggest that Mozilla implement better image drawing as in this bug, but also
show a '+' magnifying-glass icon in the corner of all mis-sized images or
(better) change the cursor to a magnifying glass, as is done when resizing images
to fit in a browser, to indicate that the image as displayed is sub-optimal and
that 'view image' should be selected from a context menu to get a better look.

The presence of the changed cursor should encourage correctly-sized images,
decreasing long-term browser rendering load...
One argument for fixing this is when you are resizing stand-alone images. 

There are some cases where this is alright in a web page, for instance if an
image is within a div and is shown floated to the right at a small size, and
when you click on it, it appears full size. It's true that they could have had a
small image and then, with JavaScript, loaded the larger image and done a
replacement, but sometimes that would be more work than is necessary for a,
let's say, 3KB difference in image size, and there might be a reason they want
to do it that way.

Another good argument is for full zoom for a web page.

Therefore, it's better to teach better web development techniques than to punish
everyone because a few people might abuse this.
*** Bug 223610 has been marked as a duplicate of this bug. ***
Blocks: 228085
See bug 228085 - Enable prettier image scaling and resizing for stand-alone images
*** Bug 228085 has been marked as a duplicate of this bug. ***
Opera 7.5 has a "Smooth zooming of images" option that uses bilinear or bicubic 
resampling. I can't say that it helps scaled images look much better, though.
Whiteboard: has code to be incorporated! → [parity-opera] has code to be incorporated!
While we will need to scale on the fly, is there anything we can do post
page/image render (i.e. after the image or page is fully rendered) during idle
time to speed up possible future zooming? 

Perhaps for images within a web page, we can initially do a normal scale of the
image, then do a resampling of the image using bilinear. Context-menu options
could allow them to choose other options. The reason I say this is that bilinear
rescaling when the user initially displays the page might be overkill. They
might immediately click a link, and it's better to resample afterwards so the
operation can be abandoned if the user so chooses.

Some resampling will be unnecessary. Could we have fast code to determine if it's
necessary? Could we filter images during download? For web pages, this would
work a lot better if there could be some placeholder for the images so the rest
of the page could be rendered. Could the image library report, before the
entire image is decoded, the size that it gets from the image
header, so a frame can be built for it before the image has been totally
downloaded? That way users won't have to wait to read content if an image isn't
fully downloaded and resampled.
(In reply to comment #98)
> Context-menu options
> could allow them to choose other options. 

Yes, add some options to the context menu that very few people even understand
and even fewer have a use for.

> Could the image library spit out before the
> entire image is rendered the size of the image that it gets from the image
> header so a frame can be built for it before the image has been totally
> downloaded?

Have you browsed the web recently? The image library does do that. That's why
Mozilla can show a placeholder for the image (with the right size) before it is
fully downloaded.
Biesi: My connection is too fast to notice. All I know is there was a bug open
not too long ago about incremental rendering of pages and images, and a lot of
people complaining about it. I guess either I'm confused, or it's been fixed. Is
there a way to throttle our bandwidth through the browser so we can see this?

Anyway, if the download is slow, we can be doing resampling during the download.
Otherwise, we can show it using poor scaling, then resample it in-place.

>Yes, add some options to the context menu that very few people even understand
>and even fewer have a use for.

So you suggest a one-size-fits-all solution on diverse platforms? If we even
attempt a one-size-fits-all solution, we'll need adaptive code. It seems like an
incredible waste of processor time to do sampling that a user doesn't want.
Any progress?
Patch has bitrotted - didn't apply cleanly to my source tree.
Summary: Prettier image resizing & scaling (Bilinear, Bicubic, anything better than Nearest Color) → Prettier image resizing & scaling (Bilinear, Bicubic, anything better than Nearest Neighbor)
<CTho|away> rocWork: what needs to happen for 98971? 
<rocWork> someone would need to convert the code so that it loads the GDIPLUS
library dynamically
<rocWork> rather than just barfing if it's not found, which the current code
would do
<rocWork> then we'd need to test performance, see if it regresses Tp
*** Bug 268375 has been marked as a duplicate of this bug. ***
I have a few comments on this I hope may be useful. When considering
performance, these algorithms will often need to sample up as well as down –
something that has not been benchmarked or even mentioned. This bug has been
referenced several times by bug 4821 which promises a page zoom function and I
came across it after using Jason Adams’ nicely designed ImageZoom extension –
both require upsampling.

Caching has been discussed – either recalculate on the fly every time the image
is displayed (performance hit) or render the entire page to memory once and use
that when scrolling (too memory intensive). What about a disk cache with short
persistence, purely for resampled images? It could be cleared after 24 hours, as
the browser is closed or even after navigation away from the page. It wouldn’t
need to be part of the normal browser cache.

It has been suggested that the processing be done during download because the
download will take longer. Does anybody know if the libraries likely to be used
for this will allow incremental streaming of images for processing? It seems
unlikely to me. There will be an additional problem dealing with interlaced
images. In addition, don't forget users who are on a fast intranet or viewing
images loaded from their cache - download times will be substantially reduced or
nonexistent.

For this reason, and the substantially larger requirement for instant
gratification on a zoom request, it seems to me that the approach most likely to
find favour with users is to display the quick and dirty image created using the
BitBlt algorithm first and start a new thread to process the image more
accurately to replace it – either one thread per image or one thread with a
queue of images requiring resampling. Adobe Reader does something like this when
rendering new parts of the page after scrolling. Ideally, the new thread would
be interrupted if the image size was altered again through a zoom event of some
kind.

Lastly (and I think I’m in agreement with most commentators here), the presence
of “smooth resizing” should certainly be a user option. Variations in libraries
(or home-grown code execution time) and eventual hardware acceleration between
platforms will make the performance of this feature vary greatly and with it,
its desirability. Whether altering the algorithm (if this is even possible)
should be in the main control panel I’m less sure about. Probably this should
default to bicubic (which seems surprisingly superior to bilinear, at least for
size reduction) and different algorithms could be selected in about:config.

Just to confuse matters, there are, of course, other algorithms such as Lanczos
and Xin Li that could be considered. I know nothing about these.
update hack patch to trunk (with pref loading)

See Comment #69 for Interpolation Modes, and Test Case.  Changing
image.interpolationMode (via about:config) doesn't require restarting Moz to
have the changes come into effect.

Also note that if only a section of the image needs updating, the hack uses
GDI+ to pretty-draw to the dirty area, which leaves the edges lacking some
interpolation. To fix this, the dirty area would have to be expanded by a
pixel or two on each side.
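The edge problem just described -- the filter samples pixels outside the dirty rect, so repainting only that rect leaves seams -- can be handled by inflating the rect by the filter's support radius before redrawing. A minimal sketch with a hypothetical Rect type (not Gecko's nsRect):

```cpp
#include <algorithm>

// Simple rect; illustrative only.
struct Rect { int x, y, w, h; };

// Grow a dirty rect by the filter's support radius (1 px for bilinear,
// 2 px for bicubic) and clamp it to the image bounds, so the repaint
// re-samples the pixels the filter reads across the old boundary.
Rect inflateDirtyRect(Rect dirty, int radius, int imgW, int imgH) {
    int x0 = std::max(0, dirty.x - radius);
    int y0 = std::max(0, dirty.y - radius);
    int x1 = std::min(imgW, dirty.x + dirty.w + radius);
    int y1 = std::min(imgH, dirty.y + dirty.h + radius);
    return Rect{x0, y0, x1 - x0, y1 - y0};
}
```

The radius would depend on the interpolation mode actually in effect, which is why the hack above (which always uses the pref's mode) would need the mode threaded through to the paint code.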

To restate what I've said in prior comments: a cache of the resized image is
the only viable option.  On the fly is just too slow.  

(In reply to Comment #105)
>Does anybody know if the libraries likely to be used
>for this will allow incremental streaming of images for processing?

GDI+ will not, and GDI+ is what we should be using (or a similar OS native
library), since, in theory, the graphics card could do all the processing.
Attachment #112485 - Attachment is obsolete: true
*** Bug 266752 has been marked as a duplicate of this bug. ***
(In reply to comment #103)
> <rocWork> someone would need to convert the code so that it loads the GDIPLUS
> library dynamically
> <rocWork> rather than just barfing if it's not found, which the current code
> would do

ArronM, could you make it silently fall back to the default if GDI+ is not
found, as mentioned above?

Another issue: as other platforms will use other libraries (e.g. Cairo), the
pref values should be somehow abstracted away from GDI+. Perhaps something as
simple as Default/Low Quality/Medium/High, which is mapped to real algorithms
for each library.
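The abstraction suggested here could be as small as a mapping function. A sketch, assuming the backend-neutral levels proposed above and the GDI+ mode numbers from comment #69 (the enum name and the particular level-to-mode choices are illustrative, not a decided design):

```cpp
// Backend-neutral quality pref, mapped per graphics library.
enum class ScaleQuality { Default, Low, Medium, High };

// Map to the numeric GDI+ InterpolationMode values listed in comment #69.
// Which concrete mode each level should use is a judgment call.
int toGdiPlusMode(ScaleQuality q) {
    switch (q) {
        case ScaleQuality::Low:    return 5;  // InterpolationModeNearestNeighbor
        case ScaleQuality::Medium: return 3;  // InterpolationModeBilinear
        case ScaleQuality::High:   return 7;  // InterpolationModeHighQualityBicubic
        default:                   return 0;  // InterpolationModeDefault
    }
}
```

A Cairo (or other platform) backend would map the same four levels to its own filter constants instead, so the pref stays meaningful everywhere.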
Blocks: majorbugs
No longer blocks: majorbugs
I'd like to see this happen ASAP so I can use natural units on images... when
SVG takes off in Fx 1.5 I don't wanna have to enclose my raster graphics in SVG
to have them nicely scalable! :S

I am getting sick of having to mix pixel measurements when playing with images,
with natural units when playing with fonts, as anyone with different DPI
settings will see a mess.

If Fx had proper support for this, it really /would/ be a leg up on IE because I
for one would develop sites that use this technique.

Opera 7.5 had a "smooth zooming of images" option. In Opera 8.5, there is no option; this is the default behaviour. The improvement is *very* noticeable when looking at small thumbnails of human faces in e.g. staff directories (edges of glasses, highlights in eyes) -- I think Opera must have the online dating crowd sewn up.

People look better in Opera.
New and useful application for image subsampling:
the Viamatic foXpose 0.2 plugin gives a MacOSX-Expose-like view of what's in all tab windows in a single page as clickable thumbnails, via a little control in the bottom-left corner (or a hotkey). It's a lifesaver for finding your way around tabs. These thumbnails would look far better and more readable with image subsampling.

Seems pretty clear to me that Firefox should include code resolving this bug and build in the foXpose extension to take advantage of it asap!
*** Bug 319187 has been marked as a duplicate of this bug. ***
*** Bug 319918 has been marked as a duplicate of this bug. ***
For what it's worth, this gets a lot better with the Cairo stuff coming soon.
(In reply to comment #115)
> For what its worth, this gets a lot better with the cairo stuff coming soon.

WFM Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9a1) Gecko/20060309 SeaMonkey/1.5a
The scaling looks much better in Cairo, but it does not look as good as in Paint or another good picture editor.

Do we want to improve the scaling or is this fixed now?
which interpolation algorithm is used with cairo to resize images?
According to Bug 324698 Comment 10 the algorithm is still nearest neighbor so this bug is still relevant.

Here is another testcase: 
(In reply to comment #119)
> According to Bug 324698 Comment 10 the algorithm is still nearest neighbor so
> this bug is still relevant.

the _downsampling_ algorithm is nearest neighbour. upsampling uses a better one but I don't know which one.
*** Bug 331074 has been marked as a duplicate of this bug. ***
Somebody might want to look at :)

Also, in said project (or in AviSynth) you have a fast resizer with lots of methods that obey these rules, and also very fast high-quality 2x-downscaling filters (not just 2x2 average).

For the speed issues - I think a good compromise, which MS Office has used since ages ago (*), is having a limited-size resized-image cache, so you can keep the relatively recently seen resized images cached. Think around 1-4 megs, or a small % (or log) of the RAM, like the memory cache already does now.

(*) I think this was documented in the help file, but I can't find it in recent versions. The size of the cache was specified, and it is very evident if you have a slow CPU and know where to look, as sometimes it runs very fast but hitches when a new image comes into view, and sometimes the cache can't hold all the images in the current view and always runs slow.
*** Bug 331968 has been marked as a duplicate of this bug. ***
I'll just point out that the Mediawiki codebase contains an algorithm to make lovely thumbnails. I'm not sure if it's bilinear, but the results speak for themselves, and it must be legal. As for performance, that remains to be seen...
Mediawiki uses ImageMagick ( ). I don't know if we can or want to include it in Mozilla.

Just curious: what happens when an image is upscaled in one direction and downscaled in the other? Which algorithm is used then? 

Maybe we could just use one algorithm for both downscaling and upscaling?
Flags: blocking1.9a1?
Flags: blocking1.9a1? → blocking1.9a1-
Cairo (on the trunk) has a nicer resizing algorithm.
For downscaling (which occurs when Mozilla/Firefox has "Resize large images to fit in the browser window" turned on), there exists a nice algorithm that's faster than bilinear and produces comparable results. It's implemented in gdk-pixbuf:

"This is an accurate simulation of the PostScript image operator without any interpolation enabled. Each pixel is rendered as a tiny parallelogram of solid color, the edges of which are implemented with antialiasing. It resembles nearest neighbor for enlargement, and bilinear for reduction."
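For integer shrink factors, the pixel-coverage filtering described in that quote degenerates to a plain box filter: each destination pixel is the average of the source block it covers. A sketch on a grayscale buffer (illustrative only, not gdk-pixbuf's actual code):

```cpp
#include <cstdint>
#include <vector>

// Box-filter reduction by an integer factor: each destination pixel is
// the rounded average of a factor-by-factor block of source pixels.
// This is the integer-factor special case of area/coverage filtering.
std::vector<uint8_t> boxDownscale(const std::vector<uint8_t>& src,
                                  int w, int h, int factor) {
    int dw = w / factor, dh = h / factor;
    std::vector<uint8_t> dst(dw * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            int sum = 0;
            for (int j = 0; j < factor; ++j)
                for (int i = 0; i < factor; ++i)
                    sum += src[(y * factor + j) * w + (x * factor + i)];
            dst[y * dw + x] =
                (uint8_t)((sum + factor * factor / 2) / (factor * factor));
        }
    }
    return dst;
}
```

This is also exactly what rescues the 1-pixel lines from the original report: a 50% reduction averages each line with its neighbour to gray instead of dropping every other column.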
*** Bug 343952 has been marked as a duplicate of this bug. ***
This may be surprising to many people, but in IE7 Microsoft is offering a CSS extension (yes, with an -ms- prefix!!!) to choose the interpolation method:

(no, I haven't tested it and can't offer a screenshot, sorry)
Adding parity-IE to status now IE7 has this.
Whiteboard: [parity-opera] has code to be incorporated! → [parity-opera] [parity-IE] has code to be incorporated!
*** Bug 347388 has been marked as a duplicate of this bug. ***
Duplicate of this bug: 365787
Duplicate of this bug: 366637
I can't believe that a serious design flaw that was reported almost 6 years ago is still not fixed today. Many downsized photos look really **** after downsizing with the Windows version of Firefox or Internet Explorer, while Opera does it right. I have heard that the OS X version of Firefox does a correct job too.

You often don't have any control over the size of the images you have to work with. And the 3456 pixel originals of a Canon 350D are really useless on a 1600 or 1280 pixel monitor without downsizing. So there are thousands or even millions of web pages out there that look really **** if they are visited with something other than Opera.

Some examples of applications where this problem is very noticeable are the High Res Flickr greasemonkey script and the flickr slide show by Tinou Bao.

I recently tested downscaling with version but I remember that version 1.x had the same problems. Many images are noticeably degraded in quality if they are
downsized in the browser. In some cases it is so bad that the images are essentially
destroyed. You can see such a problem in my first example.

Reproducible: Always

Steps to Reproduce:
1. See any webpage that forces the browser to downscale an image and compare it
with a downscaled version of the image done by properly working software.
Almost every photo program will do a correct job. In some programs you have to
configure the use of a better downscaling algorithm for on-the-fly
downscaling for your display.

You can see two examples here:

Actual Results:  
Getting even more disgusted at the Firefox downscaling performance after seeing
the horrible results in my first example.

Expected Results:  
The image quality I see for downscaled images in just about every other photo
program or the Opera web browser.

It should have used a more intelligent downscaling algorithm than the one
actually employed here.

Opera 9.02 does a good downscaling job. There is no difference visible compared
to the downscaled version provided by flickr.

(In reply to comment #134)
> I can't believe that a serious design flaw that was reported almost 6 years ago
> is still not fixed today.

It's really not that serious, and I wouldn't call it a design flaw to begin with. That being said, Firefox 3 will use Cairo for graphics, which fixes this issue.

(Sorry about the bugmail, everyone.)
What's missing on trunk to mark this bug fixed?
I don't understand why the developers consider it no big deal that millions of users get substandard images delivered because the developers could not spend a few minutes of programming time to include a decent quality downscaler into the code. 

For me this design feature makes Firefox almost unusable. The only reason that I have not yet switched to Opera full time is that I have not yet figured out how to get greasemonkey scripts running on Opera.
I'll mark this bug fixed, because we do have "anything better than Nearest Neighbor".  Please reopen if I'm missing something.

Andreas Helke, don't add comments if you have nothing to say.
Closed: 13 years ago
Keywords: helpwanted
Resolution: --- → FIXED
Whiteboard: [parity-opera] [parity-IE] has code to be incorporated! → [parity-opera] [parity-IE]
Target Milestone: Future → mozilla1.9alpha
No longer blocks: 228085
Duplicate of this bug: 228085
As I understand it, trunk is using bilinear filtering for upscaling images, but downscaling is still nearest neighbour.
Resolution: FIXED → ---
Justin, I did test that briefly with <>. It gives me a better result than 1.8.1. If you can verify that, please re-resolve this bug as FIXED.
Dao, your test image looks awful downsampled in Firefox -- all those diagonal roof edges. We don't have parity with Opera or IE yet, either.
I tested with trunk and there is still considerable aliasing indicative of nearest neighbor.
Test with - there is some kind of filtering being done because grays appear when it's scaled, but the result is horrible compared to any of the filters in IrfanView.
Looks anti-aliased to me.

Jason, please don't guess whether it's nearest neighbor or not but verify it.
(In reply to comment #144)
> Test with -
> there is some kind of filtering being done because grays appear when it's
> scaled, but the result is horrible compared to any of the filters in IrfanView.

Doesn't look too different from Opera, but still worse. IMO you should file new bugs for further enhancement requests or regressions, but this bug should be fixed according to its summary. Here's no real work going on anyway -- much noise about nothing.
Right.  It helps if I actually test trunk and not :(

The diagram linked in #144 does indeed show there is clearly some interpolation with trunk.  I suppose that, strictly speaking, this bug could be closed, but I think there is still room for improvement here.
Resolving fixed based on conversation in #gfx.

(In reply to comment #137)
> I don't understand why the developers consider it no big deal that millions of
> users get substandard images delivered because the developers could not spend a
> few minutes of programming time to include a decent quality downscaler into the
> code. 

The developers are out to get you.
Closed: 13 years ago
Resolution: --- → FIXED
Duplicate of this bug: 366942
Duplicate of this bug: 372227
(In reply to comment #146)
> (In reply to comment #144)
> > Test with -
> > there is some kind of filtering being done because grays appear when it's
> > scaled, but the result is horrible compared to any of the filters in IrfanView.
> Doesn't look too different from Opera, but still worse. IMO you should file new
> bugs for further enhancement requests or regressions, but this bug should be
> fixed according to its summary. Here's no real work going on anyway -- much
> noise about nothing.

Done Bug 372462 
Duplicate of this bug: 378936
(In reply to comment #148)
> Resolving fixed based on conversation in #gfx.

How exactly is this fixed? I don't think any of the patches in this bug have been checked in.
If another (probably Cairo-related) bug fixed this, this bug should be duped against that bug.
If it's unknown which bug fixed this, this bug should be marked WORKSFORME.
Hopefully a helpful note for folks finding this bug - the bug was "fixed" in Firefox 3.0 ("Gran Paradiso") by moving to a new graphics rendering engine called Cairo. If you're using Firefox 2.* you will still see very chunky/ugly downsampled graphics. This is verified fixed (and pretty!) in Firefox 3.*.
Actually, better upscaling got backed out on the trunk due to some issues with it. See bug 381661 for more information.
Ryan - thanks for the clarification - it's smooth *downscaling* that is fixed in 3.0 and not fixed in 2.0.
Hm, staring until my eyes bled with trunk 20070614 I could not find or reproduce chunkiness in the image upscaling or find inferiority to IE7 / Photoshop upscaling. Perhaps this got quietly fixed in Cairo? Noting this on bug 381661. Let's carry upscaling discussion there.
Duplicate of this bug: 392587
Product: Core → Core Graveyard