Bug 713555 Opened 10 years ago Closed 10 years ago
Use adaptive background color when rendering images directly
As of bug 376997 we render top level images on top of a #222 background. That approach has accessibility issues, besides being completely arbitrary. The Dominant Color add-on does a good job at this by rendering the image on top of its dominant color but with a smaller opacity. I've adapted (read: shamelessly stolen) its getDominantColor function and came up with a quick patch that uses a similar approach to choose the background color of top level images. Here's an example of how it looks now: http://cl.ly/CsvL
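For reference, the core of the dominant-color idea can be sketched like this. This is a hypothetical reimplementation, not the add-on's actual getDominantColor; it assumes a flat RGBA array like canvas ImageData.data:

```javascript
// Minimal sketch of a dominant-color pass over RGBA pixel data.
// `data` is a flat [r, g, b, a, r, g, b, a, ...] array, as returned by
// CanvasRenderingContext2D.getImageData().data.
function getDominantColor(data) {
  const colorCount = {};
  let dominant = null;
  let maxCount = 0;
  for (let i = 0; i < data.length; i += 4) {
    // Skip fully transparent pixels; they carry no visible color.
    if (data[i + 3] === 0)
      continue;
    const key = data[i] + "," + data[i + 1] + "," + data[i + 2];
    const count = (colorCount[key] || 0) + 1;
    colorCount[key] = count;
    if (count > maxCount) {
      maxCount = count;
      dominant = [data[i], data[i + 1], data[i + 2]];
    }
  }
  return dominant; // [r, g, b], or null for a fully transparent image
}
```

The real script additionally quantizes the channels so that near-identical colors fall into the same bucket; that refinement is discussed further down the thread.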
I'm not sure if this approach is appropriate, I just link an external script to MediaDocument that does all the work. One obvious problem with this is that large images will only get the correct background color after they are completely downloaded, but I'm not sure if that's avoidable. Also, I'm not sure if layout/style is the correct place for TopLevelImageDocument.js. This patch breaks most of the pngsuite reftests, I'm looking into that. Builds available here for testing: https://firstname.lastname@example.org/
The approach is generally ok (though note the changes in bug 708431). I do have three concerns: 1) Does this need UX review? 2) How long does that script take to run on a large image? How much memory does it allocate? Seems like for a typical digital camera image it would be at least 40MB and up to 80MB or so (compared to what we allocate now). 3) Since you reused code from the Dominant Color addon, what's the license on the original code?
What does this do for images that have a lot of transparency in them? It sounds (based on the description alone) that it might lead to bad results in that case, since you could end up with a background matching the (limited) foreground.
(In reply to Boris Zbarsky (:bz) from comment #2)
> The approach is generally ok (though note the changes in bug 708431).
>
> I do have three concerns:

Thanks for the fast feedback.

> 1) Does this need UX review?

The current #222 bg needed one, so yes.

> 2) How long does that script take to run on a large image? How much memory
> does it allocate? Seems like for a typical digital camera image it would be at
> least 40MB and up to 80MB or so (compared to what we allocate now).

I'll do some tests with large images. I'm not sure how to profile memory usage, though.

> 3) Since you reused code from the Dominant Color addon, what's the license
> on the original code?

MPL 1.1
(In reply to David Baron [:dbaron] from comment #3)
> What does this do for images that have a lot of transparency in them? It
> sounds (based on the description alone) that it might lead to bad results in
> that case, since you could end up with a background matching the (limited)
> foreground.

The background color is still #222, so the behavior will be the same as the current one as the image approaches 100% transparency. For example: http://cl.ly/Crt1
> I'm not sure how to profile memory usage, though. about:memory is worth a try. Or watching OS memory statistics. Or just running on a memory-limited mobile device and seeing whether your browser gets killed.
(In reply to Boris Zbarsky (:bz) from comment #2)
> 2) How long does that script take to run on a large image? How much memory
> does it allocate? Seems like for a typical digital camera image it would
> be at least 40MB and up to 80MB or so (compared to what we allocate now).

It could take up to 4s and up to 150MB of memory to calculate the dominant color on large images (>2MP), so I optimized some of the operations and made it scale large images down to 500x500, which runs much faster (~220ms for a 2461 × 2461 image), uses only a fraction of the memory, and the changes in the result are barely noticeable.

> 3) Since you reused code from the Dominant Color addon, what's the license
> on the original code?

Btw I just Googled Margaret's name to see if she was a good candidate for ux-review? and found a blog post about dominant favicon colors. Turns out the Dominant Color add-on uses a modified version of her code. Small world! :)
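The 500x500 downscale described above amounts to clamping the sampling dimensions while preserving aspect ratio, before handing them to drawImage. A minimal sketch (function and constant names are hypothetical, not from the patch):

```javascript
// Sketch: cap the sampling canvas at 500x500, preserving aspect ratio.
// Small images pass through unchanged; large ones are scaled down so the
// dominant-color pass touches far fewer pixels.
const MAX_SAMPLE_SIZE = 500;

function getSampleSize(width, height) {
  const scale = Math.min(1, MAX_SAMPLE_SIZE / Math.max(width, height));
  return {
    width: Math.max(1, Math.round(width * scale)),
    height: Math.max(1, Math.round(height * scale)),
  };
}
```

So a 2461 × 2461 image is sampled at 500 × 500, i.e. roughly 4% of the pixels, which matches the reported drop in both time and memory.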
> which runs much faster (~220ms for a 2461 × 2461 image)

That's a 6 megapixel image, right? That's actually kind of small by modern digital camera standards...

What sort of hardware (and OS, for the canvas drawImage bit) were you testing on? Can you attach your test image here? The script is definitely doing some stuff that might be slow in our JS impl; I bet we can make it faster. 220ms is still pretty darn slow...
For what it's worth, I just did a profile on a somewhat smaller image (990x660) that took ~200ms overall, and well over a third of the time before I even start optimizing is the SetElem and GetElem stubs; presumably for the colorCount object... I wonder how we can do that faster.
(In reply to Boris Zbarsky (:bz) from comment #8)
> > which runs much faster (~220ms for a 2461 × 2461 image)
>
> That's a 6 megapixel image, right? That's actually kind of small by modern
> digital camera standards...
>
> What sort of hardware (and OS, for the canvas drawImage bit) were you
> testing on? Can you attach your test image here? The script is definitely
> doing some stuff that might be slow in our JS impl; I bet we can make it
> faster. 220ms is still pretty darn slow...

2.53 GHz i5 running OS X 10.7.2. When optimizing the getDominantColor function the main bottleneck I found was:

> data = Array.map(data, function(val) Math.round(val / 8) * 8);

which took ~2s for the 6 MP image and was responsible for most of the memory usage. While Uint8Array.subarray is slower than Array.slice, the end result is still faster.
One note on the script itself: green and blue are swapped (both places, so the error cancels itself out).
Hmm. The rounding was presumably there so that similar-ish colors get lumped together to some extent, right?

The single biggest speedup I see for the second version of the script is to get rid of the subarray() and destructuring assignment stuff, and just write things out like this:

let red = data[i],
    blue = data[i+1],
    green = data[i+2],
    alpha = data[i+3];

(still keeping the blue/green swap).

After that the colorCount management is the main bottleneck for me. That gets a bit better if I reintroduce the rounding, for what might be a slight time win too; at that point the drawImage call itself is starting to be a significant part of the time. I did the rounding using %, not using Math.round, fwiw. Math.round seems a bit slower, though the difference is pretty slight, so using Math.round might be clearer.

You should measure on Windows with D2D or Linux with RENDER as well, since there the imageData get will involve a readback from the graphics card...
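Putting these suggestions together, the inner loop might look like the sketch below: direct array indexing instead of subarray()/destructuring, and quantization via % instead of Math.round. This is a hypothetical rewrite, not the actual patch code, and it uses the conventional RGBA channel order rather than the script's blue/green swap:

```javascript
// Sketch of the optimized inner loop: read channels by index and bucket
// each one down to a multiple of 8, so near-identical colors share a key.
function countQuantizedColors(data) {
  const colorCount = {};
  for (let i = 0; i < data.length; i += 4) {
    const red = data[i] - (data[i] % 8),
          green = data[i + 1] - (data[i + 1] % 8),
          blue = data[i + 2] - (data[i + 2] % 8),
          alpha = data[i + 3];
    // Ignore fully transparent pixels.
    if (alpha === 0)
      continue;
    // Pack the quantized channels into a single integer key, which is
    // cheaper than building a string per pixel.
    const key = (red << 16) | (green << 8) | blue;
    colorCount[key] = (colorCount[key] || 0) + 1;
  }
  return colorCount;
}
```

The integer key is one possible way to reduce the SetElem/GetElem cost mentioned in comment 9; whether it actually beats string keys in a given JS engine would need measuring.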
We could also use the average color, which can be calculated nearly instantly (scale to 1x1 and get the pixel color), though it doesn't look as good as the dominant color for some images, and we'd have to work around monochromatic pictures.
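The average-color alternative is just a per-channel mean over the pixel data. A hypothetical sketch over a flat RGBA array (in the patch it could instead be approximated by drawing the image into a 1x1 canvas and reading back the single pixel):

```javascript
// Sketch: average color of all non-transparent pixels in a flat RGBA
// array. Returns [r, g, b], or null if every pixel is fully transparent.
function getAverageColor(data) {
  let r = 0, g = 0, b = 0, n = 0;
  for (let i = 0; i < data.length; i += 4) {
    if (data[i + 3] === 0)
      continue;
    r += data[i];
    g += data[i + 1];
    b += data[i + 2];
    n++;
  }
  return n ? [Math.round(r / n), Math.round(g / n), Math.round(b / n)] : null;
}
```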
Can you just set the image as the background image and use background-size: 1px 1px;?
Won't there be a pretty high rate of "flash of default-image-background-color" before the adaptive background color can be used? Regardless of the speed of the JS, it seems that the computation of background color is inherently dependent on network speed. Also, what happens for images that are mostly one color? This solution seems to make those images blend in with the background, which isn't an improvement over what was landed with bug 376997.
My other qualm about this change is that it hurts users' expectations when viewing images. Consistency of the background color is good because users will feel comfortable and have an expectation that two different images are displayed on a "similar pedestal". With this patch, opening up multiple images in succession can cause the user to see a rainbow of colors throughout their browsing session and remove some of the familiarity they may have with "View Image".
(In reply to Jared Wein [:jwein and :jaws] (Vacation December 23-January 3) from comment #15)
> Also, what happens for images that are mostly one color? This solution seems
> to make those images blend in with the background, which isn't an
> improvement over what was landed with bug 376997.

Yeah, so this is actually worse than before bug 376997. Now all single-color images will blend in with the background. With bug 376997 and before, only single-color images that were near or close-to the default background color would blend in.
(In reply to Jared Wein [:jwein and :jaws] (Vacation December 23-January 3) from comment #15)
> Also, what happens for images that are mostly one color? This solution seems
> to make those images blend in with the background, which isn't an
> improvement over what was landed with bug 376997.

Something like this: http://cl.ly/CtLD
Why not just two different backgrounds - a light one and a dark one?
(In reply to Dão Gottwald [:dao] from comment #14)
> Can you just set the image as the background image and use background-size:
> 1px 1px;?

This might not be cheap, performance-wise. (I have the same concern about some of the other proposals here.) Also, you'd need to use device pixels rather than CSS pixels.
This sounds like over-optimization, to me. I can't think of any other image viewer that does anything other than display on a constant background. (Examples to the contrary welcome!) The closest I can think of is Photoshop's checkerboard background. I'm also concerned about UX here: flipping between tabs with apparently-random backgrounds isn't good. I would suggest first experimenting with different approaches as an addon.
(In reply to Justin Dolske [:Dolske] from comment #22)
> I'm also concerned about UX here, flipping between tabs with
> apparently-random backgrounds isn't good.

I wonder if that use case is that common? I rarely have more than 2 images open in tabs, but I agree, that's why I used a low opacity on the background color, so they would all be soft colors. The Dominant Color add-on has an even better look using a radial gradient, but for some reason you can't use a gradient as the background-image of <body>. https://addons.mozilla.org/en-US/firefox/addon/dominant-color/
I am looking at this problem from a different direction and am trying to meet the expectations of 3 main groups (and 1 rare group) with a simple 'patch' -

#1. The regular user who just wants to see a picture with no background distractions.
#2. The graphics guys who make semi-transparent or dark images and expect them to be seen on the light backgrounds that they intended - known as the 'Glazman Syndrome' :)
#3. The graphics guys who view many images of all types and need a constant tonal reference point (see border stuff later) and need to quickly know if an image is semi-transparent or not.
#4. Rare case situations of light semi-transparent images used on dark backgrounds.

The above attachment - Image_Patch_v1 - has been made as a simple .css snippet, so that it can be tested by anyone either in userChrome.css or as a 'userstyle'. It applies a light tinted background to the image itself that is only ever seen when viewing semi-transparent images. When the image is hovered over, a graphics-style checkerboard pattern is applied, but it is only ever seen with semi-transparent images. It also uses a border, as borders have always historically been used, to ease the transition from one area to another. I haven't tried to be too clever with this transition and use box-shadow or similar, as users are used to this kind of hard-edged border style.

Note - when an image has been scaled down to fit, it does not take this border into account and can go into scrollbar overflow... which will need to be fixed.

Once the 'patch' is applied, let's see how it fits the expectations of the groups outlined above -

#1. Regular user - http://img842.imageshack.us/img842/2435/picture1at.jpg ...a regular pretty picture with a simple round-cornered border. Nothing happens on hover and no background is 'in their face.' The blue pill is taken, the story ends.

#2. Maker of semi-transparent or dark images - http://bluegriffon.org/logo_bluegriffon.png ...the image looks as intended, and hovering over it shows that it is a dark image on a transparent background.

#3. The graphics guys viewing images - http://f.cl.ly/items/003x2a2N3E2J0j2n241G/Screen%20Shot%202011-12-27%20at%2011.03.14%20AM.png ...when you viewed this image before in the existing page default, did you know that the browser had a shadow around it? Well, you do now. Now hover over the image. Now load up a similar, but darker image - see how the border colour gives you an instant tonal reference point? This 'patch' gives graphics guys everything they need to know about this image, without the regular user being inconvenienced/confused by it.

#4. Rare case situations - http://img838.imageshack.us/img838/3361/24499678.png ...I've never actually seen this, but it could happen - a semi-transparent light image intended for a dark background. The main reason for using a light tinted image background here was to avoid the 'I am staring at a lightbulb' effect that using pure white backgrounds produces. But here, it achieves another aim - you can still see there is a white ball in this image... and then there is hover. The chances of an image being the exact same shade as the image background are negligible. So, even in this extreme case something will show.

As for the original near-black #333 background problem... what problem? The background has now returned to where it belongs - err, in the background. It can be left as it is and used for viewing videos just as it is. After all, no one ever objects that a cinema is darkened when the film comes up, and it's the same here.
This seems like unneeded complexity. I appreciate the sentiment and what it's trying to do, but I think it's better to keep this simple and predictable.
(In reply to Alex Limi (:limi) — Firefox UX Team from comment #26)
> This seems like unneeded complexity. I appreciate the sentiment and what
> it's trying to do, but I think it's better to keep this simple and
> predictable.

Shouldn't we try to use something predictable but a little bit friendlier to dark stuff over a transparent background? e.g. http://www.msleeper.com/misc/not_that_its_any_of_your_business_but_this_is_what_the_problem_was.png
Try this image in Image_Patch (#eceeef - works fine). data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAAAAACo4kLRAAAAG0lEQVR42mO4AwLvQADBookggo9g0UJw+PkIAJdvZeAMVgxlAAAAAElFTkSuQmCC
I think a good neutral color would be a sufficient and simple solution. People should be able to view all of the images with a consistent experience. A browser should not aim to give "perfect" contrast for each given situation; that is a job for an artist, who designs the situation or the given picture. When viewing images in the browser, most of the problems stem from transparent images anyway, where the background is way too dark or way too bright (white). I've checked Photoshop's dominant default #c0c0c0 in Firefox, and it works well with transparent solid white, dark, and photo-based images too. Here are some examples: http://i.imgur.com/o7O9z.jpg http://i.imgur.com/F91Ii.jpg http://i.imgur.com/ZYpZd.jpg http://i.imgur.com/GbfaC.jpg http://i.imgur.com/DCTlj.jpg
Comment on attachment 584362 [details] [diff] [review] Patch v2 I'm clearing ui-review, since Limi is a more qualified person to offer that, and he seems to disagree with this change in comment 26. It's cool that you're finding a potential use for that dominant color stuff I did, but I agree that this may be an over-engineered solution that leads to unexpected behavior.
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → INVALID
Comment on attachment 584362 [details] [diff] [review] Patch v2 Clearing requests if this is wontfix.
Hey guys, I apologise for resurrecting discussion in this bug tagged WONTFIX, but I was reading the comments, and couldn't find a good reason for killing off the dominant colour request. I have a sample here: http://m8y.org/tmp/testcase242d.xhtml Of an image that is very hard to view in Firefox, along with fast and simple fix code (could possibly be optimised slightly more, but the speeds are already very fast - most of the tiny delay is in loading the image data and writing it to the canvas). Doing a check of pixels is pretty fast, and I can't think of any bad edge cases, since it only applies a new colour if the image itself is not too neutral. Obviously this could be a simple bookmarklet, but it would be nice if you guys would reconsider. I think it makes the circuit diagram *way* more readable. Ditto any other transparent images of dark colours. As noted in the bug text, the calculating code could simply be skipped if the image height/width are over a critical threshold, like, say, a width/height that is more than, oh, 4 million or 2 million pixels. Thanks!
Oh. Also, unlike the patch code, the demo in the prior comment is just calculating the overall value (to make a greyscale colour), and uses standard luminance weightings. I think this causes a more pleasing effect.
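For reference, a greyscale value from standard luminance weightings can be computed like this. This sketch uses the classic Rec. 601 coefficients; the demo may use slightly different weights:

```javascript
// Sketch: perceptual luminance of an RGB pixel using Rec. 601 weights.
// Green contributes most because the eye is most sensitive to it.
function luminance(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}
```

Averaging this value over the image's pixels yields a single grey level, which is what produces the "overall value" effect described above.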
FWIW, I get crashes whenever I try to run the calculation for the second image (bp-835bf932-2ea4-49ba-b39d-445b12120228 and bp-ecf6ddb8-7158-4110-8f46-7308b2120228 for instance). This seems due to OOM, and advocates another reason to limit the processing to "sane" image sizes, apart from performance. I'm happy to file a new bug if needed.
Any reasons why this enhancement was wontfixed? Adaptive background was a wise idea, especially when Firefox now sucks as a viewer for PNGs with alpha channel...
See bug 754539 for an alternative solution for viewing transparent images using a patterned background.