According to the CSS4 Images Module Working Draft — http://www.w3.org/TR/css4-images/#the-image-rendering — and our documentation — https://developer.mozilla.org/en-US/docs/CSS/image-rendering — the image-rendering property should accept the value "pixelated", which specifies nearest-neighbor upscaling and "auto" downscaling. This behavior is exactly what we want for many content-provided images in the browser that are usually low resolution but sometimes overly high resolution. The most ubiquitous example is favicons, which should be nearest-neighbor upscaled from 16x16 on HiDPI displays but "auto" downscaled when too large.
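(For reference, the usage in question would look something like this; the selector and sizes are just illustrative:)

```css
/* Hypothetical example: show a 16x16 favicon at 2x with hard pixel
   edges instead of the default blurry upscale. */
img.favicon {
  image-rendering: pixelated; /* nearest-neighbor upscaling per the draft */
  width: 32px;
  height: 32px;
}
```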
Upscaling to non-integer factors looks bad with naïve nearest-neighbour scaling. Nearest-neighbour with antialiasing would be better in these cases.
(In reply to Greg Edwards from comment #1)
> Upscaling to non-integer factors looks bad with naïve nearest-neighbour
> scaling. Nearest-neighbour with antialiasing would be better in these cases.

For icons, I agree. But, at low scale factors, antialiasing also decreases the appearance of pixelization. That's good for icons, but bad for pixel art. I think we'd need a way to turn antialiasing on or off.

Also, are you sure that nearest-neighbor with antialiasing would produce superior results to other scaling algorithms (like EPX aka Scale2x) -- both in quality and speed?
Scale2x only works at (as it says on the tin) twice the scale. It's also inappropriate here, since the w3c spec calls for the image to be rendered "as if it's made from large pixels." The language is unclear, but it strikes me that the image should be treated as a vector image made entirely out of squares, then rendered to the best of the UA's ability. I'm not sure about performance; it would presumably depend on the algorithm, and performance/quality tradeoffs probably exist.
(In reply to Greg Edwards from comment #3)
> Scale2x only works at (as it says on the tin) twice the scale. It's also
> inappropriate here since the w3c spec calls for the image to be rendered "as
> if it's made from large pixels."

I wasn't referring to using Scale2x for image-rendering: pixelated. I was saying that I think Scale2x would work better for scaling up favicons. I don't think image-rendering: pixelated should be used for that case.
And, of course, the most common case for scaling up favicons is converting 16x16 to 32x32--exactly what Scale2x does. Anything smaller can be downscaled, and anything larger can have the filter done twice and then downscaled (to add some antialiasing).
I believe you're looking for Bug 828508 or Bug 854956. This bug is about implementing image-rendering: pixelated. And for what it's worth, standard UI behaviour on Mac OS X and Windows 8.1 is to upscale to 2x with nearest neighbour since it has more fidelity to the original, and (uniform) blockiness is more tolerable than blurriness at high pixel densities. You're welcome to file a request for image-rendering: -moz-hqx if you like, but that sort of thing would be better discussed with the W3C.
Okay, how about we forget what I said about favicons; I only brought that up because it was mentioned in the original post. The point of my comment is that I don't think image-rendering: pixelated should use antialiasing at small scale factors, even if they are non-integer. I think it would just result in blurry images, obscuring the pixels, which are supposed to be clearly visible per this spec. The reason you cite for why Mac OS X and Windows 8.1 upscale with nearest neighbor is exactly my reasoning here. Uniform blockiness is what we want from this spec, not blurriness.
Forgot to mention: I'm okay with antialiasing at higher non-integer scale factors. (Windows does this, too.) I would even suggest making a pref for determining what qualifies as a high scale factor.
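For what it's worth, the uneven-pixel problem at non-integer factors falls straight out of the index math: each destination pixel samples the source pixel it rounds back to, so at 1.5x some source pixels end up 1 output pixel wide and others 2. A toy sketch (illustrative only -- not Gecko code, and the function name is made up):

```python
def nn_indices(src_len, dst_len):
    """Which source pixel each destination pixel samples
    under plain nearest-neighbour scaling."""
    return [int(d * src_len / dst_len) for d in range(dst_len)]

# 4 source pixels upscaled 1.5x to 6 destination pixels:
idx = nn_indices(4, 6)
print(idx)  # [0, 0, 1, 2, 2, 3]

# How wide each source pixel ends up in the output:
widths = [idx.count(s) for s in range(4)]
print(widths)  # [2, 1, 2, 1] -- uneven, hence the visible artifacts
```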
Tentatively taking this.
Assignee: nobody → dholbert
Status: NEW → ASSIGNED
The "auto" downscaling requirement has now been removed, per http://lists.w3.org/Archives/Public/www-style/2014Sep/0351.html. So, "pixelated" now just means "nearest-neighbor". (For Gecko, this means it's effectively an alias for -moz-crisp-edges. That may not stay the case, though; the spec allows "crisp-edges" to be a bit more nuanced, and it's conceivable that we'll someday tweak "crisp-edges" to use a different algorithm, while keeping "pixelated" on nearest-neighbor.)
This includes two reftests:

- The first reftest scales up a 16x8 image (with a grid of 4 colors) to a larger 32x32 square grid, and checks it against an actual hardcoded 32x32 square grid, to be sure no blurring has happened. This is done for <img>, <embed>, and a CSS background.

- The second reftest does the reverse -- it scales the 32x32 image down to 16x8, for <img>, <embed>, and a CSS background, and tests it against the 16x8 image.

Note that the spec links in these tests are not currently valid; I'm using "http://www.w3.org/TR/css-images/" since that's (as of today) where the ED says this spec is going to live. (The ED being at http://dev.w3.org/csswg/css-images-3/ )
(You'll notice that layout/reftests/w3c-css/submitted/reftest.list already has a commented-out line for "include images3/reftest.list" -- this patch uncomments that line and creates the subdirectory.)
"Intent to implement" thread: https://groups.google.com/forum/?fromgroups#!topic/mozilla.dev.platform/0KYBjCdUMJw
(In reply to Daniel Holbert [:dholbert] from comment #12)
> Note that the spec links in these tests are not currently valid; I'm using
> "http://www.w3.org/TR/css-images/" since that's (as of today) where the ED
> says this spec is going to live

Turns out this was just a typo in the ED, per http://lists.w3.org/Archives/Public/www-style/2014Sep/0373.html. I've fixed the links in the reftests to now point at http://www.w3.org/TR/css3-images/ (which is actually where the spec-snapshot currently lives) to match the updated ED.
Attachment #8494597 - Flags: review?(seth)
(In reply to Daniel Holbert [:dholbert] from comment #10)
> The "auto" downscaling requirement has now been removed, per
> http://lists.w3.org/Archives/Public/www-style/2014Sep/0351.html. So,
> "pixelated" now just means "nearest-neighbor" [...]

Update: The CSSWG considered the above change, and resolved to *allow* better downscaling algorithms (e.g. the default one), but it's optional. For reference, see the first resolution in http://lists.w3.org/Archives/Public/www-style/2014Sep/0384.html

I think we should still ship "image-rendering: pixelated" with nearest-neighbor up & downscaling, and implement the (optional) smarter downscaling as a second step. I've filed bug 1072703 to cover that part.
Should we also implement crisp-edges and deprecate -moz-crisp-edges before we ship? (Not necessarily in this bug.)
I don't think so. I posted earlier today about this here:
https://groups.google.com/d/msg/mozilla.dev.platform/0KYBjCdUMJw/wp3L2O9e5SgJ

Summarizing my thoughts from that post:

- It's possible (and maybe likely) that the web currently depends on "-moz-crisp-edges" being available. (Note that no browser implements *unprefixed* crisp-edges right now, and the existing per-browser keywords for this [aside from Chrome prerelease] are prefixed & different from each other.) So I'm betting that authors don't bother with providing unprefixed fallback, and that means it'd be dangerous to unprefix "-moz-crisp-edges" until after we've provided (for some time) an alternative standardized way for authors to ask for the same behavior. And the most interoperable such alternative at the moment is *not* "crisp-edges", but is in fact "pixelated" (since no engine implements 'crisp-edges' yet, as far as I know, but Chrome does have an upcoming "pixelated" impl).

- Down the line, we may want to make "crisp-edges" use a different edge-preserving scaling algorithm, other than nearest-neighbor. (If so, it'd be nice to make that change before unprefixing; though maybe we can unprefix first, if we know that the algorithm won't change behavior in important ways. Anyway, that's discussion for another bug.)

- In the meantime, it's allowable for [-moz-]crisp-edges to be implemented the same as "pixelated"; so it's not a problem that we have two keywords that (for now) map to the same behavior.

So, I don't see any strong reason to wait for changes to "-moz-crisp-edges" before shipping "pixelated"; rather, I think we'll be in a better position to change and/or unprefix "-moz-crisp-edges" *after* we've shipped "pixelated".
I'm not suggesting removing -moz-crisp-edges, merely introducing some kind of warning encouraging them to switch away from it and use pixelated instead.
(In reply to Robert Longson from comment #20)
> I'm not suggesting removing -moz-crisp-edges, merely introducing some kind
> of warning encouraging them to switch away from it and use pixelated instead.

That's not a good idea, because the semantics of the two are different, even if they currently have the same implementation. The pixelated option is specifically for content that is supposed to look pixelated, like 8-bit sprites. Plus, if/when we implement a different downscaling algorithm for pixelated, the implementations will be different.

Unless you meant encouraging them to use crisp-edges. That I agree with. If nothing else, log -moz-crisp-edges as deprecated and suggest using the unprefixed version.
(In reply to Robert Longson from comment #20)
> I'm not suggesting removing -moz-crisp-edges, merely introducing some kind
> of warning encouraging them to switch away from it and use pixelated instead.

Ah, OK - sorry for misinterpreting. That might be worthwhile, but it's probably more worth thinking about as we get closer to unprefixing -moz-crisp-edges. (Which is not covered by this bug, and which I'm not suggesting we do right now.) In any case, I don't think any such warning should be a prerequisite for shipping "pixelated".

(In reply to Terrell Kelley from comment #21)
> That's not a good idea, because the semantics of the two are different

(In theory, yes; but I doubt that many authors are *really* making an intentional choice to get the "crisp-edges" behavior (and *not* asking for pixelated behavior) when they use -moz-crisp-edges. I'd bet that authors are at least as likely to be going for the pixelated look when they use "-moz-crisp-edges", and this is just our only way to give it to them currently.)
> (In theory, yes; but I doubt that many authors are *really* making an
> intentional choice to get the "crisp-edges" behavior (and *not* asking for
> pixelated behavior) when they use -moz-crisp-edges. I'd bet that authors
> are at least as likely to be going for the pixelated look when they use
> "-moz-crisp-edges", and this is just our only way to give it to them
> currently.)

Yeah, I didn't think about the fact that people currently use -moz-crisp-edges to get pixelated sprites. But I have encountered people who use it just to keep things from getting blurry. So I'd suggest recommending both, and letting the author decide. That way, you don't accidentally imply that crisp-edges is going away.
Comment on attachment 8494128 [details] [diff] [review]
fix v1

Review of attachment 8494128 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good!
Attachment #8494128 - Flags: review?(seth) → review+
Comment on attachment 8494597 [details] [diff] [review]
reftests patch v2

Review of attachment 8494597 [details] [diff] [review]:
-----------------------------------------------------------------

This looks good to me. One question: if we did decide later to use the default scaling algorithm when downscaling, which it sounds like the spec still allows us to do, would we need to change the downscaling test here? It seems like at a minimum it'd need to be marked fuzzy.
Attachment #8494597 - Flags: review?(seth) → review+
Continuing to talk about the downscaling test: I'm mainly curious whether you'd have designed the test differently in that case. We're planning to contribute these tests to the W3C, and if so, we need to ensure that the tests allow the full range of legal implementations, not just the one we chose to implement.
The downscaling test (reftest #2) works in Firefox with both the default scaling algorithm and with nearest-neighbor (no fuzziness needed) -- i.e. it passes with and without the actual fix applied. I just included it for symmetry/completeness.

We don't have any exact spec language yet regarding what's allowed for downscaling, but the w3c resolution was in favor of allowing browsers to do something "prettier" for downscaling. Given that this reftest is a scenario where the correct downscaled result is pretty obvious & deterministic (every result-pixel maps directly to a rect that's all a single color in the original image), I think it's reasonable to expect that whatever algorithm a UA chooses, it should get this right, if it's "prettier" (better) than NN.

So, I'm inclined to leave that test in, perhaps with a comment referencing the spec language once it exists.
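To illustrate why that downscale is algorithm-agnostic: when every destination pixel maps onto a rect that's a single color in the source, nearest-neighbor and an area-averaging ("prettier") filter necessarily agree. A toy sketch (illustrative only; these helpers are made up, not Gecko code):

```python
def downscale_nn(img, fx, fy):
    """Nearest-neighbour: sample one source pixel per destination pixel."""
    return [[row[x * fx] for x in range(len(row) // fx)]
            for row in img[::fy]]

def downscale_box(img, fx, fy):
    """Box filter: average each fx-by-fy block (a 'prettier' algorithm)."""
    h, w = len(img) // fy, len(img[0]) // fx
    return [[sum(img[y*fy + j][x*fx + i] for j in range(fy) for i in range(fx))
             // (fx * fy)
             for x in range(w)] for y in range(h)]

# A 4x4 image made of 2x2 single-colour blocks (like the reftest's grid):
img = [[1, 1, 2, 2],
       [1, 1, 2, 2],
       [3, 3, 4, 4],
       [3, 3, 4, 4]]

# Downscaling by exactly the block size, both algorithms produce the
# same result, so the reftest can't distinguish them:
print(downscale_nn(img, 2, 2))   # [[1, 2], [3, 4]]
print(downscale_box(img, 2, 2))  # [[1, 2], [3, 4]]
```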
(Also: for the record, I'm currently planning on getting a patch for bug 1072703 before landing this, and then landing the two bugs together, due to ehsan's uneasiness about interop on the intent-to-ship thread. I'm on PTO today, but hopefully I'll have a patch or more information on that bug tomorrow. It's a bit complex, since we may end up doing multiple individual scale operations, per bug 1072703 comment 2, so I'm not 100% sure it's possible to know whether the overall drawing operation is an upscale or a downscale.)
I'm posting an updated reftests patch here, with reftests that I've developed while working on bug 1072703 (but which are valid for this bug). Notable changes:

1) The first reftest (from the previous patch-version) now exercises <object> (along with <img>/<embed>/CSS-background).

2) I've added a test that's the same as the first one, but with a 180-degree rotation applied. (This catches a bug that I'd had in an early version of my patch for bug 1072703, because the transform ends up producing a rect with a negative (but still larger-in-magnitude) height & width. My patch for bug 1072703 will use std::abs() on transformed heights & widths before comparing them, to allow for this sort of rotation.)

3) I added a reftest for SVG's image-rendering elements: <image>, <pattern>, and <feImage>.

4) I added a reftest to assert that SVG-as-an-image will use its own, local "image-rendering" value for its raster image resources (since it's got its own private CSS cascade), rather than using "image-rendering" from the host document.

5) I've removed the downscaling test mentioned in comment 26 and comment 27, because (per beginning of comment 27) it trivially passes with any sane image downscaling algorithm. I'll be adding more useful downscaling tests over in bug 1072703. (Those won't be in our w3c-css test directory, because of the CSSWG resolution to allow a variety of downscaling behaviors.)

I'm not bothering with another round of review (just carrying forward the earlier r+), but feedback is definitely welcome.
Attachment #8494597 - Attachment is obsolete: true
Whiteboard: [waiting to land until bug 1072703 is ready]
Really looking forward to seeing this implemented.
(In reply to Franpa_999 from comment #30)
> Really looking forward to seeing this implemented.

Primarily for upscaling. I really loathe the blurry bilinear filter currently used, and the Crisp Edges thing looks pretty bad depending on the source material.
(In reply to Franpa_999 from comment #31)
> > Really looking forward to seeing this implemented.

(Just a heads-up: I'm not actively working on getting this landed right now -- more work is needed on bug 1072703 before this can land, and other features are higher-priority than that at the moment.)

> Primarily for upscaling.

Ah! Then, good news -- "-moz-crisp-edges" should have you covered. That behaves exactly how "pixelated" will behave once it's implemented, when upscaling at least.

> and the Crisp Edges thing looks pretty bad depending on the source
> material.

...this is when upscaling an image? or downscaling? "-moz-crisp-edges" just triggers a Nearest-Neighbor scaling algorithm [and that's exactly what the CSS spec requires for "pixelated" & upscaling]. If you're seeing results in Firefox + "-moz-crisp-edges" + upscaling that e.g. differ from Chrome + "pixelated" + upscaling, please file a bug, because they should be the same.
WebKit Nightly support: https://bugs.webkit.org/show_bug.cgi?id=146771
Shouldn't pixel mixing be the ideal algorithm to scale pixel art (both up and down) while keeping the "pixeliness"? With integer scales, it upscales equivalent to nearest-neighbor and downscales equivalent to averaging (box filter). With non-integer scales, it produces more evenly-sized pixels than nearest-neighbor, which IMO is more visually pleasant. http://entropymine.com/imageworsener/pixelmixing/
I agree, Pixel Mixing is most in line with the spirit of the spec, even though the spec's wording is very ambiguous. It's also, IMO, the best algorithm for scaling UI, games, and icons, and is certainly worth having available to web apps.
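For anyone curious how pixel mixing behaves in the cases discussed above, here's a toy 1-D sketch (illustrative only; the function name is made up). Each destination pixel is the area-weighted average of the source pixels it covers, which reduces to nearest-neighbour at integer upscales and blends only the pixels that straddle a boundary at non-integer ones:

```python
def pixel_mix_1d(src, dst_len):
    """1-D pixel mixing: each destination pixel averages the source
    pixels it overlaps, weighted by the covered area."""
    scale = len(src) / dst_len  # source pixels per destination pixel
    out = []
    for d in range(dst_len):
        lo, hi = d * scale, (d + 1) * scale
        total = 0.0
        s = int(lo)
        while s < hi and s < len(src):
            overlap = min(hi, s + 1) - max(lo, s)
            total += src[s] * overlap
            s += 1
        out.append(total / scale)
    return out

# Integer upscale (2x) behaves exactly like nearest-neighbour:
print(pixel_mix_1d([0, 100], 4))  # [0.0, 0.0, 100.0, 100.0]
# Non-integer upscale (1.5x): only the boundary pixel is blended,
# giving approximately [0.0, 50.0, 100.0]:
print(pixel_mix_1d([0, 100], 3))
```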