Implement image-rendering: pixelated

Status: ASSIGNED (defect)
Opened 6 years ago; last updated 2 months ago

People: (Reporter: fryn, Assigned: dholbert)

Tracking: (Blocks 3 bugs, {dev-doc-needed})
Version: Trunk
Points: ---
Bug Flags: in-testsuite ?
Firefox Tracking Flags: (Not tracked)
Whiteboard: [waiting to land until bug 1072703 is ready]

Attachments: (2 attachments, 2 obsolete attachments)

Reporter

Description

6 years ago
According to the CSS4 Images Module Working Draft — http://www.w3.org/TR/css4-images/#the-image-rendering — and our documentation — https://developer.mozilla.org/en-US/docs/CSS/image-rendering — the image-rendering property should accept the value "pixelated", which specifies nearest-neighbor upscaling and "auto" downscaling.

This behavior is exactly what we want for many content-provided images in the browser that are usually low resolution but sometimes overly high resolution. The most ubiquitous example is favicons, which should be nearest-neighbor upscaled from 16x16 on HiDPI displays but "auto" downscaled when too large.
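For reference, the upscaling half of the requested behavior is plain nearest-neighbor sampling. A minimal illustrative sketch of that algorithm (not Gecko code, just the sampling rule the spec names):

```python
def scale_nearest(pixels, new_w, new_h):
    """Nearest-neighbor scaling of a 2D grid of pixel values."""
    old_h, old_w = len(pixels), len(pixels[0])
    # Each destination pixel copies the source pixel its center maps back to.
    return [[pixels[y * old_h // new_h][x * old_w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

# Upscaling a 2x2 checkerboard to 4x4 keeps hard pixel edges:
src = [[0, 1],
       [1, 0]]
print(scale_nearest(src, 4, 4))
# [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0], [1, 1, 0, 0]]
```

The same formula handles downscaling too -- it simply skips source pixels instead of repeating them.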

Comment 1

6 years ago
Upscaling to non-integer factors looks bad with naïve nearest-neighbour scaling. Nearest-neighbour with antialiasing would be better in these cases.

Comment 2

6 years ago
(In reply to Greg Edwards from comment #1)
> Upscaling to non-integer factors looks bad with naïve nearest-neighbour
> scaling. Nearest-neighbour with antialiasing would be better in these cases.

For icons, I agree. But, at low scale factors, antialiasing also decreases the appearance of pixelization. That's good for icons, but bad for pixel art. I think we'd need a way to turn antialiasing on or off.

Also, are you sure that nearest-neighbor with antialiasing would produce superior results to other scaling algorithms (like EPX aka Scale2x)--both in quality and speed?

Comment 3

6 years ago
Scale2x only works at (as it says on the tin) twice the scale. It's also inappropriate here since the w3c spec calls for the image to be rendered "as if it's made from large pixels." The language is unclear but it strikes me that the image should be treated as a vector image made entirely out of squares, then rendered to the best of the UA's ability.

I'm not sure about performance. It would presumably depend on the algorithm and performance/quality tradeoffs probably exist.

Comment 4

6 years ago
(In reply to Greg Edwards from comment #3)
> Scale2x only works at (as it says on the tin) twice the scale. It's also
> inappropriate here since the w3c spec calls for the image to be rendered "as
> if it's made from large pixels." 

I wasn't referring to using Scale2x for image-rendering: pixelated. I was saying that I think Scale2x would work better for scaling up favicons. I don't think image-rendering: pixelated should be used for that case.

Comment 5

6 years ago
And, of course, the most common case for scaling up favicons is converting 16x16 to 32x32--exactly what Scale2x does. Anything smaller can be downscaled, and anything larger can have the filter done twice and then downscaled (to add some antialiasing).
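For readers unfamiliar with it, Scale2x (aka EPX) doubles an image while filling in diagonal edges from matching neighbors. A rough sketch of the published algorithm (my own illustrative Python, with an assumed clamp-to-border policy for edge pixels; not from any browser):

```python
def scale2x(px):
    """Scale2x/EPX: double a 2D pixel grid, preserving diagonal edges.
    Out-of-bounds neighbors are clamped to the nearest border pixel."""
    h, w = len(px), len(px[0])
    get = lambda y, x: px[max(0, min(h - 1, y))][max(0, min(w - 1, x))]
    out = [[None] * (w * 2) for _ in range(h * 2)]
    for y in range(h):
        for x in range(w):
            b, d = get(y - 1, x), get(y, x - 1)   # above, left
            f, hh = get(y, x + 1), get(y + 1, x)  # right, below
            e = px[y][x]
            e0 = e1 = e2 = e3 = e
            # Expand E into a 2x2 block, pulling in matching diagonal neighbors.
            if b != hh and d != f:
                if d == b: e0 = d
                if b == f: e1 = f
                if d == hh: e2 = d
                if hh == f: e3 = f
            out[2 * y][2 * x], out[2 * y][2 * x + 1] = e0, e1
            out[2 * y + 1][2 * x], out[2 * y + 1][2 * x + 1] = e2, e3
    return out

# A lone corner pixel gets smoothed along the diagonal:
print(scale2x([[1, 0],
               [0, 0]]))
# [[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
```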

Comment 6

6 years ago
I believe you're looking for Bug 828508 or Bug 854956. This bug is about implementing image-rendering: pixelated. And for what it's worth, standard UI behaviour on Mac OS X and Windows 8.1 is to upscale to 2x with nearest neighbour since it has more fidelity to the original, and (uniform) blockiness is more tolerable than blurriness at high pixel densities.

You're welcome to file a request for image-rendering: -moz-hqx if you like, but that sort of thing would be better discussed with the W3C.

Comment 7

6 years ago
Okay, how about we forget what I said about favicons. I only brought that up because it was mentioned in the original post. The point of my comment is that I don't think image-rendering: pixelated should use antialiasing at small scale sizes, even if they are non-integer. I think it will just result in blurry images, obscuring the pixels, which are supposed to be clearly visible for this spec.

The reason you cite for why Mac OS X and Windows 8.1 upscale with nearest neighbor is exactly my reasoning here. Uniform blockiness is what we want from this spec, not blurriness.

Comment 8

6 years ago
Forgot to mention: I'm okay with antialiasing at higher non-integer scale factors. (Windows does this, too.) I would even suggest making a pref for determining what qualifies as a high scale factor.
Assignee

Comment 9

5 years ago
Tentatively taking this.
Assignee: nobody → dholbert
Status: NEW → ASSIGNED
Assignee

Comment 10

5 years ago
The "auto" downscaling requirement has now been removed, per [1]. So, "pixelated" now just means "nearest-neighbor".  (For Gecko, this means it's effectively an alias for -moz-crisp-edges. That may not stay the case, though; the spec allows "crisp-edges" to be a bit more nuanced, and it's conceivable that we'll someday tweak "crisp-edges" to use a different algorithm, while keeping "pixelated" on nearest-neighbor.)

[1] http://lists.w3.org/Archives/Public/www-style/2014Sep/0351.html
Assignee

Comment 11

5 years ago
Posted patch fix v1 — Splinter Review
Attachment #8494128 - Flags: review?(seth)
Assignee

Comment 12

5 years ago
Posted patch reftests patch v1 (obsolete) — Splinter Review
This includes two reftests:
 - The first reftest scales up a 16x8 image (with a grid of 4 colors) to a larger 32x32 square grid, and checks it against an actual hardcoded 32x32 square grid, to be sure no blurring has happened.  This is done for <img>, <embed>, and a CSS background.
 - The second reftest does the reverse -- it scales the 32x32 image down to 16x8, for <img>, <embed>, and a CSS background, and tests it against the 16x8 image.

Note that the spec links in these tests are not currently valid; I'm using "http://www.w3.org/TR/css-images/" since that's (as of today) where the ED says this spec is going to live. (the ED being at http://dev.w3.org/csswg/css-images-3/ )
Attachment #8494195 - Flags: review?(seth)
Assignee

Comment 13

5 years ago
(You'll notice that layout/reftests/w3c-css/submitted/reftest.list already has a commented-out line for "include images3/reftest.list" -- this patch uncomments that line and creates the subdirectory.)
Assignee

Updated

5 years ago
Flags: in-testsuite?
Assignee

Comment 16

5 years ago
Posted patch reftests patch v2 (obsolete) — Splinter Review
(In reply to Daniel Holbert [:dholbert] from comment #12)
> Note that the spec links in these tests are not currently valid; I'm using
> "http://www.w3.org/TR/css-images/" since that's (as of today) where the ED
> says this spec is going to live

Turns out this was just a typo in the ED, per [1].  I've fixed the links in the reftests to now point at http://www.w3.org/TR/css3-images/ (which is actually where the spec-snapshot currently lives) to match the updated ED.
[1] http://lists.w3.org/Archives/Public/www-style/2014Sep/0373.html
Attachment #8494195 - Attachment is obsolete: true
Attachment #8494195 - Flags: review?(seth)
Assignee

Updated

5 years ago
Attachment #8494597 - Flags: review?(seth)
Assignee

Updated

5 years ago
Blocks: 1072703
Assignee

Comment 17

5 years ago
(In reply to Daniel Holbert [:dholbert] from comment #10)
> The "auto" downscaling requirement has now been removed, per [1]. So,
> "pixelated" now just means "nearest-neighbor"
[...]
> [1] http://lists.w3.org/Archives/Public/www-style/2014Sep/0351.html

Update: The CSSWG considered the above change, and resolved to *allow* better downscaling algorithms (e.g. the default one), but it's optional. For reference, see the first resolution in http://lists.w3.org/Archives/Public/www-style/2014Sep/0384.html

I think we should still ship "image-rendering:pixelated" with nearest-neighbor up & downscaling, and implement the (optional) smarter downscaling as a second step. I've filed bug 1072703 to cover that part.

Comment 18

5 years ago
Should we also implement crisp-edges and deprecate moz-crisp-edges before we ship? (not necessarily in this bug).
Assignee

Comment 19

5 years ago
I don't think so.

I posted earlier today about this here:
 https://groups.google.com/d/msg/mozilla.dev.platform/0KYBjCdUMJw/wp3L2O9e5SgJ

Summarizing my thoughts from that post:

 - It's possible (and maybe likely) that the web currently depends on "-moz-crisp-edges" being available. (Note that no browser implements *unprefixed* crisp-edges right now, and the existing per-browser keywords for this [aside from Chrome prerelease] are prefixed & different from each other.) So I'm betting that authors don't bother providing unprefixed fallback, which means it'd be dangerous to unprefix "-moz-crisp-edges" until after we've provided (for some time) an alternative standardized way for authors to ask for the same behavior. And the most interoperable such alternative at the moment is *not* "crisp-edges" but in fact "pixelated", since no engine implements 'crisp-edges' yet as far as I know, while Chrome does have an upcoming "pixelated" implementation.

 - Down the line, we may want to make "crisp-edges" use a different edge-preserving scaling algorithm, other than Nearest-Neighbor (and if so, it'd be nice to make that change before unprefixing; though maybe we can unprefix first, if we know that the algorithm won't change behavior in important ways. Anyway, discussion for another bug.)

 - In the meantime, it's allowable for [-moz-]crisp-edges to be implemented the same as "pixelated"; so it's not a problem that we have two keywords that (for now) map to the same behavior.

So, I don't see any strong reason to wait for changes to "-moz-crisp-edges" before shipping "pixelated"; rather, I think we'll be in a better position to change and/or unprefix "-moz-crisp-edges" *after* we've shipped "pixelated".

Comment 20

5 years ago
I'm not suggesting removing -moz-crisp-edges, merely introducing some kind of warning encouraging them to switch away from it and use pixelated instead.

Comment 21

5 years ago
(In reply to Robert Longson from comment #20)
> I'm not suggesting removing -moz-crisp-edges, merely introducing some kind
> of warning encouraging them to switch away from it and use pixelated instead.

That's not a good idea, because the semantics of the two are different, even if they currently have the same implementation. The pixelated option is specifically for content that is supposed to look pixelated, like 8-bit sprites. Plus, if/when we implement a different downscaling algorithm for pixelated, the implementations will differ.

Unless you meant encouraging them to use crisp-edges. That I agree with. If nothing else, log -moz-crisp-edges as deprecated and suggest using the unprefixed version.
Assignee

Comment 22

5 years ago
(In reply to Robert Longson from comment #20)
> I'm not suggesting removing -moz-crisp-edges, merely introducing some kind
> of warning encouraging them to switch away from it and use pixelated instead.

Ah, OK - sorry for misinterpreting. That might be worthwhile, but it's probably more worth thinking about as we get closer to unprefixing -moz-crisp-edges. (Which is not covered by this bug, and which I'm not suggesting we do right now.)

In any case, I don't think any such warning should be a prerequisite for shipping "pixelated".

(In reply to Terrell Kelley from comment #21)
> That's not a good idea, because the semantics of the two are different

(In theory, yes; but I doubt that many authors are *really* making an intentional choice to get the "crisp-edges" behavior (and *not* asking for pixelated behavior) when they use -moz-crisp-edges. I'd bet that authors are at least as likely to be going for the pixelated look when they use "-moz-crisp-edges", and this is just our only way to give it to them currently.)

Comment 23

5 years ago
> (In theory, yes; but I doubt that many authors are *really* making a
> intentional choice to get the "crisp-edges" behavior (and *not* asking for
> pixelated beahvior) when they use -moz-crisp-edges.  I'd bet that authors
> are at least as likely to be going for the pixelated look when they use
> "-moz-crisp-edges", and this is just our only way to give it to them
> currently.)

Yeah, I didn't think about the fact that people currently use -moz-crisp-edges to get pixelated sprites. But I have encountered people who use it just to keep things from getting blurry. So I'd suggest recommending both, and letting the author decide. That way, you don't accidentally imply that crisp-edges is going away.

Comment 24

5 years ago
Comment on attachment 8494128 [details] [diff] [review]
fix v1

Review of attachment 8494128 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good!
Attachment #8494128 - Flags: review?(seth) → review+

Comment 25

5 years ago
Comment on attachment 8494597 [details] [diff] [review]
reftests patch v2

Review of attachment 8494597 [details] [diff] [review]:
-----------------------------------------------------------------

This looks good to me.

One question: if we did decide later to use the default scaling algorithm when downscaling, which it sounds like the spec still allows us to do, would we need to change the downscaling test here? It seems like at a minimum it'd need to be marked fuzzy.
Attachment #8494597 - Flags: review?(seth) → review+

Comment 26

5 years ago
Continuing to talk about the downscaling test: I'm mainly curious about whether you'd have designed the test differently in that case because it seems like we're planning to contribute these tests to the W3C, and in that case it seems like we need to ensure that tests allow the full range of legal implementations, and not just the one we chose to implement.
Assignee

Comment 27

5 years ago
The downscaling test (reftest #2) works in Firefox with both the default scaling algorithm and with nearest-neighbor (no fuzziness needed); i.e. it passes with and without the actual fix applied. I just included it for symmetry/completeness.

We don't have any exact spec language yet, regarding what's allowed for downscaling, but the w3c resolution was in favor of allowing browsers to do something "prettier" for downscaling.   Given that this reftest is a scenario where the correct downscaled result is pretty obvious & deterministic (every result-pixel maps directly to a rect that's all a single color in the original image), I think it's reasonable to expect that whatever algorithm a UA chooses, it should get this right, if it's "prettier" (better) than NN.

So, I'm inclined to leave that test in, perhaps with a comment referencing the spec language once it exists.
Assignee

Comment 28

5 years ago
(Also: for the record, I'm currently planning on getting a patch for bug 1072703 before landing this, and then landing the two bugs together, due to ehsan's uneasiness about interop on the intent-to-ship thread. I'm on PTO today, but hopefully I'll have a patch or more information on that bug tomorrow. It's a bit complex, since we may end up doing multiple individual scale operations, per bug 1072703 comment 2, so I'm not 100% sure it's possible to know whether the overall drawing operation is an upscale or a downscale.)
Assignee

Comment 29

5 years ago
I'm posting an updated reftests patch here, with reftests that I've developed while working on bug 1072703 (but which are valid for this bug).

Notable changes:
 1) The first reftest (from the previous patch-version) now exercises <object> (along with <img>/<embed>/CSS-background)
 2) I've added a test that's the same as the first one, but with a 180-degree rotation applied. (This catches a bug that I'd had in an early version of my patch for bug 1072703, because the transform ends up producing a rect with a negative (but still larger-in-magnitude) height & width. My patch for bug 1072703 will use std::abs() on transformed heights & widths before comparing them, to allow for this sort of rotation.)
 3) I added a reftest for SVG's image-rendering elements: <image>, <pattern>, and <feImage>
 4) I added a reftest to assert that SVG-as-an-image will use its own, local "image-rendering" value for its raster image resources (since it's got its own private CSS cascade), rather than using "image-rendering" from the host document.
 5) I've removed the downscaling test mentioned in comment 26 and comment 27, because (per the beginning of comment 27) it trivially passes with any sane image downscaling algorithm. I'll be adding more useful downscaling tests over in bug 1072703. (Those won't be in our w3c-css test directory, because of the CSSWG resolution to allow a variety of downscaling behaviors.)

I'm not bothering with another round of review (just carrying forward the earlier r+), but feedback is definitely welcome.
Attachment #8494597 - Attachment is obsolete: true
Assignee

Updated

5 years ago
Whiteboard: [waiting to land until bug 1072703 is ready]

Comment 30

4 years ago
Really looking forward to seeing this implemented.

Comment 31

4 years ago
(In reply to Franpa_999 from comment #30)
> Really looking forward to seeing this implemented.

Primarily for upscaling. I really loath the blurry Bilinear filter currently used and the Crisp Edges thing looks pretty bad depending on the source material.
Assignee

Comment 32

4 years ago
(In reply to Franpa_999 from comment #31)
> > Really looking forward to seeing this implemented.

(Just a heads-up: I'm not actively working on getting this landed right now -- more work is needed on bug 1072703 before this can land, and other features are higher-priority than that at the moment.)

> Primarily for upscaling.

Ah! Then, good news -- "-moz-crisp-edges" should have you covered. That behaves exactly how "pixelated" will behave once it's implemented, when upscaling at least.

> and the Crisp Edges thing looks pretty bad depending on the source
> material.

...this is when upscaling an image? or downscaling?  "-moz-crisp-edges" just triggers a Nearest-Neighbor scaling algorithm [and that's exactly what the CSS spec requires for "pixelated" & upscaling].

If you're seeing results in Firefox + "-moz-crisp-edges" + upscaling that e.g. differ from Chrome + "pixelated" + upscaling, please file a bug, because they should be the same.

Comment 33

3 years ago
WebKit Nightly support: https://bugs.webkit.org/show_bug.cgi?id=146771

Comment 34

3 years ago
Shouldn't pixel mixing be the ideal algorithm to scale pixel art (both up and down) while keeping the "pixeliness"?

With integer scales, it upscales equivalent to nearest-neighbor and downscales equivalent to averaging (box filter).

With non-integer scales, it produces more evenly-sized pixels than nearest-neighbor, which IMO is more visually pleasant.

http://entropymine.com/imageworsener/pixelmixing/
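To make the suggestion concrete, here is a sketch of pixel mixing as described at that link: each destination pixel is the area-weighted average of the source pixels its footprint covers, treating source pixels as unit squares. (Illustrative Python of my own, not taken from Image Worsener or any browser.)

```python
def scale_pixel_mix(pixels, new_w, new_h):
    """Pixel mixing: each destination pixel averages the source pixels it
    overlaps, weighted by overlap area (source pixels are unit squares)."""
    old_h, old_w = len(pixels), len(pixels[0])
    sx, sy = old_w / new_w, old_h / new_h  # source extent of one dest pixel
    out = []
    for y in range(new_h):
        y0, y1 = y * sy, (y + 1) * sy      # dest row's footprint in source space
        row = []
        for x in range(new_w):
            x0, x1 = x * sx, (x + 1) * sx  # dest column's footprint
            total = area = 0.0
            for j in range(int(y0), min(old_h, int(y1) + 1)):
                oy = max(0.0, min(y1, j + 1) - max(y0, j))
                for i in range(int(x0), min(old_w, int(x1) + 1)):
                    ox = max(0.0, min(x1, i + 1) - max(x0, i))
                    total += pixels[j][i] * ox * oy
                    area += ox * oy
            row.append(total / area)
        out.append(row)
    return out

# At an integer factor the result is exactly nearest-neighbor:
print(scale_pixel_mix([[0, 1], [1, 0]], 4, 4)[0])  # [0.0, 0.0, 1.0, 1.0]
# At a non-integer factor, straddled destination pixels blend proportionally:
print(scale_pixel_mix([[0, 1]], 3, 1))
```

Note that at integer factors each destination footprint lies inside a single source pixel, so the average degenerates to that pixel's value -- i.e. exactly nearest-neighbor.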

Comment 35

3 years ago
I agree, Pixel Mixing is most in line with the spirit of the spec, even though the spec's wording is very ambiguous. It's also, IMO, the best algorithm for scaling UI, games, and icons, and is certainly worth having available to web apps.

Updated

2 years ago
Blocks: css-images-3
Duplicate of this bug: 1432050

Updated

11 months ago
Blocks: 1081224

Updated

9 months ago
See Also: → 1496617

Comment 37

5 months ago

Any chance this will get some attention? I don't know what the spec used to say, but it seems fairly unambiguous now:

pixelated: The image must be scaled with the "nearest neighbor" ..., to preserve a "pixelated" look as the image changes in size.

Every dev and question I've seen on this topic wants simple "nearest neighbor" to be what all browsers do when "pixelated" is selected.

The spec also makes it 100% clear that choosing "crisp-edges" is not asking for the same thing as "pixelated" and that at anytime on any browser "crisp-edges" could do something other than "nearest neighbor" and not give the results the dev wants.

Comment 38

5 months ago

(In reply to Gregg Tavares from comment #37)

> choosing "crisp-edges" is not asking for the same thing as "pixelated" and that at anytime on any browser "crisp-edges" could do something other than "nearest neighbor" and not give the results the dev wants.

Indeed, straightforward Nearest Neighbour would be great.

According to the example image in the spec, crisp-edges may mean pretty much anything including totally non-pixelated things, depending on the browser. I have no idea why we need such broad nonspecific things in the spec in the first place.

My SmartUpscale extension for preventing distortion of integer-ratio-scaled images currently uses -moz-crisp-edges, but it actually needs Nearest Neighbour with a 100% predictable result. Too bad that even pixelated is described in the spec not as “nearest neighbor”, but as “"nearest neighbor" or similar algorithm”, so surprises are still possible.

Comment 39

5 months ago

Since the spec allows a "similar" algorithm, I still think Pixel Mixing is the better choice; you get the nearest-neighbor behavior you expect for integer up-scaling. For all other cases, the look is much more even and less aliased than nearest-neighbor.

Comment 40

5 months ago

Please, please PLEASE just use nearest neighbor.

Because of other issues with the spec you can't always know exactly how many pixels will be displayed.

You make a 640x480 canvas, but it's displayed on a screen with a 1.33 device-pixel ratio, so the actual size is 851x638 or 852x639 depending on the browser. AFAICT the majority of devs that asked for a 640x480 image-rendering: pixelated; canvas don't want it to suddenly get antialiased and blurred out in that condition.

Devs picking pixelated want an aliased image. That's the entire point of pixelated

Comment 41

5 months ago

I'm a dev picking pixelated, and I want pixel mixing. My use case is displaying sprite art optimized for 1x displays, whose blocky appearance I want to maintain on 2x displays without looking awful on 1.5x and 2.5x displays. My current approach is to use image-rendering: auto and then have a chain of media queries to turn on crisp-edges at exactly 1x, 2x, 3x, etc. 1.5x displays get the very blurry bilinear scaling, which is still better than the awful results of nearest neighbour but significantly worse than pixel mixing.

Favicons--especially 16x16 favicons being displayed on 1.5x and 2.5x displays--are also a very compelling case for pixel mixing.

Nearest neighbour scaling is awful at non-integer scales--especially between 1x and 2x. I can't understand how any web developer would prefer it to anything else. Please check the Image Worsener code for a CPU-optimized implementation with some clever performance tweaks.

Please note that pixel mixing and nearest neighbour, by design, will always give the same results for integer scales. At non-integer scales, pixel mixing is a kind of 'perfectly supersampled' version of nearest neighbour.

Comment 42

5 months ago

If pixel mixing means this

http://entropymine.com/imageworsener/pixelmixing/

Then it is in no way compatible with the definition of "pixelated"

Comment 43

5 months ago

I would prefer uneven pixel rendering to occur at non-integer scaling than for it to be blurred/smoothed/filtered/blended.

https://colececil.io/blog/2017/scaling-pixel-art-without-destroying-it/ seems to have a shader implementation of pixel mixing, if I interpret it correctly.
