Closed
Bug 916544
Opened 11 years ago
Closed 11 years ago
getComputedStyle() returning inaccurate rgba alpha values on Linux Opt and Linux Debug
Categories
(Core :: CSS Parsing and Computation, defect)
RESOLVED
WONTFIX
People
(Reporter: miker, Unassigned)
References
Details
Attachments
(1 file)
2.09 KB, patch
I have a failing test because getComputedStyle() returns an inaccurate rgba alpha value on Linux Opt and Linux Debug.

Scratchpad:

let {Services} = Cu.import("resource://gre/modules/Services.jsm", {});
let win = Services.appShell.hiddenDOMWindow;
let doc = win.document;
let span = doc.createElement("span");
span.style.color = "rgba(255, 255, 255, 0.7)";
console.log(win.getComputedStyle(span).color);
// rgba(255, 255, 255, 0.698)

We are trying to land a color conversion utility for our devtools... any ideas why this could be happening?
Reporter
Comment 1•11 years ago
This patch contains a simple test case that demonstrates the problem. Only the Linux Opt and Linux Debug oranges are of interest here. Try run: https://tbpl.mozilla.org/?tree=Try&rev=4528b9a6893c
Comment 2•11 years ago

Seems like either nsStyleUtil::ColorComponentToFloat isn't doing what it's supposed to be doing, or some code isn't using it when it should (though nsComputedDOMStyle::SetToRGBAColor does).
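For reference, the serialization step that nsComputedDOMStyle::SetToRGBAColor / nsStyleUtil::ColorComponentToFloat performs can be sketched as follows. This is a JavaScript sketch of the presumed behavior, not the actual C++ code, and serializeAlpha is a made-up name: the stored 8-bit channel is converted back to a fraction of 255, rounded to at most three decimal places.

```javascript
// Hypothetical sketch of the alpha serialization step: convert a stored
// 8-bit channel value (0-255) back to a fractional value with at most
// three decimal places, trimming trailing zeros.
function serializeAlpha(byte) {
  return parseFloat((byte / 255).toFixed(3)).toString();
}

console.log(serializeAlpha(178)); // "0.698" -- the value seen on 32-bit Linux
console.log(serializeAlpha(179)); // "0.702"
console.log(serializeAlpha(255)); // "1"
```

Note that both "0.698" and "0.702" are faithful serializations of the stored byte; the divergence happens earlier, when the specified 0.7 is quantized to a byte.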
Reporter
Comment 3•11 years ago
(In reply to David Baron [:dbaron] (needinfo? me) (busy through Sept. 14) from comment #2)
> Seems like either nsStyleUtil::ColorComponentToFloat isn't doing what it's
> supposed to be doing, or some code isn't using it when it should (though
> nsComputedDOMStyle::SetToRGBAColor does).

It is just strange that this only happens on 32-bit Linux.
OS: All → Linux
Hardware: All → x86
Comment 4•11 years ago
I'd bet it's a rounding difference. We're presumably storing this as an nscolor at some point, which gives each channel 8 bits. So we scale the alpha value from its fractional value up to an integer in the range 0-255: specifically, we get 0.7 * 255 = 178.5. But we can't represent fractional values, so we have to round. If we round up to 179 and then convert back to a fractional value for getComputedStyle, we end up with 179/255 == 0.702 (which is what you're getting on most platforms), but if we round down to 178 and convert back, we end up with 178/255 == 0.698 (which is what you're getting on 32-bit Linux).

I'm not sure why the rounding behavior is different -- not having a 32-bit Linux box/build handy, I can't immediately step through and find out where the rounding happens -- but it's probably because we do something like 0.7f * 255, which creates an intermediate, architecture-dependent float value. On 32-bit systems that intermediate comes out as 178.499999 due to float precision error, so we round it down.

More generally, you can hit this with all sorts of other values -- e.g. replace 0.7 with 0.699, and you'll see the same issue (you'll get 0.698 instead of 0.699 back from getComputedStyle -- at least, I do, on Linux64).

tl;dr: it's a product of rounding error from converting to an integer range (using an arch-dependent intermediate float value). I don't think we can avoid this sort of thing in general, unless we want to allocate more bits for each color channel.
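The round-trip described above can be demonstrated directly. This is a JavaScript sketch, not Gecko's code, and roundTrip is a made-up name: a double-precision intermediate for 0.7 * 255 lands exactly on 178.5 and rounds up, while a single-precision intermediate lands just below 178.5 and rounds down -- the same split the two sets of platforms show.

```javascript
// Scale a fractional alpha to an 8-bit integer and back, as described above.
// toFloat models the precision of the intermediate float value.
function roundTrip(alpha, toFloat = x => x) {
  const byte = Math.round(toFloat(alpha) * 255);
  return (byte / 255).toFixed(3);
}

// Double precision: 0.7 * 255 === 178.5 exactly, so Math.round gives 179.
console.log(roundTrip(0.7));              // "0.702"
// Single precision: Math.fround(0.7) * 255 is just under 178.5, giving 178.
console.log(roundTrip(0.7, Math.fround)); // "0.698"
```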
Comment 5•11 years ago
(In reply to Daniel Holbert [:dholbert] from comment #4)
> I don't think we can avoid this sort of thing in general, unless we want to
> allocate more bits for each color channel.

(I don't think we want to take the perf hit that that would impose, so I suspect this is probably WONTFIX.)
Reporter
Updated•11 years ago
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → WONTFIX