The current implementation of matrix decomposition, interpolation, and recomposition differs from Safari's. The spec was updated with the correct algorithm, so the code can now be updated. See http://jsfiddle.net/Vv84m/6/embedded/result/ for the differences between Safari and Firefox.
Created attachment 830755 [details] [diff] [review]
Update the de/recompose matrix + test updates

try server: https://tbpl.mozilla.org/?tree=Try&rev=52eef16fe807
Attachment #830659 - Attachment is obsolete: true
Assignee: nobody → cabanier
OS: Mac OS X → All
Priority: -- → P3
Hardware: x86 → All
Comment on attachment 830755 [details] [diff] [review]
Update the de/recompose matrix + test updates

There are build issues on Windows that I need to fix first, and I also have to fix the style.
The Windows try bots don't give me any useful information, so I will have to wait until next week when I'm back in the office.
Created attachment 8335833 [details] [diff] [review]
Update the de/recompose matrix + updates to the test file
Attachment #830755 - Attachment is obsolete: true
Comment on attachment 8335833 [details] [diff] [review]
Update the de/recompose matrix + updates to the test file

try run: https://tbpl.mozilla.org/?tree=Try&rev=379a79decc84
Comment on attachment 8335833 [details] [diff] [review]
Update the de/recompose matrix + updates to the test file

Weird floating-point issue on Mac that was caught by the try bots. Will fix.
Created attachment 8337195 [details] [diff] [review]
Update the de/recompose matrix + updates to the test file

Successful try run: https://tbpl.mozilla.org/?tree=Try&rev=9b424e66e724

Previous runs failed because of an incorrect 'abs' function.
Comment on attachment 8337195 [details] [diff] [review]
Update the de/recompose matrix + updates to the test file

Reassigning since David has such a big backlog.
Attachment #8337195 - Flags: review?(dbaron) → review?(bzbarsky)
I am very unlikely to get to this review this week, for what it's worth.
Rik, I'm really sorry this is taking so long. Hopefully in early January... :(
No worries! This is not that urgent.
This is definitely still on my list. Just haven't been able to find a long-enough chunk of uninterrupted time to do a proper job. :(
Attachment #8337195 - Flags: review?(bzbarsky) → review?(mstange)
This patch likely no longer compiles. Let me know if you want it refreshed before reviewing.
(In reply to Rik Cabanier from comment #14)
> This patch likely no longer compiles. Let me know if you want it refreshed
> before reviewing.

That would be ideal so you can land quickly. Thanks, Rik!
Created attachment 8792208 [details] [diff] [review]
937494-update_CSS_matrix_interpolation_to_match_the_spec.diff

In the interests of getting this past the finish line, here's a rebased version of the patch. Unfortunately, a try run reveals two failing tests:

- web platform test: web-animations/interfaces/KeyframeEffect/iterationComposite.html
- mochitest on OSX: dom/animation/test/mozilla/test_transform_limits.html

Both would seem to be failing from a precision/clamping issue, as values such as 3.40282e+38, 3.49691e-7, and -2.62268e-7 are expected to be zero in the tests. However, I'm not sure what the "correct" fix is here.

Rik, do you have any input (or know who might)?

(The try run is at https://treeherder.mozilla.org/#/jobs?repo=try&revision=975dfa61458f)
manishearth, I see that you were recently working on adding DOMMatrix interpolation to Servo. Could you read comment 16 and see if you maybe have some insights for me, so I can get the Firefox interpolation up to spec? Thanks!
Can't think of anything.

Note that the spec has some bugs or inconsistencies (https://github.com/w3c/csswg-drafts/issues/483); you might want to check whether one of those is the issue here.
Canaltinova did the actual implementation and testing. Did you come across anything similar (see comment 16) when implementing this in Servo? We did have that min/max issue, but it shouldn't lead to large floating-point values.
(In reply to Thomas Wisniewski from comment #16)
> Created attachment 8792208 [details] [diff] [review]
> 937494-update_CSS_matrix_interpolation_to_match_the_spec.diff
>
> In the interests of getting this past the finish line, here's a rebased
> version of the patch.
>
> Unfortunately, a try run reveals two failing tests:
> - web platform test:
> web-animations/interfaces/KeyframeEffect/iterationComposite.html
> - mochitest on OSX: dom/animation/test/mozilla/test_transform_limits.html
>
> Both would seem to be failing from a precision/clamping issue, as values
> such as 3.40282e+38, 3.49691e-7, and -2.62268e-7 are expected to be zero in
> the tests. However, I'm not sure what the "correct" fix is here.
>
> Rik, do you have any input (or know who might?)

The code is mixing floats and doubles. That might be causing the rounding issues.
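As a minimal sketch of that suspicion (hypothetical function names, not the actual Gecko code): narrowing an angle to float partway through an otherwise-double computation shifts the sine result by roughly the 1e-7 magnitudes seen in the failing tests, while a pure-double computation leaves only a ~1e-16 residue.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical illustration of mixed float/double precision. Narrowing
// to float loses about nine decimal digits of the radian value, so the
// sine of a "full turn" comes back as ~1.7e-7 instead of ~2.4e-16.
inline double mixedPrecisionSin(double angleDeg) {
    float radians = static_cast<float>(angleDeg * M_PI / 180.0); // narrowing
    return std::sin(static_cast<double>(radians));
}

inline double doubleSin(double angleDeg) {
    return std::sin(angleDeg * M_PI / 180.0); // stays double throughout
}
```

Values like 3.49691e-7 in the test failures are consistent with a float-precision round-off of 2π somewhere in the pipeline.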
(In reply to Manish Goregaokar [:manishearth] from comment #18)
> Can't think of anything.
>
> Note that the spec has some bugs or inconsistencies
> (https://github.com/w3c/csswg-drafts/issues/483), might want to check if one
> of those is the issue here.

There is really no "correct" way to decompose a matrix. We got this code from Apple, since everyone liked how they decomposed matrices, and the Apple engineers are unable to change it since it's part of the OS.
(In reply to Rik Cabanier from comment #20)
> (In reply to Thomas Wisniewski from comment #16)
> > Created attachment 8792208 [details] [diff] [review]
> > 937494-update_CSS_matrix_interpolation_to_match_the_spec.diff
> >
> > In the interests of getting this past the finish line, here's a rebased
> > version of the patch.
> >
> > Unfortunately, a try run reveals two failing tests:
> > - web platform test:
> > web-animations/interfaces/KeyframeEffect/iterationComposite.html
> > - mochitest on OSX: dom/animation/test/mozilla/test_transform_limits.html
> >
> > Both would seem to be failing from a precision/clamping issue, as values
> > such as 3.40282e+38, 3.49691e-7, and -2.62268e-7 are expected to be zero in
> > the tests. However, I'm not sure what the "correct" fix is here.
> >
> > Rik, do you have any input (or know who might?)
>
> The code is mixing floats and doubles. That might be causing the rounding
> issues.

I think you should debug the code (minimize the test to just the failing one and step through the code until you find where the new value is calculated).
Thanks guys. I did try to change the patch to use doubles throughout, but it didn't seem to help. I'll try again when I have the time (including possibly looking over the Servo and WebKit code for insight).
(In reply to Thomas Wisniewski from comment #23)
> Thanks guys. I did try to change the patch to use doubles throughout, but it
> didn't seem to help. I'll try again when I have the time (including possibly
> looking over the Servo and WebKit code for insight).

I suspect the problem is *because* of the doubles.

As for WebKit, you will not find the code in there, because that code matches what Core Animation does.
Ah, I see. Thanks for the clarification.
FYI, with the Servo implementation, 3D translations (and some other 3D matrices) are buggy on the rendering side, so we didn't test some failing cases. Be careful of Servo bugs when looking at the 3D code.
I didn't come across this kind of bug in Servo. But the first thing that came to my mind is type conversion, as Rik mentioned. You can see the implementation changes here in Servo: https://github.com/servo/servo/pull/13188/files. We had to change some parts because of bugs in the spec (you can see the link in comment 18). And yeah, the 3D implementation still has some bugs, but the 2D implementation looks good.
Just a quick update. I tinkered with making everything all-float or all-double, but the same web platform test failed the same way. Digging into why, it actually came down to this chunk of the spec's "Interpolation of decomposed 2D matrix values" algorithm:

> // Don't rotate the long way around.
> if (!angleA)
>   angleA = 360
> if (!angleB)
>   angleB = 360

For the failing tests, angleA = angleB = 0.0, so both are changed to 360. Then later, when performing the matrix rotation, sin(360deg) is taken, which does *not* equal 0.0 as sin(0) does. As such, the tests will not end up with zeros where they expect to see them.

Since WebKit is using the same algorithm, I'd imagine they would also run into this same failure, but the related tests won't run on my Linux WebKit trunk build as-is (WebKit doesn't yet seem to support the necessary APIs to run the tests, and neither does Chrome).

https://github.com/WebKit/webkit/blob/master/Source/WebCore/platform/graphics/transforms/TransformationMatrix.cpp#L1517
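That step can be sketched in isolation (hypothetical names, not the patch's actual code). The `skipWhenBothZero` flag is an assumed alternative, not spec text: it leaves 0deg-to-0deg interpolations alone instead of bumping both angles to 360.

```cpp
#include <cassert>
#include <cmath>

// Sketch of the spec's "don't rotate the long way around" adjustment and
// the sine taken in the later rotation step. With skipWhenBothZero set,
// a 0 -> 0 interpolation keeps an angle of 0, whose sine is exactly 0.
inline double rotationSine(double angleA, double angleB, double progress,
                           bool skipWhenBothZero) {
    if (!(skipWhenBothZero && angleA == 0.0 && angleB == 0.0)) {
        if (angleA == 0.0) angleA = 360.0;
        if (angleB == 0.0) angleB = 360.0;
    }
    double angle = angleA + (angleB - angleA) * progress;
    return std::sin(angle * M_PI / 180.0); // sin(2*pi) != 0.0 in floating point
}
```

Following the spec text (flag off), the sine comes back as a tiny nonzero residue, which the rotation step then scales up into the values the tests reject.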
(In reply to Thomas Wisniewski from comment #28)
> For the failing tests, angleA = angleB = 0.0, and so both are changed to
> 360. Then later when performing matrix rotation, sin(360d) is taken, which
> does *not* equal to 0.0 as sin(0) does. As such the tests will not end up
> with zeros where they expect to see them.

Are you making sure that you're always using floats throughout, and using sinf, cosf, etc.?
Created attachment 8794563 [details] [diff] [review]
937494-update_CSS_matrix_interpolation_to_match_the_spec.diff

(In reply to Rik Cabanier from comment #29)
> Are you making sure that you're always using floats throughout and are using
> sinf, cosf, etc

I'm about as sure as I can be, given how new I am to the Firefox codebase. Someone more familiar would have to confirm whether it's fully correct, but when I comment out that block of code from the algorithm (so that 360deg/2pi isn't added to the angle), the tests pass. And stepping through the code confirmed that the only change that makes is that the value that goes into cos/sin for the rotation step is 360deg instead of 0.

So even a "perfect" patch should still fail these same tests, simply because:

> 0.0l != sin(2.0l * M_PI)
> 0.0f != sinf(2.0f * static_cast<float>(M_PI))

And that's effectively the sin computation that the spec is having the browsers run in the rotation step for the test inputs, which only get scaled up from there (and the tests end up with a non-zero value where they didn't expect one).

WebKit seems to be running the double variant of the code, while this patch is currently using the float version. That to me implies that the tests will also fail on WebKit, unless I'm missing something. As such, I'm going to try to confirm whether WebKit has the same problem with the same inputs.

If it does, we'll know that either the spec needs to change (perhaps so that if both angles are 0, don't bother adding 360deg to them, which seems to do the trick), or the tests need to reflect the non-zero output. And if WebKit does somehow compute what these tests expect, then hopefully it won't take too long to find out what this patch is doing differently.
Attachment #8792208 - Attachment is obsolete: true
It turns out that WebKit's code does have the same issue. For one of the failing tests, this is the matrix interpolation being performed:

> [ 1 0 0                          [ 2 0 0
>   0 1 0  -> 50% progress to ->     0 2 0
>   0 0 1                            0 0 1
>   0 0 0]                          20 0 0]

WebKit gives the same results that Firefox does, just with double precision:

> [1.5  ~0 0
>  ~0  1.5 0
>   0    0 1
>  10    0 0]

Because those two matrix components aren't quite zero (a C++ boolean comparison confirms that), the test in question will also fail on WebKit, once they support the API well enough, and assuming the test is added to the suite as-is. In addition, if I change their logic as I suggested above (so the 360deg isn't added when the angles are both zero), the numbers do equal zero and WebKit should pass the test.

So if the spec wants precise zero values when interpolating from 0deg -> 0deg, then it probably should not be adding 360deg to the values in that case. And if it doesn't care, then these tests need to be changed to expect a near-zero value within some reasonable threshold for either single or double precision (presumably double, since that's what the spec and WebKit seem to be using; I don't mind changing my patch to doubles as well).

For the record, here is the code fragment I had WebKit run in order to do the interpolation, in case I'm missing something:

> TransformationMatrix mat1(1.0, 0.0, 0.0, 0.0,
>                           0.0, 1.0, 0.0, 0.0,
>                           0.0, 0.0, 1.0, 0.0,
>                           0.0, 0.0, 0.0, 1.0);
> TransformationMatrix mat2(2.0, 0.0, 0.0, 0.0,
>                           0.0, 2.0, 0.0, 0.0,
>                           0.0, 0.0, 1.0, 0.0,
>                           20.0, 0.0, 0.0, 1.0);
> mat2.blend(mat1, 0.5);

I compiled this on a trunk SVN build that I synced just a day or two ago, and ran it with the GTK minibrowser.
Nice detective work! Are you proposing we should change the spec or are you ok with the current behavior? I'm unsure if Safari can change this since they use CA, so maybe we can leave this edge case as-is?
Sure, if WebKit's almost certainly stuck with this probable flaw, then let's just change the failing tests so they accept a number "reasonably close to zero" instead of requiring zero. I'll try to get a reasonable implementation of that done ASAP. I'll also adjust my patch to use doubles instead of floats, since that's what the spec and WebKit are using. Or if you think the spec shouldn't care about float vs double, then it would be best to re-word it so it isn't saying "double" (and I'll make my test-changes tolerate a single-precision error here, rather than double).
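A sketch of what "reasonably close to zero" might look like in those test updates (the helper name and the 1e-6 tolerance are assumptions, not taken from the actual patch): the residues seen so far are about 2e-16 from double trig and about 3.5e-7 from float trig, so a band of 1e-6 accepts both precisions while still rejecting genuinely wrong values.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical near-zero predicate for the test updates: wide enough to
// cover both the double (~1e-16) and float (~1e-7) sine residues of a
// full-turn rotation, but tight enough to catch real interpolation bugs.
inline bool nearZero(double value, double tolerance = 1e-6) {
    return std::abs(value) <= tolerance;
}
```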
There were a couple of heated debates over 'double' vs 'float'. When I wrote the patch, the arguments for 'float' had seemingly prevailed, mostly because floats use less memory and are faster on ARM and on GPUs. However, there was some feedback from the TAG that indicated that float would be dropped, so we didn't make the spec change. (Well, I did, but then changed it back right away.)

I agree that you should move to double since it matches the current spec and WebKit. Even if the spec moves to float, you will still be at least as good.
Mats, I'm not sure who to ask about this, so feel free to send me to someone more appropriate, but what should I do here? Looking over the codebase, floats are being used everywhere for Matrix math, and it strikes me as silly to just use doubles internally for these specific interpolation algorithms. But the spec wants us to use doubles, not floats, as Rik just confirmed in comment 34, so what is the best thing to do here? I'm guessing that flipping the gfx::Float typedef over to "double" and dealing with the fallout wouldn't fly (due to performance, memory-use, or general breakage concerns)? If so, I don't personally mind just keeping things as floats in my patch, and changing the tests in question (see comment 31) to accept a single-precision error... but then we won't match the spec. Which makes me think that just using doubles internally is the way to go for now. But in that case, would it be better for me to implement my own one-off classes/inline double-based matrix code just for this patch, or would it be considered better for me to go the extra mile and templatize/reuse the existing gfx matrix/point/etc classes?
Kip might have some ideas here, because he did some work to templatize the gfx/2d classes in bug 1157984.
I'm a Layout guy, so this isn't my area of expertise. That said, FWIW:

> But the spec wants us to use doubles, not floats

Does the spec explicitly say whether that requirement is a MUST or a SHOULD? If it's unclear, then it might be worth raising a spec issue to have that clarified.

> I'm guessing that flipping the gfx::Float typedef over to "double"

That seems drastic. I seem to recall we intentionally converted a lot of code from gfxFloat (double) to gfx::Float (float) not so long ago. I fixed a regression from that in bug 1091709 comment 25. The solution there was to use double for the multiplications but continue to use float for the struct fields that stored the final result. Could something like that work here too?
> Does the spec explicitly say whether that requirement is a MUST or a SHOULD?

It seems clear-cut that doubles are a must, given that they explicitly use that word in the text: https://drafts.csswg.org/css-transforms/#supporting-functions (Also, Rik's statements in comment 34 corroborate that.)

> That seems drastic.

Agreed. I just mentioned it in case we're thinking of going that route eventually anyhow, or in case we need to argue with the spec maintainers to loosen up the requirements (given that it sounds like the WebKit implementation is wagging the dog entirely for these APIs, and I'm unfamiliar with how much other code/API-space this has the potential to affect).

> Could something like that work here too?

I'm certainly not against using doubles for intermediate work and returning the final results as floats. I would just like to know which implementation strategy we'd prefer for that approach, as all of them have their downsides:
- using in-line math and clear comments
- duplicating/making new double-based matrix classes just for this patch
- templatizing/repurposing the existing classes so they can handle floats or doubles

Or if we don't have a strong preference, I can try them in the order given until we find the "right" one.
(In reply to Thomas Wisniewski from comment #38)
> > Does the spec explicitly say whether that requirement is a MUST or a SHOULD?
>
> It seems clear-cut that doubles are a must, given that they explicitly use
> that word in the text:
> https://drafts.csswg.org/css-transforms/#supporting-functions

Well, isn't that just some pseudo-code to illustrate the algorithm? I wouldn't interpret the use of "double" there as a requirement that double precision is a MUST. The spec should probably call that out explicitly if it intends it to be a normative requirement, IMO.

I don't really have an opinion on how we should implement it, though; I'll leave that to the Graphics guys.
> Well, isn't that just some pseudo-code to illustrate the algorithm?
> I wouldn't interpret the use of "double" there as a requirement
> that double-precision is a MUST. The spec should probably call
> that out explicitly if it intends it to be a normative requirement,
> IMO.

Everything in a spec is normative. If it's not, it has to be called out as such.
> Everything in a spec is normative.

I understand that each step taken in the algorithm is normative, but I don't think the syntax or programming language as such are normative. It's just example code for *one* normative implementation. It can't be anything else, because it needs to be possible to implement it in languages that don't have a "double" type, for example.

If the spec really intends to require double-precision math, then it should say so explicitly, IMHO.
Bug 1254904 might be of interest since it tries to switch to using double precision for some matrix operations. Bas seemed ok with the patch but it's stuck waiting on jwatt to finish reviewing it (and he's on leave until next month).
Thanks, Brian. That patch does make things easier. I used it as a base to convert this patch over to doubles, and all I had to do was copy some more required support methods over to the new MatrixDouble4x4 class that patch adds. Changing over to doubles does cause two more of the mochitests in test_transitions_per_property.html to need adjustment, but other than those test adjustments, I think it will be a pretty clean set of changes. As such, it's probably best to wait until that patch is fully baked before finishing this one.
(In reply to Markus Stange [:mstange] from comment #36)
> Kip might have some ideas here, because he did some work to templatize the
> gfx/2d classes in bug 1157984.

If the spec does not call for a double-precision representation of the result, we can certainly improve the accuracy of the interpolation by using doubles in intermediate calculations. Rather than redefining gfx::Float, I'd suggest simply using double within the implementation of the interpolation function even when the template type specifies only float. This should be documented in the function so that someone doesn't assume it's an error and replace it with the template's type parameter.

Another approach would be to extend Matrix4x4Typed to accept a type parameter to specify float or double precision members, following the model of PointTyped with its template "F" parameter.

As with any code that uses doubles rather than floats, performance is expected to be traded for the increased accuracy, but that may be acceptable. If any spec requires double precision rather than treating it as optional, it will run much slower in a CPU implementation (especially on some ARM processors) and in any GPU implementation (only a small fraction of GPU cores in a consumer graphics chip are capable of double precision, with more available for single precision only).

I hope this helps!
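The first suggestion can be sketched like this (Point2F and RotatePoint are hypothetical stand-ins, not the real gfx classes): the interface stays float, the trig happens in double, and the result is narrowed exactly once at the end.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical stand-in for a float-based gfx point type.
struct Point2F { float x, y; };

// Do the trig and accumulation in double even though the interface is
// float, narrowing only at the very end. The doubles are intentional --
// documented here so a later reader doesn't "fix" them back to float.
inline Point2F RotatePoint(Point2F p, double angleDeg) {
    const double rad = angleDeg * M_PI / 180.0;
    const double s = std::sin(rad); // double intermediates
    const double c = std::cos(rad);
    return { static_cast<float>(c * p.x - s * p.y),   // narrow once
             static_cast<float>(s * p.x + c * p.y) };
}
```

A full 360deg rotation of a point then comes back to within double precision of its start, instead of carrying a float-sized (~1e-7) error.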
> we can certainly improve accuracy of the interpolation by using doubles in
> intermediate calculations

Sure, that's basically what I had in mind.

> Rather than redefining gfx::float

The patch in bug 1254904 is just changing Matrix3x3 to use doubles, and adding a MatrixDouble4x4 class which I can use here, so such drastic redefinitions won't be necessary.

> If any spec requires double-precision rather than treating it as optional,
> it will run much slower [on some CPUs/GPUs]

Agreed. But then, given that the SVG 2 spec is aiming to allow the choice of single- or double-precision math EXCEPT for coordinate transformations, I don't think it's too much of a stretch to presume that a similar concession is in mind for this spec (use of doubles for intermediate results of interpolations and such, with singles permitted elsewhere).

https://svgwg.org/svg2-draft/conform.html#ConformingSVGViewers
> Agreed. But then, given that the SVG 2 spec is aiming to allow the choice
> of single or double precision math EXCEPT for coordinate transformations

I'm unsure why that is still in the spec. I explained to stakagi that his use case works fine with single precision. If people want, I can bring the topic up again.

As I said before, the group settled on float, and we only brought back doubles because there was talk of removing floats from the spec.
> If people want, I can bring the topic up again.

I'm afraid that I'll have to leave it up to you and others to decide if it's worth re-hashing any such debates, Rik (I'm not an expert on whether floats or doubles are necessary or desirable here). However, if doubles aren't necessary in either case, then I would certainly err on the side of allowing floats, simply for the sake of everyone's hardware.

Also, if you do ask stakagi to relax the SVG 2 spec, it would be good to let them know in bug 1254904. I'm sure we can afford tabling both patches for a few more days or weeks to get final/proper consensus here.
Rik, if you feel strongly about that, would you mind bringing it up again? I don't recall the details of the discussion, but I know Takagi-san had some test cases that, after applying a large scale, showed significant artifacts in Gecko. After applying the patch in bug 1254904, however, the artifacts no longer appear. Getting those test cases and proposing an alternative solution might be one step forward.
Relying on many decimal places will break as soon as the numbers get bigger. At the time, I explained to Takagi-san that they need to translate and scale down with matrices and then draw with coordinates. That way, zooming in/out is done in a matrix and the coordinates are applied to this scaled matrix. (FYI, this is how Adobe's PDFs draw, and it works fine for maps, graphs, and fine artwork.)
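A toy illustration of that drawing model (my interpretation, with hypothetical names): sub-unit detail survives a zoom when the large offset lives in the transform, but is rounded away once baked into a float coordinate up front.

```cpp
#include <cassert>

// Hypothetical 1-D illustration. Keeping the point small and putting the
// large translation in the "matrix": the detail is zoomed first, so it is
// still representable when the big offset is finally added.
inline float drawWithMatrix(float bigOffset, float detail, float zoom) {
    return detail * zoom + bigOffset;
}

// Baking the large offset into the coordinate first: at 2^24 the float
// spacing is 1, so a 0.5 of detail is rounded away before any zoom.
inline float drawBakedIn(float bigOffset, float detail, float zoom) {
    float absolute = bigOffset + detail;            // detail lost here
    return (absolute - bigOffset) * zoom + bigOffset; // zoom about the offset
}
```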
Hello Rik, apart from the precision problem, this is necessary to accumulate the transform list (bug 1304886 comment 16) properly. Could you please finish up the patch? Or should we still wait for bug 1254904?