Closed Bug 1430841 Opened 6 years ago Closed 6 years ago

ReduceTimePrecision suffers from float fuzziness

Categories

(Core :: DOM: Core & HTML, enhancement, P1)

Tracking

RESOLVED FIXED
mozilla60
Tracking Status
firefox58 + wontfix
firefox59 + wontfix
firefox60 --- fixed

People

(Reporter: tjr, Assigned: tjr)

References

Details

(Whiteboard: [fingerprinting][fp-triaged])

Attachments

(3 files, 3 obsolete files)

Our ReduceTimePrecision cannot be applied successively in all cases.  That is to say, if we intend to clamp (we do not 'round' time, we clamp it downwards) to a precision of 20, the intention is that ReduceTimePrecision(55) = 40, and ReduceTimePrecision(ReduceTimePrecision(55)) = 40. However, this is not always the case.

Bug 1429764 illustrated this problem. The patch in that bug fixes the surface problem (we should not be calling ReduceTimePrecision multiple times) but the function _should_ behave correctly if we do so. There is _probably_ a subtler bug here we don't have a test for; but if we did, we would fail it. Time may even flow backwards: if we clamp two time values X and Y, where Y = X + epsilon, Y may clamp down while X does not.

The problem is with floats. (Really we use doubles, which are high-range, high-precision floats, but I'll describe them as floats.) Floats have a 'fuzziness' to them that prevents them from representing all numbers exactly. That fuzziness leads to this problem.
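To see that fuzziness concretely, here is a minimal standalone sketch (not Firefox code) that prints a couple of the values from the examples below with extra digits:

> #include <cstdio>
>
> int main() {
>   // Neither 0.02 nor 2064.8338460 has an exact binary representation,
>   // so the nearest representable doubles are printed instead.
>   printf("%.20f\n", 0.02);         // 0.02000000000000000042
>   printf("%.20f\n", 2064.8338460); // ~2064.83384599999999...
>   return 0;
> }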

Floats can represent integer values exactly, so when we call ReduceTimePrecisionAsUSecs, we will be operating (mostly) on integral values. But ReduceTimePrecisionAsMSecs and ReduceTimePrecisionAsSecs use decimal values, and we see approximation issues.  For example, consider the following (real) traces of millisecond values.

For reference, here is our current function. The main logic is the return line and that's what we'll be focusing on.

> double nsRFPService::ReduceTimePrecisionAsMSecs(double aTime)
> {
>   if (!IsTimerPrecisionReductionEnabled()) { return aTime; }
> 
>   const double resolutionMSec = sResolutionUSec / 1000.0;
>   return floor(aTime / resolutionMSec) * resolutionMSec;
> }

Note that all numbers in the examples are the output of the computer, except when indicated by *, in which case it's the output of my calculator and thus does not have float rounding. Also, when the CPU cannot represent a float, instead of saying the CPU 'rounds' the value, I'll say the CPU 'approximates' the value, just to distinguish it from the clamping we do.

Example 1:
A) The clock value is '2064.8338460' however that value is not directly representable, so it is approximated as 2064.83384599999999. We want to clamp it with 0.02 and call ReduceTimePrecisionAsMSecs
B) First we calculate floor(2064.83384599999999 / 0.02) which results in floor(103241.6922999999995*) which is floored down to 103241.00000000000000
C) Then we multiply 103241.00000000000000 by 0.02 and get 2064.82* - but that is not directly representable so it is approximated as 2064.82000000000016

D) Now we call ReduceTimePrecisionAsMSecs again, with 2064.82000000000016
E) First we calculate floor(2064.82000000000016 / 0.02) which results in floor(103241.000000000008*) - that trailing .000000000008 isn't even representable, so it is approximated down to floor(103241.0) which is then floor-ed to the same value: 103241.0
F) Then we multiply 103241.0 by 0.02 and get 2064.82* - which, again, is not directly representable so it is approximated as 2064.82000000000016


In Example 1 we can see that the floats were fuzzy, but the values happened to work out just fine.  Now let's consider an example where they don't.


Example 2:
A) The clock value is '1054.842405' however that value is not directly representable, so it is approximated as 1054.84240500000010. We want to clamp it with 0.02 and call ReduceTimePrecisionAsMSecs
B) First we calculate floor(1054.84240500000010 / 0.02) which results in floor(52742.120250000005*) which is floored down to 52742
C) Then we multiply 52742 by 0.02 and get 1054.84* - but that is not directly representable so it is approximated as 1054.83999999999992

D) Now we call ReduceTimePrecisionAsMSecs again, with 1054.83999999999992
E) First we calculate floor(1054.83999999999992 / 0.02) which results in floor(52741.999999999996*) - which is then floor-ed to 52741
F) Then we multiply 52741 by 0.02 and get 1054.82* - which, again, is not directly representable so it is approximated as 1054.81999999999994


In Example 2 we can see how we clamped 1054.842405 to 1054.84 and then clamped it again down to 1054.82. That is the problem we need to solve.
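That behavior can be reproduced outside of Firefox with a minimal standalone program - a sketch of the same floor-and-multiply logic, not the actual nsRFPService code:

> #include <cmath>
> #include <cstdio>
>
> // The same floor-based clamp as ReduceTimePrecisionAsMSecs, with the
> // precision hardcoded to 0.02 ms.
> double reduce(double aTime) {
>   const double resolutionMSec = 0.02;
>   return floor(aTime / resolutionMSec) * resolutionMSec;
> }
>
> int main() {
>   double once = reduce(1054.842405);
>   double twice = reduce(once);
>   // Per the trace above: 1054.83999999999992, then 1054.81999999999994.
>   printf("%.14f\n%.14f\n", once, twice);
>   return 0;
> }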



------------------
Now there is an interesting edge-case scenario. Consider Example 1 again. Imagine the numbers looked like this, where I am using letters to represent essentially unknown values that would produce the scenario I want to illustrate:

A) The clock value is AAA.AAAAAAA however that value is not directly representable, so the clock value is actually BBBBB.BBBBBB. We want to clamp it with 0.02
B) First we calculate floor(BBBBB.BBBBBB / 0.02) which results in floor(CCCC.0000000*). One would wonder: what would happen if CCCC.0000000 was not directly representable and actually became floor(CCCB.9999999999999)?  

We can find such a scenario through brute force, and when I did it looked like this:

A) The clock value is '2611.140000' however that value is not directly representable, so it is approximated as 2611.13999999999987. We want to clamp it with 0.02 and call ReduceTimePrecisionAsMSecs
B) First we calculate floor(2611.13999999999987 / 0.02) which results in floor(130556.9999999999935*) which is not representable, approximated to 130556.99999999998545, floored down to 130556
C) Then we multiply 130556 by 0.02 and get 2611.12* - but that is not directly representable so it is approximated as 2611.11999999999989

So already we've taken it down further than we should: ReduceTimePrecisionAsMSecs('2611.140000') should equal 2611.14 - but it doesn't, and it's 2611.12

D) Now we call ReduceTimePrecisionAsMSecs again, with 2611.11999999999989
E) First we calculate floor(2611.11999999999989 / 0.02) which results in floor(130555.9999999999945*) - which is not representable, so it is approximated as floor(130555.99999999998545) which is then floor-ed down to 130555
F) Then we multiply 130555.0 by 0.02 and get 2611.100000* - which, again, is not directly representable so it is approximated as 2611.09999999999991

And we have reduced '2611.14' to '2611.1' through two incorrect reductions.

------------------
There is another thing to consider. Doubles can't represent all integer values exactly; once a value reaches power(FLT_RADIX, DBL_MANT_DIG), it loses integer precision. Does this matter to us?  power(FLT_RADIX, DBL_MANT_DIG) is 2^53, which is 9007199254740992 - in milliseconds after the epoch that is approximately the year 285,616 (not accounting for leap years). So we can stop considering this concern.
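(A quick standalone check of that 2^53 boundary, for anyone who wants to verify it:)

> #include <cstdio>
>
> int main() {
>   double limit = 9007199254740992.0; // 2^53
>   // At 2^53, consecutive integers start collapsing onto the same double...
>   printf("%d\n", limit + 1.0 == limit);       // prints 1 (true)
>   // ...while just below it they are still distinct.
>   printf("%d\n", limit - 1.0 == limit - 2.0); // prints 0 (false)
>   return 0;
> }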
Priority: -- → P2
Comment on attachment 8943048 [details]
Bug 1430841 Eliminate float fuzziness in ReduceTimerPrecision

https://reviewboard.mozilla.org/r/213316/#review219332

A few comments below.  I'm still wrapping my head around this code, so please do push back on some of this if it's completely bogus.

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:101
(Diff revision 1)
>  double
>  nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolution, double aResolutionReduction)

This function (and its callers) need some documentation, particularly around what units `aTime` can be in, and what `aResolution` and `aResolutionReduction` represent.  You could consider adding assertions on the particular values of the last two parameters as well, since I don't think you really mean to allow completely arbitrary values...though maybe you do for testing purposes?

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:112
(Diff revision 1)
>    const double reducedResolution = aResolution / aResolutionReduction;
>    if (aResolutionReduction > 1000 && aResolution < 1000000) {
>      // The resolution is so small we need to use the reciprocal to avoid floating point error.
>      const double resolutionReciprocal = aResolutionReduction / aResolution;

What is "The resolution" in this comment referring to?

If it's `aResolution`, then the condition is checking for `aResolution` values less than one million, which doesn't feel very small to me.  And then the condition is also checking `aResolutionReduction`, which isn't mentioned in the comment, which is weird.

If it's referring to `reducedResolution`, then why are we checking the inputs producing `reducedResolution`, rather than `reducedResolution` itself?  But `reducedResolution` could be something ranging from ~900 (`aResolutionReduction = ~1000` and `aResolution = ~1000000`) to 1 (`aResolutionReduction == aResolution`) to maybe 0.001.  None of these seem super-small.

Maybe some example of the floating-point error we're trying to avoid here would be helpful?  My guess is that you could run into some situation where `aTime` is ~2^31 and `reducedResolution` is ~2^-21 or so, which would mean that `floor(aTime / reducedResolution) * reducedResolution` would lose some bits?  But then I'm not totally clear why using reciprocals would help.

And since this is some security-sensitive code, being really explicit in the comments would be helpful.

Note also that there are no explicit tests for this branch in the gtest you added, which is Not Good.

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:125
(Diff revision 1)
> +    // ReduceTimePrecision(x) should equal ReduceTimePrecision(ReduceTimePrecision(x))
> +    // Sometimes it doesn't, because the aTime value gets approximated downwards
> +    // If our input + 1 ULP is clamped to the same value as the input, we hit such a situation
> +    // So perform the clamping on both the input and input+1 ULP and check the output of the latter

Nit: sentences in comments end with a period, `.`.

What does "because the aTime value gets approximated downwards" really mean?  Who is approximating it downwards?  Why are we "approximating" it (this is probably not the right word to be using, I think "rounding" here would be more descriptive?)?

I think what this code and comment are trying to communicate--and my explanation is probably not going to be very precise--is that for values of `aTime` that have already had their resolution reduced, trying to reduce the resolution _again_ will not give you `aTime` back, as you would expect, but something slightly less than `aTime`, due to floating-point issues.  So what we're going to do is try to reduce the precision of a value that is _slightly_ more than `aTime` and see if that value reduces to `aTime`.  If it does, then `aTime` didn't actually need to have its resolution reduced (i.e. had already been rounded to the nearest N milliseconds or whatever), and so we should return things unmodified.

Is that a correct summary?  If so, I guess the question is whether we can be smarter about our initial floating-point calculation to avoid this fixup?  (This is probably the question you're flagging me for review for! :)

Is there any reason we can't do something like:

```
// Get an integer part of the units we are working in.
double x = aTime * aResolutionReduction;
// Drop any fractional bits, since they can't matter (?).
double asInt = floor(x);
// Exactly round to aResolution units.
double rounded = (asInt / aResolution) * aResolution;
// Convert back into fractional units.
ret = rounded / aResolutionReduction;
```

This handles your known bad (and other ones as well) test cases with aplomb.  I haven't thought about it super-hard, but it's definitely possible there are still bad cases for that code as well?

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:130
(Diff revision 1)
> +    // First interpret the bits used to store the double as an integer
> +    uint64_t aTimeIntegerRep = *((uint64_t*)&aTime);

Please use `mozilla::BitwiseCast` (`#include "mozilla/Casting.h"`) for code like this, which makes this code much nicer (and legal) and also does the `static_assert` from above (`auto` usage optional):

```
  auto aTimeIntegerRep = BitwiseCast<uint64_t>(aTime);
```

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:134
(Diff revision 1)
> +    // Now reinterpret the bits back as a double
> +    double aTimePlusOneULP = *((double*)&aTimeIntegerRepPlusOne);

Same `BitwiseCast` comment here.
Attachment #8943048 - Flags: review?(nfroyd)
Whoops, looking at my logging again for the examples, I think my example code was totally bogus.  Here's a better attempt:

    double x = aTime * aResolutionReduction;
    long long asInt = llround(x);
    long long intResolution = aResolution;
    long long rounded = (asInt / intResolution) * intResolution;
    ret = double(rounded) / aResolutionReduction;

This gives things like:

Given: 1054.84240500000010, Rounding with 0.02000000000000, Intermediate: 52742.00000000000000, Got: 1054.83999999999992
Given: 1054.83999999999992, Rounding with 0.02000000000000, Intermediate: 52741.00000000000000, Got: 1054.83999999999992
Given: 273.53038600000002, Rounding with 0.02000000000000, Intermediate: 13676.00000000000000, Got: 273.51999999999998
Given: 273.51999999999998, Rounding with 0.02000000000000, Intermediate: 13675.00000000000000, Got: 273.51999999999998
Given: 628.66686500000003, Rounding with 0.02000000000000, Intermediate: 31433.00000000000000, Got: 628.65999999999997
Given: 628.65999999999997, Rounding with 0.02000000000000, Intermediate: 31432.00000000000000, Got: 628.65999999999997
Given: 521.28919100000007, Rounding with 0.02000000000000, Intermediate: 26064.00000000000000, Got: 521.27999999999997
Given: 521.27999999999997, Rounding with 0.02000000000000, Intermediate: 26063.00000000000000, Got: 521.27999999999997

which I think is sort of what you want?
(In reply to Nathan Froyd [:froydnj] from comment #3)
> ::: toolkit/components/resistfingerprinting/nsRFPService.cpp:101
> (Diff revision 1)
> >  double
> >  nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolution, double aResolutionReduction)
> 
> This function (and its callers) need some documentation, particularly around
> what units `aTime` can be in, and what `aResolution` and
> `aResolutionReduction` represent.  

Yes. 

> You could consider adding assertions on
> the particular values of the last two parameters as well, since I don't
> think you really mean to allow completely arbitrary values...though maybe
> you do for testing purposes?

No, I think it's best to restrict it to the legal values as determined by the existing ReduceTimePrecisionAsXXX. So legal values would be 1 (usec), 1000 (msec), and 1000000 (sec).

> ::: toolkit/components/resistfingerprinting/nsRFPService.cpp:112
> (Diff revision 1)
> >    const double reducedResolution = aResolution / aResolutionReduction;
> >    if (aResolutionReduction > 1000 && aResolution < 1000000) {
> >      // The resolution is so small we need to use the reciprocal to avoid floating point error.
> >      const double resolutionReciprocal = aResolutionReduction / aResolution;
> 
> What is "The resolution" in this comment referring to?
> 
> If it's `aResolution`, then the condition is checking for `aResolution`
> values less than one million, which doesn't feel very small to me.  And then
> the condition is also checking `aResolutionReduction`, which isn't mentioned
> in the comment, which is weird.
> 
> If it's referring to `reducedResolution`, then why are we checking the
> inputs producing `reducedResolution`, rather than `reducedResolution`
> itself?  But `reducedResolution` could be something ranging from ~900
> (`aResolutionReduction = ~1000` and `aResolution = ~1000000`) to 1
> (`aResolutionReduction == aResolution`) to maybe 0.001.  None of these seem
> super-small.
> 
> Maybe some example of the floating-point error we're trying to avoid here
> would be helpful?  My guess is that you could run into some situation where
> `aTime` is ~2^31 and `reducedResolution` is ~2^-21 or so, which would mean
> that `floor(aTime / reducedResolution) * reducedResolution` would lose some
> bits?  But then I'm not totally clear why using reciprocals would help.
> 
> And since this is some security-sensitive code, being really explicit in the
> comments would be helpful.
> 
> Note also that there are no explicit tests for this branch in the gtest you
> added, which is Not Good.


Yeah. The reciprocal case is essentially code I ported forward from existing code. I wasn't very clear on its provenance or what errors it was intending to correct. If Tim has any recollection of the error cases that would be fantastic, but I think I need to disable the code, find them again, then re-enable the code to understand it.


 
> ::: toolkit/components/resistfingerprinting/nsRFPService.cpp:125
> (Diff revision 1)
> > +    // ReduceTimePrecision(x) should equal ReduceTimePrecision(ReduceTimePrecision(x))
> > +    // Sometimes it doesn't, because the aTime value gets approximated downwards
> > +    // If our input + 1 ULP is clamped to the same value as the input, we hit such a situation
> > +    // So perform the clamping on both the input and input+1 ULP and check the output of the latter
> 
> Nit: sentences in comments end with a period, `.`.
> 
> What does "because the aTime value gets approximated downwards" really mean?
> Who is approximating it downwards?  Why are we "approximating" it (this is
> probably not the right word to be using, I think "rounding" here would be
> more descriptive?)?


I am trying to avoid using the term 'round' anywhere near this code. I think it's ambiguous.

The intent of the function is to 'Reduce the precision of a time value'. It does so not by rounding the value (which implies the value may increase) but by 'floor()'-ing it. But 'floor' implies reducing to an integer and we're actually reducing to a multiple of a decimal. I (try to) call this 'clamping'.

Approximation however, is an entirely different term. I use approximation to refer to how the computer handles a number it cannot represent in a float/double. It 'approximates' the number by increasing or decreasing it (I'm not sure which or under which circumstance) to a nearby value it can represent.


> I think what this code and comment are trying to communicate--and my
> explanation is probably not going to be very precise--is that for values of
> `aTime` that have already had their resolution reduced, trying to reduce the
> resolution _again_ will not give you `aTime` back, as you would expect, but
> something slightly less than `aTime`, due to floating-point issues.  

Correct, except "slightly less than `aTime`" is precisely one full 'clamp' (aka precision) value less.

> So what
> we're going to do is try to reduce the precision of a value that is
> _slightly_ more than `aTime` and see if that value reduces to `aTime`.  If
> it does, then `aTime` didn't actually need to have its resolution reduced
> (i.e. had already been rounded to the nearest N milliseconds or whatever),
> and so we should return things unmodified.

Correct. Although "_slightly_ more" is precisely "the next representable float greater than aTime", or the smallest we could possibly increase aTime.
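For reference, a standalone sketch of two ways to compute that next representable double - std::nextafter, or the reinterpret-the-bits trick the patch uses (variable names here are hypothetical):

```
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
  double aTime = 1054.84;
  // Portable: the next representable double toward +infinity.
  double viaNextafter = std::nextafter(aTime, INFINITY);
  // Bit trick (valid for positive, finite doubles): reinterpret the bits
  // as a uint64_t and add one, i.e. add 1 ULP.
  uint64_t bits;
  memcpy(&bits, &aTime, sizeof(bits));
  bits += 1;
  double viaBits;
  memcpy(&viaBits, &bits, sizeof(viaBits));
  printf("%d\n", viaNextafter == viaBits); // prints 1
  return 0;
}
```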

> Is that a correct summary?  If so, I guess the question is whether we can be
> smarter about our initial floating-point calculation to avoid this fixup? 
> (This is probably the question you're flagging me for review for! :)

Maybe!  I will dig into your suggestion.
Whiteboard: [fingerprinting] → [fingerprinting][fp-triaged]
See also https://twitter.com/TomRittervg/status/953796586559111168

Ben advocates for clamping the value to a power of 2, which is directly representable in float/double, and provides this example:

> #include <math.h>
> #include <stdio.h>
> #define VALUE 1054.842405
> double badreduce(double aTime) {
>     aTime = floor(aTime / .02);
>     return aTime * .02;
> }
> double goodreduce(double aTime) {
>     aTime = floor(aTime * 64);  // 1/64 is roughly 0.016 (nearest power of 2 to .02)
>     return aTime / 64;
> }
> int main() {
>     printf("%f %f %f\n", VALUE, badreduce(VALUE), badreduce(badreduce(VALUE)));
>     printf("%f %f %f\n", VALUE, goodreduce(VALUE), goodreduce(goodreduce(VALUE)));
>     return 0;
> }

That's a very interesting idea, but from discussions I think we may be targeting a rounding value that is between powers of 2; and that would be rather limiting.

Zooko advocates for abandoning floats entirely. I think rewriting all of Firefox's time code is beyond this bug, but I do wonder about it. If we stored our timestamps as nanoseconds since the epoch in a uint64, our precision would be 1 nanosecond and the range would be ~584 years. (That's a bit of a simplistic view on it though.)
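(As a rough standalone check of that range claim:)

> #include <cstdio>
>
> int main() {
>   // 2^64 - 1 nanoseconds, in years of ~365.25 days each.
>   double years = 18446744073709551615.0 / (1e9 * 60 * 60 * 24 * 365.25);
>   printf("%f\n", years); // ~584.5
>   return 0;
> }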

I think a reasonable path forward is to pursue what Nathan suggests - convert out of floats, do the math, convert back into floats.
(In reply to Tom Ritter [:tjr] from comment #6)
> Zooko advocates for abandoning floats entirely. I think rewriting all of
> Firefox's time code is beyond this bug, but I do wonder about it. If we
> stored our timestamps as nanoseconds since the epoch in a uint64, our
> precision would be 1 nanosecond and the range would be ~584 years. (That's a
> bit of a simplistic view on it though.)

Is this really super-hard, though?  mozilla::TimeStamp is integer-based on all of our major platforms.  I think a lot of the external interfaces (i.e. JS) want doubles, but I don't know of any real reason that we can't do all the arithmetic internally on integers.
Well, if folks think it's worth pursuing, I can open a bug on it and maybe noodle on it. (I probably still won't try to make it happen in this bug though...)
Flags: needinfo?(bzbarsky)
I forgot to ?ni Tim about the reciprocal case. Tim, do you have any recollection about what situation the reciprocal was needed for?

I determined that if we are working with seconds-from-the-epoch (and we do not use the reciprocal) we will elevate our value into the range where floats cannot accurately represent it (and we produce an incorrect value on the order of seconds.) But when we use doubles, we can handle values of this size, and no error is introduced.  I'm not sure if that's what you meant by your comment:

> // The resolution is so small we need to use the reciprocal to avoid floating point error.
Flags: needinfo?(tihuang)
It does seem like one fundamental problem is converting to doubles "too early".  Ideally we would be able to reduce the precisions of TimeStamp and TimeDuration instances (not doubles), which are entirely integer-based...

I do think it's worth doing that, to avoid problems like this bug.
Flags: needinfo?(bzbarsky)
Tracking for 59.  I think it is too late to get a change into 58 since we release next week and already have an RC2 build ready.
Comment on attachment 8943048 [details]
Bug 1430841 Eliminate float fuzziness in ReduceTimerPrecision

https://reviewboard.mozilla.org/r/213316/#review220084


C/C++ static analysis found 1 defect in this patch.

You can run this analysis locally with: `./mach static-analysis check path/to/file.cpp`

If you see a problem in this automated review, please report it here: http://bit.ly/2y9N9Vx


::: toolkit/components/resistfingerprinting/nsRFPService.cpp:112
(Diff revision 2)
> -nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolutionUS, double aResolutionScaleCorrection)
> +nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolutionUSec, double aTimeScaleCorrection)
>  {
> -  if (!IsTimerPrecisionReductionEnabled()) {
> +  if (!IsTimerPrecisionReductionEnabled() ||
> +      aResolutionUSec == 0) {
>      return aTime;
> -  } else if (aResolutionScaleCorrection != 1 &&
> +  } else if (aTimeScaleCorrection != 1 &&

Warning: Do not use 'else' after 'return' [clang-tidy: readability-else-after-return]
Comment on attachment 8943048 [details]
Bug 1430841 Eliminate float fuzziness in ReduceTimerPrecision

https://reviewboard.mozilla.org/r/213316/#review220234

I feel a little weird reviewing what I suggested as an approach, but OK. :)  Some things for discussion below.

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:101
(Diff revision 4)
>    return (sPrivacyTimerPrecisionReduction || IsResistFingerprintingEnabled()) &&
>           sResolutionUSec != 0;
>  }
>  
>  /*
>    @param aTime timestamp in native units

What does "native units" mean?  (I realize this is a pre-existing issue, but still.)

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:103
(Diff revision 4)
>  }
>  
>  /*
>    @param aTime timestamp in native units
>    @param aResolutionUS the precision, in microseconds, to clamp it to
> -  @param aResolutionScaleCorrection the amount aResolutionScaleCorrection must be divided by to match the units of aResolutionUS
> +  @param aTimeScaleCorrection the amount aTime must be multiplied by to convert to microseconds

I think it would be cleaner here to have `aTimeScaleCorrection` be an `enum` of resolutions, so we had as few as possible magic numbers floating around.  We'd then map enum values to the appropriate scaling factors internally.  WDYT?

I don't know whether we have to clean that up in this bug or whether we have to do it someplace else.
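Something like this, perhaps (just a sketch to illustrate the suggestion; names are hypothetical):

```
// Map each supported unit of aTime to the factor that converts it to
// microseconds.
enum class TimeScale {
  MicroSeconds = 1,
  MilliSeconds = 1000,
  Seconds = 1000000,
};
```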

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:110
(Diff revision 4)
>  /* static */
>  double
> -nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolutionUS, double aResolutionScaleCorrection)
> +nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolutionUSec, double aTimeScaleCorrection)
>  {
> -  if (!IsTimerPrecisionReductionEnabled()) {
> +  if (!IsTimerPrecisionReductionEnabled() ||
> +      aResolutionUSec == 0) {

Does `aResolutionUSec == 0` even make sense?  What about negative numbers--should we handle those as well?  Maybe we should have an `enum` for `aResolutionUSec` as well, so we don't have to worry about bogus data being passed in?  I should think that we're not going to have many resolutions that timers care about throughout the platform.  Is that a reasonable assumption?

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:120
(Diff revision 4)
> -      aResolutionScaleCorrection != 1000000) {
> +      aTimeScaleCorrection != 1000000) {
>      MOZ_ASSERT(false, "Only scale corrections of 1, 1000, and 1000000 are supported.");
>      return aTime;
>    }
>  
> -  double ret;
> +  // Increase the time as needed until it is in microseconds

Nit: comments end with periods, please.  Here and several places below this.

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:121
(Diff revision 4)
>      MOZ_ASSERT(false, "Only scale corrections of 1, 1000, and 1000000 are supported.");
>      return aTime;
>    }
>  
> -  double ret;
> -  const double reducedResolution = aResolutionUS / aResolutionScaleCorrection;
> +  // Increase the time as needed until it is in microseconds
> +  double timeScaled = aTime * aTimeScaleCorrection;

We do have a loss of precision concern here, right?  Let's say that `aTime` is seconds since the epoch (plus some fractional part), so it's about `2**31` right now.  So we'd need to be multiplying by ~`2**22` to start getting floating point numbers that would start losing precision in the lower bits of their non-fractional part, right?  And if `aTimeScaleCorrection` is 1000000, that's ~`2**20`, which is uncomfortably close.

I guess we don't have to start worrying about it quite yet, but maybe we should note it here just to show that we've thought about the issue and decided it's not a problem for now?
Attachment #8943048 - Flags: review?(nfroyd) → review+
Priority: P2 → P1
(In reply to Nathan Froyd [:froydnj] from comment #18)
> What does "native units" mean?  (I realize this is a pre-existing issue, but
> still.)

Clarified comment.


> I think it would be cleaner here to have `aTimeScaleCorrection` be an `enum`
> of resolutions, so we had as few as possible magic numbers floating around. 
> We'd then map enum values to the appropriate scaling factors internally. 
> WDYT?

Sounds good, I changed it.

> ::: toolkit/components/resistfingerprinting/nsRFPService.cpp:110
> (Diff revision 4)
> >  /* static */
> >  double
> > -nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolutionUS, double aResolutionScaleCorrection)
> > +nsRFPService::ReduceTimePrecisionImpl(double aTime, double aResolutionUSec, double aTimeScaleCorrection)
> >  {
> > -  if (!IsTimerPrecisionReductionEnabled()) {
> > +  if (!IsTimerPrecisionReductionEnabled() ||
> > +      aResolutionUSec == 0) {
> 
> Does `aResolutionUSec == 0` even make sense?  What about negative
> numbers--should we handle those as well?  Maybe we should have an `enum` for
> `aResolutionUSec` as well, so we don't have to worry about bogus data being
> passed in?  I should think that we're not going to have many resolutions
> that timers care about throughout the platform.  Is that a reasonable
> assumption?

We should handle negative, that's fixed.

And we do want to support arbitrary precision. While I don't expect anyone (or us) to set it to 13, 27, or 254; 20, 50, 100, 250, 500, 1000, 2000, and 1000000 are all possible, reasonable values. So I don't think we should have an enum here.


> ::: toolkit/components/resistfingerprinting/nsRFPService.cpp:121
> (Diff revision 4)
> >      MOZ_ASSERT(false, "Only scale corrections of 1, 1000, and 1000000 are supported.");
> >      return aTime;
> >    }
> >  
> > -  double ret;
> > -  const double reducedResolution = aResolutionUS / aResolutionScaleCorrection;
> > +  // Increase the time as needed until it is in microseconds
> > +  double timeScaled = aTime * aTimeScaleCorrection;
> 
> We do have a loss of precision concern here, right?  Let's say that `aTime`
> is seconds since the epoch (plus some fractional part), so it's about
> `2**31` right now.  So we'd need to be multiplying by ~`2**22` to start
> getting floating point numbers that would start losing precision in the
> lower bits of their non-fractional part, right?  And if
> `aTimeScaleCorrection` is 1000000, that's ~`2**20`, which is uncomfortably
> close.
> 
> I guess we don't have to start worrying about it quite yet, but maybe we
> should note it here just to show that we've thought about the issue and
> decided it's not a problem for now?

Hm, so doubles lose integer precision at 9,007,199,254,740,992 (2**53). 

Seconds since epoch is 1516916040 and we multiply by 1000000, putting us at 1516916040000000. Same order of magnitude:

> 9007199254740992
> 1516916040000000

9007199254740992 in microseconds since the epoch brings us to June 5, 2255.

I agree this needs clarification. But I don't think it needs changing.
Flags: needinfo?(artines1)
Comment on attachment 8943047 [details]
Bug 1430841 Refactor ReduceTimePrecision and add (failing) gtests

https://reviewboard.mozilla.org/r/213314/#review221714

::: toolkit/components/resistfingerprinting/nsRFPService.h:166
(Diff revision 5)
> -  static double ReduceTimePrecisionAsMSecs(double aTime);
>    static double ReduceTimePrecisionAsUSecs(double aTime);
> +  static double ReduceTimePrecisionAsMSecs(double aTime);
>    static double ReduceTimePrecisionAsSecs(double aTime);
> +  // Public only for testing purposes
> +  static double ReduceTimePrecisionImpl(double aTime, double aResolutionUSec, double aTimeScaleCorrection);

Comment describing what `aTimeScaleCorrection` means would be useful.  Also, below you call it `aResolutionScaleCorrection` instead.

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:130
(Diff revision 5)
>    if (!IsTimerPrecisionReductionEnabled()) {
>      return aTime;
>    }
> -  const double resolutionMSec = TimerResolution() / 1000.0;
> -  double ret = floor(aTime / resolutionMSec) * resolutionMSec;
> +  if (aResolutionScaleCorrection != 1 &&
> +      aResolutionScaleCorrection != 1000 &&
> +      aResolutionScaleCorrection != 1000000) {

Perhaps use an enum for "seconds", "milliseconds", and "microseconds" instead?  That makes it more clear what values are supported here.

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:135
(Diff revision 5)
> +      aResolutionScaleCorrection != 1000000) {
> +    MOZ_ASSERT(false, "Only scale corrections of 1, 1000, and 1000000 are supported.");
> +    return aTime;
> +  }
> +
> +  double ret;

nit: please initialize stack variables

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:136
(Diff revision 5)
> +    MOZ_ASSERT(false, "Only scale corrections of 1, 1000, and 1000000 are supported.");
> +    return aTime;
> +  }
> +
> +  double ret;
> +  const double reducedResolution = aResolutionUS / aResolutionScaleCorrection;

A comment describing this algorithm would be helpful.  It's hard to understand the comment about "we need to use the reciprocal" without that information.

::: toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:31
(Diff revision 5)
> +   The answer lies beyond what you can see, in that which you cannot see. One must
> +   journey into the depths, the hidden, that which the world fights its hardest to
> +   conceal from us.
> +
> +   Specially: you need to look at more decimal places. Run the test with:
> +   	  MOZ_LOG="nsResistFingerprinting:5"

Seems there is a tab or some other whitespace error here.
Attachment #8943047 - Flags: review?(bkelly) → review+
(In reply to Ben Kelly [:bkelly] from comment #24)
> Comment on attachment 8943047 [details]
> Bug 1430841 Refactor ReduceTimePrecision and add (failing) gtests
> 
> https://reviewboard.mozilla.org/r/213314/#review221714
> 

Hey Ben, most of these are addressed in the second patch where I basically throw this function out and rewrite it.  The first patch is a mostly-faithful refactoring of the code to make it more apparent what the actual algorithmic changes are that I'm making in the second patch. 

So my plan is to ensure that your comments are addressed in the second patch but not significantly alter the first one. Thanks!
Comment on attachment 8945838 [details]
Bug 1430841 Fix the whitespace in test_reduceprecision.cpp

https://reviewboard.mozilla.org/r/215952/#review222022
Attachment #8945838 - Flags: review?(nfroyd) → review+
Pushed by nerli@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/b2c9a5c4e971
Refactor ReduceTimePrecision and add (failing) gtests r=bkelly
https://hg.mozilla.org/integration/autoland/rev/bb8dcc2219b3
Eliminate float fuzziness in ReduceTimerPrecision r=froydnj
https://hg.mozilla.org/integration/autoland/rev/5c060f21f0b3
Fix the whitespace in test_reduceprecision.cpp r=froydnj
Keywords: checkin-needed
Backed out for bustage at toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:142:

https://hg.mozilla.org/integration/autoland/rev/be1b4e4d00baa1650b31bba87f1fd67c23b70fcd

Push with bustage: https://treeherder.mozilla.org/#/jobs?repo=autoland&revision=5c060f21f0b3bc4122ff0ad684013fa640a24a27&filter-resultStatus=usercancel&filter-resultStatus=runnable&filter-resultStatus=testfailed&filter-resultStatus=busted&filter-resultStatus=exception
Build log: https://treeherder.mozilla.org/logviewer.html#?job_id=159087036&repo=autoland

[task 2018-01-29T17:13:37.788Z] 17:13:37     INFO -  In file included from /builds/worker/workspace/build/src/obj-firefox/toolkit/components/resistfingerprinting/tests/Unified_cpp_tests0.cpp:2:0:
[task 2018-01-29T17:13:37.788Z] 17:13:37     INFO -  /builds/worker/workspace/build/src/toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp: In member function 'virtual void ResistFingerprinting_ReducePrecision_Aggressive_Test::TestBody()':
[task 2018-01-29T17:13:37.788Z] 17:13:37     INFO -  /builds/worker/workspace/build/src/toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:142:56: error: integer overflow in expression [-Werror=overflow]
[task 2018-01-29T17:13:37.788Z] 17:13:37     INFO -        double time2_us = fmod(RAND_DOUBLE, (1000000 * 60 * 60 * 5));
[task 2018-01-29T17:13:37.788Z] 17:13:37     INFO -                                             ~~~~~~~~~~~~~^~~~
[task 2018-01-29T17:13:37.789Z] 17:13:37     INFO -  cc1plus: all warnings being treated as errors
[task 2018-01-29T17:13:37.789Z] 17:13:37     INFO -  /builds/worker/workspace/build/src/config/rules.mk:1047: recipe for target 'Unified_cpp_tests0.o' failed
[task 2018-01-29T17:13:37.789Z] 17:13:37     INFO -  make[4]: *** [Unified_cpp_tests0.o] Error 1
Flags: needinfo?(tom)
Comment on attachment 8946450 [details]
Bug 1430841 Fix compiler errors and a failing test relating to Time Precision Reduction

https://reviewboard.mozilla.org/r/216390/#review222188

This seems fine, two small things below.

::: dom/animation/test/css-animations/test_animation-finish.html:11
(Diff revision 1)
> +  	["dom.animations-api.core.enabled", true],
> +  	["privacy.resistFingerprinting.reduceTimerPrecision.microseconds", 0]]},

mozreview is complaining about the leading whitespace here...it looks to me like you used tabs.  Can you verify that and use spaces instead, please?

::: toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:136
(Diff revision 1)
> -     double time1_s = fmod(RAND_DOUBLE, 1516305819);
> -     double time1_ms = fmod(RAND_DOUBLE, 1516305819000);
> -     double time1_us = fmod(RAND_DOUBLE, 1516305819000000);
> +     double time1_s = fmod(RAND_DOUBLE, 1516305819L);
> +     double time1_ms = fmod(RAND_DOUBLE, 1516305819000L);
> +     double time1_us = fmod(RAND_DOUBLE, 1516305819000000L);

Given that `long` is not always 64 bits, you may want to explicitly make these `int64_t(...)` or something like that?  Otherwise, you might get bustage on 32-bit test platforms?
Attachment #8946450 - Flags: review?(nfroyd) → review+
Flags: needinfo?(tom)
Attachment #8945838 - Attachment is obsolete: true
Attachment #8946450 - Attachment is obsolete: true
Comment on attachment 8951763 [details] [diff] [review]
Patch illustrating failing distributive property for ReducetimePrecision()

Obsoleting this patch. It is being attached only for future reference, not actual use.
Attachment #8951763 - Attachment is obsolete: true
Okay, I think I have this in a decent spot now. 

The first patch hasn't changed, it's just rebased.  I also fixed the whitespace and the compiler errors in the beginning instead of as add-on patches.

The second patch has only one meaningful change. The original version, which got backed out, computed:

> long long rounded = (timeAsInt / resolutionAsInt) * resolutionAsInt;

That line became:

> long long clamped = floor(double(timeAsInt) / resolutionAsInt) * resolutionAsInt;

The explanation of that is in the patch.
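To put that change in context, here is a minimal standalone sketch of the resulting core computation (assumed shape only - the real ReduceTimePrecisionImpl has the pref checks and logging around it):

> #include <cmath>
>
> // Sketch of the clamping core: scale the time into microseconds, truncate
> // to an integer, then clamp with floor() so that negative values clamp
> // toward negative infinity instead of toward zero.
> double ReduceTimePrecisionSketch(double aTime, double aResolutionUSec,
>                                  double aTimeScaleCorrection) {
>   double timeScaled = aTime * aTimeScaleCorrection;
>   long long timeAsInt = timeScaled;      // cut off sub-microsecond bits
>   long long resolutionAsInt = aResolutionUSec;
>   long long clamped =
>       floor(double(timeAsInt) / resolutionAsInt) * resolutionAsInt;
>   return double(clamped) / aTimeScaleCorrection;
> }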

Previously, the problem that got this backed out and stuck was the CSS Animations tests. These are in two categories:

Test Failure Type 1) Off-by-one-clamp

Example:

dom/animation/test/css-animations/test_animation-finish.html | Test finish() while pause-pending with positive playbackRate
Test finish() while pause-pending with positive playbackRate: assert_approx_equals: The start time of a pause-pending animation should be set after calling finish() expected -49642 +/- 0.001 but got -49640 

When crossing the zero boundary, we would be off by one full clamp value. The substantive change above fixes this.
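(A standalone sketch of why, with hypothetical values: integer division truncates toward zero, while floor() clamps toward negative infinity.)

> #include <cmath>
> #include <cstdio>
>
> int main() {
>   long long res = 20;
>   // Integer division truncates toward zero: -55 / 20 == -2, giving -40.
>   printf("%lld\n", (-55LL / res) * res);
>   // floor() clamps toward -infinity: floor(-2.75) == -3, giving -60.
>   printf("%.0f\n", floor(-55.0 / double(res)) * double(res));
>   return 0;
> }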

HOWEVER, we have NOT achieved full or even limited distributive-ness. The above tests do, roughly, the following:

There is a finished 100s animation. animation.startTime should be 100s prior to animation.timeline.currentTime, right?

animation.startTime (call it A) is ~ -99534
animation.timeline.currentTime (call it B) is ~466
And A == B - 100s (i.e. B - 100000, as these values are in ms)
Or at least that's how it should work.

Here's how it actually works.
A      is -99532
B-100s is -99534
B      is 466

We are off by one clamp value. I fixed that, again, by the substantive change noted above. But my initial attempt at fixing it was to describe three invariants I attempted to uphold:

1) clamp(x) == clamp(clamp(x))
2) clamp(x - c) == clamp(x) - c
   Here c is an unclamped constant, like 100.
   x - c may be negative
3) We clamp downwards for all input values UNLESS a) the input is
exactly 1 ULP below a target rounding value, and b) the target rounding
value can't be represented exactly.

2 is a form of limited distributive-ness.

I believe 2 and 3 are in conflict.

Consider this (concrete, working) example:

>  clock = 2611.6; // Can't be represented exactly. Is actually 1 ULP below the target at 2611.59999999999990905052982270717620849609375
>  constant = .3; // Can't be represented exactly, is actually 1 ULP below at 0.299999999999999988897769753748434595763683319091796875
>  precision = 100;
>  expected = 2611.3; // Can't be represented exactly, Is actually 1 ULP above at 2611.3000000000002
>
>  A = clamp(clock - constant)
>  B = clamp(clock) - constant


In B, we first clamp 2611.59999999999990905052982270717620849609375 which _does_ satisfy (a) and (b) from Invariant 3 so it stays the
same. Then we subtract 0.299999999999999988897769753748434595763683319091796875 and get 2611.3000000000002 which is our expected value.

In A, we first subtract clock - constant, giving us 2611.29999999999973 which is one ULP below the target value.  When considering Invariant 3 we satisfy (b) (because 2611.3 can't be represented exactly) but not (a). So we stay at 2611.29999999999973, and clamp(2611.29999999999973) = ~2611.2 instead of 2611.3, which would make sense.

I have an ugly patch that demonstrates the distributivity problem, which I am attaching for posterity (and obsoleting.) Where we are at with distributivity is that (I *think*) you will wind up some number of ULPs (more than 1 to be sure) away from the target value, but less than a clamp value.  Here is some output illustrating limited distributiveness failures:


> Given: (469993878, Scaled: 469993877635, Converted: 469993877635), Rounding with (65, Originally 65), Intermediate: (469993877600), Got: (14 Converted: 7230675040)
> Given: (1503555431, Scaled: 1503555430635, Converted: 1503555430635), Rounding with (65, Originally 65), Intermediate: (1503555430585), Got: (14 Converted: 23131622009)
> Testing (constant: 1033561553): 469993877.60000002 =?= 469993877.5850001
> toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:148: Failure
>       Expected: inside
>       Which is: 4.69994e+08
> To be equal to: outsideMinusOne
>       Which is: 4.69994e+08
> 
> 
> Given: (-1032325582, Scaled: -1032325581505, Converted: -1032325581504), Rounding with (237085, Originally 237085), Intermediate: (-1032325701655), Got: (14 Converted: -4354243)
> Given: (1235971, Scaled: 1235971495, Converted: 1235971495), Rounding with (237085, Originally 237085), Intermediate: (1235924105), Got: (14 Converted: 5213)
> Testing (constant: 1033561553): -1032325701.655 =?= -1032325628.8950001
> toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:148: Failure
>       Expected: inside
>       Which is: -1.03233e+09
> To be equal to: outsideMinusOne
>       Which is: -1.03233e+09
> 
> 
> Given: (-1032325582, Scaled: -1032325581505, Converted: -1032325581504), Rounding with (65, Originally 65), Intermediate: (-1032325581560), Got: (14 Converted: -15881932024)
> Given: (1235971, Scaled: 1235971495, Converted: 1235971495), Rounding with (65, Originally 65), Intermediate: (1235971490), Got: (14 Converted: 19014946)
> Testing (constant: 1033561553): -1032325581.5599999 =?= -1032325581.5100001
> toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:148: Failure
>       Expected: inside
>       Which is: -1.03233e+09
> To be equal to: outsideMinusOne
>       Which is: -1.03233e+09
> 
>       
> Given: (-1033555618, Scaled: -1033555618100, Converted: -1033555618100), Rounding with (237085, Originally 237085), Intermediate: (-1033555698635), Got: (14 Converted: -4359431)
> Given: (5935, Scaled: 5934900, Converted: 5934899), Rounding with (237085, Originally 237085), Intermediate: (5927125), Got: (14 Converted: 25)
> Testing (constant: 1033561553): -1033555698.635 =?= -1033555625.8750001
> toolkit/components/resistfingerprinting/tests/test_reduceprecision.cpp:148: Failure
>       Expected: inside
>       Which is: -1.03356e+09
> To be equal to: outsideMinusOne
>       Which is: -1.03356e+09


Test Failure Type 2) Incorrect Timer Reduction inside of CSS Animations

/web-animations/animation-model/animation-types/discrete.html | Test discrete animation is used when interpolation fails
assert_equals: Animation produces 'to' value at exact middle of the interval expected "200px" but got "0px" 

This failure indicates that we are (incorrectly) influencing the behavior of an animation by altering a timestamp. We want to only influence the timing values reported to the user, not how the animation flows. Brian and Hiro are going to be resolving this issue.

In the meantime, in Bug 1435296, we changed Reduce Timer Precision so that it doesn't apply to CSS Animations.  For testing purposes I have reverted that in the try runs.



Test Failure Type 3) False Positive

browser/components/resistfingerprinting/test/mochitest/test_animation_api.html | pref: privacy.reduceTimerPrecision - animation.startTime with precision 100 is not rounded: 200 


Ignore this type. I have a patch "Apply to CSS Animations too".  This patch overrides the change mentioned in Type 2 to make the test apply to CSS Animations. I didn't fix the test when I did that, so the test fails.


Try Run 1: https://treeherder.mozilla.org/#/jobs?repo=try&revision=6c5b7a0fbfcecf113d91fcdc2a9cb27f9212b099

This try run shows the result of running ``long long rounded = (timeAsInt / resolutionAsInt) * resolutionAsInt`` (or the version that got backed out)

You can see both Type 1 and 2 failures. (And Type 3)

Try Run 2: https://treeherder.mozilla.org/#/jobs?repo=try&revision=baef04ee4927bd6a9f4c5a89fbae849059c5a927

When this run finishes, you should only see Type 2 and Type 3 Failures

Try Run 3: https://treeherder.mozilla.org/#/jobs?repo=try&revision=74ec1a8b3f3e0025ec8c37878f7f76fd1c5d332f

When this run finishes, you should see no failures

I'm flagging Brian and Nathan for re-review/input.
Flags: needinfo?(nfroyd)
Flags: needinfo?(bbirtles)
(In reply to Tom Ritter [:tjr] from comment #41)
> Try Run 3:
> https://treeherder.mozilla.org/#/
> jobs?repo=try&revision=74ec1a8b3f3e0025ec8c37878f7f76fd1c5d332f
> 
> When this run finishes, you should see no failures

Well, you see a bunch of unrelated intermittents, heh.

There is one relevant failure: h1 on Linux x64 debug

> TEST-UNEXPECTED-FAIL | browser/components/resistfingerprinting/test/mochitest/test_animation_api.html | pref: privacy.reduceTimerPrecision  - animation.currentTime with precision 0.013 is not rounded: 172.10658 

There are a few things here. First, because CSS Animations are not supposed to be rounded, this is failing because it IS rounded. Second: it's a false positive. Through sheer luck, we got a value (172.10658) that is within .0005 (see Bug 1438953 for that value) of a rounded value.

I suppose I ought to figure out how to resolve that, but I think it should probably be done over in Bug 1438953
Comment on attachment 8943048 [details]
Bug 1430841 Eliminate float fuzziness in ReduceTimerPrecision

https://reviewboard.mozilla.org/r/213316/#review227072

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:137
(Diff revision 8)
> -  DOC TODO
> + * Given a precision value, this function will reduce a given input time to the nearest
> + * multiple of that precision.
> + *
> + * It will check if it is appropriate to clamp the input time according to the values
> + * of the privacy.resistFingerprinting and privacy.reduceTimerPrecision preferences.
> + * Note that why it will check these prefs, it will use whatever precision is given to

s/why/while/

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:143
(Diff revision 8)
> + * It ensures the given precision value is greater than zero, if it is not it returns
> + * the input time.

Just curious, but is there any reason for this? Could we just assert it's > 0?

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:169
(Diff revision 8)
> -#if defined(DEBUG)
> -   MOZ_LOG(gResistFingerprintingLog, LogLevel::Verbose,
> -    ("Given: %.*f, Reciprocal Rounding with %.*f, Intermediate: %.*f, Got: %.*f",
> +  //Cut off anything less than a microsecond.
> +  long long timeAsInt = timeScaled;
> +  //Cast the resolution (in microseconds) to an int.

(Nit: Whitespace after //)
Comment on attachment 8951765 [details]
Bug 1430841 Re-enable the Reduce Timer Precision pref for CSS Animations tests

https://reviewboard.mozilla.org/r/221042/#review227074

::: dom/animation/test/css-animations/test_animation-starttime.html:10
(Diff revision 1)
>    { "set": 
> -      [["dom.animations-api.core.enabled", true],
> +      [["dom.animations-api.core.enabled", true]]},
> -       ["privacy.reduceTimerPrecision", false]]},

There's some (pre-existing) trailing whitespace here which we could fix, and make this more clear, just by putting it all on one line:

  { "set": [["dom.animations-api.core.enabled", true]] },

::: dom/animation/test/css-animations/test_event-dispatch.html:10
(Diff revision 1)
>    { "set":
> -  	  [["dom.animations-api.core.enabled", true],
> +  	  [["dom.animations-api.core.enabled", true]]},
> -       ["privacy.reduceTimerPrecision", false]]},

Likewise here (where we can fix what appears to be a stray tab).
Attachment #8951765 - Flags: review?(bbirtles) → review+
Flags: needinfo?(bbirtles)
Comment on attachment 8943048 [details]
Bug 1430841 Eliminate float fuzziness in ReduceTimerPrecision

https://reviewboard.mozilla.org/r/213316/#review227072

> Just curious, but is there any reason for this? Could we just assert it's > 0?

The value comes from a pref, so it could be anything; we have to handle unexpected values.
Comment on attachment 8943048 [details]
Bug 1430841 Eliminate float fuzziness in ReduceTimerPrecision

https://reviewboard.mozilla.org/r/213316/#review228396

I have just one comment on the commentary.

::: toolkit/components/resistfingerprinting/nsRFPService.cpp:180
(Diff revision 9)
> +  // The impact of this is that comparing two clamped values that should be related by a
> +  // constant (e.g. 10s) that are across the zero barrier will no longer work. We need to
> +  // consistently move away from (or closer to) zero both above and below the zero barrier.

The way the last part of this bit is written can be interpreted as the very scenario you were describing above that we *don't* want to happen.  That is, the above discussion of integer division is always truncating towards ("moving closer to") zero.  We don't want that; we always want to truncate in the same direction regardless of whether we're above or below zero.

We're really implementing consistent directed rounding, and the direction is either towards positive infinity or towards negative infinity, right?  (We've chosen negative infinity here, by virtue of using `floor`.)  Should we be using that terminology instead?
I think what you have is reasonable.
Flags: needinfo?(nfroyd)
Pushed by shindli@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/369ecd043636
Refactor ReduceTimePrecision and add (failing) gtests r=bkelly
https://hg.mozilla.org/integration/autoland/rev/9c10a641e698
Eliminate float fuzziness in ReduceTimerPrecision r=froydnj
https://hg.mozilla.org/integration/autoland/rev/41492f00c669
Re-enable the Reduce Timer Precision pref for CSS Animations tests r=birtles
Keywords: checkin-needed
Per IRC discussion with Tom, the behavior change in question is subtle and not likely to affect real-world content in any meaningful way. Given that, this is a fix that would be better to let bake on 60 rather than attempting a last-minute uplift to 59.
Component: DOM → DOM: Core & HTML