Closed Bug 1153672 Opened 5 years ago Closed 5 years ago
Fingerprinting individuals via performance
Amit Klein is working on research showing that it's possible to extract the Windows performance counter frequency using the window.performance.now() high-resolution timer, and that this value is consistent between Firefox and IE. He doesn't mention Chrome, which also supports this now-standard feature (http://www.w3.org/TR/hr-time), so it's unclear whether the technique doesn't work on Chrome or he simply didn't test it. Amit is able to detect two families of VM by their characteristic frequency, and between two machines of the same nominal clock speed there can be a detectable and consistent skew that raises fingerprinting worries. He has also reported this to Microsoft, and we should coordinate any disclosures through Amit to make sure everyone's on the same page.
This version is a little easier to play with than the bare-bones one from the paper. I get pretty wide fluctuations in measurement if you only do 10 loops, but if you increase it to 100 or 1000 it starts settling down (and is still very fast to calculate). So far I've only tested on a Mac, while the paper is talking about the Windows implementation, so I may get better results when I try it there.
Attachment #8591393 - Attachment is obsolete: true
On Windows I do get exactly the same result on Firefox and IE, and 10 loops is plenty. Chrome gives me completely different results but fairly close values between my Mac and Windows physical machines (right around 1 million Hz), whereas the Firefox values are quite different, as I'd expect from the different hardware (2.2M on Windows, ~10M on Mac).
Sorry, the values I'm getting on Mac are more like 100M Hz. I'm not sure I believe that.
On Mac, it will really depend on exactly what API TimeStamp::Now uses and what its resolution is. On Mac we use mach_absolute_time for that. The documentation claims that this does in fact have CPU-dependent resolution, but not what it would be in terms of CPU frequency. I haven't managed to figure out where in Blink's source (if at all) the monotonic timer lives; Platform.h just has a virtual function that always returns 0, and it's possible the implementation is part of Chrome, not Blink. That all said, is this really worse than just extracting the raw CPU clock speed, which I bet you can do reasonably reliably by just writing a JS busy-loop and timing it?
Regarding the previous comment:

1. I doubt you can get the kind of accuracy the advisory describes (a few ppm) by timing a JS busy loop, what with other processes/threads running on the same core and present-day varying CPU clock speeds. Accurate measurement helps tell non-TSC-based clocks apart from TSC-based clocks (see below), and also makes it possible to distinguish different machines running at the same CPU clock speed (there's an example in the advisory of telling apart two machines with a clock speed of 3.4GHz).

2. I should have clarified this in the advisory, but when the performance counter source is not the TSC (but rather the HPET or APIC timer), the CPU clock speed cannot be derived from it (and vice versa). So measuring the CPU speed via a JS busy loop cannot help tell these cases apart - i.e. you can't tell TSC from HPET or APIC.

3. A busy loop has an adverse impact on the user experience; it is much more visible than this method.
Regarding Google Chrome - the advisory doesn't mention it because Chrome is not vulnerable (at least not on Windows): it provides time measurements in units of exactly 1 microsecond, so the performance counter frequency cannot be inferred.
We _could_ round off performance.now() values to the nearest us. It seems a little unfortunate to lose so much precision, though.
Arguably you're not losing a lot of precision. Given that on most machines the source timer is the TSC, with typical frequencies ranging from 1MHz to 4MHz, you lose less than 2 bits (again - typically).
Note: the "low" rating is relative to _security_ bugs--no one's going to get malware from this. That does not mean it's a "low" _privacy_ problem. The fingerprintability is quite accurate on Windows machines but I don't know how that rates for the privacy team.
I got permission to share with you the following quick update from Microsoft (MSRC): We [Microsoft] are working on a fix [...] the contact info for the issue of this case [is] (email email@example.com with the subject containing "[21897mp]").
Did they indicate what their planned fix is? Seems to me like whatever the fix is should just get standardized...
Also, in comment 7 there was mention that Chrome provides performance measurements in units of 1us. That's not exactly what I'm seeing in Chrome; it looks like it's rounding to the nearest us but then also adding a bit of noise in the several-percent-of-a-nanosecond range or something. I did check that this is not just rounding error from float-to-double conversion of non-representable values or something; the numbers Chrome is logging don't correspond to double values of representable floats.... Maybe it's some artifact of how they do their rounding.
So for what it's worth, I tried the obvious thing: instead of returning the value we return now from now(), return round(value * 1000)/1000. As expected, this makes calls to now() significantly slower (at least on Mac); it's about a 30% regression. That might be OK; the result is still faster than Chrome. It's definitely slower than Safari, though (and just looking at performance.now() values in Safari shows they're not doing any sort of rounding to the nearest us). It would be good to understand what the actual threat model here is, and in particular whether it's Windows-only or applicable to other OSes, based on the now() implementations we use there...
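Modeled in JS for clarity (the actual change lives in Gecko's C++ internals; this just illustrates the arithmetic of rounding a millisecond value to the nearest microsecond):

```javascript
// Round a performance.now()-style millisecond timestamp to the nearest
// microsecond: round(value * 1000) / 1000, as described above.
function roundToMicrosecond(ms) {
  return Math.round(ms * 1000) / 1000;
}

console.log(roundToMicrosecond(1.2345678)); // 1.235
console.log(roundToMicrosecond(0.0004));    // 0 (sub-us detail is gone)
```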
(In reply to Not doing reviews right now from comment #12) > Did they indicate what their planned fix is? Seems to me like whatever the > fix is should just get standardized... Nope, I have no info on their proposed solution, sorry.
(In reply to Not doing reviews right now from comment #14) > So for what it's worth, I tried the obvious thing: instead of returning the > value we return now from now(), return round(value * 1000)/1000. > > As expected, this makes calls to now() significantly slower (at least on > Mac); it's about a 30% regression. That might be ok; the result is still > faster than Chrome. It's definitely slower than Safari, though (and just > looking at performance.now() values in Safari shows they're not doing any > sort of rounding to the nearest us. > > It would be good to understand what the actual threat model here is and in > particular whether it's windows-only or applicable to other OSes based on > the now() implementations we use there... Would using 1024 instead of 1000 improve things? I mean, there's nothing magical about sticking to 1000. Maybe do the whole thing over integers with resolution 2^20 (1048576)?
(In reply to Not doing reviews right now from comment #13)
> Also, in comment 7 there was mention that Chrome provides performance
> measurements in units of 1us. That's not exactly what I'm seeing in Chrome;
> it looks like it's rounding to the nearest us but then also adding a bit of
> noise in the several-percent-of-a-nanosecond range or something. I did
> check that this is not just rounding errors from float-to-double conversions
> of non-representable values or something; the numbers Chrome is logging
> don't correspond to double values of representable floats.... Maybe it's
> some artifact of how they do their rounding.

I don't have an explanation for what you see there. I did try to follow the call tree for Performance::now() in Chromium, and eventually got to QPCValueToTimeDelta() (at https://chromium.googlesource.com/chromium/src.git/+/master/base/time/time_win.cc), which contains the following code (this is but a single branch...):

return TimeDelta::FromMicroseconds(
    qpc_value * Time::kMicrosecondsPerSecond / g_qpc_ticks_per_second);

I'll try to experiment with live code later today.
> Would using 1024 instead of 1000 improve things? Not really, because it's the round() call that's slow. > I mean, there's nothing magical about sticking to 1000. But there is. The spec requires performance.now() to have at least microsecond accuracy. Using 1024 would not provide that. > Maybe do the whole thing over integers with resolution 2^20 (1048576)? That would work if it were not for the fact that the underlying timing API I have is handing me back doubles. ;) Changing that will require a bit of surgery.
So OK, I tried doing this by adding a ToRoundedMicroseconds() method on TimeDuration (which actually just rounds down; it doesn't do arithmetically correct rounding), and that seems to work acceptably both in terms of resulting behavior and performance, assuming we're OK with the rounding-down behavior. Basically right now I'm doing: (aTicks / kNsPerUs) * sNsPerTick; where the division is an integer division (and the multiply is done as doubles, since sNsPerTick is a double). We could also do something like: (aTicks / kNsPerUs + (aTicks % kNsPerUs >= kNsPerUs / 2)) * sNsPerTick; But note that both of these approaches lose a lot of precision if it happens that a tick is in fact smaller than a us. Not sure what tick sizes actually are on all our platforms yet. On Mac, sNsPerTick is 1.0, so at least there this would be viable.
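To make the two variants concrete, here's a JS model of the arithmetic (the real code operates on int64 ticks in C++; kNsPerUs and the sNsPerTick = 1.0 Mac value are taken from the comment above, so ticks are treated as nanoseconds):

```javascript
// Model of truncating vs. rounding integer tick counts to microsecond
// granularity. Assumes sNsPerTick = 1.0 (the Mac case), i.e. ticks are ns.
const kNsPerUs = 1000;
const sNsPerTick = 1.0;

// Variant 1: integer division, i.e. round down to the us boundary.
function toFlooredMicroseconds(aTicks) {
  return Math.floor(aTicks / kNsPerUs) * sNsPerTick;
}

// Variant 2: arithmetically correct rounding via the remainder test.
function toRoundedMicroseconds(aTicks) {
  const carry = aTicks % kNsPerUs >= kNsPerUs / 2 ? 1 : 0;
  return (Math.floor(aTicks / kNsPerUs) + carry) * sNsPerTick;
}

console.log(toFlooredMicroseconds(1234567)); // 1234 us
console.log(toRoundedMicroseconds(1234567)); // 1235 us (567 ns rounds up)
```

The precision-loss caveat above is visible here: if a tick were larger than 1ns (sNsPerTick > 1), dividing by kNsPerUs first would discard sub-microsecond tick information before the multiply.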
FWIW, in the Tor Browser 5.0a1 release today, we decided to go with a more aggressive approach for testing by our alpha users. Our patch against Firefox 31.7.0 is here: https://gitweb.torproject.org/tor-browser.git/commit/?h=tor-browser-31.7.0esr-5.0-1&id=dcd5fcc102a3eb19c20013542fa3ca399db66da4

That patch reduces all content-facing time sources to at least 100ms resolution. The primary motivation for such a large resolution was keystroke fingerprinting. In fact, we additionally clip keypress events to 250ms resolution, to make any currently deployed versions of these attacks break. It also seemed best to go with an aggressively coarse resolution for all time sources as a first attempt, so that any breakage would be more obvious. I used floor() instead of round() to keep the timers monotonic. I imagine floor() should also be quicker than round(), but monotonicity was the main reason for that choice.

We're calling on our users to specifically test HTML5 video, animations, and games. I'll update this bug if anyone reports any issues with this approach. Please also let me know if there are other types of sites we should look into.

I wrote this patch because we've long been worried about the various ways that accurate time sources can be used to fingerprint users. In addition to the issues pointed out in this bug, there has also been a series of side channel attacks relating to SVG filters (with the most recent one demonstrating history sniffing: http://cseweb.ucsd.edu/~dkohlbre/papers/subnormal.pdf). Even more concerning, cache side channel attacks capable of performing cross-process data stealing and/or high-capacity exfiltration were recently demonstrated in http://arxiv.org/abs/1502.07373. All of these recent side channel attacks rely on the high accuracy provided by performance.now(), but we decided to also clamp Date.now() and event timestamps to mitigate other forms of performance and behavioral fingerprinting.
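The clamping itself is simple; here's a JS model of it (the real patch is C++ inside Gecko; the 100ms figure is the time-source resolution from the patch described above):

```javascript
// Clamp a millisecond timestamp to 100ms resolution using floor(), which
// preserves monotonicity and never reports a time ahead of the real clock.
const RESOLUTION_MS = 100;

function clampTimestamp(ms) {
  return Math.floor(ms / RESOLUTION_MS) * RESOLUTION_MS;
}

console.log(clampTimestamp(1234.567)); // 1200
console.log(clampTimestamp(1299.999)); // 1200
console.log(clampTimestamp(1300));     // 1300
```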
(In reply to Amit Klein from comment #17)
> (In reply to Not doing reviews right now from comment #13)
> > Also, in comment 7 there was mention that Chrome provides performance
> > measurements in units of 1us. That's not exactly what I'm seeing in Chrome;
> > it looks like it's rounding to the nearest us but then also adding a bit of
> > noise in the several-percent-of-a-nanosecond range or something. I did
> > check that this is not just rounding errors from float-to-double conversions
> > of non-representable values or something; the numbers Chrome is logging
> > don't correspond to double values of representable floats.... Maybe it's
> > some artifact of how they do their rounding.
>
> I don't have an explanation to what you see there. I did try to follow the
> call tree for Perfomance::now() in Chromium, and eventually got to
> QPCValueToTimeDelta()(at
> https://chromium.googlesource.com/chromium/src.git/+/master/base/time/time_win.cc)
> which contains the following code (this is but a single branch...):
>
> return TimeDelta::FromMicroseconds(
>     qpc_value * Time::kMicrosecondsPerSecond / g_qpc_ticks_per_second);
>
> I'll try to experiment with a live code later today.

Indeed, I got similar results to what you described, namely a "too high" inaccuracy, and I think I found the culprit. Long story short: it's an artifact of a double not being an exact representation of a real number; it just bites in a slightly trickier manner. The full story is this: there's indeed a tiny, expected inaccuracy once the microseconds (internally kept as int64) are converted to double and divided by 1000000. The division results in a number that cannot be accurately expressed as a double, so let's assume there's an error on the order of 2^(-54) of the value (we're talking about floating point representation, of course). Now, keep in mind that the value here de facto represents the number of seconds since boot.
The last stage of calculating Performance::now() is subtracting m_referenceTime, which is (I presume) a timestamp taken when the frame was started. And therein lies the rub. The end result is a difference between two quantities that represent time since boot at two close points. When I tested it (and I suppose when you tested it as well), these values were around hundreds of thousands of seconds (in my case 160000 sec - almost two days since boot), so the typical error here was 2^(-54)*160000 =~= 10^(-11). However, after the subtraction we get a value on the order of milliseconds, but with the same absolute error. In other words, the error is "amplified": instead of the "expected" 2^(-54) of the value, it becomes 10^(-11)/10^(-3) = 10^(-8) =~= 2^(-27) of the value. In absolute terms, the error here, at the eleventh digit to the right of the decimal point, is in line with my empirical observation. To double-check this theory, I conducted another experiment: I rebooted my machine, and as soon as Windows came up, I retested. Now the typical value was 60-100 seconds, the typical error was more around 10^(-15), and indeed I got 3-4 additional zero digits before the inaccuracy kicked in. Not sure how much of this is applicable to Mozilla, but just in case, you may want to keep it in mind.
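The amplification is easy to reproduce in plain JS, since JS numbers are IEEE doubles. A sketch; the 160000-second uptime figure and the microsecond-based internal representation are taken from the comments above.

```javascript
// Reproduce the error amplification: convert two large microseconds-since-
// boot values to double seconds *before* subtracting, versus subtracting
// the integers first and converting only the small difference.
const bootUs = 160000 * 1e6;   // ~2 days of uptime, in microseconds
const t0Us = bootUs + 123;     // reference timestamp (integer us)
const t1Us = bootUs + 456;     // current timestamp (integer us)

// Subtract first (int64-style, exact since both fit well under 2^53):
const exactMs = (t1Us - t0Us) / 1000; // 0.333 ms

// Convert each value to seconds first: each conversion carries a relative
// error around 2^-53 to 2^-54, i.e. an absolute error near 1e-11 s at
// 160000 s, and that absolute error survives the subtraction of the tiny
// difference, dominating its low digits.
const lossyMs = (t1Us / 1e6 - t0Us / 1e6) * 1000;

console.log(exactMs, lossyMs, Math.abs(lossyMs - exactMs));
```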
Thank you for figuring that out! If they convert to double before subtracting, not after, that would totally explain what I was seeing. We're storing all this stuff as int64 ticks until the last moment of conversion to double (and in particular, the subtraction is done on int64), so we wouldn't run into that problem.
If bug 1167489 (see previous comment) is relevant for me, can you please grant me access to it?
Microsoft's MSRC just informed me that they WONTFIX. So no need to coordinate with Microsoft, but please do let *me* know what the plan is.
Would it be possible to release later this month (July)?
We're not likely to have a fix shipping this month. I think what we'll likely end up doing here is clamping the values to the nearest 5us; see discussion in bug 1167489.
OK. Any idea when this is expected to be released?
Hard to say until we have a fix actually written, no? Once we have an idea I'll update this bug.
No problem. Didn't mean to bug you (pardon the pun).
Assigning this to bz because he is fixing bug 1167489.
Assignee: nobody → bzbarsky
Amit, I checked in a fix for this today and requested release manager approval for backporting to Firefox 41. If that approval comes through, I expect we will ship the fix near the end of September 2015 (see <https://wiki.mozilla.org/RapidRelease/Calendar>). If not, then it will ship in Firefox 42 in early November 2015. Of course, if we run into web compat issues between now and then, we will need to reevaluate the schedule.
Sounds good, thanks.
Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla42
Whiteboard: Coordinate disclosure w/reporter and Microsoft → [post-critsmash-triage] Coordinate disclosure w/reporter and Microsoft
I remind you that in early July, Microsoft informed me that they WONTFIX (https://bugzilla.mozilla.org/show_bug.cgi?id=1153672#c25), so you only need to coordinate with me (AKA "the reporter"). As far as I'm concerned, let me know in advance (ideally a week or more), indicate the fix/advisory ID and the fixed version, and I will provide you with my advisory URL (in case you want to embed it in your page).
Amit, I'm planning on writing an advisory for this for next week's Firefox release, along with another bug in this area that was reported by another researcher. We don't normally put out advisories written by third parties, though you are welcome to release your own write-up after we ship the fix for this. We'd prefer that you not give a huge amount of detail for a week or two, in order to give people time to update to the fixed version, but this isn't the kind of attack where people will have their computers taken over. If you have any comments on this, please let me know.
Whiteboard: [post-critsmash-triage] Coordinate disclosure w/reporter and Microsoft → [post-critsmash-triage][adv-main41+] Coordinate disclosure w/reporter and Microsoft
(In reply to Al Billings [:abillings] from comment #37) > Amit, > > I'm planning on writing an advisory for this for Firefox for next week's > release, along with another bug in this area that was reported by another > researcher. We don't normally put out advisories by third parties though you > are welcome to release your own write up after we ship the fix for this. > We'd prefer you not give a huge amount of detail for a week or two in order > to give people time to update to the fixed version but this isn't the kind > of attack where people will have computers taken over. If you have any > comments on this, please let me know. Sure thing, I'll give about one week's grace (unless it's disclosed elsewhere).
My advisory is now public: http://www.securitygalore.com/site3/vmd1-advisory Thanks for working on this and coordinating with me. -Amit
Whiteboard: [post-critsmash-triage][adv-main41+] Coordinate disclosure w/reporter and Microsoft → [post-critsmash-triage][adv-main41+][fingerprinting]