Closed Bug 665814 (CVE-2011-3389)
Opened 13 years ago, Closed 13 years ago
Rizzo/Duong chosen plaintext attack (BEAST) on SSL/TLS 1.0 (facilitated by websockets -76)
Categories: NSS :: Libraries, defect
Tracking: firefox6- wontfix, firefox7+ affected
Status: RESOLVED FIXED
Target Milestone: 3.13
People: Reporter: dveditz, Assigned: briansmith
Keywords: reporter-external
Whiteboard: [sg:moderate?]
Attachments (5 files, 12 obsolete files):
- 208.99 KB, application/pdf
- 2.15 KB, patch
- 6.46 KB, text/plain
- 4.97 KB, text/plain
- 17.41 KB, patch (wtc: review+, rrelyea: superreview+)
Thai Duong sent mail to security@ (and Apple, Microsoft, Google, Opera, CERT, and Oracle):
"This is Juliano Rizzo and Thai Duong. We are working on a new
attack against SSL implementations on major web browsers and
plugins. We are going to publish a preliminary result of our work
next month. The published paper will include some browser
exploits that can be used to decrypt HTTPS requests sent by
browsers."
Sent mail requesting more detail.
Reporter
Comment 1•13 years ago
Hi Dan!
You probably know that SSL v3.0 and TLS 1.0 are vulnerable to a
chosen-plaintext attack because they use a predictable IV. One can easily
confirm that Mozilla Firefox is vulnerable to this attack. Although it has been
known for years, most people consider this chosen-plaintext attack
non-exploitable. Our work is to extend the attack, and demonstrate that it can
be exploited efficiently using browser features such as the HTML5 WebSocket API. It
can also be exploited using browser plugins such as Java applets or Silverlight,
but we think that is out of scope of our discussion here.
Attached is a draft of our paper which describes our extension to the
original chosen-plaintext attack against SSL. In order to demonstrate the risk,
we also describe how an attacker can leverage the WebSocket API to decrypt
WebSocket handshake requests and obtain the embedded browser cookies. Please
note that this paper is a work in progress and it has only gone through a limited peer
review process, so it may contain errors (if you spot any of them, please let
us know).
We want to stress that WebSocket is just a means to demonstrate the risk
associated with the cryptographic vulnerability in SSL. We don't
want to exaggerate the risk or the importance of our work. Our goal in this
collaboration effort is to communicate the risk, and convince you guys that it
is high enough to fix the underlying vulnerability. If we are successful, then
both sides can continue working together on the best way to fix the issue. For
such a fix, given the current state of SSL/TLS implementations on the server
side, upgrading to TLS 1.1 or TLS 1.2 is probably not
possible. We prefer the method of the OpenSSL team: sending an empty SSL record
before sending any new data.
Happy hacking,
Thai.
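For readers following along, the weakness Thai describes is that in SSL 3.0 and TLS 1.0 the CBC IV for each record is simply the last ciphertext block of the previous record, so a passive observer already knows the IV that will be applied to the next record before its plaintext is chosen. A minimal, self-contained sketch of that chaining behavior (toy stand-in cipher, not NSS code):

/* Toy illustration of SSL 3.0 / TLS 1.0 CBC chaining across records.
 * toy_block_encrypt() is a placeholder permutation, not a real cipher. */
#include <stdio.h>
#include <string.h>

#define BLK 8

static void toy_block_encrypt(const unsigned char in[BLK], unsigned char out[BLK])
{
    for (int i = 0; i < BLK; i++)
        out[i] = (unsigned char)((in[i] ^ 0x5a) + 1);
}

int main(void)
{
    unsigned char iv[BLK] = {0};               /* chained between records  */
    const unsigned char pt[BLK] = "secret!";   /* one block of plaintext   */
    unsigned char x[BLK], ct[BLK];

    for (int rec = 0; rec < 2; rec++) {
        for (int i = 0; i < BLK; i++)
            x[i] = pt[i] ^ iv[i];              /* CBC: XOR with current IV */
        toy_block_encrypt(x, ct);
        memcpy(iv, ct, BLK);                   /* IV for the NEXT record is
                                                  this record's last cipher-
                                                  text block: already on the
                                                  wire, hence predictable   */
        printf("record %d: attacker now knows the next IV\n", rec);
    }
    return 0;
}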
Reporter
Comment 2•13 years ago
Brian: assigning to you initially because you straddle the crypto/network code. Not sure if we should ultimately fix this in NSS in general or need to lobby for changes in websockets. For now will leave this in the generic "Core::Security" component but it should ultimately be moved one way or the other (or even split into two bugs).
Assignee: nobody → bsmith
Summary: Placeholder for Rizzo/Duong SSL attack → Rizzo/Duong chosen plaintext attack on SSL/TLS 1.0 (facilitated by websockets)
Reporter
Comment 3•13 years ago
Tracking for mozilla6 because of the author's plan to publish in a month.
status-firefox6: --- → affected
status-firefox7: --- → affected
tracking-firefox6: --- → +
tracking-firefox7: --- → +
Assignee
Comment 5•13 years ago
Regarding the WebSockets-specific parts in Appendix A:
The proof of concept in the paper is based on draft-76. I believe that the randomized masking of the latest WebSockets drafts may be a very good mitigation against this--but, this depends on some details of the interaction between websockets send() calls and websockets frames on the wire, and/or the correlation between those two things and TLS records.
We could implement the OpenSSL-like prefix-with-empty-record functionality for TLS versions prior to TLS 1.1. But, we would need to measure the interoperability of this to make sure it won't break sites. We could minimize the compatibility risk here by only enabling this feature on wss:// connections (and not https:// connections or connections using other protocols).
Similarly, we could/should implement TLS 1.1 and we already have a bug open for that with a patch that is almost ready for testing. However, again, there is a pretty big risk of incompatibilities. Again, we could minimize the risk of incompatibility by initially only enabling TLS 1.1 for connections created specifically for wss://. We would need to make sure that the connection manager never chooses a previously-existing HTTPS connection, if we rely on this mitigation.
Comment 6•13 years ago
(In reply to comment #5)
> Regarding the WebSockets-specific parts in Appendix A:
>
> The proof of concept in the paper is based on draft-76. I believe that the
> randomized masking of the latest WebSockets drafts may be a very good
> mitigation against this--but, this depends on some details of the
> interaction between websockets send() calls and websockets frames on the
> wire, and/or the correlation between those two things and TLS records.
Yes, it does. But fixing WebSockets just fixes one way to exploit the real vulnerability. I recommend fixing the vulnerability in SSL v3.0 and TLS 1.0. If we don't fix the real issue, then future protocols or technologies can introduce new ways to exploit it.
>
> We could implement the OpenSSL-like prefix-with-empty-record functionality
> for TLS versions prior to TLS 1.1. But, we would need to measure the
> interoperability of this to make sure it won't break sites. We could
> minimize the compatibility risk here by only enabling this feature on wss://
> connections (and not https:// connections or connections using other
> protocols).
I think it's worth implementing OpenSSL's fix in NSS itself so that you don't have to investigate new application protocols to ensure that they don't need such a fix. Nevertheless, I'm concerned about the interoperability too (that's also the reason we want to work with you guys on the fix).
>
> Similarly, we could/should implement TLS 1.1 and we already have a bug open
> for that with a patch that is almost ready for testing. However, again,
> there is a pretty big risk of incompatibilities. Again, we could minimize
> the risk of incompatibility by initially only enabling TLS 1.1 for
> connections created specifically for wss://. We would need to make sure that
> the connection manager never chooses a previously-existing HTTPS connection,
> if we rely on this mitigation.
It's good to implement TLS 1.1 and later versions of the protocol. However, I don't think TLS 1.1 or higher is popular on the server side [1], and it is known that it is easy to downgrade browsers to SSLv3.0 by resetting the ClientHello packet [2]. This is why I think the best fix is OpenSSL's.
[1] http://blog.ivanristic.com/2010/07/internet-ssl-survey-2010-is-here.html
[2] http://www.carbonwind.net/blog/post/Random-SSLTLS-101%E2%80%93SSLTLS-version-rollbacks-and-browsers.aspx
Thai.
Comment 7•13 years ago
At Google we reached similar conclusions. Attached is a preliminary patch to add empty application data records. When patched into Chrome, it at least passes the smoke test.
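To make the shape of this countermeasure concrete, here is a hedged sketch of the empty-record idea (the real change is the attached patch); the ssl_conn type and send_record() helper are hypothetical stand-ins, not NSS or Chrome interfaces:

/* Sketch of the empty-record countermeasure: before each application_data
 * write on an SSL 3.0 / TLS 1.0 CBC connection, emit an empty
 * application_data record (0 bytes of content, still MACed and padded), so
 * the IV consumed by the real record is no longer the last ciphertext block
 * the attacker has already observed. Hypothetical types and helpers. */
#include <stddef.h>
#include <stdio.h>

typedef struct { int version; int cipher_is_cbc; } ssl_conn;
enum { CONTENT_APPLICATION_DATA = 23, TLS1_0 = 0x0301 };

static int send_record(ssl_conn *ss, int type, const unsigned char *buf, size_t len)
{
    (void)ss; (void)type; (void)buf;
    printf("record sent: %zu content bytes\n", len);  /* stand-in for the wire */
    return 0;
}

static int send_application_data(ssl_conn *ss, const unsigned char *buf, size_t len)
{
    if (ss->version <= TLS1_0 && ss->cipher_is_cbc) {
        if (send_record(ss, CONTENT_APPLICATION_DATA, NULL, 0) != 0)
            return -1;                        /* the empty "IV randomizer" */
    }
    return send_record(ss, CONTENT_APPLICATION_DATA, buf, len);
}

int main(void)
{
    ssl_conn ss = { TLS1_0, 1 };
    return send_application_data(&ss, (const unsigned char *)"GET / HTTP/1.1", 14);
}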
Comment 8•13 years ago
Since this patch is to NSS, I think we need to add a switch to optionally turn it off. We have some pretty strict binary compatibility guarantees with NSS itself. Given the nature of this exploit, I would be OK with defaulting that switch to on.
bob
Comment 9•13 years ago
(In reply to comment #8)
> Since this patch is to NSS, I think we need to add a switch to optionally
> turn it off.
Oh, I agree. It was just preliminary. I'll chat with wtc today and see if we want to float this on the Chrome canary channel. That'll give us some insight into any compat issues within a couple of days.
Assignee
Comment 10•13 years ago
Adam, in your patch, you assume that the attacker can manipulate application data records. Although this is probably the case now, it might not always be the case in the future. I think the workaround for SSL 3.0 and TLS 1.0 should be more like the TLS 1.1 design where it applies to every record. This would make it easier to show that it is correct. Is there a reason that we shouldn't do it for other types of records?
I can see that this workaround will increase overhead more than TLS 1.1 will, if every record gets an empty record applied to it--every encrypted record would be prepended by two blocks instead of just one. Your patch minimizes this overhead by only sending one empty record per set of records written, but I think we can get a similar amount of overhead while randomizing the IV for all record types by moving the logic from ssl3_SendApplicationData to ssl3_SendRecord, at least.
Even then, we are still assuming that the application prevents mutation of the application data as NSS processes it, in order to avoid this attack in cases where an attacker can modify the remaining-to-be-encrypted-and-sent data, especially after ssl_DefSend sends each record across the network. AFAICT, to maximize safety, we should send the empty record after every call to ssl_DefSend. This would be overkill for many applications but might be necessary for others.
Thoughts?
Assignee
Comment 11•13 years ago
s/"you assume that the attacker can manipulate application data records"/
"you assume that the attacker can ONLY manipulate application data records"/
Assignee
Comment 12•13 years ago
I looked at OpenSSL and it also sends the empty record only for application data. Also, I remember that there are interoperability problems with fragmented handshake records, so only doing this before application data records is probably the most sensible thing to do, at least for now.
Comment 13•13 years ago
(In reply to comment #10)
Brian Smith wrote:
> Adam, in your patch, you assume that the attacker can ONLY manipulate application
> data records. Although this is probably the case now, it might not always be
> the case in the future. I think the workaround for SSL 3.0 and TLS 1.0
> should be more like the TLS 1.1 design where it applies to every record.
> This would make it easier to show that it is correct. Is there a reason that
> we shouldn't do it for other types of records?
It may be more likely to break existing SSL 3.0 and TLS 1.0 implementations.
Although NSS's ssl3_HandleHandshake seems to allow a zero length, other
implementations may not do so.
Note: NSS's ssl3_HandleChangeCipherSpecs and ssl3_HandleAlert require
the length be 1 and 2, respectively:
http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/ssl/ssl3con.c&rev=1.150&mark=2752,2770-2774#2745
http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/ssl/ssl3con.c&rev=1.150&mark=2591,2602-2606#2587
so we can't add empty change_cipher_spec and alert type records.
Assignee
Comment 14•13 years ago
Adam's patch would definitely seem to work for the known attack, but I think this patch is better because (a) it maps more directly to the TLS 1.1 mechanism, so it is easier to prove (to me, anyway) that it is effective, and (b) it also would protect against any future known plaintext (as opposed to chosen plaintext like we have here) attacks that involve non-application_data records such as alerts or handshake records (e.g. if the attacker can force the application to renegotiate).
In theory Adam's patch has slightly less overhead, but in many cases the overhead difference will be negligible.
Example 1 (using AES + HMAC-SHA1): An application writes 300KB (307,200 bytes) in a single write call. This would result in 300KB + (19*37=703) = 307,903 bytes sent over the wire without any mitigation. Adam's patch would increase this by 37 bytes to 307,940 bytes sent. This patch would increase this by 703 bytes to 308,606 bytes, which is still less than half of one percent of additional overhead on the wire. If the application writes data in chunks less than 16KB, then the two approaches will be even closer in performance.
The additional risk of this patch compared to Adam's is that it is different from the OpenSSL approach, and it requires the peer to be able to handle application_data records interleaved with handshake records for renegotiations (only).
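A quick, self-contained check of the arithmetic in Example 1; the 37-byte per-record overhead is taken from the comment above (presumably the 5-byte record header plus the 20-byte HMAC-SHA1 tag plus CBC padding, which is my assumption, not stated in the bug):

/* Reproduces the byte counts quoted in Example 1. */
#include <stdio.h>

int main(void)
{
    const long data     = 307200;                  /* 300 KB write           */
    const long records  = (data + 16383) / 16384;  /* 19 max-size fragments  */
    const long overhead = 37;                      /* per record, per above  */
    const long base = data + records * overhead;   /* 307,903 bytes          */

    printf("no mitigation:            %ld bytes\n", base);
    printf("one empty record total:   %ld bytes\n", base + overhead);           /* 307,940 */
    printf("empty record per record:  %ld bytes\n", base + records * overhead); /* 308,606 */
    return 0;
}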
Attachment #541628 - Flags: superreview?(wtc)
Attachment #541628 - Flags: review?(rrelyea)
Attachment #541628 - Flags: feedback?(agl)
Assignee
Comment 15•13 years ago
Attachment #541628 - Attachment is obsolete: true
Attachment #541632 - Flags: superreview?(wtc)
Attachment #541632 - Flags: review?(rrelyea)
Attachment #541632 - Flags: feedback?(agl)
Attachment #541628 - Flags: superreview?(wtc)
Attachment #541628 - Flags: review?(rrelyea)
Attachment #541628 - Flags: feedback?(agl)
Assignee
Comment 16•13 years ago
This is an add-on to the previous patch. It moves the acquisition/release of the spec read lock to ssl3_SendRecord, to avoid having to acquire the lock twice in every iteration of the loop.
Note that this patch also fixes a bug where the lock wasn't released when the cwSpec->compressor fails in ssl3_CompressMACEncryptRecord.
Attachment #541721 - Flags: review?(rrelyea)
Assignee
Comment 17•13 years ago
Bug 565047 is the bug about implementing TLS 1.1.
Comment 18•13 years ago
Comment on attachment 541632 [details] [diff] [review]
Prevent chosen plaintext attack by making every encrypted record's IV unpredictable (typo corrected)
Review of attachment 541632 [details] [diff] [review]:
-----------------------------------------------------------------
I think I agree that this is a better place to do the patch, but I wouldn't go so far as to interleave the application data records with a renegotiation. Technically it should work, but I wouldn't be too disparaging of server side code that assumed that it didn't happen.
On the other hand, sites that use renegotiation are few and far between, so I don't really care either way.
I think we're going to try Chrome with my patch for a little while to see if we scare up any compat issues. Apart from the renegotiation, I think the compat issues should be the same with both patches. If this ends up in upstream NSS then I strongly suspect that we would use it in order to avoid needless forking.
::: security/nss/lib/ssl/ssl3con.c
@@ +1913,5 @@
> : spec->client.write_mac_context);
> rv = PK11_DigestBegin(mac_context);
> rv |= PK11_DigestOp(mac_context, temp, tempLen);
> + if (inputLength > 0) {
> + rv |= PK11_DigestOp(mac_context, input, inputLength);
wtc pointed out to me that this isn't needed if a non-NULL pointer is passed in when writing the empty record. The check in PK11_DigestOp is probably deficient and should be fixed, but Chrome has to live with the system's version of that code.
Updated•13 years ago
Component: Security → Libraries
Product: Core → NSS
QA Contact: toolkit → libraries
Version: unspecified → 3.12
Comment 19•13 years ago
Comment on attachment 541137 [details] [diff] [review]
patch to add empty application data records
Adam's patch precedes each call to ssl3_SendApplicationData
with a single empty application data record. Although this
reduces the overhead, the empty application data records
may expose more message boundaries in the application data
stream, for messages longer than MAX_FRAGMENT_LENGTH.
I agree ssl3_CompressMACEncryptRecord is the best place to
insert the empty application data record because we won't
need to worry about handling errors in sending the records.
But I don't like sending empty application data records
in the middle of renegotiation.
Comment 20•13 years ago
Comment on attachment 541632 [details] [diff] [review]
Prevent chosen plaintext attack by making every encrypted record's IV unpredictable (typo corrected)
r+ rrelyea
Any use of the low-level SSL buffers can be tricky; I'd be happy if you got another r+ on just that aspect.
The switch and environment variables look right, and the minor code rearranging (passing the write buffer into the Mac routine for instance) is also correct. I'm happy with the env name.
bob
Attachment #541632 - Flags: review?(rrelyea) → review+
Comment 21•13 years ago
Comment on attachment 541721 [details] [diff] [review]
Add-on patch to avoid doubling the number of times the spec read lock is acquired/released
r+. Probably should put 'caller must hold ssl_GetSpecReadLock()' in a comment before the ssl3_CompressMac function.
Also, please attach cvs diffs. This allows more context when reviewing (I can bump the context up to 15 or 20 lines). It greatly improves the usefulness of the review.
Thanks,
bob
Attachment #541721 - Flags: review?(rrelyea) → review+
Assignee
Comment 22•13 years ago
Changes in this version:
1. The empty application_data records are no longer interleaved inside renegotiations. As a consequence, handshake and alert records that appear immediately after a handshake are not protected against as-yet-mythical known-plaintext attacks.
2. Eliminated all extra lock acquisition/release calls--now locking is no worse than before the patch.
Attachment #541632 - Attachment is obsolete: true
Attachment #541721 - Attachment is obsolete: true
Attachment #541996 - Flags: review?(wtc)
Attachment #541996 - Flags: feedback?(agl)
Attachment #541632 - Flags: superreview?(wtc)
Attachment #541632 - Flags: feedback?(agl)
Comment 23•13 years ago
Comment on attachment 541996 [details] [diff] [review]
v3: Prevent chosen plaintext attack by making every encrypted record's IV unpredictable
Review of attachment 541996 [details] [diff] [review]:
-----------------------------------------------------------------
LGTM
::: lib/ssl/ssl.h
@@ -143,0 +143,13 @@
> > +/* For SSL 3.0 and TLS 1.0, prevent the chosen plaintext attack on SSL CBC mode
> > + * cipher suites (see RFC 4346 Section F.3 and the [CBCATT] message it cites).
> > + * by sending an empty application_data ciphertext record before every
> > + * ciphertext record sent. This only prevents the attack in the sending
...
Double newline might not have been intended.
::: lib/ssl/ssl3con.c
@@ +1912,5 @@
> (useServerMacKey ? spec->server.write_mac_context
> : spec->client.write_mac_context);
> rv = PK11_DigestBegin(mac_context);
> rv |= PK11_DigestOp(mac_context, temp, tempLen);
> + if (inputLength > 0) {
This isn't needed as long as |input| != NULL. (And the check in PK11_DigestOp should be fixed too. I couldn't do that in my Chrome patch because we don't carry code outside of lib/ssl/.)
Attachment #541996 - Flags: feedback?(agl) → feedback+
Assignee
Comment 24•13 years ago
Adam, Thanks for the feedback.
This patch fixes the comments in ssl.h, makes the improvement to PK11_DigestOp that Adam suggested, and removes the check guarding the call to PK11_DigestOp.
Attachment #541996 - Attachment is obsolete: true
Attachment #542352 - Flags: review?(wtc)
Attachment #541996 - Flags: review?(wtc)
Comment 25•13 years ago
Comment on attachment 542352 [details] [diff] [review]
Prevent chosen plaintext attack by making every encrypted record's IV unpredictable, v4
Thank you for the patch. I marked the patch review- for two reasons.
1. In sslsock.c, the way you set the default value of the
SSL_PREVENT_CBC_ATTACK_WHEN_SENDING option is uncommon, and different
from how we set the detault values of other options controlled by
environment variables. Please imitate how we deal with SSLBYPASS
and NSS_SSL_REQUIRE_SAFE_NEGOTIATION in ssl_NewSocket.
Perhaps you found a bug in the way we deal with these environment
variables? If so, we should fix the bug systematically, in a
separate patch, rather than using a one-off fix for
SSL_PREVENT_CBC_ATTACK_WHEN_SENDING.
2. The way you determine which record should receive the CBC attack
protection is complicated because you still want to protect alert
and handshake records whenever you can. I think this kind of
complexity is problematic because few NSS developers will be able
to memorize and repeat it easily. I suggest using the simple rule
of only applying the CBC attack prevention to application_data records.
Mozilla doesn't call SSL_ReHandshake:
http://mxr.mozilla.org/mozilla-central/ident?i=SSL_ReHandshake
and there is no NSS function an application can call to send an SSL
alert.
Low-level comments follow.
1. lib/pk11wrap/pk11cxt.c
>- if (!in) {
Just change this to
> if (!in && inLen) {
2. lib/ssl/ssl.h
>+ * ... On by default, unless the NSS_ALLOW_CBC_ATTACK
>+ * environment variable is set to a non-empty value the first time
>+ * SSL_OptionGetDefault is called or the first time a SSL socket is created
>+ * (whichever happens first); the application can also explicitly set the
>+ * default with SSL_OptionSetDefault, which will cause the environment variable
>+ * to be ignored.
This rule of determining the default value is too complicated. (I already
commented on this problem above.)
>+#define SSL_PREVENT_CBC_ATTACK_WHEN_SENDING 23
Nit: I would use a name that describes what we do (prepending empty records)
rather than what the goal is.
3. lib/ssl/ssl3con.c
> static SECStatus
> ssl3_CompressMACEncryptRecord(sslSocket * ss,
> SSL3ContentType type,
> const SSL3Opaque * pIn,
>- PRUint32 contentLen)
>+ PRUint32 contentLen,
>+ sslBuffer * wrBuf)
Please imitate Nelson's style of placing the '*' one space after
the data type.
>+ * ... Every application_data record will be
>+ * protected, as will any alert records that follow application_data
>+ * records, and the first handshake record following an application_data
>+ * record; this maximizes interoperability with any implementations that
>+ * wrongly fail when application_data records are interleaved with
>+ * handshake records.
This rule is difficult to describe to people.
>+ /* We start prefixing all records with empty application_data records
>+ * (only) once we start sending application data.
>+ */
>+ if (type == content_application_data &&
>+ ss->ssl3.cwSpec->version <= SSL_LIBRARY_VERSION_3_1_TLS &&
>+ !ss->ssl3.readyToPreventCBCAttack &&
>+ ss->opt.preventCBCAttackWhenSending)
>+ ss->ssl3.readyToPreventCBCAttack = PR_TRUE;
Should we call ssl_GetSpecReadLock(ss) before reading ss->ssl3.cwSpec->version?
> while (nIn > 0) {
>- PRUint32 contentLen = PR_MIN(nIn, MAX_FRAGMENT_LENGTH);
>+ const PRUint32 contentLen = PR_MIN(nIn, MAX_FRAGMENT_LENGTH);
>+ sslBuffer * const wrBuf = &ss->sec.writeBuf;
Nit: this use of 'const' is uncommon in the NSS source tree. Does it help
people reason about the code?
>+ if (ss->ssl3.readyToPreventCBCAttack &&
>+ ss->ssl3.cwSpec->cipher_def->type == type_block) {
>+ /* The record will consist of the MAC, one byte for the length of
>+ * the padding, and then enough padding to make the record an
>+ * integer multiple of the block size.
The comment should note that we assume CBC mode is used without checking.
>- if (wrBuf->space < contentLen + SSL3_BUFFER_FUDGE) {
>+ if (wrBuf->space < cbcAttackPreventionRecordLen + contentLen
>+ + SSL3_BUFFER_FUDGE) {
> PRInt32 newSpace = PR_MAX(wrBuf->space * 2, contentLen);
> newSpace = PR_MIN(newSpace, MAX_FRAGMENT_LENGTH);
> newSpace += SSL3_BUFFER_FUDGE;
> rv = sslBuffer_Grow(wrBuf, newSpace);
newSpace needs to include space for cbcAttackPreventionRecordLen. I suggest
we replace the line
newSpace += SSL3_BUFFER_FUDGE;
with
newSpace += SSL3_BUFFER_FUDGE + cbcAttackPreventionRecordLen;
effectively treating cbcAttackPreventionRecordLen as part of the fudge.
>+ rv = ssl3_CompressMACEncryptRecord(ss, type, pIn, contentLen,
>+ &afterPrependedRecord);
>+
>+ wrBuf->len += afterPrependedRecord.len;
It may be better to update wrBuf->len only if (rv == SECSuccess).
>- PRINT_BUF(50, (ss, "send (encrypted) record data:", wrBuf->buf, wrBuf->len));
>+ PRINT_BUF(50, (ss, "send (encrypted) record data:",
>+ wrBuf->buf + cbcAttackPreventionRecordLen,
>+ wrBuf->len - cbcAttackPreventionRecordLen));
I think we should print the entire wrBuf faithfully, including the empty record.
Attachment #542352 -
Flags: review?(wtc) → review-
Assignee
Comment 26•13 years ago
Wan-Teh, thanks for the review. Comments inline.
(In reply to comment #25)
> 1. In sslsock.c, the way you set the default value of the
> SSL_PREVENT_CBC_ATTACK_WHEN_SENDING option is uncommon, and different
> from how we set the detault values of other options controlled by
> environment variables. Please imitate how we deal with SSLBYPASS
> and NSS_SSL_REQUIRE_SAFE_NEGOTIATION in ssl_NewSocket.
With SSLBYPASS, the environment variable is checked the first time a socket is created. That means that if the application called SSL_OptionSetDefault before creating a socket, the environment variable will override the explicit call to SSL_OptionSetDefault. In Firefox, I would like to have our preferences override the environment variables. However, I can do that by using SSL_OptionSet on each socket instead of using SSL_OptionSetDefault, so I will change my patch to use the existing style. I will file a separate bug about more clearly documenting the advantages/disadvantages of using SSL_OptionSetDefault as it relates to environment variables.
> 2. The way you determine which record should receive the CBC attack
> protection is complicated because you still want to protect alert
> and handshake records whenever you can. I think this kind of
> complexity is problematic because few NSS developers will be able
> to memorize and repeat it easily. I suggest using the simple rule
> of only applying the CBC attack prevention to application_data records.
With the current patch, NSS will prefix records with the empty application_data record starting from the first application_data record sent following a handshake until the next handshake record sent (inclusive), for each handshake.
I do not think this is too complicated, and I think it does a good job of maximizing protection against almost any possible future improvements to the attack without making the code complicated. I do agree my explanation in the comments makes it seem complicated. Would it be OK if I changed the explanation in the comments to use the explanation in the previous paragraph, or do you still think it is better to only do this for application_data records?
> Mozilla doesn't call SSL_ReHandshake:
> http://mxr.mozilla.org/mozilla-central/ident?i=SSL_ReHandshake
> and there is no NSS function an application can call to send an SSL
> alert.
When renegotiation is disabled, a web app could cause the browser to send a client_hello message or the no_renegotiation alert by navigating between two resources https://example.org/a and https://example.org/b that are configured to renegotiate (e.g. using the Apache misfeature of per-location cipher suite settings). Also, an active attacker can cause libssl to send various predictable alerts (record_overflow, bad_record_mac, unexpected_message, etc.). There may be other possibilities that I have overlooked. Although there is no published attack that exploits this, I think it is better to be safe than sorry.
> >+#define SSL_PREVENT_CBC_ATTACK_WHEN_SENDING 23
>
> Nit: I would use a name that describes what we do (prepending empty records)
> rather than what the goal is.
I chose this name because this code looks wrong and dangerous:
SSL_OptionSet(SSL_PREVENT_CBC_ATTACK_WHEN_SENDING, PR_FALSE);
but this code looks reasonable:
SSL_OptionSet(SSL_PREPEND_EMPTY_APPLICATION_DATA_RECORDS, PR_FALSE);
> >- PRINT_BUF(50, (ss, "send (encrypted) record data:", wrBuf->buf, wrBuf->len));
> >+ PRINT_BUF(50, (ss, "send (encrypted) record data:",
> >+ wrBuf->buf + cbcAttackPreventionRecordLen,
> >+ wrBuf->len - cbcAttackPreventionRecordLen));
>
> I think we should print the entire wrBuf faithfully, including the empty
> record.
The empty record is printed in the PRINT_BUF call with the message "send (encrypted) random IV record data:". I did this because then you can use SSLTRACE=50 to see exactly where the extra records are added. If the call you quoted above also included the empty record, then the empty records would be printed twice when tracing, and I think that would be very confusing.
Thank you again for the very detailed review. I will make the other changes you suggested.
Assignee
Comment 27•13 years ago
(In reply to comment #25)
> 1. lib/pk11wrap/pk11cxt.c
>
> >- if (!in) {
>
> Just change this to
>
> >- if (!in && inLen) {
I think my change is better for two reasons:
1. With this patch, there will be many calls to PK11_DigestOp with inLen == 0. If we do not exit early when inLen == 0 then we will incur overhead (especially locking overhead) in each of these calls in PK11_DigestOp. By returning early, we avoid that overhead.
2. PK11_DigestOp is a public API and it is a good idea to check context != NULL to avoid crashing when a NULL pointer is passed in.
Assignee
Comment 28•13 years ago
In addition to addressing the review comments, this version makes the naming of the environment variable and option consistent with the rest of the code and with each other.
I removed the explicit calculation of cbcAttackPreventionRecordLen. In addition to the problem Wan-Teh mentioned, I had also overlooked the possibility that compression overhead may be added to the empty record.
Attachment #542352 - Attachment is obsolete: true
Attachment #542392 - Flags: review?(wtc)
Assignee
Comment 29•13 years ago
Attachment #542392 - Attachment is obsolete: true
Attachment #542393 - Flags: review?(wtc)
Attachment #542392 - Flags: review?(wtc)
Comment 30•13 years ago
We have our first compat issue: Oracle servers.
Upon receiving an empty application data record they send a close_notify alert and close the connection.
(For those unfamiliar with Oracle servers, they're commonly TLS intolerant, even today.)
The report actually came from Android (which uses OpenSSL and so already does this in Honeycomb at least) for www.banner.iup.edu.
Comment 31•13 years ago
Comment on attachment 542393 [details] [diff] [review]
v6: Prevent chosen plaintext attack by making every encrypted record's IV unpredictable
r=wtc.
1. lib/pk11wrap/pk11cxt.c
>- if (!in) {
>+ if (inLen == 0)
>+ return SECSuccess;
>+
>+ if (context == NULL || in == NULL) {
> PORT_SetError(SEC_ERROR_INVALID_ARGS);
> return SECFailure;
> }
Your change is fine. The reason I don't like the if (inLen == 0) test
is that it prevents us from exercising our low-level hash implementations
with a zero-length input. (Some people use our softoken directly, and
our libSSL calls the freebl functions in bypass-PKCS11 mode, so it's good
to know our low-level hash functions can also handle zero-length inputs.)
2. lib/ssl/ssl3con.c
>+ * ... The empty records are always of type
>+ * application_data because many implementations do not accept empty
>+ * records of other types. ...
Unless you have verified this with experiments or code review, it is better
to add "we suspect" or "we are worried" before "many implementations do not ...".
>- if (wrBuf->space < contentLen + SSL3_BUFFER_FUDGE) {
>+ if (wrBuf->space < contentLen + (numRecords * SSL3_BUFFER_FUDGE)) {
I like your original code better, which uses the accurate length of the
empty application_data record. In the new code, we'll need to prove
that SSL3_BUFFER_FUDGE is at least that length.
>+ startOfRecord = wrBuf->len;
It may be better to rename this variable recordOffset. startOfRecord sounds
like a pointer to me.
3. lib/ssl/sslsock.c
>+static PRBool defaultSetForPreventCBCAttackWhenSending = PR_FALSE;
Delete this variable. It's now unused.
> if (on) {
> locksEverDisabled = PR_TRUE;
> strcpy(lockStatus + LOCKSTATUS_OFFSET, "DISABLED.");
>- }
>+ }
> break;
Undo this whitespace change.
Attachment #542393 - Flags: review?(wtc) → review+
Comment 32•13 years ago
(In reply to comment #26)
>
> With SSLBYPASS, the environment variable is checked the first time a socket
> is created. That means that if the application called SSL_OptionSetDefault
> before creating a socket, the environment variable will override the
> explicit call to SSL_OptionSetDefault. ...
> ... I will file a separate bug
> about more clearly documenting the advantages/disadvantages of using
> SSL_OptionSetDefault as it related to environment variables.
Thank you for the explanation. This is the bug I feared.
SSL_OptionSetDefault should override environment variables. We should
fix this bug for all environment variables with a separate patch.
Assignee
Comment 33•13 years ago
(In reply to comment #31)
> Your change is fine. The reason I don't like the if (inLen == 0) test
> is that it prevents us from exercising our low-level hash implementations
> with a zero-length input. (Some people use our softoken directly, and
> our libSSL calls the freebl functions in bypass-PKCS11 mode, so it's good
> to know our low-level hash functions can also handle zero-length inputs.)
Filed bug 668002 and bug 668003.
> >- if (wrBuf->space < contentLen + SSL3_BUFFER_FUDGE) {
> >+ if (wrBuf->space < contentLen + (numRecords * SSL3_BUFFER_FUDGE)) {
>
> I like your original code better, which uses the accurate length of the
> empty application_data record. In the new code, we'll need to prove
> that SSL3_BUFFER_FUDGE is at least that length.
I filed bug 668005 about this. I changed the code to the above because it wasn't clear that we could rely on compressing a zero-byte input resulting in zero bytes out. If we can be sure of this then I can change the code back to the way it was.
(In reply to comment #32)
> (In reply to comment #26)
> >
> > With SSLBYPASS, the environment variable is checked the first time a socket
> > is created. That means that if the application called SSL_OptionSetDefault
> > before creating a socket, the environment variable will override the
> > explicit call to SSL_OptionSetDefault. ...
> > ... I will file a separate bug
> > about more clearly documenting the advantages/disadvantages of using
> > SSL_OptionSetDefault as it related to environment variables.
>
> Thank you for the explanation. This is the bug I feared.
> SSL_OptionSetDefault should override environment variables. We should
> fix this bug for all environment variables with a separate patch.
I filed bug 668001 about this.
Assignee
Comment 34•13 years ago
(In reply to comment #30)
> We have our first compat issue: Oracle servers.
>
> Upon receiving an empty application data record they send a close_notify
> alert and close the connection.
>
> (For those unfamiliar with Oracle servers, they're commonly TLS intolerant,
> even today.)
>
> The report actually came from Android (which uses OpenSSL and so already
> does this in Honeycomb at least) for www.banner.iup.edu.
Now we should revisit whether it should be enabled by default.
Is it the case that TLS-intolerant Oracle servers reject the empty records (so, only over SSL 3.0) but TLS-tolerant ones accept them? If so, it might be reasonable to only enable this for TLS 1.0.
I think before we were hoping that since OpenSSL does this, we could do it too with very low risk of compatibility problems. Now that there is a documented compatibility issue, we should more carefully analyze the impact to see if we really need to do this. In particular, are Firefox and Chrome really impacted in an exploitable way by this issue?
Comment 35•13 years ago
I'm not sure if there are any TLS tolerant Oracle servers.
Android has been running with this since (I think) Gingerbread, but certainly Honeycomb and this is the first reported instance. So it's clearly not a widespread problem.
Only enabling this for TLS is problematic, as it's pretty simple to cause either Firefox or Chrome to fall back to SSLv3.
Another possible workaround is to only allow stream ciphers (basically meaning RC4) for SSLv3 connections.
As for the attack itself, it's *way* down my list of problems. The new WebSockets protocol fixes it by masking the application data with a random mask. The other attack vectors mentioned were Flash, Java and Silverlight. Flash is commonly installed and I don't know much about it, but in order to exploit this, I assume that you need to create a socket to the server, and I believe that Flash will only do that to the same origin or when the server explicitly allows it.
Reporter
Comment 36•13 years ago
Unless we find a vector that makes this exploitable by default in the browser (and with a fixed websockets spec/implementation it is not) then it's not really a critical security bug.
Summary: Rizzo/Duong chosen plaintext attack on SSL/TLS 1.0 (facilitated by websockets) → Rizzo/Duong chosen plaintext attack on SSL/TLS 1.0 (facilitated by websockets -76)
Whiteboard: [sg:moderate] not vuln by default currently
Comment 37•13 years ago
(In reply to comment #36)
> Unless we find a vector that makes this exploitable by default in the
> browser (and with a fixed websockets spec/implementation it is not) then
> it's not really a critical security bug.
Up until this very moment, the vulnerability is exploitable in Chrome and Firefox, since the fixed WebSockets spec/implementation has not been shipped. Anyway, fixing WebSockets is necessary and it should be done as soon as possible [1].
Our concern is that the moment we release the paper, somebody would figure out a better way to exploit this vulnerability without WebSockets. So if we don't fix the vulnerability, we must then ensure that: a) there is no other way to exploit this besides WebSockets; b) everybody is aware of this problem whenever they want to design a new application protocol or technology running in the browser. We are not sure about the first point. There might be a clever JavaScript hack (using, for example, XMLHttpRequest Level 2 and the Blob/File APIs) that we missed. For the second point, we don't think it's a good approach. Most people assume that SSL/TLS must protect the privacy and secrecy of upper layer protocols regardless of their designs and operation modes.
Anyway, we've started speculating and we should stop now ;-).
Juliano and Thai.
[1]: Well, people can always employ a hack like WebSockets does with its masking, but personally we think the right approach to avoid cross-protocol attacks in browsers is to use SSL/TLS by default. Then we should fix SSL/TLS first.
Comment 38•13 years ago
For the reported compat issue, how about convincing Oracle to make their servers compatible? It is not as difficult as you might think.
Since the Java Secure Socket Extension is also vulnerable to the attack, and in fact it is much easier to exploit than WebSockets, we are also working with Oracle on the patch. Oracle will probably follow what we have proposed, i.e., prepending an empty message before every application record. They now have a very strong incentive to make their servers compatible with JSSE and browsers, right?
Comment 39•13 years ago
(In reply to comment #35)
>
> Another possible workaround is to only allow stream ciphers (basically
> meaning RC4) for SSLv3 connections.
This is probably out of scope of this discussion, but since you are the author of False Start, we would like to ask if it is possible for somebody to leverage False Start to select block ciphers instead of stream ciphers. We think it's possible in theory, but we haven't looked at any False Start implementations to be sure.
>
> As for the attack itself, it's *way* down my list of problems. The new
> WebSockets protocol fixes it masking the application data with a random
> mask. The other attack vectors mentioned were Flash, Java and Silverlight.
> Flash is commonly installed and I don't know much about it but, in order to
> exploit this, I assume that you need to create a socket to the server and I
> believe that Flash will only do that to the same origin or when the server
> explicitly allows it.
A recent study showed that there are many servers out there allowing cross-origin requests from anywhere [1]. A cross-site scripting vulnerability in one of the allowed domains can also open doors to this attack. Note that in some sense, this attack is stronger than XSS, because XSS can't steal HTTPOnly cookies. Nevertheless, we believe that Flash, Java or Silverlight is out of scope of our current discussion; that's why we chose to work with their respective vendors in parallel with browser vendors.
[1]: http://w2spconf.com/2011/papers/cross_domain_Nation.pdf
Comment 40•13 years ago
(In reply to comment #39)
> This is probably out of scope of this discussion, but since you are the
> author of False Start, we would like to ask if it is possible for somebody
> to leverage False Start to select block ciphers instead of stream ciphers.
> We think it's possible in theory, but we haven't looked at any False Start
> implementations to be sure.
(This might be getting off topic. You're always welcome to email me directly.)
With False Start, a network attacker can cause the client to send its initial request using a block cipher in cases where an unaltered handshake would have selected a stream cipher. However, the connection will never get further than that because the server's Finished message will be invalid from the point of view of the client.
Assignee
Comment 41•13 years ago
(In reply to comment #40)
> With False Start, a network attacker can cause the client to send its
> initial request using a block cipher in cases where an unaltered handshake
> would have selected a stream cipher.
I think that this is the problem, because the goal of the attack is to steal request credentials (HTTP auth / cookie), not responses.
Comment 42•13 years ago
(In reply to comment #41)
> I think that this is the problem, because the goal of the attack is to steal
> request credentials (HTTP auth / cookie), not responses.
The attacker can't mount any sort of adaptive attack because they can't send data on the connection after the handshake. The connection won't continue until the server replies, and the attacker can't fake a valid Finished message from the server.
Comment 43•13 years ago
(In reply to comment #42)
> The attacker can't mount any sort of adaptive attack because they can't send
> data on the connection after the handshake. The connection won't continue
> until the server replies, and the attacker can't fake a valid Finished
> message from the server.
This is true for WebSockets, since the client has to wait for the WebSockets handshake to finish before it can send more data. However, with plain old HTTP the attacker can make the client open a POST request, and perform the adaptive step in the request body. Actually, this is how we exploit the vulnerability in the Java Secure Socket Extension with a Java applet. Now if JSSE supports False Start, then we have a problem here, right?
Comment 44•13 years ago
(In reply to comment #43)
> However, with plain old
> HTTP the attacker can make client open a POST request, and perform the
> adaptive step in the request body.
Is there any way to provide a POST body after a request with normal HTML(5)?
If you have some plugin that allows it then yes, False Start would mean that setting the server to prefer stream ciphers wouldn't mitigate this attack.
Comment 45•13 years ago
(In reply to comment #44)
>
> Is there any way to provide a POST body after a request with normal HTML(5)?
As far as I know, the answer is no. We still keep looking though.
Comment 46•13 years ago
xuelei.su@gmail.com works on Java SE security and networking. His team is evaluating a security vulnerability of the TLS protocol, the CBC attack on TLS.
They want to make sure that they address the issue with the same solution as other vendors in case of any interoperability issues, take part in the discussion, and look into interoperability testing when the fix is ready.
Comment 47•13 years ago
Bradford.Wetmore@oracle.com works for Java too.
Comment 48•13 years ago
I'm a little late to the party, but am jumping in with both feet.
One quick question: for the claim that Oracle servers don't support this approach, can you provide specifics as to which server(s) show this behavior? Oracle (and its various acquisitions) have seen a lot of different SSL/TLS implementations over the years. Are any of these JSSE-based? I haven't done a JSSE prototype of the attack yet, but I would expect that the standard SunJSSE code should be able to handle this approach.
(I realize this is a NSS discussion, and not Oracle, so feel free to email me and Xuelei directly if this is too off-topic. However, it might be good for folks to know where the compat problems actually lie.)
I'm still thinking through whether this empty packet approach is workable, initially it seems fine, but still want to give it more thought. If I understand this right, this assumes that the empty initial packet must be generated and sent atomically along with the outbound ciphertext, so that the attacker could not adaptively choose the proper plaintext. Since the attacker doesn't know k, the IV for the next data block is effectively "random."
Comment 49•13 years ago
(In reply to comment #48)
> One quick question: for the claim that Oracle servers don't support this
> approach, can you provide specifics as to which server(s) show this
> behavior. Oracle (and its various acquisitions) have seen a lot of
> different SSL/TLS implementations over the years. Are any of these
> JSSE-based? I haven't done a JSSE prototype of the attack yet, but I would
> expect the standard SunJSSE code should be able to handle this approach.
Please check out www.banner.iup.edu. Its server header is "Oracle-Application-Server-10g/10.1.2.2.0 Oracle-HTTP-Server".
>
> I'm still thinking through whether this empty packet approach is workable,
> initially it seems fine, but still want to give it more thought. If I
> understand this right, this assumes that the empty initial packet must be
> generated and sent atomically along with the outbound ciphertext, so that
> the attacker could not adaptively choose the proper plaintext. Since the
> attacker doesn't know k, the IV for the next data block is effectively
> "random."
Yes, you understand this right.
Assignee
Comment 50•13 years ago
"My problem is I can't access web pages by IE6.0.29 when server uses the DES-CBC-SHA or other(DES or DES3) cipher suits." "I ran into a very similar problem. I think that the solution is to use the SSL_OP_ALL or SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS flags for OpenSSL." - [1]
"Reasons why the SSL connection from an i-mode phone fails" [2]
"The .NET Compact Framework runtime treats such empty packets as a signal to end the connection." [3]
Apparently, Tomcat, Apache mod_ssl, and Exim disable this feature in OpenSSL by default [4][5][6].
[1] http://www.daniweb.com/hardware-and-software/microsoft-windows/web-browsers/threads/50765
[2] http://support.f5.com/kb/en-us/solutions/public/6000/000/sol6040.html
[3] http://support.microsoft.com/kb/970549
[4] http://svn.apache.org/repos/asf/tomcat/trunk/java/org/apache/tomcat/jni/SSL.java
[5] http://www.opensource.apple.com/source/apache_mod_ssl/apache_mod_ssl-4/mod_ssl/pkg.sslmod/ssl_engine_init.c
[6] http://lists.pcre.org/lurker/message/20110322.131702.b0cd375d.it.html
Assignee
Comment 51•13 years ago
Although this workaround is enabled by default in OpenSSL, many applications set SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS (or SSL_OP_ALL, which includes that option among others). Based on the results of a simple Google search, we can see that there are many cases where this workaround causes interoperability problems.
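For context, the way those applications disable the OpenSSL workaround is typically a one-line option setting; a minimal sketch (illustrative only, error handling omitted):

/* SSL_OP_ALL includes SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS, so a server
 * configured to enable "all bug workarounds" also turns the empty-fragment
 * countermeasure off. Illustrative sketch, not taken from any of the
 * projects cited above. */
#include <openssl/ssl.h>

SSL_CTX *make_server_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(SSLv23_server_method());
    SSL_CTX_set_options(ctx, SSL_OP_ALL);   /* disables empty fragments too */
    return ctx;
}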
Perhaps, then, it makes more sense to modify products to support TLS 1.1 than to deploy this workaround. We could do this in NSS 3.13. The main issues are "how many servers are intolerant of TLS 1.1 hellos but are tolerant of TLS 1.0 hellos?" and "should we still bother with implementing this workaround for TLS 1.0 and SSL 3.0?"
I think it is clear now that this has to be off by default. If we decide we still want to do this for 3.12.11, then I will produce the patch.
Comment 52•13 years ago
Comment 53•13 years ago
Comment 54•13 years ago
I used Shodan [1] to collect a bunch of Oracle application servers, and I found that some of them work with the proposed patch and some do not. Please see the attachments for server IPs and headers.
[1] http://www.shodanhq.com/?q=Oracle-Application-Server-10g%2F10.1.2.2.0+port%3A443
Comment 55•13 years ago
(In reply to comment #51)
> Although this workaround is enabled by default in OpenSSL, many applications
> set SSL_OP_DONT_INSERT_EMPTY_FRAGMENTS (or SSL_OP_ALL, which includes that
> option among others). Based on the results of a simple Google search, we can
> see that there are many cases where this workaround causes interoperability
> problems.
Your information is very useful. Anyway, I think all the cases you listed are examples of a client that doesn't tolerate empty messages from the server. As Adam Langley stated in comment #35, Android has been sending empty messages for a while and the Oracle application server is the first reported instance. So it's clearly not a widespread problem (that is, servers not tolerating empty messages from the client).
>
> I think it is clear now that this has to be off by default.
I think it should be on when used in client products, and off in server products.
Comment 56•13 years ago
We've landed a change in Chrome that performs the zero-sized application data stuffing and we'll be monitoring for bug reports resulting from this.
Assignee
Comment 57•13 years ago
This version switches the default to "off" and renames the variables, including the environment variable, accordingly.
Attachment #542393 - Attachment is obsolete: true
Attachment #545005 - Flags: review?(wtc)
Assignee
Updated•13 years ago
Target Milestone: --- → 3.12.11
Comment 58•13 years ago
Dan, does this still need to be tracked for 6 given that comment #5 suggests we're not vulnerable there?
Comment 59•13 years ago
One significant drawback of the current proposed countermeasure
(sending empty application data packets) is that the empty packet
might be rejected by the TLS peer (see comments #30/#50/others: MSIE
does not accept empty fragments, Oracle application server (non-JSSE)
cannot accept empty fragments, etc.)
We've been looking at a slightly different countermeasure that should
comply with the TLSv1.0/SSLv3.0 specifications, and likely won't break
implementations. Would you please review the following proposal?
If this is sound, this might avoid the empty packet issue, and the
switches necessary to configure it.
Looking at the spec of TLS, the block-ciphered structure is defined as:
block-ciphered struct {
opaque content[TLSCompressed.length];
opaque MAC[CipherSpec.hash_size];
uint8 padding[GenericBlockCipher.padding_length];
uint8 padding_length;
} GenericBlockCipher;
This implies that if the TLSCompressed.length is less than the cipher
block size, the MAC and padding data [1] will be used to construct the
first block.
The countermeasure: instead of sending a completely empty fragment
before the supplied application data, send the first byte of
application data in a packet, followed by the remaining data from the
write call. I think this should work because the TLS specification
requires content followed by the MAC. Like the proposed solution,
you're still removing the adaptive guessing mechanism from the IV
calculation, but in a different way.
Assume here a cipher block size of 8. Since the MAC effectively
randomizes the IV for successive SSL/TLS packets, if this first byte is
one of the bytes being guessed, that still leaves a search space of 2^8
(1 byte of data) * 2^56 (MAC) = 2^64, which is the same as the cipher
itself. If the 1 byte is known plaintext, then the MAC will change the
IV for successive packets, and the attacker is not given the chance to
adapt the input Pj.
Implementations should readily accept 1 byte of application data, so
this will likely address the empty fragment issue, while complying with
the SSL 3.0 and TLS 1.0 specifications.
Looking forward to your comments.
Thanks,
Xuelei
[1] From a quick review of the standard hashes and ciphers in the TLS
cipher suites, the CipherSpec.hash_size should always be bigger than or
equal to the block size. So the first block of a cipher operation should
not contain the padding data.
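A hedged sketch of Xuelei's proposal, often described as a 1/(n-1) record split; the ssl_conn type and send_record() helper are the same hypothetical stand-ins used in the earlier empty-record sketch, not the NSS or JSSE interfaces:

/* 1/(n-1) split: send the first byte of the application data in its own
 * CBC record, then the remaining bytes. The MAC and padding of that
 * one-byte record randomize the IV that the following record will use,
 * without ever putting an empty record on the wire. Hypothetical helpers. */
#include <stddef.h>

typedef struct { int version; int cipher_is_cbc; } ssl_conn;
enum { CONTENT_APPLICATION_DATA = 23, TLS1_0 = 0x0301 };

/* Stand-in for the library's internal record writer (declaration only). */
int send_record(ssl_conn *ss, int type, const unsigned char *buf, size_t len);

int send_application_data_split(ssl_conn *ss, const unsigned char *buf, size_t len)
{
    if (len > 1 && ss->version <= TLS1_0 && ss->cipher_is_cbc) {
        if (send_record(ss, CONTENT_APPLICATION_DATA, buf, 1) != 0)
            return -1;               /* one byte, still MACed and padded   */
        buf += 1;
        len -= 1;                    /* rest goes out as a normal record   */
    }
    return send_record(ss, CONTENT_APPLICATION_DATA, buf, len);
}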
Comment 60•13 years ago
(In reply to comment #59)
> The countermeasure: instead of sending a completely empty fragment
> before the supplied application data, send the first byte of
> application data in a packet, followed by the remaining data from the
> write call. I think this should work because the TLS specification
> requires content followed by the MAC. Like the proposed solution,
> you're still removing the adaptive guessing mechanism from the IV
> calculation, but in a different way.
>
> Assume here a cipher block size of 8. Since the MAC effectively
> randomizes the IV for successive SSL/TLS packets, if this first byte is
> one of the bytes being guessed, that still leaves a search space of 28
> (1 byte of data) * 256 (MAC) = 264, which is the same as the cipher
> itself. If the 1 byte is known plaintext, then the MAC will change the
> IV for successive packets, and the attacker is not given the chance to
> adapt the input Pj.
>
> Implementations should readily accept 1 byte of application data, so
> this will likely address the empty fragment issue, while complying with
> the SSL 3.0 and TLS 1.0 specifications.
It works. Following your idea, I wrote a small HTTPS client using a modified version of tlslite [1]. My client can connect to all of those "bad" Oracle application servers including www.banner.iup.edu.
Thanks for proposing such a simple yet very effective approach.
Thai.
[1] http://trevp.net/tlslite/
Assignee
Comment 61•13 years ago
Xuelei, thanks for the suggestion.
Would any amount of plaintext in the first record up to (block_size - 1) bytes work? If so, then it might be better to send up to (block_size - 1) bytes in the first record, to minimize overhead.
Comment 62•13 years ago
(In reply to comment #32)
> (In reply to comment #26)
> Thank you for the explanation. This is the bug I feared.
> SSL_OptionSetDefault should override environment variables.
I disagree. IMO, the environment variables should trump all other calls.
I've always envisioned them this way. They exist to allow users to change
the behaviors of applications, even those that call "...SetDefault".
Comment 63•13 years ago
Neither Xuelei nor I are low-level cryptographers, so we're hoping someone with a stronger background can weigh in here soon.
To respond to Brian's Comment 61, I don't think so; otherwise, you're getting back to the original problem using a known plaintext/guess approach as described in sections 4.2/5 of Thai/Juliano's paper. For example, let's continue to assume 8-byte blocks for the cipher, and you have the previous IV available from the previous packet. If you choose/know the first 6 bytes and are guessing at the 7th (with the 8th being the first byte of the MAC): ABCDEF?M. I think your search "space" is 2^16.
You may not be able to completely confirm the "?M" value is correct with just the one operation, but you might be able to repeat the process and get a better probabilistic sense for whether "?" is correct.
The one byte proposal seems ok in my understanding. If the one byte is a known value, you can't change the bytes to come. If it's unknown, you're essentially back to guessing the cipher's key.
Comment 64•13 years ago
(In reply to comment #61)
If there is only one randomized MAC byte in the first block, the probability of guessing the last application data byte in the fragment is 1/2^16 in one operation. Although the MAC byte is randomized, so the attacker might not be able to confirm the byte in 2^16 operations, I think 1/2^16 is not a small enough likelihood.
I have no sense of whether a likelihood of 1/2^8^(block_size/2) is small enough to prevent a brute-force attack. We need experts to weigh in here.
Only sending the first byte of the application data in the first fragment of a payload should have the best strength, I think.
Comment 65•13 years ago
From comment 63:
> Neither Xuelei nor I are low-level cryptographers, so we're hoping someone
> with a stronger background can weigh in here soon.
Just wondering if anyone with a stronger crypto background is looking at the soundness of Xuelei's proposal. We're likely going to go with this approach.
Thanks.
Comment 66•13 years ago
|
||
Hi Brad and Xuelei,
I think the proposal is good. I recommend you go with that approach.
Assignee | ||
Comment 67•13 years ago
|
||
(In reply to comment #59)
> Assume here a cipher block size of 8. Since the MAC effectively
> randomizes the IV for successive SSL/TLS packets, if this first byte is
> one of the bytes being guessed, that still leaves a search space of 2^8
> (1 byte of data) * 2^56 (MAC) = 2^64, which is the same as the cipher
> itself. If the 1 byte is known plaintext, then the MAC will change the
> IV for successive packets, and the attacker is not given the chance to
> adapt the input Pj.
The search space for that first byte might not be 2^8. The attacker probably knows something about the protocol, so he can reduce the search space for that byte. So, conservatively, we can only assume the search space for the first byte is 2^1, not 2^8.
As for the MAC part, the odds of the attack succeeding depend on the first (block_size - 1) bits of the MAC being equal to the plaintext that the attacker wanted to guess. This will happen 2^(block_size - 8) of the time (e.g. 2^53 for 3DES).
So, effectively, the odds of successful guess of one block would be ~1/(2^(block_size - 8)), not 1/(2^block_size). (2^54 for 3DES). Is this problematic for any reason?
Comment 68•13 years ago
|
||
(In reply to comment #67)
> This will happen 2^(block_size - 8) of the time (e.g. 2^53 for 3DES).
The block size of 3DES is 64 bits. In the above example, should it be 2^56 rather than 2^53? Or is there anything I missed?
I think the purpose of the attack is to guess the plaintext. If the plaintext (1st byte) is known, the attack won't need to guess it any more, and the countermeasure still works for the following bytes. If it is from a very limited set (for example, "0" or "1", search space is 1), I think 1/2^(block_size - 8 + set_size) should be enough to prevent the CBC attack.
Suppose the set size is 1: if the block size is 8 bytes, the search space is 2^57. For a computer with a 10GHz CPU, where every attempt takes 10 CPU cycles (in practice it will certainly take many more), a successful attack would need around 4 years. Considering that the attack must stay within a live TLS session, I think the strength is enough.
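Spelling out that estimate (same numbers as above):
\[ \frac{2^{57}\ \text{attempts} \times 10\ \text{cycles per attempt}}{10^{10}\ \text{cycles/s}} \approx 1.4 \times 10^{8}\ \text{s} \approx 4.6\ \text{years} \]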
Comment 69•13 years ago
|
||
As the MAC calculation takes into account the sequence number (to prevent replay attacks), even if you're guessing at the MAC value, it will vary from packet to packet. I don't think you will be able to replay and get an exact equivalence answer.
Reporter | ||
Comment 70•13 years ago
|
||
Clearly not going to rush a protocol change into a Firefox beta, "wontfix" for Fx6. Probably not appropriate for Fx7 either.
Comment 71•13 years ago
|
||
This patch implements the new countermeasure proposed by
Xuelei Fan in comment 59. This patch is partially based
on Brian's patch v6.
The description of the SSL_ALLOW_CBC_ATTACK_WHEN_SENDING
option in ssl.h is confusing because it says "On by default",
but that refers to the countermeasure. The
SSL_ALLOW_CBC_ATTACK_WHEN_SENDING option is off (PR_FALSE)
by default.
Attachment #545005 -
Attachment is obsolete: true
Attachment #545005 -
Flags: review?(wtc)
Comment 72•13 years ago
|
||
I found that we may not be able to enable the new countermeasure
by default. Some test programs assume that a single SSL read call
will receive all the expected data. With the new countermeasure,
the SSL read call returns only one byte. A second SSL read call
is needed to receive the rest of the expected data.
With the old countermeasure, NSS won't report the empty data
record it receives, so there is no perceivable behavior change.
Updated•13 years ago
|
Target Milestone: 3.12.11 → 3.13
Comment 73•13 years ago
|
||
Opera has just released a patch for this vulnerability: http://www.opera.com/docs/changelogs/windows/1151/. It's worth noting that WebSocket is not enabled by default in Opera. BTW, we are going to present this attack with some demos at the EKOPARTY conference on September 21-22-23.
All the best,
Thai.
Comment 74•13 years ago
|
||
(In reply to Wan-Teh Chang from comment #72)
> I found that we may not be able to enable the new countermeasure
> by default. Some test programs assume that a single SSL read call
> will receive all the expected data. With the new countermeasure,
> the SSL read call returns only one byte. A second SSL read call
> is needed to receive the rest of the expected data.
>
> With the old countermeasure, NSS won't report the empty data
> record it receives, so there is no perceivable behavior change.
Do we know what Opera did in comment 73? Can we do that? This needs action today if it is going to make it into Firefox 7.
Assignee | ||
Comment 75•13 years ago
|
||
(In reply to Wan-Teh Chang from comment #72)
> Some test programs assume that a single SSL read call will
> receive all the expected data.
Wan-Teh, were you referring to test programs in our NSS test suite, or were you referring to interoperability tests with other (deployed, production) software?
Assignee | ||
Comment 76•13 years ago
|
||
(In reply to Thai Duong from comment #73)
> BTW, we are going to present this attack with some demos
> at the EKOPARTY conference on September 21-22-23.
Thai, are any of your demos using HTTP or WebSockets draft 7 or later? I have seen some of your comments (e.g. on Twitter) mentioning successful attacks on "HTTPS" connections but I don't know if you mean literally "HTTP over TLS" or you were referring to obsolete versions of websockets over TLS (which happen to use "HTTPS" as the scheme), or both.
Have you improved your attack at all beyond the draft that is currently attached to this bug?
Comment 77•13 years ago
|
||
Yes, I'd be very interested if this works in the websockets version we have in Firefox 7, currently in Beta.
Updated•13 years ago
|
Whiteboard: [sg:moderate] not vuln by default currently → [sg:high] vuln if plugins are enabled
Assignee | ||
Comment 78•13 years ago
|
||
Please use bug 688008 for non-NSS-specific aspects and mitigations of the attack, including how plugins are affected and how affected plugins affect Gecko applications.
OS: Mac OS X → All
Hardware: x86 → All
Whiteboard: [sg:high] vuln if plugins are enabled → [sg:high]
Version: 3.12 → trunk
Assignee | ||
Comment 79•13 years ago
|
||
Here is a version that splits the first plaintext byte of each application_data record into its own record. It avoids the problem of sending the one-plaintext-byte record in a separate packet from the record that follows it, which was reported to me by Wan-Teh.
It is possible that the technique used by Wan-Teh and Adam Langley in their patches would also work, if we modified it to send the one-plaintext-byte record with the ssl_SEND_FLAG_FORCE_INTO_BUFFER flag. However, I did not do that because I wanted to make the code more robust against the following scenario:
The application calls ssl3_SendApplicationData(ss, in, len, 0). The input buffer is controlled by an attacker's script. If we assume that the attacker cannot modify the input buffer until after ssl3_SendApplicationData returns, then Wan-Teh and Adam's code works fine. But if it is possible for the attacker to modify the input buffer (e.g. in another thread, by causing a memory-mapped file's contents to change, or in some other way) during the execution of ssl3_SendApplicationData, then there would be a race condition in which the attacker could conceivably still pull off the attack that we are trying to protect against: we would send the one-plaintext-byte record, the attacker would intercept the traffic and then modify the input buffer appropriately, and then we would encode and send the rest of the records. I doubt that Firefox or Chrome would be exploitable in such a way, but it is straightforward to prevent this scenario. My patch prevents it by never doing any I/O between the encoding of the one-plaintext-byte record and the encoding of a subsequent record.
I tested this code using these commands:
SSLDEBUGFILE=foo.log SSLTRACE=99 SSLDEBUG=99 \
tstclnt.exe -h www.reddit.com -p 443 -d . -o -c y
NSS_SSL_ALLOW_CBC_ATTACK_WHEN_SENDING=1 \
SSLDEBUGFILE=foo.log SSLTRACE=99 SSLDEBUG=99 \
tstclnt.exe -h www.reddit.com -p 443 -d . -o -c y
Without NSS_SSL_ALLOW_CBC_ATTACK_WHEN_SENDING=1, you will see "send (encrypted) record data [1/2]" and "send (encrypted) record data [2/2]" in the log for the application_data portions.
With NSS_SSL_ALLOW_CBC_ATTACK_WHEN_SENDING=1, you will see "send (encrypted) record data [1/1]" in the log for the application_data portions.
Attachment #561647 -
Flags: superreview?(rrelyea)
Attachment #561647 -
Flags: review?(wtc)
Attachment #561647 -
Flags: feedback?
Comment 80•13 years ago
|
||
I discussed this issue with MITRE (without giving away the actual attack), and they've decided that it needs an old CVE ID.
They are giving it CVE-2004-2770 due to me pointing them at this paper:
http://eprint.iacr.org/2004/111
The basic idea is that the actual flaw is very old, and was thought to be useless in the real world. This attack exploits the old flaw with a new technique. In such instances MITRE attaches the ID to the root flaw, not to how it's exploited, hence the old ID.
Comment 81•13 years ago
|
||
Juliano was kind enough to point out to me that Opera already used
CVE-2011-3389 - unspecified "low severity issue, as reported by Thai Duong and Juliano Rizzo."
MITRE says since that ID is public, let's use it. CVE-2004-2770 should not be used in any public announcements and will be marked as a duplicate of CVE-2011-3389.
Updated•13 years ago
|
Alias: CVE-2011-3389
Comment 82•13 years ago
|
||
Another solution to this problem, one completely outside of NSS, is to add a new header
to each and every http(s) request: one that precedes all other headers (immediately after
the first line of the request, e.g. GET or POST), is ignored by the server, and contains
ample random data.
Example:
GET /whatever HTTP/1.1
X-BEAST: <16 bytes of random data, encoded in base64>
Host: localhost
User-Agent: Mozilla/5.0 [...]
This could be done with an add-on (IINM).
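If one wanted to generate that header value with NSS itself, here is a rough sketch. PK11_GenerateRandom and NSSBase64_EncodeItem are existing NSS APIs; the function name and the idea of injecting the header are only illustrative, and this assumes NSS has already been initialized:
#include "seccomon.h"
#include "pk11pub.h"
#include "nssb64.h"
/* Returns a heap-allocated base64 string of 16 random bytes, or NULL on
 * failure. The caller frees the result with PORT_Free(). */
static char *
makeRandomHeaderValue(void)
{
    unsigned char rnd[16];
    SECItem item = { siBuffer, rnd, sizeof(rnd) };
    if (PK11_GenerateRandom(rnd, sizeof(rnd)) != SECSuccess)
        return NULL;
    return NSSBase64_EncodeItem(NULL, NULL, 0, &item);
}
The add-on (or whatever injects the header) would then emit "X-BEAST: <value>" ahead of the other request headers.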
Comment 83•13 years ago
|
||
Nelson, indeed, this was my suggestion. But we would have to do the research and see if all servers really ignore extraneous headers. I suspect some might dislike it :-(
This still won't help plug-ins/applets/etc. I think other protocols running on top of SSL/TLS might have other sorts of vulnerabilities as well - LDAPS for example.
Comment 84•13 years ago
|
||
Yet another non-NSS solution is to limit the number of times that a connection can
be used with http(s) keep-alive to a very small number (1 digit). Conceptually, this
could be done for all schemes, e.g. https, ldaps, etc.
Assignee | ||
Updated•13 years ago
|
Attachment #561647 -
Flags: feedback?(agl)
Attachment #561647 -
Flags: feedback?(Xuelei.Su)
Attachment #561647 -
Flags: feedback?
Comment 85•13 years ago
|
||
Comment on attachment 561647 [details] [diff] [review]
v9: Prevent chosen plaintext attack on CBC mode, on by default
I don't understand why the thread-yield code moved, but otherwise LGTM.
Attachment #561647 -
Flags: feedback?(agl) → feedback+
Assignee | ||
Comment 86•13 years ago
|
||
(In reply to Adam Langley from comment #85)
> Comment on attachment 561647 [details] [diff] [review] [diff] [details] [review]
> v9: Prevent chosen plaintext attack on CBC mode, on by default
>
> I don't understand why the thread-yield code moved, but otherwise LGTM.
I moved the yield so that we continue to yield every 16KB as we did before. Previously, ssl3_SendApplicationData called ssl3_SendRecord with 16KB chunks at a time, so we yielded every 16KB; with the patch, it calls ssl3_SendRecord with all the application data at once.
Comment 87•13 years ago
|
||
Comment on attachment 561647 [details] [diff] [review]
v9: Prevent chosen plaintext attack on CBC mode, on by default
I only reviewed the changes to lib/ssl/ssl3con.c, which is
the meat of this patch. I reviewed the changes carefully.
I believe they are correct. I suggest some changes below.
High-level comments:
I think it is useful to get back to the MAX_FRAGMENT_LENGTH
boundary after the second record, rather than staying off by
one. But I won't impose this on you.
Use of TABs is prevalent in files written by Bob and Nelson
originally. I highly recommend that you use TABs in libSSL.
This makes the diffs more pleasant to read. I can fix this
for you.
Below are my comments on the three functions you changed.
1. ssl3_CompressMACEncryptRecord:
>+/* Caller must hold the spec read lock. */
> static SECStatus
> ssl3_CompressMACEncryptRecord(sslSocket * ss,
> SSL3ContentType type,
> const SSL3Opaque * pIn,
>- PRUint32 contentLen)
>+ PRUint32 contentLen,
>+ sslBuffer * wrBuf)
Please document that this function must not use ss->sec.writeBuf
and must use the wrBuf argument instead.
2. ssl3_SendRecord:
> while (nIn > 0) {
>- PRUint32 contentLen = PR_MIN(nIn, MAX_FRAGMENT_LENGTH);
>+ PRUint32 contentLen;
>+ sslBuffer * wrBuf = &ss->sec.writeBuf;
>+ unsigned newSpace;
>+ unsigned numRecords;
Nit: Although wrBuf is only used inside this while loop, initializing
it at the beginning of the function is slightly more efficient.
Please use "unsigned int" instead of just "unsigned" to be
consistent with the rest of the NSS code.
>+ /* We will split the first byte of the record into its own record,
>+ as explained in the documentation for
>+ SSL_ALLOW_CBC_ATTACK_WHEN_SENDING in ssl.h */
Please put '*' at the beginning of the second and third comment lines.
>- if (wrBuf->space < contentLen + SSL3_BUFFER_FUDGE) {
>- PRInt32 newSpace = PR_MAX(wrBuf->space * 2, contentLen);
Just wanted to confirm that you do NOT want the original "grow by twofold"
behavior.
>+ newSpace = contentLen + (numRecords * SSL3_BUFFER_FUDGE);
>+ if (newSpace > wrBuf->space) {
In the new code, the variable name 'spaceNeeded' would be more
appropriate than 'newSpace'.
3. ssl3_SendApplicationData:
I would not bother to change ssl3_SendApplicationData (and move
the thread yield code to ssl3_SendRecord). Is it necessary?
I guess you want to allow the second record to be a full
MAX_FRAGMENT_LENGTH bytes.
With your change, the while loop inside ssl3_SendApplicationData
may no longer be necessary because a single call to
ssl3_SendRecord should be enough -- ssl3_SendRecord has its own
while loop. Note that ssl3_FlushHandshake makes a single call
to ssl3_SendRecord, too.
Assignee | ||
Comment 88•13 years ago
|
||
(In reply to Wan-Teh Chang from comment #87)
> Comment on attachment 561647 [details] [diff] [review] [diff] [details] [review]
> I think it is useful to get back to the MAX_FRAGMENT_LENGTH
> boundary after the second record, rather than staying off by
> one. But I won't impose this on you.
I don't understand what you mean here.
> >- if (wrBuf->space < contentLen + SSL3_BUFFER_FUDGE) {
> >- PRInt32 newSpace = PR_MAX(wrBuf->space * 2, contentLen);
>
> Just wanted to confirm that you do NOT want the original "grow by twofold"
> behavior.
With the current implementation of sslBuffer_Grow, wrBuf->space is going to start at (MAX_FRAGMENT_LENGTH + 2048) bytes, which is greater than MAX_FRAGMENT_LENGTH + (2 * SSL3_BUFFER_FUDGE), so this is effectively dead code. Also, it is never useful to have writeBuf be larger than MAX_FRAGMENT_LENGTH + (2 * SSL3_BUFFER_FUDGE) because anything larger would just result in a bunch of unused space. I will try to make this clearer.
> I would not bother to change ssl3_SendApplicationData (and move
> the thread yield code to ssl3_SendRecord). Is it necessary?
> I guess you want to allow the second record to be a full
> MAX_FRAGMENT_LENGTH bytes.
I believe it is no problem to change this back. I wish I remembered why I thought it was worth doing.
I will also address your other comments this weekend.
Assignee | ||
Comment 89•13 years ago
|
||
Yngve wrote in an email to a private set of recipients. He indicated that keeping split records in the same TCP packet may be useful for improved compatibility, as well as the possibly improved performance that Wan-Teh mentioned.
He also mentioned that for HTTP, having a split other than 1/(n-1) might be better. I wonder if this is because applications expect the entire request method (e.g. "GET ", "POST ", etc.) of the request line to be readable in a single recv(). If so, it might make sense to investigate splitting the records at 5/(n-5) or so. But the effects he observed might also be due to him requesting the root ("GET / "); if that matters, these servers may require the entire Request-URI and/or the first non-space character after the request method. I do not know yet how good an idea it is to change the split from 1/(n-1).
Another possible way to improve compatibility: Do not split the first application_data record sent. AFAICT, it isn't necessary to split the first record sent, because the preceding record is always the Finished handshake message, which is randomized. If we don't split the first application_data record, then this workaround will have no effect for virtually all HTTPS connections that aren't persistent and/or for which we never send a second request.
Also, the receiving end can do extra work to help ensure compatibility with poorly-written apps: Read a record. If it is an application_data record and the length of the plaintext is less than the block size, check whether there is any data pending in the recv() buffer. If so, read that record too. Then, if the second record is also an application_data record, combine the two and return them to the application. If the second record isn't an application_data record, return the first application_data record to the application, leaving the second record in the "pending" buffer. I will suggest this to the OpenSSL team, and I will experiment with implementing it in NSS.
Yngve wrote:
> I tested a total of 5 options:
>
> 0 byte record
> 1 byte record
> 2 byte record
> 6/14 byte record (I call this -2 (blocksize -2))
> 7/15 byte record (I call this -1 (blocksize -1))
>
> Breakage is detected based on the following:
>
> - We have received a valid HTTPS response to an unmodified request
> earlier
> - break occurs if the connection fails for any reason before we get the
> response (there is a 10 second timeout)
> - break occurs if the HTTP response code was 400 or higher, and did NOT
> match the code registered during the first unmodified request
> - The break has to be detected twice, just to be on the safe side.
>
> All of them produce approximately (some variation) the same amount of
> broken sites when block ciphers are used, according to my tests, and none
> of these options produce zero breakage, unfortunately.
>
>
> For zero byte the prober's breakage rate is 0.147% for AES supporting
> servers, 0.008% for 3DES-only
> for one byte 0.124%/0.03%
> For two byte 0.127%/0.029%
> For -1: 0.067%/0.016%
> for -2: 0.66%/0.015%
>
> Failed All: 0.039%/0.006%
>
> Total tested servers: ~610000 (the percentages are for the total, not for
> the cipher group).
>
> There is some curious behavior observed, as a number of servers detected as
> breaking do not break when loaded in a version with the patch. I am
> uncertain about why that is the case.
>
> My tests also revealed that there is a timing issue involved here, as
> well. Specifically, fewer servers break if the client makes sure that the
> two records are transmitted in the same TCP record. The above results are
> for this mode.
Assignee | ||
Comment 90•13 years ago
|
||
Jesse, you changed this bug to [sg:high] from [sg:moderate] when you added the comments about plugins. Since we split the plugin vector into bug 688008, and because everybody's assessment seems to have converged on "does not affect HTTP and WebSockets in Gecko," I restored the original severity rating of [sg:moderate].
Whiteboard: [sg:high] → [sg:moderate?]
Assignee | ||
Comment 92•13 years ago
|
||
The draft paper that we received from Rizzo & Duong has been leaked publicly along with (allegedly) part of the PoC code after they gave their talk. So, I propose we unhide this bug.
Comment 93•13 years ago
|
||
Comment on attachment 561647 [details] [diff] [review]
v9: Prevent chosen plaintext attack on CBC mode, on by default
I only looked at the 1/n-1 split logic. Looks fine to me.
Reporter | ||
Comment 94•13 years ago
|
||
> For -1: 0.067%/0.016%
> for -2: 0.66%/0.015%
Yngve confirmed the -2 result should be 0.066% -- it's a typo, not ten times worse than -1
Reporter | ||
Updated•13 years ago
|
Group: core-security
Comment 95•13 years ago
|
||
(In reply to Brian Smith (:bsmith) from comment #88)
> (In reply to Wan-Teh Chang from comment #87)
> > I think it is useful to get back to the MAX_FRAGMENT_LENGTH
> > boundary after the second record, rather than staying off by
> > one. But I won't impose this on you.
>
> I don't understand what you mean here.
Sorry I wasn't clear.
Let n = MAX_FRAGMENT_LENGTH. When we send a lot of bytes, the
current NSS code sends:
n, n, n, n, ...
Your current patch sends:
1, n, 1, n, 1, n, 1, n, ...
I propose we send, if only the first record needs the protection:
1, n-1, n, n, n, ...
or, if all records need the protection:
1, n-1, 1, n-1, 1, n-1, 1, n-1, ...
What you called a "split" at 1/(n-1) is exactly what I proposed, but
not what your patch actually does.
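To make the pattern concrete, here is a simplified sketch of the send loop I have in mind. This is not the actual libSSL code; sendOneRecord() is a stand-in for whatever MACs, encrypts, and writes a single record, and this variant protects every record:
/* Stand-in for the function that MACs, encrypts, and writes one record. */
extern void sendOneRecord(const unsigned char *p, unsigned int len);
static void
sendWithSplit(const unsigned char *in, unsigned int len,
              unsigned int maxFragment /* n */)
{
    while (len > 0) {
        unsigned int chunk = (len < maxFragment) ? len : maxFragment;
        if (chunk > 1) {
            sendOneRecord(in, 1);              /* the 1-byte record */
            sendOneRecord(in + 1, chunk - 1);  /* the (n-1)-byte record */
        } else {
            sendOneRecord(in, chunk);          /* nothing left to split */
        }
        in  += chunk;
        len -= chunk;
    }
}
This produces the 1, n-1, 1, n-1, ... pattern above.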
Two more comments on your patch:
1. You add a one-byte record before every application data record,
rather than before just the first application data record in a
ssl3_SendApplicationData call. This increases the overhead of
this countermeasure. I just wanted to make sure you deliberately
do this. I guess you want to defend against the scenario you
described in comment 79 (the attacker modifies the input buffer after
some data in the input buffer has been sent).
2. Nit: I suggest naming the SSL option with the shorter name
SSL_CBC_RANDOM_IV.
Comment 96•13 years ago
|
||
Chrome has received the first report of an incompatibility with 1/n-1 record splitting: http://code.google.com/p/chromium/issues/detail?id=98202
The site in question hangs forever when you try to login with the 1/n-1 splitting enabled.
(Yngve has previously provided me with a list of sites which his scanning suggested would have problems but I've been unable to actually trigger any issues until now.)
Comment 97•13 years ago
|
||
Comment on attachment 561647 [details] [diff] [review]
v9: Prevent chosen plaintext attack on CBC mode, on by default
I completed my review of [patch] v9. I've already commented on ssl3con.c.
I found just some nits in the other files.
In lib/ssl/ssl.h:
>+ * execution, and/or by the application setting the option
>+ * SSL_ALLOW_CBC_ATTACK_WHEN_SENDING.
Nit: add "to PR_TRUE" at the end of this sentence.
>+ * record, whereas this hack will add add at least two blocks of overhead
Remove one "add".
>+ * application_data record to every application_data record they send; we do
>+ * not do that because we know some implementations cannot handle empty
>+ * application_data records. Also, we only split application_data records and
>+ * not other types of records, because we know that some implementations
Nit: remove "we know" for brevity.
>+#define SSL_ALLOW_CBC_ATTACK_WHEN_SENDING 23
>+
>+
Nit: remove one blank line.
In lib/ssl/sslsock.c:
>+ ev = getenv("NSS_SSL_ALLOW_CBC_ATTACK_WHEN_SENDING");
>+ if (ev && ev[0] == '1') {
>+ ssl_defaults.allowCBCAttackWhenSending = PR_TRUE;
>+ SSL_TRACE(("SSL: allowCBCAttackWhenSending set to %d",
>+ PR_TRUE));
>+ }
Nit: since PR_TRUE has the value 1, the SSL_TRACE statement can
be simply:
SSL_TRACE(("SSL: allowCBCAttackWhenSending set to 1"));
I think the use of %d makes sense only if the code is like the
code for SSLBYPASS:
ev = getenv("SSLBYPASS");
if (ev && ev[0]) {
ssl_defaults.bypassPKCS11 = (ev[0] == '1');
SSL_TRACE(("SSL: bypass default set to %d", \
ssl_defaults.bypassPKCS11));
}
Assignee | ||
Comment 98•13 years ago
|
||
This patch addresses Wan-Teh's review comments. I tried to fix the space/tab issue but honestly there are so many mixed styles in this file I don't know how much it helps.
I left the wrBuf resize logic the same as the previous version of the patch for the reason I described in my previous comment: it is effectively dead code, and if it wasn't, it would often be wasting memory.
I implemented the split as 1/(n-1) on every record. Microsoft said they are implementing the 1/(n-1) split on every record too. I think it is important for us all to be consistent here. The difference in overhead is minimal, especially for web browsers, considering that many applications always write less than 16KB at a time anyway.
I also mentioned to Microsoft the idea of skipping the 1/(n-1) split for the first application_data record. They said they were (or would be) investigating that.
I also removed the ss parameter from ssl3_CompressMACEncryptRecord to make it clearer that the code never accesses ss->sec.writeBuf.
Attachment #552285 -
Attachment is obsolete: true
Attachment #561647 -
Attachment is obsolete: true
Attachment #562997 -
Flags: review?(wtc)
Attachment #561647 -
Flags: superreview?(rrelyea)
Attachment #561647 -
Flags: review?(wtc)
Attachment #561647 -
Flags: feedback?(Xuelei.Su)
Assignee | ||
Updated•13 years ago
|
Attachment #562997 -
Attachment is patch: true
Assignee | ||
Comment 99•13 years ago
|
||
I also changed the comments in ssl.h so that it doesn't imply that there is only one kind of chosen plaintext attack.
Attachment #562997 -
Attachment is obsolete: true
Attachment #562999 -
Flags: review?(wtc)
Attachment #562997 -
Flags: review?(wtc)
Comment 100•13 years ago
|
||
Comment on attachment 562999 [details] [diff] [review]
v10: Prevent chosen plaintext attacks on CBC mode, on by default
r=wtc. This patch has a bug and some nits. Please attach a new
patch when you check this in. Thanks.
All the comments below marked with "BUG" describe the same bug.
1. security/nss/lib/ssl/ssl.h
>+ * This protection mechanism is on by default; the default can be overridden by
>+ * setting NSS_SSL_CBC_RANDOM_IV=1 in the environment prior to execution,
>+ * and/or by the application setting the option SSL_CBC_RANDOM_IV to PR_TRUE.
BUG: SSL_CBC_RANDOM_IV has the opposite meaning of
SSL_ALLOW_CBC_ATTACK_WHEN_SENDING. So this paragraph is wrong.
If you name the option SSL_CBC_RANDOM_IV and want the protection on by default,
then this paragraph should say "... setting NSS_SSL_CBC_RANDOM_IV=0 ...
... setting the option SSL_CBC_RANDOM_IV to PR_FALSE."
2. security/nss/lib/ssl/ssl3con.c
>+/* Caller must hold the spec read lock. wrBuf is sometimes, but not always,
>+ * ss->sec.writeBuf.
>+ */
Nit: I suggest removing "wrBuf is sometimes, but not always, ss->sec.writeBuf."
because that fact is irrelevant to this function.
>+ if (nIn > 1 && !ss->opt.cbcRandomIV &&
BUG: remove the '!' before ss->opt.cbcRandomIV.
>+ if (rv == SECSuccess) {
>+ PRINT_BUF(50, (ss, "send (encrypted) record data [1/1]:",
>+ wrBuf->buf, wrBuf->len));
> }
Nit: we may want to omit "[1/1]" in this case. It may not make sense without
the "[1/2]" and "[2/2]" in the same SSL trace.
3. security/nss/lib/ssl/sslsock.c
>+ PR_FALSE /* cbcRandomIV */
BUG: If you want the protection on by default, then the default value
of cbcRandomIV should be PR_TRUE.
Nit: In SSL_OptionGet:
> case SSL_ENABLE_FALSE_START: on = ss->opt.enableFalseStart; break;
>+ case SSL_CBC_RANDOM_IV:
>+ on = ss->opt.cbcRandomIV;
>+ break;
and in SSL_OptionGetDefault:
> case SSL_ENABLE_FALSE_START: on = ssl_defaults.enableFalseStart; break;
>+ case SSL_CBC_RANDOM_IV:
>+ on = ssl_defaults.cbcRandomIV;
>+ break;
you can write the SSL_CBC_RANDOM_IV case on a single line as the
other cases.
>+ ev = getenv("NSS_SSL_CBC_RANDOM_IV");
>+ if (ev && ev[0] == '1') {
>+ ssl_defaults.cbcRandomIV = PR_TRUE;
>+ SSL_TRACE(("SSL: cbcRandomIV set to 1"));
>+ }
BUG: if the protection is on by default, then we need to change 1 to 0
and PR_TRUE to PR_FALSE, or you can imitate the SSLBYPASS code.
Attachment #562999 -
Flags: review?(wtc) → review+
Comment 101•13 years ago
|
||
Comment on attachment 562999 [details] [diff] [review]
v10: Prevent chosen plaintext attacks on CBC mode, on by default
One more nit in ssl3con.c:
>+/* Caller must hold the spec read lock. wrBuf is sometimes, but not always,
>+ * ss->sec.writeBuf.
>+ */
If you keep the second sentence, please add '&' before ss->sec.writeBuf.
Comment 102•13 years ago
|
||
Comment on attachment 562999 [details] [diff] [review]
v10: Prevent chosen plaintext attacks on CBC mode, on by default
There is a compilation error in ssl3con.c:
> SSL_DBG(("%d: SSL3[%d]: SendRecord, tried to get %d bytes",
> SSL_GETPID(), ss->fd, newSpace));
Change 'newSpace' to 'spaceNeeded'.
Comment 103•13 years ago
|
||
Comment on attachment 562999 [details] [diff] [review]
v10: Prevent chosen plaintext attacks on CBC mode, on by default
In security/nss/lib/ssl/ssl3con.c:
>+ goto spec_locked_loser; /* sslBuffer_Grow set a memory error code. */
Nit: this line is longer than 80 characters. I suggest moving the
comment to the previous line.
(In reply to Brian Smith (:bsmith) from comment #98)
>
> I also mentioned to Microsoft the idea of skipping the 1/(n-1) split for the
> first application_data record. They said they were (or would be)
> investigating that.
I tested this and found it doesn't always help. I guess the use
of HTTP keep-alive connections makes it less effective.
Since we want to help websites fix their bugs, we should make the
failure as deterministic as possible. So I no longer think it's
a good idea to skip the 1/(n-1) split for the first application_data
record.
Updated•13 years ago
|
See Also: → CVE-2011-3389
Updated•13 years ago
|
See Also: CVE-2011-3389 →
Comment 105•13 years ago
|
||
Wan-Teh,
I think it's worth noting that one way to defeat this attack on the server side for HTTPS is to limit the number of HTTP Keep-Alive requests per TCP connection, for servers that have such a setting (Apache does). This works because when the client is forced to reconnect, there is a new TLS handshake, and even an abbreviated handshake defeats the attack.
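For example, with Apache httpd (the directive names are real; the specific values are only an illustration):
# httpd.conf -- illustrative values only
KeepAlive On
MaxKeepAliveRequests 8
KeepAliveTimeout 5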
Another more complex workaround would be for the server to force a renegotiation every n HTTP requests. But due to unrelated attacks, some clients that don't support the renegotiation extension may have disabled renegotiation.
Assignee | ||
Comment 106•13 years ago
|
||
In Thai's blog post, he specifically noted "In fact, no packet from BEAST has ever been sent to any servers." I think that the TLS handshake is completed as normal (reaching the server, and the server responding), but no application_data records are ever sent/received to/from the server; instead they are all discarded by the MITM.
Assignee | ||
Comment 107•13 years ago
|
||
Attachment #562999 -
Attachment is obsolete: true
Attachment #563777 -
Flags: superreview?(rrelyea)
Assignee | ||
Updated•13 years ago
|
Attachment #563777 -
Attachment is patch: true
Comment 108•13 years ago
|
||
Comment on attachment 563777 [details] [diff] [review]
v11: Prevent chosen plaintext attacks on CBC mode, on by default
r=wtc.
Attachment #563777 -
Flags: review+
Comment 109•13 years ago
|
||
Comment on attachment 563777 [details] [diff] [review]
v11: Prevent chosen plaintext attacks on CBC mode, on by default
r+
One unadvertised semantic change this patch makes is to the allocation code. The new code is simpler, but is subject to performance hits in the pathological case where the caller makes write calls with monotonically increasing content sizes (the worst case is where the increase is exactly 1). The old code does exponential jumps in this case.
On the other hand, the new code is not subject to performance hits where the caller uses a small initial buffer and switches to a larger buffer which is much less than twice the size of the smaller initial buffer.
I'm not sure how common these cases are. The smaller memory usage is likely a small win for a browser. I think servers are the only ones that would really notice a difference in either case.
(No change made, just documenting the semantic change)
bob
Attachment #563777 -
Flags: superreview?(rrelyea) → superreview+
Assignee | ||
Comment 110•13 years ago
|
||
Robert, I tried to explain why that is a non-issue in comment 88; the buffer always starts out larger than necessary and we'll never actually grow it, AFAICT. If we ever change the initial allocation strategy for this buffer and/or the code for sslBuffer_Grow, then we will have to revisit it.
Comment 111•13 years ago
|
||
Assignee | ||
Comment 112•13 years ago
|
||
Diffs: http://bonsai.mozilla.org/cvsview2.cgi?command=DIRECTORY&subdir=mozilla/security/nss/lib/ssl&files=ssl.h:ssl3con.c:sslimpl.h:sslsock.c&root=/cvsroot
Changes committed by bsmith%mozilla.com:
Update of /cvsroot/mozilla/security/nss/lib/ssl
In directory dm-cvs01:/tmp/cvs-serv21231/security/nss/lib/ssl
Modified Files:
ssl.h ssl3con.c sslimpl.h sslsock.c
Log Message:
Bug 665814: Prevent chosen plaintext attacks on SSL 3.0 and TLS 1.0 connections, r=wtc, sr=rrelyea
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Comment 113•13 years ago
|
||
> Robert, I tried to explain why that is a non-issue in comment 88; the buffer
> always starts out larger than necessary and we'll never actually grow it,
Ah, OK. Then there isn't even a semantic issue.
bob
Assignee | ||
Updated•13 years ago
|
Summary: Rizzo/Duong chosen plaintext attack on SSL/TLS 1.0 (facilitated by websockets -76) → Rizzo/Duong chosen plaintext attack (BEAST) on SSL/TLS 1.0 (facilitated by websockets -76)
Comment 114•13 years ago
|
||
Is 3.13 the correct target milestone? Don't we want to land this patch in 3.12.11 as well?
Assignee | ||
Comment 115•13 years ago
|
||
AFAIK, we decided to stop maintaining 3.12.*.
Comment 116•13 years ago
|
||
Then it would be nice to announce this on the crypto devel list for the benefit of the maintainers at downstream distributions. For various reasons some may not be able to upgrade to 3.13 right away. The patch is very localized and is easy to backport to 3.12.11.
Assignee | ||
Comment 117•13 years ago
|
||
I have sent email to dev-tech-crypto about this.
Comment 118•13 years ago
|
||
A quick hack that would protect a lot of servers/clients.
In relation to Cipher Suites for SSL/TLS:
IF the server has no preference for the order of cipher suites AND supports RC4, use RC4,
as RC4 is a stream cipher and therefore not vulnerable to this attack.
Patent Pending btw, so if you use it I expect $1 from Mozilla each year to be donated to Mozilla. ;)
I have no stats to back me up, only my observations. Grab some sites, even ones you own, and put them through https://www.ssllabs.com
look for "no preference for the order of cipher suites"
Remember the server hasn't specified what cipher order to use.
Comment 119•13 years ago
|
||
(In reply to Steven Roddis from comment #118)
> A quick hack that would protect a lot of servers/clients.
>
> In relation to Cipher Suites for SSL/TLS:
>
> IF server has no preference for the order of cipher suites AND supports RC4
> use RC4
We have no influence on the choice of cipher because the TLS connection is not made by the browser but by Java for this attack. Java has its own (vulnerable) TLS stack that we cannot patch.
Comment 120•13 years ago
|
||
Ah yes, now my memory kicks in: Firefox can't be used to do the chosen-plaintext part (excuse me, it's early morning and I haven't slept).
However what about websockets?
Comment 121•13 years ago
|
||
Modern Websockets includes masking, which should defend against this.
Comment 122•13 years ago
|
||
(In reply to Elio Maldonado from comment #116)
> Then it would be nice to inform this in the crypto devel list for the
> benefit of the maintainers at downstream distributions. For various reasons
> some may not be able upgrade to 3.13 right away. The patch is very localized
> and is easy to backport to 3.12.11.
We do indeed want this at Oracle. I think 3.12.11 is already released, so we are talking about 3.12.12.
Comment 124•13 years ago
|
||
Patch v11 introduced a regression with some SSL servers, see bug 702111.
Blocks: 702111
Updated•13 years ago
|
Comment 125•13 years ago
|
||
(In reply to Brian Smith (:bsmith) from comment #112)
> Diffs:
> http://bonsai.mozilla.org/cvsview2.cgi?command=DIRECTORY&subdir=mozilla/
> security/nss/lib/ssl&files=ssl.h:ssl3con.c:sslimpl.h:sslsock.c&root=/cvsroot
>
> Changes committed by bsmith%mozilla.com:
>
> Update of /cvsroot/mozilla/security/nss/lib/ssl
> In directory dm-cvs01:/tmp/cvs-serv21231/security/nss/lib/ssl
>
> Modified Files:
> ssl.h ssl3con.c sslimpl.h sslsock.c
> Log Message:
> Bug 665814: Prevent chosen plaintext attacks on SSL 3.0 and TLS 1.0
> connections, r=wtc, sr=rrelyea
This commit seems to break libcurl:
https://bugzilla.redhat.com/760060
Assignee | ||
Comment 126•13 years ago
|
||
Kamil, try setting NSS_SSL_CBC_RANDOM_IV=0 in the environment prior to running your tests. If they start passing, then the peer of your tests is incompatible with 1/(n-1) record splitting and should be fixed.
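For example (POSIX shell; the test-runner name here is just a placeholder):
NSS_SSL_CBC_RANDOM_IV=0 ./run-curl-tests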
If the test still fails even with NSS_SSL_CBC_RANDOM_IV=0, then please report back.
Comment 127•13 years ago
|
||
Thanks Brian, with NSS_SSL_CBC_RANDOM_IV=0 the tests start passing. The peer is stunnel, which uses OpenSSL. Please provide details on what exactly should be fixed in stunnel and how.
Assignee | ||
Comment 128•13 years ago
|
||
(In reply to Kamil Dudka from comment #127)
> Thanks Brian, with NSS_SSL_CBC_RANDOM_IV=0 the tests start passing. The
> peer is stunnel, which uses OpenSSL. Please provide details on what exactly
> should be fixed in stunnel and how.
It is probably not stunnel that needs to be fixed, but rather the application on the other side of stunnel. For example, let's say the program is doing this (pseudocode):
recv(infile, buffer, 4);
if (memcmp(buffer, "GET ", 4)) {
fail();
}
This code wrongly assumes that recv(infile, buffer, 4) will return 4 bytes. But recv() may return any number of bytes up to 4, because the length parameter given to recv() is the *maximum* number of bytes it is allowed to return in that call. Instead, the code has to call recv() in a loop until it has received four bytes.
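A minimal sketch of the kind of loop I mean (the function name is made up, and real code also needs to handle EINTR and timeouts):
#include <stddef.h>
#include <sys/socket.h>
/* Keep calling recv() until exactly 'want' bytes have arrived. */
static int
read_exactly(int fd, unsigned char *buf, size_t want)
{
    size_t got = 0;
    while (got < want) {
        ssize_t n = recv(fd, buf + got, want - got, 0);
        if (n <= 0)
            return -1;     /* error, or the peer closed the connection */
        got += (size_t)n;  /* with 1/(n-1) splitting this may be 1 byte at a time */
    }
    return 0;
}
In the example above, that means calling read_exactly(infile, buffer, 4) before the memcmp(buffer, "GET ", 4) check.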
There are probably many applications that need to be fixed like this.
One way to work around the problem would be to patch OpenSSL: After it has decrypted/authenticated an application_data record, check if that record contains zero or one bytes of plaintext. If it contains more than one byte of plaintext, return the data to the application. If the record does NOT contain more than one byte of plaintext, then see if there is any more data available in the recv buffer--e.g. using recv(MSG_PEEK). If there is any data available in the recv buffer, then process the next record, and then return the plaintext from both records (concatenated together) to the application. If there is no more data available in the recv buffer, and the record just read contained just a single byte, then you must return just the single byte.
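A rough sketch of just the "is more data pending?" check; this is not OpenSSL code, and MSG_DONTWAIT assumes a Linux-style sockets API (a non-blocking socket works equally well):
#include <sys/socket.h>
/* After decrypting a record with <= 1 byte of plaintext, peek at the socket.
 * If this returns nonzero, process the next record too and return both
 * plaintext fragments to the application together; otherwise return the
 * short record as-is. */
static int
more_record_data_pending(int fd)
{
    char peeked;
    return recv(fd, &peeked, 1, MSG_PEEK | MSG_DONTWAIT) > 0;
}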
AFAICT, this should work about as well as any general solution to the problem, because implementations that do the 1/(n-1) record split try to pack both records into the same network packet; at least NSS does, though I don't know about other implementations.
Comment 129•13 years ago
|
||
(In reply to Brian Smith (:bsmith) from comment #128)
> It is probably not stunnel that needs fixed, but rather the application on
> the other side of stunnel.
Brian, you are right. The fix will go to the testing infrastructure of curl:
http://thread.gmane.org/gmane.comp.web.curl.library/34261
> There are probably many applications that need to be fixed like this.
Yes, I spotted two additional independent bug reports triggered by this security fix:
https://bugzilla.redhat.com/770419
https://bugzilla.redhat.com/770682
> One way to work around the problem would be to patch openssl
Would such a workaround fix the Microsoft Communicator server?
Comment 130•13 years ago
|
||
> One way to work around the problem would be to patch openssl:
See, http://rt.openssl.org/Ticket/Display.html?id=263
Comment 131•13 years ago
|
||
The trailing digit is missing in the link above I guess. This appears to be the correct link:
http://rt.openssl.org/Ticket/Display.html?id=2635
Comment 132•13 years ago
|
||
(In reply to Kamil Dudka from comment #129)
> Would such a workaround fix the Microsoft Communicator server?
I've raised this issue with a Microsoft Manager responsible for the Lync platform. Let's see how they react.
Comment 133•13 years ago
|
||
We are seeing a regression where certain 3rd party clients can no longer communicate with an NSS-based server after this fix was applied. The problem is only seen with CBC ciphers.
Comment 134•13 years ago
|
||
Yes, the number of incorrectly coded clients is quite staggering. For servers you may want to switch the default to off. You can also set the environment variable to off as well. (This patch only applies to CBC ciphers.) Ping me offline and I can get you some patches that could help you out.
bob
Updated•6 months ago
|
Keywords: reporter-external