Closed
Bug 1138554
(CVE-2015-4000)
Opened 10 years ago
Closed 10 years ago
NSS accepts export-length DHE keys with regular DHE cipher suites ("Logjam")
Categories
(NSS :: Libraries, defect)
NSS
Libraries
Tracking
(firefox38 wontfix, firefox38.0.5 wontfix, firefox39 verified, firefox40 verified, firefox41 verified, firefox-esr3139+ verified, firefox-esr3839+ verified, b2g-v2.0 fixed, b2g-v2.0M fixed, b2g-v2.1 fixed, b2g-v2.1S fixed, b2g-v2.2 fixed, relnote-firefox 39+, b2g-master fixed)
RESOLVED
FIXED
3.19.1
Flag | Tracking | Status |
---|---|---|
firefox38 | --- | wontfix |
firefox38.0.5 | --- | wontfix |
firefox39 | --- | verified |
firefox40 | --- | verified |
firefox41 | --- | verified |
firefox-esr31 | 39+ | verified |
firefox-esr38 | 39+ | verified |
b2g-v2.0 | --- | fixed |
b2g-v2.0M | --- | fixed |
b2g-v2.1 | --- | fixed |
b2g-v2.1S | --- | fixed |
b2g-v2.2 | --- | fixed |
relnote-firefox | --- | 39+ |
b2g-master | --- | fixed |
People
(Reporter: ekr, Assigned: mt)
References
(Depends on 1 open bug)
Details
(Keywords: dev-doc-needed, sec-moderate, site-compat, Whiteboard: [adv-main39+][adv-esr38.1+][adv-esr31.8+] Embargo until multi-vendor coordinated info release (May 19))
Attachments
(1 file, 10 obsolete files)
15.23 KB, patch | wtc: review+
No description provided.
Reporter
Comment 1•10 years ago
The following issue needs to be externally coordinated.
Matthew Green informed me this morning of a potential issue in libssl
and DHE processing. The scenario is a TLS server which is configured
for export DHE modes, such as:
TLS_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA
as well as stronger modes.
In this scenario, the server is supposed to use a 512-bit DHE key.
If the server uses a static key rather than a fresh key for each
connection, then it is potentially possible for an attacker to
break the key. At that point he modifies the client's ClientHello
to only offer the export DHE mode, causing the server to return
a valid ServerKeyExchange with that key. The attacker then computes
the master secret and assumes the rest of the connection.
Green suggests that NSS should address this by increasing the
minimum DHE key size. See:
https://dxr.mozilla.org/mozilla-central/source/security/nss/lib/ssl/ssl3con.c#6734
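A rough sketch of the kind of client-side check being suggested, in the DHE ServerKeyExchange handling; the helper and constant names are the ones the patches later in this bug introduce, and the error code is a placeholder:

/* Sketch only: reject weak DHE primes while parsing ServerKeyExchange.
 * SECKEY_BigIntegerBitLength() and SECKEY_MIN_MODP_PRIME_BITS come from the
 * patches attached later in this bug; SSL_ERROR_WEAK_DH_KEY is a placeholder
 * for whatever error code the patch defines. */
if (SECKEY_BigIntegerBitLength(&dh_p) < SECKEY_MIN_MODP_PRIME_BITS) {
    errCode = SSL_ERROR_WEAK_DH_KEY;   /* placeholder name */
    desc = insufficient_security;      /* TLS 1.0+ alert */
    goto alert_loser;
}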
Reporter
Updated•10 years ago
Summary: NSS needs to check length of DHE keys → NSS accepts export-length DHE keys with regular DHE cipher suites
Assignee
Comment 2•10 years ago
I have a second fix: delete all of that nasty, dangerous export rubbish. It has been off by default for ages already.
I'm not sure about the purported attack: Finished should prevent the downgrade. And if peers are willing to negotiate export strength suites, then they get what they deserve.
Reporter
Comment 3•10 years ago
I do not believe that Finished will detect the downgrade because the
attacker knows the MS.
Comment 4•10 years ago
Possibly related: bug 308724? See also the discussion in bug 587234.
If I'm looking at the right category we have very few connections at < 1024 bits
http://telemetry.mozilla.org/#filter=beta%2F37%2FSSL_KEA_DHE_KEY_SIZE_FULL%2Fsaved_session%2FFirefox&aggregates=multiselect-all!Submissions&evoOver=Builds&locked=true&sanitize=true&renderhistogram=Graph
There's recent work (December) trying to block connections with fewer bits in bug 587407
Comment 5•10 years ago
Is there any way the server you're talking to is doing this accidentally? Seems like something you'd do in a deliberate attempt to make traffic exposed, and if the server is complicit they could just take the data and ship it out the back end. Going with "moderate" severity for now, but if you think this is a realistic attack scenario then maybe "high" is better.
Keywords: sec-moderate
Whiteboard: Embargo until multi-vendor coordinated info release
Assignee
Comment 6•10 years ago
(In reply to Eric Rescorla (:ekr) from comment #3)
> I do not believe that Finished will detect the downgrade because the
> attacker knows the MS.
Ahh, I didn't get that bit. Is this right?
1. The client sends a regular HS with a DHE suite.
2. The MitM switches out all the ciphers for an export suite.
3. The server sends a pathetic DH share (<512 bit). And signs it.
4. The MitM recovers the server's private value based on the share. (Leap of faith required here, but at 512 bits, that's not especially big.)
5. The MitM constructs a ServerHello with the weak DHE share (matching it to one of the client's DHE suites).
6. The MitM can now impersonate both peers toward each other and construct valid Finished messages.
From step 3, is it the case that the server can now be impersonated on any connection where the client offers a DHE suite? That would make step 4 much more plausible.
Landing bug 587407 seems like the right thing here.
Comment 7•10 years ago
> From step 3, is it the case that the server can now be impersonated on any
> connection where the client offers a DHE suite? That would make step 4 much
> more plausible.
This is exactly the attack. Two more data points:
1. Most Apache servers generate the DHE export key once at startup, and use the same key for the entire session. I checked the mod_ssl source to verify this.
2. We can factor a 512-bit RSA key in 8 hours for $75 on EC2. This is based on recent, real efforts. Discrete log should be approximately the same effort, but we haven't verified this.
I don't know how widespread these export-grade DHE ciphers are. Going to get someone to run a scan. In a minute or two of searching I found at least one on SSL Labs:
https://www.ssllabs.com/ssltest/analyze.html?d=partnerapi.iassist.com
Matt
Assignee
Comment 8•10 years ago
Might want to open this up now: http://arstechnica.com/security/2015/03/freak-flaw-in-android-and-apple-devices-cripples-https-crypto-protection/
Reporter
Comment 9•10 years ago
I believe that does not describe the issue in NSS. Please check with Matt Green first.
Comment 10•10 years ago
> Landing bug 587407 seems like the right thing here.
+1, especially given that there's already a WIP patch.
Comment 11•10 years ago
The bug that was disclosed today affects RSA ciphersuites only. NSS is not vulnerable to that particular vector (dodged it by a whisker). The DHE thing is a completely different issue that won't be affected by any existing client-side patch.
The best you can hope for is that Akamai wipes out all of its export ciphersuites (both RSA and DHE), but there's no guarantee this will happen. A client-side patch would be a very good idea.
Assignee
Comment 12•10 years ago
OK, let's land the fix in bug 587407 before we open this up. Arguably, this is even more serious than FREAK.
Comment 13•10 years ago
Is the proposal in 587407 to set the minimum DH group size to 1024?
Updated•10 years ago
See Also: → CVE-2015-2721
Assignee
Comment 14•10 years ago
After looking at bug 587407 and discussing this with the NSS team, I think that we should avoid those changes and provide a hard limit in NSS. The stated plan was to limit at 1024...ish. We will need to check whether this is 1024, or whether we need to make this slightly less to avoid excluding too many sites.
Comment 15•10 years ago
I suggest first increasing the limit to 768 bits, because that fixes the export cipher suite problem and 768 bits is the size used by many Java servers. Then uplift that change and then investigate making the minimum 1024, knowing that there will be a higher compatibility impact that will likely require additional work and/or delay to address.
Comment 16•10 years ago
Some data points for comparison:
- Windows 10 has set 1024 as the minimum size for DH groups.
This limit is being backported to older versions of Windows.
- In our scan of the Alexa top 1M sites, only 773 (0.2%) use 768-bit primes, and 118 used 512-bit primes.
In the full IP space the number of 768-bit primes was higher (about 2.6%).
- Most of the sites offering short primes seem to prefer RSA ciphersuites and hence would pick
those over DHE anyway.
In summary, based on public servers on the web, we do not see many reasons for accepting <1024-bit groups.
Maybe there are other servers out there that are of concern for this?
Comment 17•10 years ago
(In reply to Karthikeyan Bhargavan from comment #16)
> - In our scan of the Alexa top 1M sites, only 773 (0.2%) uses 768-bit
> primes, and 118 used 512-bit primes.
> In the full IP space the number of 768-bit primes was higher (about 2.6%)
TLS 1.2 intolerance was less than 1% after SSLv3 was disabled, IIRC. So the compatibility risk and approximate amount of work required to have a 1024 bit limit could be as high as 3x the associated risk and cost of disabling non-secure version fallback.
It is good that Windows 10 enforces a higher minimum. However, Windows 8.1 and lower only offer DHE key exchange for AES-GCM and DSS cipher suites, so the compatibility risk is probably much lower for Microsoft. And, of course, Windows 10 hasn't been released and those backports haven't happened yet.
Comment 18•10 years ago
(In reply to Karthikeyan Bhargavan from comment #16)
> - In our scan of the Alexa top 1M sites, only 773 (0.2%) uses 768-bit
> primes, and 118 used 512-bit primes.
> In the full IP space the number of 768-bit primes was higher (about 2.6%)
See also [1], which measures the DHE key size distribution that Firefox uses. The key is at [2]. [3] tells you the percentage of time that DHE key exchange is used in Firefox (about 5% of the time). These numbers give an alternative (more optimistic) estimate of the compatibility impact of enforcing a 1024-bit minimum.
[1] http://telemetry.mozilla.org/#filter=release%2F36%2FSSL_KEA_DHE_KEY_SIZE_FULL
[2] https://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/src/nsNSSCallbacks.cpp?rev=7badc767cc25#1029
[3] http://telemetry.mozilla.org/#filter=release%2F36%2FSSL_KEY_EXCHANGE_ALGORITHM_FULL
Comment 19•10 years ago
Thanks, these numbers are quite interesting.
I should clarify that our concern about short groups is two-fold:
1) Groups <= 512 can be broken right now with academic processing power
2) Most groups > 512 bits being used out there are really old.
The vast majority use Apache hard-coded groups from 15-20 years ago.
I believe the Java and IIS 768-bit groups are also hard-coded.
This has given some adversaries more than a decade to break them.
The second point above is more worrisome, and cannot be fixed just at the client.
Still, raising to 1024 may help stave off the attacks for longer.
None of this is new of course and I don't want to preach to the choir.
I agree that raising the limit from 512-bits is a no-brainer, and should be done asap, even if just to 768.
Comment 20•10 years ago
(In reply to Karthikeyan Bhargavan from comment #19)
> The second point above is more worrisome, and cannot be fixed just at the
> client.
> Still, raising to 1024 may help stave off the attacks for longer.
>
> None of this is new of course and I don't want to preach to the choir.
> I agree that raising the limit from 512-bits is a no-brainer, and should be
> done asap, even if just to 768.
I agree. In fact, the TLS design is just completely broken here, because the DHE key is effectively authenticating itself. That means that a secure minimum size for the DHE key is whatever the minimum size we require for authentication, which is going to be *2048* bits after bug 1137484 lands. And, almost nobody doing DHE is using 2048-bit DHE keys.
Note that this has interesting implications for the negotiated FF-DHE extension that the TLS WG is working on. A few months ago I already alluded to this issue on the TLS WG mailing list, where I suggested that the client's FF-DHE extension should be signed along with the DHE parameters. It would be great if you could contribute to that discussion by sharing your research on the mailing list soon.
Comment 21•10 years ago
(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #20)
> Note that this has interesting implications for the negotiated FF-DHE
> extension that the TLS WG is working on. A few months ago I already alluded
> to this issue on the TLS WG mailing list, where I suggested that the
> client's FF-DHE extension should be signed along with the DHE parameters. It
> would be great if you could contribute to that discussion by sharing your
> research on the mailing list soon.
Indeed it would be great if the core idea of the TLS 1.3 server signature design were to be
backported into earlier versions of TLS. E.g. signing a hash of the handshake
into the ServerKeyExchange would solve a whole bunch of downgrade issues (including DHE_EXPORT).
Assignee
Comment 22•10 years ago
Based on this discussion and the telemetry numbers (I tallied just over 1% for DHE suites, which are again 5% of the total, so 0.05%), I've been convinced that raising to 1024 is safe.
I would like to go higher, but I don't see a way of doing that. Our options for the attack at 1024 are: raise to 2048 (and lose virtually the entire 5% of DHE servers), or implement the extended master secret (in progress, but I can't see that being implemented by affected servers).
Based on the telemetry numbers, the bulk of the bit sizes are in the 1025-1279 bucket. Rejecting everything <=1023 seems tolerable based on that.
Assignee
Comment 23•10 years ago
http://codereview.appspot.com/215460043 - access on request.
Assignee
Comment 24•10 years ago
Comment 25•10 years ago
Comment on attachment 8580961 [details] [diff] [review]
bug1138554.patch
Review of attachment 8580961 [details] [diff] [review]:
-----------------------------------------------------------------
r=wtc. I wrote more comments at https://codereview.appspot.com/215460043/
Attachment #8580961 -
Flags: review?(wtc) → review+
Assignee
Comment 26•10 years ago
So I got a nominal OK here, but something still bothers me about this. I think it was perhaps Wan-Teh's old patch (which attempted to place limits on the RSA modulus in the same way) that caused me to rethink it.
I looked at the code all the way down to the point that it hits the bignum code in freebl/mpi.c. Nowhere on that chain can I find a place where we reject primes with a large number of leading zeros in their encoding. The only exception is that SECKEY_PublicKeyStrength docks one byte (8 bits) from the strength if the first byte is zero, which is hardly worth much. If you follow the chain to its conclusion, mp_read_unsigned_octets() is happy to ignore leading zero bytes.
I take that to mean that - as currently specified - it would be trivial for an attacker to zero-pad any prime in order to make it pass our superficial test of strength. I think that we should at least make a passing attempt at doing better. It's not like we can test that the value is a safe prime (at least not in any reasonable time), but if we want to protect against this attack, we should at least check if it is plausibly long. New patch inbound.
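A minimal stand-alone sketch of the concern (the helper below is an illustration, not the NSS function): a check that looks only at the encoded length is inflated by zero-padding, whereas counting from the first set bit is not.

#include <assert.h>
#include <stddef.h>

/* Illustration only: count significant bits of a big-endian integer,
 * skipping any leading zero-padding bytes. */
static unsigned big_int_bits(const unsigned char *p, size_t octets)
{
    unsigned bits;
    while (octets > 0 && !*p) {   /* check the length first; never read past the end */
        ++p;
        --octets;
    }
    if (octets == 0)
        return 0;
    for (bits = 7; bits > 0; --bits) {
        if (*p & (1 << bits))
            break;
    }
    return (unsigned)octets * 8 + bits - 7;
}

int main(void)
{
    /* A 512-bit prime zero-padded out to 128 octets: a check that only looks
     * at the encoded length would call this "1024 bits"; counting from the
     * first set bit still reports 512. */
    unsigned char padded[128] = { 0 };
    padded[64] = 0x80;
    assert(big_int_bits(padded, sizeof padded) == 512);
    assert(big_int_bits((const unsigned char *)"\x01", 1) == 1);
    return 0;
}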
I also want to check that we are checking the RSA modulus size properly before we open this up. If someone can confirm my thinking, it might be worth raising this point with other implementations.
Matt, who might I need to coordinate with on this?
Flags: needinfo?(matthewdgreen)
Assignee
Comment 27•10 years ago
Wan-Teh, I looked at https://bugzilla.mozilla.org/page.cgi?id=splinter.html&bug=587234&attachment=468110 as you suggested. I'm pretty sure that your check for the RSA modulus size is in dead code. The ServerKeyExchange message is not sent for an RSA key exchange, see https://tools.ietf.org/html/rfc5246#section-7.4.3
Also, I don't think we need to have a bit-size check for ECC suites; disabling groups is probably the right way to deal with that and our weakest supported is currently 163, which is (by most accounts) stronger than 1024-bit DHE or RSA.
I think that I've got all the necessary parts in place now, but this is a much bigger change than originally thought.
Assignee
Comment 28•10 years ago
This is a significantly more scary patch.
Also, I've added a check on the size of the RSA modulus in the server certificate. This has the effect of killing off export cipher suites, so we'll want to discuss that on the list.
Attachment #8580961 -
Attachment is obsolete: true
Attachment #8581044 -
Flags: review?(wtc)
Assignee
Comment 29•10 years ago
I found the other bug that relates to the size calculation: bug 623658.
Comment 30•10 years ago
> I also want to check that we are checking the RSA modulus size properly
> before we open this up. If someone can confirm my thinking, it might be
> worth raising this point with other implementations.
>
> Matt, who might I need to coordinate with on this?
It should be hard for an MITM attacker to pad the primes, since fortunately those values are properly signed. A bad server could issue a small prime, but a bad server is probably already outside of your threat model.
Can you clarify your request? Let me know what you want to check and I'll make sure the right people get involved.
Flags: needinfo?(matthewdgreen)
Assignee
Comment 31•10 years ago
The signature is the piece of information I'd overlooked. No need to follow up further, thanks.
Comment 32•10 years ago
Comment on attachment 8581044 [details] [diff] [review]
bug1138554.patch
Review of attachment 8581044 [details] [diff] [review]:
-----------------------------------------------------------------
r=wtc. I suggest some changes, some of which may be important.
::: lib/cryptohi/keyhi.h
@@ +38,5 @@
> SECKEY_UpdateCertPQG(CERTCertificate * subjectCert);
>
> +/* This sets a limit on the DH or RSA modulus that we are OK with accepting. */
> +#ifndef SECKEY_MIN_MODP_PRIME_BITS
> +#define SECKEY_MIN_MODP_PRIME_BITS 1017
1. This macro definition probably should be moved to keythi.h. The "t" in the header file's name stands for "type".
2. We should not need to protect redefinition, i.e., the "#ifndef SECKEY_MIN_MODP_PRIME_BITS" seems unnecessary.
3. Since 1017 is not a power of 2 that computer programmers are used to, it would be nice to cite the source of this minimum.
@@ +45,5 @@
> +/*
> +** Return the number of bits in the provided number. This assumes that the
> +** SECItem contains a big-endian number and counts from the first non-zero bit.
> +*/
> +extern unsigned SECKEY_ModulusLengthBits(const SECItem *number);
I am surprised NSS doesn't yet have such a function. This seems to imply the existing key size/strength functions are deficient. If so, please add a comment to document how this new function is better than existing key size/strength functions.
Or is the input parameter type the only difference? In that case, perhaps you can rename the function to be more general, such as "SECKEY_BigIntegerBits".
::: lib/cryptohi/seckey.c
@@ +949,5 @@
> + unsigned octets;
> + unsigned bits;
> +
> + if (!number || !number->data) {
> + PORT_SetError(SEC_ERROR_INVALID_KEY);
Should we also check || number->len == 0 ?
@@ +955,5 @@
> + }
> +
> + p = number->data;
> + octets = number->len;
> + while (!*p && octets > 0) {
IMPORTANT: we need to check these in the reverse order, otherwise we may read one byte beyond the end of the buffer. This should read:
while (octets > 0 && !*p) {
@@ +963,5 @@
> + if (octets == 0) {
> + return 0;
> + }
> + /* bits = 7..1 because we know at least one bit is set already */
> + for (bits = 7; bits > 0; --bits) {
There is a trick to do this using binary search. Since a properly generated key should have the most significant bit set, this for loop should exit in the first iteration. So it doesn't seem necessary to use the binary search trick.
@@ +975,5 @@
> /* returns key strength in bytes (not bits) */
> unsigned
> SECKEY_PublicKeyStrength(const SECKEYPublicKey *pubk)
> {
> + return (SECKEY_PublicKeyStrengthInBits(pubk) + 7) / 8;
I like your changes to these two functions. However, the change to SECKEY_PublicKeyStrengthInBits may break code that assumes it always returns a multiple of 8 for RSA, DSA, and DH keys. I think that is very unlikely though.
@@ +984,5 @@
> SECKEY_PublicKeyStrengthInBits(const SECKEYPublicKey *pubk)
> {
> + unsigned bitSize = 0;
> +
> + /* interpret modulus length as key strength */
Nit: move this comment to the switch statement.
::: lib/nss/nss.def
@@ +222,5 @@
> RSA_FormatBlock;
> SECITEM_CompareItem;
> SECKEY_CreateRSAPrivateKey;
> SECKEY_DestroyPublicKey;
> +SECKEY_ModulusLengthBits;
This needs to be added at the end of the file, inside a new section with the current NSS version.
::: lib/ssl/ssl3con.c
@@ +6660,5 @@
> goto loser; /* malformed. */
> }
> + if (SECKEY_ModulusLengthBits(&modulus) < SECKEY_MIN_MODP_PRIME_BITS) {
> + errCode = SSL_ERROR_WEAK_SERVER_RSA_KEY;
> + desc = insufficient_security;
The insufficient_security alert was added in TLS 1.0. So we need an alternative for SSL 3.0.
@@ +6754,5 @@
> if (rv != SECSuccess) {
> goto loser; /* malformed. */
> }
> + /* Allow a little less than 1024 */
> + if (SECKEY_ModulusLengthBits(&dh_p) < SECKEY_MIN_MODP_PRIME_BITS) {
Please update the comment. It doesn't match the code.
@@ +10058,5 @@
> ss->sec.keaKeyBits = ss->sec.authKeyBits =
> SECKEY_PublicKeyStrengthInBits(pubKey);
> + /* Too small: not good enough. Send a fatal alert. */
> + if (ss->sec.keaKeyBits < SECKEY_MIN_MODP_PRIME_BITS) {
> + PORT_SetError(SSL_ERROR_WEAK_SERVER_RSA_KEY);
How do you know the key is an RSA key?
@@ +10059,5 @@
> SECKEY_PublicKeyStrengthInBits(pubKey);
> + /* Too small: not good enough. Send a fatal alert. */
> + if (ss->sec.keaKeyBits < SECKEY_MIN_MODP_PRIME_BITS) {
> + PORT_SetError(SSL_ERROR_WEAK_SERVER_RSA_KEY);
> + (void)SSL3_SendAlert(ss, alert_fatal, insufficient_security);
We need a different alert for SSL 3.0.
@@ +10060,5 @@
> + /* Too small: not good enough. Send a fatal alert. */
> + if (ss->sec.keaKeyBits < SECKEY_MIN_MODP_PRIME_BITS) {
> + PORT_SetError(SSL_ERROR_WEAK_SERVER_RSA_KEY);
> + (void)SSL3_SendAlert(ss, alert_fatal, insufficient_security);
> + SECKEY_DestroyPublicKey(pubKey);
Nit: please remove the space at the end of the line.
@@ +10067,1 @@
> #ifndef NSS_DISABLE_ECC
IMPORTANT: please check this block of code, which adjusts ss->sec.authKeyBits (initialized on lines 10058-10059). Please make sure it doesn't undo the improvement you made to SECKEY_PublicKeyStrengthInBits(pubKey).
Attachment #8581044 -
Flags: review?(wtc) → review+
Assignee
Comment 33•10 years ago
(In reply to Wan-Teh Chang from comment #32)
> 1. This macro definition probably should be moved to keythi.h. The "t" in
> the header file's name stands for "type".
Done.
> 2. We should not need to protect redefinition, i.e., the "#ifndef
> SECKEY_MIN_MODP_PRIME_BITS" seems unnecessary.
Done.
> 3. Since 1017 is not a power of 2 that computer programmers are used to, it
> would be nice to cite the source of this minimum.
Done. I'm being conservative with this number. Setting this to 1023 would be OK with me too.
> > +extern unsigned SECKEY_ModulusLengthBits(const SECItem *number);
>
> I am surprised NSS doesn't yet have such a function. This seems to imply the
> existing key size/strength functions are deficient. If so, please add a
> comment to document how this new function is better than existing key
> size/strength functions.
The current size methods were added in 3.8. And arguably they were deficient as you can see by my refactoring.
I couldn't find anything that is equivalent, though I did find something in ssl3con.c that I can replace with a call to this function while fixing up the other patch:
- if (dh_g.len > dh_p.len || !ssl3_BigIntGreaterThanOne(&dh_g))
- goto alert_loser;
+ dh_g_bits = SECKEY_BigIntegerBitLength(&dh_g);
+ if (dh_g_bits > dh_p_bits || dh_g_bits <= 1)
+ goto alert_loser;
That makes the checking a little more robust for DHE as a result, though still not actually correct. Here I'm assuming that the check on length is merely a sanity check and that we do a proper check somewhere else.
> Or is the input parameter type the only difference? In that case, perhaps
> you can rename the function to be more general, such as
> "SECKEY_BigIntegerBits".
The input parameter is an important difference, yes.
> ::: lib/cryptohi/seckey.c
> @@ +949,5 @@
> > + unsigned octets;
> > + unsigned bits;
> > +
> > + if (!number || !number->data) {
> > + PORT_SetError(SEC_ERROR_INVALID_KEY);
>
> Should we also check || number->len == 0 ?
Well, given the more generic name now, I think that we can let the function simply report 0 bits in that case.
> @@ +955,5 @@
> > + }
> > +
> > + p = number->data;
> > + octets = number->len;
> > + while (!*p && octets > 0) {
>
> IMPORTANT: we need to check these in the reverse order, otherwise we may
> read one byte beyond the end of the buffer. This should read:
>
> while (octets > 0 && !*p) {
Oops.
> @@ +963,5 @@
> > + if (octets == 0) {
> > + return 0;
> > + }
> > + /* bits = 7..1 because we know at least one bit is set already */
> > + for (bits = 7; bits > 0; --bits) {
>
> There is a trick to do this using binary search. Since a properly generated
> key should have the most significant bit set, this for loop should exit in
> the first iteration. So it doesn't seem necessary to use the binary search
> trick.
I looked at the binary search and it's a little complex to set up. And I came to the same conclusion you did regarding performance.
> @@ +975,5 @@
> > /* returns key strength in bytes (not bits) */
> > unsigned
> > SECKEY_PublicKeyStrength(const SECKEYPublicKey *pubk)
> > {
> > + return (SECKEY_PublicKeyStrengthInBits(pubk) + 7) / 8;
>
> I like your changes to these two functions. However, the change to
> SECKEY_PublicKeyStrengthInBits may break code that assumes it always return
> a multiple of 8 for RSA, DSA, and DH keys. I think that is very unlikely
> though.
I thought the same. I think that this is worth adding to the release notes. Firefox assumes that these functions are doing the right thing, as most other users will, I expect.
> ::: lib/ssl/ssl3con.c
> @@ +6660,5 @@
> > goto loser; /* malformed. */
> > }
> > + if (SECKEY_ModulusLengthBits(&modulus) < SECKEY_MIN_MODP_PRIME_BITS) {
> > + errCode = SSL_ERROR_WEAK_SERVER_RSA_KEY;
> > + desc = insufficient_security;
>
> The insufficient_security alert was added in TLS 1.0. So we need an
> alternative for SSL 3.0.
Another reason for me to want to kill SSL 3.0 support. I've chosen illegal_parameter here. I couldn't see a better choice. Correctness in SSL 3.0 when it comes to error reporting isn't a high priority.
> @@ +10058,5 @@
> > ss->sec.keaKeyBits = ss->sec.authKeyBits =
> > SECKEY_PublicKeyStrengthInBits(pubKey);
> > + /* Too small: not good enough. Send a fatal alert. */
> > + if (ss->sec.keaKeyBits < SECKEY_MIN_MODP_PRIME_BITS) {
> > + PORT_SetError(SSL_ERROR_WEAK_SERVER_RSA_KEY);
>
> How do you know the key is an RSA key?
I'll check the signtype on the kea_def. I realize that this would have failed the tests for that other bug, had I run them over this patch.
> @@ +10059,5 @@
> > SECKEY_PublicKeyStrengthInBits(pubKey);
> > + /* Too small: not good enough. Send a fatal alert. */
> > + if (ss->sec.keaKeyBits < SECKEY_MIN_MODP_PRIME_BITS) {
> > + PORT_SetError(SSL_ERROR_WEAK_SERVER_RSA_KEY);
> > + (void)SSL3_SendAlert(ss, alert_fatal, insufficient_security);
>
> We need a different alert for SSL 3.0.
Boo.
> @@ +10067,1 @@
> > #ifndef NSS_DISABLE_ECC
>
> IMPORTANT: please check this block of code, which adjusts ss->sec.authKeyBit
> (initialized on lines 10058-10059). Please make sure it doesn't undo the
> improvement you made to SECKEY_PublicKeyStrengthInBits(pubKey);
Well, it only affects ECDH certificates, of which there are approximately none, so it doesn't make things worse. That said, it could use a little fixing up. The only question is whether it's worthwhile.
I just spent quite a while trying to work out how to do that properly. The best method I could think of is below: get the certificate issuer, then extract the issuer's public key and call SECKEY_PublicKeyStrengthInBits on that. I'm not that confident about getting that code right though. What do you think?
/* Static ECDH suites carry parameters in the certificate and
 * are therefore authenticated by the strength of their issuer. */
if ((ss->ssl3.hs.kea_def->kea == kea_ecdh_ecdsa ||
     ss->ssl3.hs.kea_def->kea == kea_ecdh_rsa) &&
    !cert->isRoot) {
    CERTCertificate *issuerCert = CERT_FindCertIssuer(cert, PR_Now(), certUsageAnyCA);
    if (issuerCert) {
        SECKEYPublicKey *issuerPublicKey = CERT_ExtractPublicKey(issuerCert);
        if (issuerPublicKey) {
            ss->sec.authKeyBits = SECKEY_PublicKeyStrengthInBits(issuerPublicKey);
            SECKEY_DestroyPublicKey(issuerPublicKey);
        }
        CERT_DestroyCertificate(issuerCert);
    }
}
I'm operating outside of my comfort zone here. The original code is obviously extremely hacky.
Assignee
Comment 34•10 years ago
r=wtc
Attachment #8581044 -
Attachment is obsolete: true
Attachment #8588235 -
Flags: review+
Assignee
Comment 35•10 years ago
This is the bit that I'm not so sure about.
Attachment #8588236 -
Flags: review?(wtc)
Assignee
Comment 36•10 years ago
Updated to actually check return status on the different functions.
Attachment #8588236 -
Attachment is obsolete: true
Attachment #8588236 -
Flags: review?(wtc)
Attachment #8588644 -
Flags: review?(wtc)
Comment 37•10 years ago
Comment on attachment 8588235 [details] [diff] [review]
bug1138554-1.patch
Review of attachment 8588235 [details] [diff] [review]:
-----------------------------------------------------------------
r=wtc. I suggest some changes. Please note the comments marked with "IMPORTANT". Arguably even those are minor issues, but I think we should handle them correctly.
::: lib/cryptohi/keythi.h
@@ +57,5 @@
> +/* This sets a limit on the DH or RSA modulus that we are OK with accepting.
> + * This allows for moduli that were previously (incorrectly) counted as being
> + * 1024 bits long because a bit in the most significant byte was set.
> + */
> +#define SECKEY_MIN_MODP_PRIME_BITS 1017
This macro name isn't appropriate if we also use it with an RSA modulus because an RSA modulus is not a prime number. Please try to come up with a neutral name that would work with both DH and RSA. Perhaps SECKEY_MIN_MODULUS_BITS?
It just occurred to me that we usually define such MIN/MAX constants in blapit.h. In particular, see http://mxr.mozilla.org/nss/source/lib/freebl/blapit.h#137
Please take a look and see it would be more appropriate to move this macro to blapit.h, or perhaps just increase the existing RSA_MIN_MODULUS_BITS and DH_MIN_P_BITS values to 1017.
::: lib/ssl/ssl3con.c
@@ +6641,5 @@
> rv = ssl3_ConsumeHandshakeVariable(ss, &modulus, 2, &b, &length);
> if (rv != SECSuccess) {
> goto loser; /* malformed. */
> }
> + if (SECKEY_BigIntegerBitLength(&modulus) < SECKEY_MIN_MODP_PRIME_BITS) {
IMPORTANT: for RSA keys, I believe this check is wrong because the temporary RSA keys should be 512 bits. See TLS 1.0 (RFC 2246) Appendix D.1: https://tools.ietf.org/html/rfc2246#appendix-D.1
I believe this RSA key size check should be performed in ssl3_HandleCertificate() instead.
@@ +6733,5 @@
> SECItem dh_g = {siBuffer, NULL, 0};
> SECItem dh_Ys = {siBuffer, NULL, 0};
> + unsigned dh_p_bits;
> + unsigned dh_g_bits;
> + unsigned dh_Ys_bits;
Nit: these three can be combined into the same variable. I guess the compiler will do this for us?
@@ +6748,5 @@
> rv = ssl3_ConsumeHandshakeVariable(ss, &dh_g, 2, &b, &length);
> if (rv != SECSuccess) {
> goto loser; /* malformed. */
> }
> + /* Check if dh_g is 1 or 0, or obviously too big. */
Nit: reverse this comment to match the two checks on line 6754.
@@ +10049,5 @@
> + /* Too small: not good enough. Send a fatal alert. */
> + if ((ss->ssl3.hs.kea_def->signKeyType == sign_rsa ||
> + ss->ssl3.hs.kea_def->signKeyType == sign_dsa) &&
> + ss->sec.keaKeyBits < SECKEY_MIN_MODP_PRIME_BITS) {
> + PORT_SetError(SSL_ERROR_WEAK_SERVER_RSA_KEY);
IMPORTANT: this error code is wrong for the ss->ssl3.hs.kea_def->signKeyType == sign_dsa case.
We can either add a SSL_ERROR_WEAK_SERVER_DSA_KEY error code, or just add a generic SSL_ERROR_WEAK_SERVER_CERTIFICATE_KEY error code that applies to RSA, DSA, and ECDSA public keys in server certificates.
Attachment #8588235 -
Flags: review+
Assignee
Comment 38•10 years ago
Thanks Wan-Teh.
(In reply to Wan-Teh Chang from comment #37)
> Please take a look and see it would be more appropriate to move this macro
> to blapit.h, or perhaps just increase the existing RSA_MIN_MODULUS_BITS and
> DH_MIN_P_BITS values to 1017.
Hmm, I think that is the right way to do this. DH_MIN_P_BITS isn't currently used, but this can correct that.
The RSA one is used for things like keygen. That means that we won't be able to generate the keys needed for export-strength suites. That's OK with me, but it means that we won't be able to act as a server for the rsa_1024 export cipher, since we'd be preventing the generation of the 512-bit ephemeral key.
For the record, I'm quite happy to break export modes.
> ::: lib/ssl/ssl3con.c
> > + if (SECKEY_BigIntegerBitLength(&modulus) < SECKEY_MIN_MODP_PRIME_BITS) {
>
> IMPORTANT: for RSA keys, I believe this check is wrong because the temporary
> RSA keys should be 512 bits. See TLS 1.0 (RFC 2246) Appendix D.1:
> https://tools.ietf.org/html/rfc2246#appendix-D.1
You are right; this effectively makes US export compliance impossible. The crazy person who turns on those suites is already operating well outside the recommended parameters.
I think that I might just reduce this specific limit to 512, but pair it with a nice little banner that explains that export RSA is a hazard.
> I believe this RSA key size check should be performed in
> ssl3_HandleCertificate() instead.
ssl3_AuthCertificate() (which is called from ssl3_HandleCertificate) already has this check in place. The limit here only affects the ephemeral exchange, which in effect disables export compatibility.
> @@ +6733,5 @@
> > + unsigned dh_p_bits;
> > + unsigned dh_g_bits;
> > + unsigned dh_Ys_bits;
>
> Nit: these three can be combined into the same variable. I guess the
> compiler will do this for us?
Both are compared to dh_p_bits so you need two at least. I considered compressing dh_g_bits and dh_Ys_bits, but it seemed less obvious that way.
> @@ +10049,5 @@
> > + /* Too small: not good enough. Send a fatal alert. */
> > + if ((ss->ssl3.hs.kea_def->signKeyType == sign_rsa ||
> > + ss->ssl3.hs.kea_def->signKeyType == sign_dsa) &&
> > + ss->sec.keaKeyBits < SECKEY_MIN_MODP_PRIME_BITS) {
> > + PORT_SetError(SSL_ERROR_WEAK_SERVER_RSA_KEY);
>
> IMPORTANT: this error code is wrong for the ss->ssl3.hs.kea_def->signKeyType
> == sign_dsa case.
>
> We can either add a SSL_ERROR_WEAK_SERVER_DSA_KEY error code, or just add a
> generic SSL_ERROR_WEAK_SERVER_CERTIFICATE_KEY error code that applies to
> RSA, DSA, and ECDSA public keys in server certificates.
I think that the latter works out fine. I don't think that being specific provides any particular advantage here.
Assignee
Comment 39•10 years ago
OK, I just tested this out. Turns out that setting RSA_MIN_MODULUS_BITS to 1023 causes the default configuration to break. The default setup creates a 512-bit "step down" key in ssl_ConfigSecureServer(). That fails with a higher minimum.
I'm not really looking to embark upon a purge of export cipher suites for just this little change.
I see a couple of options:
A. Another #define for the *real* RSA modulus minimum.
B. Change the default for ss->opt.noStepDown
C. Generate a 1024-bit "step down" key.
D. Start a partial removal of export cipher suites.
I'm inclined to choose B, but D is looking quite attractive.
Comment 40•10 years ago
Martin: thanks a lot for looking into this. We should not do C.
I didn't mean to make you do a lot of work. It seems that we
should immediately increase RSA_MIN_MODULUS_BITS from 128 to
at least 512. In ssl3con.c, we can consider using the number
1017 directly or add a new macro for the RSA modulus minimum
outside freebl until we have removed the export ciphersuites.
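For reference, a sketch of the freebl constants under discussion; the values are the ones floated in comments 37-40 and in the attached patches, not necessarily what finally landed:

/* lib/freebl/blapit.h -- sketch only; values as debated in this bug */
#define RSA_MIN_MODULUS_BITS  512   /* up from 128, so the 512-bit step-down keygen keeps working */
#define DH_MIN_P_BITS        1023   /* "a little less than 1024" (1017 was also floated) */
#define DSA_MIN_P_BITS       1023   /* the review in comment 45 suggests rounding this up to 1024 */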
Comment 41•10 years ago
Comment on attachment 8588644 [details] [diff] [review]
bug1138554-2.patch
Review of attachment 8588644 [details] [diff] [review]:
-----------------------------------------------------------------
This seems correct, but I am worried that the CERT_FindCertIssuer() call may not work well. But I don't have a good suggestion.
::: lib/ssl/ssl3con.c
@@ +10060,5 @@
> }
> #ifndef NSS_DISABLE_ECC
> + /* Static ECDH suites carry parameters in the certificate and
> + * are therefore authenticated by the strength of their issuer. */
> + if (ss->sec.keaType == kt_ecdh && !cert->isRoot &&
1. I think we can omit the ss->sec.keaType == kt_ecdh check because it is redundant with (ss->ssl3.hs.kea_def->kea == kea_ecdh_ecdsa || ss->ssl3.hs.kea_def->kea == kea_ecdh_rsa).
2. Why do we need to check !cert->isRoot? We don't verify root certificates.
UPDATE: I guess this is to short-circuit the CERT_FindCertIssuer() call?
@@ +10064,5 @@
> + if (ss->sec.keaType == kt_ecdh && !cert->isRoot &&
> + (ss->ssl3.hs.kea_def->kea == kea_ecdh_ecdsa ||
> + ss->ssl3.hs.kea_def->kea == kea_ecdh_rsa)) {
> + CERTCertificate *issuerCert =
> + CERT_FindCertIssuer(cert, PR_Now(), certUsageAnyCA);
IMPORTANT: this requires more thought. CERT_FindCertIssuer() uses a naive way to find the issuer certificate. It doesn't do proper certification path construction. Do you know if that matters here?
@@ +10074,5 @@
> + SECKEY_PublicKeyStrengthInBits(issuerPublicKey);
> + SECKEY_DestroyPublicKey(issuerPublicKey);
> + }
> + }
> + CERT_DestroyCertificate(issuerCert);
IMPORTANT: move this inside the "if (issuerCert)" block.
Assignee
Comment 42•10 years ago
Thanks Wan-Teh.
I think that CERT_FindCertIssuer() is probably OK; this is only to update the auth bit count.
Checking isRoot avoids the need to look for an issuer, which would only fail. We get that all the time with WebRTC (though we don't use static ECDH certs).
BTW, I didn't add a macro, just a bare 1023 and a big warning sticker on it.
Assignee
Comment 43•10 years ago
I rolled the patches together. Interdiff coming if you'd prefer to see only that.
Attachment #8588235 -
Attachment is obsolete: true
Attachment #8588644 -
Attachment is obsolete: true
Attachment #8588644 -
Flags: review?(wtc)
Attachment #8589276 -
Flags: review?(wtc)
Assignee
Comment 44•10 years ago
Only the changes from the last round.
Comment 45•10 years ago
Comment on attachment 8589276 [details] [diff] [review]
bug1138554.patch
Review of attachment 8589276 [details] [diff] [review]:
-----------------------------------------------------------------
This seems correct. I have two comments.
::: lib/freebl/blapit.h
@@ +180,5 @@
> */
>
> #define DSA1_Q_BITS 160
> #define DSA_MAX_P_BITS 3072
> +#define DSA_MIN_P_BITS 1023
This can be 1024 because only a few discrete DSA key sizes are specified.
::: lib/ssl/ssl3con.c
@@ +10053,5 @@
> + * RSA_MIN_MODULUS_BITS and use that here. */
> + if ((ss->ssl3.hs.kea_def->signKeyType == sign_rsa &&
> + ss->sec.keaKeyBits < 1023) ||
> + (ss->ssl3.hs.kea_def->signKeyType == sign_dsa &&
> + ss->sec.keaKeyBits < DSA_MIN_P_BITS)) {
It seems that we should not need to check the certificate's public key size here. The ss->authCertificate callback is supposed to check the certificate's key size. Are we trying to enforce an NSS minimum? If we are, then we also need to check ECDSA key size, and we should do the certificate key size checks after the static ECDH adjustment below.
Note that freebl will also perform minimum key size checks.
@@ +10069,5 @@
> + if (!cert->isRoot &&
> + (ss->ssl3.hs.kea_def->kea == kea_ecdh_ecdsa ||
> + ss->ssl3.hs.kea_def->kea == kea_ecdh_rsa)) {
> + CERTCertificate *issuerCert =
> + CERT_FindCertIssuer(cert, PR_Now(), certUsageAnyCA);
I think we should use certUsageSSLServer instead of certUsageAnyCA here.
If you trace the code of SSL_AuthCertificate:
http://mxr.mozilla.org/mozilla-central/ident?i=SSL_AuthCertificate
the call chain goes through CERT_VerifyCert, cert_VerifyCertWithFlags, CERT_VerifyCertChain, cert_VerifyCertChain, cert_VerifyCertChainOld, eventually you will arrive at this CERT_FindCertIssuer call:
http://mxr.mozilla.org/mozilla-central/source/security/nss/lib/certhigh/certvfy.c#449
449 /* find the certificate of the issuer */
450 issuerCert = CERT_FindCertIssuer(subjectCert, t, certUsage);
So, SSL_AuthCertificate passes certUsageSSLServer to CERT_FindCertIssuer. We should emulate that here.
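As a one-line sketch, the suggestion is simply to pass the SSL-server usage:

/* Reviewer's suggestion (sketch only): mirror SSL_AuthCertificate's usage value. */
CERTCertificate *issuerCert =
    CERT_FindCertIssuer(cert, PR_Now(), certUsageSSLServer);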
Comment 46•10 years ago
Comment on attachment 8589276 [details] [diff] [review]
bug1138554.patch
Review of attachment 8589276 [details] [diff] [review]:
-----------------------------------------------------------------
I didn't check the whole patch, but I have a comment on one part, after reading Wan-Teh mention CERT_FindCertIssuer.
::: lib/ssl/ssl3con.c
@@ +10069,5 @@
> + if (!cert->isRoot &&
> + (ss->ssl3.hs.kea_def->kea == kea_ecdh_ecdsa ||
> + ss->ssl3.hs.kea_def->kea == kea_ecdh_rsa)) {
> + CERTCertificate *issuerCert =
> + CERT_FindCertIssuer(cert, PR_Now(), certUsageAnyCA);
This change may be best done in a separate bug.
Note that calling CERT_FindCertIssuer here is generally a bad thing to do, because CERT_FindCertIssuer is generally not smart enough like libpkix, mozilla::pkix, etc. are. It's OK as far as Gecko is concerned, currently, because Gecko doesn't use these cipher suites. But, I don't think it's good to add new calls to CERT_FindCertIssuer to libssl when we're supposed to be excising them.
More generally, I disagree with the whole premise of the logic here. authKeyBits should be the size of the ECDH key, not the size of the issuer's key. authKeyBits is "assuming the end-entity certificate is valid, how strong is the authentication of this handshake?" In ECDH key exchange, the only asymmetric key used for authentication is the ECDH key.
Obviously, our trust in a particular end-entity's key is limited by our trust in the issuer's key. But that kind of transitive trust in crypto parameters isn't what authKeyBits is used for in applications.
I suggest, instead, that we just set authKeyBits to keaKeyBits for these cipher suites.
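A minimal sketch of that suggestion, using the field and constant names already quoted in this bug (placement is illustrative):

/* Sketch: for static ECDH suites, report the ECDH key itself as the
 * authentication strength instead of chasing the issuer's key. */
if (ss->ssl3.hs.kea_def->kea == kea_ecdh_ecdsa ||
    ss->ssl3.hs.kea_def->kea == kea_ecdh_rsa) {
    ss->sec.authKeyBits = ss->sec.keaKeyBits;
}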
Comment 47•10 years ago
Karthikeyan, Matthew,
Do you (still) think that 1024 bits is sufficient for this change? The impression I got from your recent email is that you *don't* think 1024 is a sufficient limit. I am also doubting that 1024 bits is generally sufficient.
In my comments above, I suggested that it might be better to do this gradually, increasing the limit to 768 bits and then to 1024 bits. That is because I didn't understand at that time that the problem is more general than the fact that export-sized parameters are accepted. Thanks to Karthikeyan for clarifying my understanding on that.
I think this change should land, but I also think that it is time to disable the TLS_DHE_* cipher suites in Firefox and probably in NSS by default, considering a 1024-bit minimum seems insufficient but any minimum higher than that will have big compatibility problems.
Flags: needinfo?(matthewdgreen)
Flags: needinfo?(karthikeyan.bhargavan)
OS: Mac OS X → All
Hardware: x86 → All
Target Milestone: --- → 3.18
Version: 3.18 → trunk
Assignee
Comment 48•10 years ago
(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #46)
> I suggest, instead, that we just set authKeyBits to keaKeyBits for these
> cipher suites.
That is something that I can do easily. Delete the whole mess.
Comment 49•10 years ago
Comment on attachment 8589276 [details] [diff] [review]
bug1138554.patch
Review of attachment 8589276 [details] [diff] [review]:
-----------------------------------------------------------------
::: lib/cryptohi/seckey.c
@@ +970,5 @@
> + if (*p & (1 << bits)) {
> + break;
> + }
> + }
> + return octets * 8 + bits - 7;
Again, I didn't read the whole patch, however I read this part.
I suggest that you avoid trying to be so precise here.
We know from previous reports that some certificates have 2047 and 1023 bit keys, according to this type of calculation. I believe it's likely that there are checks within Gecko and probably within other applications that assume (after reading the NSS code) that these key sizes will get rounded up to the next multiple of 8 bits.
Since this change is likely going to get put into an accelerated release, it's better to be conservative here and continue doing the rounding up, to minimize compatibility problems. If you think the added precision is useful (I'm not sure it is) then perhaps that can land in a follow-up change in the very next NSS (pre-)release, where we'll have more time to test it.
Comment 50•10 years ago
(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #47)
> Do you (still) think that 1024 bits is sufficient for this change? The
> impression I got from your recent email is that you *don't* think 1024 is a
> sufficient limit. I am also doubting that 1024 bits is generally sufficient.
Our current thinking is that 512 and 768 bit groups are currently broken even
against modestly powerful adversaries. We have several demonstrations for 512
bit groups and fairly solid estimates for 768 bits.
We believe that 1024 bit groups are probably broken against state-level adversaries,
but do not yet have concrete numbers to match this judgement.
I don't speak for Matthew, but for 1024 bits, I think that the pressing concern is
that most servers are using very old (hard-coded) groups that may have been broken
with multi-year efforts. Turning off 1024 bits at the client would help,
but we also need to move servers to stronger groups.
Moving wholesale from DHE to ECDHE seems attractive, but mainly because
we don't know good attacks on curves just yet. I don't know enough on
the EC literature to say whether it is just a matter of time before such attacks appear.
On a separate note, we've started notifying all vendors about our disclosure date.
We are submitting our paper to ACM CCS, deadline May 16. We will publish a preprint
paper soon after, so you can expect disclosure during the May 18-22 week.
Flags: needinfo?(karthikeyan.bhargavan)
Assignee
Updated•10 years ago
Whiteboard: Embargo until multi-vendor coordinated info release → Embargo until multi-vendor coordinated info release (May 18)
Assignee
Comment 51•10 years ago
Adding Hubert. The discussion in bug 102794 is heading dangerously close to this and you need to be aware of the problem.
Comment 52•10 years ago
Thanks for including me on it, but I've known about this issue for a long time; it's far from secret. There was even a test site to check your browser for this (https://dh.tlsfun.de/), sadly a bit broken now.
Assignee
Updated•10 years ago
Whiteboard: Embargo until multi-vendor coordinated info release (May 18) → Embargo until multi-vendor coordinated info release (May 19)
Updated•10 years ago
status-b2g-v2.0:
--- → affected
status-b2g-v2.0M:
--- → affected
status-b2g-v2.1:
--- → affected
status-b2g-v2.1S:
--- → affected
status-b2g-v2.2:
--- → affected
status-b2g-master:
--- → affected
status-firefox38:
--- → wontfix
status-firefox38.0.5:
--- → wontfix
status-firefox39:
--- → affected
status-firefox40:
--- → affected
status-firefox41:
--- → affected
status-firefox-esr31:
--- → affected
status-firefox-esr38:
--- → affected
Target Milestone: 3.18 → 3.19.1
Assignee
Comment 54•10 years ago
I realize that I didn't double-check that we had all the necessary approvals. And now we don't have a lot of time.
Wan-Teh, are we good to go with this? Based on comment 46, I've removed the issuer tracing that we were doing for static ECDH. That should also remove the concerns you raised in your last comment.
Attachment #8589276 -
Attachment is obsolete: true
Attachment #8589277 -
Attachment is obsolete: true
Attachment #8589276 -
Flags: review?(wtc)
Attachment #8607648 -
Flags: review?(wtc)
Assignee
Comment 55•10 years ago
Good thing I ran a full regression suite this morning. This change regresses ECDH_RSA suites. If it weren't for the fact that it took so long, I would have noticed earlier.
Assignee
Comment 56•10 years ago
Apologies for the churn.
Attachment #8607648 -
Attachment is obsolete: true
Attachment #8607648 -
Flags: review?(wtc)
Attachment #8607687 -
Flags: review?(wtc)
Comment 57•10 years ago
Comment on attachment 8607687 [details] [diff] [review]
bug1138554.patch
Martin: can you prepare a version of this patch
that omits the white space changes to ssl3con.c?
Comment 58•10 years ago
Comment on attachment 8607687 [details] [diff] [review]
bug1138554.patch
Review of attachment 8607687 [details] [diff] [review]:
-----------------------------------------------------------------
::: lib/ssl/ssl3con.c
@@ +10052,5 @@
> + * break export cipher suites; when those are removed, update
> + * RSA_MIN_MODULUS_BITS and use that here. */
> + if ((ss->ssl3.hs.kea_def->signKeyType == sign_rsa &&
> + ss->sec.keaKeyBits < 1023 &&
> + ss->ssl3.hs.kea_def->kea != kea_ecdh_rsa) ||
Why do we need the ss->ssl3.hs.kea_def->kea != kea_ecdh_rsa test?
Should we also test for kea_dh_rsa?
@@ +10054,5 @@
> + if ((ss->ssl3.hs.kea_def->signKeyType == sign_rsa &&
> + ss->sec.keaKeyBits < 1023 &&
> + ss->ssl3.hs.kea_def->kea != kea_ecdh_rsa) ||
> + (ss->ssl3.hs.kea_def->signKeyType == sign_dsa &&
> + ss->sec.keaKeyBits < DSA_MIN_P_BITS)) {
Should we also test for kea_dh_dss?
Comment 59•10 years ago
Comment on attachment 8607687 [details] [diff] [review]
bug1138554.patch
Review of attachment 8607687 [details] [diff] [review]:
-----------------------------------------------------------------
::: lib/ssl/ssl3con.c
@@ +10054,5 @@
> + if ((ss->ssl3.hs.kea_def->signKeyType == sign_rsa &&
> + ss->sec.keaKeyBits < 1023 &&
> + ss->ssl3.hs.kea_def->kea != kea_ecdh_rsa) ||
> + (ss->ssl3.hs.kea_def->signKeyType == sign_dsa &&
> + ss->sec.keaKeyBits < DSA_MIN_P_BITS)) {
Martin: after staring at this code for a few minutes, I think
we should rewrite the conditional expression as follows:
if ((pubKey->keyType == rsaKey &&
ss->sec.authKeyBits < 1023) ||
(pubKey->keyType == dsaKey &&
ss->sec.authKeyBits < DSA_MIN_P_BITS)) {
Note that I suggest testing authKeyBits instead of keaKeyBits
even though they have the same value.
We can use SECKEY_GetPublicKeyType(pubKey) instead of pubKey->keyType.
Assignee
Comment 60•10 years ago
Arrgggh, I'm sorry about the whitespace. I'll get onto that.
(In reply to Wan-Teh Chang from comment #58)
> Why do we need the ss->ssl3.hs.kea_def->kea != kea_ecdh_rsa test?
Hmm, this definitely needs a comment then.
_ECDH_RSA_ suites are funny because the public key is EC and that doesn't pass the test.
> Should we also test for kea_dh_rsa?
...
> Should we also test for kea_dh_dss?
The test for kea_dh_{rsa,dss} would be correct-ish, in the sense that it's on the right order of magnitude. But we will be testing against the wrong constant if RSA and DH ever diverge in configuration.
The test that I'm looking for here is a little tricky to get from the information we have. It's whether the key is wide enough to sign, if we're signing, or wide enough for key exchange for the static rsa suites. We sign for ephemeral suites. We only do static key exchange when signKeyType == sign_rsa AND exchKeyType == kt_rsa.
Maybe the test needs to be
// rsa signing of ephemeral keys, or static rsa
(signKeyType == sign_rsa && (ephemeral || exchKeyType == ssl_kea_rsa) && size < 1023)
// dsa signing of ephemeral keys
|| (signKeyType == sign_dsa && ephemeral && size < DSA_MIN_P_BITS)
// static DH
|| (exchKeyType == ssl_kea_dh && !ephemeral && size < DH_MIN_P_BITS)
// ...and we don't check EC yet
That's complicated, but does it make sense?
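Roughly, in C, with is_ephemeral standing in as a hypothetical flag for "this suite signs an ephemeral key" (the real patch has to derive that from the kea_def):

/* Sketch of the proposed check; is_ephemeral is hypothetical. */
PRBool weakKey =
    /* RSA signing of ephemeral keys, or static RSA key exchange */
    (ss->ssl3.hs.kea_def->signKeyType == sign_rsa &&
     (is_ephemeral || ss->ssl3.hs.kea_def->exchKeyType == ssl_kea_rsa) &&
     ss->sec.authKeyBits < 1023) ||
    /* DSA signing of ephemeral keys */
    (ss->ssl3.hs.kea_def->signKeyType == sign_dsa &&
     is_ephemeral && ss->sec.authKeyBits < DSA_MIN_P_BITS) ||
    /* static DH; ECC is still not checked */
    (ss->ssl3.hs.kea_def->exchKeyType == ssl_kea_dh &&
     !is_ephemeral && ss->sec.authKeyBits < DH_MIN_P_BITS);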
Assignee
Comment 61•10 years ago
(In reply to Wan-Teh Chang from comment #59)
> We can use SECKEY_GetPublicKeyType(pubKey) instead of pubKey->keyType.
That makes perfect sense. Greater knowledge prevails. That's what I was looking for.
Assignee
Comment 62•10 years ago
This passes the whole test suite and I think that it's a better check.
Attachment #8607687 -
Attachment is obsolete: true
Attachment #8607687 -
Flags: review?(wtc)
Attachment #8608187 -
Flags: review?(wtc)
Comment 64•10 years ago
Can this bug be made public now?
Comment 65•10 years ago
Comment on attachment 8608187 [details] [diff] [review]
bug1138554.patch
Martin: the ssl3con.c file in this patch still contains
whitespace changes, so I can't mark this patch review+.
Assignee
Comment 66•10 years ago
Again! I'm sorry, let me fix that.
Assignee
Comment 67•10 years ago
I'm so sorry about the whitespace thing.
Attachment #8608187 -
Attachment is obsolete: true
Attachment #8608187 -
Flags: review?(wtc)
Attachment #8608384 -
Flags: review?(wtc)
Assignee
Comment 69•10 years ago
Don't we usually land this stuff before opening bugs up?
Flags: needinfo?(martin.thomson)
Comment 70•10 years ago
Usually, but usually bugs aren't announced until a fix is available. We hide bugs to keep bad guys (or additional bad guys?) from figuring out how to abuse the problem while there is no fix. If the problem is public then there's probably nothing to protect. Unless there are additional details about the problem here that haven't been made public, we should un-hide it.
Comment 72•10 years ago
Comment on attachment 8608384 [details] [diff] [review]
bug1138554.patch
Review of attachment 8608384 [details] [diff] [review]:
-----------------------------------------------------------------
r=wtc. I hope you can add a short comment to explain why
the minimum key size is 1023 bits rather than 1024 bits.
It may be enough to only allow off-by-one key sizes for
RSA keys. Does the EFF SSL Observatory report any 1023-bit
RSA certificates?
Attachment #8608384 -
Flags: review?(wtc) → review+
Assignee
Comment 73•10 years ago
https://hg.mozilla.org/projects/nss/rev/ae72d76f8d24
Thanks for your perseverance Wan-Teh.
I added comments in two places; the one for RSA is a little shorter. Now that the accounting has been updated, we should be able to collect accurate stats. I'd like to raise these numbers soon, but that might be a good thing to add to the "remove export ciphers entirely" work.
I don't know about the observatory data. I suspect that there are none, and that we can increase this to 1024 safely. The security difference between the two is marginal.
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Comment 74•10 years ago
We know there have been 1023-bit certs in the past, and they may still be on intranets.
Comment 76•10 years ago
Martin, please file the uplift requests for aurora, beta & esr38 ASAP. Thanks.
Flags: needinfo?(martin.thomson)
Comment 77•10 years ago
39.0b1 users could plan to use the addon: https://addons.mozilla.org/en-US/firefox/addon/disable-dhe/ I think that the patch we want to uplift is in bug 1166031, but that needs more time on Nightly. Beta 2 will be out next Friday so we may aim for that.
Comment 78•10 years ago
Martin, how about this: let's uplift this patch to aurora on Monday, and plan to uplift whatever happens in bug 1166031 later in the week.
Meanwhile let's write a release note with a link to the addon. If you can improve the wording of the note (which is long -- I just used your addon description) that would be great. This seems like too much detail.
Release Note Request (optional, but appreciated)
[Why is this notable]:
[Suggested wording]: The logjam attack allows an attacker to impersonate servers that support weak keys.
Later versions of Firefox 39 will include changes that will increase the minimum strength of keys to 1024 bits.
Until then, users who are concerned about their online security can install this addon to disable the TLS cipher suites that could be used to mount the logjam attack.
[Links (documentation, blog post, etc)]: https://addons.mozilla.org/en-US/firefox/addon/disable-dhe
relnote-firefox:
--- → 39+
Comment 80•10 years ago
Revised relnote: Users concerned about the Logjam vulnerability can install the Disable-DHE addon (https://addons.mozilla.org/en-US/firefox/addon/disable-dhe)
Comment 81•9 years ago
(In reply to Martin Thomson [:mt] from comment #73)
> https://hg.mozilla.org/projects/nss/rev/ae72d76f8d24
This caused a bustage with older Windows compilers, because of the C++ style variable declaration in the middle of a code block.
https://bot.nss-crypto.org:8011/builders/1-win-x32-DBG/builds/447/steps/shell_1/logs/stdio
I've checked in a bustage fix:
https://hg.mozilla.org/projects/nss/rev/be964e8dc8e9
(I think it would have worked to reorder the two lines, but I didn't want to introduce a semantic change as part of a bustage fix, that's why I've kept the order of assignments.)
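For illustration, the kind of construct that trips these builds (NSS's C files are compiled as C89 by older MSVC, where a declaration may not follow a statement inside a block); this is a generic example, not the actual NSS code:

static int c89_unfriendly(int x)
{
    x += 1;
    int doubled = x * 2;   /* error under C89: declaration after a statement */
    return doubled;
}

static int c89_friendly(int x)
{
    int doubled;           /* declare at the top of the block... */
    x += 1;
    doubled = x * 2;       /* ...and assign after the statements */
    return doubled;
}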
Comment 82•9 years ago
Kai: thanks for fixing the compilation error. I confirm it is also OK
to just reorder those two lines.
Assignee | ||
Comment 83•9 years ago
:lizzard just asked about the bustage in comment 81. The concern here is that taking NSS_3_19_1_BETA1, which has the bustage, might cause problems. As far as I know - though I'm not certain - we use the same compiler configuration to build everything from Nightly through to Release. Since this bustage didn't cause problems with Nightly (and soon, DevEdition) builds, I think that it is safe enough to take the _BETA1 label into Beta.
Release channels are going to wait for the _RTM tag, which will include the fix.
Updated•9 years ago
tracking-firefox-esr31: --- → ?
tracking-firefox-esr38: --- → ?
Comment 84•9 years ago
Martin,
This change ended up breaking Chrome's use of NSS, and I'm fairly certain it represents a bigger change than intended.
That is, I think it's totally reasonable to reject these keys during TLS handshakes or at usage time. However, raising the minimum key size that can be *generated* is a real API change.
Concrete requests:
1) Bob, can you take a look at the changes to blapit.h and indicate your/Red Hat's feelings on an API change of this nature?
2) Martin, can you make sure to review the release notes before they go out, so that subtle-but-big changes like this are documented? We really should have a "BREAKING CHANGES" section for things like this.
3) Kai, do we have a process for updating/amending the release notes after we've announced?
I've been following this bug and even I wasn't aware of those changes - I truly thought you were just fixing the SSL layer, and that's what was documented in the release notes ( https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_3.19.1_release_notes )
I'm not trying to derail this bug with a discussion of why we generate small keys - but we do it so that tests run in a reasonable time under instrumentation.
I also explicitly didn't reopen this bug, because I think that depends on Bob's feedback. If it is a breaking change, then we need to decide whether to roll back, or simply accept the documentation and roll forward; although if that is the case, I think we should have called 3.19.1 "3.20".
Flags: needinfo?(rrelyea)
Flags: needinfo?(martin.thomson)
Flags: needinfo?(kaie)
Assignee | ||
Comment 85•9 years ago
Release notes updated. Feel free to add any shouting you feel appropriate; I didn't.
Flags: needinfo?(martin.thomson)
Comment 86•9 years ago
Shouldn't the Compatibility section of the 3.19.1 release notes be updated also?
Comment 87•9 years ago
We can easily lower RSA_MIN_MODULUS_BITS, DH_MIN_P_BITS,
and DSA_MIN_P_BITS in blapit.h and add new SSL-specific
macros for the minimum RSA, DH, and DSA key sizes.
Please suggest the appropriate values for RSA_MIN_MODULUS_BITS,
DH_MIN_P_BITS, and DSA_MIN_P_BITS for freebl/softoken. Thanks.
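(A sketch of the separation being proposed might look like the following. The blapit.h macro names are the ones quoted above; the SSL_-prefixed names and all numeric values are placeholders, not the actual patch.)

    /* blapit.h: low-level freebl/softoken limits that gate key *generation*.
     * These could stay at (or be restored to) permissive values so callers
     * such as test suites can still create small keys. Values are placeholders. */
    #define RSA_MIN_MODULUS_BITS 128
    #define DH_MIN_P_BITS        128
    #define DSA_MIN_P_BITS       512

    /* New SSL-specific minimums (hypothetical names) that gate what libssl
     * will accept or offer in a handshake, independent of what softoken can
     * generate. libssl would check against these, e.g. rejecting a
     * ServerKeyExchange whose prime is shorter than SSL_DH_MIN_P_BITS. */
    #define SSL_RSA_MIN_MODULUS_BITS 1023
    #define SSL_DH_MIN_P_BITS        1023
    #define SSL_DSA_MIN_P_BITS       1023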
Comment 88•9 years ago
For Red Hat, we won't be picking up the softoken changes for our RHEL-6 build because they affect the FIPS boundary. In general I'm OK with increasing the minimums. I am concerned that Ryan has run into some compatibility issues, however. If these issues are 'real', then I would prefer not to increase the minimum values in softoken; it's better to handle that in the protocols we are using (i.e. SSL, certificate verification, etc.). I'd like to know what the issues were, because they may affect whether we pick the changes up in RHEL-7 (as well as the recommendations here; I don't want to arbitrarily hose Chrome, for instance).
I too thought we were just changing the SSL layer, and was surprised when we got the note that there were softoken changes.
I think Kai is on PTO right now, so you may not get a quick response from him.
Flags: needinfo?(rrelyea)
Comment 89•9 years ago
@Robert: the breaking change I found was that PK11_GenerateKeyPair now fails if you ask for a small key.
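(To make the breakage concrete, here is a minimal sketch of the affected call, with error handling trimmed. Whether a given size fails depends on the raised minimums in blapit.h, so treat the 512-bit figure as an example rather than the exact threshold.)

    #include <stdio.h>
    #include "nss.h"
    #include "pk11pub.h"
    #include "keyhi.h"

    int main(void)
    {
        PK11SlotInfo *slot;
        SECKEYPublicKey *pub = NULL;
        SECKEYPrivateKey *priv;
        /* A small RSA key of the sort test suites generate to keep runtimes down. */
        PK11RSAGenParams params = { 512, 0x10001 };  /* modulus bits, public exponent */

        if (NSS_NoDB_Init(NULL) != SECSuccess)
            return 1;
        slot = PK11_GetInternalKeySlot();

        priv = PK11_GenerateKeyPair(slot, CKM_RSA_PKCS_KEY_PAIR_GEN, &params,
                                    &pub, PR_FALSE, PR_FALSE, NULL);
        if (!priv) {
            /* With the raised softoken minimums this path is reachable for
             * sizes that previously succeeded. */
            fprintf(stderr, "key pair generation failed\n");
        } else {
            SECKEY_DestroyPrivateKey(priv);
            SECKEY_DestroyPublicKey(pub);
        }
        PK11_FreeSlot(slot);
        return priv ? 0 : 2;
    }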
Comment 90•9 years ago
This also appears to underlie bug 1171502, where there is some disagreement about whether we should roll back this change or expect sites to upgrade in response.
Comment 91•9 years ago
I filed bug 1172128 to fix the problem Ryan reported
in comment 84.
Comment 92•9 years ago
(In reply to Ryan Sleevi from comment #84)
> 3) Kai, do we have a process for updating/amending the release notes after
> we've announced?
We don't have a process yet.
If it's just clarification or additional information, I think it is fine to add to the release notes page.
If it's an important addition to the release notes, then it would be nice to send a follow up message to the release announcement on the dev-tech-crypto newsgroup.
Flags: needinfo?(kaie)
Comment 93•9 years ago
Is manual testing needed for this fix? If yes, could you provide some testing steps?
Flags: needinfo?(martin.thomson)
Assignee | ||
Comment 94•9 years ago
Visit https://weakdh.org/.
It should say "Good News! Your browser is safe against the Logjam attack."
Flags: needinfo?(martin.thomson)
Comment hidden (obsolete)
Comment 96•9 years ago
Dan: That was the same CVE as for https://bugzilla.mozilla.org/show_bug.cgi?id=1086145#c84. Perhaps I've botched things in my email?
Comment 97•9 years ago
Er, sorry, I botched the #. Bug 1086145 was CVE-2015-2721.
Updated•9 years ago
Whiteboard: Embargo until multi-vendor coordinated info release (May 19) → [adv-main39+][adv-esr38.1+][adv-esr31.8+] Embargo until multi-vendor coordinated info release (May 19)
Comment 98•9 years ago
MITRE says we should use the CVE assigned to the protocol issue overall, rather than assigning one specific to our implementation of DHE. That CVE is CVE-2015-4000.
Alias: CVE-2015-2723 → CVE-2015-4000
Comment 99•9 years ago
Verified fixed on:
* ESR 31.8.0 (20150624141335),
* ESR 38.1.0 (20150624141534),
* RC 39.0 Build 4 (20150624153222),
* Aurora 40.0a2 (2015-06-25),
* Nightly 41.0a1 (2015-06-25),
using Windows 8.1 (x64), Ubuntu 14.04 (x86) and Mac OS X 10.9.5.
Testing was done according to the instructions from Comment 94.
Comment 101•9 years ago
As per my weekend analysis, this is the biggest site compatibility issue of Firefox 39.
https://developer.mozilla.org/en-US/Firefox/Releases/39/Site_Compatibility#Security
https://input.mozilla.org/en-US/?product=Firefox&q=ssl_error_weak_server_ephemeral_dh_key
https://support.mozilla.org/en-US/search?q=ssl_error_weak_server_ephemeral_dh_key
I'm reaching out to those affected sites via https://twitter.com/FxSiteCompat
Keywords: dev-doc-needed, site-compat
Comment 102•9 years ago
Isn't Thunderbird also affected by CVE-2015-4000 when using (START)TLS? I couldn't find a bug report for Thunderbird (only bug 1175816, but this seems to be invalid/duplicate).
Comment 103•9 years ago
(In reply to Christian Kujau from comment #102)
> Isn't Thunderbird also affected by CVE-2015-4000 when using (START)TLS? I
> couldn't find a bug report for Thunderbird (only bug 1175816, but this seems
> to be invalid/duplicate).
I assume TB will pick up the fix for this in their 38.1 build that is being built right now. They are included in the advisory but haven't shipped it yet.
Updated•9 years ago
Summary: NSS accepts export-length DHE keys with regular DHE cipher suites → NSS accepts export-length DHE keys with regular DHE cipher suites ("Logjam")
Updated•9 years ago
Flags: needinfo?(lhenry)