Closed Bug 596692 Opened 14 years ago Closed 12 years ago

NSS claims sec_error_invalid_key for 16K RSA keys

Categories

(NSS :: Libraries, defect, P2)

3.12.2
defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: eddy_nigg, Assigned: wtc)

References

Details

NSS wrongly claims sec_error_invalid_key for an RSA 16K key. It works with Opera, though.

https://www.ssllabs.com/ssldb/ also has no problem. There is probably a hard limit somewhere incorrectly set.
Also works with MSIE 8.0.7600.16385.
If somebody can point me to the limitation in the code, I can submit a patch for it. It would probably also be a good idea to return a different error code when the key size is too big for NSS.
This restriction is intentional. It's a simple #define. The code itself has no problems with larger key sizes.

Outrageously large key sizes can be used to mount denial-of-service attacks (clients sending long chains of certs with very large keys as client-auth certs to servers).

We can discuss where that limit should be (it's currently set to 8K). The number was set a number of years ago when processors were quite a bit slower.

I would like to ask, though, why in the world you are bothering with such large keys. 8K is bigger than anything NIST even defines.

bob
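
(For illustration only: a minimal sketch of the pattern Bob describes, a compile-time cap that the low-level code checks before doing any math. The macro and function names here are made up for the example and are not the actual FreeBL identifiers; only the error code is taken from this bug.)

#include <seccomon.h>   /* SECStatus */
#include <secerr.h>     /* SEC_ERROR_INVALID_KEY */
#include <secport.h>    /* PORT_SetError */

/* Illustrative 8K ceiling, matching the limit discussed above. */
#define EXAMPLE_RSA_MAX_MODULUS_BITS 8192

static SECStatus
example_check_modulus_size(unsigned int modulusBits)
{
    if (modulusBits > EXAMPLE_RSA_MAX_MODULUS_BITS) {
        /* An over-limit key is rejected up front; this is the path that
         * surfaces to callers as SEC_ERROR_INVALID_KEY. */
        PORT_SetError(SEC_ERROR_INVALID_KEY);
        return SECFailure;
    }
    return SECSuccess;
}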
I can't tell a user not to use a 16K key; I believe that's still well within limits, far from being a DoS. Maybe that limit should be set much higher, especially since most other browsers support it (except Firefox and Chrome).
I wish to use a 16K key to protect medical data being transmitted over public networks. 

2048 bits might have sufficed, although I see no reason not to put technology to use where security is in high demand.
(In reply to comment #5)
> I wish to use a 16K key to protect medical data being transmitted over public
> networks.

So, are you using RSA for *direct* encryption of your data...? Otherwise this just looks like another case of "security theater" - it's the strength of your symmetric (aka session) key that really matters here.
No, I am not using any encryption on the data itself, since it will be exposed in a human-readable format inside the web browser in the guise of a web application/SaaS.

I am aware that the weakest link in the chain is most probably the client, but I still do wish to use the strongest encryption available today.

And yes, I do admit that using a 16K key is mainly due to marketing, but considering that today's commodity hardware amply exceeds the requirements for this cryptographic application, I cannot comprehend why this artificial limitation is still in place.
(In reply to comment #4)
> I can't tell a user not to use a 16K key,
OK, but we can. :)
Well, look at the opening comment: NSS claims that it's an invalid key. Let's start with the fact that the key is valid. Whether you want to limit at 8K or not, that's probably an unfortunate decision, but users will judge it eventually.
To be completely honest, I will be very disappointed if this artificial limitation is not removed, since I'd have to tell my clients that, in order to use the service at maximum security, they would have to choose some other browser.

After a decade of education, nine out of ten clients use Firefox. They had to be taught to choose a browser that respects the web standards, is feature-complete, and is customizable.

My clients will not be particularly pleased to be told that Firefox is not fit for the job.
We have not yet said it is being removed, but I have not yet seen *ANY* justification.

You are asking for a 16K key. Why not ask for 32K, or 64K? The define is the only thing stopping that. The performance cost of a 16K key is 4x the cost of an 8K key, 16x the cost of a 4K key, and a whopping 256x the cost of the standard 1K key.

The 8K key is 192 bits in strength, the strongest that NIST actually acknowledges.

If you want us to support 16K keys, you need to make a solid technical argument that you are getting benefit for the cost. Saying 'other browsers support it' is not sufficient, as we already told you why we limit key size. It should have been clear that we are looking at strength-versus-performance arguments. I may be able to be swayed.

> > I can't tell a user not to use a 16K key,

Actually you can. If the customer does not know *WHY* he needs that key size, you need to help him. If I make a request for a 512K RSA key, are you going to accept that?

So, I am willing to hear arguments for supporting a stronger key size, but you had better have a stronger position than "you need to support 16K RSA keys for SSL server certs." You will have to convince me why, when the strength of the connection is based not just on the asymmetric cipher but even more on the weak hashing (currently only 80 bits of strength). And even more, why the weak link isn't the intermediate key that signed it (tell me that is at *least* a 4K key!).

Anyway, go do your research, come back with your numbers and then we'll talk.
> And yes, I do admit that using a 16K key is mainly due to marketing, but
> considering that today's commodity hardware amply exceeds the requirements for
> this cryptographic application, I cannot comprehend why this artificial
> limitation is still in place.

Because there are denial-of-service attacks when you allow larger keys. You need to make a case that the marketing value of 16K (when your competitor will slam you for security naivety because you are providing no benefit over an 8K key) is worth the DoS cost.

The answer may be yes, but I want to see *real* arguments with *real* numbers.

BTW, there will be no browser UI showing that your site is any more secure than an 8K site.

bob
Above a certain key size (which differs by algorithm), you don't get 
increased security, you get only decreased efficiency.  People really 
interested in security don't have to play key size comparison games 
any more than they must play penis size comparison games.  

Please don't tell us you want to brag about your size.
The requested case has not materialized. If there is a future need for 16K keys with the appropriate justification, we can revisit this issue.

bob
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → WONTFIX
I would say, compatibility with other cryptography stacks. Specifically, the Microsoft CryptoAPI (the rsaenh.dll default CSP) supports key sizes of up to 16384 bits, and has supported that length for a decade and a half.

I am not advocating moving to larger key sizes, such as 16K, but merely advancing an argument for why it makes reasonable engineering sense to remove (or lift) an arbitrary limit. On a modern processor (Intel Core 2 Quad) you can definitely "feel" when a 16K key is being used, as it takes about 0.75-1.2 seconds to perform a private-key op. But this is not so large that it slows the entire OS to a crawl. (This is also a user-experience argument: some users want to "feel" like their computer is doing some heavy lifting. Fry some eggs.)
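
(If you want to reproduce that "feel" measurement, here is a rough, self-contained timing sketch using public NSS/NSPR calls - my own illustration, not code from this bug. It raw-signs a dummy 32-byte digest with PKCS#1 v1.5 and omits error-path cleanup; note that generating the 16K-bit key pair itself can take minutes.)

#include <stdio.h>
#include <string.h>
#include <nss.h>
#include <pk11pub.h>
#include <keyhi.h>
#include <prtime.h>

int main(void)
{
    if (NSS_NoDB_Init(NULL) != SECSuccess)
        return 1;

    /* Generate a 16384-bit RSA key pair in the internal (software) slot. */
    PK11SlotInfo *slot = PK11_GetInternalKeySlot();
    PK11RSAGenParams rsaParams = { 16384, 0x10001 };  /* modulus bits, e = 65537 */
    SECKEYPublicKey *pub = NULL;
    SECKEYPrivateKey *priv =
        PK11_GenerateKeyPair(slot, CKM_RSA_PKCS_KEY_PAIR_GEN, &rsaParams,
                             &pub, PR_FALSE /* perm */, PR_FALSE /* sensitive */, NULL);
    if (!priv)
        return 1;

    unsigned char digest[32];               /* stand-in for a SHA-256 hash */
    memset(digest, 0xab, sizeof digest);
    SECItem data = { siBuffer, digest, sizeof digest };

    unsigned char sigbuf[16384 / 8];
    SECItem sig = { siBuffer, sigbuf, sizeof sigbuf };

    /* Time one private-key operation (PKCS#1 v1.5 signature). */
    PRTime start = PR_Now();
    SECStatus rv = PK11_Sign(priv, &sig, &data);
    PRTime elapsed = PR_Now() - start;

    printf("private-key op: %s, %.3f s\n",
           rv == SECSuccess ? "ok" : "failed",
           (double)elapsed / PR_USEC_PER_SEC);

    SECKEY_DestroyPrivateKey(priv);
    SECKEY_DestroyPublicKey(pub);
    PK11_FreeSlot(slot);
    NSS_Shutdown();
    return 0;
}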
Sean is right; compatibility with other browsers should be considered.

The marketing argument might be a weak one at first glance, but it is valid nonetheless. If clients demand strong cryptography and providing such cryptography is technically feasible, there should be no artificial limitation preventing its use.

Furthermore, I advocate lifting this artificial limit because my client specifically demanded to use Firefox, since all their workstations have it installed as the default browser.

I have to admit that the fault is on me; I should have tested key sizes with Firefox prior to generating the certificate for my client.
FreeBL isn't the place to enforce such a restriction. The application using Softoken/FreeBL is the one that should be responsible for restricting the key lengths, key chain sizes, etc. that it supports--especially since it might be using a PKCS#11 module that isn't Softoken and thus doesn't implement this restriction. If the DoS attack is that the client will upload a long certificate chain with huge keys in a client certificate message, then the best place to guard against that is the code that processes the client certificate message.

Sean has a good point in that it doesn't make sense that Firefox has a limit lower than Internet Explorer and other browsers, in order to protect against DoS in server products that are only related to Firefox in that they happen to use the same PKCS#11 implementation. In order to improve our OS keychain integration in Firefox, we need to be able to support all the certificates the user has installed in the native key store. One option is simply to use the native OS APIs (e.g. CAPI/CNG) instead of the PKCS#11 token to sign client verify messages in Firefox. But, even if we did so, it would be strange to have a different key size limit in Windows vs. Linux.

Also, note that RSA 15360 is about the same strength--approximately a 256-bit security level--as the P-521 ECC curve, which we already support. A developer who is told to develop a portable application at a 256-bit security level will be tempted to use RSA 15360 to work around the many servers that do not implement ECC. It would be very inefficient but it would meet security criteria that aren't unrealistic.
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
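
(To make the "enforce it above the token" idea concrete: a minimal sketch, assuming NSS's public cert/key APIs, of how code processing a client certificate message could apply its own ceiling instead of relying on Softoken. The 16384-bit policy value and the function name are mine, not anything in NSS.)

#include <cert.h>
#include <keyhi.h>
#include <secerr.h>
#include <secport.h>

/* Application policy, not a FreeBL constant. */
static const unsigned int kMaxPeerRsaModulusBits = 16384;

static SECStatus
check_peer_rsa_key_size(CERTCertificate *cert)
{
    SECKEYPublicKey *pub = CERT_ExtractPublicKey(cert);
    if (!pub)
        return SECFailure;

    SECStatus rv = SECSuccess;
    if (SECKEY_GetPublicKeyType(pub) == rsaKey &&
        SECKEY_PublicKeyStrengthInBits(pub) > kMaxPeerRsaModulusBits) {
        /* Reject before doing any expensive signature verification. */
        PORT_SetError(SEC_ERROR_INVALID_KEY);  /* ideally a more specific error */
        rv = SECFailure;
    }
    SECKEY_DestroyPublicKey(pub);
    return rv;
}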
> Sean has a good point in that it doesn't make sense that Firefox has a 
> limit lower than Internet Explorer and other browsers

It's not an argument if the other browsers have no limit. If there were thirty guys pounding on the door saying their certs don't work (particularly their intermediate certs don't work), then I would say we may have an interoperability problem.

> FreeBL isn't the place to enforce such a restriction.

It's extremely common for those limits to live at the PKCS #11 layer. In fact, PKCS #11 specifies exactly how it's supposed to communicate those limits.

> But, even if we did so, it would be strange to have a different key size
> limit in Windows vs. Linux.

No stranger than the algorithm differences you'll find on those platforms. Once you take the native step, all sorts of these anomalies will appear. (In fact, if you are on native Windows today, your security parameters vary depending on which version of Windows you are running: IE 8 on Windows XP can't do the same security that IE 8 on Windows 7 does.)

> Also, note that RSA 15360 is about the same strength--approximately a 256-bit
> security level--as the P-521 ECC curve, which we already support. A developer
> who is told to develop a portable application at a 256-bit security level will
> be tempted to use RSA 15360 to work around the many servers that do not
> implement ECC. It would be very inefficient but it would meet security
> criteria

This is the kind of discussion I welcome. It's the first reasonable rationale I've seen for upping the limit. Is this a real scenario?

bob
(In reply to comment #18)
> It's not an argument if the other browsers have no limit. If there were thirty
> guys pounding on the door saying their certs don't work (particularly their
> intermediate certs don't work), then I would say we may have an
> interoperability problem.

I think it is important to be able to say we can import any (client) cert exported from the Windows or MacOS keychains, with as few caveats as possible, as long as we have our own separate keychains. A quick check shows that Apple's keychain, OpenSSL defaults, CryptoAPI/CNG, and GNUTLS/libgcrypt all allow RSA 16384, AFAICT. I think we should avoid making the judgement call for the user as far as whether such large keys are suitable--especially at such a low level. 

> > FreeBL isn't the place to enforce such a restriction.
> 
> It's extremely common for those limits to live at the PKCS #11 layer. In fact
> PKCS #11 specifies exactly how it's supposed to communicate those limits.

Many PKCS#11 tokens have (even stricter) limits and PKCS#11 allows them to do so. But, if this code in FreeBL is the only thing that is mitigating the resource consumption attacks you mentioned, then the rest of NSS should be fixed to not rely on it, since Softoken might not be the PKCS#11 module in use. Plus, we shouldn't be doing RSA calculations using an RSA key that the user doesn't already trust; it doesn't matter if the signature is valid if we don't trust the signer, and if we're worried about resource consumption attacks, we should mitigate them even at smaller key sizes. (Lots of people are claiming even 2048-bit RSA is too expensive to do on servers.)

> This is the kind of discussion I welcome. It's the first reasonable rationale
> I've seen for upping the limit. Is this a real scenario?

I don't know. My point is just that there is no way to get to a 256-bit security level on Fedora/RHEL using NSS right now, AFAICT. (I know some would say that 192 is the max you can get with current symmetric ciphers anyway). Again, I don't think this is the right layer to make judgement calls regarding key size vs. performance tradeoffs for the user. I think we should enable maximum interoperability unless such interoperability would be harmful. And, AFAICT, such interoperability is harmful only because of problems in higher layers that should be fixed (if present at all).

I would think that for Fedora/RHEL, you would want the system NSS to match (at least) the limits of OpenSSL, in order to increase participation in the crypto consolidation project, and to otherwise minimize the number of products that choose to ship their own crypto code.
> I think it is important to be able to say we can import any (client) cert
> exported from the Windows or MacOS keychains,

Do you really want to support the union of all the algorithms that Windows and MacOS might decide to implement? I think this bar is too high. What if Microsoft tries to push a new patent-encumbered format which everyone else ignores? What about key chains with broken algorithms (for a long time Microsoft accepted key chains with MD-4 as the hash)?

It's ok to use general principles to make decisions, but we need to verify that they are rational.

>  with as few caveats as possible,

16K is hardly a major caveat. If you could generate a 256K key on Microsoft, do you really think we should import it?


> I would think that for Fedora/RHEL, you would want the system NSS to match
> (at least) the limits of OpenSSL, in order to increase participation in the
> crypto

Only if those limits make sense. There are areas where there is missing overlap between OpenSSL and NSS. RSA key size is not one that has ever come to the fore, which is why I'm highly skeptical of the need. If people were really clamouring for this, I would expect to have heard about it by now.

> I don't know. My point is just that there is no way to get to a 256-bit
> security level on Fedora/RHEL using NSS right now, AFAICT.

That's really what I'm asking. It seems like SSL really can't be that case, particularly with the roots mostly 1K and 2K RSA keys. Is there a 256-bit use case? NIST defines one, but is it real? Everything I've heard is that NIST is really only looking at 128- and 192-bit security strengths. Maybe that's sufficient. In any case, this bug will get the most traction on discussions along these lines.
Offline, there was an important clarification about this bug: the bug isn't about removing the limit on RSA key sizes, but merely bumping the limit up to 16K bits.

Wan-Teh pointed out that some internal optimizations in FreeBL may depend on the fact that the key size limit is 8K bits. So, the FreeBL RSA code needs to be reviewed.

I filed bug 610029 to deal with the DoS issues that Robert pointed out.
Depends on: 610029
(In reply to comment #4)
> I can't tell a user not to use a 16K key; I believe that's still well within
> limits, far from being a DoS. Maybe that limit should be set much higher,
> especially since most other browsers support it (except Firefox and Chrome).


Eddy, does this help (while you wait): Use PGP with HTTPS. "There's a Plug-in for that" (TM)


Monkeysphere is a system for using the OpenPGP web of trust for authentication of HTTPS communications.

https://addons.mozilla.org/en-US/firefox/addon/125272/?src=collection&collection_id=26aa4d81-029d-8a7f-56d9-8b85087d4e18

Monkeysphere
http://web.monkeysphere.info/doc/user-ssh-advanced/

GNU Privacy Guard
http://www.gnupg.org/


Rob
Any progress on this bug?

Perhaps another thing to consider is whether the encrypted conversations can be recorded, and cracked 5 years from now? 10 years? 30 years? ...

I would not set a strict limit at all, but rather allow one to be set at runtime (e.g. defaulting to 16K), if possible. This would allow implementations to decide for themselves (e.g. make it a configuration option) what the maximum allowed key size is, depending on their processing power (for fear of DoS) and the level of future-proof privacy required.

PS: I'm not a crypto-expert (yet) and might be talking garbage. I just want this fixed as well so I don't have to go through all that trouble of running some Windows browser under Linux every time.
Like Bob, I would like to understand why this limitation is significant.

In the face of current analytical techniques, a 4096-bit RSA key offers
somewhere greater than 128 bits of effective security, and nobody
recommends keys > 16k even for very long term use (see 
http://www.keylength.com/en/3/ for instance). So, needing keys
in excess of 16k would require (a) some incredibly long-term
use case and (b) a major analytical breakthrough, in which case
all bets are off.

Moreover, as Bob observes, the integrity properties of the system are limited
to about 2048 bits at best because of the strength of the roots. And if
you're interested in long-term confidentiality you would be better served
to move to large ECC keys in RSA/ECDHE mode.
> I would not set a strict limit at all

The strict limit is not structural (that is, the underlying code does not have any limit on the size of key it can handle); it's a design choice. Handling an unlimited key size can expose one to DoS attacks where the attacker provides chains with massive keys and massive signatures. If the key size is large enough, you can just supply random data and keep a server quite busy for a very long time. We will likely never *NOT* have a limit.


> Perhaps another thing to consider is whether the encrypted conversations can 
> be recorded, and cracked 5 years from now? 10 years? 30 years? ...

The question then becomes: is 16K too small? I have no problem with moving it up if it provides more security. At this point, if you use an 8K key, the conversation you speak of is likely to be cracked by breaking the symmetric key that the conversation is encrypted in rather than the RSA key that the symmetric key is encrypted with.

In short, there will be no traction until the very coherent arguments I asked for over a year ago are made. There is no technical work or heavy coding that needs to be done to make this change; we just need a real justification. We haven't had one in over a year. Once one appears, we can look at bumping up the number.

bob
Well, I'd like to point out some facts:

1. Error in the argument: The DoS attack referred to makes sense only when one uses a large CLIENT certificate. This problem happens even when I have a 16K-bit SERVER certificate;

2. Has no effect on DoS attacks: If I'm going to attack a server with this DoS, I'll write code or use ANY browser available. Imposing this limitation creates inconvenience for users and adds no security at all for the web servers around the world;

3. Blocks good use of available technology: 16K-bit asymmetric cryptography is widely available. That's a fact. Making good use of it is a smart decision, precisely because WE DO NOT KNOW THE FUTURE. We are designing apps now that will be used for an undefined amount of time.

4. Who defines what is good enough: I don't know what the source is for comparing key lengths with penises (I found nothing on en.wikipedia.org/wiki/Penis), but the one who decides what is good enough is the one who knows what the data is and for how long it will remain relevant.

This restriction is causing good users and developers trouble; it is doing no good for the intended world-protection-against-16K-bit-client-certificate-DoS attack. It makes no sense. For those reasons the restriction should be changed to match the de facto 16K-bit public key size limit.
Additionally:

5. Root certificates: A good point was made about the root certificate's strength. It makes no sense to use 16K-bit or even 8K-bit keys if the CA chain contains even one 4K-bit key. However, the international root certificate chain is not the only one, and it's not the safest one.
This bug has been fixed by the patch in bug 636802 (attachment 631576).
The limit for RSA key size is now 16K bits.

Checking in blapit.h;
/cvsroot/mozilla/security/nss/lib/freebl/blapit.h,v  <--  blapit.h
new revision: 1.29; previous revision: 1.28
done
Assignee: nobody → wtc
Status: REOPENED → RESOLVED
Closed: 14 years ago → 12 years ago
Priority: -- → P2
Resolution: --- → FIXED
Summary: NSS claims sec_error_invalid_key for 16K keys → NSS claims sec_error_invalid_key for 16K RSA keys
Target Milestone: --- → 3.14
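
(For reference, my reconstruction of the resulting compile-time cap: to the best of my reading of current NSS sources the relevant constant lives in lib/freebl/blapit.h, but treat the exact macro name as an assumption rather than a quote of attachment 631576.)

/* Reconstruction, not the literal patch: the freebl limits header now caps
 * RSA moduli at 16K bits instead of the earlier 8K ceiling. */
#define RSA_MAX_MODULUS_BITS 16384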