2.17 KB, patch
Robert Relyea: review+
Wan-Teh Chang: superreview+
Mike Schroepfer: approval1.8.1+
2.30 KB, patch
Nelson Bolyard (seldom reads bugmail): review+
This bug is similar to, but not the same as, bug 350640. There are many paths through NSS for RSA signature verification. Some of them supply the expected "DigestInfo" as input to the verification function, which compares the entire DigestInfo with the one extracted from the "decrypted" signature. But most of them don't work that way. Instead, most paths parse the DigestInfo from the decrypted signature into its component parts, and then examine (some of) them.

The DigestInfo contains a "DigestAlgorithm" and a "Digest" (hash value). The DigestAlgorithm contains an algorithm identifier OID and an optional octet (byte) string of "parameters". We always check the hash value. Some paths check the algorithm ID OID, to ensure that it is at least a known OID. None check the parameters. This gives an attacker room to create forgeries.

We need to: a) always check that the OID is a known valid OID. It would be best if we checked it against a single expected value, or at least against a small list of expected values. And b) always check the parameters. I *think* they should be empty (not sure), but we mustn't allow them to be of arbitrary size (e.g. 100 bytes) and contain "random" cruft. If we give an attacker a place to hide a hundred bytes of "random" stuff, we are vulnerable to forged signatures.

I don't yet know how easy this is to exploit, so I cannot yet rate this vulnerability as being critical or even high. If this is exploitable with ordinary computer resources (no supercomputers required), then it is a "high" vulnerability.
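The checks described here can be sketched in miniature. This is an illustrative parser, not NSS code: the SHA-1 and SHA-256 OID encodings are the real DER values, but the helper names, the two-OID whitelist, and the short-form-length restriction are simplifications of ours:

```python
# Sketch of the two checks the bug asks for: the DigestAlgorithm OID
# must be on a known-good list, and the optional parameters must be
# either absent or an encoded NULL (05 00) -- nothing larger.

SHA1_OID = bytes.fromhex("2b0e03021a")            # 1.3.14.3.2.26
SHA256_OID = bytes.fromhex("608648016503040201")  # 2.16.840.1.101.3.4.2.1
KNOWN_OIDS = {SHA1_OID, SHA256_OID}

def read_tlv(buf, off):
    """Read one DER tag-length-value; short-form lengths only."""
    tag, length = buf[off], buf[off + 1]
    assert length < 128, "short-form lengths only in this sketch"
    start = off + 2
    return tag, buf[start:start + length], start + length

def check_digest_info(der):
    """Parse a DigestInfo and reject unknown OIDs or bogus parameters."""
    tag, seq, end = read_tlv(der, 0)
    assert tag == 0x30 and end == len(der), "bad outer SEQUENCE"
    tag, algid, off = read_tlv(seq, 0)
    assert tag == 0x30, "bad AlgorithmIdentifier"
    tag, digest, off2 = read_tlv(seq, off)
    assert tag == 0x04 and off2 == len(seq), "trailing bytes after digest"
    otag, oid, poff = read_tlv(algid, 0)
    assert otag == 0x06 and oid in KNOWN_OIDS, "unknown digest OID"
    params = algid[poff:]
    # parameters must be absent, or the 2-byte encoded NULL
    assert params in (b"", b"\x05\x00"), "bad parameters"
    return oid, digest
```

A conforming SHA-1 DigestInfo (with NULL parameters) passes; the same structure with 2 bytes of cruft in the parameters field is rejected.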
Created attachment 236478 [details] [diff] [review] patch v1 This patch addresses the vulnerability in all places I found it, except one, which is RSA_CheckSign, called only from NSC_Verify. This function does not verify that the caller is passing in the entire DigestInfo. So the caller could pass in just the raw hash value, without the AlgorithmID and parameters, and verification could still succeed, ignoring the signature's values for those fields. I need to investigate the formal definition of C_Verify to see if it is permitted to use less than the full DER-encoded DigestInfo.
A certificate that exploits this vulnerability has just been attached to bug 350640. The patch attached above stops that exploit.
The URL for the zip file that contains the exploit cert is https://bugzilla.mozilla.org/attachment.cgi?id=236479 It's attachment 236479 [details]
Comment on attachment 236478 [details] [diff] [review] patch v1 Asking for review. Julien is on vacation. Reviewer, please satisfy yourself that the new restriction on algorithm parameters is acceptable.
I still dislike the algorithm flexibility as part of signature interpretation. The certifying key has an attribute stating which algorithm it signs with, afaik. It should be checked that the subject at hand indeed matches that algorithm. A way to avoid interpreting anything alien is to pad out the expected hash according to the CA cert and compare the entirety of the padded image against the one that results from raising the signature to the public exponent. And yes, constructing a fake cert to tickle this bug requires no searching or heavy computation. Instant gratification, once one gets past the DER encoding intricacies.
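The encode-then-compare approach proposed here can be sketched as follows. This is our illustration, not Marius's or NSS's code; the SHA-1 DigestInfo prefix is the standard DER encoding, but the function names are hypothetical:

```python
import hashlib

# Standard DER prefix of a SHA-1 DigestInfo: AlgorithmIdentifier with
# NULL parameters, then the OCTET STRING header for a 20-byte hash.
SHA1_PREFIX = bytes.fromhex("3021300906052b0e03021a05000414")

def emsa_pkcs1_v15(message, em_len):
    """Build the full expected PKCS #1 v1.5 block: 00 01 FF..FF 00 || T."""
    t = SHA1_PREFIX + hashlib.sha1(message).digest()
    ps = b"\xff" * (em_len - len(t) - 3)
    return b"\x00\x01" + ps + b"\x00" + t

def verify(message, signature, n, e, em_len):
    """Raise the signature to e mod n, then compare the whole block."""
    em = pow(int.from_bytes(signature, "big"), e, n).to_bytes(em_len, "big")
    # Every byte is accounted for: there is no parsing step whose
    # leftover bytes an attacker could abuse.
    return em == emsa_pkcs1_v15(message, em_len)
```

Because the verifier constructs the entire padded image itself, there is no DigestInfo parsing at all, and hence no place to hide cruft.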
Comment on attachment 236478 [details] [diff] [review] patch v1 r+ = relyea. I like this way of checking, at least unless we can get a lot of testing on other ways. I seem to remember there were issues where some toolkits produced signatures with an explicitly encoded NULL for the parameters and some left the parameter field empty.
Marius, did you really mean 'key'? Most public keys (in particular RSA public keys) have no way of restricting which hash algorithms you may use with the key. This is why the hash OID is encoded in the signature blob. Most X.509 signatures have an algorithm parameter that specifies the hash and the signing algorithm. This value is checked against the one specified by this code before the hash is even processed. If the values don't match, the code fails. (Note it's important to make this check one way or the other: either the OID is passed in or the OID is passed out.) Older programs had cases where they had raw RSA signatures and fetched the hash from that signature. NSS maintains binary compatibility with those programs.
I was referring to the attributes that come with a self-signed cert (i.e. the CA roots?), which seem to include an algorithm specifier. I'm by no means an expert on what gets interpreted how, but to me that would be the correct, unambiguous way to go: the algorithm/interpretation gets specified along with the public key. I understand the legacy problem and the history behind optional becoming mandatory NULL, etc. That said, I know of no way around the patch as proposed, given the limited set of tolerated algorithms. At some point in the future, when another algorithm is added to the trusted set, hopefully this co-dependency gets taken into account.
(In reply to comment #1)
> This patch addresses the vulnerability, in all places I found it, except 1,
> which is RSA_CheckSign, called only from NSC_Verify. This function does not
> verify that the caller is passing in the entire DigestInfo.

I was mistaken. It does check. So patch v1 above addresses all the potential vulnerabilities I've found so far.
This bug is just as serious as bug 350640 and should block the same releases as that bug did.
Comment on attachment 236478 [details] [diff] [review] patch v1 r=wtc. These checks should be centralized in SGN_DecodeDigestInfo so that all its callers benefit automatically. PKCS #1 v2.1 only allows MD2, MD5, SHA-1, SHA-224 (added later), SHA-256, SHA-384, and SHA-512 as the digest algorithm in DigestInfo, and the parameters for these digest algorithms are all NULL. So the checks should only allow these 7 digest algorithms and should only allow NULL algorithm parameters. The maximum algorithm parameters length of 2 is a magic number.
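For reference, the seven digest algorithms named above, by dotted OID — a sketch of the whitelist a centralized check in SGN_DecodeDigestInfo could consult. The table and helper are ours; the OID values are the standard assignments:

```python
# The seven DigestInfo hash algorithms PKCS #1 v2.1 permits.
PKCS1_DIGEST_OIDS = {
    "1.2.840.113549.2.2":     "MD2",
    "1.2.840.113549.2.5":     "MD5",
    "1.3.14.3.2.26":          "SHA-1",
    "2.16.840.1.101.3.4.2.4": "SHA-224",
    "2.16.840.1.101.3.4.2.1": "SHA-256",
    "2.16.840.1.101.3.4.2.2": "SHA-384",
    "2.16.840.1.101.3.4.2.3": "SHA-512",
}

def digest_alg_allowed(oid_dotted):
    """Reject any digest algorithm outside the PKCS #1 v2.1 set."""
    return oid_dotted in PKCS1_DIGEST_OIDS
```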
Hi Marius, The only hash specifier in a certificate is the hash specifier for the actual signature on the certificate itself. This mirrors the identifier inside the RSA signature (sort of: the outer OID specifies something like RSA-with-SHA1, while the inner OID just specifies SHA1). This does not say anything about what the key is allowed to sign. Once you go up the certificate chain to intermediates, it may not be possible for the hash to match the key that's verified (the signer's certificate may be RSA with an MD2 signature while the key being verified is DSA, which can only use SHA1). bob
Comment on attachment 236478 [details] [diff] [review] patch v1

Checkin comment: Improve the validity tests on the AlgorithmID when verifying a PKCS#1 v1.5 RSA signature. Bug 351079. r=rrelyea,wtchang.

On trunk:
Checking in cryptohi/secvfy.c; new revision: 1.19; previous revision: 1.18
Checking in softoken/pkcs11c.c; new revision: 1.90; previous revision: 1.89

On NSS_3_11_BRANCH:
cryptohi/secvfy.c; new revision: 22.214.171.124; previous revision: 126.96.36.199
softoken/pkcs11c.c; new revision: 188.8.131.52; previous revision: 184.108.40.206

Regarding comment 12: I think we want to continue to keep decoding separate from validating. I believe we want to be able to decode the digestInfo, even if we subsequently decide that the decoded value is not acceptable. Testing for a valid/known OID, as opposed to testing for one of a smaller set of OIDs, allows us to add new hash OIDs without having to modify this code again. I agree that 2 is a magic number, designed to be just large enough to permit decoded NULLs. Perhaps in a subsequent patch to the trunk, we can define a symbol for it, perhaps in util/secoidt.h, perhaps

#define SEC_ALGORITHM_ID_PARAMETERS_BYTES_MAX 2

but I wanted to get this reviewed patch checked in now, in case this bug is blocking FF2.
Then we should check that di->digestAlgorithm.parameters is a decoded NULL. The lesson we should learn from these two bugs is that every byte in the RSA-decrypted block should be accounted for.
(In reply to comment #11) Nelson, all good variants to test indeed. Also, the NULL parameter with non-null data is an interesting one. Unittests for the DER signature blob parser should be able to take plaintext DER blobs, which are 'easily' constructed for the variants. Actually constructing the fake signatures is the hard way to tickle that code. Perhaps a testing root CA with public exponent 1 goes a long way; I'm not familiar at all with the Mozilla testing framework. I'll try to find some time later today and prep these cube roots for the 2048 bit case, just for kicks. If someone would care to list the input DER encodings for the variants, that'd save me some hexediting time.
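The arithmetic behind such an exponent-3 forgery can be illustrated in a few lines. This is our sketch of the underlying math, not Marius's actual tooling: when a verifier ignores everything after the DigestInfo, an attacker only needs an integer whose cube begins with the right prefix, and with a 2048-bit modulus there is far more low-order slack than the gap (roughly 3·s²) between consecutive cubes:

```python
# Why unchecked trailing bytes enable e = 3 forgeries: find an s whose
# cube starts with the padding + DigestInfo bytes and hides the rest in
# the "garbage" region a lax verifier never inspects. No heavy
# computation needed -- just an integer cube root.

def icbrt(n):
    """Integer cube root via binary search: smallest s with s**3 >= n."""
    lo, hi = 0, 1 << ((n.bit_length() // 3) + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

def forge_prefix(prefix_bytes, total_bytes):
    """Return s such that s**3 begins with prefix_bytes (big-endian)."""
    slack_bits = (total_bytes - len(prefix_bytes)) * 8
    target = int.from_bytes(prefix_bytes, "big") << slack_bits
    s = icbrt(target)
    # Succeeds only when the garbage region exceeds the cube gap.
    assert s ** 3 < target + (1 << slack_bits), "not enough slack"
    return s
```

With a 2048-bit (256-byte) modulus and a ~46-byte prefix (00 01, minimal FF padding, 00, SHA-1 DigestInfo), the ~210-byte slack is ample, so a valid-looking "signature" falls out instantly.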
(In reply to comment #15) > The lesson we should learn from these two bugs is that > every byte in the RSA-decrypted block should be accounted for. I believe the lesson is that there should be no "large" blocks of bytes that are unaccounted/uncontrolled. I have defined "large" as exceeding two bytes.
Marius, I'll contact you outside of bugzilla about the test certs. I can tell by mere code inspection whether mozilla/NSS is vulnerable to any of these. Test certs for the other cases would facilitate "black box" testing of numerous implementations.
Comment on attachment 236478 [details] [diff] [review] patch v1 Bob Relyea recommends we add this fix to Mozilla as soon as possible. I just filed bug 351494 to update NSS on Mozilla's trunk to a snapshot that includes Nelson's patch. I will check in today so we'll have it on the trunk ASAP. For branches, including MOZILLA_1_8_BRANCH, Bob is ok to take the isolated patch only.
Comment on attachment 236478 [details] [diff] [review] patch v1 This patch applies fine to MOZILLA_1_8_BRANCH, which will be the base for Firefox 2. This patch does NOT apply to MOZILLA_1_8_0_BRANCH, which is the base for Firefox 1.5 / Seamonkey 1.0. Removing 1.8.0.* approval. If we want a fix there, backporting to NSS 3.10.2 is required.
Created attachment 236905 [details] [diff] [review] moz 1.8.0 branch version of patch v1 This patch applies cleanly to 1.8.0 branch. There are small differences in the context, when comparing with the trunk patch. Nelson, is this patch still correct, despite the context difference?
Comment on attachment 236905 [details] [diff] [review] moz 1.8.0 branch version of patch v1 Yes, the only contextual difference I see is in the digest length checking, which is equivalent between the two implementations (though it looks different).
Comment on attachment 236478 [details] [diff] [review] patch v1 a=schrep for followup to crit ssl security bug on 18 branch
Patch v1 landed on Mozilla trunk as part of the checkin for bug 351494. Patch v1 landed on Mozilla 1.8 branch.
Have these test certs been added to the NSS test suite? If not, what's the best way to help that happen?
Comment on attachment 236905 [details] [diff] [review] moz 1.8.0 branch version of patch v1 approved for 1.8.0 branch, a=dveditz for drivers.
(In reply to comment #25) > Have these test certs been added to the NSS test suite? By "these test cert", I gather you mean the two certs attached to bug 350640. Marius and I are working together to create certs for all the test cases listed above, and more.
Created attachment 237001 [details] multiple flavors of forged certs for localhost 6 flavors of ASN1 signature blobs as forged signatures, for the 2048 bit modulus, exponent 3 case. See testcases.README for details. testcase-A.crt is the original test cert with garbage after proper sig blob, testcase-B.crt is the 2nd one with the additional parameter inside the sig blob. And then some variations. Only A & B fool my unpatched firefox & openssl but other implementations/configurations might be less lucky. I'll add some more. Suggestions welcome.
Comment on attachment 236478 [details] [diff] [review] patch v1 The ASN.1 NULL is encoded as two octets: 0x05 (tag) 0x00 (length). This explains the magic number 2 in this patch.

I found that Microsoft Outlook Express checks that the digest algorithm identifier's parameters field is an encoded NULL. I created a signature in which the digest algorithm identifier's parameters field is 0x38 0x00. Outlook Express says the signature is invalid, while Thunderbird says the signature is valid.

While researching the parameters for the digest algorithms, I found that PKCS #1 seems to require a NULL parameters field to be present in the digest algorithm identifier, even though in general an AlgorithmIdentifier's parameters field is optional. The following comment in SGN_CreateDigestInfo seems to support this:

/*
 * PKCS #1 specifies that the AlgorithmID must have a NULL parameter
 * (as opposed to no parameter at all).
 */

But I found that Outlook Express also allows a digest algorithm identifier with a missing parameters field.
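The three parameter encodings under discussion can be written out as concrete DER test vectors. A small sketch using SHA-1's OID; the helper name is ours:

```python
# The three digest-AlgorithmIdentifier encodings the thread discusses:
# explicit NULL parameters (05 00), absent parameters, and the bogus
# 38 00 parameters fed to Outlook Express above.

def der_tlv(tag, content):
    """Encode one DER tag-length-value; short-form length only."""
    assert len(content) < 128
    return bytes([tag, len(content)]) + content

SHA1_OID_TLV = der_tlv(0x06, bytes.fromhex("2b0e03021a"))  # 1.3.14.3.2.26

alg_null   = der_tlv(0x30, SHA1_OID_TLV + b"\x05\x00")  # explicit NULL
alg_absent = der_tlv(0x30, SHA1_OID_TLV)                # params omitted
alg_bogus  = der_tlv(0x30, SHA1_OID_TLV + b"\x38\x00")  # bad 38 00 params
```

A strict verifier should accept the first two (for interoperability) and reject the third.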
In reply to comment 29: Yes, the magic length of 2 is to permit optional NULLs. (I thought that was known.) Some implementations don't supply those NULLs in the signatures they create. They rely on the fact that algorithm parameters are declared optional, and they expect other implementations to honor that declaration. NSS interoperates with both flavors of parameters: explicit NULL, and omitted. I did not want the fix for this bug to break any valid signatures accepted by NSS before the fix.

Yes, we could write code that more strictly permits only a parameter length of zero, or a parameter length of 2 with value 0x05 0x00. I wouldn't oppose such an implementation, but I believe it is unnecessary. I didn't do that because I have been told (by an unnamed cryptographer :) that 2 bytes is not enough space to be exploitable in the manner of the forgeries being considered here (not enough to permit the construction of a perfect cube). Perhaps Marius would like to comment on that.

Now that Marius has developed additional test forgeries for the other test cases that seemed like potentials, and NSS has been shown not to be vulnerable to them (only to case A, subject of bug 350640, and case B), I feel very confident that Mozilla can proceed to respin FF2 based on the existing patches for these two bugs. (Many thanks, Marius!)

Has this patch been checked into enough branches that this bug can now be marked resolved/fixed?
Can those who have been investigating this bug please help QA verify this fix with the latest 220.127.116.11 candidate builds (http://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/18.104.22.168-candidates/rc5/) and 1.8.1 nightly builds? We have had some trouble setting up a server to serve these certs. Please replace the "fixed1.8.x" keywords with "verified1.8.x" once you have completed your tests. Thanks!

Nelson: If everything that is needed has landed on the trunk, you can resolve this bug fixed. The "fixed" keywords already indicate that the correct patches have landed on those branches.
Thanks, Jay. I will contact you offline about helping you with test servers.
Tested 9 invalid certs (testcase-A thru testcase-I) on Fedora Core Release 5. Each cert was reported as an error by the browser and rejected.

1.8.1b2 2006090804 verified
1.8.1 22.214.171.124 20060906 verified 126.96.36.199
I think one of our test certs may be defective. :( If so, then testing with that cert gives us a false sense of security. I am investigating.
In reply to comment 34, it appears that the test cert was not defective.
I guess this bug is now fixed in all the places it needs to be, so I'm clearing the flags that request blocking of aviary1.0.9, 1.7.14, 1.8.1, and 1.9.