Closed
Bug 115360
Opened 23 years ago
Closed 23 years ago
fips tests: pk12util failures in backwardcompatibility tests
Categories
(NSS :: Libraries, defect, P1)
Tracking
(Not tracked)
RESOLVED
FIXED
3.4
People
(Reporter: sonja.mirtitsch, Assigned: bugz)
References
Details
Attachments
(1 file, 2 obsolete files)
5.51 KB,
patch
The backward compatibility run uses the binaries in nss322/builds/20010820.1/y2sun2_Solaris8 and libraries that were built today. The regular tests (new tip binaries, new tip libraries) passed, as did the backward compatibility tests of 3.3.2 binaries against 3.2.2 libraries. 3.2.2 and 3.3.2 use DBM 1.55; the tip here uses DBM 1.6.

Export the certificate and key as a PKCS#12 file (pk12util -o) -- Failed
Import the certificate and key from the PKCS#12 file (pk12util -i) -- Failed
Export the certificate as a DER (certutil -L -r) -- Failed

The failures show up on AIX, Linux, Win2K, and Solaris and seem intermittent. Some machines show problems in the optimized build (Solaris 9, Win2K running the 95 build, Red Hat Linux 6.1), some in the debug build (Solaris 8 64-bit running the 32-bit build, Red Hat Linux 6.2), and some platforms have one machine that shows no problems (galileo, Solaris 8 64-bit) and others that do. I am repeating the test right now, but I expect it to show a random pattern of pass and fail results.

fips.sh: FIPS 140-1 Compliance Tests
===============================
fips.sh: List the FIPS module certificates -----------------
certutil -d ../fips -L
Certificate Name                     Trust Attributes
FIPS_PUB_140-1_Test_Certificate      Cu,Cu,Cu
p    Valid peer
P    Trusted peer (implies p)
c    Valid CA
T    Trusted CA to issue client certs (implies c)
C    Trusted CA to certs (only server certs for ssl) (implies c)
u    User cert
w    Send warning
fips.sh: List the FIPS module keys -------------------------
certutil -d ../fips -K -f ../tests.fipspw.26591
<0> FIPS_PUB_140-1_Test_Certificate
fips.sh: Attempt to list FIPS module keys with incorrect password
certutil -d ../fips -K -f /share/builds/mccrel/nss/nsstip/builds/20011214.1/booboo_Solaris8/mozilla/tests_results/security/bct/kentuckyderby.1/tests.fipsbadpw.26591
incorrect password entered at command line.
certutil: problem listing keys: security library failure.
fips.sh: Validate the certificate --------------------------
certutil -d ../fips -V -n FIPS_PUB_140-1_Test_Certificate -u SR -e -f ../tests.fipspw.26591
certutil: certificate is valid
fips.sh: Export the certificate and key as a PKCS#12 file --
pk12util -d ../fips -o fips140.p12 -n FIPS_PUB_140-1_Test_Certificate -w ../tests.fipsp12pw.26591 -k ../tests.fipspw.26591
pk12util: add cert and key failed: Unable to export. Private Key could not be located and exported.
fips.sh: List the FIPS module certificates -----------------
certutil -d ../fips -L
Certificate Name                     Trust Attributes
FIPS_PUB_140-1_Test_Certificate      Cu,Cu,Cu
p    Valid peer
P    Trusted peer (implies p)
c    Valid CA
T    Trusted CA to issue client certs (implies c)
C    Trusted CA to certs (only server certs for ssl) (implies c)
u    User cert
w    Send warning
fips.sh: Delete the certificate and key from the FIPS module
certutil -d ../fips -F -n FIPS_PUB_140-1_Test_Certificate -f ../tests.fipspw.26591
fips.sh: List the FIPS module certificates -----------------
certutil -d ../fips -L
Certificate Name                     Trust Attributes
p    Valid peer
P    Trusted peer (implies p)
c    Valid CA
T    Trusted CA to issue client certs (implies c)
C    Trusted CA to certs (only server certs for ssl) (implies c)
u    User cert
w    Send warning
fips.sh: List the FIPS module keys
certutil -d ../fips -K -f ../tests.fipspw.26591
fips.sh: Import the certificate and key from the PKCS#12 file
pk12util -d ../fips -i fips140.p12 -w ../tests.fipsp12pw.26591 -k ../tests.fipspw.26591
pk12util: Initialization failed: fips140.p12
fips.sh: List the FIPS module certificates -----------------
certutil -d ../fips -L
Certificate Name                     Trust Attributes
p    Valid peer
P    Trusted peer (implies p)
c    Valid CA
T    Trusted CA to issue client certs (implies c)
C    Trusted CA to certs (only server certs for ssl) (implies c)
u    User cert
w    Send warning
fips.sh: List the FIPS module keys --------------------------
certutil -d ../fips -K -f ../tests.fipspw.26591
fips.sh: Export the certificate as a DER-encoded file ------
certutil -d ../fips -L -n FIPS_PUB_140-1_Test_Certificate -r -o fips140.crt
certutil: could not find certificate named "FIPS_PUB_140-1_Test_Certificate": security library: bad database.
Assignee
Comment 1•23 years ago
Given that it is failing in FIPS (DSA certs), and that it is intermittent and seemingly random, I'll wager that this has something to do with leading 0's in the DSA public key used to index the private key in the database. Bob, can you think of any way that could cause the backward compatibility tests to fail? Are we handling the leading 0's differently now?
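The hypothesis in this comment can be sketched in a few lines (hypothetical values, not NSS code): if one code path indexes the key by the unsigned public value while another uses the DER-style signed form, the two byte streams differ whenever the top bit of the value is set.

```python
# Hypothetical DSA public value whose most significant bit is 1.
unsigned_y = bytes([0x8F] + [0x42] * 19)

# DER encodes INTEGERs as signed, so a decoder that does not strip the
# sign byte hands back the same value with a leading 0x00.
signed_y = b"\x00" + unsigned_y

# Any database index built from one form will not match the other.
print(unsigned_y == signed_y)  # False
```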
Reporter
Comment 2•23 years ago
Should I try to use DBM 1.55 and see if that makes a difference?
Comment 3•23 years ago
Yes, that would eliminate the new DBM 1.6 as a factor. My guess is that it won't make a difference.
Reporter
Comment 4•23 years ago
I changed the init files; tomorrow we should see the results.
Comment 5•23 years ago
I doubt that it's a database issue. Ian's probably right; it's probably a stupid leading-'0'-or-not issue in the DSA code (sigh). bob
Reporter
Comment 6•23 years ago
Built with DBM 1.55 today to eliminate the possibility that it was a factor, and saw the failures on Solaris 8 32-bit and OSF. I will remove building with a specific DBM from the init files now that it is switched in coreconf.
Comment 7•23 years ago
Ian, could you discuss with Bob how this bug should be fixed?
Assignee: wtc → ian.mcgreer
Priority: -- → P1
Target Milestone: --- → 3.4
Assignee
Comment 8•23 years ago
MPI treats bignums as unsigned. When a bignum is used to index an entry in the key database (modulus for RSA, public value for DSA), the byte stream that is used as an index originates from an unsigned bignum. There is never a leading 0. This is true of NSS 3.4 and NSS < 3.4, AFAIK. See bug 95135.

However, the public key of a DSA certificate is signed. I assume (without looking it up) that X.509 requires the encoded integers to be signed, therefore our DER engine adds a leading zero, if needed, to the public value.

The backward compatibility failure arose because we now index our key database by the SHA-1 hash of the public value (in the case of a version 3 key database, lookups are done via traversal, and the hashed public value is compared to the passed CKA_ID attribute). If the public value was extracted from a certificate and the first bit (of the bignum) is 1, the byte stream will have a leading 0. The computed hash will then not match the hash stored in (or computed from) the database key.

I believe the attached patch is the correct fix, though there are many ways to solve this problem. The situation I have described restricts the problem to 1) DSA keys, 2) with a leading 0, and 3) searched for by extracting the public value from the corresponding certificate. The workaround restricts itself to that specific case.

The open question is the function pk11_FindDSAPublicKeyAttribute <http://lxr.mozilla.org/mozilla/source/security/nss/lib/softoken/pkcs11u.c#506>. This function returns the CKA_ID value for a DSA public key object (as opposed to a DSA certificate). In every case I can think of, the public key object will have been extracted from a certificate, and thus will have the same leading 0 issue. I believe a similar fix is needed there.

cc'ing Nelson for comment/review.
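The failure mode described here can be illustrated with a short sketch (hypothetical values; this is not NSS code). Because the CKA_ID is the SHA-1 hash of the public value's byte stream, a single leading zero byte makes the lookup hash disagree with the stored one:

```python
import hashlib

# Hypothetical DSA public value whose first bit is 1.
pub_value = bytes([0x80] + [0x11] * 19)

# Hash stored in (or computed from) the version 3 key database:
# SHA-1 of the unsigned byte stream, no leading zero.
stored_id = hashlib.sha1(pub_value).digest()

# Public value as extracted from the certificate: the DER engine added
# a leading zero to keep the INTEGER positive.
from_cert = b"\x00" + pub_value
lookup_id = hashlib.sha1(from_cert).digest()

print(stored_id == lookup_id)  # False: the traversal never finds the key
```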
Assignee
Comment 9•23 years ago
Reporter
Updated•23 years ago
Severity: normal → blocker
Whiteboard: waiting for Nelson's review
Comment 10•23 years ago
We had an email discussion of our bignum libraries and "signed" vs. "unsigned" bignums. Wan-Teh asked me to post my comments on these subjects. Here is a brief summary.

Background: The ASN.1 DER and BER encoding rules for the INTEGER data type encode all integers as arrays of 8-bit bytes, most significant byte first. The number of bytes is explicitly stored as part of the encoding. The left-most (most significant) bit of the first (left-most, most significant) byte is interpreted as a sign bit. If it is a 1, then the INTEGER is understood to be a negative number, stored in twos-complement form. Generally, leading zero bytes are used only under two circumstances:

1. The INTEGER is zero, in which case it is encoded as a single zero byte.
2. The most significant bit of the INTEGER has value 2^n where n%8 == 7; in other words, when the number is expressed as a series of bytes, the most significant bit of the most significant byte is 1, but the number is positive.

There is a similar rule about leading 0xFF bytes for negative numbers, but in practice we do not encounter negative ASN.1 INTEGERs.

By contrast, in PKCS#11, all "Big Integers" are unsigned integers. That is, they are never negative. The most significant bit of the most significant byte is never interpreted as a sign bit, and there is never a need to produce or interpret twos-complement negative numbers in PKCS#11. Leading zero bytes are not ever supposed to occur in PKCS#11 Big Integers.

Proposal: In the PKCS#11 soft token, all "Big Integer" attributes should be output from the token conforming to PKCS#11's rules for Big Integers. On input to the token, any unnecessary (and technically incorrect) leading zero bytes on "Big Integer" attributes should be stripped so that they are not stored internally in the token's session or token objects. The soft token's internal DB should not have leading zero bytes in Big Integers.
(Note that by "internally", I am not referring to the internal bignum format of the bignum library, wherein bignums are stored as arrays of integers; I am referring only to the places where big integers are stored as arrays of bytes, typically as SECItems.)

Presently there are no circumstances in which NSS encodes negative numbers as ASN.1 INTEGER types. So I propose that the ASN.1 encoder should add leading zero bytes when needed, rather than requiring the leading zero byte to be prepended prior to invoking the encoder. Similarly, stripping leading zeros should be done by the ASN.1 decoder to produce "unsigned" integers as a result of decoding. The token should also strip leading zeros as a safety precaution.

It occurs to me that in the future, NSS may deal with cryptographic algorithms that really do use signed numbers, which may include negative values. I'd suggest that the "kind" values in the "Templates" that are used by our ASN.1 encoders and decoders have a new bit added to them that tells them whether the unencoded value is expected to be signed or unsigned. I propose that we reserve the bit now, but leave the implementation until it is needed.
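A minimal sketch of the two conversions the proposal asks for (illustrative only; the real changes live in NSS's ASN.1 templates and softoken, and these helper names are hypothetical):

```python
def der_int_body(value: bytes) -> bytes:
    """Encoder side: given an unsigned big-endian byte string, produce
    the body of a DER INTEGER, prepending 0x00 when the top bit is set
    so the number stays positive under DER's signed interpretation."""
    value = value.lstrip(b"\x00") or b"\x00"   # canonicalize first
    if value[0] & 0x80:
        value = b"\x00" + value
    return value

def unsigned_big_integer(body: bytes) -> bytes:
    """Decoder/token side: strip leading zero bytes to recover the
    unsigned PKCS#11 'Big Integer' form."""
    return body.lstrip(b"\x00") or b"\x00"
```

For example, `der_int_body(b"\x8f\x12")` yields `b"\x00\x8f\x12"`, and `unsigned_big_integer` inverts it, so a value round-trips to its canonical unsigned form.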
Whiteboard: waiting for Nelson's review
Assignee
Comment 11•23 years ago
Comment on attachment 62402 [details] [diff] [review]
patch described above

Last Friday, I checked in the workaround patch for this bug. While the patch fixes the backward compatibility failures, it causes new failures on the tip. I looked into this, and realized that the workaround implemented in the patch needs to be applied in two more places in the softoken, along with one place above PKCS#11 (CKA_ID is calculated in four different places!). Even then, I couldn't quite fix all the failures (there seemed to be another problem in the key db code, which was hard to fix since at that level one couldn't determine whether the key was DSA or RSA). Over the weekend, I decided that it was useless to go on working with a growing patch that was just a workaround anyway.
Attachment #62402 - Attachment is obsolete: true
Assignee
Comment 12•23 years ago
Instead, I offer this patch, which I believe implements the first part of Nelson's proposal. I have tested this both with the tip and with backward compatibility. I would like Bob and Nelson to review it, to make sure my PKCS#11 and ASN.1 changes are correct.
Comment 13•23 years ago
Ian, questions:

1. Does today's build (20020114.1) have the workaround that you marked obsolete?
2. Is the second part of Nelson's proposal not required to pass both the regular and backward compatibility tests?
Assignee
Comment 14•23 years ago
> 1. Does today's build (20020114.1) have the workaround that you
> marked obsolete?

No build does. I checked it in Friday, then promptly removed it once tinderbox started showing failures.

> 2. Is the second part of Nelson's proposal not required to pass
> both the regular and backward compatibility tests?

No. This patch is all that is needed for now (though reserving the bit is a good idea).
Comment 15•23 years ago
The patch looks good. My only concern would be backward compatibility in some keygen cases. Currently CKA_ID is generated using signed values; in our new code it is generated using unsigned values. This is only a problem in the keygen case, when we are trying to import the user cert for a generated key. We should be able to solve this by making that case look for both types of key IDs. That will allow us to find all orphaned keys in all databases, though new orphaned keys will not be findable in old applications. Once the key has a cert, there is no problem using it, because CKA_ID is not generated in that case. Anyway, I think Ian's patch is good. I'll look into the other issue. bob
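The "look for both types of key IDs" idea described above might look like this sketch (hypothetical helper name, not the actual softoken change): compute the CKA_ID from the unsigned bytes and, when the top bit is set, also try the old signed form.

```python
import hashlib

def candidate_cka_ids(pub_value: bytes):
    """Return the CKA_ID computed from the unsigned public value (new
    code), plus the one computed from the signed form with a leading
    zero (old keygen code) when the two forms differ."""
    unsigned = pub_value.lstrip(b"\x00") or b"\x00"
    ids = [hashlib.sha1(unsigned).digest()]
    if unsigned[0] & 0x80:  # old code would have hashed b"\x00" + unsigned
        ids.append(hashlib.sha1(b"\x00" + unsigned).digest())
    return ids
```

A lookup that tries every returned candidate would find orphaned keys written by either generation of the code.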
Assignee
Comment 16•23 years ago
Nelson's suggestion, along with an addition to handle the fact that the length returned needs to reflect the length consumed (including the leading zeroes). This was checked in on the tip.
Attachment #64830 - Attachment is obsolete: true
Comment 17•23 years ago
I am getting the following error from certutil when running ssl.sh:

certutil: could not change trust on certificate: security library: bad database.

This is a tree I just pulled and built.
Assignee
Comment 18•23 years ago
Marking fixed since no failures have shown up in the past two days' worth of QA.
Status: NEW → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED