Closed
Bug 321765
Opened 19 years ago
Closed 19 years ago
Allow NSS to import certs with unsupported critical extensions
Categories
(NSS :: Libraries, enhancement, P2)
Tracking
(Not tracked)
RESOLVED
FIXED
3.11.1
People
(Reporter: nelson, Assigned: julien.pierre)
Attachments
(3 files, 6 obsolete files)
13.77 KB, application/zip
4.79 KB, patch
7.73 KB, patch
Today, NSS refuses to "import" or "decode" certificates that contain
unknown critical extensions. This is exactly as the standards say it
should. NSS does not understand policy extensions, and so it rejects
certificates that contain critical policy extensions.
This is now a problem for certain mozilla users, in particular users
in Korea. Many Korean CAs have critical policy extensions in their
intermediate CA certs and in the EE certs they issue to their users.
The Korean government reportedly requires these policies in the certs
used when citizens interact with the government, and (?) in citizens'
email signing certs.
There is a project now under development to add policy processing to
NSS. This is a very large project. Several developers have been
working on it for more than 12 months now, and there is still much
to do. Policy support is expected to be complete in NSS in 2006,
but the work to add full support to mozilla (and other NSS-based apps)
will not begin until then. I expect that it will be about a year
before policy extension support is fully implemented in NSS and mozilla.
This is discouraging for Korean users.
So, I am proposing a radical solution for the Korean users. I expect
(some of) my fellow PKI developers to dislike this proposal and will
not be shocked if it is rejected.
I propose to mark all 3 types of Policy Extension OIDs as being supported
in NSS. This 3-line source code change will mean that NSS will no longer
reject certs with critical policy extensions, but will ignore those
extensions in those certs. It surely does not conform with the letter
or the spirit of the standards for policy extensions (e.g. RFC 3280).
But it does allow users of certs with critical policy OIDs to use
mozilla while waiting for full policy OID support to arrive.
I think a case can be made that ignoring the policy extension OIDs is
acceptable in the browser. Here's that case. It appears that policy
extensions are intended to be used in a "closed" environment, wherein
all the participants (clients, servers, peers) and their applications
adhere to a single policy model (which defines a small set of known
policies). In that model, the application wishing to determine the
validity of a cert chain tells the cert path validator (code) "I will
accept this cert chain if it conforms to any of the policies whose
OIDs are listed here." If the cert chain does not claim conformance
to any of the application's listed policies, it is treated as invalid,
as it would be if the signature were invalid, or if it were issued by
an untrusted issuer.
That model works fine for single-purpose applications, such as banking,
and in closed environments, e.g. for signatures used in all forms of
interactions with one's government. But a browser is not a single
purpose application, and it is not used exclusively in any one closed
environment. It is not used only for banking. Citizens of one country
do not use it exclusively to interact with the government of their
country, not even in Korea.
So, I submit that in the context of a browser, or internet email client,
the proper model is not for the application to list the allowed policies
for each action, and let the validator decide if the chain honors any of
those policies, but rather is for the validator to check the chain and
say "here is the set of policies that this chain claims to honor --
decide for yourself if any of these is good enough." Then the browser
could display those policies (lots of hand waving here about how that
happens), and the user could decide if the policies are adequate to his
immediate purposes.
For example, when dealing with his government, the user can look to ensure
that the web server claims to follow the Korean Government High Web Security policy. When receiving a signed email from a fellow countryman, the email
reader can check that the email signature claims to honor the Korean
Government High Security Email policy. (I just made those policy names up
to illustrate the idea.)
This approach means that NSS need not *enforce* policy OIDs. And if NSS is
not going to enforce them, then it seems logical that NSS need not treat
them as unknown (for criticality purposes).
I will attach a patch for NSS 3.11 that makes the proposed change.
With this patch, and when bug 321584 is fixed, mozilla will work
satisfactorily for the Koreans, I believe.
Patch forthcoming...
Reporter
Comment 1•19 years ago
With this patch in place, and the fixes for bug 321584 in place,
I am able to import all the Korean CA certs and PKCS12 files that
were received as examples.
While developing this patch, I (re)discovered that NSS has two tables
that claim to be the set of known cert extension OIDs.
http://lxr.mozilla.org/security/source/security/nss/lib/util/secoid.c#538
http://lxr.mozilla.org/security/source/security/nss/lib/pki1/oids.txt
They are badly out of sync.
The two tables list non-identical sets of known policy OIDs.
One table says the cert policy extensions are not supported.
The other says those extensions *are* supported. (sigh)
Assignee
Comment 2•19 years ago
Nelson,
The problem with the policy OIDs being incorrect and marked unsupported is a known one. We fixed it last summer on the NSS_LIBPKIX_BRANCH. See bug 300929.
The standard you are referring to, RFC 3280, doesn't imply that NSS should fail to "import" or "decode" any certificate that contains unknown critical extensions. RFC 3280 only implies that certificate validation should fail if unknown critical extensions are encountered.
I'm fine with relaxing the restrictions on importing certs with critical extensions. After all, people can use PKCS#11 tokens in non-NSS applications, and put the certs in themselves, even if they have critical extensions that NSS doesn't understand.
I think it doesn't make sense for the NSS decoder to fail either just because there was an unknown critical extension. Only the path validation should fail, if the algorithm doesn't know about the critical extension.
In NSS 3.12, we will have a new PKIX implementation that understands policy extensions, but it will only be exposed through new API calls. The current API calls (CERT_VerifyCert*) will still be present, but will still not understand policies. These old calls will only be migrated to the new PKIX implementation in the future.
In the context of having 2 PKIX implementations, it really doesn't make sense for the critical extension check to be at the decoder step. It will always be wrong half the time if it's there.
This is no different from the problem we currently have with the AuthorityKeyID and SubjectKeyID extensions in our single PKIX implementation. We support these extensions for certs only, not for CRLs. But the OIDs for these extensions are the same whether they appear in certs or CRLs, and we have a single "supported extensions" table for both certs and CRLs. So we allow CRLs to be decoded even though the code doesn't support AKID/SKID in CRLs. This causes failures later on, during signature verification; other unsupported extensions might be ignored completely.
I think these 2 examples show it's time we get rid of the critical extension checks at decoding time. The decoder should not fail if unknown critical extensions are present. However, the PKIX code definitely should fail until it is changed to understand them.
Comment 3•19 years ago
It seems to me we've had this discussion before: bug 277797
Currently NSS checks for critical extensions when decoding a certificate (which we usually do before importing it). I thought we had a discussion about failing the verification if we run into an invalid extension, and we were in general agreement until Nelson pointed out a corner case which I forget. Unfortunately I cannot find the bug with that discussion.
There is a workaround, which involves checking for the existence of the OID in a plugin and, if it doesn't exist, creating a dynamic entry for it. At that point the certificate would import correctly.
bob
Assignee
Comment 4•19 years ago
I don't remember that discussion at all - I wasn't on the cc list.
Nelson, do you remember what the corner case was that made you favor not moving the critical extension check from the decoding to the validating step?
Reporter
Comment 5•19 years ago
No, I don't recall the corner case at this moment, sorry.
That discussion was long ago. Perhaps I will recall it soon.
But for purposes of this bug, merely moving the test from import time
to cert validation time doesn't address the need being expressed.
The customer's need is for the certs bearing these critical extensions
to work, to pass validation, as if they were not marked critical
(or as if NSS fully understood and implemented policy extensions).
We believe that NSS 3.12 will have new code to support policy extensions,
although it is not clear (to me) whether that new code will be fully
integrated into the first release of mozilla/FF/TB that uses 3.12.
This RFE proposes to provide an interim workaround (*hack, cough*)
to be used until libPKIX is integrated, that essentially ignores the
critical flag in policy extensions, thereby allowing the particular
Korean certs to be used.
If that proposal is not acceptable in principle then I think we should
mark it WONTFIX, and not spend time moving the unknown-critical-extension
check from import time to validation time. Let's discuss this Thursday.
Assignee
Comment 6•19 years ago
I don't think the proposal to ignore the critical extension is acceptable.
But with the two PKIX implementations in 3.12, we will still need to move the check for the critical policy extensions to some place other than decoding or import time.
Assignee
Comment 7•19 years ago
I'm not sure if it's OK to add the hasUnsupportedCriticalExt member to the CERTCertificate. If not, we can add it to the STANCertificate instead.
I haven't tested this patch, but I think it should do the job.
In particular, it should allow SSL servers that don't do client auth, that have no export cipher suites configured, and that don't act as clients, to startup even if they have a cert chain containing those extensions.
Reporter
Comment 8•19 years ago
Comment on attachment 210215 [details] [diff] [review]
patch that allows decoding certs with unsupported critical extensions, but fails validation
Julien,
>+
>+ /* check if the cert has an unsupported critical extension */
>+ if ( subjectCert->hasUnsupportedCriticalExt ) {
>+ PORT_SetError(SEC_ERROR_UNKNOWN_CRITICAL_EXTENSION);
>+ LOG_ERROR(log,subjectCert,count,0);
>+ goto loser;
I'm rather certain that those last two lines above should be
replaced by this one line:
LOG_ERROR_OR_EXIT(log,subjectCert,count,0);
>+ }
>+
I can't help but wonder if there are other places where a check like
the one above needs to be inserted.
Assignee
Updated•19 years ago
Summary: Pretend that NSS recognizes policy extensions until it does → Allow NSS to decode certs with unsupported critical extensions
Assignee
Comment 9•19 years ago
Assignee: nelson → julien.pierre.bugs
Attachment #207059 -
Attachment is obsolete: true
Status: NEW → ASSIGNED
Assignee
Updated•19 years ago
Priority: -- → P2
Target Milestone: --- → 3.11.1
Assignee
Comment 10•19 years ago
I tested the patch with the changes suggested by Nelson, and using the DB of certs I just attached.
I was able to start selfserv successfully with one of the certs:
./selfserv -d . -n signCert -p 2000
I can also connect successfully to the server using tstclnt from that build, as long as I override cert verification:
./tstclnt -d . -h monstre -p 2000 -o
The cert is only good for signing, and it is expired, so tstclnt cannot be used to check that verification correctly fails because of the presence of the certificate policies.
However, I checked this successfully with certutil -V :
certutil -d . -V -n signCert -u S -b 051108111000Z
certutil: certificate is invalid: Certificate contains unknown critical extension.
This is the expected error.
Assignee
Comment 11•19 years ago
This patch update has Nelson's feedback incorporated.
Bob, could you please review?
In particular I would like your opinion about whether this bit belongs in CERTCertificateStr or NSSCertificateStr (Stan).
Thanks.
Attachment #210215 -
Attachment is obsolete: true
Attachment #210304 -
Flags: superreview?(nelson)
Attachment #210304 -
Flags: review?(rrelyea)
Reporter
Comment 12•19 years ago
Comment on attachment 210304 [details] [diff] [review]
patch update
r=nelson.
This patch is OK PROVIDED that it's OK for us to extend struct CERTCertificateStr. If not, then this patch is not OK.
Bob, please tell us if it's OK to extend struct CERTCertificateStr.
Attachment #210304 -
Flags: superreview?(nelson) → review+
Comment 13•19 years ago
Comment on attachment 210304 [details] [diff] [review]
patch update
r+ for the logic.
Extending the CERTCertificate structure is a riskier proposition than extending other structures. It's possible someone has created code with either an array of CERTCertificates or a static CERTCertificate on the stack.
On the other hand, making such a structure work is iffy at best (no Slot or nssCert will likely be associated with it), so it may not be so bad. The only worry is that we have not changed the size of this structure since we released NSS 3.2 (other structures, like the slot structure, we have extended a couple of times).
Putting it in the STANCert would be safest, but it really does belong in the CERTCertificate from a 'proper division of labor' point of view (which admittedly we currently don't have).
bob
Attachment #210304 -
Flags: review?(rrelyea) → review+
Comment 14•19 years ago
so that was wishy washy...
So my vote is to extend CERTCertificate, but I'm pretty sure I can handle the customers on my side that tried to create a static CERTCertificate.
If either Julien or Nelson has any doubts, then we should go the safer route and put it in NSSCertificate.
bob
Comment 15•19 years ago
We may be able to use the authsocketlist field:
http://lxr.mozilla.org/seamonkey/search?string=authsocketlist
It is a pointer, so its size may be different from PRBool.
Note that we already recycled the authsocketcount field.
Without the authsocketcount field, the authsocketlist field
seems useless. (I assume authsocketlist points to an
array and authsocketcount is the number of elements in
the array.)
Assignee
Comment 16•19 years ago
We unfortunately have product code that has a CERTCertificate on the stack and makes copies of the structure. It does not call CERT_VerifyCert on that stack object afterwards, but it is still a potential problem. The code was OK back in the days when our products always ran with the NSS version they were compiled against. But it's not OK now that this is no longer the case, and the product code hasn't changed.
I like the idea of replacing the authsocketlist field very much. We can use a union to make it the same size, as PRBool is never larger than a pointer. I will also use named bits in an upcoming patch. This will give us room for at least 31 future extra option bits if needed.
Comment 17•19 years ago
I like the union idea, as well as the idea of turning it into a bit field rather than a full integer.
bob
Comment 18•19 years ago
We should also consider using the pointer field to
point to a private certificate structure for future
expansion.
Assignee
Comment 19•19 years ago
This patch uses named bits and a union to keep the structure size constant.
I didn't feel like pointing to a separate structure because we already have that with the NSSCertificate, if needed.
Attachment #210304 -
Attachment is obsolete: true
Attachment #210554 -
Flags: review?(rrelyea)
Assignee
Updated•19 years ago
Attachment #210554 -
Flags: superreview?(nelson)
Reporter
Comment 20•19 years ago
Comment on attachment 210554 [details] [diff] [review]
don't extend the CERTCertificate structure, use a named bit field
>+ struct {
>+ unsigned int hasUnsupportedCriticalExt :1;
>+ unsigned int :0;
Is that last line valid?
It has no variable name, and its size is zero bits.
I've never seen such a declaration before.
Do all our c compilers allow it?
What purpose does it serve?
>+ /* add any new option bits needed here */
>+ } optionBits;
r=nelson. But I suggest dropping that one strange line.
Attachment #210554 -
Flags: superreview?(nelson) → review+
Comment 21•19 years ago
Comment on attachment 210554 [details] [diff] [review]
don't extend the CERTCertificate structure, use a named bit field
r+=relyea echoing nelson's comments.
Attachment #210554 -
Flags: review?(rrelyea) → review+
Comment 22•19 years ago
Comment on attachment 210554 [details] [diff] [review]
don't extend the CERTCertificate structure, use a named bit field
>- struct SECSocketNode *authsocketlist;
>+ union {
>+ void* apointer; /* was SECSocketNode* authsocketlist */
>+ struct {
>+ unsigned int hasUnsupportedCriticalExt :1;
>+ unsigned int :0;
>+ /* add any new option bits needed here */
>+ } optionBits;
>+ } extraOptions;
The comment should say
"was struct SECSocketNode *authsocketlist".
I suggest that we shorten 'extraOptions.optionBits'
because the union and struct are only there to preserve
the size of the CERTCertificate structure. I would use
just 'u.s', but you may find 'options.s', 'options.bits',
or 'options.flags' more readable. (In C99, the union
and the struct can be made anonymous, so we could just
say subjectCert->hasUnsupportedCriticalExt.)
I also suggest that we take precautions to ensure that
when we add new bitfields, the size of the union remains
the same as the size of a pointer. Depending on whether
we assume a pointer's size is at least 16 or 32 bits, we
can list all of the 16 or 32 bitfields now, with most of
them named 'unusedx':
struct {
unsigned int hasUnsupportedCriticalExt :1;
/* add new option bits by replacing
* unused bitfields below */
unsigned int unused14 :1;
unsigned int unused13 :1;
...
unsigned int unused0 :1;
} optionBits;
And add an assertion in NSS initialization like this:
CERTCertificate cert;
/* New option bits must not change the size of CERTCertificate. */
PORT_Assert(sizeof(cert.extraOptions) == sizeof(void *));
Comment 23•19 years ago
An alternative to 15 or 31 unused bitfields is to use a
15-bit or 31-bit bitfield to reserve the space:
struct {
unsigned int hasUnsupportedCriticalExt :1;
/* add new option bits by replacing
* reserved bitfields below */
unsigned int reserved :15;
} optionBits;
Assignee
Comment 24•19 years ago
re: comments #20 and #21, the "unsigned int :0" line is valid.
It tells the compiler to pad the bitfield out to the size of an unsigned int.
I believe all our compilers support it; I tested HP, Solaris, and AIX. However, what we really want is to pad to the size of a pointer rather than of an unsigned int. So I updated this patch to use PRUptrdiff instead of unsigned int in the bitfield declaration. This removes the need for the union, and it also makes it unnecessary to add reserved bitfields as in comment #23.
To check that we aren't changing the CERTCertificate structure size, I added the assertion suggested by Wan-Teh in comment #22. It will trip during the debug build, in shlibsign, if the structure size is changed.
Attachment #210554 -
Attachment is obsolete: true
Attachment #211321 -
Flags: review?(wtchang)
Assignee
Comment 25•19 years ago
Comment on attachment 211321 [details] [diff] [review]
remove union and use PRUptrdiff
Hmm, never mind. AIX likes the "unsigned int :0" syntax, but not "unsigned long :0". It complains that the bitfield type can only be signed int, unsigned int, or int... sigh.
Attachment #211321 -
Attachment is obsolete: true
Attachment #211321 -
Flags: review?(wtchang)
Assignee
Comment 26•19 years ago
It turns out ANSI C requires bit-field types to be int; C++ allows any integral type. Most compilers, AIX's excepted, support the other integral types for bitfields even in C programs. Unfortunately there is no option in the AIX compiler to enable this extension in C. I tried all the possible values of /langlevel.
Assignee
Comment 27•19 years ago
Attachment #211333 -
Flags: review?(wtchang)
Comment 28•19 years ago
Comment on attachment 211333 [details] [diff] [review]
this should be the last patch, hopefully.
The patch looks good. Thanks.
I highly recommend removing the unnamed
bit-field of width 0. Its purpose is to
make the next bit-field begin at the
edge of the next allocation unit. Since
there are no bit-fields after this unnamed
bit-field of width 0, it's not useful here.
>- struct SECSocketNode *authsocketlist;
>+ union {
>+ void* apointer; /* was SECSocketNode* authsocketlist */
Please add 'struct' to the comment.
Attachment #211333 -
Flags: review?(wtchang) → review+
Assignee
Comment 29•19 years ago
Checked in to the tip:
Checking in lib/certdb/certdb.c;
/cvsroot/mozilla/security/nss/lib/certdb/certdb.c,v <-- certdb.c
new revision: 1.77; previous revision: 1.76
done
Checking in lib/certdb/certt.h;
/cvsroot/mozilla/security/nss/lib/certdb/certt.h,v <-- certt.h
new revision: 1.34; previous revision: 1.33
done
Checking in lib/certhigh/certvfy.c;
/cvsroot/mozilla/security/nss/lib/certhigh/certvfy.c,v <-- certvfy.c
new revision: 1.46; previous revision: 1.45
done
Checking in lib/nss/nssinit.c;
/cvsroot/mozilla/security/nss/lib/nss/nssinit.c,v <-- nssinit.c
new revision: 1.71; previous revision: 1.70
And to NSS_3_11_BRANCH:
Checking in lib/certdb/certdb.c;
/cvsroot/mozilla/security/nss/lib/certdb/certdb.c,v <-- certdb.c
new revision: 1.76.2.1; previous revision: 1.76
done
Checking in lib/certdb/certt.h;
/cvsroot/mozilla/security/nss/lib/certdb/certt.h,v <-- certt.h
new revision: 1.33.2.1; previous revision: 1.33
done
Checking in lib/certhigh/certvfy.c;
/cvsroot/mozilla/security/nss/lib/certhigh/certvfy.c,v <-- certvfy.c
new revision: 1.44.10.2; previous revision: 1.44.10.1
done
Checking in lib/nss/nssinit.c;
/cvsroot/mozilla/security/nss/lib/nss/nssinit.c,v <-- nssinit.c
new revision: 1.69.2.2; previous revision: 1.69.2.1
done
Attachment #211333 -
Attachment is obsolete: true
Assignee
Updated•19 years ago
Status: ASSIGNED → RESOLVED
Closed: 19 years ago
Resolution: --- → FIXED
Reporter
Comment 30•18 years ago
I changed the bug summary to reflect the fact that, in the end, we allowed
NSS to "import" (store) certs with unknown critical extensions, but we did
not allow it to pretend that it decoded them correctly.
Note that we deliberately did NOT do the same for CRLs.
Summary: Allow NSS to decode certs with unsupported critical extensions → Allow NSS to import certs with unsupported critical extensions
Reporter
Comment 31•18 years ago
This patch provides a way to build NSS such that
a) NSS's OID table knows about every cert and CRL extension OID defined
in RFC 3280, and
b) optionally, through conditional compilation, the unsupported RFC3280
extension OIDs that compliant implementations are required to implement
can be marked as supported, so that we don't reject certs that bear them.
Yes, I know this breaks a major rule of RFC 3280 and is anathema to me
and my fellow security colleagues, but sometimes we have to do things to
keep the lights on.
Reporter
Updated•17 years ago
Attachment #210302 -
Attachment is patch: false
Attachment #210302 -
Attachment mime type: text/plain → application/zip