Closed Bug 149832 Opened 22 years ago Closed 22 years ago

CERT_VerifyCert is a deficient API

Categories: NSS :: Libraries, enhancement, P1
Tracking: (Not tracked)
Status: RESOLVED FIXED
People: Reporter: julien.pierre, Assigned: julien.pierre
References: Blocks 1 open bug
Attachments: 3 files, 2 obsolete files

CERT_VerifyCert is defined as :

extern SECStatus
CERT_VerifyCert(CERTCertDBHandle *handle, CERTCertificate *cert,
		PRBool checkSig, SECCertUsage certUsage, int64 t,
		void *wincx, CERTVerifyLog *log);

The problem is with the SECCertUsage certUsage argument. It is only possible to check 
for one type of certificate usage when calling this function. If the application 
needs to check for more than one usage, it must call the function again. 
This causes a very noticeable performance impact. If OCSP is enabled in NSS, for 
every call to check the usage, the certificate will be verified again, and a 
new OCSP connection to the responder is created. If CRLs are used, there will 
be a linear search for the certificate in the CRL, which can also be slow. I'm 
talking about an actual measured time of 10 seconds to display a single certificate 
in PSM on a 1 GHz+ machine here.

The main problem is that SECCertUsage is an enum, so only one type of cert 
usage can be passed in to the function.

typedef enum SECCertUsageEnum {
    certUsageSSLClient = 0,
    certUsageSSLServer = 1,
    certUsageSSLServerWithStepUp = 2,
    certUsageSSLCA = 3,
    certUsageEmailSigner = 4,
    certUsageEmailRecipient = 5,
    certUsageObjectSigner = 6,
    certUsageUserCertImport = 7,
    certUsageVerifyCA = 8,
    certUsageProtectedObjectSigner = 9,
    certUsageStatusResponder = 10,
    certUsageAnyCA = 11
} SECCertUsage;

I propose a new type, SECCertUsageExtended, that would be a bitfield instead, and 
a new CERT_VerifyCertExtended function that would be able to check multiple usages 
all in one call. It would fail if any of the requested usages were missing from the 
cert. It could also return a bitfield of the actual supported cert usages, so that 
the caller could compare it with the requested bitfield and see which one(s) posed 
a problem if it needed to.
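Roughly, a sketch of what this could look like (the type and bit names below are
illustrative placeholders only, not a final API):

typedef PRInt32 SECCertUsageBitfield;          /* proposed bitfield type */

/* hypothetical bit values, one per existing SECCertUsage enum member */
#define certUsageSSLClientBIT        0x0001
#define certUsageSSLServerBIT        0x0002
#define certUsageEmailSignerBIT      0x0010
#define certUsageEmailRecipientBIT   0x0020
/* ... one bit per usage ... */

/* proposed function: verify the cert once, check all requested usage bits,
   and optionally return the bits that actually verified */
extern SECStatus
CERT_VerifyCertExtended(CERTCertDBHandle *handle, CERTCertificate *cert,
		PRBool checkSig, SECCertUsageBitfield usages, int64 t,
		void *wincx, CERTVerifyLog *log,
		SECCertUsageBitfield *retUsages);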
Blocks: 149834
Alternative proposal:  Instead of having a bit-field input, have the function
return a bit field as output, showing all the usages for which the cert 
chain is trusted.  Then the application can decide if the returned bits are
satisfactory.  
No longer blocks: 149834
Nelson,

I think my proposal is better for applications.
See below. Keep in mind that I simplified the prototype a bit for the purpose of 
this discussion - all we care about is the cert and usage bits, so I dropped 
the other arguments.

I see several possible application cases here :

1) case of an application that needs to have a cert valid for several purposes. 
Eg. S/MIME sign and encrypt. It wants a TRUE/FALSE answer, but doesn't care 
about the reason if there is a failure.

All it needs to do with my proposal is one line,

if (SECSuccess != CERT_VerifyCertExtended(cert, certUsageEmailSignerBIT | 
certUsageEmailRecipientBIT, NULL) )

The third NULL argument here is a pointer to the returned usages, which the 
caller does not care about.

If I follow your proposal, the application would have to query all the usage 
bits by passing a pointer in for returning, and then check each bit 
individually. Since the existing enum usage would still be kept as input 
presumably, the function call would only check one usage, then the application 
would have to check the other usage separately in case of success

So you would end up with something like this :

SECCertUsageBitfield retusage = 0;
if ((SECSuccess != CERT_VerifyCertExtended(cert, certUsageEmailSigner, &retusage))
    || (! (retusage & certUsageEmailRecipientBIT)))

I think it's not logical for an application to have to make the function call to 
check the first purpose, and then to check the second (and subsequent) usage 
bits separately.

2) Same as previous case, but the application wants to know the specific failure 
reasons if they are related to cert usage.

In this case, with my proposal, it does :

SECCertUsageBitfield retusage = 0;
if (SECSuccess != CERT_VerifyCertExtended(cert, certUsageEmailSignerBIT | 
certUsageEmailRecipientBIT, &retusage) )
{
   if (! (retusage & certUsageEmailSignerBIT) )
   {
      // missing signing bit failure
   }
   if (! (retusage & certUsageEmailRecipientBIT) )
   {
      // missing encrypting bit failure
   }
   // other cert verification failure (ie. signature or revocation)
}

With your proposal, the application has to do this :

SECCertUsageBitfield retusage = 0;
if (SECSuccess != CERT_VerifyCertExtended(cert, certUsageEmailSigner, &retusage) 
)
{
   if (! (retusage & certUsageEmailSignerBIT) )
   {
      // missing signing bit failure
   }
   if (! (retusage & certUsageEmailRecipientBIT) )
   {
      // missing encrypting bit failure
   }
   // other cert verification failure (ie. signature or revocation)
}
// the verify succeeded, but we still have to check for the encrypting bit ...
if (! (retusage & certUsageEmailRecipientBIT) )
{
   // missing encrypting bit failure
}

I think the second code is very quirky application code and my proposal makes it 
easier for the application.

3) an application may just want to verify the signature and revocation status, 
and not care about the cert usage. With a bit field it can pass 0, which means 
not to check any of the usage bits. This isn't possible with the current 
SECCertUsage as defined, which doesn't have a value designated to "not check" any 
of the usages. Of course, we could add a value for that, but I don't think that 
makes as much sense given the other problems of the enum approach. Both problems 
can be solved together with my proposal.
Severity: normal → enhancement
Your case 1 is the AND case.   The application requires that ALL of the 
bits about which it inquires must be 1.  That may be a common case, but
it is not the only case.  Implementing a mechanism that supports only
the AND case is less general and flexible than the alternative I proposed.

Consider this example.  The SSL client library today makes two separate
calls to CERT_VerifyCert, one to ask if the cert is valid as an SSL 
server, and a second one to ask if the cert is valid as an SSL step-up
server.  SSL does not require that both be true.  It requires that one
be true (SSL server).  The library will take different actions depending
on whether the other one (SSL step-up server) is also true.  So, there
are 3 relevant combinations:  
1. Not valid as an SSL server
2. Valid as an SSL server, but not as a Step-up server,
3. Valid as an SSL server and as a step up server.

The approach I proposed would allow a single call to the cert verify
function to return a bit vector that reports both the SSL server and
SSL step-up server trust bits separately.  It would allow libSSL to 
reduce the number of cert verify calls from 2 to 1.  

The approach you suggested, where the function returns a boolean
result indicating whether or not ALL of the bits requested by the 
caller were true, would still require two separate calls to the verify
function in order to determine the value of the two types of trust
separately.
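With the returned bit vector, libSSL's check could be written roughly like this
(simplified prototype and illustrative bit names, as in the snippets above):

SECCertUsageBitfield retusage = 0;

/* one verify call for the SSL server usage; then examine the returned
   trust bits to distinguish the three combinations */
if ((SECSuccess != CERT_VerifyCertExtended(cert, certUsageSSLServer, &retusage))
    || !(retusage & certUsageSSLServerBIT)) {
    /* 1. not valid as an SSL server */
} else if (!(retusage & certUsageSSLServerWithStepUpBIT)) {
    /* 2. valid as an SSL server, but not as a step-up server */
} else {
    /* 3. valid as an SSL server and as a step-up server */
}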
Nelson,

You are right, I overlooked that case.

Here is a new proposal which is somewhere in between your current one and my 
original one :

- The API would take the bitfield as input. All bits specified would be checked, 
in addition to cert signature and revocation. This is the same as my original 
proposal, and still allows the AND case that I had. For the case you objected 
where SSL server is required and step-up optional, the application would pass in 
only the required bitmask corresponding to SSL-server.

- If a non-NULL pointer was specified to return cert usage, the full cert usage 
bitfield would be returned, not just an AND of the inquired bitfield passed as 
input as my original proposal had it. That way, upon success of the 
CERT_VerifyCertExtended, the application could then do an AND of the returned 
bitfield with the step-up bitmask, without having to make a second call.

Just like yours, I believe this new proposal always minimizes the number of 
function calls to one.

The advantage it has over yours is that it also minimizes the application 
checking code by adding the ability to pass in a bitfield when multiple cert 
usages are required - ie. the application may only want to check for SECFailure 
and SECSuccess and never care about specific bitfields, as I had in case 1) of 
comment #2.
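For the SSL server / step-up example, the compromise would look roughly like this
(same simplified prototype and illustrative names as before):

SECCertUsageBitfield retusage = 0;

/* require only SSL server; step-up remains optional */
if (SECSuccess != CERT_VerifyCertExtended(cert, certUsageSSLServerBIT, &retusage))
{
   /* not valid as an SSL server (or signature/revocation failure) */
}
else if (retusage & certUsageSSLServerWithStepUpBIT)
{
   /* also valid as a step-up server - no second verify call needed */
}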
This seems like a good compromise.  
What type do you propose to use for the bitfield of usages?
Looks to me like a PRInt32 would do, since we only have 12 different usages at 
the moment. That leaves room for 20 more options to check.
One of the things that PSM does often is discover which verified purposes the
cert has.

We do that once for each cert displayed in the cert manager and we also do it
when we view an individual cert.

We currently call CERT_VerifyCertNow once for each purpose for each cert.
With the current proposal as I understand it, we would make one call (with all
bits set as input), receive the verified-purposes output bit field, then we'd
have to parse that (maybe we'd use macros defined by NSS, as in
IS_CERT_USAGE_SSL_CLIENT(outputBitField)).

So our code would become one call to NSS and a bunch of if/else if to add the
verified purpose string for UI display. That would be a great improvement in
performance.

I just wanted to describe one of the things we do today so that it gives you
context.
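As a sketch, the PSM purpose-discovery code could then become something like the
following (the allUsageBits mask, the AddPurposeString helper, and the bit names
are hypothetical):

SECCertUsageBitfield retusage = 0;

/* one verify call asking for all purposes, then map bits to UI strings */
if (SECSuccess == CERT_VerifyCertExtended(cert, allUsageBits, &retusage))
{
    if (retusage & certUsageSSLClientBIT)
        AddPurposeString("SSL Client Certificate");
    if (retusage & certUsageSSLServerBIT)
        AddPurposeString("SSL Server Certificate");
    if (retusage & certUsageEmailSignerBIT)
        AddPurposeString("Email Signer Certificate");
    /* ... and so on for the remaining usages ... */
}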
Thanks, Stephane.
This means there are two APIs to change, CERT_VerifyCert and CERT_VerifyCertNow.

I will create a patch to implement this. This should be a very straightforward 
one. I would be in favor of checking it into NSS 3.5 and Mach V rather than 
waiting for NSS 3.6 and later, since the fix will very visibly benefit the 
client UI performance and the risk is minimal.

Browsing certvfy.c, I see many other APIs that would also benefit from using the 
bitfield type :

CERT_FindCertIssuer
CERT_FilterCertListByCANames
CERT_TrustFlagsForCACertUsage
CERT_VerifyCertChain
CERT_FindMatchingCert
CERT_GetCertChainFromCert

This would probably mean a full rewrite of certvfy.c though, and we would end 
up with two sets of APIs, the old ones taking SECCertUsage and the new ones 
taking and returning SECCertUsageBitfield. This would add to code size, and 
wouldn't benefit applications until they got modified to use the new APIs.

I think there is a good case for CERT_VerifyCert and CERT_VerifyCertNow since 
these functions are called so frequently by applications. The others probably 
will have to be done on a case-by-case basis, at least for NSS 3.x . I would be 
in favor of removing the old SECCertUsage type and any functions using it for 
NSS 4.0, and move everything to the more flexible bitfield approach APIs then. 

This would force people to take a quick look at their application code to 
recompile them with the new usage bitfield, hopefully making them eliminate all 
the extraneous calls in the process and speeding them up.
These sound good, except I do not really like the idea of bitmasks, only because
I know that the very next thing we are going to have to deal with in NSS 3.6 is
expanded cert usages, and application defined cert usages, which NSS may not
know about.

That's why I'd prefer arrays of OID tags for the usages, so we can compare
directly to the key usage extension fields.

bob
OK, I was not aware of those other requirements. I didn't see them on the 3.6 
plan.

It seems that the array approach could work just like the bit field approach - 
pass an array of required OID tags, get an array with all of the cert's usage OID 
tags.

However, I don't like the array for the following reasons :

- The application has to construct an OID tag input array, which is more complex 
than a bitfield and probably can't be done on one line (or only a very long one).

- Then, there is the issue of allocating and freeing the returned OID tag array, 
which doesn't happen with the returned bitfield, as its size is known ahead of 
time and a pointer can be passed in. This is a potential source of leaks.

- Lastly, it is also more cumbersome in the application code to walk the whole 
returned array to check for failures than to check a bit field.

I don't really have another proposal that solves both the original problem and 
the expanded and application-defined usages, though. Perhaps Nelson has a better 
idea ?
I prefer bit fields as well. We currently have some room to grow with 32 bits
available and only 12 or so usages. How likely is it to grow beyond 32?
The application code would be so much easier to code and faster.

You can have bit fields and plan for more than 32 usages by using a pointer to
an array of ints, each to be interpreted as a bit field. If we go beyond 32, the
bit field can be expanded.
Stephane,

If we use an array of ints, presumably with a null termination, it has to be 
allocated too, same as my 2nd objection.

Also the array of ints will only help for those new cert usages that are 
standardized. Currently there are only 12. I don't think it's likely to grow 
beyond 32. We could use PRInt64 to be really safe for a while.

The problem is with the application-defined cert usages. I don't see how we 
could possibly deal with that using ints with bitfields. That would require 
oids.

I'm not sure if checking those extra OIDs really belongs within CERT_VerifyCert, 
though. If the application is defining its own usages, it is willing to go the 
extra mile of writing special checking code for them, and can use another function 
that returns all of the cert's usage OIDs.
I'd like to jump in and note that this is something I've been looking into for
4.0.  I came to the exact same conclusion as Julien (that bitmasks are the best
solution), and I was myself wondering why the Stan API is written for arrays of
usages.  Since this discussion has been opened now, I'll outline the Stan
equivalents, so that we can try to do the same thing in 3.6/4.0.

In Stan, there are three functions for verifying a cert. 
NSSCertificate_Validate is currently defined to take an array of NSSUsage's. 
NSSCertificate_ValidateCompletely does the same, but includes logging for all
errors (much like CERT_VerifyCert).  Lastly,
NSSCertificate_ValidateAndDiscoverUsageAndPolicies would return the array of all
valid usages for the cert.  See
http://lxr.mozilla.org/security/source/security/nss/lib/pki/nsspki.h#153

As I noted above, I looked at this API and felt that a bitmask of usages made
more sense than an array, for the same reasons Julien noted in comment #10.

Where application-defined usages are concerned, I also agree with Julien.  For
one thing, I think SECCertUsage has always been "meta-data" about a cert.  Some
usages combine multiple extensions, and that decision is made by NSS.  I don't
see how NSS can _validate_ an application-defined usage, simply by determining
that the OID exists in the cert's extensions (does it have to be critical?  what
if more interpretation is involved)?  I think that by definition, applications
should have to validate application-defined usages.

If that is not the case, I would propose something like this:

struct NSSUsages {
  PRInt32 nssUsageBitMask;
  NSSOID **applicationDefined;
};

where the bitmask is the set of NSS-defined usages, essentially shortcuts to the
set of extensions we know about.
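Purely for illustration, an application could then request both kinds of usages in
one structure (someApplicationOID is a made-up name, and the bit names are the same
placeholders used earlier in this discussion):

NSSOID *someApplicationOID;   /* obtained elsewhere */
NSSOID *myAppUsages[2];
struct NSSUsages usages;

myAppUsages[0] = someApplicationOID;
myAppUsages[1] = NULL;                                  /* NULL-terminated */
usages.nssUsageBitMask    = certUsageEmailSignerBIT |
                            certUsageEmailRecipientBIT; /* NSS-defined shortcuts */
usages.applicationDefined = myAppUsages;                /* app-defined usage OIDs */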
My worry with the bitfield is not the bitfield size, but the implicit
requirement that we have to understand the meaning of the Usage field.

The more I think about it, though, the more I think that we have a lot of work
to handle more usages dynamically anyway, and at some point someone will need to
understand the meaning of a particular usage. So maybe bitfields are OK. If we
later find we need to handle 'arbitrary' usages, we can add a way to map a
bitfield to an OID tag at run time.
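One way such a run-time mapping could look (a rough sketch; the table contents and
function name are only examples):

#include "secoidt.h"

/* hypothetical mapping from a usage bit position to an OID tag */
static const SECOidTag usageBitToOidTag[] = {
    SEC_OID_EXT_KEY_USAGE_CLIENT_AUTH,   /* bit 0: SSL client */
    SEC_OID_EXT_KEY_USAGE_SERVER_AUTH,   /* bit 1: SSL server */
    /* ... one entry per defined usage bit ... */
};

SECOidTag
usageBitToTag(unsigned int bitPosition)
{
    if (bitPosition >= sizeof(usageBitToOidTag) / sizeof(usageBitToOidTag[0]))
        return SEC_OID_UNKNOWN;
    return usageBitToOidTag[bitPosition];
}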

Added Terry to the CC list since this involves cert validation.
Ian,

In the Stan design the usages would map more directly to the usage extensions in
the certificate, so that the semantic meanings of the usages would be more
opaque to the Verify call. Traditional NSS usages would be sets of predefined
arrays of low level usages.

I think I agree with you, however, about the notion that we can provide a
programmatic mapping between a bit in the bitmask and a usage OID. The big question
for Stan, however, is: is 32 enough for the lifetime of that API?

bob
In Comment 8, Julien wrote "there are two APIs to change".  
Hopefully you meant "there are two new functions to add to the API",
since existing API functions should NOT change signature or semantics.

One requirement for any new functional interface for determining the 
suitability of a cert chain for any subset of NSS's long established 
cert usages is that the new functional interface perform at least as
well as the old method of doing the same thing.  

IMO, it is no longer acceptable to introduce new functions that are 
intended to (eventually) replace older ones, when those new ones are
going to be slower than the old.  
Nelson,

Yes, in #8 I meant that there should be two new versions of these APIs,
CERT_VerifyCertificate and CERT_VerifyCertificateNow.

As far as speed, the new API with bitfields that we agreed on will perform
slightly  worse, since each bit will have to be checked individually.

The big upside is that of course the application will make far fewer calls to
the new API than the old one.

The matter of extended/custom cert usages is too complex to handle in this
simple new API. It is the subject of a new bug, 150742 .

Blocks: 151010
Assigned the bug to Julien.

Tentatively target this bug at NSS 3.5 (for MachV).
We will make the final decision after measuring the
performance improvement of PSM's certificate manager
brought by the new API functions.
Assignee: wtc → jpierre
Priority: -- → P1
Target Milestone: --- → 3.5
Attached patch tentative patch (obsolete) — Splinter Review
This is a preliminary patch only, as it is quite a bit more complex than I
expected. It currently purports to do the following (I have not yet run the
code, as it requires a new application to test with):

1) introduces a new SECCertificateUsage using bit-fields
2) introduces two new functions, CERT_VerifyCertificate and
CERT_VerifyCertificateNow, that 
a. take a bitfield certusage as input, instead of an enum
b. optionally return a bitfield of the cert usages' status. Passing NULL is
allowed
c. a boolean to select whether to check status for all possible cert usages or
only the ones requested in the input bitfield. I find this to be desirable for
optimal performance. 

This allows applications to make only one function call to check multiple
usages, and get detailed status. The functions are implemented so that OCSP and
signature checking will only happen at most once per call.

I have also made some internal changes to the implementation of
CERT_VerifyCertChain. This doesn't change the external signature, but there is
a new internal __CERTVerifyCertChain function that can additionally take a
PRBool to skip CRL checking, as well as a pointer to the issuer cert, in case
it is already known. This allows the caller, in this case
CERT_VerifyCertificate, to bypass the unnecessary repeated lookup of the issuer
cert, and also avoids repeated checking of the CRL.

I am not certain that this last optimization - the caching of the issuer cert -
is entirely correct. It seems to me that for a given cert, there could only be
a single issuer cert, but the definition of CERT_FindCertIssuer includes a
SECCertUsage. Is there a possibility that it would return a different issuer
cert, given the same subject cert, but different cert usages?

This patch turned out to be significantly more difficult to implement than I
expected. The old CERT_VerifyCert code (which is still there, untouched, for
compatibility with existing applications) was heavily using goto's and aborting
immediately at the first failure. The new code needs to check all usages in
order to be able to return full usage status. There are still some questions
about how the new function should behave, for instance the use of NSPR return
codes. Since multiple cert usages may be checked, it is possible for multiple
failures to occur. The way the function is currently implemented, a new NSPR
error will replace the old one from a previous check. The NSPR error codes are
set only for the cert usages that the user required for the verification, not
for the other usages, in case they are checked, but not required.

There were also other oddities in the existing code with respect to macros. Some of the
macros included gotos and would either log or abort. The codepath was very
different if a log was provided or not. These macros could even cause a failure
case to fall through to a success (just look at the end of CERT_VerifyCert,
after the OCSP check - LOG_ERROR_OR_EXIT is used).
> I am not certain that this last optimization - the caching of the issuer cert -
> is entirely correct. It seems to me that for a given cert, there could only be
> a single issuer cert, but the definition of CERT_FindCertIssuer includes a
> SECCertUsage. Is there a possibility that it would return different issuer
> cert, given the same subject cert, but different cert usages ?

It's true that a given cert has only one issuer.  But the issuer may have been
re-signed, and the only way for the subject cert to correctly identify the
actual issuing cert is via keyID extensions.  If those extensions are missing,
the subject cert falls back on picking the "best" issuer certificate that
matches its issuerName.  All over NSS, the decision of what makes the "best"
cert in a set is made by 1) validity period and 2) usage.  If you look in
CERT_FindCertIssuer, you find exactly this logic.  If the keyID extensions
aren't present, all certs matching the issuer's subject are found, and the
"best" is chosen from them.

I admit it seems strange.  I don't believe it is possible for re-signed certs,
with the same subject, to change their usage extensions.
I have some stylistic comments:

1)  You remark that SECCertUsage is now obsolete.  That type is used in many
more places than CERT_VerifyCert, so that statement is not at all true.  All of
our other cert API functions will continue to take SECCertUsage, so we shouldn't
confuse people in the header file by saying they shouldn't use it.

2)  Instead of a boolean parameter to indicate whether the caller wishes to
discover all of the cert's valid uses, I would prefer to have certificateUsage
== 0.  By passing in a usage of zero, the caller implies that he/she wants to
find usages.  I think this is more consistent -- why would the caller validate
against one usage and discover all others?  What would CERT_VerifyCertificate
return if the cert is not valid for the given usage, but is valid for something
else?  I think that creates an ambiguity.

To expand on that, I think certificateUsage != 0 should be the AND case, that
is, the cert *must* be valid for all the given usages in order for the function
to return SECSuccess.  The verify call would fail immediately if one of the
usages was not present, nothing would be placed in retUsages, and SECFailure
would be returned.

But if certificateUsage == 0, all usages are collected and placed in retUsages.
 If there are any such usages, SECSuccess is returned, else SECFailure.

OK, I retract my statement above about certificateUsage != 0 being the AND case.
 Now I understand why that bitmask should be used to restrict the set of usages
to validate for performance reasons.

Is this (simplified code) correct:

usages = usage1 | usage2 | usage3;
rv = CERT_VerifyCertificate(cert, usages, &retUsages);
if (rv == SECSuccess) {
  if (usages == retUsages) {
    /* the cert verified for all requested usages, the AND case */
  } else {
    /* the cert verified for at least one requested usage, the OR case */
  }
} else {
  /* the cert failed to verify for any usage */
}
Ian,

#21

1) OK, I will change that comment

2) I never meant to implement the OR case with the existing patch, only the AND
case. My original idea was that passing in zero in the bitfield indicated that
no particular usage was required, and the call was for the sole purpose of
revocation checking. Do you think the OR functionality may be needed by
applications ?

I agree that we may eliminate the additional checkAllUsages argument by also
using that value to indicate that we want to return all usages. Let's analyze
the different behaviors with different sets of arguments.

Here is what my existing patch tries to do, given a list of arguments :

1) certificateUsage = 0, retUsage = NULL, checkAllUsages = PR_FALSE :
only do revocation checking

2) certificateUsage = 0, retUsage = NULL, checkAllUsages = PR_TRUE :
same as 1) . We can't return cert usages without retUsage being set

3) certificateUsage = 0, retUsage set, checkAllUsages = PR_FALSE :
a. do revocation checking
b. return a 0 bitfield since no usages were required or checked

4) certificateUsage = 0, retUsage set, checkAllUsages = PR_TRUE :
a. do revocation checking
b. lookup and return all cert usages, without considering their values in the
success or failure of the overall certificate verification

5) certificateUsage > 0, retUsage not set, checkAllUsages = PR_FALSE :
a. check and enforce requested usages
b. Do revocation checking if any single usage is not explicitly trusted

6) certificateUsage > 0, retUsage not set, checkAllUsages = PR_TRUE :
same as 5) . We can't return cert usages without retUsage being set

7) certificateUsage > 0, retUsage set, checkAllUsages = PR_FALSE :
a. check and enforce requested usages
b. Do revocation checking if any single usage is not explicitly trusted
c. return status only for requested usages

8) certificateUsage > 0, retUsage set, checkAllUsages = PR_TRUE :
a. check, enforce and return requested usages
b. Do revocation checking if any single required usage is not explicitly trusted
c. lookup and return all other cert usages, without considering their values in
the success or failure of the overall certificate verification

With your proposal to remove the checkAllUsages bool, the cases are simplified :

1) certificateUsage = 0, retUsage = NULL :
only do revocation checking

2) certificateUsage = 0, retUsage set :
a. do revocation checking
b. lookup and return all cert usages, without considering their values in the
success or failure of the overall certificate verification

3) certificateUsage > 0, retUsage not set :
a. check and enforce requested usages
b. Do revocation checking if any single usage is not explicitly trusted

4) certificateUsage > 0, retUsage set :
a. check and enforce requested usages
b. Do revocation checking if any single usage is not explicitly trusted
c. return status only for requested usages

The main thing that is lost with the new proposal is case 8) .
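Putting the simplified cases together, the new functions would look roughly like
this (the argument names, the exact width of the bitfield type, and the
certificateUsage* bit names are still tentative at this point):

typedef PRInt32 SECCertificateUsage;    /* bitfield of certificateUsage* bits */

extern SECStatus
CERT_VerifyCertificate(CERTCertDBHandle *handle, CERTCertificate *cert,
		PRBool checkSig, SECCertificateUsage requiredUsages, int64 t,
		void *wincx, CERTVerifyLog *log,
		SECCertificateUsage *returnedUsages);

extern SECStatus
CERT_VerifyCertificateNow(CERTCertDBHandle *handle, CERTCertificate *cert,
		PRBool checkSig, SECCertificateUsage requiredUsages,
		void *wincx, SECCertificateUsage *returnedUsages);

/* requiredUsages == 0, returnedUsages == NULL : revocation checking only
   requiredUsages == 0, returnedUsages != NULL : return all verified usages
   requiredUsages != 0, returnedUsages == NULL : enforce the requested usages
   requiredUsages != 0, returnedUsages != NULL : enforce the requested usages
                                                 and return their status     */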
Discussed this with Bob. The revocation checking should only be considered for
the non-explicitly trusted cert usages.
Assuming the checkAllUsages argument goes away, this brings the question of what
this API should do for the case where certificateUsage = 0 and retUsage = NULL .
Perhaps in fact it shouldn't do revocation checking for this case and just
return SECSuccess, since no cert usages are required or returned.
Attached patch updated patch (obsolete) — Splinter Review
1) remove checkAllUsages arguments
2) rename the certificateUsage / retUsage arguments to requiredUsages and
returnedUsages to be more self-explanatory
3) do CRL revocation checking outside of cert chain verification
4) do OCSP checking within the usage checking loop, and only once
Attachment #87346 - Attachment is obsolete: true
Blocks: 121916
I tried to debug this code. The caching of the issuer cert in the included
patches does not work, because the issuer is looked up in the cert chain loop,
and it also gets destroyed for each CERT_VerifyCertChain call. I'm only caching
the immediate parent issuer cert, not the whole cert chain. This probably is not
helpful.
By removing the issuer cert caching, I got the code to work with certutil on
Solaris.

I couldn't get it to work with PSM on OS/2 though, after making a small patch to
use the new function. It crashed there within the certchain function. The
debugger seemed to display incorrect line information (many comments were being
"executed" when I stepped over) so I couldn't get a good handle of what the
problem was.

I haven't successfully built PSM on any other platform than OS/2, so I can't
test my 2 patches together there.

I have opened a PSM bug on this, #121916.
Looks like this won't make it into Mach V unfortunately, so moving the target to
NSS 3.6 . This API will probably still undergo some changes to implement
multiple path checking, but I feel that this patch is still a good starting
point for the optimizations that are already included.
actually moving the target
Target Milestone: 3.5 → 3.6
As discussed in comment #26, the issuer cert caching didn't work, and I didn't
want to have to cache the entire cert chain, so I'm removing the code here.
Attachment #87464 - Attachment is obsolete: true
There was an assertion occurring in this code when passing in a usage of 0. This
is because I tried to check every cert usage's status to return them all in that
case. However, 3 usages cannot be checked :

certUsageAnyCA, certUsageProtectedObjectSigner, and certUsageUserCertImport.
So I had to ensure I did not try to check these.

I have changed the bitfield to a 64-bit type, as discussed in our meeting.

I have tested the function with a PSM patch as well as certutil, and it works fine.

The code is now checked in to the NSS tip (NSS 3.6).
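A minimal sketch of the fix (the mask name and the allUsages variable are made up;
the certificateUsage* bits are the bitfield counterparts of the enum values above):

/* usages that cannot be individually verified; skip them when the caller
   passes 0 and asks for all usages to be returned */
#define UNCHECKABLE_USAGES (certificateUsageAnyCA | \
                            certificateUsageProtectedObjectSigner | \
                            certificateUsageUserCertImport)

SECCertificateUsage checkUsages = allUsages & ~UNCHECKABLE_USAGES;  /* verify the rest */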
Status: NEW → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED
Follow-up to comment #31 :
certUsageVerifyCA is another usage that cannot be verified - it causes an
assertion. So this bit will now always be returned as false.

Blocks: 149834
The macro 'highestUsage' does not have a distinctive
prefix to avoid potential conflicts.  I renamed it
'certificateUsageHighest'.
Comment on attachment 87749 [details] [diff] [review]
patch that removes the issuer cert caching

>Index: certt.h
[...]
>+typedef PRInt32 SECCertificateUsage;

I just noticed that in the version of this patch
that was checked in, the SECCertificateUsage type
is defined as a PRInt64.

Using a 64-bit bitmap has the advantage of
allowing us to have more than 32 certificate usages,
but it may be a porting problem.  Strictly speaking,
we need to use NSPR's LL_ macros to operate on
PRInt64, and it is impossible to define macros as
64-bit constants on platforms that don't have a
64-bit integer type.

On the other hand, all of the compilers we are using
today have a 64-bit integer type (not necessarily
called long long), and long long became a standard
integer type in the 1999 revision of the C Standard
(C99).	So the porting problems of PRInt64 will go
away eventually.  Assuming we don't need to port NSS
3.6+ to older systems, I am just worried that compilers
for handheld or embedded systems may be slower in
supporting the C99 long long type.
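For reference, operating on a PRInt64 usage bitfield strictly portably means going
through the LL_ macros from NSPR's prlong.h, along these lines (sketch only; the
function name is made up):

#include "prlong.h"

/* return PR_TRUE if every bit in 'requested' is also set in 'returned',
   using only LL_ macros so this also compiles where PRInt64 is a struct */
PRBool
allUsagesPresent(PRInt64 requested, PRInt64 returned)
{
    PRInt64 missing;

    LL_AND(missing, requested, returned);  /* missing = requested & returned  */
    LL_XOR(missing, missing, requested);   /* bits requested but not returned */
    return LL_IS_ZERO(missing) ? PR_TRUE : PR_FALSE;
}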