Closed Bug 294531 Opened 19 years ago Closed 16 years ago

Design new interfaces for certificate path building and verification for libPKIX

Categories

(NSS :: Libraries, enhancement, P1)

3.10
enhancement

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: julien.pierre, Assigned: rrelyea)

References

(Blocks 1 open bug)

Details

(Whiteboard: PKIX NSS312B1)

Attachments

(12 files, 26 obsolete files)

Sun is currently in the process of developing a C version of the libpkix
library, which is a counterpart to the certpath API in the JDK. One of its key
goals is RFC3280 compliance.

libpkix has a very rich API, but one which is also too complex to expose
directly to all NSS applications.  We want to expose only the subset of
functionality which is needed. The goal is to deliver this new functionality in
NSS about 8 months from now.

It is already clear that the prototypes for the existing NSS functions
CERT_VerifyCert and CERT_VerifyCertificate will not be suitable. New APIs have
to be created.

The purpose of this bug is to discuss what the prototype for these new APIs
should look like.

Here is a list of the requirements that have been identified so far:

1) we should provide a function to do only verification on an already-built cert
chain. Currently, the APIs always build and verify. Building a chain from
the end-entity certificate is wasteful in many cases for S/MIME and SSL
protocols, since the chain is usually available up to the root CA. The chain
should be completed if needed, and then separately verified. This change
requires that the new NSS API take something other than one CERTCertificate*
pointer. We have multiple cert chain structures. Which one is the most appropriate?

2) we should provide a new NSS API to build a cert chain from a leaf, or from a
partial cert chain.

3) we should not eliminate the "build and verify" API. This is also useful, and
doing both together allows certain optimizations to be performed, if
both tasks truly need to be done. The current APIs fill only this need.

4) the new APIs should support non-blocking network I/O. Network I/O is
currently needed only for OCSP during verification, and is done only in blocking
mode. libpkix supports discovery of certs during building through LDAP and HTTP.
I/O is also needed to fetch CRLs during verification.

5) the new APIs should support proxies when doing network I/O. Currently, the
OCSP network code is hardcoded to go directly to the network, and doesn't work
with proxies. This is a big issue, in particular for the Mozilla browser.

6) the new APIs should support certificate policies and policy mappings. This
may require a separate argument to enable/disable mapping.

Any comments on this topic are welcome. Since we are designing new NSS APIs, any
deficiencies in the existing ones are fair game to fix.
> 1) we should provide a function to do only verification on an already-built cert
> chain. Currently, the APIs always build and verify. Building a chain from
> the end-entity certificate is wasteful in many cases for S/MIME and SSL
> protocols, since the chain is usually available up to the root CA. The chain
> should be completed if needed, and then separately verified. This change
> requires that the new NSS API take something other than one CERTCertificate*
> pointer. We have multiple cert chain structures. Which one is the most
> appropriate?

I'm not against separating chain building from chain verification, but I don't
think we can get away with not doing chain building for S/MIME or SSL. It is not
always true that the complete chain is sent. In fact the root is usually
excluded. Even if the complete chain is sent, it may not be the appropriate
chain for the receiving application. This would be particularly true for
applications which are using libpkix with cross certificates.

#2) Has to at least work from a leaf cert (sometimes that is all you have to
work with). I think that passing in a 'suggested chain' to start with can be a
good thing, but it shouldn't break if the 'suggested chain' is broken, out of
order, or contains spurious certs (as long as an unbroken chain can be constructed).

If we support cross certificates, we probably need to have chain-caching APIs.

The rest sounds like at least a good starting point.

bob
QA Contact: bishakhabanerjee → jason.m.reid
Priority: -- → P1
Target Milestone: --- → 3.12
Here is a draft of what I'm proposing we expose in 3.12 as the new way to verify certs/cert chains. Bob, Nelson, please review and comment.

typedef PRUint32 CERTStores;
#define PKIX_LDAP 0x1  /* fetch certs/CRLs from LDAP optionally */
#define PKIX_HTTP 0x2  /* fetch certs/CRLs from HTTP optionally.
                          Not in the 3.12 release. */

typedef PRUint32 CERTRevocation;
#define PKIX_CRLS       0x1  /* do CRL checking on whole chain */
#define PKIX_OCSP       0x2  /* do OCSP checking on leaf only */
#define PKIX_OCSP_CHAIN 0x4  /* do OCSP checking on whole chain */

typedef void PKIXContext; /* opaque type for future use */
typedef void PKIXNBIOContext; /* opaque type */

/* Use to get one or more PRPollDesc* out of the nbiocontext before PR_Poll */
SECStatus CERT_GetPollDescs(PKIXNBIOContext* context, PRUint32 maxDescs,
                            PRPollDesc* out);

/* New function based on libpkix implementation.
   - can verify from a partial chain rather than just a single EE cert
   - optionally returns the chains that it built and verified
   - supports bridge CAs (multi-path) when building
   - supports LDAP
   
   - supports policy OIDs
   - supports either blocking or non-blocking I/O mode for LDAP & OCSP   
   - is extensible through an optional parameter object
   - trust domain can be added later if needed through the optional
     buildparam object
   
   Arguments:
   * leaf and chain:
   - if leaf is supplied but chain is NULL, a chain will be built from that
     leaf certificate
   - if leaf is NULL and chain is supplied, this is assumed to be a
     pre-built chain, but not necessarily a complete one. The first cert is
     assumed to be the leaf. This function will first attempt to complete the
     chain up to a trusted root, but if it can't, it will try alternate paths
     from the leaf
   - if both leaf and chain are supplied, the chain is assumed to not contain
     the leaf. Behavior is as if only chain was supplied with leaf as the first
     cert
   
   - either leaf or chain can be supplied
   
   * if verifyOnly is set, then no path building will be done, only verification
     
   * stores
     Bit-field of cert stores. PKCS#11 is implied. LDAP optional. HTTP in
     the future.
     
   * revCheckers: bitfield indicating whether to do CRL checking, OCSP
     checking, or use other methods in the future.

   * policyOIDs and supportedPolicies:
     policyOIDs is an optional array of policy OIDs to check. supportedPolicies
     returns the policies that were actually supported in the chain.
     If the application wants to require all these policies, it should leave
     supportedPolicies NULL
   
 */

extern SECStatus
CERT_VerifyPKIXCertificate(CERTCertificate* leaf, CERTCertList* chain,
                           CERTCertList** retchain, PRBool verifyOnly,
		           SECCertificateUsage requiredUsages,
                           SECCertificateUsage* returnedUsages,
                           PRTime time,
                           CERTStores stores, CERTRevocation revcheckers,
                           CERTVerifyLog *log,
                           PKIXNBIOContext** nbiocontext,
                           SECItem** policyOIDs, PRBool** supportedPolicies,
                           PKIXContext* extraParams, void* wincx);
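
For illustration, a blocking-mode call against the draft prototype above
might look like this (a sketch only; peerChain and pinArg are hypothetical
caller-side values, and error handling is elided):

/* Verify a pre-built chain received from an SSL peer, for SSL server
   usage, with CRL checking, in blocking mode (nbiocontext == NULL). */
SECCertificateUsage returnedUsages = 0;
CERTCertList* builtChain = NULL;
SECStatus rv = CERT_VerifyPKIXCertificate(
    NULL,        /* leaf: NULL because a pre-built chain is passed */
    peerChain,   /* chain: first cert is assumed to be the leaf */
    &builtChain, /* retchain: receive the chain actually verified */
    PR_FALSE,    /* verifyOnly: allow the chain to be completed */
    certificateUsageSSLServer, &returnedUsages,
    PR_Now(),
    0,           /* stores: PKCS#11 only, no LDAP */
    PKIX_CRLS,   /* revcheckers: CRL checking on the whole chain */
    NULL,        /* log */
    NULL,        /* nbiocontext: NULL selects blocking mode */
    NULL, NULL,  /* policyOIDs, supportedPolicies: no policy checks */
    NULL,        /* extraParams */
    pinArg);     /* wincx */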
I'm responding here to feedback I got from Richard via e-mail:

JP>/* Use to get one or more PRPollDesc* out of the nbiocontext before PR_Poll */
JP>SECStatus CERT_GetPollDescs(PKIXNBIOContext* context, PRUint32 maxDescs,
JP>                            PRPollDesc* out);

RF>One or more? We block on at most one poll descriptor. I'm not sure where or how 
RF>we might return more than one. 

It's true that libpkix currently only has one operation blocked on I/O (even though it may keep several descriptors alive). However, we should allow for the possibility of more than one pending operation in the future, even though this is not supported by libpkix yet.

RF>We also support policy constraints, inhibit-any policy, and policy mapping extensions. Do you want to mention them? 

re: policy constraints, what would be the choices the application would be faced with? Honoring or not honoring the extension? Can't we simply always honor it if it's present in the cert? RFC3280 says this may be non-critical or critical. If it's non-critical there might be a case for ignoring it, but it can't be that common. There isn't anything in any of the NIST test suites that requires ignoring policy constraints, is there?

I feel more strongly about the inhibit-any-policy extension. The RFC says the extension must be critical. So if it's present, we should always disallow the match to the anyPolicy OID.

On the other hand, policy mapping is always non-critical. There is probably a good case for having an option to enable or disable honoring the policy mappings.

JP> * policyOIDs and supportedPolicies :
JP>     policyOIDs is an optional array of policy OIDs to check. supportedPolicies
JP>     returns the policies that were actually supported in the chain.
JP>     If the application wants to require all these policies, it should leave
JP>     supportedPolicies NULL

RF>The path validation algorithm, as published in RFC3280, allows for inputs
RF>user-initial-policy-set, initial-policy-mapping-inhibit,
RF>initial-explicit-policy, and initial-any-policy-inhibit. The first is a set
RF>of policies, with the default being { ANY }; the other three are booleans.
RF>Do we want to support any of these?

initial-policy-mapping-inhibit is what I just discussed above - whether we want to honor the policy mapping extension or not. It makes sense to add this as an option.

initial-explicit-policy states that at least one policy must be supported. As I had defined the API above, it was all policies or none (just return their status). It makes sense to also offer the choice of requiring at least one. Thanks for pointing this out.

initial-any-policy-inhibit says to never honor the generic "any policy" OID if it's in a cert. In other words, it's as if the "inhibit-any-policy" was included. That's probably a good option to have for government customers.

I propose to deal with this by adding a bitfield argument to specify these various policy processing options.

typedef PRUint32 PolicyOptions;

#define PKIX_NO_POLICY_MAPPING 0x1
#define PKIX_NO_ANY_POLICY 0x2
#define PKIX_REQUIRE_ONE_POLICY 0x4
#define PKIX_REQUIRE_ALL_POLICIES 0x8

The PolicyOptions argument can be added to CERT_VerifyPKIXCertificate before the policyOIDs argument.

This addition brings another change. Since there is now an argument for policy processing, there is no longer a need for implied behavior about required policies based on whether supportedPolicies is set. The default is to require no policies. policyOIDs are just checked, and the status is returned in supportedPolicies. supportedPolicies can be set to NULL only if PKIX_REQUIRE_ALL_POLICIES is set, or if no policyOIDs are passed in.
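
For example, a caller wanting government-style policy processing might
combine the bits like this (a sketch against the definitions above):

PolicyOptions opts = PKIX_NO_ANY_POLICY |     /* never match anyPolicy */
                     PKIX_REQUIRE_ONE_POLICY; /* at least one OID must hold */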
It occurred to me that another thing is missing: an API to interrupt the verification, if it is in non-blocking mode, and the caller is no longer interested in the result. This could be useful for example in the case of an SSL connection that has been dropped in the middle of a handshake, while the certs are still being verified. The interface to destroy the context is needed to avoid a leak. I will add this in the next draft.

Another thing is related to the HTTP client interface. I'm not sure any change is needed here. In 3.11.1 we are adding a way to register a global HTTP client for the application. This is probably good enough for 99% of the cases. But there may be some in which one wants to register alternate clients. The "extraParams" could be used to register a specific one for the operation. I'm just not sure it's needed.
Attachment #209931 - Flags: superreview?(rrelyea)
Attachment #209931 - Flags: review?(nelson)
Comment on attachment 209931 [details] [diff] [review]
New API interface as a patch to cert.h

With some comments:

PolicyOptions.
We have a bunch of mutually exclusive bit masks. It might be more rational to have a fixed value for the options with optional modifying bits:
PKIX_NO_POLICY_MAPPING  1
PKIX_REQUIRE_ONE_POLICY        2
PKIX_REQUIRE_ALL_POLICIES 3

PKIX_POLICY_TYPE_MASK 0xff
PKIX_NO_ANY_POLICY     0x100
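
With that encoding, a caller would pull the two parts out separately, e.g.
(a sketch using the values above):

PolicyOptions opts = PKIX_REQUIRE_ONE_POLICY | PKIX_NO_ANY_POLICY;
PRUint32 policyType = opts & PKIX_POLICY_TYPE_MASK;    /* fixed value: 2 */
PRBool   inhibitAny = (opts & PKIX_NO_ANY_POLICY) != 0; /* modifier bit */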

CERT_GetPollDescs needs more explanation, but I think it looks good to get you moving forward.

bob
Attachment #209931 - Flags: superreview?(rrelyea) → superreview+
libpkix supports LDAP either by following cert issuer URLs in AIA/SIA cert extensions, or by searching for certs in explicitly configured LDAP servers. The first method is already supported in the new API I have proposed, simply by turning on the LDAP bit in the "stores" argument. Do we want to support the second method? My feeling is that we don't, at least at this time. But if there is a good reason for that feature to be exposed now, then it should be in the initial new public API. Comments welcome.
Another question which my current API proposal doesn't cover is whether we want to allow the NIST policy to be settable in the public API. See bug 233806 for more details. The most basic requirement is that if there is no revocation information available (no CRL), then the validation should fail. The other considerations are about the date of issuance of the CRL (thisUpdate) in relation to the date that the cert is being verified at, and its validity period, in order to determine whether a CRL should be considered too old to be used, even if it is present.

I'm thinking we probably want to add one option bit at most for this.
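
For instance, it could be one more bit in the PolicyOptions style proposed
earlier (the name below is hypothetical, not part of any patch):

#define PKIX_REQUIRE_FRESH_REVOCATION_INFO 0x10 /* hypothetical: fail if no
                                                   usable revocation data */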
Any comments on this topic are welcome.
Severity: normal → enhancement
I'm an applications programmer with an interest in security, but I'm not your typical security geek.  In general, I find security APIs to be much too complex to be understood or used by mere mortals and this proposal is no exception.  Please don't take offense at that statement -- you're just doing what everyone else in the security industry does and rarely will you run across someone like me who will point out the problem. :-)

Recall that about 95% of end-users mistakenly think of security as a binary option -- you have it or you don't (and those annoying security pop-ups prevent you from seeing what you want unless you hit "OK" so that's always the right thing to do :-).  System administrators will do their best to present a binary security option to end-users.  Coming up with a way for system administrators of all skill levels to do the right thing is a real challenge -- the default has to be correct in such a way that only system administrators who have unusual requirements need to change any settings.  Finally, the programmers who integrate SSL into their software are often little better than the users in terms of security knowledge.  They have a TCP connection and just want to sprinkle "SSL" pixie dust over it to make sure it's secure (indeed the success of SSL is due largely to the fact it falsely gives programmers the impression they can do exactly that :-).

So designing a security API really needs to take into account the "secure-by-default" concept, given the limitations of all three types of consumers, and you have to do your best to make sure it's difficult for any of them to make a security mistake through ignorance.  That's hard to do, but I find the thought process interesting and challenging.

The two most interesting cases are:
1. dumb-user, dumb-sysadmin, dumb-programmer.

So assume nobody working with the system understands what any of the parameters to CERT_VerifyPKIXCertificate mean.  How can the API design minimize the chances anyone will compromise the system through a coding or configuration error?

2. dumb-user, smart-sysadmin, dumb-programmer.

In this case the sysadmin has lots of funky options she wants to set which the programmer doesn't understand.  How can you design the API so the programmer will expose those options to the sysadmin without creating a security flaw in the default behavior?

I'd break the parameters down as follows:
1. Inputs which must be specified or it won't work.
2. Inputs with "secure" defaults that a smart-sysadmin or
   smart-programmer may override.
3. Outputs which must be checked or it won't work.
4. Outputs which can be ignored if you don't understand them.

For type 1, you need to tell the dumb programmer where to get the values.  These should probably be parameters to the operational call.
For type 2, my inclination would be to put them in a versioned parameter structure and provide calls to initialize/dispose it.  If possible stick to only true integers (no bitmasks) and simple string inputs (e.g., comma-separated ASCII/UTF-8) so the dumb-programmer has minimal work to make them available to the smart-sysadmin for override.
For type 3, fold them into a single error return value if possible so the dumb-programmer doesn't make a mistake.  For example, instead of expecting the caller to check the OID policies on return, have the caller supply parameters to state which OIDs are necessary and fold those into the function success/fail return.
For type 4, stick them in the parameter structure (or in a return parameter structure).
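
A versioned type-2 parameter block along these lines might look like the
following (purely illustrative; none of these names exist in NSS):

/* Hypothetical parameter block: versioned so that fields can be appended
   without breaking old callers. */
typedef struct CERTVerifyParamsStr {
    PRUint32 version;             /* set by the create call, checked by NSS */
    PRTime   time;                /* default: now */
    PRUint32 revocationMode;      /* default: the secure setting */
    char*    requiredPolicyOIDs;  /* comma-separated ASCII OIDs, or NULL */
} CERTVerifyParams;

CERTVerifyParams* CERT_CreateVerifyParams(void); /* fills in secure defaults */
void CERT_DestroyVerifyParams(CERTVerifyParams* params);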

Finally, I like the async support.  But I've been doing a lot of calls which support both sync and async lately, and the lesson I've learned is that if possible the async support should be completely invisible to the sync-only programmer.  Putting the PKIXNBIOContext in a parameter structure (see 2 above) would suffice.  While one parameter that's NULL for sync operation isn't unacceptable, it still annoys the sync-only programmers, so try to do better if possible.

Oh, and a call with 16 parameters, most of which are not straightforward to understand, is daunting to the typical programmer.  Again, the parameter structure approach is a good way to trim that.
Chris, thanks for the input.

Most of what you said applies to Application UI and certainly application functions. This particular function we are talking about, however, is a pretty low level function. It really only applies to people building secure protocols. The average application programmer would not likely call this function directly, he's more likely to call something at the SSL or S/MIME layer, where the protocol has already been worked out.

That being said, some of your input is applicable in the sense that we need to understand what we want at the UI and application API level. Some of the new functionality supplied by this function will need to be surfaced. (Async I/O, for instance, is the result of wanting to allow the SSL API to support real non-blocking sockets instead of the current blocking model.)

As far as the "dumb-user, dumb-sysadmin, dumb-programmer" scenario goes, no one in that category should be designing new secure protocols, and thus should not be calling this function.

bob

QA Contact: jason.m.reid → libraries
Blocks: 217267
Taking
Assignee: bugzilla → nelson
Comment on attachment 209931 [details] [diff] [review]
New API interface as a patch to cert.h

I'm finally beginning to review this patch for the first time.  
Here are observations/comments/questions.

1. CERT_VerifyPKIXCertificate appears to be intended as a replacement
for CERT_VerifyCertificate, ONLY.  Presumably we also need replacements for :

CERT_VerifyCertChain       (if this is truly public)
CERT_VerifyCert            (maybe not needed ?)
CERT_VerifyCertNow         (maybe not needed ?)
CERT_VerifyCertificateNow
CERT_VerifyCACertForUsage

2. Does the new interface honor "slop time" ?
Do we want to have a slop time argument as one of the arguments to 
this function, rather than having it be a global value?

3. 16 arguments!  A new record (for NSS).  Wow!  uh, ...
Maybe some/all of them should go into a structure, and the address
of that structure be passed?  This especially makes sense to me
in the non-blocking IO case, where the function will be called
repeatedly with essentially the same values each time.  If they're
kept in a structure, they don't have to be copied on to and off of 
the stack with each invocation.

4. Is any of this implemented?  Or is this merely a proposal for 
code to be implemented?  I fear the latter.

5. Perhaps this is more of a general libPKIX question than a question
about this particular proposed API: Does libPKIX honor all our trust
flags?  Many of our trust flags are overrides for various potential
failures in the chain validation algorithm.  Does/will libPkix honor 
them all as the old code does?

6. While we're redesigning the CERT_Verify* interfaces, we need to 
solve the problem of being able to filter/verify by Key Usage as 
well as by Extended Key Usage.  Today's SECCertUsage type values 
reflect EKU (e.g., SSL server), but not KU (e.g. signature, or key
agreement).  Consequently, when we know we need a server cert with 
a signature key usage, or with a key agreement key usage, we have 
no way to specify that.  A cert with the wrong KU but the right EKU
will verify when it should not.  

Perhaps one way to solve that is to define some new values for the 
types SECCertUsage  and SECCertificateUsage.  One could imagine 
certUsageSSLServerSigner and certUsageSSLServerEncryption.  Or perhaps
we need to add an explicit keyUsage argument to the function's list 
of arguments.  

7. we may also need a way to specify a requirement of, or prohibition of,
the NR key usage.  

8. This API addresses code that calls the cert validation API directly.
What about application code that uses libSSL or libSMIME?  

- How does an application tell libSSL that it requires a certain policy 
or policies?  
- How does an application tell libSSL that it wants libSSL to return a 
list of the policy OIDs for which the chain is valid?
Here are some more questions about the integration of libPKIX with NSS.

1. NSS will have two sets of cert verification APIs, the old set and the new 
one.  The two sets may come to different answers for the same question (same 
input chain).  The NSS libraries themselves sometimes call the cert 
verification functions, and applications may also call them directly.  
Because the two sets can come to different conclusions, it seems to me that 
the application and NSS should agree on which set they are using, old or new, 
and both use the same set, so that they get the same answers.  Given that:

- How does the application tell the NSS libraries to use the new set instead
of the old set?   
- Is that setting global for the whole application?  
- Is it per-thread (perhaps recorded in thread-local-storage)?
- Is it per-socket (settable as a socket option) ?

2. The thought has occurred to me of having a way for the old API to be 
reimplemented to work by calling the new API.  It occurs to me that it might
be possible to have a separate new function that sets the values for some of
the new API arguments separately from the call that invokes the validation.
This might enable us to get down to having a single implementation of cert
chain validation, so we don't have to support the old and new implementations
going forward.

Imagine (for example) that we have one or more functions by which the app 
records (in thread local storage, say, or globally) the policies it wants
to require, and the types of stores it wants to search.  Then later the 
app calls good old CERT_VerifyCert and that function is able to pass the
previously stored values for the policies, etc.  We would choose suitable
defaults for the new arguments in the case where the app has not chosen to
set them.  

- Do we want to do this?  
Comment on attachment 209931 [details] [diff] [review]
New API interface as a patch to cert.h

See my previous comments in this bug about why I am giving this proposal r-.
Attachment #209931 - Flags: review?(nelson) → review-
Whiteboard: PKIX
Quick question about policies:

  One of the parameters that the libPKIX validation process produces is the validated policy tree. I'm wondering if there is going to be a use for the policy tree in the browser.

One application that I can think of is to use the tree in the UI, to show the user why a particular policy and its modifier was picked. If there is no use for it, we can limit the API to return only the set of valid policy OIDs of the leaf certificate. Please note that the policy tree is not available if the build/validation process failed.

Any comments?

I noticed that the return value of the function is not defined. I propose that the validation function return PR_SUCCESS if a validation evaluation could
be completed (irrespective of whether the resulting cert is trusted or not).

The failure status codes would then be limited to non-PKI/crypto related
values such as:
 SEC_ERROR_INVALID_ARGS    // arguments don't meet spec
 SEC_ERROR_IO


We should try to future-proof this function by making the argument list somewhat extensible. In other words, take the arguments out of the formal parameter list and use some auxiliary data structure to pass in the arguments.

In looking at Julien's proposal: we can see that there are several
classes of arguments to this function:

extern SECStatus
CERT_VerifyPKIXCertificate(
   CERTCertificate*     leaf,           // target cert
   CERTCertList*        chain,          // 'useful information' for validation algorithm
   CERTCertList**       retchain,       // result
   PRBool               verifyOnly,     // behavior flag
   SECCertificateUsage  requiredUsages, // parameter to validation algorithm
   SECCertificateUsage* returnedUsages, // result
   PRTime               time,           // parameter to validation algorithm
   CERTStores           stores,         // 'useful information' for validation algorithm
   CERTRevocation       revcheckers,    // behavior flag
   CERTVerifyLog *      log,            // result
   PKIXNBIOContext**    nbiocontext,    // context info
   SECItem**            policyOIDs,     // parameter to validation algorithm
   PRBool**             supportedPolicies,// result
   PKIXContext*         extraParams,    // ?
   void*                wincx           // context info
);

Next, most of the inputs to the validation algorithms are optional. We can thus specify them as an extensible array of inputs. And we could combine the inputs which are parameters to the validation algorithm with their respective result data structures.

Same goes for the 'context' parameters. We should bundle them all into an 
extensible array so they can be optional.


Next, the extraParams parameter. In many Microsoft APIs, they leave a parameter as 'reserved', only to give it special meaning later when a certain flag is set. We should consider this mechanism in NSS so that we can avoid
introducing new functions with new signatures in future versions.


There is also one flag which changes behavior: verifyOnly. I would prefer
a generic 'flags' argument, of which this is just one bit. Again, it future-
proofs the API by adding the ability to add additional flags without changing
the function signature.


Incidentally, it might also be instructive to take a look at the Microsoft
APIs to see what their semantics are like - maybe they have thought of
something we haven't. A good starting point might be:

http://msdn2.microsoft.com/en-us/library/aa377163.aspx

More comments to come.
Returning PR_SUCCESS when there's any problem whatsoever with the certificate violates the secure-by-default principle, so I disagree that's a good idea.  When (not if) someone uses this API who doesn't fully understand it, they might just check the return value and nothing else.  In that case, the API should fail safe, meaning PR_SUCCESS should only be returned on complete and unconditional success.  It should only be possible to determine partial success based on analysis and understanding of the exact status result and/or other parameter returns.

I agree about moving the bulk of the parameters to an extensible data structure.  That's a good design to reduce the number of parameters and plan for extensibility without cluttering the argument list with "reserved" arguments (I dislike the latter approach to API extensibility as including the ", 0" is annoying and thus has a negative impact on API usability and perception).

Also agree about using a generic flags parameter for extensibility.

If we're pointing to existing work in this area, Apple has a rather sophisticated API:

http://developer.apple.com/documentation/Security/Conceptual/CertKeyTrustProgGuide/index.html

They take the additional approach to extensibility of creating a context for the trust verification operation and then performing the action separately, so it is easy to add new API entry points to apply additional policies or constraints.
For reference, this is the header file describing the API that 
Alexei once proposed for the new NSS cert validation API.
I think I agree with Steve's proposal to separate the success/failure of 
the operation from the resultant outcome of the operation.  

We've been doing that in more and more new APIs.  We have APIs for finding
objects that do this, separating the success/failure outcome from the 
number of results returned.  This allows us to distinguish between 
- failure to complete the search because of some problem (such as out of memory)
- successful completion of the search, with zero results found.

In this case, it allows us to separate
- failure to determine the validity of the certificate as good or bad, vs
- successful determination that the certificate was bad

I think that, as long as the documentation of the new functions is absolutely
clear about how to interpret the return value (success/failure) separately
from the output test outcome (good/bad cert), it should be OK.  
Steve,  
As you may know, NIST/NSA very much desires to require (in the next version
of FIPS 140) that modules never allow signature keys to be used for decryption
and never allow decryption keys to be used for signatures.  This will mean an
end to private keys that are both signing and decryption keys, and an end to
public keys that are both signature verification keys and encryption keys. 
For these purposes, an NR key is considered a signature key.

In preparation for that, we need to ensure that the new API has the ability 
for the caller to always specify whether he needs the cert chain to be 
validated for signing, or for encryption, or for NR, or for some combination
thereof.  That should be specified separately from the "usage" (e.g. SSL
server, ssl client, email, etc.)  
Assignee: nelson → sparkins
I think I found an additional requirement for the new extended CERT_Verify* functions.

We recently introduced a new global setting for OCSP behaviour, see the definition of SEC_OcspFailureMode at http://lxr.mozilla.org/seamonkey/source/security/nss/lib/certhigh/ocspt.h#314

While PSM wants to globally use the "graceful failure mode" (called ocspMode_FailureIsNotAVerificationFailure), for most verifications, things are different for verifying EV certs.

In my understanding, a cert may only be considered as "valid for EV" after having successfully verified the cert is not revoked.

When PSM calls into a CERT_Verify* function, it requires a mechanism to specify "really use OCSP". This can be achieved by ensuring that OCSP is globally enabled, in combination with specifying the OCSP "hard failure mode".

Thinking how this works with regards to the OCSP cache, it would be OK if the cache contains a "recent good response", but a "cached recent failed attempt" must result in a new attempt to obtain OCSP status, and only a successful verification may result in a positive return value from CERT_Verify.


When brainstorming about this, one more detail comes to my mind.

In theory, a CA is free to use either CRLs or OCSP for providing revocation information.

From the application's point of view, what we really require is: "check a recent source with revocation information to ensure the cert is not revoked". The application should not have to worry whether NSS uses OCSP or a recent CRL to do that check.

We could use a more abstract parameter like:

enum {
  noSpecialRevocationCheckingRequirements,
  requireSuccessfulVerificationAgainstRecentSourceOfRevocationInformation
} whatToDoWithRegardsToOCSPandCRLs;

In reply to comment 21, Kai, you wrote:

> Thinking how this works with regards to the OCSP cache, it would be OK if 
> the cache contains a "recent good response", but a "cached recent failed 
> attempt" must result in a new attempt to obtain OCSP status, and only a 
> successful verification may result in a positive return value from 
> CERT_Verify.

I think (not sure) you are suggesting that, for EV validations we should
have a mode that says "if you don't have a currently valid cached OCSP 
response (positive or negative), then you must do a fresh OCSP request now, 
regardless of how recently you did the previous attempt."  If that's what
you're saying, I disagree with it.  If you're saying something else, please
help me better understand your suggestion.
(In reply to comment #22)
> I think (not sure) you are suggesting that, for EV validations we should
> have a mode that says "if you don't have a currently valid cached OCSP 
> response (positive or negative), then you must do a fresh OCSP request now, 
> regardless of how recently you did the previous attempt."  If that's what
> you're saying, 

Yes, that's close to what I'm saying. Although I might be ok to say
  "if we have a very recent failure cached, let's not try again,
   and immediately return a verification failure 
   (because OCSP connectivity is broken)."


> I disagree with it.

Why?
We designed a minimum cache update request interval into the cache 
precisely for this situation, precisely to limit the frequency of 
requests to the OCSP responder.  I see absolutely no reason to violate
that for EV certs.  

The "hard failure mode" should ensure that if the cache contains a
"recent request failed" status, and the minimum time until the next
request is permitted has not yet elapsed, then the result (for hard
failure) should be to report a negative OCSP response (e.g. revoked,
NOT validated).  

Consider the case of an EV web page from one server that has image tags
that fetch images from another, second, EV server.  When that happens,
as the browser reads the base html document and finds all those image 
rags, it initiates some number of parallel connection attempts to the 
second server to fetch the images.  Sadly, it doesn't attempt to wait 
until a first request completes, ensuring that a session is established
before sending the other requests.  Consequently, every one of those 
parallel connections may do a full RSA handshake, which gets a cert chain
and tries to validate it.(*)  In such a case, we may attempt to validate 
the same cert chain repeatedly in the same second.  Now, imagine that we 
are unable to reach the OCSP responder for the issuer of the cert for the 
second server.  IINM, the present cache design will cause us to make a 
single OCSP request, and use the result of that (even if the result is
"attempt failed") for the immediately subsequent validation requests.
I think your proposal would cause us to send N OCSP requests in rapid
succession.  IMO, that's bad.  

Note (*): if we did delay all but the first image request to a second 
server, to give the first connection a chance to finish the RSA handshake
before the others are sent, and thereby attempt to avoid multiple full
RSA handshakes, that would help ONLY if the first handshake succeeded.  
If the first handshake fails, (say, because the cert chain cannot be 
validated with OCSP), then all the subsequent attempts will do full RSA
handshakes, even though they were delayed.  My only point here is that the
existence or lack of a request delay doesn't change the requirements of 
the OCSP cache, IMO.
Please note,

in comment 21 I'm asking for an addition to the Cert Verification API, allowing the application to override a global graceful failure mode, and requiring a hard failure mode for the scope of the cert verification function call.

I am also proposing that we abstract from OCSP and generalize this to OCSP-or-recent-CRL.


Comment 22, comment 23 and comment 24 should be seen as a little side-track discussion about an implementation detail of the OCSP cache behaviour.

Nelson, I propose we discuss this implementation detail outside of this interface-design bug. In general I'm OK with your proposal to extend the minimum cache time to hard failure mode and EV certs. However, I'm worried that a minimum time of one hour is bad in situations where an OCSP responder is down for one minute (or unreachable because of a network hiccup), because it will break EV for a full hour.
Steve,  I understand that you have coded a new API for use by PSM as 
partial fulfillment of this RFE.  I understand that RH desires that it
be committed before you leave for vacation at the end of this week.
But we have not yet seen a specification of the new API, much less a 
patch.  Please attach them to this bug ASAP so we can start to review them.
Summary: Design new interfaces for certificate chain building and verification → Design new interfaces for certificate path building and verification for libPKIX
BobR gave me some pointers today on how to clean this up, and I have done
the best I can.

Note: I will be leaving for vacation on Sunday afternoon 8/19.
Attachment #277193 - Flags: review?(nelson)
Comment on attachment 277193 [details] [diff] [review]
Steve's patch implementing 'fake' ev check, and basic calls into libpkix

I tried this patch, and I think it's mostly what we want for our testing purposes.

My only comment is: I want to propose a slight enhancement to the INSECURE_OID_CHECK mode

In order to correctly simulate 
  "is the issuer cert in my list of issuers and OIDs"
I'd require the function to return the issuer.

Right now, when using the "hack" mode, this patch will never return the issuer.

I would like to propose that, when in the hack mode, the function use the classic NSS code to obtain the chain and issuer.

This allows me use code at the PSM level that does this final "is in my list" check.

With this addition, I think I'll be able to finalize the PSM code - besides the list of EV issuers and their OIDs.

I will attach a slightly enhanced patch.
Steve's patch has this block of code:

+    /* Currently, this block does not set the outputTrustAnchor */
+    /* XXX This needs to go away */
+    if (mode == EV_HACK_INSECURE_OID_CHECK) {
+	if (params == NULL) {
+	    PORT_SetError(SEC_ERROR_INVALID_ARGS);
+	    goto cleanup;
+	} else {
+	    for (i=0; params[i].type != cvpt_end; i++) {
+		if (params[i].type == cvpt_policyOID) {
+		    param = &params[i];
+		}
+	    }
+	    if (param == NULL) {
+		PORT_SetError(SEC_ERROR_INVALID_ARGS);
+		goto cleanup;
+	    }
+	    if (cert_hasPolicy(cert,param->value.si) == SECSuccess) {

### Kai's marker

+		r = SECSuccess;
+		goto cleanup;
+	    } else {
+		PORT_SetError(SEC_ERROR_INADEQUATE_CERT_TYPE);
+		goto cleanup;
+	    }
+	}
+    }


I added a marker in the middle of his block.

In this updated patch I'm attaching, at the marked position, I added the following code:

+                CERTCertList *certChain = NULL;
+                CERTCertListNode *node = NULL;
+                CERTCertListNode *next_node = NULL;
+                certChain = CERT_GetCertChainFromCert(cert, PR_Now(), 
+                                                      certUsageSSLServer);
+                for (node = CERT_LIST_HEAD(certChain);
+                     !CERT_LIST_END(node, certChain);
+                     node = next_node) {
+                  next_node = CERT_LIST_NEXT(node);
+                  if (CERT_LIST_END(next_node, certChain)) {
+                    /* We arrived at the top level cert */
+                    *outputTrustAnchor = CERT_DupCertificate(node->cert);
+                  }
+                }
+                CERT_DestroyCertList(certChain);
Attachment #277486 - Flags: review?(nelson)
Attachment #277486 - Attachment is patch: true
Attachment #277486 - Attachment mime type: application/octet-stream → text/plain
Blocks: 374336
This RFE started out requesting the design of a new general purpose, very
flexible, libPKIX-based cert verification API.  It was intended that that 
new API would be usable by any & all applications that use NSS today, 
although it was not expected that all NSS apps would migrate to it soon.  
Many of the comments in this bug serve to collect the requirements of 
that final API.  

Now, it seems that several other different goals have arisen, that are 
being tracked in this bug.  They include:
- an interface specifically for the needs of PSM, long term
- an interface for the needs of PSM, very short term ("hack" or "fake" API).

There is a patch attached to this bug that is labelled as a "fake" API. 
It seems clear to me that this API is intended to be an API for PSM, and 
that it is NOT intended to be that most flexible new one-size-fits-all 
cert verification API.  But it's not clear to me if this is thought to be
adequate for NSS 3.12 RTM and FF3 RTM, or if it is only a very short term
temporary interim solution.  Please clarify that for me.

I would like to suggest/request the following changes to how we're approach
this.

1. Make a new RFE that requests an API specifically for PSM.  
2. The function(s) in that API should have the letters PSM in the name.

3. We should declare that it is private to PSM and is unstable, meaning that
in any future release, the function signature may be changed in a way that 
is not backwards binary compatible.  This gives PSM the flexibility to 
evolve the API over time.  Perhaps the function(s) name should include 
"PSM" "Private" and "Unstable" to underscore these points.  

4. If we really are thinking of two separate APIs, one "hack" for advanced
UI testing, perhaps even before libPKIX is ready, and also another long-term
PSM API, we should probably have two RFEs and two function names reserved for
that purpose.  
Nelson,

Regarding comment 30, item 3, many OSes including Solaris and Linux deliver NSS separately from the client apps. So if we change the signature in a way that is binary incompatible, we will have crashes. Unless there is a way for the application to dynamically determine the signature of the function before making the actual call. Even then, the application would not crash, but just not be able to run anymore, because we would break binary compatibility. It would be much preferable not to go there. I don't remember a precedent to us doing this with our shared libraries, and I don't think we should start it now. We should have a single one-size-fits-all new PKIX API, not a hack. If PSM wants to have a private function, maybe it should link libpkix statically itself. But IMO, such a hack does not belong in NSS shared libraries.
Just to be clear, the function design is exactly that general mechanism.

Steve is proposing an interface that allows future expansion of the types of arguments it will accept without changing the signature.

The 'hack' nature is the current implementation. The offending code is clearly ifdef'ed, and expected to be ifdef'ed out when 3.12 GAs.

There are no API differences between the short term and the long term; only the function semantics are different (controlled by the environment variable).

Please look at the API itself to make sure it's sufficient for the general needs (not just PSM's needs).

bob
Comment on attachment 277193 [details] [diff] [review]
Steve's patch implementing 'fake' ev check, and basic calls into libpkix

The function API proposed in this patch does not appear to address numerous
requirements given in the early comments in this bug.   The API does not 
even provide equivalent capability to the old CERT_VerifyCert, much less
a new superset of those old functions.  

Here are SOME (a few) of the requirements that appear to be lacking from
the new API.  This is not a complete list.  These are just the first 
things that come to mind.

1. A way to input the required cert usage type (SSL server, SSL client, 
Email signing, Email encryption, etc.)  These things map onto EKUs.

2. Support for an "error log" or equivalent.

3. A way to output the constructed and verified cert chain(s).

4. A way to input a (partial or complete) cert chain, such as the chain
received from the remote SSL peer, or the chain from a PKCS#7 signature,
rather than using the old approach of dumping all the certs into the 
"temp DB" bucket.

5. Support for non-blocking verification, which necessitates a way to 
output (and then input) a context that preserves state from call to the 
next. 

6. A way to input Key Usages (separate from EKUs).  

7. A way to output the list of certusages for which the chain is valid
(like CERT_VerifyCertificate).

8. A way to output the list of policies for which the chain is valid. 

(this list is not complete)
Attachment #277193 - Flags: review?(nelson) → review-
Comment on attachment 277486 [details] [diff] [review]
Steve's 'fake' patch v2, added fake issuer cert output


This patch is a derivative of the preceding patch, which has received a 
negative review, and so this patch will also receive an r-.

However, there is an additional issue with this patch, not present in the
preceding patch.  This patch inserts a call to CERT_GetCertChainFromCert 
into the new function CERT_PKIXVerifyCert.  But CERT_GetCertChainFromCert 
does not (CANNOT) always return the same chain that libPKIX constructs 
and verifies.  

CERT_GetCertChainFromCert is defined to return a cert chain, even if it 
cannot be validated, even if it does not chain up to any root.  It is
used to construct cert chains to send out (e.g. with client auth certs
in SSL client auth) where the chain may be incomplete in the DB.  
(Recall that relying parties must validate chains, but subject parties,
who send out those chains, need not validate their own chains before
sending them.)  

So, my point is that one cannot assume that the chain returned from 
CERT_GetCertChainFromCert is complete or that it is the same chain as
would have been constructed and verified by libPKIX.  

Now, perhaps in EV_HACK_INSECURE_OID_CHECK mode, that is not an 
important consideration.  In that case, it may be OK for CERT_PKIXVerifyCert 
to call CERT_GetCertChainFromCert in EV_HACK_INSECURE_OID_CHECK mode.
Attachment #277486 - Flags: review?(nelson) → review-
NSS developers and users should be aware that there are some existing NSS 
functions for building cert chains and finding cert issuers that DO NOT 
(indeed CAN NOT) use the same algorithm as libPKIX, and therefore will on
occasion return a different chain or different issuer than the chain or 
issuer that libPKIX would have found.  It may be unwise to use those old
functions in conjunction with cert chains validated with libPKIX.

The definition of CERT_GetCertChainFromCert does not require that the 
returned chain be complete, or that it must be verified.  Likewise, the 
definition of CERT_FindIssuer does not require that the 
input cert must be part of any validated cert chain.  Consequently those
functions find an issuer (or a chain of issuers) using the old PKCS#11 
method, even in the world of libPKIX.   

The ONLY way to find the issuer of a cert, or to find the whole chain of 
issuers of a cert, that is the SAME issuer (chain) that was found and 
verified by libPKIX is to have libPKIX actually construct and verify a 
chain, and then output that chain.  Then finding a cert issuer can be done
by finding the desired cert in the chain output by libPKIX, and then 
finding its issuer in that chain.  

That is why CERT_PKIXVerifyCert needs to be able to output a cert chain.

We may also need to have a new function similar to CERT_FindIssuer that 
either calls libPKIX or else takes as an input a chain that was 
output by libPKIX, and returns the issuer found in that chain.
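
Given a chain output by libPKIX as a CERTCertList, such a lookup could be a
simple list walk using the existing cert-list macros (a sketch; no such
helper exists yet):

/* Return the issuer of 'cert' within a chain already built and verified
   by libPKIX, or NULL if cert is the last (root) element or absent. */
static CERTCertificate*
FindIssuerInChain(CERTCertList* chain, CERTCertificate* cert)
{
    CERTCertListNode* node;
    for (node = CERT_LIST_HEAD(chain); !CERT_LIST_END(node, chain);
         node = CERT_LIST_NEXT(node)) {
        if (node->cert == cert) {
            CERTCertListNode* issuer = CERT_LIST_NEXT(node);
            return CERT_LIST_END(issuer, chain)
                   ? NULL : CERT_DupCertificate(issuer->cert);
        }
    }
    return NULL; /* cert not found in this chain */
}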
Other API design considerations:

I think the idea of a cert verification API that takes SOME of its input 
arguments in an extensible data structure/object/array is a good idea.
It has the potential to allow us to grow the capabilities of the function
over time without breaking binary (or source) compatibility with old callers.  

But I do not think that ALL function parameters should be passed that way.
Any cert verification option has some REQUIRED parameters.  IMO, those should 
be function arguments, rather than optionally settable components in that new
data structure/object/array.  Here are some reasons why I say that.

1. As function arguments, it is clear to the writer of code that calls the 
function that those values MUST be passed to the function.  Writers of applications that will call this function will be much less likely to forget
to pass required parameters if the compiler forces them to do so.  

2. There is no need to write code in CERT_PKIXVerifyCert that verifies that all 
the required parameters were passed, given that the compiler will do so.
(however, NULL checks and other validity checks will still be needed.)

Further, I would STRONGLY prefer that this new extensible object (data 
structure) be an opaque structure with setters and getters, and NOT be an 
array whose contents are not well ordered, and that do not have fixed 
indexes.  

The code needs to be able to get its arguments (including those passed via 
this extensible structure/object) efficiently.  The array design in the 
preceeding patches requires that functions search through the array, element 
by element, checking the "type" of each element in the array, until they find 
the element of the desired type, for EACH and EVERY argument that can be 
passed in the array.  The act of discovering that a particular type has NOT 
been passed in the array requires searching the entire array, end to end.  

In contrast with that, an opaque structure with named members, or an array 
where the type is defined as the index into the array, (inside the opaque
object type) rather than being a value in each member of the array, allows 
the code to obtain arguments, and to discover that an argument is present 
or absent, without doing any searching of an array.  
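
The cost difference is easy to see in a sketch (hypothetical types and enum
values, reusing the illustrative CERTValParam idea from earlier in this bug):

/* Tagged-array design: O(n) scan per lookup; proving that a type was NOT
   passed requires walking the whole array. */
static const CERTValParam*
FindParam(const CERTValParam* params, int wantedType)
{
    int i;
    for (i = 0; params[i].type != cvp_end; i++) {
        if (params[i].type == wantedType)
            return &params[i];
    }
    return NULL;
}

/* Type-as-index design inside an opaque object: O(1) lookup, and absence
   is just an empty slot (CERTParamBlock and slots are hypothetical). */
static const CERTValParam*
GetParam(const CERTParamBlock* block, int wantedType)
{
    return block->slots[wantedType]; /* NULL if never set */
}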
(In reply to comment #34)
> However, there is an additional issue with this patch, not present in the
> preceding patch.  This patch inserts a call to CERT_GetCertChainFromCert 
> into the new function CERT_PKIXVerifyCert.  But CERT_GetCertChainFromCert 
> does not (CANNOT) always return the same chain that libPKIX constructs 
> and verifies.  
> 
> ... [snip] ...
>
> Now, perhaps in EV_HACK_INSECURE_OID_CHECK mode, that is not an 
> important consideration.  In that case, it may be OK for CERT_PKIXVerifyCert 
> to call CERT_GetCertChainFromCert in EV_HACK_INSECURE_OID_CHECK mode. 
> 


This addition was only done and intended for the special hack mode, which will get removed once the correct implementation gets delivered.


(In reply to comment #35)
> The definition of CERT_GetCertChainFromCert does not require that the 
> returned chain be complete, or that it must be verified.

Nelson, please let's distinguish between 

  "we work on an correct implementation" 
  (which is not what I had attempted in that modified patch) 

and 

  "we want a code path enabled only 
   by a scary environment variable 
   that helps us with early testing".

I'm not trying to question your other statement around the API and the patch, just this one.
On review, the primary problem with the API was the lack of generalized output parameters.

Other changes from Steve's proposed function:
1. I moved the trustAnchor to the
generalized output parameters (only specialized callers need it).
2. I specified a lot more parameters. They may not be right, but they show 
the basic flexibility of the API.
3. It's clear we really need to specify and return arrays in this block,
so I added a count value so we can do so.
4. It became clear that most of the parameters we are passing are parameters that a given application really wants to 'set and forget'. Our current API does this, but has the drawback that you can't change the defaults for a specific verification. Because of this, I added a 'Set' function which takes a generalized input parameter list.

Some self comments... 
   On closer examination, the CertUsage should probably just be a regular parameter. It's more symmetric to make it one of the passed-in parameters, but
if you make it a regular one, then 90% of the calls would not have to pass
in any of the generalized input parameters. It should stay as a generalized
output parameter, since only specialized usages want it returned.
   At first glance, the association of the policyOIDs with the policyFlags
appears to call for a combined structure, a la Alexei's proposal. However, the normal use case is most likely specifying a bunch of policyOIDs, all with the application-default policyFlags. The cases where you really do want several
different policyOIDs, each with its own flag, can be handled with this API as is. (Question: do we want the ability to specify a bunch of OIDs, but only one policyFlag [not the application default] that affects all the OIDs? That seems like it might be a pretty common case.)
 I split the input and output generalized parameter structures up. It helps documentation, as some parameters are both input and output, but beyond that I can't
think of another good reason to keep them separated. For aesthetic purposes I
arranged for the output parameter values to have the same type values as their
input parameter values; if there is a programmatic reason for them to be the same, then clearly the two lists should be merged back into one.

Finally, some words about non-blocking I/O.
Clearly the application has to be involved in non-blocking operations, so it seemed reasonable to require the caller to explicitly ask for the nbio context in the generalized output buffer.

If the caller asked for an nbio context, and the operation has to block, the
Verify call will return SECWouldBlock with the nbio context filled in.

I did not specify any accessors for the sockets to block on yet (but they are clearly needed in an nbio implementation).

On retry, the verify function is called with the nbio context (which is opaque) as the only generalized input value. The caller should pass in the same output
parameter block as in the previous call (as those parameters have not yet been
filled in). On SECWouldBlock, repeat this paragraph.

On completion (SECFailure/SECSuccess), the nbio context is freed and the rest
of the generalized output parameters (if any) are filled in.

To abort the operation, pass the nbio context in the nbioAbort parameter block.
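
Put together, the flow above would look roughly like this at a call site (a
sketch; the five-argument signature and the parameter-block contents are
assumptions consistent with the attached proposal, not final):

SECStatus rv = CERT_PKIXVerifyCert(cert, usage, inParams, outParams, wincx);
while (rv == SECWouldBlock) {
    /* outParams now carries the opaque nbio context; wait for I/O
       readiness (socket accessors are still unspecified), then retry
       with the context as the only generalized input value. */
    rv = CERT_PKIXVerifyCert(cert, usage, nbioRetryParams, outParams, wincx);
}
/* On SECSuccess/SECFailure the context has been freed and the remaining
   output parameters have been filled in. */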


bob
Attachment #278148 - Flags: review?(nelson)
I just read Nelson's comments about getters and setters. I actually prefer Steve's proposal for an array of input arguments. All the input arguments are needed anyway. If we were to go with the getter/setter method, we may as well just expose the raw PKIX APIs.

bob
Comment on attachment 278148 [details] [diff] [review]
Updated header for Steve's proposed function

These are my initial thoughts on this proposed API.  
This proposal is much closer to complete than the last one.

In no particular order:
1) namespace pollution: this patch introduces two new prefixes.
   Let's try to stick to SEC or NSS or CERT or some already used prefix.
2) typos: Some of the  input enums are named cvpto_ 
          Some of the output enums are named cvpti_
          The word "path" is spelled "patch" in at least one place.

3) Lots of type safety concerns.
   a) All pointers in the union of input param values should be pointers
      to const things.  But pointers in the union of output param values
      should be pointers to non-const things.  Greatest type safety 
      necessitates two unions for this purpose.  
   b) The input union should include a PRTime member, rather than relying
      on PRTime being a PRUint64.  cvpti_date should use it.
   c) The output union should include a pointer to CERTVerifyLog, and
      cvpto_errorLog should use that rather than a pointer to void.

   I'm concerned there are lots of ways for programmers to get this wrong,
   to forget array counts where they're needed, or to supply them where 
   they are not.

4) For output values, do you intend to have the function modify the values
   in the array of CERTValParamValues itself?  Or do you intend to have 
   the CERTValParamValues point to places where the outputs are placed?
   If you're going to modify them in place, then it seems the union doesn't
   need pointers to integer types.  If the function is not going to modify 
   the output array itself, then the output union should include 
   a (SECCertificateUsage *) type to return the set of valid cert usages.

5) We have 3 choices for how policy OIDs will be represented in this API.
   They are: 
	(a) DER-encoded OIDs in SECItems, 
	(b) ASCII OIDs in zStrings (e.g. "2.6.13.12345.1.2.3")
	(c) SECOIDTags (the set of which may be dynamically extended by
	    the application).

6) If we stick with DER-encoded OIDs, then cvpti_policyOID should perhaps 
   use a (const SECITEM * const *), that is, a pointer to a const array of 
   pointers to const secitems, rather than an array of (const) secitems. 
   The extra level of indirection allows us to avoid copying SECItems. 

7) I don't think you need an array of policy flags, that is parallel in 
   size to the array of policyOIDs.  I think one set of policy flags is 
   enough for all OIDs, but if I'm wrong about that, please enlighten me.
   I generally hate parallel arrays, and would much rather see a single 
   array of a struct type that includes one of every item that would have
   been in the parallel arrays (see the sketch at the end of this comment).

8) There are 3 types of usage. 
   cvpti_usage     is a SECCertificateUsage (64-bit bit mask)
   cvpti_keyusage  (not sure what type of value is used here. 
                    also, is it an array?  or one single value?  bits?)
   cvpti_extusage  (what is expected here?  another array of OIDs?)

9) I don't understand the meaning of cvpti_requireAllUsage.  It's a 
   single boolean. Does it refer to all 3 other types of usage? or ?

10) do we need an output key usage?  An output extusage (whatever that is?)

11) I think we MAY need another output type.  As I understand it, under
    some circumstances, it can output a tree of validated paths.  
    Perhaps that is represented as an array of validated chains?  (Alexei?)

12) This API continues the NSS 3.x model of a single global trust domain.
    Do we instead want to make the trust domain explicit, in Stan style?

13) Please append the word "flags" to any of those cvpt enums that is a
    mask of bit flags. e.g. cvpti_certStoreFlags cvpti_usageFlags

I'm going to ask Julien to also review this.
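
Regarding item 7, the combined element type could look roughly like this (hypothetical names, for illustration only):

typedef struct {
    SECOidTag oid;      /* the policy OID */
    PRUint32  flags;    /* the flags that apply to this OID */
} PolicyOidAndFlags;

/* one array of PolicyOidAndFlags instead of parallel
 * SECOidTag oids[] and PRUint32 policyFlags[] arrays */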
Attachment #278148 - Flags: review?(nelson)
Attachment #278148 - Flags: review?(julien.pierre.boogz)
Attachment #278148 - Flags: review-
I want to expand on one remark I made about type safety.  I wrote:

>   I'm concerned there are lots of ways for programmers to get this wrong,
>   to forget array counts where they're needed, or to supply them where 
>   they are not.

Programmers of software that uses NSS really hate it when NSS crashes.
Even when they have passed totally bogus arguments to NSS, they expect NSS
to return an error code instead of crashing.  I think this API is likely to
result in a lot of crashes.  I don't think we can detect that the caller 
has passed an integer where a pointer is expected, or vice versa, or has 
forgotten to initialize the array count.  It would be nice if the compiler 
(or a tool like Coverity) could detect some of this for the programmer.  
Thanks for your review, it answers lots of questions I had!
I'll wait for Julien's notes before I give a respin, but I'll answer
questions here.
> 1) name space polution: this patch introduces two new prefixes.
>   Let's try to stick to SEC or NSS or CERT or some already used prefix.
> 2) typos: Some of the  input enums are named cvpto_ 
>          Some of the output enums are named cvpti_
>          The word "path" is spelled "patch" in at least one place.

OK, I'll make that change (I'm leaning to CERT currently).

> 3) Lots of type safety concerns.
>   a) All pointers in the union of input param values should be pointers
>      to const things.  But pointers in the union of output param values
>      should be pointers to non-const things.  Greatest type safety 
>      necessitates two unions for this purpose.  
>   b) The input union should include a PRTime member, rather than relying
>      on PRTime being a PRUint64.  cvpti_date should use it.
>   c) The output union should include a pointer to CERTVerifyLog, and
>      cvpto_errorLog should use that rather than a pointer to void.

Sounds reasonable. I was worried about type safety, so I started down that
path. This also answers my question about whether or not the input and
output params should be 2 separate types. Now there's a strong
technical reason to keep them separate.


> 4) For output values, do you intend to have the function modify the values
>    in the array of CERTValParamValues itself?  Or do you intend to have 
>    the CERTValParamValues point to places where the outputs are placed?
>    If you're going to modify them in place, then it seems the union doesn't
>    need pointers to integer types.  If the function is not going to modify 
>    the output array itself, then the output union should include 
>    a (SECCertificateUsage *) type to return the set of valid cert usages.

For basic types (int, usage, PRTime), it would just set the value. For arrays,
or object types (CERTCertificate, CERTCertList, etc) the function would return
a pointer that the caller would then own. The pointer to int was for an array 
(though I think that was input only). I'd have to go back and see if we need
to return an array of ints. I know we were passing in an array of ints, but
that is likely to change based on this review.

> 5) We have 3 choices for how policy OIDs will be represented in this API.
>    They are: 
>         (a) DER-encoded OIDs in SECItems, 
>         (b) ASCII OIDs in zStrings (e.g. "2.6.13.12345.1.2.3")
>         (c) SECOIDTags (the set of which may be dynamically extended by
>             the application).
>
> 6) If we stick with DER-encoded OIDs, then cvpti_policyOID should perhaps 
>    use a (const SECITEM * const *), that is, a pointer to a const array of 
>    pointers to const secitems, rather than an array of (const) secitems. 
>    The extra level of indirection allows us to avoid copying SECItems.

Oops, my intention was to make OIDs SECOidTags rather than SECItems. I had
already suggested that to Steve.

> 7) I don't think you need an array of policy flags, that is parallel in 
>    size to the array of policyOIDs.  I think one set of policy flags is 
>    enough for all OIDs, but if I'm wrong about that, please enlighten me.
>    I generally hate parallel arrays, and would much rather see a single 
>    array of a struct type that includes one of every item that would have
>    been in the parallel arrays.

I don't think it's a common case. The question is whether it's a possible case.
I agree the common case is one set of policy flags for all the policyOIDs. (I
like the idea of the common case being easy, so I'll incorporate the
suggestion). Perhaps we can add a 'parallel' array
if we find a use case that warrants it later. (After all, that is the point
of designing the API the way we have.)

> 8) There are 3 types of usage. 
>    cvpti_usage     is a SECCertificateUsage (64-bit bit mask)
>    cvpti_keyusage  (not sure what type of value is used here. 
>                     also, is it an array?  or one single value?  bits?)
>    cvpti_extusage  (what is expected here?  another array of OIDs?)

I'll look up the correct values for the other 2. They should be our internal
representation for the corresponding certificate extension. extusage should
actually be extendedKeyUsage.

> 9) I don't understand the meaning of cvpti_requireAllUsage.  It's a 
>    single boolean. Does it refer to all 3 other types of usage? or ?

It was intended to apply to cvpti_usage as a boolean, though we could
extend it as well. It maps to our existing meaning of requireAll in the
current cert API. It's typically true, but may be set to false when
you are trying to identify all the usages for a given certificate.

> 10) do we need an output key usage?  An output extusage (whatever that is?)

If we make requireAllUsage apply to keyusage and extusage as well, then yes, we would.

> 11) I think we MAY need another output type.  As I understand it, under
>     some circumstances, it can output a tree of validated paths.  
>    Perhaps that is represented as an array of validated chains?  (Alexei?)

We can return arrays or just simple pointers. It would be good to nail
this down, though.

> 12) This API continues the NSS 3.x model of a single global trust domain.
>    Do we instead want to make the trust domain explicit, in Stan style?

As a public function, we have to use the 3.X model for now unless we start
exposing Stan APIs. (Currently all the Stan APIs are private.)

I think the latter is a whole discussion (and probably the subject of a new NSS release).

> 13) Please append the word "flags" to any of those cvpt enums that is a
>     mask of bit flags. e.g. cvpti_certStoreFlags cvpti_usageFlags
> 
> I'm going to ask Julien to also review this.


> I want to expand on one remark I made about type safety.

Let me think about that for a bit. In general, though, most users will
be calling this function with no parameters (well, if we move CERTUsage out).
The rest are likely to set only one or two parameters, which should be much easier to review (albeit still harder for tools to automatically detect). I
think we can probably do something that will help us catch the most egregious
errors without losing flexibility. (I'm thinking we group the union into
basic types, pointers, and arrays. We lose the ability to detect that a
CERTCertificate was passed instead of a SECItem, but we can handle an int
passed instead of a pointer.)

bob
I looked over the entire history of this bug, since I hadn't really participated much lately. I may be late to the battle, but here are my responses to various comments in this bug anyway.

Nelson, in comment 12 :

1. CERT_VerifyCertChain is not public
CERT_VerifyCACertForUsage / need to build fake cert, not sig. check ?

2. no slop time in the args
We could have this in the structure as an optional arg.

3. yes, use a structure for some of them

4. not implemented

5. we tried to honor them. obviously there were some bugs

6. sounds like we need boolean logic to specify the filters.
For AND conditions it should be doable with libpkix cert
checkers. For OR, maybe not.

Perhaps the answer is that the new API should completely
drop the SECCertUsage / SECCertificateUsage types, though
this will make migration harder.

7. again, doable through another checker, probably the
same as case 6.

8. there were other tasks left to convert other parts of
NSS to use the new API. It wasn't supposed to be
conditional.
But those questions should be handled in the application's
cert verification callback.
A new prototype for such callbacks is already needed to handle
other things like NBIO. See bug 324867, 288788, and 151010

Nelson, in comment 13 :

1.

It shouldn't be per-thread. There can be different pieces of
code written by different programmers running on one
thread.

It cannot be global either. Really it is up to each
programmer to decide whether to call the old API or the new
API. Some products may use multiple components that have a
mix of them. I don't see any way to prevent it. We need to
support such an environment.

Per-socket makes sense and should be doable through the
callback interface. The app could choose to use the old
limited callback interface, or the new one, and then use
the new PKIX functions with them.

2.
The original plan was to have the new API,
CERT_VerifyPKIXCertificate, call into libpkix, in 3.12 .

Then, in 3.13, CERT_VerifyCert and other legacy calls would
be reimplemented to use libpkix also, and not necessarily by
calling into CERT_VerifyPKIXCertificate, since the new API could
be sufficiently different not to allow that directly, even if
the level of functionality of the new API was greater.

It looks like the current plan has pulled this 3.13 task into
3.12 . The work for that was never scoped before.

Steve, in comment 16 :

Chris, in comment 17 :
I agree that it is better to return success (probably SECSuccess,
not PR_SUCCESS) only if the building and validation fully
succeeded.

We can get rid of the added 0 argument if we have an extensible
structure. Actually, we can go one step further and have a
versioned structure. That allows us to also get rid of fields
that may not be needed anymore when some new ones are added,
rather than having an ever-growing structure.

Nelson, in comment 19,

It should be possible to define the API so that the application
can differentiate those cases by getting a failure status,
and then inspecting the value of some output arguments, which
the API would be required to set.

Nelson, in comment 20,

It sounds like we need to have KU/EKU filters with the ability
to do NOT or XOR also.

Kai, in comment 21,

It sounds to me like it is not right for the new call to
support SEC_OcspFailureMode as a global setting. Multiple
libraries using NSS can be mixed in the same program and
may want different behavior. This should be in the argument
or structure. We can amend the definition of
SEC_OcspFailureMode to specify that it is a legacy call.
Or maybe we can allow an override only in the structure.
The same could be true about the HTTP client callback.


Nelson, Kai in comments 22 - 24 :
I agree the cache interval between requests should deal with
the IO error situations, and shouldn't be treated specially
for EV. But on the other hand, it would be OK to have an
override for the interval value in the API. Though perhaps
this is moot since it may be managed in the callback provided
by the caller anyway.

Bob, comment 32 :
Good. We can have a hack API in an alpha, but once we
go to beta or anything called feature-complete, the hack API
should be removed, and of course for GA/RTM.

Nelson, comment 35 :

Yes, unfortunately the old API prototypes have a lot of
limitations, and not enough arguments in general. The
behavior will be less than ideal. We could fix some of those
problems by storing some information either globally or
per-thread, and perhaps trying to poke into the libpkix
chain cache first, to try to get the same answer as the
other calls that may have been made prior to it.
But this would not be guaranteed.

And I think there are other reasons for the API to output the
chain - applications may want to display it for one.

Nelson, comment 36 :
I think we can deal with the issue of complexity of the
caller's code by having versioned structures that could
completely change size and content fields based on the
value of the initial version field. This is also used
by MS and IBM, as opposed to a "reserved" argument.

Bob, comment 39 :
Good call on freeing the NBIO context on success/failure - one
less API to expose.

Bob, comment 40 :
I agree we should not expose a bunch of getter/setter functions.
This will be harder to support.

Patch review to follow separately.
Attachment #278148 - Flags: review?(julien.pierre.boogz) → review-
Here are my comments about attachment 278148 [details] [diff] [review] :

1) in the comments, there should be something about the cert argument being the leaf cert

2) I don't really like the use of arrays. I think this will be a source of error for application users, and they are not very easy to use. Callers have to allocate an array of the proper size, initialize the values properly, and NULL-terminate it. This is extra code and an extra chance for mistakes. If possible we should come up with something easier.

3) Type safety.
Nelson is correct that callers will complain when they get crashes with invalid arguments. But we have an API/library with a lot of inputs and outputs, and we cannot get rid of all the functionality to simplify it all. I think one step to reduce the possibility of type errors would be not to use a union. If each field were named and typed, mistakes in the values would be less likely.

4) I don't see an option to specify the NIST/non-NIST policy

5) Maybe we should have a way of passing in slop time as an override (see previous discussions)

Here is a different suggestion that doesn't use dynamic arrays or unions :

#define CERT_VPC_END 1
#define CERT_VPC_NBIOCONTEXT 2
#define CERT_VPC_CERTLIST 4
#define CERT_VPC_POLICYOIDS 8
#define CERT_VPC_DATE 16

typedef struct {
    PRInt32 version; /* initial version (0) includes all the fields below.
                      * other versions could be totally different */
    PRInt64 flags;   /* bit flags that the caller must set to specify the
                      * parameters that he wants to include */

   void* nbiocontext;
   CERTCertList* certList;

   unsigned char numOIDs;
   PolicyOIDAndFlags policies[MAX_POLICY_OIDS];

   PRTime date;
   /* etc */
} CERTValInParams;

With this approach, there is a flat fixed-size input structure. All the parameters are strongly typed. The API only has to look at the fields that correspond to the flags that are passed in. The main downside is the arbitrary limit on the number of policy OIDs the caller could specify, or the number of key usages, etc., when there is more than one. But we could choose values that are large enough, or just go to the v2 structure to increase them if needed.

We can use the same approach for the output parameters.
Thanks, Julien.

1) if you mean the name of the argument, I think that's easy to fix.

4) what is NIST/non-NIST policy, and is it something that can be specified
by a particular set of OID flags (that is, is it simply preset bits in the OID flags, or does it imply something more -- a different semantic?).

2, 3, 5)

I intend to unroll the union a bit, which would make it more strongly typed than it is, though I don't think I want to completely unroll it. I think we can get back enough type safety to catch most major errors (int versus pointer, for instance).

After seeing code that sets the arrays before a call and code that uses getters/setters, the latter is actually more cumbersome for the programmer, and a single function that has a billion options is fairly confusing.

Remember, the model is that most callers would set one or 2 options (at the most) and take the defaults for all the rest, and in the *very* common cases none of the array options should be set. I'll attach a new version shortly with yours and Nelson's input.


Bob,

4) it's a different semantic, basically not failing if we have no revocation method available

I am not proposing we have getter and setter functions. If a programmer wants to set just one or 2 options, all he has to do is 
a) allocate the whole structure (possibly on the stack since it's a fixed-size struct)
b) zero it .
c) set the version (initially, this can be skipped, since it's 0)
d) set the bits in the "flags" field to specify the 1 or 2 options
e) fill the fields in the structure that correspond to those options

This doesn't require a union and I don't see how it could be simpler with the array/stack allocation.
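
A minimal sketch of those steps (the verify function name CERT_VerifyNew is a placeholder):

SECStatus rv;
CERTValInParams params;

PORT_Memset(&params, 0, sizeof(params)); /* (a) + (b): stack allocation, zeroed */
                                         /* (c) version is 0 initially, so skipped */
params.flags = CERT_VPC_DATE;            /* (d) name the one option being set */
params.date  = PR_Now();                 /* (e) fill the corresponding field */

rv = CERT_VerifyNew(cert, usage, &params, NULL /* no out params */);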
Oh, ok, so it's a revocation flag.

ah, so the problem with that is it's not extensible.

One of the criteria was that we could handle new parameters in the future. If we unroll the entire structure, then either we can no longer add new parameters or the size of the structure has to be variable (meaning we need to alloc and free it).

Also, this means we can't have '0' as an explicit parameter value, because in this scheme it means 'default'.

I intend to unroll the union a bit, so we get a certain amount of type-checking (pointers versus scalars).

bob
I am skeptical about the idea that most callers will not need to pass any
arguments.  All it takes is two callers that want different usages (SSL vs
SMIME or OCSP) and then one default does not fit all.  

There are many ways to make a struct effectively extensible.  One is to add
a generous array of reserved members at the end.  Another is to put explicit
version and/or length information at the beginning.  This allows the caller
to pass a fixed size array, and allows the callee to know where to stop in 
the caller's array.  I believe we have other examples of this in NSS now.

I do think that having a struct of named members of specific types is easiest 
for the app programmer and also most type safe.  
Bob,

The structure I proposed is extensible through versioning (see the version field at the beginning). Granted, that means there will be multiple versions of it in the future, but it still would need to be allocated only once by each caller, and not as an array.

Zeroing the structure only means 'default' by virtue of the fact that it sets the "flags" field to zero, which means none of the following field values in the structure are even considered by the API. We can use zero as an explicit value if flags is non-zero.
Nelson: My next iteration was going to make usage a normal parameter (I believe I said so in my comments), since almost all callers will include a usage.

As far as OCSP goes, most of the time you want to specify an OCSP setting app wide, and only in very specific places would you want to override that. In fact today we don't let you at all. Pretty much all the extra arguments are 'special use' arguments which only a few apps would care about, and then usually in only a few places.

While both yours and Julien's solutions give some future expandability, the reserved field idea limits how much we can expand, and the versioning idea means the application programmer needs to remember to set it when adding future fields (which seems more error prone than the complaints about arrays).

Julien, I was presuming you meant zero to be 'default' (ignore this parameter), but that means we can never have a parameter whose actual value is '0' (how do I distinguish between 'use the default flags' and 'turn all the flags off'?).

Finally, the structure method means I have to check each structure element in the actual parsing code. In the current scheme, I only take arguments as exceptions to the default.

bob
Bob,

I think the array vs. version error-proneness is debatable. Personally I prefer the version scheme. With an array you might still need to change the individual element type if you need something bigger. Unless you use pointers inside the array element, but then it becomes even uglier for the caller.

As to the zero/exception, I guess I haven't been conveying what I meant well enough. With my scheme, the API code does not need to parse the whole structure. It only needs to parse the version field, to know what structure type to use, and then the "flags" field, to know which flags inside the structure to actually look at, which are then treated as exceptions to the default.

If you look at my example from comment 45, if the caller does not set CERT_VPC_DATE in the flags for instance, then the API does not look at the date field of the structure. The value of date does not have to be zero in that case, it is just ignored. Zero could potentially be a valid explicit value for date.

Basically, in my proposal, each flag maps to a given number of strongly-typed fields of the structure, whereas in yours, you add new dynamic elements to your array, each of which is weakly typed. But in both cases they should be treated as exceptions to the default.

> I think the array vs version error-proneness is debatable. Personally I prefer
> the version scheme. 

I was reading version number as a true version number, not as a bitmask of valid arguments.

> With array you might stil need to change the individual
> element type if you need something bigger. Unless you use pointers inside the
> array element, but then it becomes even uglier for the caller.

The arrays already have pointers in them for structure elements. Most arguments are natively pointers to structures to begin with (certs, certlists, etc). The array size would not change. It's already planned for.

> As to the zero/exception, I guess I haven't been conveying what I meant well
> enough. With my scheme, the API code does not need to parse the whole
> structure. It only needs to parse the version field, to know what structure
> type to use, and then the "flags" field, to know which flags inside the
> structure to actually look at, which are then treated as exceptions to the
> default.

Oh, so version isn't really a version, it's a bitmask of valid arguments? That limits the number of possible *types* of arguments to 32 or 64 (depending on the
size of the bitmask). I'm currently at 11 now... Might be OK, but I don't see it
as a huge advantage over arrays (used exactly as you say your structure would
be used). (I am a little worried about including a structural limit on our future
flexibility.)




I would like to describe how I would implement this API in object-oriented C++, and I'm translating this into C:

struct CertVerification_Request
{
  struct somePrivateNSSStruct *private_data;
};

struct CertVerification_Result
{
  struct somePrivateNSSStruct *private_data;
};


SECStatus CERT_Verify(CertVerification_Request *input,
                      CertVerification_Result **output);



CertVerification_Request myRequest;
CERT_InitRequest(&myRequest);

This produces a generic verification request, all set to defaults.

Now we offer a bunch of functions that can be used to stick additional parameters into the request object:

CertVerificationRequest_SetUsage(CertVerification_Request *req,
                                 usage a_usage);

CertVerificationRequest_SetTime(CertVerification_Request *req,
                                PRTime a_time);


When you are done adding all parameters, you call the verification function, and you'll get a result object.


You can obtain the various output parameters using accessor functions operating on CertVerification_Result.


With this approach you are avoiding all data structure versioning problems and arrays. You can implement the input and output transport structs however you like.
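
A usage sketch of this style (the accessor and cleanup names are made up to match the pattern, not proposed API):

CertVerification_Request myRequest;
CertVerification_Result *myResult = NULL;

CERT_InitRequest(&myRequest);
CertVerificationRequest_SetUsage(&myRequest, certificateUsageSSLServer);
CertVerificationRequest_SetTime(&myRequest, PR_Now());

if (CERT_Verify(&myRequest, &myResult) == SECSuccess) {
    /* hypothetical accessor on the result object */
    CERTCertList *chain = CertVerificationResult_GetChain(myResult);
    /* ... use chain ... */
}

/* hypothetical cleanup, one call per object */
CertVerificationResult_Destroy(myResult);
CERT_DestroyRequest(&myRequest);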
Attached patch Update header 2. (obsolete) — Splinter Review
This proposal still uses arrays, but all other issues have been addressed. Some of the type safety concerns have also been addressed by splitting the unions into 3 parts: scalars, pointers, and arrays. The underlying code can check at runtime if the caller mistakenly passed the wrong major category of argument.
Attachment #278148 - Attachment is obsolete: true
Attachment #279930 - Flags: superreview?
Attachment #279930 - Flags: review?(nelson)
Attachment #279930 - Flags: superreview? → superreview?(julien.pierre.boogz)
I like what Kai described in comment 54.  Bob's proposal in comment 55
looks complex at first but should be familiar to people who used PKCS #11
attribute templates before.

It's not clear how the runtime type checks will be performed.  Bob,
could you give an example?  It seems that we can detect null pointers
and arrays only if their uninitialized values happen to be 0.
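
One plausible shape for such a runtime check (purely illustrative, not from any patch): keep a table that classifies each parameter type by the union category it must use, and validate the obvious things up front.

typedef enum { arg_scalar, arg_pointer, arg_array } ArgCategory;

static const ArgCategory inParamCategory[] = {
    /* cert_pi_nbioContext */ arg_pointer,
    /* cert_pi_certList    */ arg_pointer,
    /* cert_pi_policyOID   */ arg_array,
    /* cert_pi_policyFlags */ arg_scalar,
    /* cert_pi_date        */ arg_scalar,
    /* ... */
};

/* At parse time the implementation can at least require that an
 * arg_array entry has a non-NULL array pointer and a sane arraySize,
 * and fail with SEC_ERROR_INVALID_ARGS rather than crash.  As noted
 * above, a garbage non-NULL pointer still cannot be detected. */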
Attached file food for thought (obsolete) —
During the NSS conference call I converted Bob's proposed header to this object-oriented style.

Bob correctly said this would require an init and one cleanup call for each verification.

Only as food for thought.
I believe this approach would be much more readable, and in my opinion the burden of the cleanup call would be acceptable.

In those situations where only simple parameters were used (no lists), no alloc/free activity will happen in the init/cleanup functions.
Bob,

No, version is really version.

My structure definition proposal had separate fields for version and flags.

The version field is the initial one and the only one required. The value of the version field defines everything that follows, including flags and everything else. For version = 0, it would be the initial structure proposed, that includes flags also. For version = 1, it could be anything else we want it to be.

In the structure version 0, the flags field is a way for the caller to tell which additional criteria he wants to set for the building/verification. I.e., each bit in the flags is equivalent to one new element in your array.

It can't be good that my proposal from comment 45 isn't getting understood. I'll give one more shot at explaining it.

/* Values for flags */
#define CERT_VPC_END 1
#define CERT_VPC_NBIOCONTEXT 2
#define CERT_VPC_CERTLIST 4
#define CERT_VPC_POLICYOIDS 8
#define CERT_VPC_DATE 16
/* etc */

typedef struct {
    PRInt64 flags; /* bit flag that the caller must set to specify the 
                    * parameters that he wants to include
                    */

    void* nbiocontext; /* only considered if flags & CERT_VPC_NBIOCONTEXT */
    CERTCertList* certList; /* only considered if flags & CERT_VPC_CERTLIST */

    unsigned char numOIDs; /* only considered if flags & CERT_VPC_POLICYOIDS */
    PolicyOIDAndFlags policies[MAX_POLICY_OIDS]; /* only considered if
                                                 flags & CERT_VPC_POLICYOIDS */

    PRTime date; /* only considered if flags & CERT_VPC_DATE */
   /* etc */
} StructV1;

typedef struct {
    PRInt32 version; /* defines format of everything below */
    union {
        StructV1 v1;
    } parameters;
} CERTValInParams;

Now, let's say it's NSS 3.13 time. We want to add stuff to the input .
We define new flags.

#define CERT_VPC_NIST 32

typedef struct {
    PRInt64 flags; /* bit flag that the caller must set to specify the 
                    * parameters that he wants to include
                    */

    void* nbiocontext; /* only considered if flags & CERT_VPC_NBIOCONTEXT */
    CERTCertList* certList; /* only considered if flags & CERT_VPC_CERTLIST */

    unsigned char numOIDs; /* only considered if flags & CERT_VPC_POLICYOIDS */
    PolicyOIDAndFlags policies[MAX_POLICY_OIDS]; /* only considered if
                                                 flags & CERT_VPC_POLICYOIDS */

    PRTime date; /* only considered if flags & CERT_VPC_DATE */
    PRBool NIST; /* only considered if flags & CERT_VPC_NIST */
} StructV2;

And we redefine CERTValInParams to be :

typedef struct {
    PRInt32 version; /* defines format of everything below */
    union {
        StructV1 v1;
        StructV2 v2;
    } parameters;
} CERTValInParams;

Then it's NSS 3.14 time . We decide we don't like bit flags anymore for some reason.

typedef struct {
    /* and now for something completely different. */
} StructV3;

typedef struct {
    PRInt32 version; /* defines format of everything below */
    union {
        StructV1 v1;
        StructV2 v2;
        StructV3 v3;
    } parameters;
} CERTValInParams;

I hope it's making sense now.
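
On the API side, the dispatch would be a single switch on the version field. A sketch (the function name cert_ProcessInParams is made up; illustrative only):

SECStatus
cert_ProcessInParams(const CERTValInParams *p)
{
    switch (p->version) {
    case 0: /* StructV1 */
        if (p->parameters.v1.flags & CERT_VPC_DATE) {
            /* use p->parameters.v1.date; other fields are ignored
             * unless their flag bit is set */
        }
        return SECSuccess;
    case 1: /* StructV2: also check CERT_VPC_NIST, etc. */
        return SECSuccess;
    default: /* version from a future caller we don't understand */
        PORT_SetError(SEC_ERROR_INVALID_ARGS);
        return SECFailure;
    }
}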
Attachment #279947 - Attachment is patch: false
Comment on attachment 279930 [details] [diff] [review]
Update header 2.

r=nelson
I'm suffering from reviewer fatigue.  So, this review is not as thorough 
as past ones.  The changes I suggest here are mostly cosmetic.

1. indentation 
Some of the new lines use spaces for indentation, others use tabs.
Please make them all use spaces, consistently.  

2. All the members of the enum CERTValOutParamType have names that
start with cert_po_ except one, which is cvpto_max. Please fix it.

3. I'd still like to see us eliminate the member named void * p 
in the union named pointer, and use pointers to opaque structure
types instead.  For example, the nbio context pointer should be 
an opaque structure rather than a void.  But I realize that this 
is only a symbolic fix, since someone could still use a pointer 
of one of the other types in the union.  

4. I think the #defines for the new flags could each use a bigger comment.
Attachment #279930 - Flags: review?(nelson) → review+
Comment on attachment 279930 [details] [diff] [review]
Update header 2.

There are syntax errors in this patch; to make it compile:


>+   cert_pi_extendedKeyusage= 7, /* specify what the extended key usage of the 
>+                                 * certificate will be. Specified as an array
>+                                 * of oidTags in value.array.oids. If not
>+                                 * specified, no extended key usages will
>+                                 * be checked. */.

Remove the dot at the end of the above line.


>+typedef struct CERTValParamInValueStr {
>+    union {
>+        PRBool   b;
>+        PRInt32  i;
>+        PRUint32 ui;
>+        PRInt64  l;
>+        PRUint64 ul;
>+        PRTime time;
>+    } scalar;
>+    union {
>+        const void*    p;
>+        const char*    s;
>+        const CERTCertificate* cert;
>+	const CERTList *chain

add semicolon above

CERTList is an unknown type, I guess you want CERTCertList?


>+typedef struct CERTValParamOutValueStr {
>+    union {
>+        PRBool   b;
>+        PRInt32  i;
>+        PRUint32 ui;
>+        PRInt64  l;
>+        PRUint64 ul;
>+    } scalar;
>+    union {
>+        void*    p;
>+        char*    s;
>+	CERTVerifyLog *log;
>+        CERTCertificate* cert;
>+	CERTList *chain;

again CERTCertList


>+    } pointer;
>+    union {
>+	void 	  *p;
>+	SECOidTag *oids;
>+    } array;
>+    int arraySize;
>+} CERTValParamOutValue;
>+
>+typedef struct {
>+    CERTValParamInType type;

CERTValParamInType is an undefined type, you want CERTValInParamType


>+    CERTValParamInValue value;
>+} CERTValInParam;
>+
>+typedef struct {
>+    CERTValParamOutType type;

CERTValParamOutType is an undefined type, you want CERTValOutParamType
Comment on attachment 279930 [details] [diff] [review]
Update header 2.

>+   cert_pi_policyOID       = 4, /* validate certificate for policy OID.
>+                                 * Specified in value.array.oids.  
>+                                 * Default is no policyOID */

>+typedef struct CERTValParamInValueStr {
...
>+	const SECOidTag *oids;
...
>+} CERTValParamInValue;


Are you sure you want to use SECOidTag?

If I understand correctly, that type is an enum. Don't we need a SECItem?
Comment on attachment 279930 [details] [diff] [review]
Update header 2.

>+ * for all out parameters:
>+ *  out parameters are only returned if the caller asks for them in
>+ *  the CERTValOutParam array. Caller is responsible for the CERTValOutParam
>+ *  array itself. The pkix verify function will allocate and other arrays
>+ *  pointers, or objects. The Caller is responsible for freeing those results.
>+ * If SECWouldBlock is returned, only cert_pi_nbioContext is returned.
>+ */

Who makes the decision whether the call to verify will potentially block?

The caller or the implementation of CERT_PKIXVerifyCert?

In your current design, 
if the caller does not include a type=cert_po_nbioContext member in the CERTValOutParam[] parameter, 
then the implementation of CERT_PKIXVerifyCert is unable to delay the operation.

In that case CERT_PKIXVerifyCert will either have to block
or return an error if it's unable to block.
This is an update to Bob's patch "Updated header 2." which contains my fixes to the syntax errors, and as suggested by Nelson I renamed cvpto_max to cert_po_max.

It also changes CERTList to CERTCertList.

It also changes "SECOidTag *oids" to "SECItem *oids".


I updated my patch for bug 374336, the PSM client code that wants to make use of this new API in order to do EV verification. I will attach a patch named "Patch v3" to bug 374336 momentarily. Feel free to have a look and confirm this is what you had envisioned.
Comment on attachment 281263 [details] [diff] [review]
proposed changes to "Update header 2."

Kai,
Two comments:
1) it is clearly the intent that the types be named ParamIn and ParamOut
rather than InParam and OutParam.  (note the strings _pi_ and _po_)
At a minimum, they should be consistent with the abbreviations.

2) NSS uses SECOidTags wherever possible, and not DER OIDs, so I'm sure
that it is right for the API to use SECOidTags, as Bob's patch did.
Indeed, I would have given it r- for DER OIDs in SECItems.  

SECOidTags represent a significant efficiency, an optimization.
The fact that libPKIX doesn't use them, and (worse yet) actually stores
OIDs as ASCII dotted-decimal strings, which it converts to DER OIDs
*EVERY* time it wants an OID for comparison, is one of the great 
inefficiencies of libPKIX that is on our hit list.  Let's not 
further entrench the inefficient methods.
(In reply to comment #65)
> (From update of attachment 281263 [details] [diff] [review])
> Kai,
> Two comments:
> 1) it is clearly the intent that the types be named ParamIn and ParamOut
> rather than InParam and OutParam.  (note the strings _pi_ and _po_)
> At a minimum, they should be consistent with the abbreviations.

Fine with me. So, the person who attaches the next patch revision should 
rename CERTValInParamType to CERTValParamInType and 
rename CERTValOutParamType to CERTValParamOutType


> 2) NSS uses SECOidTags wherever possible, and not DER OIDs, so I'm sure
> that it is right for the API to use SECOidTags, as Bob's patch did.
> Indeed, I would have given it r- for DER OIDs in SECItems.  

If I understand correctly, SECOidTags are index integers that point to the real DER OID stored internally in NSS.

If you require the interface uses SECOidTags, you are limiting the application to only use the OIDs that were known to NSS at compile time, right?


How does this relate to our short term plan to have PSM store the EV OIDs (and keep NSS away from the list of EV OIDs)?


> SECOidTags represent a significant efficiency, an optimization.

Sure, but unless I misunderstand, this is also a limitation.

If we want flexibility and efficiency at the same time, I propose the caller should have a choice of whether to pass in a SECOidTag-known-to-NSS or a DER-OID-unknown-to-NSS.
To answer my own comment 67:

I'm learning that NSS provides a mechanism for dynamically registering new OID tags at runtime.

If I understand correctly, PSM is supposed to register all OIDs in an init phase and afterwards use the SecOidTags.
Comment on attachment 279947 [details]
food for thought

obsoleting my proposal, as Bob has a reviewed patch
Attachment #279947 - Attachment is obsolete: true
Julien, are you willing to drop your alternate proposal in favor of the reviewed patch?

I think you were primarily concerned the caller will have to dynamically allocate memory. But I think the caller will be able to use automatic variables in most cases, as my example shows, see my patch v3 in bug 394733.
Updated version of Bob's patch attached:

(In reply to comment #66)
> Fine with me. So, the person who attaches the next patch revision should 
> rename CERTValInParamType to CERTValParamInType and 
> rename CERTValOutParamType to CERTValParamOutType

done


(In reply to comment #67)
> I'm learning that NSS provides a mechanism for dynamically registering new OID
> tags at runtime.

Switching back "oids" to type SECOidTag* => done
Attachment #281263 - Attachment is obsolete: true
Mostly whitespace changes to improve wrapping, and a couple of typo corrections.

Material changes are tightening down the comments, as
indicated with new sentences marked by >>

  cert_pi_policyOID       = 4, /* validate certificate for policy OID.
>>                               * Specified in value.array.oids. Cert must
>>                               * be good for at least one OID in order
>>                               * to validate. Default is no policyOID */
  cert_pi_policyFlags     = 5, /* flags for each policy specified in policyOID.
                                * Specified in value.scalar.ul. Policy flags
                                * apply to all specified oids.
>>                               * Use CERT_POLICY_FLAG_* macros below. If not
                                * specified policy flags default to 0 */
  cert_pi_keyusage        = 6, /* specify what the keyusages the certificate
                                * will be evaluated against, specified in
>>                               * value.scalar.ui. The cert must validate for
>>                               * at least one of the specified key usages.
                                * Values match the KU_  bit flags defined
                                * in this file. Default is derived from
                                * the cert usage */
  cert_pi_extendedKeyusage= 7, /* specify what the required extended key
                                * usage of the certificate. Specified as
                                * an array of oidTags in value.array.oids.
>>                               * The cert must validate for at least one
>>                               * of the specified extended key usages
                                * If not specified, no extended key usages
                                * will be checked. */

Not sure if the following was meant to refer to ku or eku.
So I took it out, since the caller can check the outparams
anyway.
  cert_pi_requireAllUsage = 10,/* Set to true if all cert usages are
                                  required to succeed the validation, or
                                  if only one is sufficient. Specified
                                  in value.scalar.b. default is false. */



I split out cert_po_usage as follows:

  cert_po_keyUsage        = 6, /* Return what key usages the certificate
                                * is valid for.
                                * Returned in value.scalar.usage */
  cert_po_extendedKeyusage= 7, /* Return what extended key usages the
                                * certificate is valid for.
                                * Returned in value.array.oids */


The only ambiguity I see is that it doesn't say how an array of OIDs
is terminated. We can add a clarifying comment later.
Attachment #279930 - Attachment is obsolete: true
Attachment #281270 - Attachment is obsolete: true
Attachment #281836 - Flags: review?(kengert)
Attachment #279930 - Flags: superreview?(julien.pierre.boogz)
Comment on attachment 281836 [details] [diff] [review]
Minor changes to kai's latest patch

As you're changing bits that Bob had proposed (especially since you're removing some bits), I think that Bob should do the final review.
Attachment #281836 - Flags: review?(kengert) → review?(rrelyea)
Change from last rev, after reviews from Bob on Friday:

 - add 'usages' output param back (I'd removed this while adding keyusage/
extended key usage).

Now we have three usages:
 - usages:              the SECCertificateUsage bitfield
 - keyusage:            key usage extension specified explicitly
 - extended key usage : extended key usage extension specified explicitly

The first is passed in as function parameter, 2 & 3 are optional

All three can be output in the output params if requested.
Attachment #281836 - Attachment is obsolete: true
Attachment #281836 - Flags: review?(rrelyea)
Attachment #282175 - Flags: review?(rrelyea)
This patch implements the hack according to the new API. It should be
enough for Kai to get it working for FF testing.
Attachment #282176 - Flags: review?(rrelyea)
Here's some sample code - as a patch to certutil enabling EV testing.
Comment on attachment 282177 [details] [diff] [review]
Sample code to test EV

I don't imagine that you're proposing to check this in, but if you are, r- for the overloading of a single-letter option.  NSS supports "long options" now, e.g. --longoption=value.
Please use them rather than overloading old options.

also, don't you need to explicitly zero out pi[2] and po[1]?  As automatic variables, they're not guaranteed to be zero'd.
(In reply to comment #76)
> (From update of attachment 282177 [details] [diff] [review])
> I don't imagine that you're proposing to check this is, 

Quite right.

> 
> also, don't you need to explicitly zero out pi[2] and po[1]?  As automatic
> variables, they're not guaranteed to be zero'd.

The cert_pi_end and cert_po_end 'types' do not have values defined.
We can think of them as being of 'void' type, and so the values are not inspected
or set.
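
For readers following along, the resulting calling pattern looks roughly like this (a sketch in the spirit of the header under review; exact enum names like cert_po_trustAnchor may differ, and cert, evPolicyTag, and pinArg are assumed to be set up elsewhere):

CERTValInParam  pi[3];
CERTValOutParam po[2];
SECOidTag       oids[1] = { evPolicyTag };
SECStatus       rv;

pi[0].type             = cert_pi_policyOID;
pi[0].value.array.oids = oids;
pi[0].value.arraySize  = 1;
pi[1].type             = cert_pi_policyFlags;
pi[1].value.scalar.ul  = CERT_POLICY_FLAG_EXPLICIT;
pi[2].type             = cert_pi_end;          /* terminator; value never read */

po[0].type             = cert_po_trustAnchor;  /* return the root that was found */
po[1].type             = cert_po_end;

rv = CERT_PKIXVerifyCert(cert, certificateUsageSSLServer, pi, po, pinArg);
if (rv == SECSuccess) {
    CERTCertificate *root = po[0].value.pointer.cert;  /* caller owns this */
    CERT_DestroyCertificate(root);
}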

Bob pointed out some error cases that I didn't handle properly.
Fixed.
Attachment #282176 - Attachment is obsolete: true
Attachment #282198 - Flags: review?(rrelyea)
Attachment #282176 - Flags: review?(rrelyea)
Comment on attachment 282198 [details] [diff] [review]
Update to implementation after in-person review from relyea

r+ rrelyea
Attachment #282198 - Flags: review?(rrelyea) → review+
Comment on attachment 282175 [details] [diff] [review]
FINAL changes to API

r+ (especially since I already approved the patch based on this api).
Attachment #282175 - Flags: review?(rrelyea) → review+
Bob, Steve, 
Please move all this new code from certvfy.c to a new file ASAP.
Perhaps you should move it to the new file certhigh/certvfypkix.c 
that you also created for this bug.

Also, please ensure that the bug number in the checkin comment is correct.
Thanks.
(In reply to comment #81)
> Perhaps you should move it to the new file certhigh/certvfypkix.c 
> that you also created for this bug.
> 
We were talking about this earlier today. I think I will need to make
use of some of Alexei's static functions, so this makes sense.

This fixes some bugs in the cleanup code for my new function
cert_GetTargetCertConstraints(), which could cause a crash when the
wrong type of object was freed.
Attachment #286316 - Flags: review?(rrelyea)
Comment on attachment 286316 [details] [diff] [review]
CertSelector leak fix

r+

I would also gladly entertain a patch to move this whole works over to certvfypkix as well.

bob
Attachment #286316 - Flags: review?(rrelyea) → review+
Assignee: sparkins → rrelyea
Whiteboard: PKIX → PKIX NSS312B1
Blocks: 390532
Blocks: 216832
RFC 3280 defines the path validation algorithm.  It defines several inputs
that the caller of that algorithm must supply.  They include (from page 67)

      (a)  a prospective certification path of length n.

      (b)  the current date/time.

      (c)  user-initial-policy-set:  A set of certificate policy
      identifiers naming the policies that are acceptable to the
      certificate user.  The user-initial-policy-set contains the
      special value any-policy if the user is not concerned about
      certificate policy.

      (e) initial-policy-mapping-inhibit, which indicates if policy
      mapping is allowed in the certification path.

      (f) initial-explicit-policy, which indicates if the path must be
      valid for at least one of the certificate policies in the user-
      initial-policy-set.

      (g) initial-any-policy-inhibit, which indicates whether the
      anyPolicy OID should be processed if it is included in a
      certificate.

At a minimum, the new API must provide a way for the caller to specify 
values for each of these inputs.  It should also define defaults for them
if the caller supplies no values.  I think the RFC suggests that the default
values for the last 3 of those inputs are false (or "off"); that is, that
no inhibiting should be done, and that explicit policy extensions are not required in intermediate CA certs.

Recently we identified that EV processing requires the caller to set 
initial-explicit-policy (to true), and it does not appear that 
CERT_VerifyPKIX provides an interface by which it can do so.  

So, we need to (a) ensure that there is a way to specify each of these 
values in the function call interface, and (b) that these are actually 
"hooked up" (actually affect the behavior of libPKIX).  

Also note that, when initial-explicit-policy is "off", checking the leaf 
cert to see if it is valid for the user-initial-policy-set appears to be
outside of the scope of RFC 3280's algorithm.  IOW, CERT_VerifyPKIX needs 
to check that the EE cert has a valid policy, rather than relying on 
libPKIX to do it in that case.
I want to make one clarification on explicit policy.  In today's conference 
call, it was suggested that if explicit policies are not required, then a 
cert without any policy extension is treated as a cert with an anyPolicy
extension.  Actually, the effect of not requiring explicit policies is much broader than that.  

Explicit_policy can be required in two ways:
- the caller sets initial-explicit-policy to true, or 
- an intermediate CA cert includes a policy constraints extension that 
forces explicit_policy.

UNLESS one or both of those things happens, then the algorithm will not fail 
due to the content of any policy extensions (or lack thereof) in intermediate 
CA certs.  Even if a CA cert has a policy extension that disallows the 
caller's required policy, or disallows the only policy allowed by a superior 
or inferior cert in the path, thereby causing a "NULL valid-policy-tree" to 
be created, this does not cause a path validation error unless explicit 
policy becomes required.

The final state of the "valid-policy-tree" affects the output of the algorithm. 
It tells which policies are valid for the path, once validated.  That set
of valid policy outputs can be empty if the valid-policy-tree is NULL (empty) 
at the end of the algorithm.  But a NULL valid-policy-tree will not cause any 
failure unless explicit policy is required by either the caller or some CA in 
the path.  So, apart from the valid policy tree outputs, it may be said that 
the algorithm ignores policy extensions in intermediate CA certs unless 
explicit policies are required.  
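
Restated in code form (illustrative only; the variable names are not from any patch):

PRBool explicitPolicyRequired =
    initialExplicitPolicy ||      /* requested by the caller */
    caForcedExplicitPolicy;       /* forced by a policy constraints extension */

if (validPolicyTree == NULL && explicitPolicyRequired) {
    /* path validation fails on policy processing */
} else {
    /* a NULL valid-policy-tree is merely an (empty) output: no policies
     * are valid for the path, but validation itself does not fail */
}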
As we discussed in Friday's bug review meeting, the API is missing a feature by which the application can pass an alternative root CA cert list. The reason for supplying this list is to define a subset of valid root CA certs for a particular validation attempt. Only leaf certs that chain up to a cert from that set should be considered valid for this attempt.
Blocks: evtracker
Semantics this API does not support:

1. CRLs must be present, and check OCSP if available.
2. OCSP must be present, and check CRLs if available.

Examples of what it does support:
1. Check all revocation if available:
    (CERT_REV_FLAG_OCSP|CERT_REV_FLAG_CRL)
2. Require OCSP for the leaf, check OCSP on the chain if available:
    (CERT_REV_FLAG_OCSP|CERT_REV_FLAG_REQUIRED_LEAF_ONLY)
3. Check and require the leaf only:
    (CERT_REV_FLAG_OCSP|CERT_REV_FLAG_CRL_LEAF_ONLY|CERT_REV_FLAG_REQUIRED_LEAF_ONLY)
4. NIST policy:
   (CERT_REV_FLAG_CRL|CERT_REV_FLAG_REQUIRED)
5. EV policy:
   (CERT_REV_FLAG_CRL|CERT_REV_FLAG_OCSP|CERT_REV_FLAG_REQUIRED).
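
Under this header, a combination like the EV one would presumably be passed through a revocation-flags input parameter. A sketch (the parameter name and its scalar shape here are assumptions, not settled API):

pi[0].type            = cert_pi_revocationFlags;
pi[0].value.scalar.ul = CERT_REV_FLAG_CRL |
                        CERT_REV_FLAG_OCSP |
                        CERT_REV_FLAG_REQUIRED;
pi[1].type            = cert_pi_end;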

I've removed the soft failure flags, but we may want to add them back to support the following semantic:

1. Require OCSP if AIA is present. (translation: if the AIA is present but we fail to connect to the OCSP responder, revocation state would be revoked, not unknown).
2. Require CRL if crldp is present. (same as above).

----------------------------------

Implementation notes:

At the revocation level, we need to record the current revocation state.
Once we get a 'good' or 'revoked' status, we can stop further revocation checking. Once all revocation checks are complete, we can check the final revocation state; if it's unknown, we treat the state according to the CERT_REV_FLAG_REQUIRED value.
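
A sketch of that state handling (illustrative only, not code from the patch; the helper and its signature are made up):

typedef enum { rev_unknown, rev_good, rev_revoked } RevState;

static RevState
overallRevocationState(RevState (*check[])(void), int nMethods, PRUint64 flags)
{
    RevState state = rev_unknown;
    int i;
    for (i = 0; i < nMethods; i++) {       /* e.g. OCSP, then CRLs */
        state = check[i]();
        if (state != rev_unknown)
            return state;                  /* definitive answer: stop */
    }
    /* every source came back unknown: REQUIRED decides the outcome */
    return (flags & CERT_REV_FLAG_REQUIRED) ? rev_revoked : rev_good;
}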

bob
Oops attached the wrong patch...
Attachment #300780 - Attachment is obsolete: true
Comment on attachment 300781 [details] [diff] [review]
API changes to require revocation checking-V2

reviews not strictly limited to the reviewers I've selected.

bob
Attachment #300781 - Flags: superreview?(alexei.volkov.bugs)
Attachment #300781 - Flags: review?(kengert)
Comment on attachment 300781 [details] [diff] [review]
API changes to require revocation checking-V2

This looks mostly good, because it addresses the needs I see for EV in general.
However, I would like to request some clarification, so r-


>+ * some for of 'active' revocation is required

some "sort" of?


>+ *    any give revocation check can have 3 states:
>+ *    1) Cert is revoked
>+ *    2) Status of this cert is unknown.
>+ *    3) Status of this cert is good.
>+ * CERT_REV_FLAG_REQUIRED controls the meaning of 'unknown'.
>+ * If the overall status of a cert is unknown, it's treated as 'good' if
>+ * CERT_REV_FLAG_REQUIRED is not set and treated as revoked if CERT_REV_REQUIRED
>+ * is set. (NOTE: this is the overall status. If revocation checking is 
>+ * available from more than one source, then all sources must be unknown
>+ * for the overal status to be unknown).
>+ *
>+ * Inability to fetch revocation status (such as missing CRL, no AIA,
>+ * corresponding responders are down and cached data is not existant or
>+ * stale) is the same as unknown.

I like your definition of missing == unknown.


>+ */
>+#define CERT_REV_FLAG_REQUIRED            0x40
>+#define CERT_REV_FLAG_REQUIRED_LEAF_ONLY  0x80


There is a risk for confusion here.
If both bits are combined, what will be checked?
Please add a comment that makes it clear.

Excursion: {
  There is an alternative, which also had been proposed by Nelson in our meeting yesterday.
  The alternative is: rename CERT_REV_FLAG_REQUIRED to CERT_REV_FLAG_REQUIRED_INTERMEDIATES.
  With this approach, a caller that requires checks on both would have to use 
    CERT_REV_FLAG_REQUIRED_INTERMEDIATES | CERT_REV_FLAG_REQUIRED_LEAF_ONLY
  However, the argument against that is:
  CERT_REV_FLAG_REQUIRED_INTERMEDIATES does not make much sense on its own,
  (and it would be tricky to implement).
}

Because of the above, I'd vote for the flags as proposed by Bob, with an added comment.

Do you want to rename CERT_REV_FLAG_REQUIRED to CERT_REV_FLAG_REQUIRED_ALL_BUT_ROOT ?

Actually, I think the API should return a failure when the caller attempts to pass 
  CERT_REV_FLAG_REQUIRED_ALL_BUT_ROOT | CERT_REV_FLAG_REQUIRED_LEAF_ONLY


> /* REV_NIST is CRL and !CRL_LEAF_ONLY and !FAIL_SOFT_CRL */

Please update this comment, because you have removed FAIL_SOFT_CRL.
Maybe that comment is obsolete; it's better to force people to read the real definition.

>-#define CERT_REV_NIST		   (CERT_REV_FLAG_CRL)
>+#define CERT_REV_NIST		   (CERT_REV_FLAG_CRL|CERT_REV_FLAG_REQUIRED)


So, I think this API correctly handles all scenarios I'm currently aware of.

I assume, until NSS can take CRL checking results into consideration,
you will "implement" the verification checks to require OCSP.

This requirement is an implementation detail hidden inside NSS.
Later, when the abilities improve, you can change the implementation to require either OCSP or CRL.
Right?


For Firefox 3 PSM will request CERT_REV_FLAG_REQUIRED_LEAF_ONLY

Let's say we encounter a cert that really looks like a valid EV cert, but does NOT contain an OCSP AIA.
I assume the implementation for CERT_REV_FLAG_REQUIRED_LEAF_ONLY in NSS, the version planned for Firefox 3, will reject that cert (regardless of whether it contains a CRLDP or not).
Right?


I know you are planning this to be an API only discussion, but I would like to ensure I understand your intention behind the API in order to support EV short term.

Thanks.
Attachment #300781 - Flags: review?(kengert) → review-
Furthermore, I propose more comments that clarify the allowed combinations of

#define CERT_REV_FLAG_OCSP              1
#define CERT_REV_FLAG_OCSP_LEAF_ONLY    2
#define CERT_REV_FLAG_CRL               4
#define CERT_REV_FLAG_CRL_LEAF_ONLY     8
#define CERT_REV_FLAG_REQUIRED            0x40
#define CERT_REV_FLAG_REQUIRED_LEAF_ONLY  0x80

The combination of 
  CERT_REV_FLAG_OCSP | CERT_REV_FLAG_CRL
is that meant to
  (a) try first, if succeeds fine, only on failure try second
  (b) always try both

I guess you intend the above to mean (b)
I guess you intend CERT_REV_FLAG_REQUIRED to mean (a)

It should not be necessary to guess or look at the source,
the comments should make it absolutely clear.
What about libpkix ocsp checking and the "ocsp failure mode" (soft/hard) that we introduced in core NSS?

I think we must decide that now and add a comment to the API, in order to avoid confusion.

IMHO it would be OK to say: "this API always ignores the global NSS failure mode, and the revocation checking will happen exactly as specified by the given flags."
In general, soft failure is:

CERT_REV_FLAG_OCSP  (or more exactly CERT_REV_FLAG_OCSP_LEAF_ONLY).

The current hard failure would not be supported, thus the following comment I made in this bug:

> I've removed the soft failure flags, but we may want to add them back to
> support the following semantic:
>
> 1. Require OCSP if AIA is present. (translation: if the AIA is present but we
> fail to connect to the OCSP responder, revocation state would be revoked, not
> unknown).
>
> 2. Require CRL if crldp is present. (same as above).
>

Hard failure is not the same as:
   CERT_REV_FLAG_OCSP | CERT_REV_FLAG_REQUIRED

That combination would fail on a certificate that has no AIA when no global OCSP responder is specified.

Maybe a flag that says 'Required if specified' could be added if we still want to support the old hard failure case.
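
To spell that out in pseudo-C (a sketch only; haveFreshStatus and
certSpecifiesOCSPSource are hypothetical helpers, and
CERT_REV_FLAG_REQUIRED_IF_SPECIFIED is the not-yet-defined flag suggested
above):

    if (!haveFreshStatus(cert)) {
        if (flags & CERT_REV_FLAG_REQUIRED) {
            /* REQUIRED: fail even when the cert has no AIA and no
             * global responder is configured */
            return SECFailure;
        }
        if ((flags & CERT_REV_FLAG_REQUIRED_IF_SPECIFIED) &&
            certSpecifiesOCSPSource(cert)) {
            /* old hard failure: fail only when the cert itself
             * requested this kind of checking */
            return SECFailure;
        }
    }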

bob
> Is the combination of
>  CERT_REV_FLAG_OCSP | CERT_REV_FLAG_CRL
> meant to
>  (a) try the first; if it succeeds, fine; only on failure try the second, or
>  (b) always try both?

I would always mean (a), where 'failure' means 'unknown' (couldn't contact the server, no 'fresh' response in the cache, etc.).

If I get revocation status that's 'fresh enough' from any source I can always proceed.
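
In pseudo-C, that reading of the combination looks like this (a sketch;
checkOCSP and checkCRL are hypothetical helpers returning a three-valued
status):

    /* try the preferred mechanism first */
    status = checkOCSP(cert);
    if (status == statusUnknown) {
        /* no responder, stale cache, responder said 'unknown', ... */
        status = checkCRL(cert);
    }
    /* a fresh 'good' or 'revoked' answer from either source is final */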

bob
Comment on attachment 300781 [details] [diff] [review]
API changes to require revocation checking-V2

We need to really spell out what each of these flags means in quite some
detail.  We don't want users to guess what these flags mean.  I observe
that various members of the NSS team already have different understandings of
the meanings of some of these flags.  That seems to prove pretty clearly
that these things are not yet sufficiently well defined.  

One area that is definitely underspecified is what the code will do if
the cert lacks the extensions necessary to do certain forms of revocation
checking.  If a cert has no AIA and no CRLDP extension, and the REQUIRED
flag is set, will the cert be treated as revoked?  

I think we need a flag that tells us what to do when the cert contains no
extension for revocation.  The choices are (a) consider it revoked, and 
(b) do not consider it revoked.  It seems clear to me that EV certs want
one policy and normal web browsing wants the other.  OBTW, I don't ever
see the browser using NIST policy.  

I think the present interface design allows the caller to specify combinations
that may not make sense, or that we'll have to think hard about answering.
For example: what happens if the user specifies OCSP_LEAF_ONLY and CRL and
required, and some CA cert has an AIA but no CRLDP?  Does it make sense to 
allow such a combination?

What happens if the user specifies OCSP_LEAF_ONLY and CRL_LEAF_ONLY and 
REQUIRED (not leaf only)?  Does that combination make all cert chains fail?

I think we need to take the time to think through all the combinations. 
I think that means making a rather large table, showing all the possible
combinations of:
- cert extensions
- operation successes/failures (including cache)
- operation outcomes (for successes) (e.g. revoked, unrevoked)
- user-requested behaviors (flags) 
and for each combination, indicate what the outcome of the function should be.
I'd be willing to participate in a conference call to work out such a table.

Perhaps we need to change the interface from a large collection of boolean
flags to a single enumerated type that specifies a small number of sensible
combinations.
Attachment #300781 - Flags: review-
Comment on attachment 300781 [details] [diff] [review]
API changes to require revocation checking-V2

Need more comments for the flags.
Attachment #300781 - Flags: superreview?(alexei.volkov.bugs) → superreview-
> One area that is definitely underspecified is what the code will do if
> the cert lacks the extensions necessary to do certain forms of revocation
> checking.  If a cert has no AIA and no CRLDP extension, and the REQUIRED
> flag is set, will the cert be treated as revoked?

Yes, if there is no other way to get revocation information (there is no cached CRL that's fresh enough, no default OCSP responder defined). This was the clarification I made in comment #95.

This happens today if you set the NIST REQUIRED policy, and it is the required semantic for EV (though EV could exist without this, since the EV specs say CAs must issue certificates with some sort of revocation checking in them, so no chain that would otherwise validate as EV should have certs with no AIA or CRLDP).

> I think the present interface design allows the caller to specify combinations
> that may not make sense, or that we'll have to think hard about answering.

I agree that it is possible to pick combinations that would automatically fail, or have no useful meaning...

> For example: what happens if the user specifies OCSP_LEAF_ONLY and CRL and
> required, and some CA cert has an AIA but no CRLDP?  Does it make sense to 
> allow such a combination?

This is not one of those cases. You are saying you do not want to accept intermediate CAs that do not have CRLDPs.

> What happens if the user specifies OCSP_LEAF_ONLY and CRL_LEAF_ONLY and 
> REQUIRED (not leaf only)?  Does that combination make all cert chains fail?

Given the comments in this bug of what REQUIRED means, why would you expect this combination to do anything other than always fail?

> I think that means making a rather large table, showing all the possible
> combinations of:
> - cert extensions
> - operation successes/failures (including cache)
> - operation outcomes (for successes) (e.g. revoked, unrevoked)
> - user-requested behaviors (flags) 

I would be willing to build such a table, but I *REALLY* would like direction on the question no one has answered yet...

Do we need the equivalent of 'REQUIRED' where you mean 'REQUIRED unless no extensions exist'?

bob



(In reply to comment #99)
> I *REALLY* would like direction to the question no one has answered yet...

Has this question been asked anywhere above?
If so, I missed it.  

I suspect there are other questions that you think are obvious but that 
aren't yet clear to others.  I think the table would help us find all of 
them and make sure we haven't omitted any others.

> Do we need the equivalent of 'REQUIRED' where you mean 'REQUIRED unless no
> extensions exist'?

Yes, I believe we do.
> Yes, I believe we do.

OK, then I'll present a new API with that in mind, or actually a table like you described (rather than the API itself) as my next step.

bob
I can attach the .ods file as well if people want it.

This is a first cut. Look especially at the types of policies that would not be expressible in these proposals.

I've attached 3 proposals here:
1) the existing one which got the r- for reference.
2) a modified version of the existing one based on the comments I believe I've gotten back.
3) a third proposal which simplifies the bit fields (removing some of the questions people generate, but also limiting some of the possible policies that could be expressed).

My feeling: I suspect that we will end up with more bit fields in the end, leading to a complicated table and/or description, but with only a small number of policies real people would be expected to use in most cases (so the burden of understanding the various bit meanings and interactions would fall only to those that need some sort of custom revocation policy).
One other idea would be to take each of the 8 rows of the table which describes the certificate type (leaf/intermediate, AIA y/n, crldp y/n) and give it a value from the table itself (G, F, O, O+, C, C+, E, E+).

Issues: It may not be expandable with new revocation schemes (each new revocation scheme doubles the cert types), but it is a possible idea.

bob
Bob, thank you very much for your work on this.
It really helped me and allowed me to do some more constructive thinking.
In the following text I'm replying to your comments contained in the document.


Bob's attachment 302741 [details] contains proposals 0, 1, 2.
I think we must behave like proposal 0, because it's the only one that fully matches the EV specs, according to Bob's comments.
I don't like proposal 1, because it's the same set of flags, just with different behavior.
I think proposal 2 is a step in the right direction, but still, it does not allow us to specify the full variety of behavior we need.
I agree with Bob's assessment that more flags are needed, and I agree that we must find a way to reduce the matrix.

So, here's a new proposal 3.
I've renamed the flags and included action verbs.

I propose we define some more clarification flags that are "zero",
and specify the opposite of each "do" flag.
I assign a value of non-zero to each "do" flag,
and a value of zero to the opposite default behavior.

This greatly helps with self-documentation, IMHO.

CERT_REV_DO_NOT_TRY_OCSP         0
CERT_REV_MAY_TRY_OCSP            1

CERT_REV_DO_NOT_TRY_CRL          0
CERT_REV_MAY_TRY_CRL             2

CERT_REV_MAY_DOWNLOAD_CRL        0
CERT_REV_MUST_DOWNLOAD_CRL       4

CERT_REV_ALLOW_MISSING_RESPONSE  0
CERT_REV_REQUIRE_RESPONSE        8

CERT_REV_ALLOW_MISSING_EXTENSION 0
CERT_REV_REQUIRE_EXTENSION       16

CERT_REV_REQUIRE_ON_ALL_CERTS    0
CERT_REV_REQUIRE_LEAF_ONLY       32

Explanation:

CERT_REV_MAY_TRY_OCSP:
(renamed from CERT_REV_FLAG_OCSP)
Depending on other flags and present extensions,
NSS may or may not perform an OCSP check.

CERT_REV_MAY_TRY_CRL:
(renamed from CERT_REV_FLAG_CRL)
Depending on other flags and present extensions,
NSS may or may not perform a CRL check.

CERT_REV_MUST_DOWNLOAD_CRL:
(new flag to address Bob's "3rd can not do")
Indicates that the application expects NSS to support
dynamic downloading of CRLs as specified in extensions.
If passed to current NSS and NSS detects it
must check CRL, then NSS should return failure.
For Firefox 3, PSM will NOT specify this flag.
Instead, PSM will specify CERT_REV_MAY_DOWNLOAD_CRL,
which can allow a later NSS to define the behavior on
its own (potentially controlled by a future global switch).

CERT_REV_REQUIRE_RESPONSE:
(renamed from CERT_REV_REQUIRE)
NSS requires a good response.
The response may come from either OCSP or CRL,
based on the defined flags.
A failure to obtain any response from any allowed
sources means: treat as revoked.

CERT_REV_REQUIRE_EXTENSION:
Require that an extension is present for at least one of
the allowed revocation sources.
To clarify:
If both AIA and CRLDP are missing => failure.
If only OCSP is allowed, but no AIA => failure.
Note: a default OCSP responder is treated as a synthetic
AIA, i.e. as "extension present".

CERT_REV_REQUIRE_LEAF_ONLY:
This flag shall cover both (!) 
CERT_REV_REQUIRE_RESPONSE and CERT_REV_REQUIRE_EXTENSION.
That is, if the application specifies leaf-only,
it really cares for the leaf cert most.
It can request both extension-required and response-required
for the leaf.
If the application wants to allow soft failures for the intermediates,
it should accept failures for both missing-extension 
and missing-response at the same time.
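
For concreteness, an EV-style policy might be composed from these flags
like this (a sketch only; the exact combination is still under
discussion):

    PRUint32 evRevFlags = CERT_REV_MAY_TRY_OCSP        /*  1 */
                        | CERT_REV_MAY_TRY_CRL         /*  2 */
                        | CERT_REV_REQUIRE_RESPONSE    /*  8 */
                        | CERT_REV_REQUIRE_EXTENSION   /* 16 */
                        | CERT_REV_REQUIRE_LEAF_ONLY;  /* 32 */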


Now, combinations of flags and reducing the matrix.



If CERT_REV_REQUIRE_RESPONSE is given,
then yes, we must define all 3 combinations of
CERT_REV_REQUIRE_EXTENSION and CERT_REV_REQUIRE_LEAF_ONLY.

But with CERT_REV_ALLOW_MISSING_RESPONSE, it's a single combination;
the other require flags are irrelevant.
Well, NSS may attempt to do OCSP and/or CRL,
but missing extensions or missing responses should not influence
the verification result, IMHO.

The "may/must download crl" is simply an engine feature,
but does not influence the behavior regarding CRLs.
It's a single 
  "if download not implemented and download required, then fail".
No need to list in the matrix.


Finally, if anybody thinks we ever need support for
"require OCSP AIA extensions, but it's fine if crldp is missing"
(or vice versa), then we could easily split the
CERT_REV_REQUIRE_EXTENSION into two separate flags.
The matrix need not grow, IMHO; the behavior would simply be
slightly more tolerant 
(e.g. crldp not required, no need to perform CRL, missing crldp => ok).


So, here's my proposed matrix 3

X means irrelevant.

In order to simplify the matrix further, 
each single cell for "require" contains two entries,
the values for "all certs / leaf only".
The ? sign indicates such columns.



   Leaf no AIA, no crlDp   G  G  G  G    G     G     G       F     F     F
   Leaf    AIA, no crlDp   G  O  G  O    O+    G     O+      O+    F     O+
   Leaf no AIA,    crlDp   G  G  C  C    G     C+    C+      F     C+    C+
   Leaf    AIA,    crlDp   G  O  C  E    O+    C+    E+      O+    C+    E+

Interm. no AIA, no crlDp   G  G  G  G    G     G     G       F /G  F     F /G
Interm.    AIA, no crlDp   G  O  G  O    O+/G  G     O+/G    O+/G  F     O+/G
Interm. no AIA,    crlDp   G  G  C  C    G     C+/G  C+/G    F /G  C+/G  C+/G
Interm.    AIA,    crlDp   G  O  C  E    O+/G  C+/G  O+/G    O+/G  C+/G  E+/G

CERT_REV_MAY_TRY_OCSP      0  1  0  1    1     0     1       1     0     1
CERT_REV_MAY_TRY_CRL       0  0  1  1    0     1     1       0     1     1
CERT_REV_REQUIRE_RESPONSE  X  0  0  0    1     1     1       1     1     1
CERT_REV_REQUIRE_EXTENSION X  X  X  X    0     0     0       1     1     1
CERT_REV_REQUIRE_LEAF_ONLY X  X  X  X    ?     ?     ?       ?     ?     ?
So, Nelson just came up with a detail that's missing in proposal 3.
It's "test all certs" or "test leaf only".
I think we can simply define those additional constants.

It will influence the number of revocation tests that NSS would execute, but non-executed tests default to "good" and do not influence the verification results, so I think it does not have to expand the matrix.

In other words, if "TEST_LEAF_ONLY" is defined, the extension properties of intermediate certs are irrelevant.

CERT_REV_TEST_ALL_CERTS          0
CERT_REV_TEST_LEAF_ONLY          64
Comment on attachment 302741 [details]
First cut at the table.

asking myself for review, to put this on my radar
Attachment #302741 - Flags: review?(nelson)
Attachment #302741 - Attachment filename: Revocation processing.pdf → Revocationprocessing.pdf
Attachment #302741 - Attachment mime type: application/octet-stream → application/pdf
Blocks: 420151
I *urgently* need the feature that allows PSM to pass in a list of candidate roots into CERT_PKIXVerifyCert, which would force NSS to ignore any other roots when finding the issuer root.

This is needed to make the EV feature work correctly in cross certification scenarios, see also the comments in bug 406755.

We are running out of time, because this will require changes at both the NSS and the PSM level.

I'm desperate, and I mean it !!!
Attachment #302741 - Flags: review?(nelson)
Attachment #307622 - Flags: review?(nelson)
(In reply to comment #108)
> Created an attachment (id=307622) [details]
> A way of passing trusted certs into libpkix

The patch also makes CERT_PKIXVerifyCert return the log if it is requested.
Attachment #307752 - Flags: review?(alexei.volkov.bugs)
Alexei, your patch for CERT_PKIXVerifyCert looks good except for one thing... the definition of cert_pi_certlist is not a list of trust anchors, but an actual cert chain. You need the API changes from my patch, and to change cert_pi_certlist to cert_pi_trustAnchors.
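
For illustration, the intended call pattern would then be roughly (a
sketch; the names follow the patches under review here and may still
change):

    CERTValInParam cvin[2];
    CERTValOutParam cvout[1];
    SECStatus rv;

    cvin[0].type = cert_pi_trustAnchors;
    cvin[0].value.pointer.chain = trustedRoots; /* CERTCertList of roots */
    cvin[1].type = cert_pi_end;
    cvout[0].type = cert_po_end;

    rv = CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                             cvin, cvout, pinArg);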

r+ for the CERT_PKIXVerifyCert with those changes.

bob
Comment on attachment 307752 [details] [diff] [review]
Add trust anchor inputs to the API

r=alexei
Attachment #307752 - Flags: review?(alexei.volkov.bugs) → review+
Comment on attachment 307622 [details] [diff] [review]
A way of passing trusted certs into libpkix

See Bob's comment 111.
Change the switch case label to use the symbol Bob defined, then you can commit your patch and Bob's in one step.
Attachment #307622 - Flags: review?(nelson) → review+
Checking in certt.h;
/cvsroot/mozilla/security/nss/lib/certdb/certt.h,v  <--  certt.h
new revision: 1.42; previous revision: 1.41
done


certt.h has been committed, it's safe to commit the rest.

bob
Proposal 5.

Revocation checking strategy
============================

The NSS engine is always allowed to look at any cached information that might indicate "is revoked".
When designing the API for verification functions, no parameter will be available to change this behavior.

In other words
- if a crl is locally available, even if the crl is no longer fresh
and/or
- if an ocsp response is locally available, even if the response is no longer fresh
then the engine may look at it and abort on a known revoked status.

(Note, one must carefully distinguish between "cached valid ocsp response or valid crl available"
and "the cache contains information about a very recent failure".
In soft failure mode (param 4), "cached recent failure" might mean we will skip testing.
In hard failure mode, it's necessary to allow a retry (except maybe a very short grace period to avoid
repetitive network hits).
)


Parameter 1:
============
Which actions are allowed that might hit the network?
- try to obtain fresh OCSP response using AIA
  (use cached OCSP response if still considered fresh)
- try to obtain fresh CRL using CRLDP
  (use the locally available CRL if still considered fresh)
- try either mechanism, stop as soon as fresh data has been found

bits: 000000xx

CERT_REV_DO_NOT_DOWNLOAD_OCSP         0
CERT_REV_MAY_DOWNLOAD_OCSP            0x01

CERT_REV_DO_NOT_DOWNLOAD_CRL          0
CERT_REV_MAY_DOWNLOAD_CRL             0x02

Set both bits for specifying "try either".

The first successful attempt to obtain fresh information is sufficient.
If the first attempt yields fresh good information, we'll never attempt to download
using the second mechanism.


Parameter 2:
============
Does the caller have a preference for which network action should be tried first?
(If only a single network action is allowed, then specifying this parameter does not make sense.
 The implementation is allowed to abort with error "invalid parameters"
 or it might simply ignore this parameter.)

- try ocsp network download first, then try other mechanism(s)
- try crl network download first, then try other mechanism(s)

In addition, this parameter also defines which locally cached information should be checked first.
If a preference for CRL is signalled, then the implementation shall begin by looking for a locally
available fresh CRL, and if it is fresh and not revoked, no further tests will be done.

bits: 00000x00

CERT_REV_PREFER_DOWNLOAD_OCSP        0
CERT_REV_PREFER_DOWNLOAD_CRL         0x04


Parameter 3:
============
When verifying the chain for a cert, which parts of the chain may trigger downloading of revocation information?
By default, network activity for each cert in the chain is allowed.
Use this parameter to limit revocation network activity to the leaf cert.

bits: 0000x000

CERT_REV_DOWNLOAD_ALL_CERTS              0
CERT_REV_DOWNLOAD_LEAF_ONLY              0x08


Parameter 4:
============
By default, all performed tests are allowed to fail without being able to obtain a fresh result (fail soft).
Failure reasons for obtaining a result can be network errors, invalid signatures on response, etc.
By default, even if all network tests failed, the verification succeeds.

This parameter is used to change that behavior to mean:
For at least one requested mechanism it must be possible to find fresh information,
either locally available, or obtained from the network.
In other words, if it's impossible to find or download good fresh information,
the verification fails (fail hard).

This is a three-state parameter, in order to specify soft/hard failure independently for the leaf cert
or the full chain.
- fail soft on both leaf and intermediates
- fail hard on leaf, fail soft on intermediates
- fail hard on both leaf and intermediates

(the 4th state "fail soft on leaf, fail hard on intermediate" will not get implemented,
 the implementation should return with "invalid args" when requested by a caller)

bits: 00xx0000

CERT_REV_DOWNLOAD_LEAF_FAIL_SOFT         0
CERT_REV_DOWNLOAD_CHAIN_FAIL_SOFT        0

CERT_REV_DOWNLOAD_LEAF_FAIL_HARD         0x10
CERT_REV_DOWNLOAD_CHAIN_FAIL_HARD        0x20


Parameter 5:
============
Even if a test mechanism is allowed to hit the network, it may not be possible to perform the action,
because it's necessary to know the source of revocation information.
For CRL, the CRLDP extension of a cert may or may not indicate a CRL download location.
For OCSP, the AIA extension of a cert may or may not indicate an OCSP responder.
For OCSP, in addition, a default OCSP responder might have been configured in the application context,
and is another valid OCSP source.

By default, if no download location is known, it is treated as "skip test".
This parameter can be set to say "treat missing location as a test failure".

If multiple download mechanisms are allowed (OCSP and CRL) and failure on missing source is requested,
then it's sufficient to have the source for at least one mechanism available.
The test where the source is available will be performed, and failure will be treated according to the other flags.

This is a three-state parameter:
- skip if missing on any cert
- fail if missing on leaf cert, skip if missing on intermediates
- fail if missing on any cert

(the 4th state "skip test when missing in leaf, fail when missing in intermediate" will not get implemented,
 the implementation should return with "invalid args" when requested by a caller)

bits: xx000000

CERT_REV_SOURCE_MISSING_LEAF_SKIP         0
CERT_REV_SOURCE_MISSING_CHAIN_SKIP        0

CERT_REV_SOURCE_MISSING_LEAF_FAIL         0x40
CERT_REV_SOURCE_MISSING_CHAIN_FAIL        0x80


Official EV revocation checking
===============================
The definition of revocation checking for EV (according to specs),
together with our preference for OCSP, is expressed as:

CERT_REV_MAY_DOWNLOAD_OCSP
CERT_REV_MAY_DOWNLOAD_CRL
CERT_REV_PREFER_DOWNLOAD_OCSP
CERT_REV_DOWNLOAD_ALL_CERTS
CERT_REV_DOWNLOAD_LEAF_FAIL_HARD
CERT_REV_DOWNLOAD_CHAIN_FAIL_HARD
CERT_REV_SOURCE_MISSING_LEAF_FAIL
CERT_REV_SOURCE_MISSING_CHAIN_FAIL

However, for the initial implementation in PSM,
because of our inability to download CRLs on demand, we might relax that policy
and be strict on the leaf cert only, in other words we might define:
CERT_REV_MAY_DOWNLOAD_OCSP
CERT_REV_MAY_DOWNLOAD_CRL
CERT_REV_PREFER_DOWNLOAD_OCSP
CERT_REV_DOWNLOAD_ALL_CERTS
CERT_REV_DOWNLOAD_LEAF_FAIL_HARD
CERT_REV_DOWNLOAD_CHAIN_FAIL_SOFT ###
CERT_REV_SOURCE_MISSING_LEAF_FAIL
CERT_REV_SOURCE_MISSING_CHAIN_SKIP ###
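
As code, that relaxed policy is a single bitmask (a sketch; the
zero-valued names are OR-ed in purely for self-documentation):

    PRUint32 psmRevFlags = CERT_REV_MAY_DOWNLOAD_OCSP          /* 0x01 */
                         | CERT_REV_MAY_DOWNLOAD_CRL           /* 0x02 */
                         | CERT_REV_PREFER_DOWNLOAD_OCSP       /* 0    */
                         | CERT_REV_DOWNLOAD_ALL_CERTS         /* 0    */
                         | CERT_REV_DOWNLOAD_LEAF_FAIL_HARD    /* 0x10 */
                         | CERT_REV_DOWNLOAD_CHAIN_FAIL_SOFT   /* 0    */
                         | CERT_REV_SOURCE_MISSING_LEAF_FAIL   /* 0x40 */
                         | CERT_REV_SOURCE_MISSING_CHAIN_SKIP; /* 0    */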


Comparing with Nelson's latest proposal
=======================================
Here is how Nelson's alternate proposal (sent by email) matches these flags.

In the "either" group, Nelson defines a "prefer OCSP" flag.
I've converted that into global flag CERT_REV_PREFER_DOWNLOAD_OCSP / _CRL,
which gets ignored if only a single download mechanism is specified.

attempt OCSP   = CERT_REV_MAY_DOWNLOAD_OCSP
attempt CRL    = CERT_REV_MAY_DOWNLOAD_CRL
attempt either = CERT_REV_MAY_DOWNLOAD_OCSP 
                 | CERT_REV_MAY_DOWNLOAD_CRL

OCSP only if present = CERT_REV_MAY_DOWNLOAD_OCSP
                       | CERT_REV_SOURCE_MISSING_LEAF_SKIP
                       | CERT_REV_SOURCE_MISSING_CHAIN_SKIP

CRL only if present  = CERT_REV_MAY_DOWNLOAD_CRL
                       | CERT_REV_SOURCE_MISSING_LEAF_SKIP
                       | CERT_REV_SOURCE_MISSING_CHAIN_SKIP

note: if CERT_REV_MAY_DOWNLOAD_CRL is specified, 
      then the test "already has fresh CRL" must come first in the implementation,
      and only afterwards it should test for "has CRL source".
      (That is, fresh revocation information avoids download attempts,
       and the check for available/missing source only happens on download attempts.)
      This way a locally available fresh CRL is sufficient,
      even if no CRL source is known
      and even if failure on unknown CRL source is requested.

either only if present = CERT_REV_MAY_DOWNLOAD_OCSP
                         | CERT_REV_MAY_DOWNLOAD_CRL
                         | CERT_REV_SOURCE_MISSING_LEAF_SKIP
                         | CERT_REV_SOURCE_MISSING_CHAIN_SKIP

OCSP fail hard    = CERT_REV_MAY_DOWNLOAD_OCSP
                    | CERT_REV_SOURCE_MISSING_LEAF_FAIL
                    | CERT_REV_SOURCE_MISSING_CHAIN_FAIL

CRL fail hard     = CERT_REV_MAY_DOWNLOAD_CRL
                    | CERT_REV_SOURCE_MISSING_LEAF_FAIL
                    | CERT_REV_SOURCE_MISSING_CHAIN_FAIL

either fail hard  = CERT_REV_MAY_DOWNLOAD_CRL
                    | CERT_REV_MAY_DOWNLOAD_OCSP
                    | CERT_REV_SOURCE_MISSING_LEAF_FAIL
                    | CERT_REV_SOURCE_MISSING_CHAIN_FAIL


definition of freshness
=======================
In our first iteration, the implementation will make use of the current global parameters
that define freshness of OCSP responses in the cache.
The implementation will use an internal default value to define freshness of CRLs.
We might add functions to allow for application defined CRL freshness at a later time.


Matrix
======
I will work next on a matrix that fully describes this proposal number 5.
Addition to proposal 5, regarding parameters 4 and 5.

If parameter 4 is set to "fail soft", then parameter 5 will be ignored.

In other words:

CERT_REV_DOWNLOAD_LEAF_FAIL_SOFT
=> ignore CERT_REV_SOURCE_MISSING_LEAF_*

and

CERT_REV_DOWNLOAD_CHAIN_FAIL_SOFT
=> ignore CERT_REV_SOURCE_MISSING_CHAIN_*
Header file changes that match what I describe in comments 116 and 117 and attachment 307921 [details].
Attachment #300781 - Attachment is obsolete: true
Comment on attachment 307922 [details] [diff] [review]
API changes to require revocation checking, proposal 5

Asking Bob and Nelson for review of these bit definitions. I hope I have made possible all combinations you were looking for.
Attachment #307922 - Flags: superreview?(nelson)
Attachment #307922 - Flags: review?(rrelyea)
Comment on attachment 307922 [details] [diff] [review]
API changes to require revocation checking, proposal 5

Asking Alexei for review on the NIST definition.
Attachment #307922 - Flags: review?(alexei.volkov.bugs)
I added a wiki page with Kai's latest proposal.  
I find it more readable than this bug, and it is editable.
http://wiki.mozilla.org/NSS:Revocation_API_Proposals
I added my "proposal 4" to that Wiki page.

In Bob's PDF file, he mentioned several possible combinations that could not 
be expressed with his proposals.  I tried to ensure that my proposal allowed 
all those to be expressed.  

I observe that some of them cannot be expressed with Kai's proposal 5.  
The question is: are the combinations that cannot be expressed necessary?

As I understand it, proposal 5 allows one to specify: OCSP only, CRL only, 
either, but not "both".  If one specifies both the OCSP and CRL flags, that 
is understood in proposal 5 to mean "either", not both.  
In proposal 4, that combination is understood to mean "both", and there is a separate set of flags for "either".

In proposal 4, the application's requirements for the leaf and for the 
intermediate CAs were specified independently.  This made it possible to 
specify different checking for the intermediates than for the leaf, 
(e.g. OCSP for one, CRLs for the other), and it made it possible to 
specify NO checking for the leaf while requiring checking for the 
intermediates.  These were missing abilities called out in Bob's document.

In proposal 5, it appears to me that one can specify leaf only, or leaf and
intermediates, but not intermediates only.  And when it calls for both leaf
and intermediate checking, whatever sorts of checking are specified for the
leaf are the same for intermediates, although soft failure and "if present"
are separately specified for leaf and intermediates.  

Having observed these differences, the question is: do these missing 
capabilities matter?

Bob noted that proposal 4 provides no explicit control (enable or disable) 
of one aspect of "NIST policy", namely whether CRLs are regarded as expired 
when they reach their nextUpdate date.  I think we resolved that by saying
that there are already separately specified parameters for OCSP and CRLs,
such as times that define adequate "freshness" and cache lifetime, and that
specifying NIST policy with respect to "expiration" could be handled through
those APIs, although they probably don't handle that now.

I think proposal 5 is trying to express that same idea, that NIST CRL revocation policy is specified through control over freshness time parameters, 
although it doesn't mention NIST's CRL expiration policy.
This comment is a discussion comment.
(I'll post another comment where I name my new proposal.)


(In reply to comment #123)
> 
> I observe that some of them cannot be expressed with Kai's proposal 5.  
> The question is: are the combinations that cannot be expressed necessary?

I was hoping they are not necessary, 
but I understand the desire to allow for flexibility.


> As I understand it, proposal 5 allows one to specify: OCSP only, CRL only, 
> either, but not "both".  If one specifies both the OCSP and CRL flags, that 
> is understood in proposal 5 to mean "either", not both.  
> In proposal 4, that combination is understood to mean "both", and there is a
> separate set of flags for "either".

It feels unlikely to me that an application would ever want to say:
"even if we already have fresh OCSP information, attempt to fetch a fresh CRL, too"
(or vice versa), but I'm OK adding that flexibility.

We could introduce one more parameter, 1b:

#define CERT_REV_SINGLE_MECHANISM_IS_SUFFICIENT  0
#define CERT_REV_TRY_ALL_ALLOWED_MECHANISMS      0x100

The trouble is, this new ability raises more questions.

Let's say, the caller allows both OCSP and CRL and specified the new flag
CERT_REV_TRY_ALL_ALLOWED_MECHANISMS.
In addition the caller also requested "hard failure".

Now, let's say the caller prefered OCSP.
The OCSP check succeeds.
We continue and notice, there is no (or no fresh) CRL locally available.
We attempt to download the CRL and... fail!

What should happen now?
(a) hard failure has been requested, we fail
or
(b) we skip this failure, because we already passed a test with success

Always (a) or (b)?
Or yet another flag like this?

    CERT_REV_SECONDARY_TESTS_FAIL_SOFT 0
    CERT_REV_SECONDARY_TESTS_FAIL_HARD 0x200



> In proposal 4, the application's requirements for the leaf and for the 
> intermediate CAs were specified independently.  This made it possible to 
> specify different checking for the intermediates than for the leaf, 
> (e.g. OCSP for one, CRLs for the other), and it made it possible to 
> specify NO checking for the leaf while requiring checking for the 
> intermediates.  These were missing abilities called out in Bob's document.

It feels unlikely an application would want to do that,
but of course I can be wrong.

I think if you're asking for duplication, we'd have to duplicate
parameters 1, 1b, 2.


> In proposal 5, it appears to me that one can specify leaf only, or leaf and
> intermediates, but not intermediates only.

When I discussed with Bob 2-3 weeks ago, he couldn't think of a reason
why an application would ever want to check the intermediates only
(and argued it might be tricky to implement).


> And when it calls for both leaf
> and intermediate checking, whatever sorts of checking are specified for the
> leaf are the same for intermediates

You already raised this concern further up in your comment, and I already
addressed it in this comment (duplicating parameters 1, 1b, 2).


> although soft failure and "if present"
> are separately specified for leaf and intermediates.  

Yes, we already have that, no need for duplication here.


> Having observed these differences, the question is: do these missing 
> capabilities matter?

It feels unlikely to me.
And we could even postpone that decision.
We could go with proposal 5 now, with the option to define additional flags later.

I think all of the folowing (repeating the flags from earlier in this comment)
could be defined later:


CERT_REV_SINGLE_MECHANISM_IS_SUFFICIENT        0
CERT_REV_TRY_ALL_ALLOWED_MECHANISMS            0x0100

CERT_REV_SECONDARY_TESTS_FAIL_SOFT             0
CERT_REV_SECONDARY_TESTS_FAIL_HARD             0x0200


CERT_REV_CHAIN_SAME_MECHANISMS                 0
CERT_REV_CHAIN_DIFFERENT_MECHANISMS            0x0400


CERT_REV_CHAIN_DO_NOT_DOWNLOAD_OCSP            0
CERT_REV_CHAIN_MAY_DOWNLOAD_OCSP               0x0800

CERT_REV_CHAIN_DO_NOT_DOWNLOAD_CRL             0
CERT_REV_CHAIN_MAY_DOWNLOAD_CRL                0x1000

CERT_REV_CHAIN_SINGLE_MECHANISM_IS_SUFFICIENT  0
CERT_REV_CHAIN_TRY_ALL_ALLOWED_MECHANISMS      0x2000

CERT_REV_CHAIN_SECONDARY_TESTS_FAIL_SOFT       0
CERT_REV_CHAIN_SECONDARY_TESTS_FAIL_HARD       0x4000

CERT_REV_CHAIN_PREFER_DOWNLOAD_OCSP            0
CERT_REV_CHAIN_PREFER_DOWNLOAD_CRL             0x8000


> Bob noted that proposal 4 provides no explicit control (enable or disable) 
> of one aspect of "NIST policy", namely whether CRLs are regarded as expired 
> when they reach their nextUpdate date.  I think we resolved that by saying
> that there are already separately specified parameters for OCSP and CRLs,
> such as times that define adequate "freshness" and cache lifetime, and that
> specifying NIST policy with respect to "expiration" could be handled through
> those APIs, although they probably don't handle that now.
> 
> I think proposal 5 is trying to express that same idea, that NIST CRL
> revocation policy is specified through control over freshness time parameters, 
> although it doesn't mention NIST's CRL expiration policy.

Right.
Proposal 5 assumes that NSS defines in some way how long a CRL might still
be considered fresh.

I'd assume this "maximum age" parameter would be specified relative
to the nextUpdate property.

Would it help us to have a simple boolean flag which either says
"crl never expires" or "crl expires when nextUpdate time reached"?

I think it would not help us.
Using "crl never expires" would mean we'd never consider fetching a fresh
CRL.
I would like to extend proposal 5 with one more sentence:

"Let's define the bitmask variable big enough, so that more flags can be added later."

--------------

My new proposal 6 is:
Let's use proposal 5 as a base, and add the new flags that I list in comment 124.

Before we write down proposal 6 we should make a decision whether
  SECONDARY_TESTS_FAIL_*
flags should be used, or whether we always want (a) or (b).

Clarification: Unless the bit for CERT_REV_CHAIN_DIFFERENT_MECHANISMS is set, all other flags named CERT_REV_CHAIN_* will be ignored.

I'd mostly copy the comments from proposal 5 and make adjustments for the additional flags (or, if we expect to go with proposal 6, edit the wiki page).
> Now, let's say the caller preferred OCSP.
> The OCSP check succeeds.

If you succeed and the response is fresh enough, you are done. You only go on
if you get the equivalent of 'unknown' status (no response returned, cached response nonexistent or not fresh enough, responder returns status 'unknown').

> We continue and notice, there is no (or no fresh) CRL locally available.
> We attempt to download the CRL and... fail!

> What should happen now?
> (a) hard failure has been requested, we fail
> or
> (b) we skip this failure, because we already passed a test with success
>
> Always (a) or (b)?

Given the above, you never have the ambiguous case. You only try the CRL if the OCSP response was unknown or the OCSP AIA does not exist. In either case, it becomes clear that if the CRL also fails to get a positive response, you will fail on 'hard' and succeed on 'soft'.

In all the proposals I don't see the REQUIRE case. Require differs from hard failure in that hard failure fails only if the certificate requests a revocation type and we were unable to fulfill it. REQUIRE says 'we must have revocation information, even if the cert doesn't request it'. Basically it fails any chain that does not request the revocation information that you are checking.

I can accept the argument that the REQUIRE case is not necessary, but only if we can determine that those are the semantics of the NIST_REV policy.


(I'd also like to see the sets of flags expanded a bit. Currently we have a single bit: prefer OCSP or prefer CRL. When we add a third, shouldn't we have a set of bits that say 'the preferred revocation scheme is __________', or would we have a series of bits that say 'OCSP over CRL' or 'CRL over OCSP', 'Bob's Superrevocation over OCSP' or 'OCSP over Bob's Superrevocation', 'Bob's Superrevocation over CRL' or 'CRL over Bob's Superrevocation'.

The former doesn't grow the structure as fast, but requires us to reserve some contiguous bits and doesn't give a hierarchy. The latter grows the bits a bit faster, allows a hierarchy, but also allows invalid combinations (OCSP over CRL, Bob's SR over OCSP, CRL over Bob's SR), but all three bits are needed (OCSP over CRL, OCSP over Bob's SR, still need to specify CRL/Bob's SR relation).

bob

Bob, my proposal has the "REQUIRE" case.  It's the behavior when 
"only if present" is NOT chosen/enabled.
Comment on attachment 307922 [details] [diff] [review]
API changes to require revocation checking, proposal 5

r~   means: more questions before review can be completed (:-)

>+#define CERT_REV_DOWNLOAD_ALL_CERTS           0
>+#define CERT_REV_DOWNLOAD_LEAF_ONLY           0x08

What about DOWNLOAD_NO_CERTS?  We need that.
If apps can't get the same functionality they've had in the past 
with the new API, they won't stop using the old API.
It's not the same as the combination of 
DOWNLOAD_*_FAIL_SOFT and SOURCE_MISSING_*_SKIP 
We want to be able to say: don't even TRY to download any certs,
not even if the AIA is present.

Question: how do the various OCSP control flags interact with the 
use of NSS's (badly named) "default" OCSP responder feature?
(It's not a "default", because as presently implemented, it cannot
be overridden.  When it is specified, it is the ONLY OCSP responder
ever used for any cert from any issuer.)  

Does it override the various OCSP flags defined in this API? 
Do they override it?  

Does the use of a "default responder" make the SOURCE_MISSING 
flag irrelevant for OCSP?  I think that, today, when a default
responder is set, all certs are checked through that responder,
whether they have an AIA for OCSP or not.  Not sure about that.

What is the default revocation action of this API when no revocation
flags are passed as arguments to this API?
(In reply to comment #128)

> >+#define CERT_REV_DOWNLOAD_ALL_CERTS           0
> >+#define CERT_REV_DOWNLOAD_LEAF_ONLY           0x08
>
> What about DOWNLOAD_NO_CERTS?  We need that.
> If apps can't get the same functionality they've had in the past 
> with the new API, they won't stop using the old API.
> It's not the same as the combination of 
> DOWNLOAD_*_FAIL_SOFT and SOURCE_MISSING_*_SKIP 
> We want to be able to say: don't even TRY to download any certs,
> not even if the AIA is present.


Nelson, I think proposal 5 already allows that:

#define CERT_REV_DO_NOT_DOWNLOAD_OCSP         0
#define CERT_REV_DO_NOT_DOWNLOAD_CRL          0

If neither CERT_REV_MAY_DOWNLOAD_OCSP nor CERT_REV_MAY_DOWNLOAD_CRL is specified, the NSS implementation is limited to checking what's already locally available.
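
So the "don't even try" behavior Nelson asks for is simply the all-zero
download mask (a sketch):

    /* local-only checking, no network activity at all */
    PRUint32 revFlags = CERT_REV_DO_NOT_DOWNLOAD_OCSP  /* 0 */
                      | CERT_REV_DO_NOT_DOWNLOAD_CRL;  /* 0 */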


> Question: how do the various OCSP control flags interact with the 
> use of NSS's (badly named) "default" OCSP responder feature?

It interacts in exactly the same way as it did before.

I'd say, if a global default responder is configured, it will be used.

If you re-read the proposal, you will notice that at one point it talks about a "known OCSP source" or "unknown OCSP source". It mentions that a globally configured default responder is treated as a "known source".

> Does it override the various OCSP flags defined in this API? 
> Do they override it?  


There is no need to override anything, in my opinion.

The global default responder simply is a way to specify the source. It says "use the default responder as the OCSP source, even if the cert has its own source".

I think all other aspects around OCSP checking should be based on the flags passed to the CERT_PKIXVerifyCert function call.

If you wish, we might add yet another bit:


#define CERT_REV_ALLOW_GLOBAL_OCSP_RESPONDER       0

=> if a global OCSP responder has been configured,
   always use it when revocation checking needs an OCSP source


#define CERT_REV_DO_NOT_USE_GLOBAL_OCSP_RESPONDER  0x10000

=> ignore the global configured OCSP responder


> Does the use of a "default responder" make the SOURCE_MISSING 
> flag irrelevant for OCSP?  

Yes, that's what the proposal originally defined.


> I think that, today, when a default
> responder is set, all certs are checked through that responder,
> whether they have an AIA for OCSP or not.  Not sure about that.

Yes, that's my understanding, too.


> What is the default revocation action of this API when no revocation
> flags are passed as arguments to this API?

When no revocation flags are passed into the API, in my opinion, it should be treated as if "zero" was given to the API.

Which means, the behavior is used that is described by the flags that have a value of "zero" assigned.
(In reply to comment #116)
> Parameter 3:
> ============
> When verifying the chain for a cert, which parts of the chain may trigger
> downloading of revocation information?
> By default, network activity for each cert in the chain is allowed.
> Use this parameter to limit revocation network activity to the leaf cert.
> 
> bits: 0000x000
> 
> CERT_REV_DOWNLOAD_ALL_CERTS              0
> CERT_REV_DOWNLOAD_LEAF_ONLY              0x08


I had assumed this proposal was obvious, but based on Nelson's question I'd want to clarify:

This flag specifies for which certs revocation information is downloaded by an enabled mechanism.

If neither OCSP nor CRL has been allowed to download new information, then this flag is irrelevant and nothing will be downloaded.
I'm quoting the following snippet to point Nelson to the section of proposal 5 I was referring to regarding the default responder. See the last sentence of this snippet.


(In reply to comment #116)
> Parameter 5:
> ============
> Even if a test mechanism is allowed to hit the network, it may not be possible
> to perform the action,
> because it's necessary to know the source of revocation information.
> For CRL, the CRLDP extension of a cert may or may not indicate a CRL download
> location.
> For OCSP, the AIA extension of a cert may or may not indicate an OCSP
> responder.
> For OCSP, in addition, a default OCSP responder might have been configured in
> the application context,
> and is another valid OCSP source.


I should have been stricter in my wording and have said: "if configured, the default OCSP responder is always used as the OCSP source, if NSS attempts to perform an OCSP download".

We could make this behavior controllable through the API by defining 
  #define CERT_REV_ALLOW_GLOBAL_OCSP_RESPONDER       0
  #define CERT_REV_DO_NOT_USE_GLOBAL_OCSP_RESPONDER  0x10000
which I proposed in comment 129.
This comment is to spell out the answer to Nelson's question 
about "default behavior" when no flags are passed to the function
(I said: use the flags that have value zero):


CERT_REV_DO_NOT_DOWNLOAD_OCSP         0
CERT_REV_DO_NOT_DOWNLOAD_CRL          0

CERT_REV_PREFER_DOWNLOAD_OCSP        0
  (ignored because neither OCSP nor CRL enabled)

CERT_REV_DOWNLOAD_ALL_CERTS              0
  (ignored because neither OCSP nor CRL enabled)

CERT_REV_DOWNLOAD_LEAF_FAIL_SOFT         0
CERT_REV_DOWNLOAD_CHAIN_FAIL_SOFT        0
  (meaning, because we don't download,
   we may check for local information only,
   but if no local information is available
   => no problem, ignore that)

CERT_REV_SOURCE_MISSING_LEAF_SKIP         0
CERT_REV_SOURCE_MISSING_CHAIN_SKIP        0
  (irrelevant, because we don't download,
   we won't attempt to look at sources)


If you prefer to go with proposal 6 (define additional flags now), 
then we'd have these additional flags from comment 124:

CERT_REV_CHAIN_SAME_MECHANISMS                 0
(which means same mechanisms for leaf and intermediates,
and also means the other CERT_REV_CHAIN_* flags are irrelevant)


CERT_REV_SINGLE_MECHANISM_IS_SUFFICIENT        0
CERT_REV_SECONDARY_TESTS_FAIL_SOFT             0

(still under discussion whether we will allow secondary tests at all,
I doubt it, but I will reply to Bob's comment 126)
(In reply to comment #126)
> > Now, let's say the caller prefered OCSP.
> > The OCSP check succeeds.
> 
> If you succeed and the response is fresh enough, you are done.

Well, yes, that's my understanding, too.
I think that behavior will always be sufficient.

The point is, Nelson is explicitly asking (or brainstorming) about a different behavior.

His proposal is: the API should allow specifying that NSS shall always attempt both mechanisms, even if the first attempted mechanism already gave a "successful download, not revoked" result...!


That's the only reason why I proposed new flags with *_SINGLE_MECHANISM_* and *_SECONDARY_TEST_* in comment 124.

I personally think it's overkill, but I'm OK adding it to the API (even if nobody might ever implement or request it).



> You only go on
> if you get the equivalent of 'unknown' status. (No response returned/cached
> response non-existant/not fresh enough, responder returns status 'unknown').*

Yes, that seems reasonable to me.

But it seems that Nelson is asking for the option "continue testing on success".


> > We continue and notice, there is no (or no fresh) CRL locally available.
> > We attempt to download the CRL and... fail!
> 
> > What should happen now?
> > (a) hard failure has been requested, we fail
> > or
> > (b) we skip this failure, because we already passed a test with success
> >
> > Always (a) or (b)?
> 
> Given the above, you never have the ambiguous case.


The ambiguous case is only possible if we implement Nelson's idea of "continue testing other mechanisms even on success".

(But I must stress, I don't like the idea to implement the ability to continue testing on success.)


> You only try the CRL if the
> OCSP response was unknown or the OCSP AIA does not exist. In either case, it
> becomes clear if the CRL also fails to get a positive response, you will fail
> on 'hard' and succeed on 'soft'.

Yes, you are right, if we "never continue testing if we have already seen a success".

My question was not directed to Bob.

Nelson, if you still think that the API must allow for "continue testing other mechanisms even if the first mechanism already returned with a successful download and a not-revoked answer", then please state your preference on how to solve that ambiguity (first test succeeded, second test failed).

Or alternatively, simply state that you agree with Bob and me that "continue testing after success" is not necessary.


(I'll continue to respond to Bob's comment about REQUIRE in my next comment)
Bob:

(In reply to comment #126)
> In all the proposals I don't see the REQUIRE case. Require differs from hard
> failure in the hard failure fails only if the certificate requests a revocation
> type and we were unable to fulfill it. REQUIRE says 'we must have revocation
> information, even if the cert doesn't request it'. Basically it fails any chain
> that does not request the revocation information that you are checking.


I think this is already covered by the combination of flags
"MAY_DOWNLOAD" with "DOWNLOAD_LEAF/CHAIN_FAIL_HARD"
and "SOURCE_MISSING_LEAF/CHAIN_FAIL".

In other words, parameter 5 is what you want.

If we don't have information, and we don't know where to get that info from
(because the cert doesn't specify the source), and missing source has been
defined as a failure, then we fail.

-----

But your comment inspires me to propose that we rename some flags
for clarification.

I think in parameter 2 and 4 we should rename the flags and change
the DOWNLOAD substring to something better, because all those flags
are not limited to downloading, they are also used for testing
information that is already locally available.

(In parameter 1 and 3 we should keep the DOWNLOAD wording.)


In the comment for "parameter 4" I had already explained that
the flags shall mean "must be able to find fresh information".
(The flags don't force a new download, 
if fresh information is already available).

We currently have:
CERT_REV_DOWNLOAD_LEAF_FAIL_SOFT         0
CERT_REV_DOWNLOAD_CHAIN_FAIL_SOFT        0
CERT_REV_DOWNLOAD_LEAF_FAIL_HARD         0x10
CERT_REV_DOWNLOAD_CHAIN_FAIL_HARD        0x20

I think we should rename _DOWNLOAD_ to _INFO_MISSING_
in all those flags, resulting in new:
CERT_REV_INFO_MISSING_LEAF_FAIL_SOFT         0
CERT_REV_INFO_MISSING_CHAIN_FAIL_SOFT        0
CERT_REV_INFO_MISSING_LEAF_FAIL_HARD         0x10
CERT_REV_INFO_MISSING_CHAIN_FAIL_HARD        0x20


For further clarification,
I propose we rename the flags from "parameter 2":

currently:
CERT_REV_PREFER_DOWNLOAD_OCSP        0
CERT_REV_PREFER_DOWNLOAD_CRL         0x04

proposed new names:
CERT_REV_PREFER_INFO_OCSP        0
CERT_REV_PREFER_INFO_CRL         0x04


In other words, if download is not allowed, and no fresh information
is available, then "info is missing".

You want to "require that info is available",
therefore you'd specify "INFO_MISSING_FAIL_HARD".

If download is allowed, it will be attempted.
But if the download did not succeed, the info is still missing,
and we fail.

--------

I want to spell out a complete example
that illustrates Bob's concern about REQUIRE
(based on renamed flags).

You could use the following combination of (renamed) flags:

CERT_REV_MAY_DOWNLOAD_CRL             0x02
CERT_REV_DOWNLOAD_ALL_CERTS              0
CERT_REV_INFO_MISSING_LEAF_FAIL_HARD         0x10
CERT_REV_INFO_MISSING_CHAIN_FAIL_HARD        0x20
CERT_REV_SOURCE_MISSING_LEAF_FAIL         0x40
CERT_REV_SOURCE_MISSING_CHAIN_FAIL        0x80

In this combination, NSS would start by looking at locally available 
fresh CRLs. Let's say we don't have it.

Because MAY_DOWNLOAD_CRL is specified, NSS would attempt to download.

Because DOWNLOAD_ALL_CERTS is specified, it will attempt to 
download missing fresh CRLs for each cert that is part of the chain.

In your example at least one cert doesn't specify a CRLDP,
and therefore we'll fail, because you specified SOURCE_MISSING_LEAF_FAIL
and SOURCE_MISSING_CHAIN_FAIL
I realize, we're missing the ability to say:
  "Require that fresh revocation information is available,
   but downloading is forbidden"

I realize renaming the flags in parameter 4 doesn't help in that scenario, because when downloading is not allowed, then we don't know which certs should get tested.

This issue came up before, I think we'll need the flags from comment 105.

----

Sorry, it's time for me attach a new proposal.

I'll also look at Bob's comment that I haven't addressed yet (comment 126, prepare for future revocation testing mechanisms).
Attached patch Proposal 7 as patch (obsolete) — Splinter Review
I hope this new proposal 7 is self-documenting.
I hope it does not require a matrix.

I've given up on the idea of using a single bitmask, because of Bob's requests to:
- allow for extending of additional mechanisms
- allow to specify preference

Because of this I now propose we use a separate structure to describe the desired revocation tests.
Attachment #307921 - Attachment is obsolete: true
Attachment #307922 - Attachment is obsolete: true
Attachment #309006 - Flags: superreview?(nelson)
Attachment #309006 - Flags: review?(rrelyea)
Attachment #307922 - Flags: superreview?(nelson)
Attachment #307922 - Flags: review?(rrelyea)
Attachment #307922 - Flags: review?(alexei.volkov.bugs)
Attached patch Proposal 7 as patch (v2) (obsolete) — Splinter Review
Updated the patch to use C style comments and removed the obsolete CERT_REV_* flags.

Nelson, please ignore that this has long lines, I hope I can postpone the cleanup until after we agree on structure.
Attachment #309006 - Attachment is obsolete: true
Attachment #309007 - Flags: superreview?(nelson)
Attachment #309007 - Flags: review?(rrelyea)
Attachment #309006 - Flags: superreview?(nelson)
Attachment #309006 - Flags: review?(rrelyea)
Comment on attachment 309007 [details] [diff] [review]
Proposal 7 as patch (v2)

Some further comments for the patch, in relation to the previous discussion and proposals:


>+  int allowed_test_mechanisms;
>+      /* one or more bits from REVOCATION_MECHANISM_*
>+       * 0: do not perform revocation testing (all other variables are ignored)
>+       */


Matches what had been said in comment 105.
Request testing for a mechanism, independent of downloading.


>+  int preferred_revocation_mechanism[8];
>+      /* Array that may specify an optional order of preferred mechanisms (up to 8).
>+       * It's allowed to set index [0] to value zero,
>+       *   then NSS will decide which mechanisms to prefer.
>+       * Each array entry may contain a single bit from REVOCATION_MECHANISM_*
>+       *   (but of course, only bits also specified in allowed_test_mechanisms).
>+       * The entry at index [0] specifies the mechanism with the highest preference.
>+       * An entry value of zero indicates the end of preferred mechanisms.
>+       * These mechanisms will be tested first for locally available information.
>+       * Mechanisms allowed for downloading will be attempted in the same order.
>+       */

Bob's new requirement for expressing an order of preference, involving multiple mechanisms.
Matches previous "parameter 2".


>+  int allowed_download_mechanisms;
>+      /* one or multiple bits from REVOCATION_MECHANISM_*
>+       * Must be equal to allowed_test_mechanisms or a subset of bits.
>+       * Download will never happen if fresh information for mechanism is already available.
>+       * set bit to zero to specify that downloading using a mechanism is not allowed
>+       */

Matches previous "parameter 1".

Previous "parameter 3" has been replaced by having two separate sets of download flags,
one for leafs, one for the intermediates.


>+  int allow_global_default_source;
>+      /* (for example, the globally configured default OCSP responder)
>+       * multiple bits from REVOCATION_MECHANISM_*
>+       * bit set to 0: ignore the global default source, whether it's configured or not.
>+       * bit set to 1:
>+       *   if a global default source is configured, 
>+       *     then it overrides any available or missing source in the cert
>+       *   if no global default source is configured,
>+       *     then we continue to use what's available (or not available) in the certs
>+       */

This matches Nelson's request to clarify the relationship to the global OCSP responder.
This definition makes it possible to allow for a separate global default source per mechanism.


>+  int ignore_missing_info_if_source_missing;
>+      /* one or multiple bits from REVOCATION_MECHANISM_*
>+       * If fresh information is not yet available
>+       *   and the cert does not specify a revocation download location
>+       *   (and no alternate source location is configured by other means)
>+       * then:
>+       *   bit set to 1: skip testing this mechanism, assume it's not revoked
>+       *   bit set to 0: treat as "missing fresh info" and behave as defined by other flags
>+       */

Matches previous "parameter 5".


>+  int require_fresh_info;
>+      /* multiple bits from REVOCATION_MECHANISM_*
>+       * bit set to 1: fail hard on missing fresh info
>+       * bit set to 0: ignore missing info
>+       */

Matches previous "parameter 4".


>+  int continue_testing_after_success;
>+      /* only applicable if multiple mechanisms are specified in allowed_download_mechanisms.
>+       * 0: stop after having obtained first fresh info with a not-revoked result
>+       * 1: continue testing all mechanisms
>+       *    test result for each mechanism will be treated according to the 
>+       *    mechanism's bit value in require_fresh_info
>+       */

This matches Nelson's proposal to continue testing all mechanisms,
even if performing other tests/downloads already succeeded.

I made it unnecessary to discuss the ambiguity that has been mentioned in previous comments,
by allowing a separate fail soft/hard flag to be specified for each mechanism
(using bits in variable require_fresh_info).


>+typedef struct {
>+    CERTRevocationTests leafTests;
>+    CERTRevocationTests chainTests;
>+} CERTRevocationFlags;

This illustrates that we have full flexibility for "leaf" and "remainder of chain".
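For illustration, an application could demand strict checking on the leaf while staying lenient on the intermediates, along these lines (a sketch only; strictTests and lenientTests are hypothetical CERTRevocationTests values built from the fields discussed above):

    CERTRevocationFlags flags;
    flags.leafTests  = strictTests;   /* e.g. fresh CRL info required, fail hard */
    flags.chainTests = lenientTests;  /* e.g. missing info tolerated, fail soft */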
In response to comments 129 and 130:

When I wrote the question in comment 128, I overlooked something much
more fundamental about this proposed bit:

>+#define CERT_REV_DOWNLOAD_ALL_CERTS           0
>+#define CERT_REV_DOWNLOAD_LEAF_ONLY           0x08

If I understand the meaning of that bit correctly, it is NOT a revocation 
control.  It is controlling the behavior of downloading missing certs for 
the purpose of constructing a cert path to be validated.  I agree that we
need to control that behavior, but I am not sure that the revocation flags
are the right place.  Or maybe the flags we've called "revocation flags" 
are really broader things, controlling algorithm behavior for more than 
just revocation.

Kai, your answers suggest that the controls over downloading REVOCATION 
information (OCSP & CRLs) also implicitly control the downloading of 
CERTIFICATES for constructing a chain.  Those things are independent,
and must be independently controlled in the API.  

An application may want to allow missing certs to be downloaded, but may
still wish to not do any revocation checking, or alternatively, an application
may wish to do revocation checking, but not to download any missing certificates.

So I do not find the answers in those two comments to be satisfactory.
In reply to comment 133:
As I understand it, in the old cert DB code, we always check the CRL (if it 
is present) FIRST.  If it says the cert is revoked, then we stop.  Otherwise,
if OCSP is enabled, we ALSO check it.  We end up doing two checks.  
Julien has defended this behavior.  
I think the new API needs to be able to provide this behavior.
Assignee: rrelyea → nobody
I don't have any problems with the new structure rather than bit flags.

The structure should not be an explicit parameter to CERT_PKIXVerifyCert(), however. Applications should be able to call CERT_PKIXVerifyCert expecting a globally set policy; call sites where the application wants a policy different from the default should be exceedingly rare. Instead, add the structure to the union of pointers and continue to make it a parameter flag.
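A per-call override might then look roughly like this (a sketch only: cert_pi_revocationFlags and the value.pointer.revocation union member are assumed names for the proposed input parameter; cert, wincx, and the filled-in rev are application-provided):

    CERTRevocationFlags rev;   /* filled in by the application */
    CERTValInParam cvin[2];
    CERTValOutParam cvout[1];
    SECStatus rv;

    cvin[0].type = cert_pi_revocationFlags;
    cvin[0].value.pointer.revocation = &rev;
    cvin[1].type = cert_pi_end;
    cvout[0].type = cert_po_end;

    rv = CERT_PKIXVerifyCert(cert, certificateUsageSSLServer,
                             cvin, cvout, wincx);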

> allowed_download_mechanism

I think 'allow_remote_fetching' might be better. (no change to the semantics).

> allow_global_default_source

My first thought is to scotch this, but on further thinking I suspect it just needs a name change. It should be allow_implicit_default_source.

I'm also inclined to prefer reversing the sense of the bits so the common case (allow) is zero.

> ignore_missing_info_if_source_missing

I'd also like to see the sense of this field reversed (though it plays havoc with the name).

> continue_testing_after_success

Like you, I think this is probably not needed, but can be convinced otherwise.


I'd also like to see some predefined policies (example using your current definition).

(in certt.h)
extern const CERTRevocationFlags CERTRevocationPolicy_NIST;


(in certpkix.c)
const CERTRevocationFlags CERTRevocationPolicy_NIST = {
  { REVOCATION_MECHANISM_CRL, { 0 } , -1, -1,  0, -1, 0 },
  { REVOCATION_MECHANISM_CRL, { 0 } , -1, -1,  0, -1, 0 },
};


Attached patch Proposal 8 as patch (obsolete) — Splinter Review
Attachment #309254 - Flags: superreview?(nelson)
Attachment #309254 - Flags: review?(rrelyea)
Attachment #309007 - Attachment is obsolete: true
Attachment #309007 - Flags: superreview?(nelson)
Attachment #309007 - Flags: review?(rrelyea)
Attached patch Proposal 8 as patch (v2) (obsolete) — Splinter Review
I quickly produced an updated patch of proposal 8.

Two comment clarifications (where the comment referred to the name of a variable I had changed).

I also removed the get/set functions for global revocation policy.
Bob convinced me that it's sufficient that we just use the existing
  extern SECStatus CERT_PKIXSetDefaults(CERTValInParam *paramsIn);
Attachment #309254 - Attachment is obsolete: true
Attachment #309260 - Flags: superreview?(nelson)
Attachment #309260 - Flags: review?(rrelyea)
Attachment #309254 - Flags: superreview?(nelson)
Attachment #309254 - Flags: review?(rrelyea)
Comment on attachment 309260 [details] [diff] [review]
Proposal 8 as patch (v2)

r+ rrelyea
Attachment #309260 - Flags: review?(rrelyea) → review+
In reply to comment 141:
As I mentioned in the phone call, "continue testing after success" is the
behavior of the old pre-PKIX cert library, and the new API must be capable
of expressing "behave like NSS 3.11".  

In the call, we discussed the idea of imposing a freshness limit on CRL 
checks, so that a "positive" outcome from a check of a stale CRL would be
treated the same as an absent CRL.  That behavior would need to be optional.
How would that be controlled/expressed ?  I don't think that "fail soft"
quite covers it, because it really affects the definition of "soft failure".
(In reply to comment #145)
> As I mentioned in the phone call, "continue testing after success" is the
> behavior of the old pre-PKIX cert library, and the new API must be capable
> of expressing "behave like NSS 3.11".  

Yes, we agreed to keep it, and the latest proposal still includes it, so I assume you're ok with this aspect.


> In the call, we discussed the idea of imposing a freshness limit on CRL 
> checks, so that a "positive" outcome from a check of a stale CRL would be
> treated the same as an absent CRL.  That behavior would need to be optional.
> How would that be controlled/expressed ?  I don't think that "fail soft"
> quite covers it, because it really affects the definition of "soft failure".

I think we should introduce separate freshness defaults for CRLs, much like we already define freshness for OCSP responses in the cache.

Proposal 8 defines what happens if CRLs are fresh or not fresh.

As of today, it's up to NSS internal interpretation what it accepts as fresh or not fresh.

I think we can start with a global internal default (maybe relative to crl's next-update attribute) and introduce new functions that allow to specify the freshness defaults at a later time.
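A hypothetical shape for such a function (the name and parameter here are illustrative only; nothing is final):

    /* Hypothetical: treat a CRL as stale once this many seconds have
     * passed beyond its nextUpdate time. */
    SECStatus CERT_SetCRLFreshnessSlack(PRUint32 secondsPastNextUpdate);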

I think the definition of freshness can be seen independent of the revocation flags.

Nelson, is that sufficient?
Comment on attachment 309260 [details] [diff] [review]
Proposal 8 as patch (v2)

r~ (I have more questions).

1. I think the structure of this proposal is very good.  

2. I have a nit about the terminology.  The word "mechanism" has a 
very specific meaning in the context of crypto (specifically, PKCS#11)
and here that word is being used with a very different meaning. 
Is there another term we can use?  
- Algorithm?
- Protocol?
- Scheme?

3. I think perhaps our idea of specifying a preference order is too 
simplistic.  Consider the case where 
- we have a fresh OCSP response in hand, but not a CRL, 
- we prefer CRLs
- we require some positive revocation test result.  
- we will accept a fresh positive response from either CRL or OCSP.

In this case, it seems to me that we don't want to attempt to download the
CRL, because we have a fresh OCSP response in hand.  But as I understand 
the proposal, if we say that we prefer CRLs, then we will attempt to fetch
a CRL before going on to try OCSP and discovering that we have a fresh 
OCSP response.  

So, do we want to somehow specify an algorithm in which we test for the
presence of a fresh result from each of the preferred algorithms in order
first, and then, if none of them has a fresh response, take a second
pass through the algorithms in the order of preference to do the fetching?

Alexei, is that proposal a whopping huge change to libPKIX?
(In reply to comment #146)

> I think we can start with a global internal default (maybe relative to crl's
> next-update attribute) and introduce new functions that allow to specify the
> freshness defaults at a later time.

IIRC, recently (within the last year or so) we defined new functions by which
this freshness information is configured into NSS for OCSP.  As part of this
bug, we should also define the equivalent functions for CRL freshness configuration.

> I think the definition of freshness can be seen independent of the revocation
> flags.

Yes, I agree that it's separate from the flags.  But it's still part of the 
total API being devised for this RFE, IMO.
(In reply to comment #147)
> 1. I think the structure of this proposal is very good.  

I'm glad to hear that and hope we're very close :-)


> 2. I have a nit about the terminology.  The word "mechanism" has a 
> very specific meaning in the context of crypto (specifically, PKCS#11)
> and here that word is being used with a very different meaning. 
> Is there another term we can use?  
> - Algorithm?
> - Protocol?
> - Scheme?

I personally think it's absolutely ok to reuse words and give them a slightly different meaning based on context. We do that all the time, don't we?

Should we stop talking about "bits" when talking about revocation behavior, because pkcs#11 defines bitmasks, too?

(But I'm also ok to rename "revocation mechanism" and replace it with "revocation protocol", if you wish.)


> 3. I think perhaps our idea of specifying a preference order is too 
> simplistic. 
> ...
> So, do we want to somehow specify an algorithm in which we test for the 
> presence of a fresh result from each of the preferred algorithms in order
> first, and then, if none of them has a fresh response, take a second
> pass through the algorithms in the order of preference to do the fetching?


It seems that you would like to propose yet another flag, but this time one that is not tied to an individual mechanism?

    #define CERT_REV_TEST_ALL_LOCAL_INFORMATION_FIRST  0
    #define CERT_REV_TEST_EACH_MECHANISM_SEPARATELY    1

We could add a new member to CERTRevocationTests, which allows us to add further revocation checking behavior that is global to all mechanisms.

typedef struct
{
...
    PRUint32 cert_rev_mechanism_independent_flags;
...
} CERTRevocationTests;
(In reply to comment #148)
> Yes, I agree that it's separate from the flags.  But it's still part of the 
> total API being devised for this RFE, IMO.

I'm ok to do it as part of this bug, but:
- the revocation flags are required right now, because we already use them,
  and we are blocked on the final specification
- the CRL freshness is not yet used

So, could we please finish the review of the flags, and do the freshness definition as a separate patch?
Nelson,

re: comment 140
Yes, this is the way the legacy certdb code operates - check for the CRL first. The CRL check is very inexpensive except the first time we check status for a cert by a given issuer. After that it's all cached and it comes down to a lock and a hash table lookup. We should have a way to provide this behavior in the new API, even if it's not the default.
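With the structure proposed here, that legacy behavior could presumably be expressed along these lines (a sketch only; the field order follows the proposal 8 fields quoted earlier in this bug, and REVOCATION_MECHANISM_OCSP is an assumed bit name):

    CERTRevocationTests legacyLike = {
      REVOCATION_MECHANISM_CRL | REVOCATION_MECHANISM_OCSP, /* test both */
      { REVOCATION_MECHANISM_CRL, 0 },  /* preference: consult the CRL first */
      0,   /* no downloading, local checks only */
      0,   /* ignore any global default source */
      REVOCATION_MECHANISM_CRL | REVOCATION_MECHANISM_OCSP, /* skip a method
                                            whose cert names no source */
      0,   /* fail soft: missing fresh info is ok */
      1    /* keep testing OCSP even after a good CRL result */
    };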

I'm sorry I haven't been able to provide more input on this. I have been sick all week long. I'll try to go over the other proposals and comments.
Kai, The addition of freshness APIs need not affect this patch.

Also, as we discussed earlier, we need to be able to enable or disable
libPKIX's new ability to fetch certs from URLs in AIA extensions.  
That's not part of revocation, but it is a behavior that needs to be 
controllable.  We need to decide on the default for this behavior.

Is there any other feature in the API that provides flags by which 
the caller can affect behavior?  Or must we invent yet another 
cert_pi_<something> member of the CERTValParamInType enum?
(In reply to comment #152)
> Kai, The addition of freshness APIs need not affect this patch.

good


> Also, as we discussed earlier, we need to be able to enable or disable
> libPKIX's new ability to fetch certs from URLs in AIA extensions.  
> That's not part of revocation, 

right, so let's make it a separate patch.


> but it is a behavior that needs to be 
> controllable.  We need to decide on the default for this behavior.

I really don't know.
If you're asking me for an opinion, I'd say, classic behavior is default, and "download certs on demand" is a new feature that can be enabled.

But I really would propose that we try to focus on revocation today, so we get that done.


> Is there any other feature in the API that provides flags by which 
> the caller can affect behavior?  Or must we invent yet another 
> cert_pi_<something> member of the CERTValParamInType enum?

I'm not aware of another flag that would allow specifying it, but I think the API is flexible enough to allow the introduction of an additional cert_pi_allow_download_missing_intermediates_on_demand at a later time.
Nelson, I'm not sure which of your comments are review comments in the sense of "please make that change". You were asking questions, and I made proposals.

Will you summarize your change requests before you give review?

Or would you like me to attach a new patch with the proposals I made (that came after Bob's r+)?
Comment on attachment 309260 [details] [diff] [review]
Proposal 8 as patch (v2)

Please replace "mechanism" with "method" everywhere (upper and lower case).  
Then r=nelson for this patch.
Attachment #309260 - Flags: superreview?(nelson) → superreview+
Kai,

Re: comment 143,

I see that CERT_PKIXSetDefaults is declared in cert.h, but it is not implemented anywhere, nor is it exported. Are you planning to do that ?
One of the problems we have had with our old PKIX code relates to global settings. For example, with OCSP. The application makes a call to CERT_EnableOCSPChecking which affects all subsequent certificate verification in the process.

This may be useful for a monolithic product, but Sun products are often put together from multiple pieces. We recently had the case of a plug-in to the web server that called CERT_EnableOCSPChecking in order to check revocation status for its outgoing SSL connections. But this also caused all incoming client auth SSL connections to the web server to also be checked for OCSP, which was not intended at all.

The new PKIX API is supposed to allow the application to choose how it wants to do the verification, and pass that information in. We need to have global defaults of course, but I would advocate against adding global setting functions, since this doesn't work well in layered applications with plugins.

Even though it seems easy, global setting functions are a trap for the application developers, IMO. If an application wants to do something differently than the global default, it needs to be passing that information in, not call a global function that may have unintended consequences on other callers. And we should make it as easy as possible for them to only modify the behavior they want to change from the default (e.g. use a different revocation policy only).

I'm sorry that I haven't read the full proposals yet, but I think this needed to be stated.
This change should help prevent application components from stepping on each other's toes.

We still have CERT_SetUsePKIXForValidation which is another vexing process-global setting that may create problems with mixed components, IMO. If we keep this, I hope we are going to make it return SECFailure when we get rid of the old PKIX code completely in the future.
Attachment #309303 - Flags: superreview?
Attachment #309303 - Flags: review?(rrelyea)
Attachment #309303 - Flags: superreview? → superreview?(kengert)
Attached patch Proposal 8 as patch (v3) (obsolete) — Splinter Review
This is a minor revision of the previous patch (proposal 8 as patch v2).

I've globally replaced the word "mechanism" with "method", as requested by Nelson.

Also, as discussed with Nelson outside of this bug, I've included the proposal seen in comment 149 section 3, which gives us additional future flexibility on adding more flags.

I'm carrying forward Nelson's and Bob's reviews, but feel free to do an interdiff and criticize.
Attachment #309422 - Flags: superreview+
Attachment #309422 - Flags: review+
Comment on attachment 309303 [details] [diff] [review]
Replace set default function with get default function

Julien, I think this is explicitly not what we would like to happen.

Your change would make it necessary to pass in the changed policy to each call to CERT_PKIXVerifyCert.

My understanding is, we don't want that.
Attachment #309303 - Flags: superreview?(kengert) → superreview-
(In reply to comment #156)
> 
> I see that CERT_PKIXSetDefaults is declared in cert.h, but it is not
> implemented anywhere, nor is it exported.

Correct. We were still discussing the API.


> Are you planning to do that ?

Sure.


> One of the problems we have had with our old PKIX code relates to global
> settings. For example, with OCSP. The application makes a call to
> CERT_EnableOCSPChecking which affects all subsequent certificate verification
> in the process.
> 
> This may be useful for a monolithic product, but Sun products are often put
> together from multiple pieces. We recently had the case of a plug-in to the web
> server that called CERT_EnableOCSPChecking in order to check revocation status
> for its outgoing SSL connections. But this also caused all incoming client auth
> SSL connections to the web server to also be checked for OCSP, which was not
> intended at all.
> 
> The new PKIX API is supposed to allow the application to choose how it wants to
> do the verification, and pass that information in. We need to have global
> defaults of course, but I would advocate against adding global setting
> functions, since this doesn't work well in layered applications with plugins.

In my opinion both can be mixed.
An application is able to set a global default, and the latest API also allows the application to specify overriding individual behavior with individual function calls.


> Even though it seems easy, global setting functions are a trap for the
> application developers, IMO. If an application wants to do something
> differently than the global default, it needs to be passing that information
> in, not call a global function that may have unintended consequences on other
> callers.

I don't think we must be that strict; programming is full of traps anyway :-)
The current API offers flexibility and is comfortable at the same time.


> And we should make it as easy as possible for them to only modify the
> behavior they want to change from the default (e.g. use a different revocation
> policy only). I'm sorry that I haven't read the full proposals yet, but I think
> this needed to be stated.

So, we're a bit under time pressure.

We must ensure that today we will finalize the API for all code that is already in use, to keep our promise that all current APIs will continue to be supported going forward.

Your idea sounds good: make it as easy as possible to change only the aspects of the struct a programmer absolutely has to care about. This could be done (as you proposed in your patch) by returning a copy of the default, letting the programmer change it, and passing it back in. However, that will require us to think a bit more about memory management of this structure; we might have to offer deep copying and a destroy function. I think all of these are simply additional functions on top of the current API. As we are always able to add functions later, I propose we move your proposal away from today's time pressure and postpone it.
Do you want me to change 
  PRUint32 *cert_rev_flags_per_method;
and
  PRUint32 cert_rev_method_independent_flags;
to use 
  PRUint64
?
Attachment #309449 - Flags: review?(rrelyea)
Attached patch Proposal 8 as patch (v4) (obsolete) — Splinter Review
I made minor tweaks to the "proposal 8 patch v3":

- removed the old CERT_REV_ flags that are no longer supported
- changed the size of the flag integers to 64 bits
- changed the bit value definitions to use a L suffix

(I've confirmed on irc.mozilla.org that using LL_ macros for bit testing is not necessary on any platform.)
Attachment #309422 - Attachment is obsolete: true
Attachment #309454 - Flags: superreview+
Attachment #309454 - Flags: review+
Comment on attachment 309422 [details] [diff] [review]
Proposal 8 as patch (v3)

Kai, in the world of NSS we don't use the practice of "carrying forward" other people's reviews.  
But in this case, I'm happy with your latest change so I'll just let it stand.
Comment on attachment 309449 [details] [diff] [review]
Patch: adjust implementation to Proposal 8

Kai, NSPR has an error code that means "not implemented".  It is PR_NOT_IMPLEMENTED_ERROR.  NSS requires that any function that returns an indication of failure (e.g. NULL or SECFailure) MUST set an error code (unless the correct error code has already been set by a subordinate function). In cases where the failure is because the function is not implemented, the correct error code is PR_NOT_IMPLEMENTED_ERROR.  So, make sure that all execution paths in this new patch that are not implemented set that error code.
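A minimal sketch of that pattern (the function name here is hypothetical):

    SECStatus
    CERT_SomeUnimplementedEntryPoint(void)
    {
        /* failure must be accompanied by an error code;
         * "not implemented" has a dedicated one in NSPR */
        PORT_SetError(PR_NOT_IMPLEMENTED_ERROR);
        return SECFailure;
    }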
Attachment #309449 - Flags: review-
Blocks: 406755
(In reply to comment #164)
> (From update of attachment 309422 [details] [diff] [review])
> Kai, in the world of NSS we don't use the practice of "carrying forward" other
> people's reviews.  
> But in this case, I'm happy with your latest change so I'll just let it stand.

Thanks Nelson. Are you also happy with the changes I describe in comment 163? You can use interdiff between "proposal 8 v3" and "proposal 8 v4".
Comment on attachment 309454 [details] [diff] [review]
Proposal 8 as patch (v4)

A trivial subset in this file is now obsolete: The declarations for the functions that return static policies.

I'll be including this snippet in the next revision of the patch that adjusts the NSS implementation, because Bob made me aware:

We need 3 policy functions in total:
- NIST
- NSS 3.11 with OCSP disabled
- NSS 3.11 with OCSP enabled
Attachment #309449 - Attachment is obsolete: true
Attachment #309476 - Flags: review?(rrelyea)
Attachment #309449 - Flags: review?(rrelyea)
Attachment #209931 - Attachment is obsolete: true
Attachment #277193 - Attachment is obsolete: true
Attachment #277486 - Attachment is obsolete: true
Comment on attachment 309454 [details] [diff] [review]
Proposal 8 as patch (v4)

Adding a trailing 'L' suffix to small integers is almost never necessary
in the C language because of the language's rules for type promotion.

It's only necessary when sizeof(long) > sizeof(int), and then only when
a) the value of the constant is greater than INT_MAX, e.g. 11222333444L, or
b) the constant is used in an arithmetic operation where the value of the
result will exceed INT_MAX, e.g.  (1L << 40).

And adding the L suffix to small constants creates problems for assigning
their values to integers with some compilers.  

So, overall, I'd prefer not to add those L suffixes, but I won't block 
this patch over it.  

There is one thing in this patch (and in the last one too) that I'd like
you to fix.  It's a trailing underscore in the following definition:

>+#define CERT_REV_M_IGNORE_MISSING_FRESH_INFO_        0L

You don't need to do another round of patch/review just for that, IMO.
(In reply to comment #169)
> ...
> So, overall, I'd prefer not to add those L suffixes, but I won't block 
> this patch over it.  

Thanks Nelson. If Bob agrees, I'll remove the L suffixes, but I'd like to note that Bob was the one who proposed adding them.


> There is one thing in this patch (and in the last one too) that I'd like
> you to fix.  It's a trailing underscore in the following definition:
> 
> >+#define CERT_REV_M_IGNORE_MISSING_FRESH_INFO_        0L

Thanks for catching that!
I'll attach an updated patch to document what I'll check in (but not ask for review).
In that patch I'll also remove the cert.h header changes that have moved to the implementation patch (as I explained in comment 167).
Attached patch Proposal 8 as patch (v5) (obsolete) — Splinter Review
Attachment #309478 - Flags: superreview+
Attachment #309478 - Flags: review+
Comment on attachment 309476 [details] [diff] [review]
Implementation v2 (for Proposal 8)

I'm not going to do a full review of this patch (unless you ask :)
but I have this comment about a pattern used in the definition 
of various arrays in this patch.  Here is one example of several:

>+static PRUint64 certRev_NSS_3_11_Ocsp_Disabled_Policy_ChainFlags[2] = {
>+  /* crl */
>+  CERT_REV_M_TEST_USING_THIS_METHOD
>+  | CERT_REV_M_FORBID_NETWORK_FETCHING
>+  | CERT_REV_M_CONTINUE_TESTING_ON_FRESH_INFO 
>+  ,

The issue is that comma on a line by itself.

>+  /* ocsp */
>+  0
>+};
>+
>+static const CERTRevocationFlags certRev_NSS_3_11_Ocsp_Disabled_Policy = {
>+  {
>+    /* leafTests */
>+    2,
>+    certRev_NSS_3_11_Ocsp_Disabled_Policy_LeafFlags,
>+    0,
>+    0,
>+    0
>+  },
>+  {

This style of coding used in these definitions strongly suggests 
that the intent is to declare one member per line.  So, as a reviewer,
when I see that comma on a line by itself, and no comma at the end
of the preceding line, it really stands out and makes me wonder whether
an error has occurred there: did the writer intend to define a member
but forget the value, leaving off the trailing comma from the preceding
line, or was the placement of the comma on its own line essentially
just a "typo"?

Assuming that there is no missing member here, I ask that you join that
comma to the preceding line, in each place where it appears by itself.
You don't need to respin this patch just for this.
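For example, the first array quoted above, joined up, would read:

    static PRUint64 certRev_NSS_3_11_Ocsp_Disabled_Policy_ChainFlags[2] = {
      /* crl */
      CERT_REV_M_TEST_USING_THIS_METHOD |
      CERT_REV_M_FORBID_NETWORK_FETCHING |
      CERT_REV_M_CONTINUE_TESTING_ON_FRESH_INFO,
      /* ocsp */
      0
    };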
I wrote:
> You don't need to respin this patch just for this.
I meant that you don't need to attach a new patch and request another review.
You can just fix that before checkin.  
Depends on: 412468
Yet another update to the implementation.
Bob made me aware that we should distinguish between "hard OCSP failure policy in 3.11" and "soft OCSP failure policy in 3.11", so I've added another policy definition.

This patch is pending on Alexei's feedback, whether the NIST policy requires
  CERT_REV_M_SKIP_TEST_ON_MISSING_SOURCE
or
  CERT_REV_M_REQUIRE_INFO_ON_MISSING_SOURCE

But I think we can clarify this detail later, and fix the code should the current definition be wrong.
Attachment #309476 - Attachment is obsolete: true
Attachment #309493 - Flags: review?(rrelyea)
Attachment #309476 - Flags: review?(rrelyea)
Comment on attachment 309493 [details] [diff] [review]
Implementation v3 (for Proposal 8)

r+ with nelson's comments.
Attachment #309493 - Flags: review?(rrelyea) → review+
RE: 'L'

I made the request to match the change in the base flag types from PRUint32 to PRUint64. It seems to me best to keep the flag constants and the variables you match them to the same type.

I don't have any qualms about PRUint32 for the flags (they are not likely to grow as fast as the previous flags, since they are per revocation type, not global).

Thanks to Bob for realizing: We have lost the ability to express "require at least some fresh info, by whatever method is able to provide it".

This patch defines the following for the method-independent flag:

/*
 * Use this flag to specify that it's necessary that fresh information
 * is available for at least one of the allowed methods, but it's
 * irrelevant which of the methods succeeded.
 * NO_OVERALL_INFO_REQUIREMENT means:
 *     We strictly follow the requirements for each individual method.
 * REQUIRE_SOME_FRESH_INFO_AVAILABLE means:
 *     After the individual tests have been executed, we must have
 *     been able to find fresh information using at least one method.
 *     If we were unable to find fresh info, it's a failure.
 */
#define CERT_REV_MI_NO_OVERALL_INFO_REQUIREMENT       0L
#define CERT_REV_MI_REQUIRE_SOME_FRESH_INFO_AVAILABLE 2L
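For illustration, combining both method-independent behaviors (a sketch; revTests is a hypothetical CERTRevocationTests variable, and CERT_REV_MI_TEST_ALL_LOCAL_INFORMATION_FIRST is the flag discussed in comment 149):

    /* consult all locally cached info first, and require that at least
     * one method ends up providing fresh information */
    revTests.cert_rev_method_independent_flags =
        CERT_REV_MI_TEST_ALL_LOCAL_INFORMATION_FIRST |
        CERT_REV_MI_REQUIRE_SOME_FRESH_INFO_AVAILABLE;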
Attachment #309478 - Attachment is obsolete: true
Attachment #309517 - Flags: review?(rrelyea)
Comment on attachment 309517 [details] [diff] [review]
Proposal 8 as patch (v6) [checked in]

r+ rrelyea.

Adding Nelson to bring attention to the change.


We forgot the semantics that kicked off the revocation re-examination to begin with: a flag that says "require some sort of revocation information" (versus a flag that says require some sort of OCSP or some sort of CRL).

With this API we only need one flag (where before we were looking at 2), because we have full control at this point to 'ignore missing revocation information' on each revocation type, so we can handle both REQUIRE and REQUIRE-IF-AVAILABLE.
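For example (a sketch; leafTests is a hypothetical CERTRevocationTests variable, and cert_revocation_method_crl is an assumed method index name):

    /* per method: tolerate missing fresh CRL info (fail soft),
     * i.e. leave CERT_REV_M_FAIL_ON_MISSING_FRESH_INFO unset */
    leafTests.cert_rev_flags_per_method[cert_revocation_method_crl] &=
        ~CERT_REV_M_FAIL_ON_MISSING_FRESH_INFO;
    /* overall: at least one method must still deliver fresh info */
    leafTests.cert_rev_method_independent_flags |=
        CERT_REV_MI_REQUIRE_SOME_FRESH_INFO_AVAILABLE;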
Attachment #309517 - Flags: superreview?(nelson)
Attachment #309517 - Flags: review?(rrelyea)
Attachment #309517 - Flags: review+
This fixes only the lines that contain commas.
I'm keeping the L suffixes for now, because Bob's comment that we should match variable types and constant types seems reasonable to me, and Nelson is only concerned that they *might* cause problems when one isn't paying attention while using the flags.
Attachment #309523 - Flags: review+
Alexei told me that we must set CERT_REV_M_FAIL_ON_MISSING_FRESH_INFO
for the NIST CRL policy on both leaf and intermediate certs.

This patch has this as the only change from the previous patch revision, so I'm marking r=alexei on this change.
Attachment #309260 - Attachment is obsolete: true
Attachment #309454 - Attachment is obsolete: true
Attachment #309493 - Attachment is obsolete: true
Attachment #309523 - Attachment is obsolete: true
Attachment #309549 - Flags: review+
Comment on attachment 309517 [details] [diff] [review]
Proposal 8 as patch (v6) [checked in]

This patch checked in.

Checking in mozilla/security/nss/lib/certdb/certt.h;
/cvsroot/mozilla/security/nss/lib/certdb/certt.h,v  <--  certt.h
new revision: 1.43; previous revision: 1.42
done
Attachment #309517 - Attachment description: Proposal 8 as patch (v6) → Proposal 8 as patch (v6) [checked in]
Comment on attachment 309549 [details] [diff] [review]
Implementation v5 (for proposal 8) [checked in]

This patch checked in.

Checking in mozilla/security/nss/lib/certdb/cert.h;
/cvsroot/mozilla/security/nss/lib/certdb/cert.h,v  <--  cert.h
new revision: 1.66; previous revision: 1.65
done
Checking in mozilla/security/nss/lib/certhigh/certvfypkix.c;
/cvsroot/mozilla/security/nss/lib/certhigh/certvfypkix.c,v  <--  certvfypkix.c
new revision: 1.17; previous revision: 1.16
done
Checking in mozilla/security/nss/lib/nss/nss.def;
/cvsroot/mozilla/security/nss/lib/nss/nss.def,v  <--  nss.def
new revision: 1.191; previous revision: 1.190
done
Attachment #309549 - Attachment description: Implementation v5 (for proposal 8) → Implementation v5 (for proposal 8) [checked in]
The problem with the idea of using L so that it will match the variable
type is that it will either match on LP64 platforms or on ILP32 platforms
but cannot match on both.  If the variable type is 64-bit, then you need 
an LL suffix on ILP32 platforms.  An L suffix is no better than no suffix
on those platforms.  If the variable type is 32-bit, then on 64-bit platforms
the L suffix will cause at least a warning and perhaps an error.

You could solve that by declaring the flags variable as type "long", rather 
than as any of the NSPR types that have explicit word widths.  But then the 
number of available bits is platform dependent.  Another solution is to 
define the constant with a type cast, e.g. ((PRUint64)64L).

So, the desire to match constants to variable sizes is noble, but using an
L suffix doesn't accomplish it on all platforms.   

But this is pretty minor, and we can tweak these constants later.  The 
important short term objective is to get the symbols defined, and get on
with the implementation.
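To illustrate with a hypothetical flag name:

    #define EXAMPLE_FLAG_BAD  (1L << 40)           /* shifts out of range when
                                                      long is 32 bits (ILP32) */
    #define EXAMPLE_FLAG_OK   ((PRUint64)1 << 40)  /* well defined everywhere */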
Comment on attachment 309517 [details] [diff] [review]
Proposal 8 as patch (v6) [checked in]

I'm not wild about the new name string, but the idea is good.
Attachment #309517 - Flags: superreview?(nelson) → superreview+
Comment on attachment 309523 [details] [diff] [review]
Implementation v4 (for proposal 8)

I suggest that
CERT_GetPKIXVerifyNSS_3_11_OCSP_Enabled_Hard_Policy etc.
be renamed
CERT_GetPKIXVerifyClassicOCSPEnabledHardPolicy
or even
CERT_GetClassicOCSPEnabledHardPolicy.
Comment on attachment 309517 [details] [diff] [review]
Proposal 8 as patch (v6) [checked in]

I suggest shortening
CERT_REV_MI_TEST_ALL_LOCAL_INFORMATION_FIRST
to
CERT_REV_MI_TEST_ALL_LOCAL_INFO_FIRST because in other
macros we use the INFO abbreviation.
Comment on attachment 309566 [details] [diff] [review]
Patch to rename some policy function names [checked in]

r+ rrelyea

I agree, wtc's names are better.

bob
Attachment #309566 - Flags: review+
Comment on attachment 309566 [details] [diff] [review]
Patch to rename some policy function names [checked in]

Checking in certdb/cert.h;
/cvsroot/mozilla/security/nss/lib/certdb/cert.h,v  <--  cert.h
new revision: 1.67; previous revision: 1.66
done
Checking in certhigh/certvfypkix.c;
/cvsroot/mozilla/security/nss/lib/certhigh/certvfypkix.c,v  <--  certvfypkix.c
new revision: 1.20; previous revision: 1.19
done
Checking in nss/nss.def;
/cvsroot/mozilla/security/nss/lib/nss/nss.def,v  <--  nss.def
new revision: 1.192; previous revision: 1.191
done
Attachment #309566 - Attachment description: Patch to rename some policy function names → Patch to rename some policy function names [checked in]
Assignee: nobody → rrelyea
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Attachment #309303 - Flags: review?(rrelyea)
Comment on attachment 309549 [details] [diff] [review]
Implementation v5 (for proposal 8) [checked in]

I'm pretty amazed that this patch passed review. 
These function declarations aren't even ANSI C prototypes.  
Please fix them.