Open Bug 968556 Opened 10 years ago Updated 2 years ago

Document internal design of mozilla::pkix

Categories

(Core :: Security: PSM, defect, P3)

People

(Reporter: briansmith, Unassigned)

References

(Blocks 1 open bug)

Details

(Whiteboard: [psm-documentation])

* See bug 921891 comment 15 about describing the difference between RecoverableError and FatalError.

* Document intention to have errors of different X.509 extensions result in different NSS errors.

* Document how/why insanity::pkix returns SEC_ERROR_UNKNOWN_ISSUER when it fails to build a valid path, instead of SEC_ERROR_UNTRUSTED_ISSUER, except for the special case of active distrust, where SEC_ERROR_UNTRUSTED_ISSUER is returned. The idea is to allow the application to distinguish active distrust from a general failure to build a path, so that it can refuse to offer cert error overrides for actively distrusted certificates while still offering them for other path-building failures.

Probably a lot of this documentation belongs in the public-facing documentation in bug 968451.

David, Camilo, let me know what other internal design decisions you would like better described.
* Prefer PR_* over PORT_* (e.g. PR_ASSERT, PR_SetError), because PR_* is an NSPR dependency while PORT_* is an NSS dependency. Also, wtc usually asks for PR_* to be used instead of PORT_* when he reviews PSM code.
* Avoidance of memory allocation: prefer re-processing simple data over allocating and caching it (CPU is faster than memory). Exception: name constraints processing.

* Assumption that signatures are exact multiples of 8 bits:

  // XXX: Really the constraint should be that unusedBitsAtEnd must be less
  // than 7. But, we suspect there are no valid OCSP response signatures with
  // non-zero unused bits. It seems like NSS assumes this in various places, so
  // we enforce it. If we find compatibility issues, we'll know we're wrong.

  Need to verify that this is correct & makes sense.
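A minimal sketch of the check described by that comment (the function name is illustrative, not the actual mozilla::pkix API): a DER BIT STRING value begins with one byte giving the number of unused bits in its final byte, and the strict policy quoted above requires that byte to be zero.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: enforce that a signature BIT STRING has zero unused
// bits, i.e. that the signature is an exact multiple of 8 bits.
bool CheckSignatureBitString(const uint8_t* value, size_t length) {
  if (length < 2) {
    return false;  // need the unused-bits byte plus at least one data byte
  }
  uint8_t unusedBitsAtEnd = value[0];
  // DER would allow 0-7 here, but per the comment above we require 0.
  return unusedBitsAtEnd == 0;
}
```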

* Certs and OCSP responses are always verified at the given time, not at the time the signed object was claimed to be signed.

* OCSP logic ignores revocation time and producedAt; only thisUpdate and nextUpdate matter when determining response expiration.

* Amount of clock slop allowed and the maximum permitted age of an OCSP response. TODO: Will capping OCSP response age at 10 days cause compatibility trouble?
* How do we rank certain errors as being more important or more urgent to return than others? How do we deal with a cert that has multiple errors? How do we deal with multiple candidate chains where each chain has a different error? What are the implications for cert override processing?
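A sketch of the freshness rule described in the OCSP bullets above. The function name and the exact slop and maximum-age constants are assumptions for illustration, not the shipped values; the point is that only thisUpdate and nextUpdate are consulted, with the window capped at 10 days.

```cpp
#include <cstdint>

// Assumed constants: one day of clock-skew slop, and the 10-day age cap
// discussed above.
const int64_t kSlopSeconds = 24 * 60 * 60;
const int64_t kMaxAgeSeconds = 10 * 24 * 60 * 60;

// Times are seconds since some epoch. producedAt and any revocation time
// are deliberately ignored for freshness purposes.
bool OCSPResponseIsFresh(int64_t thisUpdate, int64_t nextUpdate, int64_t now) {
  if (thisUpdate > now + kSlopSeconds) {
    return false;  // not yet valid, even allowing for clock skew
  }
  // nextUpdate bounds the validity window, but never beyond the age cap.
  int64_t notAfter = nextUpdate < thisUpdate + kMaxAgeSeconds
                         ? nextUpdate
                         : thisUpdate + kMaxAgeSeconds;
  return now <= notAfter + kSlopSeconds;
}
```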
Priority: -- → P4
PR_Now: Just say no!

How certificate loops are detected. (Not 100% confident in this.)

How we determine trust/distrust from the NSS certdb. (Not 100% confident in this.)

OCSP expiration logic.

OCSP cache policy.
From http://blog.bro.org/2014/03/dissecting-gnutls-bug.html:

From an attacker's perspective, making the signature lookup fail is probably the easiest method to exploit the bug. According to the function _gnutls_x509_get_signature, the signature verification fails if the number of bits in the certificate signature is not divisible by 8. Certificate signatures have an ASN.1 bit string encoding, which theoretically may exhibit an arbitrary number of zeros and ones, and may not necessarily be divisible by 8. But the GnuTLS function assumes a byte-aligned signature, hence the check. In the ASN.1 DER encoding, used in X.509 certificates, a bit string begins with a single byte which denotes the number of missing bits in the last byte. For all normal certificates, this value is always zero, and the function continues. But if we change this byte to a non-zero value, the function fails the divisibility check and returns with the wrong error code, causing the certificate verification to succeed.
Normally in Gecko we don't check the result of operator new, because operator new never returns null there. But eventually we want this code to work outside of Gecko without libmozalloc, where operator new may return null or throw an exception. Should we be using "new (nothrow)" to make the behavior as consistent as possible?
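A sketch of the pattern being proposed (names are illustrative): with "new (std::nothrow)", allocation failure is reported as a null pointer both with and without an infallible allocator like libmozalloc, instead of sometimes throwing std::bad_alloc.

```cpp
#include <new>  // std::nothrow

struct Node {
  int value;
  explicit Node(int v) : value(v) {}
};

// Hypothetical helper: allocation failure surfaces as nullptr, which the
// caller must handle explicitly, rather than as an exception.
Node* MakeNode(int v) {
  Node* n = new (std::nothrow) Node(v);
  if (!n) {
    return nullptr;
  }
  return n;
}
```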
Summary: Document internal design of insanity::pkix → Document internal design of mozilla::pkix
Stefan made the following comment in his unit tests of the DER decoder:

// TODO: The DER encoding specification says that Enumerated follows
// Integer. So although we only expect single byte values, it would
// actually be legal to have an enumerated value encoded in multiple
// bytes. How about 'overly long ints' ? (Is 0x00000001 valid ?)

That is a good point that needs to be documented. The mozilla::pkix DER decoder isn't a general-purpose library for decoding DER-encoded ASN.1. It is intentionally limited to parsing values that we encounter in certificates and OCSP responses. In particular, in X.509 there are no ENUMERATED values, and no INTEGER values are allowed to be negative or larger than 127. That means we can use a much simpler, more efficient, and more obviously correct parsing strategy than a general-purpose ASN.1 parser would be able to use.
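The restricted strategy described above can be sketched as follows. This is a hypothetical illustration, not the mozilla::pkix API: it accepts only a DER INTEGER whose content is a single non-negative byte (0..127), which automatically rejects both multi-byte ENUMERATED-style values and overly long forms like a padded 0x00000001.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: parse tag 0x02 (INTEGER), length 1, then one content
// byte with the high bit clear (i.e. a value in 0..127).
bool ParseSmallInteger(const uint8_t* der, size_t length, uint8_t& out) {
  if (length != 3 || der[0] != 0x02 || der[1] != 0x01) {
    return false;  // rejects multi-byte encodings, including padded forms
  }
  if (der[2] & 0x80) {
    return false;  // high bit set would encode a negative value
  }
  out = der[2];
  return true;
}
```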

Conversely, Stefan also noticed that we accept some prohibited values such as overly-long serial numbers. These are workarounds put in place based on implementation experience trying to parse real-world certificates that don't follow all the rules. We should document all of these cases. Kathleen Wilson has started doing that here:
https://wiki.mozilla.org/SecurityEngineering/mozpkix-testing#Things_for_CAs_to_Fix
Assignee: brian → nobody
Whiteboard: [psm-documentation]
Severity: normal → S3