Closed Bug 1091778 Opened 5 years ago Closed 4 years ago

heuristically classify unverifiable certificates as part of or not part of the "real PKI"

Categories

(Core :: Security: PSM, defect)


RESOLVED WONTFIX

People

(Reporter: keeler, Assigned: keeler)

References

Details

Attachments

(1 file)

With mozilla::pkix, we've tried to verify certificates more strictly. This has caused a number of compatibility issues, particularly with certificates on devices that are not part of the "real PKI" (i.e., not issued by a certificate authority in our root program). While we want to protect as many of our users as possible, there is clearly a benefit to still allowing users who need to connect to these broken devices to do so. To that end, one potential solution is to try to determine whether a certificate that has failed to verify was probably issued by an authority in our program. If it was, we'll be as strict as we can with it to protect our users. If it wasn't, then users would have to add an "untrusted issuer" exception anyway, so we can essentially treat all such certificates as untrusted in that way, thus allowing overrides.

To be clear, this is an application-specific determination that would be done in PSM, not mozilla::pkix.
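The policy described above can be illustrated roughly as follows. This is a minimal, hypothetical sketch (the function names, the error set, and the issuer-name check are placeholders, not PSM's actual heuristic): when verification fails, a certificate that does not appear to chain to a root-program CA is reported as an untrusted issuer, which users can override.

```cpp
#include <cassert>
#include <string>

// Hypothetical error set for illustration; real Gecko uses NSS/mozilla::pkix
// error codes.
enum class VerifyError { None, WeakKey, V1Intermediate, UnknownIssuer };

// Assumed classifier: returns true when the failing certificate appears to
// chain to a CA in the Mozilla root program. A real implementation would
// attempt to build a chain to a built-in root, not inspect the issuer name.
bool IsProbablyRealPKI(const std::string& issuerName) {
  return issuerName.find("BuiltinCA") != std::string::npos;
}

// If the cert looks like part of the real PKI, keep the strict error so no
// override is offered; otherwise report it as an untrusted issuer, which
// users can override like any other untrusted issuer.
VerifyError ClassifyFailure(VerifyError strictError, const std::string& issuer) {
  if (IsProbablyRealPKI(issuer)) {
    return strictError;              // stay strict to protect the user
  }
  return VerifyError::UnknownIssuer; // overridable
}
```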
Blocks: 1084606
David, I don't think that heuristics are going to do much good.

Instead, I'd propose something else: certificate pinning. Self-signed sites usually have long-lived certs, and if this is your internal router or webmail server, you probably use it rather often over a long period of time. We should record that. For example, we could allow the user to whitelist a server and cert (e.g. "[x] This is an internal server of mine") and then insist that the cert always stays the same for that host.
xref bug 744204 for cert pinning
> more strictly verify certificates.
> failed to verify successfully

Can you elaborate? I assumed you were speaking about the traditional checks (which CA signed it, hostname match, expiry, etc.). Your blocking bug suggests other checks, like the ciphers used. The checks that fail and should be allowed to be circumvented are relevant to the solution we come up with here, so could you please post a comprehensive list?
Ben: you can't "pin" a cert you weren't allowed to accept in the first place.
(In reply to Ben Bucksch (:BenB) from comment #1)
> Instead, I'd propose something else: Certificate pinning. Self-signed sites
> usually have long-lived certs, and if this is your internal router or
> webmail server, you probably use it rather often over a long period of time.
> We should record that. For example, we can allow the user to whitelist a
> server and cert (e.g. "[x] This is an internal server of mine"), and then
> insist that the cert stays always the same for that host.

We actually more or less already do that. When you add a permanent override, we "pin" to that certificate's fingerprint and expect it in the future. (Although, I don't think there's any messaging along the lines of "Hey, you already added an override for this hostname, but now the certificate wasn't what we were expecting." That's an issue for another bug, though.)
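The override "pinning" described here can be sketched as a host-to-fingerprint map. This is an illustrative model only; the names and storage are hypothetical, and Gecko's actual nsICertOverrideService differs in detail.

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of permanent-override behavior: when the user adds an override, the
// certificate's fingerprint is remembered for that host:port, and future
// connections are checked against it.
struct OverrideStore {
  std::map<std::string, std::string> fingerprintByHost;

  void AddOverride(const std::string& hostPort, const std::string& fingerprint) {
    fingerprintByHost[hostPort] = fingerprint;
  }

  // Returns true only when an override exists for this host and the
  // presented cert's fingerprint matches the one recorded earlier.
  bool MatchesOverride(const std::string& hostPort,
                       const std::string& fingerprint) const {
    auto it = fingerprintByHost.find(hostPort);
    return it != fingerprintByHost.end() && it->second == fingerprint;
  }
};
```

A mismatch (same host, different fingerprint) simply fails to match, which is exactly the "we were expecting a different cert" case that currently has no dedicated messaging.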

(In reply to Ben Bucksch (:BenB) from comment #3)
> > more strictly verify certificates.
> > failed to verify successfully
> 
> Can you elaborate? I assumed you spoke about the traditional checks (CA who
> signed it, hostname match, expiry etc.). Your blocking bug suggests other
> checks like cyphers used. The checks that fail and should be allowed to be
> circumvented are relevant to the solution we come up with here, so could you
> please post a comprehensive list?

For example, if a CA issued a certificate with a weak key, we probably wouldn't want to allow overrides for it (the site should generate a new key and get a new certificate). If your device has a certificate with a weak key, however, there's sometimes not much that can be done, so we'll allow the error to be overridden, even though the weak key won't protect that communication.
(In reply to David Keeler (:keeler) [use needinfo?] from comment #5)
> (In reply to Ben Bucksch (:BenB) from comment #3)
> > > more strictly verify certificates.
> > > failed to verify successfully
> > 
> > Can you elaborate? I assumed you spoke about the traditional checks (CA who
> > signed it, hostname match, expiry etc.). Your blocking bug suggests other
> > checks like cyphers used. The checks that fail and should be allowed to be
> > circumvented are relevant to the solution we come up with here, so could you
> > please post a comprehensive list?
> 
> For example, if a CA issued a certificate with a weak key, we probably
> wouldn't want to allow overrides for it (the site should generate a new key
> and get a new certificate). If your device has a certificate with a weak
> key, however, since in some cases there's not much that can be done, we'll
> allow the error to be overridden, even though the weak key won't protect
> that communication.

Whoops - hit save too early. The comprehensive list would basically be "more or less everything but revocation and active distrust". We can also tune the list as necessary.
Comment on attachment 8517709 [details] [diff] [review]
patch

David,

Sorry. I thought I explained this in another bug. I'll try again. The server sends us this:

   [EE, Int(v1)]

where EE <- Int(v1) <- Mozilla-Trusted-Root. As I understand it, your goal is to make it so that we don't allow cert error overrides for this cert, because it was issued by a v1 intermediate, right? But that is really only a security issue if EE is being used by an attacker. In that case, if the attacker wants to get SEC_ERROR_UNKNOWN_ISSUER instead of MOZILLA_PKIX_ERROR_INVALID_V1_CA_CERTIFICATE (or whatever the error code is), he could just send us this instead:

  [EE]

That is, he could just NOT send Int(v1), and get SEC_ERROR_UNKNOWN_ISSUER.

And, even if you made it so that Firefox had complete, perfect knowledge of every sub-CA certificate in Mozilla's program, he could just send this:

  [EE, Untrusted-Root-CA]

Where EE <- Untrusted-Root-CA. This will also result in SEC_ERROR_UNKNOWN_ISSUER.

So, basically, an attacker can always force SEC_ERROR_UNKNOWN_ISSUER if he wants the user to see that error. Consequently, I don't see any security advantage to this extra complexity.
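The argument above can be made concrete with a toy model. The chain labels and error names mirror the comment, but the verification logic is deliberately simplified and hypothetical:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model: the reported error depends on the chain the server chooses to
// present, so an attacker can always avoid the strict, non-overridable error
// by omitting the problematic intermediate.
enum class Error { InvalidV1CaCert, UnknownIssuer };

Error VerifyChain(const std::vector<std::string>& presentedChain) {
  // If the problematic v1 intermediate is presented, the strict error fires.
  for (const auto& cert : presentedChain) {
    if (cert == "Int(v1)") {
      return Error::InvalidV1CaCert;
    }
  }
  // Otherwise the issuer simply can't be found: SEC_ERROR_UNKNOWN_ISSUER.
  return Error::UnknownIssuer;
}
```

Sending [EE] alone, or [EE, Untrusted-Root-CA], both land on the overridable unknown-issuer path, which is the point being made here.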

Now, I understand that you are working on some "Wall of Shame" to point out mistakes that CAs in Mozilla's program are making, and this helps with that. While in general I think that is a good idea (I proposed something similar over a year ago), I don't think the core of the most sensitive part of Gecko is the right place to be calculating such scores. The less code and the less complexity, the safer Firefox will be, and the more effort Mozilla will have available to fight back against real attacks.

Anyway, I recommend not doing this, but I don't strongly object to it either, particularly because it doesn't touch mozilla::pkix. I won't have time to help beyond this comment, though, except to review any part that touches mozilla::pkix.
Attachment #8517709 - Flags: feedback?(brian) → feedback-
Comment on attachment 8517709 [details] [diff] [review]
patch

Review of attachment 8517709 [details] [diff] [review]:
-----------------------------------------------------------------

The approach here seems sound to me, and it looks like the decision is getting inserted in the right place.  However, it seems like being "heuristic" is unnecessary here -- we can see the real cert chain, so we should just use the root there.

::: security/certverifier/CertVerifier.cpp
@@ +447,5 @@
>                              &builtChainTemp, evOidPolicy);
>    if (rv != SECSuccess) {
> +    bool realPKI;
> +    PRErrorCode error = PR_GetError();
> +    if (IsCertProbablyPartOfRealPKI(peerCert, time, pinarg, hostname, realPKI)

It seems like at this point, you have the chain that the verifier built (in builtChainTemp).  Couldn't you just add something like:

> IsCertBuiltInRoot(CERT_LIST_TAIL(certList)->cert, realPKI)

... instead of re-building the chain?

@@ +451,5 @@
> +    if (IsCertProbablyPartOfRealPKI(peerCert, time, pinarg, hostname, realPKI)
> +          == Success && !realPKI &&
> +        error != MOZILLA_PKIX_ERROR_KEY_PINNING_FAILURE &&
> +        error != SEC_ERROR_UNTRUSTED_CERT &&
> +        error != SEC_ERROR_UNTRUSTED_ISSUER) {

It seems like it might be better to abstract this a little, say by making a method of the form:

> bool IsValidationResultFatal(Result rv)

(In general, I would prefer if our security policies were more consolidated, rather than being scattered through the code.)
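A sketch of what such a consolidated policy function might look like. The error set shown is illustrative, drawn from the errors mentioned in this bug, not the shipped policy:

```cpp
#include <cassert>

// Hypothetical subset of validation results, for illustration only.
enum class Result {
  KeyPinningFailure,
  UntrustedCert,
  UntrustedIssuer,
  UnknownIssuer,
  InadequateKeySize,
};

// One place to answer "is this failure fatal (non-overridable)?" rather
// than scattering the checks through the verification code.
bool IsValidationResultFatal(Result rv) {
  switch (rv) {
    // Active distrust and pinning failures must never be overridable.
    case Result::KeyPinningFailure:
    case Result::UntrustedCert:
    case Result::UntrustedIssuer:
      return true;
    default:
      // Per comment 7, more or less everything else is overridable.
      return false;
  }
}
```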

::: security/certverifier/CertVerifier.h
@@ +60,5 @@
>    enum ocsp_strict_config { ocsp_relaxed = 0, ocsp_strict };
>    enum ocsp_get_config { ocsp_get_disabled = 0, ocsp_get_enabled = 1 };
>  
> +  enum BuiltinCertMode {
> +    BuiltinObjectTokenOnly = 0,

I take it this means that only objects stored in softoken are regarded as built-in (as in IsCertBuiltInRoot)?  It might be clearer to state these more in terms of semantics rather than mechanism.  E.g.:

> enum RootSegregationMode {
>   SeparateNonBuiltin = 0,
>   AllRootsEquivalent = 1
> }

Since the current behavior is "AllRootsEquivalent", should that be value 0?  It would be helpful to have a comment that describes what this option is for and what the modes mean.
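For illustration, the requested comments might look like this. This is a sketch of the reviewer's proposed naming (with the current behavior as value 0, as suggested), not actual Gecko code:

```cpp
#include <cassert>

// Hypothetical documented version of the suggested enum. Controls whether
// cert error overrides distinguish chains involving a built-in (softoken)
// root from chains involving other roots.
enum RootSegregationMode {
  // Current behavior: all failing chains are treated the same for override
  // purposes, regardless of which root they lead toward.
  AllRootsEquivalent = 0,
  // Proposed behavior: failures that appear to involve a built-in root are
  // treated more strictly (no override offered).
  SeparateNonBuiltin = 1,
};
```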
Attachment #8517709 - Flags: feedback?(rlb) → feedback-
No longer blocks: 1084606
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → WONTFIX