Bug 1196364 (Open): Opened 9 years ago, Updated 3 months ago

Proper handling for wildcard certificates for all TLDs

Categories

(NSS :: Libraries, enhancement, P4)

Tracking

(Not tracked)

People

(Reporter: kmckinley, Unassigned)

References

Details

Attachments

(3 files, 5 obsolete files)

During certificate validation, NSS only checks that a wildcard name contains a suffix of at least two labels. This is incorrect because:

1) Some public suffixes are themselves two or more labels, so the required suffix should be longer (e.g., we should not accept "*.co.uk", but "*.example.co.uk" is fine)
2) Some corporate TLDs consist of a single label (e.g., .google), and wildcards such as "*.google" will need to be whitelisted to be allowed.

Use the PSL (https://publicsuffix.org/list/) to determine what matches as a suffix.

The definitions must be allowed to change over time. For example, the last PSL update was 2015/08/07.
Assignee: nobody → kmckinley
Blocks: 806281
As maintainer of the PSL, I'd say that NSS should not be validating hostnames in this way. The CAB Forum Baseline Requirements are the document that says which names are permitted and which are not, and they (quite intentionally) say that the PSL can be used to kick a request into the CA's "high risk" bucket, requiring extra vetting, but it should _not_ be used to automatically deny issuance.

NSS should not be in the business of rejecting certs which are issued by an anchor it trusts, based solely on the number of components in the domain name or their length, or on the PSL.

Gerv
Blocks: 1169149
NSS should not blindly trust anything that has been signed by a trust anchor. Even if an entity proved that it should have access to *.com or *.co.uk, it would allow the owner of that certificate's private key to impersonate any site under that TLD[1], and should only be whitelisted after careful scrutiny.

Chrome already uses the PSL to determine which suffixes are registry controlled. There is a validation that a wildcard certificate has at least one label between the wildcard and a registry controlled suffix[2]. Determining what is a public suffix is based on the Mozilla nsEffectiveTLDService code performing the same function[3][4][5].

[1] Ignoring Public-Key Pinning
[2] https://code.google.com/p/chromium/codesearch#chromium/src/net/cert/x509_certificate.cc&sq=package:chromium&type=cs&l=569
[3] https://code.google.com/p/chromium/codesearch#chromium/src/net/base/registry_controlled_domains/registry_controlled_domain.cc&cl=GROK&gsn=GetRegistryLength&rcl=1442561093&l=202
[4] https://mxr.mozilla.org/mozilla-central/source/netwerk/dns/nsIEffectiveTLDService.idl#13
[5] https://mxr.mozilla.org/mozilla-central/source/netwerk/dns/nsEffectiveTLDService.cpp#134
(In reply to Kate McKinley [:kmckinley] from comment #2)
> Chrome already uses the PSL to determine which suffixes are registry
> controlled. There is a validation that a wildcard certificate has at least
> one label between the wildcard and a registry controlled suffix[2].
> Determining what is a public suffix is based on the Mozilla
> nsEffectiveTLDService code performing the same function[3][4][5].

What Chrome does shouldn't be duplicated. We shouldn't have to keep the PSL up to date to determine which certificates to trust or not trust.

See also bug 806281 comment 9 and bug 1169149 comment 15 where I go into more detail about why Firefox shouldn't copy Chrome.
(In reply to Gervase Markham [:gerv] from comment #1)
> and they (quite
> intentionally) say that the PSL can be used to kick a request into the CA's
> "high risk" bucket, requiring extra vetting, but they should _not_ be used
> to automatically deny issuance.

No, that's not what the BRs say :)

From v1.3.0, Section 3.2.2.6
"If	a	wildcard	would	fall	within	the	label	immediately	to	the	left	of	a	registry‐controlled†	or	public	sufϐix,	CAs	
MUST	refuse	issuance	unless	the	applicant	proves	its	rightful	control	of	the	entire	Domain	Namespace.	(e.g.	
CAs	MUST	NOT	issue	“*.co.uk”	or	“*.local”,	but	MAY	issue	“*.example.com”	to	Example	Co.)."

(Yeah, I'm too lazy to fix all the whitespace from the PDF)

It says the CA MUST automatically deny issuance UNLESS the applicant can demonstrate complete control over the domain namespace.

This means that for the case of ccTLDs, according to IANA policy, the only applicant which could possibly demonstrate control of that domain namespace is the domain registry. It's questionable whether Mozilla (or any certificate validator) wants to enable such issuance, where you could have Nominet able to MITM everything under its TLD (setting aside the practical reality that it could spoof domain validation records).

On the matter of the PSL's suitability, this seems to be the endless cycle of debate over domain boundary distinctions and what the PSL's "purpose" is. I think the only way to 'solve' that would be some clear documentation in the PSL about what it is - or isn't - suitable for.

It sounds like, despite Brian's objections, there is some agreement between him and me that with respect to IANA-delegated ccTLDs, and for 'legacy' TLDs (.com, .net, .museum, .aero, etc.), there's benefit to be had in restricting wildcard issuance, and with little risk. It's simply a matter of distinguishing those domains from the general set.
(In reply to Ryan Sleevi from comment #4)
> It sounds like, despite Brian's objections, there is some agreement
> between him and me that with respect to IANA-delegated ccTLDs, and for
> 'legacy' TLDs (.com, .net, .museum, .aero, etc.), there's benefit to be
> had in restricting wildcard issuance, and with little risk. It's simply a
> matter of distinguishing those domains from the general set.

I think that's a fair summary. My main point is that the list of restricted TLDs should be short and static. If we need to add a new ccTLD once a year or so, OK. I just want to avoid having to add .nike, .walmart, etc., potentially every month.
(In reply to Kate McKinley [:kmckinley] from comment #2)
> NSS should not blindly trust anything that has been signed by a trust
> anchor. 

Er, that's what it means to be a trust anchor.

> Even if an entity proved that it should have access to *.com or
> *.co.uk, it would allow the owner of that certificate's private key to
> impersonate any site under that TLD[1], and should only be whitelisted
> after careful scrutiny.

I don't think it would be possible for anyone to legitimately prove that for *.com or *.co.uk. However, it would for *.amazon, for example. Are you going to classify all 1100 new gTLDs by which ones we should allow wildcards for and which ones we shouldn't? The PSL does _not_ give that information.

> Chrome already uses the PSL to determine which suffixes are registry
> controlled. There is a validation that a wildcard certificate has at least
> one label between the wildcard and a registry controlled suffix[2].

Then they are going to give some new private gTLD owners an unnecessary bad time. This isn't to be copied.

(In reply to Ryan Sleevi from comment #4)
> (In reply to Gervase Markham [:gerv] from comment #1)
> > and they (quite
> > intentionally) say that the PSL can be used to kick a request into the CA's
> > "high risk" bucket, requiring extra vetting, but they should _not_ be used
> > to automatically deny issuance.
> 
> No, that's not what the BRs say :)
> 
> From v1.3.0, Section 3.2.2.6
> "If	a	wildcard	would	fall	within	the	label	immediately	to	the	left	of	a
> registry‐controlled†	or	public	sufϐix,	CAs	
> MUST	refuse	issuance	unless	the	applicant	proves	its	rightful	control	of	the
> entire	Domain	Namespace.	(e.g.	
> CAs	MUST	NOT	issue	“*.co.uk”	or	“*.local”,	but	MAY	issue	“*.example.com”	to
> Example	Co.)."
> 
> (Yeah, I'm too lazy to fix all the whitespace from the PDF)
> 
> It says the CA MUST automatically deny issuance UNLESS the applicant can
> demonstrate complete control over the domain namespace.

But that doesn't say _anything_, then - CAs must automatically deny issuance of _any_ wildcard cert unless the applicant can demonstrate complete control over the domain namespace. Right? Just because I control www.foo.com doesn't mean I get a cert for *.foo.com. Does it?

My point is that (see the "unless") CAs are _not_ outright forbidden from issuing such certs, so there may be legitimately issued and perfectly reasonable certs out there in this category, and we should not be rejecting them.

> It sounds like, despite Brian's objections, there is some agreement
> between him and me that with respect to IANA-delegated ccTLDs, and for
> 'legacy' TLDs (.com, .net, .museum, .aero, etc.), there's benefit to be
> had in restricting wildcard issuance, and with little risk. It's simply a
> matter of distinguishing those domains from the general set.

I'm uneasy about privileging particular TLDs. If we are going to have many TLDs, which it seems we are, then giving certain ones privileges ("Use .com - it's more secure, and the browsers make it so!") seems unreasonable at a gut level.

How would you make, and update, such a list, given that the PSL is not it?

Gerv
(In reply to Gervase Markham [:gerv] from comment #6)
> I don't think it would be possible for anyone to legitimately prove that for
> *.com or *.co.uk. However, it would for *.amazon, for example. Are you going
> to classify all 1100 new gTLDs by which ones we should allow wildcards for
> and which ones we shouldn't? The PSL does _not_ give that information.

Sure they could - Verisign or Nominet totally could.

That said, the class of "able to prove ownership" is essentially the ".brand TLD" question, and that's something Mozilla needs to figure out its support of, not only with respect to wildcard certificates, but other usages. I mean, by including .google on the PSL, you prevent .google from setting cookies at the .google level, just like they're prevented from being set at the .com level. How should Mozilla determine what parts of the ".brand" TLD it supports (such as wildcard certificates) and what it rejects (such as gTLD cookies)?

I just present it like this so that it's clear there's a calculus involved here - risk versus return. On the Chromium side, we opted to not support wildcarded .brand TLDs by default, by virtue of using the PSL to exclude such purposes. To us, the risk of a *.com (or, more practically, a *.ly) was riskier than the harm done to the .brand TLD. It's also no secret that on the Chrome team, we're not supportive of a lot of the stuff ICANN "wishes" worked for brand TLDs - including single-level host records ("http://google") and gTLD cookies.

> Then they are going to give some new private gTLD owners an unnecessary bad
> time. This isn't to be copied.

It's a principled and consistent response, but yes, there will be some use cases not met, just like Mozilla is not meeting some use cases for .brand TLDs by including them in the PSL to begin with.

> But that doesn't say _anything_, then - CAs must automatically deny issuance
> of _any_ wildcard cert unless the applicant can demonstrate complete control
> over the domain namespace. Right? Just because I control www.foo.com doesn't
> mean I get a cert for *.foo.com. Does it?

Actually, today it does. That's been discussed in the Policy WG (in Zurich and the meetings) as a 'fix' of the domain language.

But that's a side-note, because the language I pasted is what you indicated was quite intentionally provided. That was the language that was introduced w/ respect to gTLDs, not for www vs *.

> My point is that (see the "unless") CAs are _not_ outright forbidden from
> issuing such certs, so there may be legitimately issued and perfectly
> reasonable certs out there in this category, and we should not be rejecting
> them.

Or you could determine that the overall ecosystem harm from rejecting *.brand TLDs is less than the overall security risk of accepting a *.com/*.co.uk certificate. It's not a black/white decision, and it's not the only one with respect to .brand TLD handling that you have to consider.

> I'm uneasy about privileging particular TLDs. If we are going to have many
> TLDs, which it seems we are, then giving certain ones privileges ("Use .com
> - it's more secure, and the browsers make it so!") seems unreasonable
> at a gut level.

Agreed - that's why we decided to consistently reject, as we saw regular attempts to get browsers to accept such wildcarded ccTLDs. You could make a list easily from the root zone database feed ( https://www.iana.org/domains/root/db for the human readable version identifying ccTLDs ). You could restrict it to all domains in the https://data.iana.org/TLD/tlds-alpha-by-domain.txt set that aren't in the https://newgtlds.icann.org/newgtlds.csv set. Or you could take the PSL itself and make a decision that to make a cake, you have to break a few eggs.
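
For instance, a rough sketch of that set-difference approach (this assumes both files have been downloaded locally, that the IANA list is one TLD per line behind a '#' comment header, and that the gTLD sits in the first CSV column - assumptions about the file layouts, not verified formats):

#include <cctype>
#include <fstream>
#include <iostream>
#include <set>
#include <string>

// Lowercase and strip CR/trailing spaces so CRLF files compare cleanly.
static std::string Normalize(std::string s) {
  while (!s.empty() && (s.back() == '\r' || s.back() == ' ')) {
    s.pop_back();
  }
  for (char& c : s) {
    c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
  }
  return s;
}

int main() {
  // Every TLD in the IANA list that is NOT a new gTLD is treated as
  // "legacy or ccTLD", i.e. a candidate for the wildcard-restricted set.
  std::set<std::string> newGtlds;
  std::ifstream csv("newgtlds.csv");
  for (std::string line; std::getline(csv, line);) {
    newGtlds.insert(Normalize(line.substr(0, line.find(','))));
  }

  std::ifstream txt("tlds-alpha-by-domain.txt");
  for (std::string line; std::getline(txt, line);) {
    if (line.empty() || line[0] == '#') {
      continue;  // skip the version-stamp comment line
    }
    std::string tld = Normalize(line);
    if (!newGtlds.count(tld)) {
      std::cout << tld << '\n';
    }
  }
  return 0;
}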

Anyways, I think it's reasonable for Mozilla(ns) to disagree with the stance Chrome has taken, but I do want to make sure that the principles - and the consistency with which they're being applied - were understood.
One note: Gervase, I don't think that "is a trust anchor" should necessarily mean "is an anchor for total trust, no questions asked". Partial trust is a thing; the BRs specify certain certificate lifetimes, signing algorithms, validation procedures, etc., in part as ways to reduce the trust from 'total' to 'less than total'. And then there are block lists, name constraints, and things of that kind.

Thus, I don't see that deciding not to trust trust anchors to issue for *.eTLD is necessarily inconsistent with the meaning of the term "trust anchor".
(In reply to Chris Palmer from comment #8)
> One note: Gervase, I don't think that "is a trust anchor" should necessarily
> mean "is an anchor for total trust, no questions asked".

Indeed not; that's not what I intended to imply, I was just abbreviating. A fuller statement of what I meant was that:

"If we have decided to trust a CA to issue SSL certificates, and that certificate is within the technical parameters (algorithm, key length, lifetime etc.) that we permit, then we should not second-guess the part of the system that the CA is explicitly responsible for - that is, the bit which decides whether the holder of the certificate is entitled to hold it."

> Thus, I don't see that deciding not to trust trust anchors to issue for
> *.eTLD is necessarily inconsistent with the meaning of the term "trust
> anchor".

It means that someone can come along, pass a CA's vetting process, get a technically competent and entirely legal and proper certificate for domain space they control... and find it doesn't work in our product because "we decided". That doesn't sit right with me.

Gerv
(In reply to Ryan Sleevi from comment #7)
> of, not only with respect to wildcard certificates, but other usages. I
> mean, by including .google on the PSL, you prevent .google from setting
> cookies at the .google level, just like they're prevented from being set at
> the .com level.

That is true; however, Google could ask us to replace:

google

with

!google

which, hopefully (or if we fix the code) should take precedence over the default * rule and allow cookies to be set for all of dot-google. Or is there some reason this wouldn't work?
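
In toy form, the behavior I'd hope for looks like this (single-label rules only, and it assumes the exception rule is honored against the implicit "*" rule - which is exactly the open question):

#include <cassert>
#include <set>
#include <string>

static const std::set<std::string> kRules = {"com", "!google"};

static bool IsPublicSuffix(const std::string& tld) {
  if (kRules.count("!" + tld)) {
    return false;  // an exception rule takes precedence over everything
  }
  if (kRules.count(tld)) {
    return true;  // explicit rule: listed TLDs are public suffixes
  }
  return true;  // implicit "*" rule: unknown TLDs are public suffixes too
}

int main() {
  assert(IsPublicSuffix("com"));
  assert(IsPublicSuffix("example"));  // covered by the implicit "*" rule
  assert(!IsPublicSuffix("google"));  // "!google" overrides, cookies allowed
  return 0;
}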

> I just present it like this so that it's clear there's a calculus involved
> here - risk versus return. On the Chromium side, we opted to not support
> wildcarded .brand TLDs by default, by virtue of using the PSL to exclude
> such purposes.

"By default"? You have a whitelist?

> To us, the risk of a *.com (or, more practically, a *.ly) was
> riskier than the harm done to the .brand TLD. It's also no secret that on
> the Chrome team, we're not supportive of a lot of the stuff ICANN "wishes"
> worked for brand TLDs - including single-level host records
> ("http://google") and gTLD cookies.

We are also not supportive of single-level host records, and AIUI neither is the ICANN SSAC.

> > But that doesn't say _anything_, then - CAs must automatically deny issuance
> > of _any_ wildcard cert unless the applicant can demonstrate complete control
> > over the domain namespace. Right? Just because I control www.foo.com doesn't
> > mean I get a cert for *.foo.com. Does it?
> 
> Actually, today it does. That's been discussed in the Policy WG (in Zurich
> and the meetings) as a 'fix' of the domain language.

Having just re-read 3.2.2.4, I can't see how anyone can construe the BRs to say that demonstrating control of www.foo.com allows issuance of *.foo.com. Can you explain how some people do?

Gerv
This set of patches provides a proof of concept for more strict wildcard checking, including the following:

1) Create a preference (security.pki.strict_wildcard_checks), initially set to false
2) Create a test to show that strict wildcard checks are on or off, depending on the preference
3) Expand nsEffectiveTLDService to provide information about the ICANN TLD part of a given hostname
4) Report telemetry on whether we would pass or fail strict checks
5) Based on the pref, either deny the load or allow it to proceed

Based on the above conversations, the proposed patch chooses the following definition of strict wildcard checks.

If security.pki.strict_wildcard_checks is set to true:
1) There must be two labels to the right of the wildcard; for example, *.example.com is allowed, but *.example is not. This is unchanged from how NSS currently handles wildcard validation.
2) The wildcard part "*." is removed to leave the wildcard domain, and the ICANN TLD is retrieved from nsIEffectiveTLDService.
3) The ICANN TLD is compared against the wildcard domain, and if they are equal, the verification is rejected.
4) The part of the wildcard domain to the left of the ICANN TLD is checked to verify that at least one label exists. If no labels exist to the left of the ICANN TLD, the verification is rejected.
5) Otherwise, the load is accepted.

Example:
A load for "example.co.uk" receives a certificate for "*.co.uk". The wildcard domain is "co.uk", and the ICANN domain is "co.uk". Since the two are equal, the load is rejected.
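
As a minimal, self-contained sketch of rules 1-5 (the hardcoded suffix table stands in for nsIEffectiveTLDService, and all names here are illustrative, not the actual patch):

#include <cassert>
#include <set>
#include <string>

// Toy stand-in for the ICANN section of the Public Suffix List.
static const std::set<std::string> kIcannSuffixes = {"com", "uk", "co.uk"};

// Longest known ICANN suffix of |host|, or "" if none matches.
static std::string IcannSuffix(const std::string& host) {
  size_t pos = 0;
  while (pos != std::string::npos) {
    if (kIcannSuffixes.count(host.substr(pos))) {
      return host.substr(pos);
    }
    pos = host.find('.', pos);
    if (pos != std::string::npos) {
      pos++;  // step past the dot to the next label
    }
  }
  return "";
}

static bool StrictWildcardAllowed(const std::string& presented) {
  // Rule 1: the name must start with "*." and keep two labels after it.
  if (presented.rfind("*.", 0) != 0) {
    return false;
  }
  std::string domain = presented.substr(2);  // Rule 2: strip "*."
  if (domain.find('.') == std::string::npos) {
    return false;
  }
  std::string suffix = IcannSuffix(domain);
  if (suffix == domain) {
    return false;  // Rule 3: the wildcard covers an entire ICANN suffix
  }
  // Rule 4: at least one label must remain left of the ICANN suffix.
  // (An empty suffix means the host isn't under a known ICANN TLD,
  // e.g. *.foo.local, and these checks don't apply.)
  return suffix.empty() || domain.size() > suffix.size() + 1;
}

int main() {
  assert(StrictWildcardAllowed("*.example.co.uk"));
  assert(!StrictWildcardAllowed("*.co.uk"));    // equals the ICANN suffix
  assert(!StrictWildcardAllowed("*.example"));  // only one label remains
  return 0;  // Rule 5: anything else is accepted
}
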
Flags: needinfo?(rlb)
As maintainer of the PSL, I continue to say that the PSL is not suitable for use in this manner, and doing so will lead to incorrect rejections of certificates which should be valid.

Gerv
Status: NEW → ASSIGNED
Kate: So, to summarize, a wildcard is accepted only if:

1. There are two labels to the right of the wildcard label
2. There is at least one label in addition to an ICANN TLD

That seems roughly correct to me.  Rule (1) seems largely redundant with rule (2) -- it seems like you only need it when you are dealing with something that isn't under an ICANN TLD.

Speaking of which: What happens with non-public names here, e.g., *.local?  I assume from the lack of an else branch on the "if (NS_SUCCEEDED(nsrv))" for the eTLD check that these checks aren't enforced for such wildcards?

It seems like it might be better to do these checks within CheckCertHostname(), i.e., to pre-filter the names in the cert before trying to match them to the target name, as opposed to doing a post-hoc check on the matched name.  That way, if the target name matches both a bad name and an OK name, then the thing can still work.  (Whereas with this, if it matches the bad name, it fails.)
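
Roughly this shape, as a toy sketch (the matcher and policy here are deliberately simplified stand-ins, not the mozilla::pkix API):

#include <cassert>
#include <string>
#include <vector>

// Toy policy: a wildcard needs at least two labels to its right.
static bool WildcardPolicyAllows(const std::string& name) {
  return name.rfind("*.", 0) == 0 && name.find('.', 2) != std::string::npos;
}

// Toy matcher: "*.example.com" matches exactly one extra leading label.
static bool Matches(const std::string& presented, const std::string& ref) {
  if (presented.rfind("*.", 0) == 0) {
    size_t dot = ref.find('.');
    return dot != std::string::npos &&
           ref.substr(dot + 1) == presented.substr(2);
  }
  return presented == ref;
}

static bool CheckCertHostnamePreFiltered(
    const std::vector<std::string>& presentedNames,
    const std::string& target) {
  for (const auto& name : presentedNames) {
    // Pre-filter: skip disallowed wildcards instead of failing the whole
    // check when one of them happens to match the target.
    if (name.rfind("*.", 0) == 0 && !WildcardPolicyAllows(name)) {
      continue;
    }
    if (Matches(name, target)) {
      return true;
    }
  }
  return false;
}

int main() {
  // "*.uk" would match "example.uk" but is filtered out up front; the
  // literal SAN entry still lets the connection validate.
  assert(CheckCertHostnamePreFiltered({"*.uk", "example.uk"}, "example.uk"));
  return 0;
}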

Good idea saving failure until the end.  We should run this for a little while in telemetry-only mode before flipping the switch.
Flags: needinfo?(rlb)
Gerv: This isn't really a PSL issue now, since the PSL is just being used as a proxy for ICANN.  It's a question of whether it's a good utility/security trade-off to allow "*.tld" wildcards.

I appreciate your concerns about "*.tld" names, but I think we agree that there are clearly some names for which this is not appropriate, including a big chunk of the new TLDs (.accountants, .ninja, ...).  I suggest we land these policies in telemetry-only mode and see what the usage looks like, which can help inform a policy discussion about whether enabling "*.tld" names for the small-ish fraction of TLDs that are single-owner is worth taking on additional risk for the rest of the TLDs.

Of course, it would be delightful if the PSL wanted to provide some signal about which ICANN TLDs were single-owner, e.g., by allowing the domain operator to put a "*" in front of their entry.
Attachment #8709828 - Attachment is obsolete: true
Attachment #8709829 - Attachment is obsolete: true
Attachment #8715127 - Flags: review?(rlb)
Attachment #8715127 - Flags: review?(dkeeler)
Attachment #8709833 - Attachment is obsolete: true
Attachment #8715129 - Flags: review?(rlb)
Attachment #8715129 - Flags: review?(jduell.mcbugs)
Comment on attachment 8715127 [details] [diff] [review]
Tests for strict wildcard checks

Review of attachment 8715127 [details] [diff] [review]:
-----------------------------------------------------------------

The mochitest framework is a bit more than we need, here. I think xpcshell unit tests would be more appropriate. (This means creating some .certspec files with various subjectAlternativeName extensions and attempting to verify them.)

::: security/certverifier/test/wildcard/test_wildcard.html
@@ +1,1 @@
> +<!DOCTYPE HTML>

Rather than mochitests, we should use xpcshell tests along these lines:

https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/tests/unit/bad_certs
https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/tests/unit/test_name_constraints.js
Attachment #8715127 - Flags: review?(dkeeler) → review-
Something came up and I couldn't finish the other review today. I'll do my best to have it done tomorrow.
Comment on attachment 8715130 [details] [diff] [review]
Implement strict checking for certificates presented with wildcards behind a pref

Review of attachment 8715130 [details] [diff] [review]:
-----------------------------------------------------------------

This is the right idea but I think there are some simplifications we can make that would be beneficial.
(On a process note, it looks like this patch only has 3 lines of context - 8 is the preferred amount, I believe.)

::: netwerk/base/security-prefs.js
@@ +47,5 @@
>  pref("security.ssl.errorReporting.url", "https://data.mozilla.com/submit/sslreports");
>  pref("security.ssl.errorReporting.automatic", false);
> +
> +pref("security.pki.strict_wildcard_checks", false);
> +pref("security.pki.allow_local_wildcard_domains", true);

Since the time we shipped the new name matching implementation, I haven't heard of this being an issue. Let's not worry about having the ability to allow local wildcard domains until we're aware of a need for it.

::: security/apps/AppTrustDomain.cpp
@@ +369,5 @@
>  }
>  
> +Result
> +AppTrustDomain::CheckWildcardValidity(
> +    const Input& hostname, bool& valid)

nit: I think these two lines can all fit on one line in <80 chars

::: security/certverifier/CertVerifier.cpp
@@ +49,2 @@
>  {
> +  mozilla::Preferences::AddBoolVarCache(&mStrictWildcardChecks,

The documentation for AddBoolVarCache says mStrictWildcardChecks would have to be a static variable here. The way we've dealt with preferences changing in the past is by observing them changing in nsNSSComponent and creating a new SharedCertVerifier: https://dxr.mozilla.org/mozilla-central/source/security/manager/ssl/nsNSSComponent.cpp#882

@@ +706,5 @@
>      return SECFailure;
>    }
> +  Input matchingName;
> +
> +  NSSCertDBTrustDomain trustDomain(trustSSL,

When we were planning this out, it hadn't occurred to me that this is how this would have to be implemented. Maybe it would be better to define a new interface ("NamePolicy"?) much like TrustDomain that just has the one callback 'CheckWildcardValidity'.

::: security/certverifier/CertVerifier.h
@@ +11,5 @@
>  #include "pkix/pkixtypes.h"
>  #include "OCSPCache.h"
>  #include "ScopedNSSTypes.h"
> +#include "nsIEffectiveTLDService.h"
> +#include "mozilla/Preferences.h"

These should be #included in the cpp files they're needed in.

::: security/certverifier/NSSCertDBTrustDomain.cpp
@@ +933,5 @@
> +  uint32_t telemetryValue = 0;
> +
> +  // For our purposes, a hostname is a valid wildcard if and only if
> +  // the first character is a '*' followed by a '.'. This is stricter
> +  // than RFC6125 allows, but is the same behavior as Chromium.

Doesn't the name matching code already check this? I don't think we need to enforce this again.
Long story short, I think we should make it so this code can assume all of the checks done in MatchPresentedDNSIDWithReferenceDNSID up until this point https://dxr.mozilla.org/mozilla-central/rev/584870f1cbc5d060a57e147ce249f736956e2b62/security/pkix/lib/pkixnames.cpp#1196 have been done.

@@ +936,5 @@
> +  // the first character is a '*' followed by a '.'. This is stricter
> +  // than RFC6125 allows, but is the same behavior as Chromium.
> +  if (presented.Peek('*')) {
> +    if (presented.Skip(1) != Success
> +        || (!presented.Peek('.') || presented.Skip(1) != Success)

nit: || goes at the end of the previous line

@@ +952,5 @@
> +    if (!mIDNService) {
> +      mIDNService = do_GetService("@mozilla.org/network/idn-service;1");
> +    }
> +
> +    if (mETLDService && mIDNService) {

nit: flip this - check for errors first and return Result::FATAL_ERROR_LIBRARY_FAILURE if either of these aren't available.

@@ +956,5 @@
> +    if (mETLDService && mIDNService) {
> +      uint8_t b;
> +      nsresult nsrv;
> +
> +      uint32_t dotCount    = 0;

nit: don't bother aligning the = 0; bits

@@ +958,5 @@
> +      nsresult nsrv;
> +
> +      uint32_t dotCount    = 0;
> +      uint32_t labelLength = 0;
> +      // read the reset of the domain into a string we can validate

nit: s/reset/rest/

@@ +959,5 @@
> +
> +      uint32_t dotCount    = 0;
> +      uint32_t labelLength = 0;
> +      // read the reset of the domain into a string we can validate
> +      nsAutoCString wild = nsAutoCString();

I imagine just 'nsAutoCString wild;' will work, right?

@@ +961,5 @@
> +      uint32_t labelLength = 0;
> +      // read the reset of the domain into a string we can validate
> +      nsAutoCString wild = nsAutoCString();
> +      do {
> +        if(presented.Read(b) != Success) {

nit: space after if

@@ +966,5 @@
> +          Telemetry::Accumulate(Telemetry::CERT_VALIDATION_WILDCARD_MATCH_RESULT, 3);
> +          valid = false;
> +          return Result::ERROR_BAD_DER;
> +        }
> +        if (b == '.') {

We need to check !presented.AtEnd() as well, right? (or does the caller already take care of this?)

@@ +978,5 @@
> +
> +      if (!IsASCII(wild)) {
> +        // Make sure the domain name is in ASCII representation
> +        nsresult rv = mIDNService->ConvertUTF8toACE(wild, wild);
> +        if (NS_FAILED(rv))

This is why we always use braces around conditional bodies.

@@ +992,5 @@
> +      if (labelLength > 0) {
> +        labelCount = dotCount + 1;
> +      }
> +
> +      if (labelCount < 1) {

Again, I really think the caller should be responsible for verifying this.

@@ +1005,5 @@
> +      nsAutoCString icannSuffix = nsAutoCString();
> +
> +      // fetch the icann suffix part
> +      nsrv = mETLDService->GetICANNPublicSuffixFromHost(wild, icannSuffix);
> +      if (NS_SUCCEEDED(nsrv)) {

Generally the pattern is to check for and handle errors first. Since error handling usually involves returning early (and since you then don't need an else after the return), this keeps indentation levels down and makes the code flow more linear, which can be easier to understand.

@@ +1056,5 @@
> +      }
> +    }
> +  }
> +
> +  Telemetry::Accumulate(Telemetry::CERT_VALIDATION_WILDCARD_MATCH_RESULT, telemetryValue);

nit: watch out for lines that are >80 chars

::: security/certverifier/OCSPVerificationTrustDomain.cpp
@@ +114,5 @@
>    /*out*/ uint8_t* digestBuf, size_t digestBufLen)
>  {
>    return mCertDBTrustDomain.DigestBuf(item, digestAlg, digestBuf, digestBufLen);
>  }
> +

nit: unnecessary extra line

::: security/pkix/include/pkix/pkix.h
@@ +117,5 @@
>  //   backward compatibility (see SearchNames in pkixnames.cpp)
>  // - A wildcard in a DNS-ID may only appear as the entirety of the first label.
> +Result CheckCertHostname(Input cert,
> +                         Input hostname,
> +                         /*out*/ Input& matchingName,

Is matchingName actually used by any of the callers? I'm not sure it's necessary.

::: security/pkix/lib/pkixnames.cpp
@@ +377,5 @@
> +        // if we get a wildcard we would reject, we shortcut out with
> +        // an error, even if a valid name follows. However, if we get
> +        // a wildcard we would accept before one we would reject, we
> +        // would accept the cert.
> +        rv = trustDomain->CheckWildcardValidity(presentedID, valid);

I'm not sure this is the best place for this check. In particular, the presentedID hasn't been validated yet, so CheckWildcardValidity has to do some extra work that wouldn't be necessary if we restructured things. Also, this doesn't validate wildcards found in the subject CN, right? I was thinking this check would go at the bottom of MatchPresentedDNSIDWithReferenceDNSID.
Attachment #8715130 - Flags: review?(dkeeler) → review-
Comment on attachment 8715129 [details] [diff] [review]
Modifications to nsEffectiveTLDService to support getting ICANN TLD given a hostname

Review of attachment 8715129 [details] [diff] [review]:
-----------------------------------------------------------------

This looks simple enough. I'm assuming you're testing both ICANN and private examples in your tests.

::: netwerk/dns/nsEffectiveTLDService.h
@@ +112,5 @@
>    , public nsIMemoryReporter
>  {
>  public:
> +  NS_DECL_THREADSAFE_ISUPPORTS
> +  //NS_DECL_ISUPPORTS

Get rid of the //NS_DECL_ISUPPORTS
Attachment #8715129 - Flags: review?(jduell.mcbugs) → review+
Comment on attachment 8715129 [details] [diff] [review]
Modifications to nsEffectiveTLDService to support getting ICANN TLD given a hostname

Review of attachment 8715129 [details] [diff] [review]:
-----------------------------------------------------------------

Also note that this patch is likely to conflict with bug 1246013 (AFAICT the conflict would just be nearby lines of code changed, not semantic conflicts).
Comment on attachment 8715129 [details] [diff] [review]
Modifications to nsEffectiveTLDService to support getting ICANN TLD given a hostname

Review of attachment 8715129 [details] [diff] [review]:
-----------------------------------------------------------------

::: netwerk/dns/nsEffectiveTLDService.h
@@ +24,4 @@
>    uint32_t strtab_index : ETLD_ENTRY_N_INDEX_BITS;
>    uint32_t exception : 1;
>    uint32_t wild : 1;
> +  uint32_t icann : 1;

Drive by comment: please change ETLD_ENTRY_N_INDEX_BITS from 30 to 29, otherwise you'll introduce unnecessary padding into this array.
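
For illustration (a standalone sketch; the struct names are invented and exact sizes are ABI-dependent):

#include <cstdint>
#include <iostream>

struct EtldEntry33 {             // 30 + 1 + 1 + 1 = 33 bits: spills into a
  uint32_t strtab_index : 30;    // second 32-bit allocation unit
  uint32_t exception : 1;
  uint32_t wild : 1;
  uint32_t icann : 1;
};

struct EtldEntry32 {             // 29 + 1 + 1 + 1 = 32 bits: fits in one
  uint32_t strtab_index : 29;
  uint32_t exception : 1;
  uint32_t wild : 1;
  uint32_t icann : 1;
};

int main() {
  // Typically prints "8 vs 4".
  std::cout << sizeof(EtldEntry33) << " vs " << sizeof(EtldEntry32) << '\n';
  return 0;
}
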
kmckinley, I landed bug 1246013, which may cause conflicts with your patches here. Sorry! But my changes were pretty simple, so hopefully the conflicts will be easy to resolve.
Attachment #8715129 - Flags: review?(rlb) → review+
Comment on attachment 8715130 [details] [diff] [review]
Implement strict checking for certificates presented with wildcards behind a pref

Clearing review pending resolution of keeler's comments.
Attachment #8715130 - Flags: review?(rlb)
Attachment #8715127 - Flags: review?(rlb)
Whiteboard: [psm-assigned]
P1 because this is assigned, but P3 if that changes.
Priority: -- → P1
Assignee: kmckinley → nobody
Status: ASSIGNED → NEW
Priority: P1 → P3
Whiteboard: [psm-assigned] → [psm-backlog]

Is there any progress on this issue / feature / mechanism?
I would really appreciate it if somebody could keep an eye on Bug 1757758, which is affected by this.

No progress has been made. Historically, this has not been prioritized because we haven't seen significant demand for this in the public web PKI.
In any case, mozilla::pkix lives in NSS now, so this bug should go there.

Assignee: nobody → nobody
No longer blocks: 806281
Severity: normal → --
Type: defect → enhancement
Component: Security: PSM → Libraries
Priority: P3 → --
Product: Core → NSS
Whiteboard: [psm-backlog]
Version: unspecified → other
Severity: -- → N/A
Priority: -- → P4