Open Bug 806281 • Opened 12 years ago • Updated 1 year ago
Certificate validation does not properly prohibit *.<tld> wildcard certificates
Categories: NSS :: Libraries, defect, P5
Tracking: Not tracked • Status: NEW
People: Reporter: devd • Unassigned
When accepting wildcard certificates, NSS relies on an "at least 2 dots" rule, which is flawed because some effective TLDs (like co.uk) themselves contain a dot. This means that a certificate issued to *.co.uk is accepted.
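To illustrate the flaw, here is a minimal sketch of that kind of dot-counting heuristic (not the actual NSS code; the function name is made up):

#include <stdbool.h>
#include <string.h>

/* Illustrative approximation of an "at least 2 dots" wildcard check.
   Not the real NSS implementation. */
static bool
naive_wildcard_ok(const char *pattern)
{
    int dots = 0;
    if (strncmp(pattern, "*.", 2) != 0)
        return false;                /* only leading "*." wildcards */
    for (const char *p = pattern; *p != '\0'; p++)
        if (*p == '.')
            dots++;
    /* "*.com" has 1 dot and is rejected, but "*.co.uk" has 2 dots
       and is accepted, even though co.uk is effectively a TLD. */
    return dots >= 2;
}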
In theory, CAs shouldn't be issuing such certificates, but I don't think NSS should rely on that. For example, a CA could be using the same two-dot logic.
Also, if you hax0r a CA, it might be easier to get one cert out without raising flags than it is to get an intermediate or certs for lots of domains.
Mozilla already has code to find the effective TLD (nsIEffectiveTLDService, I think) that NSS could use.
(Certificate exceptions are stored per host, so this isn't an issue for user-added exceptions to the cert.)
Comment 1•12 years ago
NSS is standalone so it can't call any Gecko services, but it could incorporate a copy of the data if it's worthwhile. What do other browsers do when given a *.co.uk wildcard cert?
*.co.uk would be bad, but so would *.google.com, and that wouldn't be caught by this proposed heuristic. Certificate Transparency seems like a better way to catch hax0red CAs than keeping a huge exception list that only solves one minor slice of the problem.
Comment 2•12 years ago
nsIEffectiveTLDService should be moved to NSPR.
Comment 3•12 years ago
The raw data is in the file effective_tld_names.dat.
This seems to be very general data that would benefit a lot of applications.
If NSS needs to know that data in order to make correct decisions about domain names, then that data should be moved into a more generic place, and NSPR seems like a good place to me.
nsIEffectiveTLDService would of course still reside in Gecko, only an array with the raw data would live in NSPR.
The only disadvantage is that effective_tld_names.dat seems to get updated frequently; it would require us to release updates of NSPR more frequently.
The alternative of creating a copy of the data in NSPR/NSS seems insufficient to me; redundancy wastes space and leads to inconsistency.
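For reference, effective_tld_names.dat is a plain-text list with one rule per line; an illustrative excerpt (comments start with //, a leading * is a wildcard rule, and ! marks an exception to a wildcard):

// ===BEGIN ICANN DOMAINS===
com
co.uk
// every label directly under ck is a public suffix ...
*.ck
// ... except this one, which is a registrable name
!www.ck
// ===END ICANN DOMAINS===
// ===BEGIN PRIVATE DOMAINS===
appspot.com
// ===END PRIVATE DOMAINS===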
Comment 4•12 years ago
Comment 5•12 years ago
The question here is whether this sort of check should be part of NSS, or whether it should rely on the application in which it is embedded. I can certainly see a case that NSS should do this check itself, but Kai is right: the PSL is a data set which changes fairly frequently. And embedded copies of NSS in non-Firefox software are less likely to receive regular, timely updates than copies in auto-updated software like Firefox.
It would be somewhat unfortunate if out-of-date copies of the PSL embedded in old NSS versions started causing certificate interoperability problems for the owners of certain domains or TLDs.
Say, for example, the .zz TLD only allows registrations under com.zz, org.zz, net.zz and a list of others. So the PSL says:
*.zz
Then it opens up registration directly below the TLD. So I register gerv.zz. But old NSS will refuse to accept my cert for the perfectly valid domain https://gerv.zz.
If we move this to NSPR, then NSPR should provide a sensible API to the data, not just the data itself. The PSL data needs processing before you can answer common questions with it, like "what's the Public Suffix of this domain?" There are several PSL APIs around; NSS should work out which ones are good, and copy them.
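As a hypothetical sketch of what such an NSPR API could look like (these names are invented for illustration and do not exist in NSPR):

#include "prtypes.h"

/* Returns a pointer to the public suffix within |host|, e.g.
   "co.uk" for "www.example.co.uk", or NULL on malformed input. */
const char *PR_GetPublicSuffix(const char *host);

/* Returns PR_TRUE if |host| is itself a public suffix (e.g. "co.uk"),
   in which case a "*.<host>" wildcard certificate should be rejected. */
PRBool PR_IsPublicSuffix(const char *host);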
dveditz: assuming Google had authorized it, there's nothing at all wrong with a *.google.com certificate.
Gerv
Reporter
Comment 6•12 years ago
Also, maybe we should only look at the ICANN domains in the list, and not the private domains? For example, Google requested that appspot.com be treated as a top level (public suffix), but there are certificates for *.appspot.com.
Comment 7•12 years ago
BTW, is there any reason this bug needs to remain private?
Gerv
Reporter
Comment 8•12 years ago
I marked it as secure only because I didn't want to be the one to make that decision.
Comment 9•12 years ago
(In reply to Devdatta Akhawe [:devd] from comment #0)
> When accepting wildcard certificates, NSS relies on an "at least 2 dots"
> rule, which is flawed because some effective TLDs (like co.uk) themselves
> contain a dot. This means that a certificate issued to *.co.uk is accepted.
I agree that *.co.uk should not be allowed. However, in general, the full PSL is not the right list to use. The PSL is about cookies and is generally a horrible mess, and we should avoid adding any new uses of it (in fact, we should be working to greatly scale it back, IMO). Definitely, we should not consider the PSL to be generally defining what is a TLD and what isn't.
As luck would have it, I was just talking with some Googlers last night who were saying it makes sense for them to have a *.appspot.com certificate so they can provide SSL to all of their AppEngine customers automatically. I think that is very reasonable. In general, I think it is wrong to forbid non-TLD and non-CC-TLD+1 wildcard certs even if there are entries for those domains in the PSL.
In fact, even forbidding *.<TLD> might be too limiting. Any wildcard pattern is OK as long as the same entity has control over all the domains covered by the wildcard. For example, it isn't completely nonsensical to allow *.gov and *.mil (even both in the same certificate) since the US government owns those TLDs. (In practice, the USG is not going to use a wildcard cert for that, but that's their decision to make.) Similarly, if/when Google's "blog" TLD registration is approved, it would be reasonable for Google to have a *.blog wildcard certificate. (*.google is perhaps a more obvious example.)
It turns out that for our CA policy regarding name constraints as a mechanism to avoid the need for public disclosure + a full audit, we need to have a good definition of a TLD for similar reasons. I think the list we use should be the root zone database, plus the stated policies of each country-code TLD, minus the "new gTLDs" like *.google, *.youtube, *.lol, etc. That's pretty much what https://wiki.mozilla.org/TLD_List is. This would have the advantage that it is much more static than the PSL.
Group: core-security
Summary: NSS doesn't use publicsuffix list → Certificate validation does not properly prohibit *.<tld> wildcard certificates
Comment 10•12 years ago
(In reply to Brian Smith (:bsmith) from comment #9)
> In fact, even forbidding *.<TLD> might be too limiting.
For the record: this was added due to bug 159483 comment 27, to increase the chances of getting the proposed changes accepted (at the time there was a very strong reluctance to get rid of the old pattern-matching nonsense, cf. e.g. bug 159483 comment 43, something I have always failed to understand).
The *.appspot.com example isn't that convincing, however: instead of stuffing the customer identifier into the host part of the URL (foo.appspot.com), they could simply put it into the path component (appspot.com/foo).
Reporter
Comment 11•12 years ago
(In reply to Kaspar Brand from comment #10)
> The *.appspot.com example isn't that convincing, however: instead of
> stuffing the customer identifier into the host part of the URL
> (foo.appspot.com), they could simply put it into the path component
> (appspot.com/foo).
I think they want to rely on the isolation provided by the same origin policy. Stuffing the id into the path doesn't isolate app1.appspot.com from app2.appspot.com. I think it is for the same reasons that GitHub stuffs the userid in the path.
Reporter
Comment 12•12 years ago
> the "new gTLDS" like *.google, *.youtube, *.lol, etc. That's pretty much
> what https://wiki.mozilla.org/TLD_List is. This would have the advantage
> that it is much more static than the PSL.
The wiki link you mention says it has been obsoleted by the PSL. If I am not wrong, what you want is exactly the PSL, but ignoring the private additions. Does that look OK to you?
I agree that the current glob-matching algorithm might not be the best. I also agree with you that *.<whatever> is fine as long as everything covered by that wildcard is controlled by the same entity. Why not allow *.foo.bar.com to match a.foo.bar.com as well as a.b.foo.bar.com (see the sketch below)? We can do this today, even without fixing this particular issue.
See http://mxr.mozilla.org/mozilla-central/source/security/nss/lib/certdb/certdb.c#1391 for the current policy.
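To make the two interpretations concrete, here is a simplified sketch of single-label matching (where "*" replaces exactly one label, as RFC 6125 specifies) versus the multi-label matching proposed above; this is not the actual certdb.c code:

#include <stdbool.h>
#include <string.h>
#include <strings.h>

static bool
wildcard_matches(const char *pattern, const char *host, bool multi_label)
{
    if (strncmp(pattern, "*.", 2) != 0)
        return strcasecmp(pattern, host) == 0;   /* no wildcard: exact match */

    const char *suffix = pattern + 1;            /* e.g. ".foo.bar.com" */
    size_t hlen = strlen(host);
    size_t slen = strlen(suffix);
    if (hlen <= slen || strcasecmp(host + hlen - slen, suffix) != 0)
        return false;                            /* suffix must match */
    if (multi_label)
        return true;                             /* "a.b.foo.bar.com" OK */
    /* Single label only: no extra dot may appear before the matched
       suffix, so "a.foo.bar.com" matches but "a.b.foo.bar.com" does not. */
    return memchr(host, '.', hlen - slen) == NULL;
}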
Moving forward, I also agree that we want a better list than the PSL. Does anyone have suggestions for something better? What does Chrome use?
Comment 13•12 years ago
Chromium wants to use the PSL. I actually had a change committed to Chromium that used the PSL for evaluating this, and we noticed that, quite unfortunately, it broke *.appspot.com, which we had just rolled into the PSL.
Our bug for this is http://crbug.com/100442
The PSL use cases are being tracked at https://wiki.mozilla.org/Public_Suffix_List/Use_Cases which reflects conversations had with Gerv at Mozilla and Peter Kasting here at Google.
Yngve Pettersen of Opera proposed an alternate format, http://tools.ietf.org/html/draft-pettersen-subtld-structure-09 , which may better allow for the tracking and decentralization of these various policy flags; at least, that was my impression when I discussed it with him. The public suffix list currently only distinguishes "public-public" and "public-private" registrations, which is admittedly a little weird.
The concerns Gerv raised in comment #5 came up when we discussed this, but given our update cycle, they did not seem too significant. Presumably, the presence of .zz in the public suffix list would cause enough trouble for existing navigation and cookie handling that the wildcard support issues would be less of a concern.
Updated•2 years ago
Severity: minor → S4