Created attachment 546861 [details] [diff] [review]
patch to nsNSSBadCertHandler
If the handshake included a dnssec chain, verify it in nsNSSBadCertHandler and possibly trust the certificate if the chain checks out.
We should enable multiple modes of DANE verification. One mode is "certificate chain must be valid and DANE information isn't available or DANE verification must pass." Another mode is "certificate chain must contain one self-signed certificate and DANE verification must pass." The application should be able to enable/disable these modes independently from each other and the default should be to enable only the first mode I mentioned.
In order to enable the first mode I mentioned, you will have to put this logic in the cert auth callback instead of the bad cert handler.
(In reply to comment #1)
> One mode is "certificate chain must be valid and DANE information isn't
> available or DANE verification must pass."
This should be "certificate chain must be valid and (DANE verification must pass OR the domain was not marked as 'requires DANE')". This depends on a new mechanism for remembering that a domain requires DANE, which should be filed as a new bug.
Created attachment 547138 [details] [diff] [review]
patch for self-signed cert case
Handles self-signed cert with dnssec chain case (pref'd on 'security.DNSSECTLS.selfSignedCert.enabled')
Created attachment 555231 [details] [diff] [review]
Created attachment 557975 [details] [diff] [review]
Comment on attachment 557975 [details] [diff] [review]
Clearing review request until we re-assess how this fits in with our certificate validation improvement plans.
As the DANE RFC was approved on the 14th of June 2012, can support for DNS-Based Authentication of Named Entities (DANE) TLSA records be integrated in the next release of Firefox? Given that TLSA records have the potential to eliminate all CA root certificates within Firefox, this should make certificate management easier. In fact, surely the current CA companies should be placing their root CA certificates into DNS and signing them with DNSSEC.
annie, I don't think the CAs will ever disappear completely.
They do fulfill a function, let me explain.
There are currently 2 things wrong with the CA-model.
1. There are too many of them for the CA model to really work. The EFF SSL Observatory found over 2000 CAs and sub-CAs in use on the open Internet, if I remember correctly. The browser trusts a certificate signed by any of them.
2. The pricing model is a race to the bottom (StartSSL offers them for free, and their checks are often faster than the competition because they automated large parts of the process). A certificate is a generic product and there is no way for CAs to differentiate, as normal users won't look at which CA signed the cert.
With the current CA-model, there are basically 2 types of verifications for certificates:
1. domain validated - you own the domain; they check the WHOIS information and send you a verification code by email, for example
2. Extended validation ('green bar') - you are a certain organisation and you own the domain ("PayPal, Inc. (US)" owns paypal.com)
There used to be just one type, but because CAs had to deliver cheaper and cheaper certificates they did fewer and fewer checks, so now we have two types.
Getting a domain-validated certificate is pretty easy: if you are the owner of pypal.com, you can easily get a cert for it.
EV certificates give the user an indication that, if you want to do business with PayPal, the CA checked that the domain owner is who she says she is. For example, they check whether the organisation is registered in its country of residence.
So the owner of pypal.com should never be able to get a cert for pypal.com saying the organisation behind that name is PayPal in the US.
The number of CAs which create EV-certificates is a lot smaller.
DNSSEC with DANE allows for 4 things; I'll discuss the ones important for this discussion:
1. CA Constraints, a value is stored in DNS and signed with DNSSEC which says only trust this CA for this domain
2. Trust anchor assertion, a value is stored in DNS and signed with DNSSEC which says we used our own CA for this domain
3. Domain-issued certificate, a value is stored in DNS and signed with DNSSEC which says this is the certificate used for this domain
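For reference, the items above correspond to the certificate-usage values of a TLSA record as defined in RFC 6698 (which defines four in total; the service certificate constraint is omitted from the list above). A minimal, illustrative sketch of the record's three parameter fields; the descriptions and the `describe_tlsa` helper are mine, not from any library:

```python
# Sketch of the TLSA record parameter fields per RFC 6698.
# This is illustrative, not a full TLSA parser.

CERT_USAGE = {
    0: "CA constraint (PKIX-TA): chain must include this CA",
    1: "Service certificate constraint (PKIX-EE): end-entity cert must match",
    2: "Trust anchor assertion (DANE-TA): trust this (possibly private) CA",
    3: "Domain-issued certificate (DANE-EE): trust exactly this certificate",
}

SELECTOR = {
    0: "Full certificate (DER)",
    1: "SubjectPublicKeyInfo (DER)",
}

MATCHING_TYPE = {
    0: "Exact match on selected content",
    1: "SHA-256 hash of selected content",
    2: "SHA-512 hash of selected content",
}

def describe_tlsa(usage, selector, mtype):
    """Return a human-readable description of a TLSA record's parameters."""
    return "%s; match %s via %s" % (
        CERT_USAGE[usage], SELECTOR[selector], MATCHING_TYPE[mtype])
```

For example, the common "3 1 1" record means: trust exactly this certificate, matched by the SHA-256 hash of its SubjectPublicKeyInfo.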
So what does DANE solve ?:
It fixes the problem of "too many CAs", because the browser can check whether the DNSSEC-verified certificate information in DNS matches the certificate and CA information presented by the server.
The domain owner, or actually the DNS operator, can thus be sure that browsers that support DANE will not accept any certificate issued by CAs except those CAs, or even only the exact certificates, specified.
So what does DANE NOT solve ?:
Extended validation - the organisation validation
So with DANE the user can't be sure he/she actually typed the right name in the address bar and/or clicked a link actually pointing to the real site.
That is one reason why the CAs will remain or at least those that do the "real work" (extended validation).
Also not all browsers support DANE currently and it will take years before you can use just a DNSSEC/DANE certificate. For compatibility with older or other browsers and other clients you'll need a certificate signed by a CA.
DNSSEC is also not available everywhere; there are, for example, DSL routers which block DNSSEC queries because they only understand regular DNS (most of the time: any larger DNS answer).
Not all top-level domains support DNSSEC currently, although within one year's time 90+ have now been signed.
Deploying DNSSEC is also not as easy as regular DNS. DNSSEC keys need to be "rolled" regularly and communicated to the top-level domain, and in many cases a lot of that communication can't be automated yet, because the domain was registered through a registrar which doesn't support it.
Summary: DNSSEC/DANE solves some problems; after many years, most domains won't need to use a CA for their certificate needs.
Maybe I should add to the summary:
A much smaller number of CAs will remain in the browser after all those years; many (sub-)CAs currently exist just to handle domain validation.
Annie, Leen, and others: Please discuss the pros/cons of DANE vs what we have now in the dev.security.policy mailing list/newsgroup, not in this bug report.
Bill, agree that we don't want the bug to become a debating forum, but would just ask that you guys indicate the resolution back into the bug sometime Real Soon Now.
RFC 6698 (DANE) has been implemented in BIND since 9.9.1-P3 (September 2012), which supports the new TLSA record (DNS record type 52) to hold a public key or server certificate.
Is there any time-line for the RFC6698 support to be included in Mozilla?
FYI, we recently updated our DNSSEC patch for firefox (see https://bugzilla.mozilla.org/show_bug.cgi?id=748232)
to also support DANE. It would be great if this functionality could be integrated into Firefox at some point.
(The updated patches are at https://www.dnssec-tools.org/svn/dnssec-tools/trunk/dnssec-extras/nspr/sources/)
I would say that this is something we're still interested in, but it has been put on hold due to other higher-priority projects. For more information, you can take a look at https://wiki.mozilla.org/Security/Roadmap (when the DNSSEC-TLS item goes from "Unprioritized" to P1 or P2, it probably means we're actively working on it).
DANE is so cool
(In reply to David Keeler (:keeler) [Winter Jaycation until Jan. 2 or 6] from comment #14)
> I would say that this is something we're still interested in, but it has
> been put on hold due to other higher-priority projects. For more
> information, you can take a look at
> https://wiki.mozilla.org/Security/Roadmap (when the DNSSEC-TLS item goes
> from "Unprioritized" to P1 or P2, it probably means we're actively working
> on it).
In light of the NSA/GCHQ MITM attacks, support for DNSSEC/DANE should have the highest priority. CAs apparently are not the beacons of trust we assumed them to be.
DNSSEC/DANE in FF would be awesome.
I am not actively working on this.
Until this is built into firefox, see https://www.dnssec-validator.cz/, a DNSSEC/DANE/TLSA plugin for firefox and other browsers.
The information in https://wiki.mozilla.org/Security/DNSSEC-TLS-details is quite outdated. It still links to an old IETF draft instead of http://tools.ietf.org/pdf/rfc6698.pdf, and the section "6 CNAME issues" is dealt with in the RFC. In case multiple hostnames/domains share the same TLS certificate (e.g. a multi-domain/wildcard certificate for shared hosting) you can even use CNAMEs with wildcards so there is only one TLSA resource record to change on certificate updates:
*._tcp 3600 IN TLSA 1 0 1 c6b6e2ab05d606db8f8b6039c1053bf0430e5d9d75797e5f475f957dd6ff610e
*._udp 3600 IN CNAME *._tcp
*._udp.www 3600 IN CNAME *._tcp
*._tcp.www 3600 IN CNAME *._tcp
* 3600 IN CNAME @
@ 3600 IN A 220.127.116.11
@ 3600 IN AAAA 2001:1c50:11:0::1
will assign the certificate-hash in the TLSA-RR to ALL UDP-/TCP-ports on <domain> and *.<domain>.
I suggest "tlsa" from the hash-slinger tools (https://people.redhat.com/pwouters/hash-slinger/) which are available in most linux distros to create TLSA records manually:
tlsa --create --port '*' --protocol tcp --certificate <cert.crt> --output rfc --usage 1 --selector 0 --mtype 1 <hostname>
If your nameservers do not support TLSA-RRs, yet, you can use "--output generic" to generate generic TLSA-RRs. As a quick start you can use https://www.huque.com/bin/gen_tlsa to generate TLSA resource-records.
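For usage 1 / selector 0 / matching type 1, the record content produced by the `tlsa` command above is simply the hex SHA-256 of the certificate's DER encoding. A minimal Python sketch of that computation (the `tlsa_rdata` helper name is mine, not part of hash-slinger; only selector 0 and matching type 1 are handled):

```python
import hashlib
import ssl

def tlsa_rdata(pem_cert, usage=1, selector=0, mtype=1):
    """Compute the RDATA of a TLSA record for a PEM certificate,
    equivalent in spirit to:
      tlsa --create --usage 1 --selector 0 --mtype 1 ...
    This sketch handles only selector 0 (full certificate) and
    matching type 1 (SHA-256)."""
    if selector != 0 or mtype != 1:
        raise NotImplementedError("sketch handles selector 0 / mtype 1 only")
    der = ssl.PEM_cert_to_DER_cert(pem_cert)  # selector 0: whole certificate
    return "%d %d %d %s" % (usage, selector, mtype,
                            hashlib.sha256(der).hexdigest())
```

The resulting string is the part after the record name and TTL in the TLSA RR shown earlier.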
The sections "Transmitting the DNSSEC Chain" and "Format of TLS Extension" are superfluous. If you do a DNS query for a hostname, just check for DNSSEC and TLSA resource records. If both are present, do DNSSEC validation and check whether the hash of the certificate (or the certificate itself) in the TLSA-RR matches the certificate provided by the server. If the TLSA-RR and the certificate do not match, don't load the page and show an error message.
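The final matching step described above could be sketched as follows. This is a hypothetical helper, not Firefox code; it assumes the TLSA records were already DNSSEC-validated and it ignores the certificate-usage field (chain vs. end-entity handling is omitted):

```python
import hashlib

def dane_match(tlsa_records, server_cert_der):
    """Check whether any TLSA record matches the DER-encoded
    certificate the server presented.
    Each record is a (usage, selector, mtype, data_hex) tuple.
    Only selector 0 (full certificate) is handled; usage-specific
    chain processing is deliberately left out of this sketch."""
    for usage, selector, mtype, data_hex in tlsa_records:
        if selector != 0:          # 0 = match against the full certificate
            continue
        if mtype == 0:             # matching type 0: exact match
            if bytes.fromhex(data_hex) == server_cert_der:
                return True
        elif mtype == 1:           # matching type 1: SHA-256
            if data_hex == hashlib.sha256(server_cert_der).hexdigest():
                return True
        elif mtype == 2:           # matching type 2: SHA-512
            if data_hex == hashlib.sha512(server_cert_der).hexdigest():
                return True
    return False
```

If this returns False for a host that publishes TLSA records, the connection would be treated as a validation failure rather than falling back silently.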
I'm running my three private DNSSECed domains without problems. A good DNS-service should provide automatic creation of DNSSEC-records. The CZNIC-validator (https://www.dnssec-validator.cz/) works fine. As Dnsmasq supports working DNSSEC-validation since version 2.71 most consumer routers will be DNSSEC-capable in near future. Dnsmasq also supports adding custom trust-anchors for any domains/subdomains. That way trust-anchors can be added for each top-level domain if someone doesn't trust IANA/ICANN.
DANE can also protect downloads of Firefox/updates/addons against MITM attacks by intelligence agencies and other criminals.
So almost a year has passed since DANE went onto the roadmap; sadly we still don't see status P1 or P2 on it. Are we going to see native DANE support in Mozilla in 2015?
DNSSEC usage is generally getting to the point where browsers are actually the part stopping DANE from being used in production, rather than ISPs not validating DNS or TLDs not being signed.
So stop being part of the problem and start being part of the solution ;)
Maybe Mozilla unofficially decided they don't want to support DNSSEC? Just wondering because of the recent announcement of their "new baby", together with the EFF etc.: https://letsencrypt.org/
Stefan Neufeind, sadly it looks like DNSSEC/DANE has too powerful an anti-lobby, even in Mozilla.
Yep, it moves power out of the hands of Mozilla and CAs into the hands of the domain owners... Plus, it makes MITM attacks much more difficult.
(In reply to Stefan Neufeind from comment #23)
> Maybe Mozilla unofficially decided they don't want to support DNSSEC? Just
> wondering because of the recent announcement of their "new baby", together
> with EFF etc.: https://letsencrypt.org/
Who will run the letsencrypt-CA?
Isn't the problem with DNSSEC that some ISP resolving DNS servers and DSL routers are not DNSSEC-compatible? (Forget validating; just allowing the new DNSSEC record types to be queried without causing errors.)
If an ISP is that incompetent the user should run off like hell ... ;)
Shouldn't validation occur in the browser anyhow?
(In reply to dirk husemann from comment #29)
> Shouldn't validation occur in the browser anyhow?
Yes. But some resolvers seem to have problems with passing down RRSIG-/DNSKEY-RRs to downstream resolvers which should be fixed by an ISP.
By the way, DNSSEC-proxying is snake-oil as an MITM-attacker can suppress the DO-/AD-bit and insert any unsigned resource record. Always validate in the application or at least with the resolver on the host.
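To make the DO/AD-bit point concrete: the DO ("DNSSEC OK") bit lives in the EDNS0 OPT record of the query, and the AD bit in the reply is just an unauthenticated header flag, so an on-path attacker can strip either when validation happens on a remote resolver. A minimal sketch building such a query by hand (function name and defaults are mine, for illustration only):

```python
import struct

def dnssec_query(qname, qtype=1, txid=0x1234):
    """Build a minimal DNS query with an EDNS0 OPT record whose DO
    ("DNSSEC OK") bit is set, as a validating stub resolver would.
    qtype=1 is an A-record query; qname must not have a trailing dot."""
    # Header: ID, flags (RD set), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=1
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 1)
    # Question: QNAME as length-prefixed labels, then QTYPE, QCLASS=IN
    question = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in qname.split(".")
    ) + b"\x00" + struct.pack(">HH", qtype, 1)
    # EDNS0 OPT pseudo-RR: root name, TYPE=41, CLASS=UDP payload size 4096,
    # the 32-bit TTL field carries the DO flag in its top bit, RDLEN=0
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0x8000, 0)
    return header + question + opt
```

Nothing in this wire format is signed, which is exactly why the signal can be suppressed in transit and validation has to happen on the host itself.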
> > Shouldn't validation occur in the browser anyhow?
> Yes. But some resolvers seem to have problems with passing down
> RRSIG-/DNSKEY-RRs to downstream resolvers which should be fixed by an ISP.
Exactly. Also, a number of broadband routers are in use as DNS relays and they have problems too.
In Germany they did a good test in 2010:
- not only problems with RRSIG/DNSKEY RRs, but also with handling large DNS responses and support for DNS-over-TCP.
http://netalyzr.icsi.berkeley.edu/ has stats from 2011:
Haven't found more recent statistics.
A lot of newer broadband-routers don't have this problem anymore.
To work around that problem DNSSEC-trigger uses a HTTP(S) fallback:
AFAIK so far no organisation has been willing to host such a fallback for a large userbase.
And anycast services don't have the best reputation either.
I don't understand why issues with DNSSEC+DANE should prevent implementation. For the next years CA's will not vanish magically. So if there are problems, the CA fallback is there.
So when DNSSEC does not work, fall back to CA-only validation and present the user with a "Your secure DNS implementation does not work. Contact your network administrator to prevent security trouble." message from time to time, and you'll see that a year from now most of these issues will be solved, like the few IPv6 issues have been solved.
And in the meantime, implementing DANE for one's own domains would bring a real advantage instead of being only an early adopter's plaything. If at least one of the major browsers would support DANE, I'd enable it for all domains I control as soon as possible.
But as said above, the new CA initiative is a sign, that DNS based validation is not really wanted.
I don't see why the new CA initiative is a sign against DANE, adding another CA doesn't solve the "too many CAs" problem, unless you actually believe that all other CAs will go out of business and browsers will drop them as trusted roots in the near future.
If nothing else, DANE allows an extra layer of verification that has value to the user in establishing trust in the connection. When present, DANE records should be validated and an indication presented to the user.
> I don't see why the new CA initiative is a sign against DANE
Not in itself, but together with the fact that for years no progress has been made here regarding DNSSEC and DANE.
Sure, but IMO there are two main reasons why adoption of DNSSEC/DANE has been slow: 1) lack of support for DNSSEC at registrars and DNS providers, 2) lack of support in clients. So there's been a bit of a chicken and egg problem, DNS hosts don't care about supporting DNSSEC because no clients bother to verify DNSSEC, and clients don't bother to verify DNSSEC because so few hosts provide the information.
That in itself does not imply that DNSSEC is a bad idea and should not be supported. In fact there has been a significant improvement in support for DNSSEC in the registrar and DNS provider space. All new TLDs are required to support DNSSEC and older TLDs are adding it rapidly. There has also been a recent uptick in support by DNS hosts.
Other clients and servers, such as SMTP and XMPP have also been adding support for DNSSEC/DANE, and there has also been growing support for additional DNS record types, such as SSH and PGP keys.
The more clients add support for DNSSEC/DANE validation the more demand there will be for support by DNS providers. This has to start somewhere, why shouldn't Mozilla be the pioneer and set a good example?
Hi folks - it's great to see interest and a lively discussion on this topic, but bugzilla isn't the most appropriate place for it. Please do continue this discussion on a mailing list like dev.tech.crypto ( https://lists.mozilla.org/listinfo/dev-tech-crypto ) or dev.security ( https://lists.mozilla.org/listinfo/dev-security ). Thanks!
@David: Could you maybe give a *short* feedback why (as it seems to be) Mozilla seems to be actively ignoring this inquiry? I doubt it can be argued with a lack of interest or "provide patches".
Personally, I think it's a cool idea. However, this feature is not a priority right now because there are a number of unanswered questions as to how to implement and support it and because we are focusing on other technologies that we think will have a greater impact on the security of a larger number of users than this will. I know that's disappointing to hear. However, in the future we might work on making it easier for addons to experiment with different methods of certificate verification, and this could be implemented as an addon that we might eventually incorporate into Firefox.
I guess stuff like this should be in the core. How do I know which addon can be trusted with validating certificates? I am not even sure the AMO review would really catch malicious addons, but addons promising validation and implementing it badly could go unnoticed, while users think they are safe.
For CAs: same argument as above. In the long run we don't need more CAs, but fewer. With DANE I need to trust my DNS registrar and nobody else. With the current CA situation I need to trust that my visitors only trust trustworthy CAs, which is not the case in most browsers. So even the CA fallback should eventually be disabled when a site uses DANE.
You cannot trust any code until you have reviewed it yourself. ;-)
DNSSEC/DANE is a temporary step to improve domain security.
Currently I'm promoting BTLS (Blockchain-based Transport-Layer Security), which uses a blockchain to assign fingerprints of X.509 client certificates to usernames. It seems to be the only way for tamper-free TLS with domains, too. But it won't allow disputing domain registrations, as there won't be any registries anymore.
Re Comment 40, I have no comment about BTLS, but DNSSEC would be very useful so that the DNS itself could be used to convey application characteristics prior to connection establishment without worrying about downgrade attacks. It is an enabling technology.
It really comes down to a matter of who do I trust?
The current situation: I have to trust
- my domain name registry
- a HUGE number of CAs scattered all over the world (any one of which could be hacked, be used to issue rogue certificates)
- my server
Essentially 2+X, with X being fairly large.
With DNSSEC I can narrow this down to
- my current domain name registry
- my server
Essentially just, ehm, 2.
So: required trust with current setup >> required trust with DNSSEC.
If Mozilla is really concerned with creating a safe browsing experience it MUST go and support DNSSEC and co. If...
Nobody mentioned yet that DANE solves certificate revocation in a very nice manner. If I get hacked I simply revoke the cert and switch it with a new one in DNS. No need for CRL or OCSP.
This should get at least experimental opt-in support on dev so people could start banging on it.
(In reply to imbacen from comment #43)
> Nobody mentioned yet that DANE solves certificate revocation in a very nice
> manner. If I get hacked I simply revoke the cert and switch it with a new
> one in DNS. No need for CRL or OCSP.
You're right. You can revoke a certificate just by removing/replacing its TLSA-RR in the zone of a domain. DNSSEC also allows the use of self-signed certificates. There's no need for CAs when browsers or other TLS clients support DNSSEC.
Added my vote to this. We should definitely implement this!
Well, they are improving things around revocation and OCSP in Firefox 37 so that is at least something:
You can set up your server(s) to do OCSP stapling and add an HPKP header.
Mozilla will add a CRL-like list for the intermediate CA certs.
The CA can add an OCSP must-staple to the certificate they issue to you.
You control which public keys are valid (make sure you have multiple keys pre-generated) and the CA controls the revocation (and you can contact the CA to revoke a cert).
Here are the links:
Yes, I'm sure that will be quite nice. The key issue still is: you remain dependent on the CA.
OCSP (along with OCSP-stapling) does not address the core of this issue:
(a) we need to empower the site owner to control the chain of trust - if they want to trust a CA, fine, let them do that; if they want to issue their own certs instead, fine, let them use DNSSEC/DANE.
(b) the CA model is flawed and compromised - c.f. NSA/GCHQ/MITM-certs, etc.
Well, that is why I mentioned the HPKP header. As long as you use public key pinning, nobody can impersonate your site unless they have a private key which matches one of the pinned hashes (we assume they would need your private key for that). Which CA signed the certificate used is then irrelevant.
Obviously: HPKP only helps regular visitors of your site and not first time visitors.
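For concreteness, an HPKP pin value (RFC 7469) is just the base64-encoded SHA-256 of the key's DER-encoded SubjectPublicKeyInfo. A sketch, assuming you have already extracted the SPKI bytes with an X.509 tool such as openssl; both helper names are mine:

```python
import base64
import hashlib

def hpkp_pin(spki_der):
    """pin-sha256 per RFC 7469: base64(SHA-256(DER SubjectPublicKeyInfo)).
    Extracting the SPKI from a certificate is left to a real X.509
    parser; this sketch assumes you already have the DER bytes."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode("ascii")

def hpkp_header(pins, max_age=5184000):
    """Format a Public-Key-Pins header from a list of pin values.
    RFC 7469 requires at least two pins, including a backup pin
    for a spare key you keep offline."""
    parts = ['pin-sha256="%s"' % p for p in pins]
    parts.append("max-age=%d" % max_age)
    return "Public-Key-Pins: " + "; ".join(parts)
```

The backup-pin requirement is what lets you recover after a key compromise: the spare key's pin is already trusted by returning visitors.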
With the OCSP improvements you can revoke if one of your private keys or CA keys gets compromised.
So it's not yet a 100% perfect*, but a big improvement over everything else we've had so far.
Yes, for a short period the person with the compromised key can impersonate your site. At first they can use your old certificate; because they already have your key, getting the certificate is easy. Then you would ask the CA to revoke the certificate. Now they would need to find another CA to issue a certificate for your key, because OCSP stapling wouldn't work anymore.
As long as the website visitor doesn't see the fake site and visits your real site they can not be fooled anymore as you've obviously updated the HPKP-header.
Some people at Google and others are working on http://www.certificate-transparency.org/ to solve the CA part. So you can detect if another CA issues a cert. And browsers would need to reject certificates which don't include a CT timestamp.
Yep, it's a lot more complicated than I would like to see.
* It still needs the OCSP must-staple part. Neither the HTTP header nor the X.509 certificate extension has been enabled in Firefox yet, though it's on the roadmap for the near future. The header support has been implemented, but not enabled: https://bugzilla.mozilla.org/show_bug.cgi?id=901698 My guess would be that they are waiting for the X.509 certificate extension to be ready and the change to NSS to be made.
In my opinion it makes no sense to mess about with symptoms of a completely broken system like the CA infrastructure. It's just a waste of man hours.
To put it straight: The CA infrastructure is expensive, complicated, error-prone and any CA which has made it into OSes or browsers can fake certificates of anyone for anyone. A lot of CAs think it's legit to issue fake intermediate certificates for "security"-systems like firewalls - which can also be modified for eavesdropping or manipulation of data. So there's no trust either. If you look at the percentage of websites using HTTPS you can clearly see that CA infrastructures are just a big bureaucratic flop preventing the comprehensive roll out of encryption.
HPKP is limited to HTTP while DNSSEC/DANE is a generic, protocol-independent solution for any domain-related authentication.
Why use an additional authentication tree if we can just put a few additional DNS-records in the already existing DNS-tree? It is also much simpler to understand when name and signature of the name use the same tree/database.
OCSP and CRLs have never worked reliably. A lot of CAs don't even support CRLs over IPv6. In my opinion OCSP, CRL support and HPKP are just bloatware, making applications like Firefox huge, error-prone and unmaintainable, and eating up man hours.
Technically we can solve any problem. Mentally we cannot get our dumb brain to get rid of old habits. So we're stuck in superstition of systems failing over and over. :-(
Politically you're right and I am totally on your side. But technically almost no domain really uses DNSSEC. Maybe a "better validated" badge in Firefox could motivate some people, but I doubt it.
It's the typical hen-egg-problem. DNS operators whine about missing client support and client application vendors whine about missing server support resulting in a feature dead-lock. One party just has to start.
Mozilla has a powerful position influencing users.
I suggest two traffic lights in the address bar:
GREEN: DNSSEC correctly validated
YELLOW: no DNSSEC records; show a "be careful" warning
RED: DNSSEC validation failed; block loading the resource
GREEN: DANE correctly validated
YELLOW: no TLSA records; show a "be careful" warning
RED: TLSA record validation failed; block loading the resource
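The proposed traffic lights could be sketched as a pair of state functions; the function names and string return values here are hypothetical, not Firefox APIs:

```python
# Sketch of the proposed address-bar indicator logic. State names
# and function signatures are illustrative only.

def dnssec_indicator(has_rrsig, dnssec_valid):
    """Map a DNSSEC lookup result onto the proposed traffic light."""
    if not has_rrsig:
        return "YELLOW"                        # no DNSSEC records: warn
    return "GREEN" if dnssec_valid else "RED"  # RED: block the load

def dane_indicator(has_tlsa, tlsa_matches):
    """Same idea for the DANE/TLSA check on the server certificate."""
    if not has_tlsa:
        return "YELLOW"                        # no TLSA records: warn
    return "GREEN" if tlsa_matches else "RED"  # RED: block the load
```

Keeping the two lights separate matters: a site can have valid DNSSEC (first light GREEN) while publishing no TLSA records at all (second light YELLOW).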
This will also improve security of Mozilla updates as the download of an update cannot be manipulated by MITM-attacks anymore.
Technically no sane person would run a script from mozilla to alter his configs to set up some bogus certificate.
So, now we can agree that this is the same FUD/bull....t as saying "no domain really uses DNSSEC"
Could you please provide some real arguments against DNSSEC and not opinions, thanks.
DNSSEC seems to have surpassed the hen-egg-stage. Worldwide the validation rate is at just under 13%, with some countries like the USA seeing higher rates of 23% (source: http://stats.labs.apnic.net/dnssec/XA). In fact, there's a very strong growth visible.
All TLDs currently have DNSSEC keys, and some of those zones have extremely high signing rates (.nl, the fifth largest ccTLD, is at about 40%).
I'd say that we have definitely reached a point where implementing DNSSEC in Firefox makes a lot of sense, and as soon as that's done TLSA will be easy.
It's not really a hen-and-egg problem anymore.
Most TLDs are signed, more and more registrars support at least DNSSEC and a lot of DNS resolvers already support DNSSEC including Google DNS. So if you wanted to use DNSSEC/DANE on your server at this very moment you could. The problem is that not a single browser supports it so browsers are the real blockers here.
It seems that web services unrelated to browsing will adopt the standard first.
That DNSSEC is not widely used is simply because no program cares, especially browsers.
As soon as Firefox, Chrome or maybe IE starts changing the default behaviour to show a yellow instead of a green symbol for sites that have TLS but no DANE, sites will adopt it. There is simply pressure from a large user base. At least all larger sites will implement it because they don't want to appear insecure to a default user.
This is not a hen/egg problem; it's a problem of user visibility, and that's a browser responsibility.
I blame Google, Mozilla and Microsoft for endangering common users. I really hope it's not because of the CA signup fees but because of belief in broken technologies, as argued before.
And don't come up with arguments that some TLDs are not signed. They had the time and now they will get the pressure from their customers, so be it. It's not something invented yesterday.
I did not say it's an argument AGAINST DNSSEC; I said this is the problem. DNSSEC can be deployed without any browser support at almost no additional cost, but even some registrars do not offer it yet. So I want DANE to happen, but I do not have much hope for now.
Current Dnsmasq (>=2.71) already supports DNSSEC validation and is available in Ubuntu 14.10. It won't take long to make it into consumer routers and other distros. My private .de domains are already secured with DNSSEC/DANE, so it's possible to use it. The two weak points are finding a registrar which can handle DNSSEC with TLSA-RRs, and application support -> Firefox, etc.
Who is in charge of implementing security features like DNSSEC at Mozilla?
Chinese CA Issues Certificates To Impersonate Google: http://it.slashdot.org/story/15/03/24/1730232/chinese-ca-issues-certificates-to-impersonate-google
Now that even Mozilla plans to deprecate non-secure HTTP ( https://blog.mozilla.org/security/2015/04/30/deprecating-non-secure-http ) and considers this in the FAQ, is there hope to get this bug fixed?
I've wanted to suggest closing it as "wontfix" for a while already.
Almost two years have passed since DANE went onto the roadmap; has the Mozilla security group come to a conclusion on how to proceed with DANE support in Firefox?
This is not a priority at the moment. See comment 38.
Maybe the recent RFC 7671 (and RFC 7673) may give some answers to the "number of unanswered questions" discussed in comment 38.
*** Bug 666148 has been marked as a duplicate of this bug. ***
This is a potential implementation of bug 1201841. However, that bug has no current active work being done on it, so it's not even clear if this is the direction the implementation will take. Marking this as WONTFIX until we know if this is the approach we want to take.
Reopening since we have a MOSS project working on this now.
*** Bug 1201841 has been marked as a duplicate of this bug. ***