Open Bug 1059392 Opened 10 years ago Updated 2 years ago

Switch default pinning enforcement level to strict

Categories

(Core :: Security: PSM, defect, P3)

x86 macOS

People

(Reporter: mmc, Unassigned)

References

(Blocks 1 open bug)

Details

(Whiteboard: [psm-backlog])

From https://wiki.mozilla.org/SecurityEngineering/Public_Key_Pinning#How_to_use_pinning

the default enforcement level allows user-installed trust anchors to bypass pinning checks. The fact that we have this pref at all is sufficient, and we should default to the level that makes sense for most users (i.e., not enterprise users whose admins want to sniff all their traffic). Now that pinning is about to hit stable with no major problems, we should switch the default to strict.
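
For concreteness, the levels being discussed are integer values of that pref. A minimal user.js sketch of the change proposed here, with the value meanings as given on the wiki page linked above:

    // user.js sketch of the pref discussed in this bug (not the actual patch).
    // Per the pinning wiki: 0 = pinning disabled,
    //   1 = user-installed trust anchors may bypass pins (current default),
    //   2 = strict: pins enforced even against user-installed trust anchors.
    user_pref("security.cert_pinning.enforcement_level", 2);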
mmc: why the blocking relationship with 1004350?
For this to have a real security impact, the "user allowed mitm" setting needs clearly visible feedback, though. Otherwise, stuff like Superfish and so on can just flip the pref.
bug 1168603 points out that the behavior when security.cert_pinning.enforcement_level = 1 is significantly worse than it needs to be, and should be tightened up no matter what.  

I agree with mmc here that the default should really be 2.
gcp: if a piece of malware were able to just flip the pref, chances are it would also be able to disable whatever visible feedback one might get. In theory that's certainly the case, but the resulting cat-and-mouse game might still be worth playing.

That said, I also agree that the default should be set to 2 (as I only ended up here in an attempt to figure out why the default value is what it is) as it might at least catch the lazier non-pref-flipping malware. And taking that as the premise: if a user or a stalking employer knows how to mess with trust anchors, they should also easily be able to change the preference, and so I do not quite see the value of (arguably) lowering the security in the much more common use case.
>gcp: if a piece of malware would be able to just flip the pref, chances are it would also be able to 
>disable whatever visible feedback one might get. 

I guess it depends on how Firefox is installed (are our binaries writable at the user's permission level?) and whether the relevant UI is accessible to extensions/add-ons. For the case where Firefox is installed with admin permissions but run as a regular user (the typical one on Windows, AFAIK), I think we can make it safe?
It would be nice to do this, but we would need considerable UX help in terms of informing the user when something goes wrong and giving them the tools to set their configuration in a way that will work for them (e.g. if they're behind a corporate MITM box).
Whiteboard: [psm-backlog]
If a user is behind a corporate MITM box, the operator of that MITM box is expected to make the situation clear to the user, including "here's how to get the certificate and here's how to install it", which should involve checking an "allow complete compromise of everything" box.

I don't think Mozilla should be providing "considerable UX help" to TLS MITM boxes that the user doesn't already know about. We shouldn't be encouraging users to permit this sort of intrusion.
A number of security products on Windows, with close to 50% combined market share (Avast, Kaspersky, Bitdefender, ESET, BullGuard), do some sort of man-in-the-middling, and their users don't have a clue what's going on most of the time.
Sure, but that software gets regular updates from its vendors, and they can easily be given a heads-up about which bits to twiddle before mozilla changes the default setting.

The point is to make it harder for someone to say "oh, add this CA to your browser to access website X" and have it turn out to break protections that are otherwise in place for websites Y and Z.
Maybe I don't fully understand the technical background to this, but what would be the practical implications for a regular user who has, say, Kaspersky installed, all of whose connections on the machine will be intercepted and re-signed with a "Kaspersky Anti-Virus Personal Root Certificate", and who is trying to access a pinned domain like google.com, if this bug were to be enacted? Would there be an error message?
If they have the Kaspersky anti-virus personal root certificate installed legitimately, then Kaspersky's tools (the same tools that installed the root cert) should explicitly flip security.cert_pinning.enforcement_level back to 1.

Or (even better) Mozilla could implement something like the proposal in bug 1168603, and encourage Kaspersky to install their cert with the extra option of "yes, please allow this root CA to MITM everything", while leaving the main security.cert_pinning.enforcement_level set to 2, so that incidentally user-installed certs still don't get to MITM everything.

Responsible anti-virus vendors would presumably prefer the latter option (so that their users aren't exposed to additional risk), but either way should work.
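
To make "explicitly flip the pref" concrete, here is a hedged sketch of how a vendor installer or enterprise admin with install-level access could do this via Firefox's AutoConfig mechanism; whether any given vendor actually ships something like this is an assumption, not a known fact:

    // mozilla.cfg - AutoConfig sketch of a hypothetical vendor/admin opt-in.
    // (The first line of an AutoConfig file must be a comment.)
    // Lower the enforcement level for this deployment only, alongside the
    // root certificate the same installer adds to the trust store.
    lockPref("security.cert_pinning.enforcement_level", 1);

(AutoConfig also requires pointing general.config.filename at this file from a defaults/pref/*.js file; that plumbing is omitted here.)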
> if a piece of malware would be able to just flip the pref, chances are it would also be able to disable whatever visible feedback one might get.

Per bug 1228714, this issue can be addressed. The purpose of a pin is to prevent MITM attacks. I think Monica's suggestion is good, and the issue of malware "flipping the prefs" can be addressed by making there not be a pref to flip. A special enterprise version of the browser could be used by those who need to override pins.

Anti-virus software should not be overriding these pins; that is an attack vector in and of itself, and we've seen plenty of stories (even recently) about anti-virus software being used as an attack vector. There is no real reason for anti-virus software to be MITMing all of the traffic leaving a computer. The correct behavior, if for some reason that traffic is objectionable, would be to block it outright, not to disable the pin.
Alternatively, it would be possibly sufficient to simply display a clearly visible special icon in the URL bar that tells the user that their connection is currently being MITM'd.
(In reply to Greg from comment #13)
> Alternatively, it would be possibly sufficient to simply display a clearly
> visible special icon in the URL bar that tells the user that their
> connection is currently being MITM'd.

FWIW, I'm not convinced this is sufficient. Sensitive material (cookies, passwords or other form variables, etc.) may end up being sent over the MITM'd connection before the user even has a chance to see (let alone react to) the "you are being MITM'd" icon.
If the attacker has no access to the CA store (i.e. the local system), HPKP will protect you from rogue CAs issuing false certificates. No settings flip or changes to Firefox are needed for that.

If someone has access to the local CA store, it means they already have limited or full control over the user's system, and whether HPKP works is then beside the point because the "attacker" can already do whatever he wants.

The use cases there are all intentional snooping (whether by corporate decision or anti-virus, and note the latter case requires some cooperation for this bug to be workable). Anyway, that's already described in this bug.

You can believe that corporate installations shouldn't be able to do that, or that antivirus solutions shouldn't MITM, and we might or might not agree, but that's beside the point: they work like that, they have control over the local system and nothing we decide to do in this bug will change that.

My understanding is that in order to resolve this bug, one of these issues needs to be addressed:

1) The anti-virus software that is MITMing (the majority of it) on all affected computers is magically updated to a version that flips our pref.

or

2) Someone comes up with a UX that is able to explain to even the least aware of our users that their traffic is being intercepted and that they are being snooped on, but that this may be intentional on behalf of their employer or their anti-virus or firewall software, or that maybe they have some malware on their system, or had at some point.

I would expect that you don't get a commitment to fixing this until someone can come up with a solution for either of these.
Accusing Firefox developers of refusing to protect users from people who've already got root access on their computer via some other means is misguided; accusing us of deliberately and maliciously harming the security of our users is not acceptable.

Please keep further comments in this bug focused on resolving the issue.
Mike, you manage to spectacularly miss the point of HPKP. It is not to protect users from attackers who have root access. Obviously, this isn't possible for an attacker with full access and thus it's a strawman argument, and a pretty obvious one at that.

The question is: how difficult is it to pull off? Currently, all a user has to do to become vulnerable is perform a few misclicks; they'll have a new root CA installed, and all bets are off. Because HPKP is, de facto, DISABLED by default as of now. Why?

If you really crave interoperability with broken "anti-virus" software, why not display at the first HPKP violation a huge warning screen asking the user if they're having snake oil installed that breaks their TLS open. In any case, the default should be "security on" and not the other way around.

You, Mike, hiding my comments is simple intellectual dishonesty. You *are* putting your users at risk since you're by default disabling a crucial security feature. That is fact, not opinion. You might have some misguided reason for it (because probably you don't want to get the rivers of tears from anti-virus software users), but that's entirely beside the point.

To get back to the issue, can you at least give a commitment either way (keep it broken or fix it) instead of having this open issue lingering about for ALMOST THREE YEARS without anyone appearing to care about it?
Another thought on this as some form of middle ground: since this is an option that some people like to enforce, why is it buried so deep in Firefox internals instead of being in the preferences? And it's completely obscure at that (security.cert_pinning.enforcement_level set to some magic value of 0, 1 or 2 is completely unintuitive). So it could at least become part of the advanced preferences, so it's easy for people who care about this sort of thing to enable.
> Another thought on this as some form of middle ground [..] So it could at least become part of the advanced preferences

That's kinda funny, cause it would be easier for them to implement a "You're being MITM'd!" icon in the URL bar than what you're suggesting as a middle ground. :P
(In reply to Johannes Bauer from comment #19)
> Mike, you manage to spectacularly miss the point of HPKP. It is not to
> protect users from attackers who have root access.

Correct, it's to protect against rogue CAs issuing certificates they weren't asked for.

> The question is: How difficult is it to pull off? Currently, all a user has
> to do to become vulnerable is perform a few misclicks, 

This description applies to a full compromise of the system via a rootkit too, so your argument falls flat entirely.

> If you really crave interoperability with broken "anti-virus" software, why
> not display at the first HPKP violation a huge warning screen asking the
> user if they're having snake oil installed that breaks their TLS open.

This warning won't do any good unless users understand what it means. As already pointed out, explaining this in an understandable manner is non-obvious and a blocker for this bug.

We're not happy with the way anti-virus software undermines Firefox's security, but moving the problem to the users isn't solving anything.

> You, Mike, hiding my comments is simple intellectual dishonesty.

Your comments were hidden because they were in violation of basic etiquette. It doesn't even have anything to do with the arguments being valid or not. https://bugzilla.mozilla.org/page.cgi?id=etiquette.html

> To get back to the issue, can you at least give a commitment either way
> (keep it broken or fix it) instead of having this open issue lingering about
> for ALMOST THREE YEARS without anyone appearing to care about it?

Please reread the last part of comment 17. Your suggestion of "displaying a huge warning screen asking the user if they're having snake oil installed" most likely doesn't meet the requirements of (2).

>why is it buried so deep in Firefox internals instead of the preferences

Because in order to surface it, a usable UX explaining what it does is needed. If you can construct that, you can probably construct a usable "warning", and we might as well default it on in the first place (as already explained several times now). Right now, you won't find the pref unless you understand the underlying problem it addresses.

If this data doesn't exist already, I guess something that would be *constructive to an actual solution here* might be to try to determine whether the anti-virus vendors' installed CAs are fixed (stable) enough that we can include a list of them with decent coverage. Massaging the notification to read "Your anti-virus software is inspecting Firefox's connections" (if recognized) vs. "An unknown party is reading your internet traffic" (if not) would help a lot with raising the appropriate level of alert.
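
A minimal sketch of that classification step, assuming we shipped such a list of known anti-virus roots (the issuer names other than Kaspersky's, and the helper itself, are illustrative assumptions, not an existing Firefox API):

    // Hypothetical helper: choose the notification text based on the issuer
    // of the user-installed anchor that tripped the pin. List is illustrative.
    const KNOWN_AV_ROOTS = [
      "Kaspersky Anti-Virus Personal Root Certificate",
      "Avast Web/Mail Shield Root",   // assumed name
      "Bitdefender Personal CA",      // assumed name
    ];
    function describeInterception(issuerCommonName) {
      const recognized = KNOWN_AV_ROOTS.some(n => issuerCommonName.includes(n));
      return recognized
        ? "Your anti-virus software is inspecting Firefox's connections"
        : "An unknown party is reading your internet traffic";
    }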
Gian-Carlo,

> This warning won't do any good unless users understand what it means. As already pointed out, explaining this in an understandable manner is non-obvious and a blocker for this bug.

Let me see if I understand your argument correctly: you're saying that offering *no explanation at all* and letting people think their connection is secure when in fact it is not, is *better* than offering an explanation that some users might not understand (and moving forward with other bugs to improve the UI/UX of the explanation)?
I am indeed saying that showing a "security warning" that for most users will actually be a false alert (or at least, something they intended, i.e. having an anti-virus) is extremely counterproductive, because it will cause real warnings (that they *should* act on) to be ignored.
> I am indeed saying that showing a "security warning" that for most users will actually be a false alert (or at least, something they intended, i.e. having an anti-virus) is extremely counterproductive, because it will cause real warnings (that they *should* act on) to be ignored.

Maybe we have something different in mind then when it comes to the design of the warning.

I am not suggesting anything similar to the other security warnings, but a unique icon that just sits there, and when clicked on gives a deeper explanation of what it means.

That would not cause other warnings to be ignored, and therefore it would be productive.

As is, I have to agree with Johannes; it does not seem like Firefox is taking its users' security seriously.
> Mike, you manage to spectacularly miss the point of HPKP. 

My apologies for misunderstanding. I'd assumed that when you described the "Someone locally injecting a CA certificate and intercepting traffic", that you meant somebody with the access required to locally inject a certificate authority onto a machine and leveraging that to intercept network traffic. It's not clear to me why somebody able to modify a local certificate store would be unable to modify that browser or OS to circumvent browser warnings or restrictions in other ways, but as you say, I'm not an expert.

Nevertheless, my argument about the basic requirement for participants in this thread to respect the Bugzilla etiquette and contributor guidelines - linked from the bottom of this comment - stands, and accusing our colleagues of malice or incompetence is not acceptable. You're welcome to disagree with this, but please take that up with me over email, not in Bugzilla.
(In reply to Greg from comment #25)
> I am not suggesting anything similar to the other security warnings, but a
> unique icon that just sits there, and when clicked on gives a deeper
> explanation of what it means.

OK, but what should the explanation be?
 
> That would not cause other warnings to be ignored, and therefore it would be
> productive.

You're trying to have your cake and eat it.

Something like 75% of computers have anti-virus software, and say about 50% of those do MITM (rough ballpark estimate from comment 8). So (very roughly estimated) one in three users will get the icon.

Now you have an impossible problem: either the warning (I'm sorry, you can call it a "unique icon" if you want) is clear and alerting, and you've just given 37.5% of the users a false warning they can't understand, or it isn't, and the users who are actually compromised won't notice (in time), thereby defeating the purpose entirely.

Either way, it looks to me like you're not "taking our users' security seriously", because you're either showing them useless warnings or not showing them a warning. (See how easy it is to make meaningless platitudes, yet how hard it is to make something that actually improves user security?)

In comment 15 someone argued you shouldn't be making any connections at all to begin with in this situation. Does this mean that if the "unique icon that just sits there" shows up, the browser is broken until the user figures out how to make it go away? 

Basically, I do not think you're going to get something feasible here, unless you can take the anti-virus out of the equation first (let's assume the corporate MITM case is dealt with via proper configuration).
OK, we're done here. 

I've elected to restrict this bug to comments only from people with editbugs privileges. Thank you all for your contributions. If you have technical information or well-supported information about threat modelling and user behavior that you would like to add to this bug, feel free to send it to me directly and I will be happy to pass it on.
Restrict Comments: true
Summary: Switch default pinning enforcement level to strict instead of "user allowed mitm" → Switch default pinning enforcement level to strict
Severity: normal → S3