Closed Bug 1479168 Opened 3 years ago Closed 3 years ago

Captive Portal Authentication Provides Easy Pickings for Evil Network Operators and Men in the Middle

Categories: Firefox :: Security (defect)
Priority: Not set
Severity: major
Status: RESOLVED INVALID
Reporter: mparnell; Assignee: Unassigned
Attachments: 2 files

To reproduce:

1. Redirect http://detectportal.mozilla.com/success.txt to a URL of your own choosing, or keep the host and put your own page in its place, as a captive portal might. For testing you can also just plug in a local network URL; it's effectively the same thing for our purposes.

2. Do any number of evil things - request notification permissions, push out an infected "update" for Firefox, a download link for malware that supposedly must be installed to connect to the internet, a phishing page, or a 0-day. The list goes on...

3. As a result of these actions, the attacker or evil network operator may now own everyone who uses Firefox on the network.

I realize that we can't fully protect users who don't have any sense, but at the very least the process of detecting captive portals can be vastly improved.

My suggestions:

1. Emulate Chrome by expecting a 204 No Content response and no redirects from the http://detectportal.mozilla.com URL. If we get content, or a redirect, we should add a secondary validation step:

2. Attempt to reach out and connect to an HTTPS domain controlled by Mozilla, or perhaps simply use the user's default search URL. If we get a cert error, we can be certain we do not have a clean connection to the internet.

From this point, we can try to open a browser page to any given HTTP URL, even if we just recycle http://detectportal.mozilla.com, to attempt to log in to the captive portal.
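
Roughly what I have in mind, as a sketch (the HTTPS probe host, the return values, and the use of plain fetch() are just illustration on my part, not Firefox's actual implementation):

// Hypothetical sketch of the proposed two-step check.
const HTTP_PROBE  = "http://detectportal.mozilla.com/success.txt";
const HTTPS_PROBE = "https://connectivity-check.example.org/"; // placeholder for a Mozilla-controlled host

async function checkConnectivity() {
  // Step 1: expect a 204 (or the known "success" body) and no redirect.
  const res  = await fetch(HTTP_PROBE, { cache: "no-store" });
  const body = (await res.text()).trim();
  if (!res.redirected && (res.status === 204 || (res.status === 200 && body === "success"))) {
    return "online";
  }

  // Step 2: secondary validation over HTTPS. A certificate error surfaces
  // as a rejected fetch, so success vs. failure is the signal here.
  try {
    await fetch(HTTPS_PROBE, { cache: "no-store" });
    return "suspected-mitm"; // HTTPS works while the HTTP probe was tampered with
  } catch (e) {
    return "captive-portal"; // cert error or connection failure: likely a real portal
  }
}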

Beyond these steps, having some kind of user-friendly list of privacy/security pointers somewhere would be a good idea, as I realize non-technical users like my mother or father, who both use Firefox, could still fall for attempts by bad actors to compromise them.
I should mention, regarding mitigation #2, that if we get a good cert after #1 appeared to redirect to a portal login page or something else, it is likely something nasty is afoot: if the cert checks out we have a working internet connection over HTTPS, yet the captive portal detection over HTTP says otherwise.

Because this check runs every 60 seconds and is on by default, I think the severity may be understated.
(In reply to mparnell from comment #0)
> To reproduce:
> 
> 1. Redirect http://detectportal.mozilla.com/success.txt to a URL of your own
> choosing, or keep the host and put your own page in its place, as a captive
> portal might. For testing you can also just plug in a local network URL;
> it's effectively the same thing for our purposes.
> 
> 2. Do any number of evil things - request notification permissions, push out
> an infected "update" for Firefox, a download link for malware that supposedly
> must be installed to connect to the internet, a phishing page, or a 0-day.
> The list goes on...

Uh, how? We don't actually load the response in a browser window, AFAIK... If your point is that the captive portal can redirect people to somewhere that loads this once you jump through whatever hoops the captive portal gives you - it can do that anyway, it's not restricted to the detectportal domain...

Finally, I'm pretty sure requesting notification permission is a SecureContext-only feature, so "http://<whatever>" can't request that - only https:, and if you manage to successfully load a document for https://www.mozilla.com/ in the user's browser (i.e. you can get a certificate for a mozilla.com domain which you don't actually control) without compromising a CA or the user's local root list, we have much bigger problems.

> 3. As a result of these actions, the attacker or evil network operator may
> now own everyone who uses Firefox on the network.
> 
> I realize that we can't fully protect users who don't have any sense, but at
> the very least the process of detecting captive portals can be vastly
> improved.
> 
> My suggestions:
> 
> 1. Emulate Chrome by expecting a 204 no content response, and no redirects
> from the http://detectportal.mozilla.com url. If we get content, or a
> redirect, we should add a secondary validation step:
> 
> 2. Attempt to reach out and connect to an https domain controlled by
> Mozilla, or perhaps simply use the user's default search URL. If we get a
> cert error, we can be certain we are not connected to the internet.

Even if your described scenario were accurate, how would this help? Your assumptions include that the user is subject to an active MITM. In that case, the attacker can just MITM everything else as well - that's kinda what captive portals do...

Nihanth, who owns our captive portal detection stuff at this point?
Flags: needinfo?(nhnt11)
Flags: needinfo?(mparnell)
Attached image portal detected.png
portal detected using local network address as our "success" page
Flags: needinfo?(mparnell)
Attached image login page.png
Our "captive portal login page" asking for notification permissions. Note that if I was spoofing as detectportal.mozilla.com it would look even more "official" to non-technical users.
1. Yes, we do load the response in the browser window, in our attempt to open the captive portal page - see attachments. In the event we don't get a 200 and a response body of "success", the system simply assumes we aren't connected and prompts the user to click a button to open the network login page. Upon clicking this button, we are taken to a tab that uses the captive portal detection URL as the destination, on the assumption that we're going to a portal's auth page.
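
Roughly, the flow as I understand it (a sketch only - the helper names are made up, not Firefox's actual APIs):

async function detectPortal() {
  const probeURL = "http://detectportal.mozilla.com/success.txt";
  const res  = await fetch(probeURL, { cache: "no-store" });
  const body = (await res.text()).trim();

  if (res.status === 200 && body === "success") {
    return; // looks like unrestricted internet access
  }

  // Anything else is treated as "behind a portal": prompt the user, and on
  // click open a tab pointed at the same detection URL, so whatever the
  // network serves for that URL gets rendered as ordinary web content.
  showLoginNotificationBar({
    onOpenLoginPage: () => openTab(probeURL), // hypothetical UI helpers
  });
}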

Nothing would be wrong with this if it were a real portal, but in a situation where the network operator is evil or the network is compromised, checking an HTTPS URL for whether or not a valid SSL cert is on the other end of the request would help protect users from being unnecessarily pwned due to their ignorance. I believe that Chrome already does something similar, at least some of the time - their generate_204 endpoints include a mix of HTTP and HTTPS, and some IPv6 mixed in as well.

As for the secure context, see my attachments. I guess this may be a twofold issue.

Yes, any attacker can subject a user to a MITM and put them wherever they want over HTTP, but through deductive reasoning we could establish with higher certainty whether a MITM is going on, by checking an HTTPS URL after we are unable to contact our initial HTTP URL.

If the HTTPS request succeeds, there is a high chance that someone is performing an attack, unless the network or attacker has the foresight to allow all HTTPS traffic through - which would negate their attack in the first place if they're passively collecting traffic.

I see this issue as an easy way to establish a foothold on a user's machine or to phish their information - through a realistic "upgrade Firefox" page, or through a web form.

While my attachments are just hitting http://webdev on my network, I can go the extra mile and spoof as detectportal.mozilla.com if you need me to prove my points on both counts further.
As an addendum, I should mention that it would make a lot more sense in the long term to also switch to a 204 response for detectportal.mozilla.com. In the event any 0-days show up that can be triggered by merely visiting a page, or even by the XHR request we make to check the captive portal URL, an expected 204 response could help prevent attackers from further using the captive portal detection system on public wireless networks to attack.

Even having an SSL version of detectportal, whether we use 204 responses or not, for the secondary connectivity check I suggested before would be enough to provide assurance that we aren't being attacked.
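
For illustration, a minimal Node.js sketch of what a 204-style probe endpoint could look like on the server side (hypothetical, not Mozilla's actual detectportal infrastructure):

const http = require("http");

http.createServer((req, res) => {
  if (req.method === "GET" && req.url === "/generate_204") {
    res.writeHead(204); // no body for the client to parse or render
    res.end();
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);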
(In reply to mparnell from comment #5)

Just to be clear, I haven't personally worked on our captive portal detection, and I'm not very familiar with it. Nihanth probably has more accurate information here. But the report here is a bit confusing. In my mind, there are 2 parts to the captive portal stuff:

1. (regularly / based on OS network change notifications) do a request to determine if we're in a situation where the user needs to pass a captive portal (via login, payment or agreeing to T&C or whatever arbitrary condition the portal sets)
2. open said portal to allow the user to do this (if/when the user tells us to).


I understood comment #0 to be arguing that the way we've implemented (1) causes security risks to users. It's not very clear, because the steps in comment #0 leave out what the user is doing - it just magically happens! That's why I assumed you were talking about the initial detection request. I was confused, because I'm pretty sure we don't run JS or do anything but make the network request for the initial detectportal check; it's just a background HTTP request. So nothing happens automatically as part of that request until the user clicks to go to the captive portal - which suggests you're claiming there's an issue with (2). Is that right? Are you saying something else? What specifically are you arguing goes wrong at this point, if anything?

For (2), once the user loads the page, the page can do whatever pages can normally do.

> 1. Yes, we do load the response in the browser window, in our attempt to
> open the captive portal page - see attachments.

"our" attempt - does this happen automatically? I've only ever seen the notification bar...

> In the event we don't get a
> 200 and the response body "success," the system simply assumes we aren't
> connected and prompts the user to click a button to open the network login
> page. Upon clicking this button, we are taken to a tab that uses the captive
> portal detection URL as the destination, assuming we're going to a portal's
> auth page.

So clicking via the notification bar or some other prompt is how the user ends up on this page? Or automatically? Under what circumstances does the latter happen?

> Nothing would be wrong with this if it were a real portal, but in a
> situation where the network operator is evil or the network is compromised,
> checking an https URL for whether or not a valid SSL cert is on the other
> end of the request would help protect users from being unnecessarily pwned
> due to their ignorance.

Uh. Why would checking for a valid cert make the website benign? Certificates only mean that you're talking to the correct address. That website could still be evil.


But sure, let's ignore that for the moment and assume you're interested in trading on Mozilla's good name (or Google's, or whoever hasn't used HSTS for some particular subdomain). Let's also say that we checked for valid TLS certs. I would get a valid cert for <randomstring>.wherever.<some-tld> via LetsEncrypt. I would intercept requests for detectportal.mozilla.com so that the user gets a captive portal warning. That redirects to my perfectly TLS-secured domain with its nice certificate, which the code for (1) ran into. The contents of the page, once loaded into the browser, look like this:

<script>

/* if necessary, first send a fetch request to some API on my MITM server to make detectportal.m.c return evil content */

location.href = "http://detectportal.mozilla.com/"; // or some other domain you'd like to do stuff with
</script>

And now we're back in the same place as we have now. I don't see how the TLS cert helps.

On the other hand, requiring a recognized TLS cert would probably break captive portal detection in quite a few places, where the captive portal page lives on a box in that physical place, and uses a self-signed cert, because everything is terrible. :-(


> I believe that Chrome already does something
> similar, at least some of the time - their generate_204 endpoints include a
> mix of http and https, and some ipv6 mixed in as well. 

Can you elaborate on what you mean here?

> As for the secure context, see my attachments. I guess this may be a twofold
> issue.
> 
> Yes, any attacker can subject a user to a MITM and put them where he wants
> them over http, but through deductive reasoning we could assure with higher
> certainty that a MITM isn't going on, by checking an https URL after we are
> unable to contact our initial http URL.

I don't understand what this means. A captive portal *is* a MITM. The point is you try to go somewhere without paying / logging in / agreeing to T&C, and the network redirects you to the portal page instead. That's MITM by definition. Sure, maybe you trust the average hotel and there are others you don't trust - but a TLS cert is neither necessary nor sufficient proof that a particular MITM is "benign" or "trustworthy".

> While my attachments are just hitting http://webdev on my network, I can go
> the extra mile and spoof as detectportal.mozilla.com if you need me to prove
> my points on both counts further.

It's not that I don't believe you can MITM yourself, or get the captive portal warning to show up by doing so, the problem is I don't understand how you think our captive portal detection exacerbates the MITM, or puts users at risk. It's quite possible you've found a way in which it does, I'm just not following you. :-)
(In reply to :Gijs (he/him) from comment #7)

Sorry for the confusion, I haven't had much coffee today!

1 and 2: The check is just a timer that hits the detectportal HTTP endpoint every 60 seconds - there are some other aspects to it, but in general the check fires on that interval. While user interaction is required to click the button to open the captive portal page, are you aware of a user who wouldn't do that if they are unable to connect to the internet?
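
In other words, something like this (a sketch, reusing the hypothetical checkConnectivity()/showLoginNotificationBar() helpers from my earlier sketches; the real service also reacts to OS network-change events, not just a fixed timer):

setInterval(async () => {
  const state = await checkConnectivity();
  if (state !== "online") {
    showLoginNotificationBar(); // the user still has to click to open the portal page
  }
}, 60 * 1000);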

In some older versions I have noted the page opening in a tab seemingly on its own, but this is an isolated case with a friend who uses Waterfox. In Firefox 61, the button requires a click but then proceeds to the supposed detectportal.mozilla.com/success.txt endpoint, although I have not tested with session resume enabled - this could be the deciding factor in such cases.

You are correct in regard to #2 being the main security concern here. While I agree that everything is awful in terms of self-signed certificates appearing on many captive portals, for the most part they will redirect to their own internal domain or to a local IP. I would argue the ones that do not should be considered malicious anyway, if they attempt captive portal auth with a self-signed cert on google.com or something.

That said, a true attacker will never have a valid SSL cert for mozilla.org, google.com, duckduckgo, or any other major SSL domain. Checking a major SSL domain before or after the HTTP check would allow us to do a better job of seeing whether we are being attacked, as being able to access a known legitimate SSL site while getting a redirect over HTTP to something other than our "success" content would confirm a bad actor is in play.

All that said, Chrome does at least attempt to find valid HTTP auth-required response codes, as opposed to just detecting network errors... We should consider cribbing some of their methods in addition to the potential HTTPS check. Right now these captive portal checks are really just a 60-second repeating timer that lets anybody who feels like it push people to whatever content they want - to phish or to infect.

Chrome's portal code:
https://cs.chromium.org/chromium/src/components/captive_portal/captive_portal_detector.cc?q=generate_204&sq=package:chromium&dr=C&l=18
(In reply to mparnell from comment #8)
> (In reply to :Gijs (he/him) from comment #7)
> 
> Sorry for the confusion, I haven't had much coffee today!
> 
> 1 and 2: The check is just a timer that hits the detectportal http endpoint
> every 60 seconds - there are some other aspects to it but in general the
> check fires every 60 seconds. While user interaction is required to click
> the button to open the captive portal page, are you aware of a user who
> wouldn't do such a thing if they are unable to connect to the internet?
> 
> In some older versions I have noted the page to open in a tab seemingly on
> its own, but this is an isolated case with a friend who uses Waterfox. In
> Firefox 61, the button requires a click but then proceeds to the supposed
> detectportal.mozilla.com/success.txt endpoint, although I have not tested
> with session resume enabled - this could be the deciding factor in such
> cases.

OK, presumably Nihanth can clarify this. Either way, we're opening a website, and the website can do whatever the website can do...

(responding in a slightly different order here...)

> All that said, Chrome does at least attempt to find valid HTTP auth required
> response codes, as opposed to detecting network errors... We should consider
> cribbing some of their methods in addition to the potential https check, as
> right now these captive portal checks are really just a 60 second repeating
> timer that lets anybody who feels like it push people to whatever content
> they want - to phish or to infect.
> 
> Chrome's portal code:
> https://cs.chromium.org/chromium/src/components/captive_portal/
> captive_portal_detector.cc?q=generate_204&sq=package:chromium&dr=C&l=18

I'm really not an expert on the Chrome code, but AFAICT from that snippet, if the connection succeeds and it gets a non-204 response between 200 (inclusive) and 400 (exclusive), they (also) assume they're in a captive portal situation - not just when it gets a 511 network auth required:

https://cs.chromium.org/chromium/src/components/captive_portal/captive_portal_detector.cc?q=generate_204&sq=package:chromium&dr=C&l=150
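
In other words, roughly this (my paraphrase of what I read there, not a line-for-line port):

function classifyProbeResponse(status) {
  if (status === 204) return "internet-connected";
  if (status === 511) return "portal";                // 511 Network Authentication Required
  if (status >= 200 && status < 400) return "portal"; // unexpected content or a redirect
  return "no-result";                                 // anything else: no conclusion drawn
}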

That might still be more restrictive than what Firefox does, but I don't see how it helps defeat bad actors. Once we assume that the captive portal entity is capable of full MITM (which they kind of have to be, because otherwise the captive portal wouldn't work), they can use whatever HTTP status codes they like, including 301/302 redirects to evil.com or whatever.


> You are correct in regard to #2 being the main security concern here. While
> I agree that everything is awful in terms of self signed certificates
> appearing on many captive portals, they for the most part will usually
> redirect to their own internal domain or to a local IP. I would argue the
> ones who do not should be considered malicious anyway, if they attempt
> captive portal auth with a self signed cert on google.com or something.

I still don't see how you claim a browser can meaningfully differentiate good/bad MITM here. The captive portal can redirect to https://some-evil-actor.com, and they can get a cert for some-evil-actor.com, given they control the domain. They don't need a self-signed cert for somewhere else for that. Then that supposed "captive portal page", once opened in a tab in order for the user to pass the captive portal, can use JS or other mechanisms that only do something once opened in a browser, to redirect to non-https versions of "trusted-site.com", and because there's no TLS they can do whatever you refer to in comment #0 (peddle fake updates, phish for credentials, you name it).

As far as I can tell this exact scheme would also work against Chrome's captive portal detection (or the one built-in to macOS, or...)


> That said, a true attacker will never have a valid SSL cert for mozilla.org,
> google.com, duckduckgo, or any other major SSL domain. Checking a major SSL
> domain before or after the http check would allow us to do a better job of
> seeing if we are being attacked or not, as being able to access a known
> legitimate SSL site while getting a redirect over http to something other
> than our "success" content would confirm a bad actor is in play.

But you wouldn't be able to access the legitimate SSL site - the captive portal would just redirect all traffic to evil.com, which would serve a valid cert for evil.com, which would break the connection because the cert is for evil.com and you tried to access trusted.com. That's what happens for "legitimate" captive portals today. But "evil" captive portals would just do exactly the same thing. The only way to pass both a legitimate and an "evil" captive portal would be to access the portal site, which can then do whatever it wants over http (non-TLS, no cert, so anything goes). I don't see a reasonable way for the browser to protect the user from that.
Flags: needinfo?(mparnell)
I suppose there's no real way to plug this entirely, but I believe we can protect users a lot better than anybody else does.

In regard to your last paragraph, my whole point is that getting an invalid certificate, with no redirect, when loading the contents of https://evil-domain.com, https://captive-internal.privatetld, or https://1.1.1.1/login (or whatever) would indicate an attack or a poorly configured captive portal. If a portal is willing to hijack mozilla.org or google.com for login purposes, we should probably consider it malicious: adding a cert exception to let it hijack whatever our HTTPS domain of choice may be is unacceptable, even for login - at least as things work now, since even temporary certificate exceptions remain active for an entire browsing session. I'm not sure how session restore factors into this, either.

We can probably, at the least, detect what kind of situation we have in some cases by determining where the assets are being loaded from when there is no redirect, and then attempt to access the portal or attack site directly as a result, for logging the user in.

Beyond that... my proposal for the short term:

1. Consider more checking of status codes as Chrome does.

2. Add the HTTPS domain/cert check. If it doesn't return a valid cert without a redirect, we should show the user a message that is different from our usual bad-certificate error pages: explain the situation, lay out some basic bullet points on privacy, and encourage the user not to download random software from what appears to be a possible captive portal. Provide a direct button to temporarily (perhaps in the tab context only) whitelist the certificate for captive portal purposes. As soon as we detect internet access, delete the exception and close the tab.

2a. I'd suggest we consider doing this in an internal way - "about:captiveportal" - and if the user clicks the button to proceed, we could then iframe the portal within our internal page, or mask it in some other way, so as to prevent users from misattributing the portal login to google.com or some other site.

Long term:

1. Do the same as above but use the isolated tab feature to further sandbox the process.
Flags: needinfo?(mparnell)
Making notification requests secure-context-only (as Chrome has done) is bug 1429432.
Hi folks! I apologize for the delay in responding to this, I was on PTO for the last two days and neglected to indicate that in my BMO name. Here are my thoughts:

1. Captive portals are a mess.

2. As Gijs has been saying, it's really hard (I'd even say impossible, on a philosophical level) to distinguish "good" MITM from "evil" MITM.

3. There are many portals out there that redirect to a secure login page that is served from the greater interwebz. For example, Wi-Fi on airplanes and airports, which is a huge use-case for captive portal detection.

4. Many of these portals are extremely fine-tuned: they require the user to register credentials, etc., and often allow restricted (secure) internet access, for example so the user can verify their email address. Detection is optimized for fewer false negatives than false positives, in the interest of UX.

5. The point about implementing an internal page with more information was actually something we considered. I believe this is still one of the considered possible enhancements to this feature (but we haven't allocated resources for it in the near future).

6. I think the problem being highlighted here is that while we cannot protect the user against every arbitrary malicious actor, in this case, we are in a situation where we a) KNOW that the user is entering unknown territory, and b) actively enable them to engage that unknown territory without much hand-holding. This is where the opportunity to do better is revealed.

7. I do think that educational material about this would be useful. I also think it would be very interesting to try and restrict web content functionality to the bare minimum required to enable what a legitimate captive portal actually wants to do, once we have determined that we are in a MITM state.

8. I think this would all be a lot more straightforward once we have stronger web standards (by which I mean, once some standards exist) for captive portals. I know that RFC 7710 exists (https://tools.ietf.org/html/rfc7710) but this is a lonely baby step as far as I understand.

9. Valentin owned the necko side of captive portal detection; needinfo?ing him to comment on the immediate concerns/mitigations suggested in this bug.


Thanks!
Flags: needinfo?(nhnt11) → needinfo?(valentin.gosu)
I agree with Nihanth. Captive portals are a messy affair, and every CP implementation is different from another. You are effectively man-in-the-middle'd, so there's not much we can do there, except get them to quickly unlock it and then favor HTTPS pages. There were a few enhancements we had in mind, such as clearing the cache, or loading the login page in a container.

However, I've read the comments twice and I can't see why the reporter's recommendations would improve things. The CP basically controls your entire network. The reason we have the big "log into the portal" pop-up is so that people DON'T accept certs when their HTTPS load is being redirected.

(In reply to mparnell from comment #10)
> 1. Consider more checking of status codes as Chrome does.

The status code is part of the response, which the CP can modify as it wants.
Maybe the 204 is used by Chrome because it's less common for this to happen in practice, but security-wise I don't see how it would help.

> 2. Add the https domain/cert check. If it doesn't return a valid cert,
> without a redirect, we should show a message to the user that is different
> from our usual bad certificate error pages, explaining the situation and
> also laying out some basic bullet points on privacy, and also encourage the
> user not to download some random software from what appears to be a possible
> captive portal. Provide a direct button to temporarily (perhaps in the tab
> context only) whitelist the certificate for captive portal purposes. As soon
> as we detect internet access, delete the exception and close the tab.

Loading the login page in a container is useful for captive portals, not for malicious MITM.
You can be MITM'd selectively, with only certain content changed. This is impossible to detect; if all of the internet were HTTPS, this attack would be worthless.

> 2a. I'd suggest we consider doing this in an internal way -
> "about:captiveportal" and if the user clicks the button to proceed, we could
> then iframe the portal within our internal page, or mask it in some other
> way so as to prevent user misattribution of the portal login to google.com
> or some other site.

That would only make the login page seem more official, and it could masquerade as part of Firefox - you shouldn't trust the URL bar, especially in a CP, but it's important you still have a sense of location.
Note that a MITM could never impersonate google.com, for cert and HSTS reasons.

> Long term:
> 1. Do the same as above but use the isolated tab feature to further sandbox
> the process.

I think we already had a bug for that (login page in a container), although I don't seem to find it now.

===

So unless I'm missing something, I don't really see how this could be used to mount an attack.
If the reporter has a proof-of-concept or a step-by-step attack description I'd love to see it.
Flags: needinfo?(valentin.gosu)
Group: firefox-core-security
Closing as invalid. Please reopen with new arguments if you disagree.
Status: UNCONFIRMED → RESOLVED
Closed: 3 years ago
Resolution: --- → INVALID
Duplicate of this bug: 1521377
Duplicate of this bug: 1524492
Duplicate of this bug: 1524588