Closed Bug 776278 Opened 12 years ago Closed 9 years ago

Auto Upgrade Insecure Requests from Secure Context

Categories

(Core :: Security, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

RESOLVED WONTFIX

People

(Reporter: devd, Unassigned)

References

(Blocks 1 open bug)

Details

(Keywords: sec-want)

https://bugzilla.mozilla.org/show_bug.cgi?id=62178 blocks insecure requests from a secure context. Instead of just blocking, Firefox should at least try to automatically rewrite the URL for the image/script/CSS load to https before giving up and complaining to the user.
Depends on: 62178
OS: Mac OS X → All
Hardware: x86 → All
I think we already take care of this with HTTP Strict Transport Security (HSTS).
@mayhemer HSTS ensures example.com is always accessed over HTTPS. If a page on https://example.com includes a script from http://example.net, HSTS AFAIK doesn't say anything about that load.
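For reference, HSTS is delivered as a response header and only binds the host that sent it (the max-age value below is purely illustrative):

  Strict-Transport-Security: max-age=31536000; includeSubDomains

Served by example.com, that forces future loads of example.com (and its subdomains) onto HTTPS, but it says nothing about a subresource pulled from http://example.net.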
The core of this idea, AIUI, is that when loading an https page with mixed content, the browser should try to load the mixed content (maybe just mixed script?) over HTTPS, and if that fails, fall back to blocking it or showing whatever UI indicates mixed script/display content.
Version: unspecified → Trunk
Though having http://foo and https://foo return the same content is a sane design, IMO it would be imprudent for a browser to assume this is the case.  In the worst case, another site might be able to open an iframe on https://bar/?q=evil that would make a request to http://foo/?q=evil that, when redirected to https://foo/?q=evil, results in XSS.  Granted, a MITM attacker could already cause this XSS, but the proposed change would expand the exposure to any site open in the browser.
Hmm...

Let's clarify a bit: this only upgrades the sub-resource loads (scripts/images/styles, etc.), which will run in the origin of the page (which is already https). The browser never upgrades the main request.

Now, if I am not wrong, your scenario is:
http://www.foo.com/script.js is a nice, benign script, but https://www.foo.com/script.js is a script that will do bad things. Can you give concrete examples of where/how this could happen? It seems like website owners would need to go out of their way to do something like this.

Note that we have to weigh the cost of this possible attack vs. the cost we are imposing on our users by blocking mixed content and showing a dialog.
Yes, another argument (comment 4).  A resource at http://example.com/path can be completely different from a resource at https://example.com/path (in other words, we cannot just assume it has the same content).

But also, and more importantly, it is very different from the point of view of security privileges.  The page might want to isolate insecure and secure scripts because of the same-origin policy.  This bug would just open a security hole where normally-insecure scripts could access secure script data and objects.  Not really a good idea.

I vote for WONTFIXing this...
I am trying to understand the threat you are worried about:

A website sends differing content for http://www.example.com/script.js and https://www.example.com/script.js

Further, the http script plays nice and good, but the https script does bad things which cause XSS?

Am I understanding this right? Can you give concrete examples of how this could happen? While I agree this is possible, I think it is extremely improbable and not something we should worry about.
(In reply to Devdatta Akhawe from comment #7)
> I am trying to understand the threat you are worried about:
> 
> A website sends differing content for http://www.example.com/script.js and
> https://www.example.com/script.js
> 
> Further, the http script plays nice and good, but the https script does bad
> things which cause XSS?
> 
> Am I understanding this right?

Correct.

> Can you give concrete examples of how this
> could happen?

I've seen a lot of cases of a web server that has multiple DNS names pointing to it and does proper virtual hosting for http but shows the default virtual host for https regardless of the Host header in the request.  Such a server may have a wildcard SSL certificate if the admin was lazy and anticipated serving https on more than one subdomain in the future and didn't want to have to go back for another certificate.  So you could get a case in which the https URL works, but refers to a completely different virtual host.  Depending on what control the attacker has over the URL parameters (e.g., if they are computed from URL parameters passed to the main page), the attacker may be able to make that URL return something harmful.

(In reply to Honza Bambas (:mayhemer) from comment #6)
> But also, and more importantly, it is very different from the point of
> view of security privileges.  The page might want to isolate insecure and
> secure scripts because of the same-origin policy.  This bug would just open
> a security hole where normally-insecure scripts could access secure script
> data and objects.  Not really a good idea.

FWIW, I don't understand the substance of this concern.  What would an attack look like?
> I've seen a lot of cases of a web server that has multiple DNS names
> pointing to it and does proper virtual hosting for http but shows the
> default virtual host for https regardless of the Host header in the request.

This is interesting. Can you share examples and data on how widespread this is (i.e., the Host header being honored for HTTP but not for HTTPS)?

> Such a server may have a wildcard SSL certificate if the admin was lazy and
> anticipated serving https on more than one subdomain in the future and
> didn't want to have to go back for another certificate.  

Now we are further narrowing down the field: what percentage of the above cases does this apply to?

> So you could get a
> case in which the https URL works, but refers to a completely different
> virtual host.  Depending on what control the attacker has over the URL
> parameters (e.g., if they are computed from URL parameters passed to the
> main page), the attacker may be able to make that URL return something
> harmful.


To be clear, I have never claimed that this is impossible. My point is that I think these cases are really rare, and that it is far and away more probable that if we get a valid cert and a non-404 response, the content is correct.

We have to weigh the cost of this against the number of times we will throw this warning at our users, and how much of their time we are wasting.
I agree with Dev that we should be finding a way to automatically fix these things that are being blocked. But, I also agree with Matt that, at a minimum, we need to be very careful with how we do so, and we shouldn't (yet) assume that https://example.com/x is a good substitute for http://example.com/x.

It seems like it would be simpler to do said rewriting for images, fonts, and less critical resources (maybe CSS?). What's the worst that could happen? On the other hand, the reason it is safer to try to "fix" these types of subresource loads is the same reason it isn't *too* bad to just not load them at all.

I guess we'd like sites to use HSTS so that this is a non-issue. But I think it may be worth investigating other mechanisms too, given that many sites don't want to deploy HSTS (yet).

I had suggested to Brandon that we should define and support a "SV: 1" response header on the https://foo.com/x response that means "I am the same thing that is at http://foo.com/x." A website where everything at http://foo.com/ is mirrored at https://foo.com/ could just add the "SV: 1" response header to every response automatically (e.g. in its web server configuration).
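(Just to make the proposal concrete, a rough Python sketch of how a client might consult such a hypothetical "SV" header before substituting the secure URL; the header name comes from the comment above, and everything else here is an assumption, not how Firefox would actually implement it:)

  import urllib.request

  def upgrade_if_equivalent(insecure_url):
      """Return the https:// variant only if the server opts in via the proposed SV: 1 header."""
      secure_url = "https://" + insecure_url[len("http://"):]
      try:
          with urllib.request.urlopen(secure_url, timeout=10) as resp:
              if resp.headers.get("SV") == "1":
                  return secure_url
      except OSError:
          pass  # TLS/DNS/HTTP failure: no opt-in signal
      return None  # caller falls back to normal mixed-content handling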
I've thought about this some more.

What are the odds that we see <img src='http://example.com/whatever.png'> AND we get a response of type image/* from https://example.com/whatever.png, AND the image we get from https://example.com/whatever.png differs from the image at http://example.com/whatever.png? Similarly for <audio> and <video>?

I would predict that the most likely situations are that:
(a) the content at both URIs is exactly the same, or
(b) the https:// response ends up having a non-image/video/audio content-type, or
(c) the https:// response is a redirect to the http:// site.

This might be a case where being pedantic is much worse than using common sense.

Luckily, this seems like something that we could measure before we implement: When we come across mixed-content images, audio, and video, load the https:// variant AND the http:// variant, and then compare the critical headers (e.g. content-type) and response data byte-for-byte. Then use telemetry to report, for each type of content, how often the reasoning above holds. I think the results of this experiment would be extremely interesting to many people.
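(A rough sketch of the measurement described above -- not Firefox code, which would do this inside the content pipeline and report via Telemetry; the header list and bucket names here are assumptions:)

  import urllib.request

  CRITICAL_HEADERS = ("Content-Type", "Content-Disposition")  # assumed set of headers to compare

  def classify_mixed_content(http_url):
      """Fetch both variants of a mixed-content URL and classify how they differ."""
      https_url = "https://" + http_url[len("http://"):]
      try:
          with urllib.request.urlopen(http_url, timeout=10) as a, \
               urllib.request.urlopen(https_url, timeout=10) as b:
              if b.url.startswith("http://"):
                  return "redirected-back-to-http"            # case (c) above
              if any(a.headers.get(h) != b.headers.get(h) for h in CRITICAL_HEADERS):
                  return "critical-header-mismatch"           # e.g. case (b) above
              return "identical" if a.read() == b.read() else "body-mismatch"
      except OSError:
          return "https-variant-unavailable"

Telemetry would then bucket these outcomes per element type (img/audio/video).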
Instead of auto-upgrading everything, what if we have a whitelist of domains that we can autoupgrade?  facebook connect, youtube, twitter, google-analytics, ....
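(To illustrate the shape of that, a minimal sketch; the domain list is purely an example, not a proposed list:)

  from urllib.parse import urlparse

  UPGRADE_WHITELIST = {"connect.facebook.net", "www.youtube.com",
                       "platform.twitter.com", "www.google-analytics.com"}

  def maybe_upgrade(url):
      """Upgrade the scheme only for whitelisted hosts; everything else stays blocked."""
      if url.startswith("http://") and urlparse(url).hostname in UPGRADE_WHITELIST:
          return "https://" + url[len("http://"):]
      return None  # normal mixed-content blocking applies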
(In reply to Tanvi Vyas [:tanvi] from comment #12)
> Instead of auto-upgrading everything, what if we have a whitelist of domains
> that we can autoupgrade?  facebook connect, youtube, twitter,
> google-analytics, ....

In general, I think we should avoid whitelists and having to maintain them, BUT we are already doing something like this with user agents for Firefox OS, IIRC. I also wonder if there could be a situation where we would want to put a site on our whitelist but they wouldn't want to be on it... pretty hypothetical, I know.
When implementing the auto upgrade functionality, we should test with HTTPS Everywhere and make sure we aren't breaking the addon.  If for some reason we are, we should talk with the EFF.
https://www.fbi.gov/ would be fixed by this feature, if and only if this feature also auto-upgrades <link rel=stylesheet> and <script>.
Blocks: 863517
I tend to think this is a bad idea, for a few reasons:

 (1) Heuristics make a bad platform.  If you're implementing a Web content handling tool that authors don't care about, heuristics might be OK.  But if you're implementing something that handles Web content, and the producers of that content often test in your implementation, heuristics are going to make the behavior of the platform confusing to authors and make the experience of writing for the Web more difficult.

 (2) Auto-upgrading behavior would lead to authors who test primarily in Firefox finding the behavior in Firefox is secure, when it's not secure in other browsers, and it would thus hurt the behavior of those other browsers.  (In other words, if you want to do this, I think it should be standardized.)

 (3) It feels to me like it's working around a UI bug.  Shouldn't authors generally be noticing the browser indicators showing that things are insecure when they're authoring mixed content?  Or are the indicators that different between browsers that these cases don't trigger mixed content in other browsers?
> (3) It feels to me like it's working around a UI bug.  Shouldn't authors
> generally be noticing the browser indicators showing that things are
> insecure when they're authoring mixed content?  Or are the indicators that

I don't understand. The mixed content UI is for users, not developers, right?

I think large parts of the web were developed a while ago and will break under mixed content blocking, and it is far, far cheaper for web sites to just tell users to "click allow" than to hire developers to fix all these pages. This is, IMO, the long tail (i.e., also the majority) of web sites. We have seen examples like fbi.gov and brendaneich.com break---when I go to brendaneich.com and I can't see a video but see a warning, I only get annoyed and click through.

Now this could have been fine, but when it comes to security UI this is really, really bad. When websites ask users to click through, they don't just impact the security of their own site. They impact the security of everything---they are training users to click through everywhere. It would be nice if we had users who were smart enough to say "OK, site A asked me to click through, but site B didn't, so I will not click through." It is more likely that we will get users who say "Site A started working when I clicked through this warning. I should click through here too."
Keywords: sec-want
> (2) Auto-upgrading behavior would lead to authors who test primarily in Firefox 
> finding the behavior in Firefox is secure, when it's not secure in other browsers, 
> and it would thus hurt the behavior of those other browsers.  (In other words, if 
> you want to do this, I think it should be standardized.)

Now that other browsers are also blocking mixed active content, I'm not sure this is still true.
One weird possibility is to warn *and* upgrade.  Then Firefox users get a reasonably secure and usable experience, but web developers also get the message.

Of course, I'd also argue that the warnings should be progressive over the course of several releases (web console only, then an address bar charm) rather than a doorhanger suddenly in everyone's face.
We could even upgrade silently when users click through the existing warning.  I'm having a lot of trouble finding a downside here.
If we can upgrade safely, and without adding complexity to the experience, our users will definitely be happier. Can we get some collective action here from the other browsers that block mixed content?
Mozilla can take the lead. I suspect that once one browser does it, others will follow.
Blocks: 900440
Blocks: 900449
Blocks: 900458
Silent automatic upgrade could pose a security risk (as the content might differ), defeating the purpose of the mixed content blocker.
Automatic upgrade would also only work if default ports are used (80 -> 443).

Currently the user can interactively disable the mixed content blocker (if they notice the shield in the address bar).

Giving the user the option to upgrade offending requests to https would
- make the requests fail initially, requiring user interaction (same as disabling the blocker)
- force users to make a non-trivial choice between disabling the blocker and upgrading to https (bad UX)
- give them a false sense of security
- fail if non-default ports are used

Various ideas involving preflight requests to http and https, looking for http redirects or HSTS headers, comparing content, and only then loading the content into the secure context... would all rely on the content served at the original MITM-able http URL, or on trusting the content served at the guessed https URL.

Therefore I propose to resolve this as WONTFIX (or WORKSFORME).
Silent (or non-silent) upgrade based on a whitelist would not have those problems.
Just thought of a safe way to implement this: download the secure URL, AND the insecure URL, and if they provide the same content, then process and use it, otherwise discard it.

I still think WONTFIX may well be the BEST solution, but my idea should completely resolve the security issues raised (comment 4 and comment 6), at the cost of increased complexity (and the resultant security risks) and resources.

My idea doesn't have the downsides others mentioned regarding warn *and* upgrade, or just upgrade, or offer to upgrade.

Anyone care enough to implement it?
I just had another AHA! moment.  This bug's summary, "Auto Upgrade Insecure Requests from Secure Context" applies not just to content in pages, but to downloads too.  

The fix to Bug 62178 supposedly blocks insecure requests from a secure context.  But does it, when the request is for a file download?  I don't think so.  That's a big problem!

That bug's summary is: "implement mechanism to prevent sending insecure requests from a secure context"

As defined above, that bug isn't fixed.  I think users should be warned or prevented from insecurely downloading files from a secure context too!

I have the pref "security.mixed_content.block_active_content" set to true, but I was just able to insecurely download a .dmg (which is active content) from a secure context. Bug 906835, which I just opened, happens to have an example of this.

As I see it, it's much more important to REALLY fix 62178 than to fix 906835.

What next?  Any expert bug wranglers want to take action (or comment)?  I'm wondering if it makes more sense to open a new bug or reopen 62178.  I'm thinking it makes sense to close 906835 as WONTFIX.  I intend to do that and reopen if no one else jumps in within a few days.

(Perhaps I should have posted this elsewhere? I'm hoping folks read this and my comment from this morning together, saving some context switches.)
Just noting that in the meantime, a new standard has been developed for this at https://w3c.github.io/webappsec/specs/upgrade/ and a new bug has been filed: https://bugzilla.mozilla.org/show_bug.cgi?id=1139297
I'm pretty sure this needs to be WONTFIX at this point. HSTS plus upgrade-insecure-requests is the way to go.
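For reference, a site can opt into exactly this behavior today by sending both headers (the max-age value is illustrative):

  Strict-Transport-Security: max-age=31536000
  Content-Security-Policy: upgrade-insecure-requests

The CSP directive rewrites the page's own insecure subresource URLs to https:// before the requests are made, which is the opt-in version of what this bug asked the browser to do unilaterally.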
Agreed. Closed in favor of bug 1139297
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX
Restrict Comments: true

Despite the 2015 WONTFIX, not only have we implemented the CSP upgrade-insecure-requests mentioned in comment 27++ (bug 1139297), we have also implemented and shipped "HTTPS-Only" mode. That's opt-in for now because of too many broken site experiences, but it is available.
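(If memory serves, the opt-in is exposed in Settings under Privacy & Security and as an about:config pref:)

  dom.security.https_only_mode = true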

Flags: needinfo?(emmanuelprincewill119)