Closed Bug 826805 Opened 11 years ago Closed 10 years ago

Allow HTTPS for all CSP directives that restrict loads to HTTP

Categories

(Core :: DOM: Core & HTML, defect, P2)


Tracking


RESOLVED FIXED
mozilla36

People

(Reporter: briansmith, Assigned: ckerschb)

References

(Depends on 1 open bug, Blocks 1 open bug)

Details

Attachments

(2 files, 2 obsolete files)

Because of HSTS, which can cause us to automatically redirect HTTP requests to HTTPS, we need to interpret a "http:" scheme-source as "http: https:", and we need to interpret a host-source of the form "http://host[:port]" as "http://host[:port] https://host[:port]".

Otherwise, if we get the CSP directive before the site turns on HSTS, and then later the site turns on HSTS, the site will break. In particular, consider the case where we've cached a page in the HTTP response cache with a CSP with scheme-sources of the form "http:", which references sub-resources (using http:// URLs) of a newly-HSTS-enabled site. Without this change, that page would be broken until it is evicted from the cache, because HSTS will rewrite the http:// URLs as https:// URLs, and then CSP will block the https:// content from loading.

The spec should be updated to specify this. I remember that there was a brief discussion of this issue on the mailing list previously, but I don't remember the outcome.

See also bug 826766, which is a similar issue for AppCache.
In the case you mention, both CSP and HSTS are currently doing what they're supposed to.  This means they just don't play nicely together.  There could be other combinations of features that don't play nice together, not just these two.

I think it's counter-intuitive to change the meaning of a source expression explicitly written by a site developer; we could very easily detect this situation in Firefox and explain (in the web console) what could go wrong.  I think we should absolutely detect and report the conflict between two features, but I'm not sure we should do any policy re-writing.
Consider something like HTTPSEverywhere. It turns on HTTPS for foo.com, and foo.com's CSP says "script-src http://www.foo.com"; so now the scripts won't load. (The model is that the website doesn't really want to serve content over https, so it is conceivable that it might have hardcoded the sources to be http.)

None of this is surprising. What is annoying is that, if I am not wrong, there is nothing HTTPSEverywhere can do to override CSPService, short of rewriting the headers before CSP gets to them. In fact, I think this breakage can occur today. Doing what this bug suggests seems like a nice solution, with minimal to no impact.

I don't think it is all that counter-intuitive to say "http" also means "http and https". I can't think of any other place on the web platform where saying http disables https; the opposite (saying https disables http) is common. bsmith's suggested behavior, imho, would actually be intuitive.
fwiw, I just brought this up on the w3c webappsec list to gather feedback there.
Depends on: csp-w3c-1.0
The spec explicitly allows a scheme-less source expression to match both http and https if the document is served over http:

https://dvcs.w3.org/hg/content-security-policy/rev/a7dc8820946e

That is, given the page http://example.com with
  Content-Security-Policy: script-src 'self' api.google.com;
then scripts should be allowed from
  http://example.com
  https://example.com
  http://api.google.com
  https://api.google.com

If the page is any scheme other than HTTP (e.g. https) then a scheme-less expression only matches if the resource being loaded has exactly the page's scheme.

That's not quite what bsmith originally asked for: he's saying that if the CSP explicitly gives a scheme of http:// then we should allow https:// anyway. That violates the spec. It's arguably safer in most cases, but why would a site bother with those extra 7 characters in their policy if they didn't mean it? If the 3rd-party site turns on HSTS and that breaks the site with the unnecessarily verbose CSP, then it will break in Chrome and Mozilla (~50% of web users), and they'll fix their CSP policy.
Blocks: CSP
I've been bitten by this too.
Chrome and WebKit implement this the right way: a source with no explicit scheme, accessed over an http connection, also implies https.
I had to change my CSP (duplicating everything) in order to get Firefox working.
The 1.0 spec currently says nothing about allowing HTTPS for loads from a protected resource when a scheme-less host source matches the load.  In fact it says "If the source expression does not have a scheme and if uri-scheme is not a case insensitive match for the scheme of the protected resource's URI, then return does not match."

http://www.w3.org/TR/CSP/#matching

But in the upcoming level 2 of CSP:
http://www.w3.org/TR/CSP2/#match-source-expression

It does say that any source expression without a scheme is a valid match for http and https loads if the protected resource is http.  ('self' apparently only allows exact matches of scheme/host/port).

So we should fix this to be CSP2 compliant. Scheme-less sources where the protected resource is http should match both http and https loads.
Assignee: nobody → mozilla
Status: NEW → ASSIGNED
Priority: -- → P2
Attached patch bug_826805.patch (obsolete) — Splinter Review
Dan, since Sid is out for a while, do you wanna review that? Otherwise we can wait till Sid is available again.
Attachment #8518262 - Flags: review?(dveditz)
Attached patch bug_826805_tests.patch (obsolete) — Splinter Review
Attachment #8518264 - Flags: review?(dveditz)
Comment on attachment 8518262 [details] [diff] [review]
bug_826805.patch

Review of attachment 8518262 [details] [diff] [review]:
-----------------------------------------------------------------

r=dveditz

::: dom/base/nsCSPParser.cpp
@@ +657,5 @@
>      resetCurChar(mCurToken);
>      nsAutoCString scheme;
>      mSelfURI->GetScheme(scheme);
>      parsedScheme.AssignASCII(scheme.get());
> +    allowHttps = scheme.EqualsASCII("http");

Just a nit, but this would be clearer if you renamed "scheme" to "selfScheme" to better distinguish it from parsedScheme. IMHO, totally optional.
Attachment #8518262 - Flags: review?(dveditz) → review+
Comment on attachment 8518264 [details] [diff] [review]
bug_826805_tests.patch

Review of attachment 8518264 [details] [diff] [review]:
-----------------------------------------------------------------

r=dveditz
Attachment #8518264 - Flags: review?(dveditz) → review+
(In reply to Daniel Veditz [:dveditz] from comment #9)
> > +    allowHttps = scheme.EqualsASCII("http");
> 
> Just a nit, but this would be clearer if you renamed "scheme" to
> "selfScheme" to better distinguish it from parsedScheme. IMHO. totally
> optional.

I incorporated your suggestion, no need for a new patch to be uploaded though.
Here we go:

https://hg.mozilla.org/integration/mozilla-inbound/rev/936504af8ef1
https://hg.mozilla.org/integration/mozilla-inbound/rev/06a8a827447d
Flags: in-testsuite+
Target Milestone: --- → mozilla36
Sorry, had to back this out for test failures like https://treeherder.mozilla.org/ui/logviewer.html#?job_id=3725008&repo=mozilla-inbound
Flags: needinfo?(mozilla)
Attached patch bug_826805.patchSplinter Review
(In reply to Carsten Book [:Tomcat] from comment #12)
> sorry had to back this out for test failures like
> https://treeherder.mozilla.org/ui/logviewer.
> html#?job_id=3725008&repo=mozilla-inbound

Thanks for backing it out.

Since it got backed out, I am uploading an updated patch where I addressed Dan's suggestions.

I will take a look at what's going on on B2G.
Attachment #8518262 - Attachment is obsolete: true
Flags: needinfo?(mozilla)
Attachment #8519995 - Flags: review+
I am not completely sure why the tests are not passing on B2G. I debugged using a b2g-desktop build and confirmed that CSP is blocking/not blocking correctly in the testcase, which makes me think that something is fishy with the B2G test harness when loading https scripts.

When I load the script using http, the script is loaded when allowed by CSP, but when I use https the script never gets executed at all.

Initially I thought that tests might be loaded using an 'app://' scheme or something similar on B2G, but that's not the case; here are the values at the beginning of ::permits():

> aUri: https://example.com/tests/dom/base/test/csp/file_csp_path_matching.js#foo
> 
> mScheme: http
> mHost: example.com
> mPort:
> mPath:
> mAllowHttps: yes


Interestingly, Mixed content blocker tests loading resources over:
> https://example.com...
are disabled on B2G as well, see:
http://mxr.mozilla.org/mozilla-central/source/dom/base/test/mochitest.ini#695

Tanvi, can I safely disable the tests for this bug on B2G? What do you think?
Flags: needinfo?(tanvi)
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #14) 
> Interestingly, Mixed content blocker tests loading resources over:
> > https://example.com...
> are disabled on B2G as well, see:
> http://mxr.mozilla.org/mozilla-central/source/dom/base/test/mochitest.ini#695
> 
> Tanvi, can I safely disable tests for this bug on B2G, what do you think?

When this mixed content test was written, fennec didn't have support for ssl (the https protocol didn't work in a fennec mochitest).  Hence, I disabled the test for fennec with a note "SSL_REQUIRED".  Not sure if fennec's testing framework now supports ssl or if b2g's does.
Flags: needinfo?(tanvi)
Chatted with jgriffin on IRC. Currently SSL tests require ssltunnel, but the ssltunnel made by emulator builds is arm, and we would need support for x86. Hence there is no option other than disabling this test on b2g.

Carrying over r+ from dveditz.
Attachment #8518264 - Attachment is obsolete: true
Attachment #8520722 - Flags: review+
Depends on: 921493