Closed
Bug 1187951
Opened 9 years ago
Closed 9 years ago
Make sure that service workers obey the child-src CSP directive
Categories
(Core :: DOM: Service Workers, defect)
Tracking
RESOLVED
FIXED
People
(Reporter: ehsan.akhgari, Unassigned)
References
(Blocks 1 open bug)
Details
We need to do something here, but the spec side of things needs to be clarified first.
Comment 1•9 years ago
Should this really block our v1 release? Mike West was somewhat surprised we would require CSP to be solved first:
http://logs.glob.uno/?c=freenode%23whatwg#c956280
It seems CSP will not be solved for service workers in the spec in the near term.
Reporter
Comment 2•9 years ago
Hmm, that link doesn't seem to work, but I believe Jonas was fairly concerned about this...
Flags: needinfo?(jonas)
Yup, we absolutely need this for FirefoxOS to be able to use service workers.
Flags: needinfo?(jonas)
Comment 4•9 years ago
I think it might be helpful to define the subset of CSP that FirefoxOS needs. It sounds like Mike West does not expect CSP to be fully fixed for service workers in the spec any time soon.
Reporter
Updated•9 years ago
Flags: needinfo?(jonas)
I actually think the CSP spec already defines most of what we need.
The CSP spec says that if an HTML page is loaded with a header like "Content-Security-Policy: script-src https://my-script-server.com", then that page, no matter what, only executes JavaScript loaded from https://my-script-server.com. So even if I have some bug on my web server which allows content injection, that can't lead to XSS (as long as header injection isn't possible).
It also means that eval() and similar functions are disabled, since without a tainting system we couldn't tell whether the string passed to eval was loaded from https://my-script-server.com.
The intent of CSP was that that would hold true even as new features were added to the web platform.
Fortunately, this is actually the only dependency that FirefoxOS has: that we honor that part of CSP. We currently have a CSP that looks like "script-src 'self'; style-src 'self' 'unsafe-inline'", but I think we could live without the style-src part.
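To make this concrete, here is a minimal sketch of what such a policy looks like and what it blocks. The host name is a placeholder and exact error messages vary by browser, but the EvalError behavior is what the CSP spec prescribes:

    // Assumed response header on every page of the site (hypothetical host):
    //   Content-Security-Policy: script-src https://my-script-server.com
    //
    // With that policy in effect:
    //   <script src="https://my-script-server.com/app.js">  -- allowed to run
    //   <script src="https://attacker.example/x.js">        -- blocked
    //   <script>alert(1)</script>                           -- blocked (inline)

    // eval() and similar string-to-code APIs are also blocked unless the
    // policy contains 'unsafe-eval'; per the CSP spec this surfaces as an
    // EvalError:
    try {
      eval("1 + 1");
    } catch (e) {
      console.log("blocked by CSP:", e.name); // "EvalError"
    }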
What is a bit more complex is the fact that the ServiceWorker can generate HTML pages. So even on a server which serves all pages with "Content-Security-Policy: script-src https://my-script-server.com", the website might have a ServiceWorker which generates Response objects containing an HTML page with no CSP header at all. If that page runs without a CSP policy, then the server admin's policy is no longer applied.
Here I can see that there's room for debate over how ServiceWorkers should work.
I think the simplest solution would be to say that the CSP policy that is applied to the service worker should also be applied to any pages and workers created from Response objects that the ServiceWorker creates. That would mean that a server that today sends a policy for its website will keep getting that policy applied.
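As a rough illustration of that proposal (the handler and URL are made up, and whether and how the policy gets carried over is exactly what is being proposed here, not something the spec defines today):

    // sw.js -- a service worker that synthesizes an HTML page.
    // Under the proposal above, the CSP that was applied to this service
    // worker would also be applied to the synthesized page, even though the
    // Response below carries no Content-Security-Policy header of its own.
    self.addEventListener("fetch", (event) => {
      if (new URL(event.request.url).pathname === "/offline") {
        event.respondWith(
          new Response("<html><body>You are offline.</body></html>", {
            headers: { "Content-Type": "text/html" },
          })
        );
      }
    });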
But I could see arguments for allowing more complex solutions. Happy to hear proposals.
But the short of it is that FirefoxOS needs the ability to restrict a website that uses HTML and ServiceWorkers so that it only loads JavaScript from the website itself and so that eval() and similar APIs are disabled.
Currently we do that by sending a CSP policy of "script-src 'self'; style-src 'self' 'unsafe-inline'" for all pages on that website.
Flags: needinfo?(jonas)
Comment 6•9 years ago
A working link for comment 1 is
http://logs.glob.uno/?c=freenode%23whatwg&s=27+Jul+2015&e=28+Jul+2015#c956280
Comment 7•9 years ago
Let's be sure to test bug 1189668 after we've got a fix here.
Blocks: 1189668
Comment 8•9 years ago
There are several things that need to be updated to align CSP with service workers; let's start with the easiest:
1) Implement child-src directive
In CSP 2, workers are no longer governed by script-src but rather by child-src (Bug 1045891). We have an initial implementation and are getting closer to landing it. Basically, the top-level page uses child-src to restrict where workers (nested contexts) can be loaded from; in earlier versions of CSP, workers were restricted by script-src. FirefoxOS potentially needs to update its default CSP to account for that change. Please note that if child-src is not specified, worker loads fall back to the default-src directive. (A small sketch follows.)
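Sketch of the CSP 2 behavior described above, with a hypothetical policy and paths:

    // Assumed page header:
    //   Content-Security-Policy: script-src 'self'; child-src 'none'

    // Under CSP 2 this load is governed by child-src, not script-src, so it
    // is blocked even though the script comes from 'self':
    const worker = new Worker("/js/worker.js");

    // The same applies to service worker registration:
    navigator.serviceWorker.register("/sw.js")
      .catch((err) => console.log("blocked by child-src:", err));

    // If child-src were omitted entirely, these loads would fall back to
    // default-src, not script-src.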
2) Workers can have their own CSP
At the moment we don't have any code that would enforce a worker's CSP (Bug 959388). That bug was de-prioritized when triaging CSP bugs. It seems it's becoming more of an issue now, so we probably need to change priorities here again.
3) Service Worker specific challenges
There are particular challenges for service workers with regard to CSP, and it might take a while until the working group has sorted them out in detail. Some of the discussion is summarized here [1]. From a technical perspective we can see the service worker as a kind of proxy. With that in mind, here is a small example that illustrates one of the major concerns:
a) Document A creates a Service Worker (which will be governed by CSP's child-src).
b) Document A loads an image through that service worker. Fetching that image from site B within the service worker now needs to be whitelisted by the worker's CSP.
c) Finally, the CSP of document A needs to be consulted to make sure it allows loading resources from site B. That proxy mechanism is very similar to a regular network redirect, where the CSP of the document is also consulted after the redirect. (See the sketch below.)
[1] http://logs.glob.uno/?c=freenode%23whatwg&s=27+Jul+2015&e=28+Jul+2015#c956280
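A rough sketch of steps (a)-(c), with placeholder URLs; the only point is which policy gets consulted at each hop:

    // In document A (step a): loading sw.js itself is what document A's
    // child-src governs.
    navigator.serviceWorker.register("/sw.js");

    // In sw.js: the fetch below is what the *worker's* CSP would have to
    // allow (step b), and the response handed back to document A is still
    // subject to document A's CSP (step c), much like a redirect.
    self.addEventListener("fetch", (event) => {
      if (event.request.destination === "image") {
        event.respondWith(fetch("https://site-b.example.com/logo.png"));
      }
    });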
If we make it impossible to use SW from signed content, then I think we can. For now that simply means that we must not support SW for pages loaded from app://
Flags: needinfo?(jonas)
At least from a FirefoxOS point of view. I'll let the security team speak to whether they are OK with websites being able to work around CSP and mixed-content policies.
Reporter
Comment 13•9 years ago
(In reply to Jonas Sicking (:sicking) from comment #10)
> If we make it impossible to use SW from signed content, then I think we can.
> For now that simply means that we must not support SW for pages loaded from
> app://
Bug 1171651 has already done that.
Comment 14•9 years ago
CSP will be enforced on the pages themselves; we can think of Service Workers as roughly equivalent to remote proxies, which CSP would never have control over (handwaving furiously). In the future, adding CSP to SWs themselves and clarifying in the spec(s) how CSP deals with them will be better, but what we have now is OK enough.
I'm also not worried that a "website" would want to "work around" CSP. Just don't use CSP in the first place and there's nothing to work around.
An attacker is another story. The big problem I see here, though, is not the lack of CSP but that an attacker could load a malicious Service Worker. Service Workers should be 100% opt-in for sites, so that legacy sites that have never heard of them can't be attacked through maliciously planted workers. The spec now requires these legacy sites to update to recognize and block a request header in order to OPT OUT of service workers. With all the problems we had with the jar: protocol (originally not opt-in), I am saddened we haven't learned from that experience.
One easy way to make Service Workers opt-in is to make the Service-Worker-Allowed: header mandatory. Another, equivalent, option would be to create a special content type such as 'application/service-worker' or some such. Otherwise any site that allows script uploads and has an XSS (and almost all sites have suffered XSS) can be permanently pwned.
Reporter
Comment 15•9 years ago
(In reply to Daniel Veditz [:dveditz] from comment #14)
> An attacker is another story. The big problem I see here, though, is not the
> lack of CSP but that an attacker could load a malicious Service Worker.
> Service Workers should be 100% opt-in for sites,
They are. Only content loaded using TLS can register a service worker, which means that a MITM attacker cannot inject a service worker for a website. Or are you worrying about a different way that an attacker could load a malicious service worker?
> so that legacy sites that
> never heard of them can't be attacked through malicious planted workers. The
> spec now requires these legacy sites to update to recognize and block a
> request header to OPT-OUT of service workers. With all the problems we had
> with the jar: protocol (originally not opt-in) I am saddened we haven't
> learned from that experience.
Can you please clarify what experience you're talking about? We're trying to figure out if we have all of the bits and pieces ready to ship our implementation of service workers, so concrete risk vectors are much appreciated.
> One easy way to make Service Workers opt-in is
> to make the Service-Worker-Allowed: header mandatory.
The Service-Worker-Allowed header only controls the scope a service worker can be registered to control...
> Another equivalent
> would be to create a special content-type such as
> 'application/service-worker' or some such. Otherwise any site that allows
> script uploads and has an XSS (and almost all sites have suffered XSS) can
> be permanently pwned.
Yeah, if the site gets XSSed then I guess the attacker can plant a service worker. But I _think_... What do you suggest we should do about that before we can ship?
Thanks!
Flags: needinfo?(dveditz)
Comment 16•9 years ago
(In reply to Jonas Sicking (:sicking) from comment #11)
> I'll let the security team speak to if they are ok with websites being able to work around CSP and mixed-content
> policies.
What's the current thinking on mixed-content? Is there a bug for that or is that discussion happening here too?
Comment 17•9 years ago
(In reply to Ehsan Akhgari (don't ask for review please) from comment #15)
> (In reply to Daniel Veditz [:dveditz] from comment #14)
> > An attacker is another story. The big problem I see here, though, is not the
> > lack of CSP but that an attacker could load a malicious Service Worker.
> > Service Workers should be 100% opt-in for sites,
>
> They are. Only content loaded using TLS can register a service worker, which
> means that a MITM attacker cannot inject a service worker for a website. Or
> are you worrying about a different way that an attacker could load a
> malicious service worker?
I'm worried about XSS. XSS on _any_ page on a TLS site -- and more or less all sites have an XSS somewhere at some time -- means an attacker can load any script from the site as a service worker. It does have to have a script-ish MIME type, but nothing otherwise special. Yes, the attacker-controlled page has "opted in" to using service workers, but the "site" has not otherwise given any signal that they wish to allow this powerful new feature.
This is much like the jar: protocol, where originally we allowed you to address any zip-formatted thing on a site. This led to XSS on sites that just thought they were allowing office document uploads, or any of a number of other seemingly benign file types. In the end we asked sites to "opt-in" by using a specific "executable" content type. It happened to be a default one and is still causing some sites trouble, but that's a story for another time.
> > One easy way to make Service Workers opt-in is
> > to make the Service-Worker-Allowed: header mandatory.
>
> The Service-Worker-Allowed header only controls the scope a service worker
> can be registered to control...
I realize that is how it is specified _now_, but the specification could be tweaked to make this a required header as a sign of opting in to self-MITMing. Or it could be a completely new header, but as long as you've got this one, an option is to make it do double duty. Maybe not the best option, since that prevents pages from defining their own scope. I actually prefer the special content type, but a required header is an option. Another straightforward option is an opt-in file in the .well-known directory.
> > create a special content-type such as 'application/service-worker' or some such.
>
> Yeah, if the site gets XSSed then I guess the attacker can plant a service
> worker. But I _think_... What do you suggest we should do about that before
> we can ship?
Obviously none of these suggestions could be taken without getting the spec to budge. If Chrome needs a normal javascript content type and Firefox a special service-worker one then it's difficult for any site to work for both browsers. Some of the other options would just be "extra things for Firefox" which at least wouldn't break existing Chrome-compatible sites, but without a spec change means lots of sites simply won't do the extra Firefox stuff.
I don't know if there's anything we can reasonably do at this point, I just think we're going to be sorry in a year or two.
Flags: needinfo?(dveditz)
Reporter
Comment 18•9 years ago
(In reply to Daniel Veditz [:dveditz] from comment #17)
> (In reply to Ehsan Akhgari (don't ask for review please) from comment #15)
> > (In reply to Daniel Veditz [:dveditz] from comment #14)
> > > An attacker is another story. The big problem I see here, though, is not the
> > > lack of CSP but that an attacker could load a malicious Service Worker.
> > > Service Workers should be 100% opt-in for sites,
> >
> > They are. Only content loaded using TLS can register a service worker, which
> > means that a MITM attacker cannot inject a service worker for a website. Or
> > are you worrying about a different way that an attacker could load a
> > malicious service worker?
>
> I'm worried about XSS. XSS on _any_ page on a TLS site -- and more or less
> all sites have an XSS somewhere at some time -- means an attacker can load
> any script from the the site as a service worker. It does have to have a
> script-ish mime type, but nothing otherwise special. Yes, the
> attacker-controlled page has "opted in" to using service workers, but the
> "site" has not otherwise given any signal they wish to allow this powerful
> new feature.
Yes, that's true. But note that there are four levels of protection here:
* The MIME type must be one of text/javascript, application/x-javascript, or application/javascript.
* By default, a page like https://example.com/path/to/page.html can register a service worker for the /path/to scope, or some descendant of it, but not /path, /foo, or /, unless the service worker main script is served with a Service-Worker-Allowed header, which is opt-in. So if a page like the above has an XSS bug, the attacker won't automatically get access to the whole origin. (See the sketch after this list.)
* If running the service worker script's global scope throws any unhandled errors (such as access to APIs that are not available in workers, if the attacker is trying to fool us into registering a random script), then the registration fails.
* After the service worker is registered, we check for an update every 24 hours, so the server has a chance to fix the symptom of the attack by, for example, returning an HTTP error from the said URL, returning an empty benign script, etc.
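To make the second bullet concrete, here is a sketch with hypothetical paths; the broader-scope registration is rejected unless the script is served with the opt-in header:

    // Page: https://example.com/path/to/page.html
    // Worker script: https://example.com/path/to/sw.js

    // Allowed by default: the scope is at or below the script's directory.
    navigator.serviceWorker.register("/path/to/sw.js", { scope: "/path/to/" });

    // Rejected by default: the scope is broader than the script's directory.
    // It only succeeds if /path/to/sw.js is served with a header such as
    //   Service-Worker-Allowed: /
    navigator.serviceWorker.register("/path/to/sw.js", { scope: "/" })
      .catch((err) => console.log("registration rejected:", err.name));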
But more importantly, the chances of an attacker being able to *use* such an attack in practice are almost zero, since in order to do something bad, they need to respond to the fetch event in the service worker, but they cannot control the source code of the script that they use for the registration, unless they find another vector permitting them to upload files to the server.
> Obviously none of these suggestions could be taken without getting the spec
> to budge. If Chrome needs a normal javascript content type and Firefox a
> special service-worker one then it's difficult for any site to work for both
> browsers. Some of the other options would just be "extra things for Firefox"
> which at least wouldn't break existing Chrome-compatible sites, but without
> a spec change means lots of sites simply won't do the extra Firefox stuff.
Yeah. But I really don't think we need anything more than a proper integration with CSP, so that a website can specify where its own service workers can come from. We could also tweak the enforcement of that policy to remove existing registrations that do not adhere to it, as a measure to deal with possible past attacks.
(Also, note that we're shipping service worker registration in 42, so we have about 5 weeks to make a decision. If you feel strongly about this, you should start the spec discussions *now*! :-)
> I don't know if there's anything we can reasonably do at this point, I just
> think we're going to be sorry in a year or two.
The above four points make me feel a lot better about this issue, so I hope we won't be sorry!
Comment 19•9 years ago
Since Bobby added support in Bug 1189668 for consulting the CSP of the main page before loading any subresources into that page (see bullet 3 of comment 8 for details), I think it should be fine to ship SW support, as long as Dan is fine with that as well.
I do not think there is any need to support workers having their own CSP. However, it would be nice to land the child-src directive so that workers are governed by child-src rather than script-src.
Flags: needinfo?(mozilla)
Reporter
Updated•9 years ago
Depends on: 1045891
Summary: Fix our handling of CSP in the presence of service workers → Make sure that service workers obey the child-src CSP directive
Updated•9 years ago
Blocks: ServiceWorkers-compat
Reporter
Comment 20•9 years ago
Kate, it looks like your changes in bug 1045891 have effectively fixed this one too. Is there anything else left to do here?
Flags: needinfo?(kmckinley)
Comment 21•9 years ago
(In reply to :Ehsan Akhgari from comment #20)
> Kate, it looks like your changes in bug 1045891 have effectively fixed this
> one too. Is there anything else left to do here?
Kate is on a plane right now, but I can answer. There is nothing left to do for this bug per se, so we can close this one.
Overall, the only thing from comment 8 that is still missing is item (2), workers having their own CSP, which we haven't implemented yet - see Bug 959388.
For the record, everything important for this bug was fixed within Bug 1045891.
Status: NEW → RESOLVED
Closed: 9 years ago
Flags: needinfo?(kmckinley)
Resolution: --- → FIXED
Reporter
Comment 22•9 years ago
Thanks!