Open Bug 1201160 Opened 5 years ago Updated 6 days ago

Service workers violate SOP for "no-cors" CSS

Categories

(Core :: DOM: Service Workers, defect)

defect
Not set
normal

Tracking

()

REOPENED
Tracking Status
firefox42 --- disabled
firefox43 --- wontfix
firefox44 + wontfix
firefox45 + wontfix
firefox46 --- affected
firefox47 --- affected
firefox-esr38 --- unaffected

People

(Reporter: ehsan, Unassigned)

References

(Blocks 1 open bug)

Details

(Keywords: csectype-disclosure, sec-moderate)

Attachments

(2 files, 3 obsolete files)

+++ This bug was initially created as a clone of Bug #1180145 +++

An attacker at attacker.example can figure out which resources a stylesheet from target.example loads by including it on attacker.example and using either the Resource Timing API (shipped) or the Service Worker API (about to ship).

This violates SOP.
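The leak can be modeled as follows. This is a hypothetical simulation, not real attacker or Gecko code: absent a fix, a service worker registered on attacker.example receives a FetchEvent for every subresource the included cross-origin stylesheet triggers, exposing target.example's internal URLs.

```javascript
// Each entry models a request the browser would dispatch while rendering
// attacker.example with target.example's stylesheet included via
// <link rel="stylesheet">. The URLs and flag names are illustrative.
const requests = [
  { url: "https://attacker.example/own.css", fromNoCorsCrossOriginSheet: false },
  { url: "https://target.example/user-avatar.png", fromNoCorsCrossOriginSheet: true },
];

// What the attacker's service worker observes, with and without the proposed
// "skip the service worker for tainted-sheet subresources" behavior.
function observedBySW(reqs, blockTaintedSubresources) {
  return reqs
    .filter((r) => !(blockTaintedSubresources && r.fromNoCorsCrossOriginSheet))
    .map((r) => r.url);
}

console.log(observedBySW(requests, false)); // leak: the target.example URL is visible
console.log(observedBySW(requests, true));  // proposed fix: that URL is hidden
```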
Group: dom-core-security
Is the addition of the SW vector enough to block SW on this?
Flags: needinfo?(ehsan)
Yes.
Flags: needinfo?(ehsan)
To be clear, what needs to be done here is that any resources fetched from a stylesheet that was itself fetched using "no-cors" and is cross-origin (not sure about cross-origin with a redirect back to same-origin) need to skip the service worker.
I assume this means we need opaque interception tainting for CSS stylesheets as well then.  If the stylesheet has a same-origin URL, but was intercepted with a no-cors opaque Response, then we also need to block further interception for that sheet's subresources.  Right?
Yes. 

For the redirect case, currently per Fetch a no-cors cross-origin redirect back to same-origin will taint and return an opaque response. That seems safest; that way a cross-origin URL cannot be used to manipulate a same-origin resource into giving up information. (At least not more than it already could.)
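A minimal sketch of that rule (illustrative only, not Fetch spec text or Gecko code): under "no-cors", any cross-origin hop in the redirect chain taints the final response opaque, even when the final URL lands back on the request's origin.

```javascript
// Illustrative tainting rule. Origins are compared as plain strings; in the
// real Fetch algorithm, "cors" additionally requires CORS checks on every hop.
function responseTainting(mode, requestOrigin, chainOrigins) {
  if (mode === "cors") return "cors";
  const crossOriginHop = chainOrigins.some((o) => o !== requestOrigin);
  return crossOriginHop ? "opaque" : "basic";
}

console.log(responseTainting("no-cors", "https://a.com", ["https://a.com"]));
// "basic": stayed same-origin throughout
console.log(responseTainting("no-cors", "https://a.com", ["https://b.com", "https://a.com"]));
// "opaque": the cross-origin hop taints, even though the final URL is same-origin
```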
I'll take a look at this one.  The solution here will probably depend on what lands in bug 1173811.
Assignee: nobody → bkelly
Status: NEW → ASSIGNED
Looks like I either need to call this or do something like it when loading a resource:

  https://dxr.mozilla.org/mozilla-central/source/layout/style/CSSStyleSheet.cpp#1782

And if it fails to subsume, then set ForceNoIntercept().
There is an existing wpt test from blink that tests the case where the stylesheet is loaded from an opaque interception and has a subresource also controlled by the service worker:

  https://dxr.mozilla.org/mozilla-central/source/testing/web-platform/mozilla/tests/service-workers/service-worker/fetch-request-css-base-url.https.html

It's mainly checking to make sure the resulting URLs are correct, but we'll need to change it to enforce that the png file is not seen at all by the SW.
No longer blocks: 1173811
Depends on: 1173811
This is what I plan to test here:

// For frame that loads <link> stylesheet with a png subresource:

// frame a.com, stylesheet a.com, SW sees png
// frame a.com, stylesheet b.com cors, SW sees png
// frame a.com, stylesheet b.com no-cors, SW does not see png
// frame a.com, stylesheet a.com redirected to b.com cors, SW does not see png
// frame a.com, stylesheet a.com redirected to b.com no-cors, SW does not see png
// frame a.com, stylesheet a.com intercepted to synthetic, SW sees png
// frame a.com, stylesheet a.com intercepted to a.com, SW sees png
// frame a.com, stylesheet a.com intercepted to b.com cors, SW sees png
// frame a.com, stylesheet a.com intercepted to b.com no-cors, SW does not see png
// frame a.com, stylesheet a.com intercepted to a.com redirected to b.com cors, SW sees png
// frame a.com, stylesheet a.com intercepted to a.com redirected to b.com no-cors, SW does not see png
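Setting the redirect rows aside (they hinge on whether CORS headers are present on every hop, as the following comments discuss), the remaining rows reduce to a single predicate. A simplified sketch with hypothetical names:

```javascript
// Simplified predicate for the non-redirect rows of the matrix: the SW sees
// the png unless the stylesheet's effective response is cross-origin and was
// requested no-cors (i.e. opaque-tainted). A synthetic interception has no
// origin of its own, modeled here as null, and counts as same-origin.
function swSeesPng({ frameOrigin, effectiveSheetOrigin, mode }) {
  const crossOrigin =
    effectiveSheetOrigin !== null && effectiveSheetOrigin !== frameOrigin;
  return !(crossOrigin && mode === "no-cors");
}

// frame a.com, stylesheet b.com no-cors → SW does not see png
console.log(swSeesPng({ frameOrigin: "a.com", effectiveSheetOrigin: "b.com", mode: "no-cors" })); // false
// frame a.com, stylesheet a.com intercepted to synthetic → SW sees png
console.log(swSeesPng({ frameOrigin: "a.com", effectiveSheetOrigin: null, mode: "no-cors" })); // true
```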
How are case 4 and 5 not intercepted (or 1, 2, and 3 for that matter)? I also miss stylesheet b.com redirected/intercepted to a.com.
(In reply to Anne (:annevk) from comment #10)
> How are case 4 and 5 not intercepted (or 1, 2, and 3 for that matter)?

Sorry, maybe it's confusion over my use of "intercepted".  Here I mean that the SW will not call FetchEvent.respondWith() for the stylesheet and will just allow normal browser processing.

> I also miss stylesheet b.com redirected/intercepted to a.com.

What should the behavior be in those cases?  And does it differ if b.com is cors vs no-cors?
Flags: needinfo?(annevk)
> What should the behavior be in those cases?

You get tainting for "no-cors" and no tainting for "cors" (the redirect itself and the final response need to have CORS headers, even though the final response is same-origin).
Flags: needinfo?(annevk)
(In reply to Anne (:annevk) from comment #12)
> > What should the behavior be in those cases?
> 
> You get tainting for "no-cors" and no tainting for "cors" (the redirect
> itself and the final response need to have CORS headers, even though the
> final response is same-origin).

I commented on this over in bug 1173811 comment 46.  I don't think our current cross-origin tainting plan will handle this case.  So if I write these tests we will have to mark them expected fail.
The beginning of the wpt tests.
An updated version of the tests with some redirect cases.  Note, these depend on bug 1206124, but currently hit an assert I added in the patch over there.

Also, these tests need to add a unique token to the background-image to avoid image cache confusion.

I will finish this when I get back from PTO.
Attachment #8662679 - Attachment is obsolete: true
I still need to remove the setTimeout() and clean up some debugging, but these tests hit all the cases we need.

Currently fails due to this bug and also because redirected interceptions are changing the Response.url.  I think Ehsan has patches in queue or recently landed to fix the second issue.
Attachment #8663151 - Attachment is obsolete: true
This is a bit harder than I thought in comment 7.  I need to find all the places stylesheets load sub-resources so I can call ForceNoIntercept().  That is different from the place NS_NewChannel() is called in style/Loader.cpp.

This means doing something different for images vs @import, etc.
I think it will be a lot easier to implement the intercept bypass for all the different cases if we do bug 1210941 first.
As I dig into this, I think it's much harder than I originally thought.

Consider:

1) a.com/index.html loads stylesheet at b.com/foo.css as no-cors
2) b.com/foo.css @imports stylesheet at a.com/bar.css
3) a.com/bar.css loads background-image a.com/snafu.jpg

The service worker should get a FetchEvent for b.com/foo.css, but not for any of the other sub-resources.  Since b.com/foo.css loads a same-origin a.com/bar.css, I need some way to propagate the no-cors tainting through the @import.

Maybe @import is same-origin only, though.  That would make this a bit easier.
(In reply to Ben Kelly [:bkelly] from comment #19)
> Maybe @import is same-origin only, though.  That would make this a bit
> easier.

I don't think it is.  See Loader::CheckLoadAllowed().
Yea, many people have told me it's not.

So I need to create a tainting flag on the stylesheet and propagate it through to child sheets somewhere in Loader::LoadChildSheet() here:

   https://dxr.mozilla.org/mozilla-central/source/layout/style/Loader.cpp#2256

Of course, we could end up with some interesting behavior in this condition:

1) a.com/index.html loads stylesheet at b.com/foo.css as no-cors
2) b.com/foo.css @imports stylesheet at a.com/bar.css
3) a.com/bar.css loads background-image a.com/snafu.jpg
4) a.com/index.html loads stylesheet at a.com/thepain.css
5) a.com/thepain.css @imports stylesheet at a.com/bar.css
6) a.com/bar.css loads background-image a.com/snafu.jpg

Here we have a.com/bar.css being @imported twice; once from cross-origin and once from same-origin.  At steps (2) and (3) we would block the service worker from seeing the fetch events.

What do we do at steps (5) and (6)?  The naive implementation would not do any network loads at all because the stylesheet from step (2) would be reused.  But this then allows a.com to have insight into what b.com/foo.css loads.  Also the service worker could result in completely different content being returned.

At this point I think I am just going to implement the naive solution for now.
I guess I could just avoid adding tainted sheets to the reusable sheet map here:

  https://dxr.mozilla.org/mozilla-central/source/layout/style/CSSStyleSheet.cpp#2308
In IRC Boris pointed out the real stylesheet reuse is done in Loader.cpp using this Hash key:

  http://mxr.mozilla.org/mozilla-central/source/layout/style/Loader.h?force=1#41

Which includes CORS mode, so I think it will prevent the kind of sharing discussed in comment 21.
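Concretely, the idea is that the reuse key distinguishes loads of the same URL by CORS mode (and could carry a taint bit as well), so a tainted copy of a sheet is never handed to an untainted load. A toy model of such a key — hypothetical, not the actual Loader.h hash key:

```javascript
// Toy model of a sheet-reuse key. The real key lives in layout/style/Loader.h;
// the point is only that the URL alone is not a sufficient key.
const sheetKey = ({ url, corsMode, tainted }) =>
  [url, corsMode, tainted ? "tainted" : "clean"].join("|");

const taintedKey = sheetKey({ url: "https://a.com/bar.css", corsMode: "no-cors", tainted: true });
const cleanKey = sheetKey({ url: "https://a.com/bar.css", corsMode: "no-cors", tainted: false });
console.log(taintedKey !== cleanKey); // true: the tainted copy cannot be reused
```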
I discussed this more with Boris.  Rough plan:

1) I will add a no-cors taint flag to the sheet.
2) I will propagate the taint flag during CreateChildSheet().
3) I will add the taint flag to the hash key in Loader.cpp to avoid reuse.
4) I will add the taint flag to URLValue so it can be passed to the ImageLoader from nsCSSValue.

I will also have to extend the tests further to cover the @import case.
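Steps (1) and (2) of that plan can be sketched as follows (hypothetical names; the real change would live in Loader.cpp and the sheet objects, not JS):

```javascript
// Hypothetical model of no-cors taint propagation through @import: a sheet is
// tainted if it was itself a cross-origin no-cors load, or if its parent sheet
// was tainted, even when the child URL is same-origin.
function loadSheet(frameOrigin, url, mode, parent = null) {
  const origin = new URL(url).origin;
  const tainted =
    (parent !== null && parent.tainted) ||
    (mode === "no-cors" && origin !== frameOrigin);
  return { url, tainted };
}

const frame = "https://a.com";
// a.com/index.html loads b.com/foo.css no-cors → tainted
const foo = loadSheet(frame, "https://b.com/foo.css", "no-cors");
// b.com/foo.css @imports a.com/bar.css → inherits the taint despite being
// same-origin, so bar.css's background-image also skips the service worker.
const bar = loadSheet(frame, "https://a.com/bar.css", "no-cors", foo);
console.log(foo.tainted, bar.tainted); // true true
```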
I'm not convinced that we need to do this.

We're already exposing a lot of information that's contained in the stylesheet through features like getComputedStyle. An attacker can already probe through a list of ids and classnames and use getComputedStyle to get at the background-image URL expressed in a rule in the cross-origin stylesheet.

I think blocking these requests from going through the SW will add a lot of complexity for web developers which is why I think we should really think through if we need to do this.

Does anyone remember the specific issues that we had that caused us to block access to the CSSOM for cross-origin stylesheets? My recollection was that it was related to pointing <link rel=stylesheet>s to non-CSS resources and taking advantage of the quite forgiving CSS parser.

However I thought that we required cross-origin stylesheets to be served with a text/css mimetype. Not sure if that means that my recollection above was wrong, or if that mimetype restriction was added later?
The mimetype restriction was added much later.
You cannot get at cross-origin @import rules (and potentially others?) and finding stuff through getComputedStyle() is much more involved.
So, looking at the spec issue:

  https://github.com/slightlyoff/ServiceWorker/issues/719

It seems Jake made some of the same points Jonas makes in comment 25.  The discussion with Google still concluded that we should hide these requests from service workers.

From the spec issue it seems service workers would also leak font source URLs which previously were not observable.

Blocking no-cors CSS hosted on a CDN may cause some pain for developers, but service workers already require other configuration changes for safety, e.g. HTTPS.  These developers simply have to switch their CDN hosting to use CORS headers, and then it should work for them.
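For a developer, opting a CDN-hosted sheet into CORS means loading it with a crossorigin attribute and having the CDN send the matching response header (hostnames here are hypothetical):

```html
<!-- On the page: request the stylesheet in CORS mode -->
<link rel="stylesheet" href="https://cdn.example/app.css" crossorigin="anonymous">
```

with the CDN responding with `Access-Control-Allow-Origin: *` (or the specific site origin). The sheet is then not opaque-tainted, so its subresource fetches would remain visible to the site's service worker.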

I think we should move forward with this.
Another reason to do this: Josh found that, with our fake-redirect approach to opaque tainting, the stylesheet subresources would get intercepted with the opaque final URI.  In essence, this is a case where we expose the final URI of a no-cors redirect, which breaks the assumptions of that plan.

So I need to implement the SW block here or come up with another fix to account for the opaque tainting.

At our service worker meeting today, though, we decided just to go ahead with this.  Mainly for the reasons in comment 28.  Sorry Jonas.
Which versions does this affect?
It only affects versions with service workers enabled, so right now Nightly and Aurora.
Yes, this leaks more data. But I'm not convinced that it's leaking meaningfully more data. I don't think any developer would be advised to put user-private data in any CSS file. If they do, the risk is very high that there is some way to get at that data.

I don't think "easier" vs. "harder" is meaningful here. Keep in mind that often libraries can turn "harder" into "easier".

Adding restrictions here comes at a very real cost, which is that it makes it harder to develop for the web platform. Are these costs really outweighed by the security benefits?

As far as I can tell, there aren't any security benefits that make developers' lives easier. I.e. developers can't be less careful about what content they put on their web server.

But there are definitely development costs here. I.e. we're making developers' lives harder.

There's also a non-negligible security cost here in the sense of a false sense of security. The fact that we're making it very visible that we're protecting the embedded URLs here makes it harder to see that getComputedStyle leaks far more information.


I don't think things are as simple as "it exposes more data so we must not do it". For example, the complexity that we have with the CORS Content-Type header has probably been a net-bad for the web. That one was my fault and I definitely regret it.
Jonas, this exposes stuff that getComputedStyle() doesn't.  Therefore, the fact that getComputedStyle() exists cannot be used as a reason to not fix this...
Ehsan, did you read the last paragraph of comment 32?
Or rather, all of comment 32 is about how the question isn't as simple as that.
I did.  I understand the trade-off.  What I was trying to say is that this is not just about exposing more or less data, it's also about exposing new data.

</preach-to-choir>  :)
(Also the discussion as to whether or not we should fix this should really happen on the spec issue...)
(In reply to Ehsan Akhgari (don't ask for review please) from comment #37)
> (Also the discussion as to whether or not we should fix this should really
> happen on the spec issue...)

https://github.com/slightlyoff/ServiceWorker/issues/719
We decided to punt on this for now.
Blocks: ServiceWorkers-postv1
No longer blocks: ServiceWorkers-v1
The service worker f2f decided this could be a v2 issue.
I asked about this again in #whatwg IRC logs:

  http://logs.glob.uno/?c=freenode%23whatwg#c974181

I think the current state is a bit confused and should be clarified before we take on the complexity of implementing this.
If this is being discussed in the open elsewhere and Chrome is shipping with this and we're consciously holding off on doing anything here until the spec is sorted, should we keep this as sec-high?
Flags: needinfo?(bkelly)
(In reply to Andrew Overholt [:overholt] from comment #42)
> If this is being discussed in the open elsewhere and Chrome is shipping with
> this and we're consciously holding off on doing anything here until the spec
> is sorted, should we keep this as sec-high?

That's a good question, but I don't feel comfortable lowering it myself.  I'd prefer if someone from the security team made that determination.
Flags: needinfo?(bkelly)
(In reply to Ben Kelly [:bkelly] from comment #43)
> (In reply to Andrew Overholt [:overholt] from comment #42)
> > If this is being discussed in the open elsewhere and Chrome is shipping with
> > this and we're consciously holding off on doing anything here until the spec
> > is sorted, should we keep this as sec-high?
> 
> Thats a good question, but I don't feel comfortable lowering it myself.  I'd
> prefer if someone from the security team made that determination.

Dan, can that be you?
Flags: needinfo?(dveditz)
Also ni'ing Al in case he might be able to answer the question posed in comment 44. I came upon this bug while reviewing open 44+ tracked bugs.
Flags: needinfo?(abillings)
I really don't know. I'll have to defer to Dan here.
Flags: needinfo?(abillings)
Moving my wip patches here to get them out of my local queue.  I'm hoping to get some spec feedback from google later this month.
Attachment #8704786 - Attachment is patch: true
We are in RC mode, it's too late and this is now a wontfix for Fx44.
Ben, are you still working on this bug? Thanks
Flags: needinfo?(bkelly)
(In reply to Sylvestre Ledru [:sylvestre] from comment #51)
> Ben, are you still working on this bug? Thanks

I asked at the service worker face-to-face and did not get much back.  Still trying to understand what the other browsers are doing, if anything.

So far internally we have a strong vote for (annevk) and a strong vote against (sicking).  If we can come to a consensus internally, we could move forward.  Right now the "don't implement this" side is winning simply because it's the default situation.
Flags: needinfo?(bkelly)
Given that we don't know what we're going to do here, should we continue tracking this?
Flags: needinfo?(sledru)
Yes, let's wontfix it.
Please resubmit for tracking when we have a proper fix.
Flags: needinfo?(sledru)
Dropping assignment until we reach some spec clarity.
Assignee: bkelly → nobody
Status: ASSIGNED → NEW
Do we normally keep things with a lack of spec clarity on the sec-high list?
Unhiding because the spec issue is public.

(In reply to Andrew Overholt [:overholt] from comment #56)
> Do we normally keep things with a lack of spec clarity on the sec-high list?

Those are unrelated: consider several recent TLS problems that were baked into the protocol, or common implementation mistakes due to unclear specs, which were still usable to exploit folks. Is this a same-origin policy violation? Yes, so sec-high is a good place to start whether it's an implementation or spec bug.

If we want to argue that the attack is unlikely to yield valuable data, we could lower it to moderate. Do sites serve custom CSS to users? If not, then the attacking server could just download the CSS directly and read it, and there's no disclosure. Is such custom CSS rare enough to call this moderate? Do common resources loaded by the CSS redirect to different places in a way that a service worker can detect? If not, then there's no real disclosure. I don't know how common or rare any of this is, and without telemetry I'd prefer to err on the side of rating this too high. If we have some, though...
Flags: needinfo?(dveditz)
Group: dom-core-security
Also note that CSS in general is extremely unclear about its integration with Fetch. That did not stop us from implementing service worker support that included passing CSS through service workers (and thereby assuming something about what a CSS specification might say about the matter).

I think we should similarly assume that nobody wants to introduce new same-origin policy violations, yet for some reason Jonas keeps pushing for these, which is why we end up having them. Here, but now also with our experimental implementation of <script type=module>. It would really help if the security team could make it clearer what our stance is with respect to the same-origin policy and its various extensions.
So are we going to do anything here?
Valentin, you are fixing bug 1180145 (sec-high), but unless we also fix this bug, we are not actually closing the hole.

Daniel and bz considered that bug very much worthy of fixing. I don't understand why it's any different here. We either need to fix both, or enshrine yet another SOP exception. (Note that if we enshrine this one, the JavaScript module folks will probably want a CORS exception for their stuff too... Or at least it'll be even harder to convince them. Sigh.)
Flags: needinfo?(valentin.gosu)
Flags: needinfo?(dveditz)
Flags: needinfo?(bzbarsky)
I still see no point in fixing this unless other browsers do. And I also don't think that the complexity introduced by this bug, and the security bugs that are sure to result from that complexity, is worth the benefit claimed in this bug.
Flags: needinfo?(bzbarsky)
I have a WIP patch posted in bug 1180145, but it's not enough to cover all of the cases.
I'd rather someone else picks it up, if we decide we do want to fix it after all.
Flags: needinfo?(valentin.gosu)
No longer blocks: 1218227
I'm going to leave this open for now, but I don't think we will implement anything here unless other browsers implement it first.  In particular, it would be hard for us to take the compat hit unless Chrome implemented it first.
I'm going to mark this WONTFIX if the decision here is to not fix it. Please reopen the bug if the decision changes.
Assignee: nobody → bkelly
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → WONTFIX
Flags: needinfo?(dveditz)

Reopening, as Safari has reportedly addressed this and, standards-wise, it's still an issue: https://github.com/w3c/ServiceWorker/issues/719#issuecomment-662900026.

Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---

Before we do anything here, I think we should be super mindful of negative effects on the ability to write offline web applications.

In particular, a website can already read large parts of the contents of unprotected CSS files. What this bug suggests is that we specifically protect values that contain URLs, while leaving the other parts of the CSS file just as exposed as before.

As a developer, it seems really risky to rely on the default SOP to protect the contents of CSS files given how much of the file's contents is readable across sites.

A better solution to me seems to be to enable mechanisms that protect the entire contents of CSS files, images, scripts and other resources that can be embedded (and thus partially read) cross-origin.

It's definitely true that leaving this bug unfixed could cause a security issue. I.e. where there's a CSS file that contains sensitive URLs but not other sensitive CSS values. But the same argument can be made about many changes to CSS syntax, such as with the introduction of variables. They too expanded the set of files which could theoretically now expose sensitive information to an attacker. Do we know of any such instances in the few years since Service Workers or CSS variables were shipped?

Additionally, some of the URLs that we're trying to protect here will still be exposed in the form of computed values on elements in the DOM.

In short, I think implementing the proposal here has significant cost for web developers, both in terms of reduced features, and in terms of a more complex security model. And the benefit seems mainly theoretical to me?

Separately, would Chrome and Safari be willing to fix [1]? If not would there actually be any security benefit for web developers?

[1] https://jsbin.com/pigihubuxa/edit?html,output

Assignee: ben → nobody

Hi Anne, can you investigate further what should be our path forward here? Thank you!

Flags: needinfo?(annevk)

I agree with Jonas that the situation isn't exactly great here and anyone putting secrets in CSS resources is well-advised to use Cross-Origin-Resource-Policy or equivalent.

Fixing this would help with:

  1. Inadvertently leaking secret URLs in @import and similar rules not directly exposed through computed styles.
  2. Leaking information about the resources tied to those secret URLs through timing attacks on caches or fetching, or equivalent attacks. (At least, it seems this would be a lot harder if you do not get granular information through service workers.)
  3. Leaking secret URLs in rules that are guarded by selectors that are (close to) impossible to match.
  4. Moving the web platform to a state that's acceptable to everyone (as per comment 65 Safari is shipping this).
  5. Making it clear we take exceptions to the same-origin policy seriously.

So overall I continue to recommend we fix this in due course. Hope that helps.

Flags: needinfo?(annevk)