Bug 390910 - Content Restrictions
Opened 17 years ago; closed 10 years ago
Status: RESOLVED DUPLICATE of bug 493857
Categories: Core :: DOM: Core & HTML, defect, P3
People: Reporter: sayrer; Assignee: bsterne
Attachments: 2 files (a 276-byte text/html testcase; a 24.19 KB patch)
Description: Allow sites to restrict the types and locations of loaded scripts and other unsafe content.
Comment 1 • 17 years ago (Reporter: sayrer)
When you select-all and view the selection source, there should be no onclick attribute and no script element.
Assignee: nobody → sayrer
Status: NEW → ASSIGNED
Comment 2 • 17 years ago (Reporter: sayrer)
Gerv's initial design implies a blacklist. We should probably flip that to a whitelist. That would make it a lot easier to deal with XML, too.
Comment 3 • 17 years ago (Reporter: sayrer)
Couple of extra changes here:
- Changed the header name to "Restriction". Content-* names have special meaning in MIME, so HTML-specific things would be frowned upon.
- It also eliminates styles from the body when script=head is specified, because we do all sorts of crazy things in CSS. :(
Still to do:
- Change to NOSCRIPT behavior when script=none is specified.
- Add a plugins=none parameter and disable plugins on the docshell.
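The script=head and plugins=none parameters above suggest a simple key=value; key=value header grammar. As a rough illustration only (the actual patch's grammar isn't shown in this bug, so the parsing rules here are an assumption), such a hypothetical "Restriction" header could be parsed like this:

```python
# Hypothetical parser for the proposed "Restriction" header.
# The parameter names (script, plugins) come from the comments in this
# bug; the exact grammar is an assumption, not taken from the real patch.
def parse_restriction(header_value):
    """Parse e.g. 'script=head; plugins=none' into a dict."""
    policy = {}
    for part in header_value.split(";"):
        part = part.strip()
        if not part:
            continue  # tolerate trailing semicolons
        name, _, value = part.partition("=")
        if name:
            policy[name.strip().lower()] = value.strip().lower()
    return policy

print(parse_restriction("script=head; plugins=none"))
```

The docshell would then consult the resulting dict (e.g. `policy.get("plugins") == "none"`) before loading each class of content.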
Comment 4 • 17 years ago
I'm not very excited about this solution, to be honest, mostly because it contains a blacklist of attribute names which is sure to get out of date.
What I'd rather do is set a flag on the elements that we want to disable and tell them not to execute any dangerous content. In bug 343037 I have a patch that implements such a flag. (The patch won't land, for other reasons.) This would IMHO be a much safer way to block scary content.
Comment 5 • 17 years ago (Reporter: sayrer)
(In reply to comment #4)
> I'm not very excited about this solution, to be honest, mostly because it
> contains a blacklist of attribute names which is sure to get out of date.
Yeah, the patch would need to have that blacklist changed to a whitelist, at the very least.
> What I'd rather do is set a flag on the elements that we want to disable and
> tell them not to execute any dangerous content. In bug 343037 I have a patch
> that implements such a flag. (The patch won't land, for other reasons.) This
> would IMHO be a much safer way to block scary content.
For the record, that patch checks a flag at the point in the code where we check "is this a script" and then execute it. It is a good approach, as long as we don't add code that routes around it. We're still going to have problems with plugins and styles, though. I don't really have a preference for either approach, so we might as well try both for the moment. Jonas has already written a patch for the alternative approach, so it's not much extra work.
I thought of another concern: serializing dangerous content that is then reloaded without the protection of the Restriction header. A classic Metadata Containment issue[0]. We'll need to elide dangerous content as we serialize.
[0] http://www.goland.org/Tech/spe-whitehead.pdf
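To make the elision idea concrete, here is an illustrative sketch (not the Gecko serializer) of stripping script elements and event-handler attributes at serialization time, so the output is safe to reparse without the protective header. The regexes are a toy for brevity; a real serializer would walk the DOM rather than pattern-match markup.

```python
import re

# Toy illustration of eliding dangerous content during serialization.
# A real implementation would operate on the DOM, not on text.
EVENT_ATTR = re.compile(r'\son\w+\s*=\s*"[^"]*"', re.IGNORECASE)
SCRIPT_EL = re.compile(r"<script\b.*?</script\s*>", re.IGNORECASE | re.DOTALL)

def elide_dangerous(html):
    html = SCRIPT_EL.sub("", html)   # drop script elements entirely
    html = EVENT_ATTR.sub("", html)  # drop on* event-handler attributes
    return html

print(elide_dangerous('<p onclick="evil()">hi</p><script>x()</script>'))
```

This mirrors the select-all/view-selection-source behavior described in comment 1: after elision, no onclick attribute and no script element remain.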
Comment 6 • 17 years ago
Plugins should be disabled by the patch; see the change to nsObjectLoadingContent.cpp.
I'm not too worried about the serialize-and-reparse issue. Whatever we do is going to have some limitations, so the site will have to know not to be too reckless about the 'foreign' content in its page.
Bindings are currently unsolved in the patch; however, that should be pretty easy to fix by adding a check for the flag in the XBL code.
Comment 7 • 17 years ago (Reporter: sayrer)
(In reply to comment #6)
> Plugins should be disabled by the patch; see the change to
> nsObjectLoadingContent.cpp.
Well, one aspect of Gerv's proposal is selective enabling of plugins via the host parameter. We could definitely leave that out as a start, though.
> I'm not too worried about the serialize-and-reparse issue. Whatever we do is
> going to have some limitations, so the site will have to know not to be too
> reckless about the 'foreign' content in its page.
We wouldn't have this problem if sites weren't reckless. We can definitely play the blame game for many of these exploits, but I think we should make up for the failings of others if it's at all practical.
> Bindings are currently unsolved in the patch; however, that should be pretty
> easy to fix by adding a check for the flag in the XBL code.
Nice.
Comment 8 • 17 years ago
Actually, I realized that XBL is taken care of. While you can add bindings to your elements, the content in those bindings will have the same restrictions as content on your page, so it should be harmless.
Comment 9 • 16 years ago
Brandon has been trying to implement an evolution of this idea as an add-on (as a demonstration; without touching the parser he's somewhat limited). Getting this into 3.1 as an experimental feature for websites (using "X-..." headers) would help us move this toward a real specification and cross-browser adoption.
I suspect sayrer isn't working on this anymore, and if not, this bug could be reassigned to bsterne.
Flags: wanted1.9.1?
Comment 10 • 16 years ago (Reporter: sayrer)
(In reply to comment #9)
> I suspect sayrer isn't working on this anymore, and if not this bug could be
> reassigned to bsterne
Last we talked, I outlined the testcases that such a feature would need in order to land on the trunk. The bug that blocks this is bug 411791, and that's the one someone should take. No quibbles with the approach.
Comment 11 • 16 years ago
Marking wanted, but with relatively low priority for now, until we figure out which way we want to do this.
Flags: wanted1.9.1? → wanted1.9.1+
Priority: -- → P3
Comment 12 • 16 years ago
Yeah, we should give this to bsterne if he's working on it. The test cases probably need to come first, since getting them in shape will let us reason much more concretely about the threat model, and (as I said in the test case bug) likely enlist some help from the broader security community in making them -- and therefore our solution -- more comprehensive and robust.
Assignee: sayrer → bsterne
Status: ASSIGNED → NEW
Comment 13 • 16 years ago
Other systems provide the option of a whitelist or blacklist, such as the LIMIT directive in Apache. I'd have it default to a whitelist-style approach, but in some cases a whitelist could get very long and cumbersome to manage; in those cases a blacklist would be needed. It seems at first glance that once one is implemented, the other would be comparatively trivial: simply take the complement, essentially.

Either way, I'd much rather see this implemented in some simplistic but extensible way soon than watch it sit for the next nine years with no implementation. Even a simplistic implementation could avoid tens of thousands of exploits or more. Thus I wonder: is anyone still working on this? Brandon? If not, is there interest in a working but very simplistic patch? I might implement it if it was wanted, but I wouldn't implement it all that well.
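The "take the complement" observation above can be stated precisely: over a fixed universe of restrictable features, a whitelist and the blacklist built from its complement admit exactly the same set. A minimal sketch (the feature names are illustrative, not from any patch):

```python
# Illustrative only: a fixed universe of restrictable content features.
ALL_FEATURES = {"script", "style", "plugin", "frame", "img"}

def allowed_by_whitelist(feature, whitelist):
    """A whitelist permits only what it lists."""
    return feature in whitelist

def allowed_by_blacklist(feature, blacklist):
    """A blacklist permits everything it does not list."""
    return feature not in blacklist

# Taking the set complement converts one policy style into the other.
whitelist = {"img", "style"}
blacklist = ALL_FEATURES - whitelist
print(all(allowed_by_whitelist(f, whitelist) == allowed_by_blacklist(f, blacklist)
          for f in ALL_FEATURES))
```

The equivalence only holds when the universe of features is closed and known, which is exactly why the earlier comments worry about an enumerated blacklist going stale as new features are added.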
Comment 14 • 16 years ago (Assignee: bsterne)
Thanks for your interest. Yes, we are still actively pursuing this idea. One proposal that we are working on is Content Security Policy, which you can read more about here:
http://people.mozilla.org/~bsterne/content-security-policy/
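For context: that proposal was later shipped experimentally in Firefox under the X-Content-Security-Policy header (before standardization as Content-Security-Policy, per bug 493857, which this bug was duped to). A policy in the early draft syntax looked roughly like the following; the source values are illustrative examples, not taken from this bug:

```
X-Content-Security-Policy: allow 'self'; img-src *; script-src trustedscripts.example.com
```

The `allow` directive set the default policy for all content types, with per-type directives such as `img-src` and `script-src` overriding it; the standardized header later renamed `allow` to `default-src`.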