Closed
Bug 1430428
Opened 7 years ago
Closed 7 years ago
CORS Policy not enforced for xhtml and xul resources loaded using chrome:// url
Categories
(Firefox :: Untriaged, defect)
RESOLVED
INVALID
People
(Reporter: francois.lajeunesse.robert, Unassigned)
Details
Attachments
(1 file)
50.38 KB,
image/png
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:57.0) Gecko/20100101 Firefox/57.0
Build ID: 20180103231032
Steps to reproduce:
When any xhtml or xul resource is loaded using a chrome:// URL, it is possible to make a cross-origin request, using AJAX, which is not subject to CORS policy enforcement. It is therefore possible to retrieve sensitive user information.
To exploit this issue, one can trick a user into performing a self-XSS via the developer console. For example, load chrome://global/content/aboutAbout.xhtml in Firefox 57.0.4 (64-bit) and execute the JavaScript code shown in the cors_notenforce.png file from the developer console.
Other possible exploitation scenarios are:
- chrome.manifest poisoning to resolve a chrome URL toward a malicious xhtml or xul file;
- omni.ja resource poisoning to include malicious JavaScript code loaded by an xhtml or xul file;
- an xhtml or xul resource, loaded using a chrome URL, which includes a JavaScript resource accessible over an insecure channel.
Actual results:
By executing the JavaScript code shown in the cors_notenforce.png file, it was possible to retrieve the content of the Bugzilla profile page of an authenticated user (https://bugzilla.mozilla.org/user_profile).
The highlighted portion of cors_notenforce.png shows that it was possible to retrieve the name and email address of the currently authenticated user: "User Profile: FLR <francois.lajeunesse.robert@gmail.com>"
Expected results:
The expected result is that the content of the response from the Bugzilla profile page is not shown in the developer console, and that an error message is displayed instead, as happens when the same resource is loaded using a jar URL (jar:file:///C:/Program%20Files%20(x86)/Mozilla%20Firefox/omni.ja!/chrome/toolkit/content/global/aboutAbout.xhtml).
Comment 1•7 years ago
(In reply to FLR from comment #0)
> To exploit this issue, one can trick a user into performing a self-XSS via
> the developer console. For example, load
> chrome://global/content/aboutAbout.xhtml in Firefox 57.0.4 (64-bit) and
> execute the JavaScript code shown in the cors_notenforce.png file from the
> developer console.
This is basically saying "If I can run system- (ie chrome-)privileged JS, I can exploit the user". That's not an exploit, that is obvious. But you can't explicitly create a link to chrome pages that users can click, and there are paste warnings in the developer console by default (you may have turned them off and forgotten, of course, but they're there to start with), so this can't really be called an exploit in its own right.
Unrelated, we should stop making about:about privileged, but even when we do, there will be other pages that are privileged.
We should also stop letting you eval things in system-privileged pages if you haven't explicitly told the browser you want to be able to do that, but we already have a bug on file for that.
> Other possible exploitation scenarios are :
> - chrome.manifest poisoning to resolve a chrome URL toward a malicious xhtml
> or xul file;
> - omni.ja resource poisoning to include malicious JavaScript code loaded by
> an xhtml or xul file;
This requires write access to the app dir. At that point, you can replace the entirety of the browser code so how XMLHttpRequests are treated isn't really our main issue...
We no longer support non-system add-ons with a chrome.manifest file, and the system ones are generally installed in the app dir. Even when they're not, they should be subject to signature checks, so manipulating chrome.manifest there ought not to work - and again, CORS headers on XHRs are really not the worst part of that situation.
> - an xhtml or xul resource, loaded using a chrome URL, which includes a
> JavaScript resource accessible over an insecure channel.
Firefox doesn't ship any such resources, and even if we did you would need to convince the user to load them.
I think this bug should be closed as INVALID and the security flags removed. Assuming you get to run privileged script is essentially "begging the question" - it'd be a security exploit if you found a new and unexpected way to run system-privileged code, but we're not going to break non-CORS requests from chrome: contexts because that would break our own code and be far and away in the land of diminishing returns. You could just read the data from cache if you have system privileges, and not make any requests at all. Dan/Al, can you confirm?
Flags: needinfo?(dveditz)
Flags: needinfo?(abillings)
Comment 2•7 years ago
Oh, and if you wanted to get the user to eval things so you can read their bugzilla page, it would be much simpler to just link directly to bugzilla and have them paste the code into the devtools on bugzilla... at least links to bugzilla will work and users can just click them, unlike links to about:about...
> This is basically saying "If I can run system- (ie chrome-)privileged JS, I can exploit the user". That's not an exploit, that
> is obvious. But you can't explicitly create a link to chrome pages that users can click, and there are paste warnings in the
> developer console by default (you may have turned them off and forgotten, of course, but they're there to start with), so
> this can't really be called an exploit in its own right.
CORS should ALWAYS be enforced. Especially when running JS in the developer console, even if pasting protection has been disabled.
Moreover, CORS is enforced for some chrome-privileged resources such as:
- about:license
- resource://cloudstorage/global/license.html
- about:buildconfig
- resource://cloudstorage/global/buildconfig.html
- resource://cloudstorage/global/about.xhtml
- about:cache
- about:checkerboard
- etc.
but not for:
- chrome://global/content/license.html
- chrome://global/content/buildconfig.html
- chrome://global/content/aboutCheckerboard.xhtml
- about:
- about:about
- about:accounts
- about:addons
- about:config
- about:crashes
- about:debugging
- etc.
And even stranger, CORS is enforced in the developer console for:
- chrome://global/skin/in-content/common.css
but not for:
- chrome://devtools/content/aboutdebugging/aboutdebugging.css
> This requires write access to the app dir. At that point, you can replace the entirety of the browser code so how
> XMLHttpRequests are treated isn't really our main issue...
The browser executable itself is signed and cannot "in theory" be modified by a third party. Chrome.manifest and omni.ja can be. They are a perfect location for persistent malware to spy on users without having to always run in privileged mode and dump the memory to retrieve sensitive user information. Once the malware implant is there, having the user open a new window of height and width 0 toward, for example, chrome://global/content/license.html would be sufficient to act as the user on every website he is currently logged in to.
(In reply to :Gijs from comment #2)
> Oh, and if you wanted to get the user to eval things so you can read their
> bugzilla page, it would be much simpler to just link directly to bugzilla
> and have them paste the code into the devtools on bugzilla... at least links
> to bugzilla will work and users can just click them, unlike links to
> about:about...
So you're saying that CORS policy is useless and reflected XSS issues are not issues ... In addition to retrieving content from a response, not having CORS implies that NO preflight requests are made for PUT, DELETE, CONNECT, OPTIONS and PATCH requests. Therefore if you are connected to a WebDAV server it is possible to dump everything to a remote server and then delete all content!
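For context, a hedged sketch of the preflight rule being invoked here, per the Fetch spec rather than Firefox's implementation: a cross-origin request only skips the OPTIONS preflight when its method is CORS-safelisted (GET, HEAD, POST) and it carries only CORS-safelisted request headers. A minimal standalone illustration:

```javascript
// Sketch of the Fetch spec's CORS preflight rule: a cross-origin request
// needs an OPTIONS preflight unless its method is CORS-safelisted (GET,
// HEAD, POST) and it carries only CORS-safelisted request headers.
// This is an illustration, not Firefox's actual networking code.
const SAFELISTED_METHODS = new Set(["GET", "HEAD", "POST"]);
const SAFELISTED_HEADERS = new Set([
  "accept", "accept-language", "content-language", "content-type",
]);

function needsPreflight(method, headerNames = []) {
  if (!SAFELISTED_METHODS.has(method.toUpperCase())) return true;
  return headerNames.some(h => !SAFELISTED_HEADERS.has(h.toLowerCase()));
}

console.log(needsPreflight("GET"));                        // false
console.log(needsPreflight("PUT"));                        // true (WebDAV upload)
console.log(needsPreflight("DELETE"));                     // true (WebDAV delete)
console.log(needsPreflight("POST", ["X-Requested-With"])); // true (custom header)
```

Note that Content-Type is only safelisted for a few values (e.g. text/plain); the sketch ignores that detail for brevity.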
I forgot to add that credentials were sent even though the xhr.withCredentials property had not been set to true, and its default value is false (see https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials).
Also, without CORS it's possible to access the content of files located on the local filesystem, or on a foreign filesystem, using a file:// URI:
(function () {
    var xhr = new XMLHttpRequest();
    xhr.onreadystatechange = function () {
        console.log("State changed");
        if (xhr.readyState === 4 && xhr.status === 200) {
            console.log(xhr.responseText);
        }
    };
    xhr.open("GET", "file:///C:/Windows/win.ini");
    xhr.send();
    console.log("Sent");
})();
Comment 7•7 years ago
(In reply to FLR from comment #3)
> > This is basically saying "If I can run system- (ie chrome-)privileged JS, I can exploit the user". That's not an exploit, that
> > is obvious. But you can't explicitly create a link to chrome pages that users can click, and there are paste warnings in the
> > developer console by default (you may have turned them off and forgotten, of course, but they're there to start with), so
> > this can't really be called an exploit in its own right.
>
> CORS should ALWAYS be enforced. Especially when running JS in the developer
> console, even if pasting protection has been disabled.
This doesn't really make sense. The implication would be that if you typed a URL in the URL bar, we would have to send it as a CORS request and send chrome://browser/content/browser.xul as the Origin string, or something. That would break websites.
Internally, there is no difference between running stuff in the devtools from a page in a tab that has system/chrome privileges, to doing the same request "as a browser" at the top level of the browser.
We could try to make such a difference, but that wouldn't really help. The JS in the content process can simply send messages to the parent instructing it to load any URL it liked in a separate tab, that was same-origin with the desired site, and then execute its requests in that scope.
You don't seem to get the fundamental point here: once you run system-privileged JS, especially in the parent process, you're stuffed. There's no point "enforcing" anything.
Yes, sandbox hardening is a thing, but it doesn't really apply here. Websites in the sandbox obviously need to be able to initiate non-CORS requests to themselves, and so the content process needs to have that ability, and thus compromising that process means you gain that ability.
> Moreover, CORS is enforced for some chrome-privileged resources such as:
> - about:license
> - resource://cloudstorage/global/license.html
> - about:buildconfig
> - resource://cloudstorage/global/buildconfig.html
> - resource://cloudstorage/global/about.xhtml
> - about:cache
> - about:checkerboard
*None* of these are chrome-privileged. You can check by opening them and checking `gBrowser.contentPrincipal.isSystemPrincipal` in the browser (not 'plain' devtools) console (cmd/ctrl-shift-j).
Not all about: pages, and no resource: pages, are chrome-privileged.
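As an aside for anyone reproducing the suggested check: `gBrowser` exists only in Firefox's privileged Browser Console scope, so a guard is needed if the snippet is pasted anywhere else. A minimal sketch:

```javascript
// Comment 7's suggested check, from Firefox's Browser Console
// (Ctrl/Cmd-Shift-J). `gBrowser` only exists in that privileged scope,
// so we guard for it; outside Firefox this just reports it is unavailable.
if (typeof gBrowser !== "undefined") {
  // True only for genuinely chrome-privileged pages (e.g. about:config);
  // false for pages like about:license that merely look privileged.
  console.log(gBrowser.contentPrincipal.isSystemPrincipal);
} else {
  console.log("Not running inside Firefox's Browser Console");
}
```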
> > This requires write access to the app dir. At that point, you can replace the entirety of the browser code so how
> > XMLHttpRequests are treated isn't really our main issue...
>
> The browser executable itself is signed and cannot "in theory" be modified
> by a third party.
They can be replaced by another signed binary. I'm also not sure how (if) signing works on Linux, but in any case, it doesn't matter, because:
> Chrome.manifest and omni.ja can be.
Yes, but if you're replacing omni.ja the user has already lost - you can do whatever you like, CORS headers aren't going to stop you... You can just replace all the JS in the browser, make completely custom requests, call arbitrary other applications on the OS with arbitrary parameters... CORS headers are literally the least of your concerns at that point.
> (In reply to :Gijs from comment #2)
> > Oh, and if you wanted to get the user to eval things so you can read their
> > bugzilla page, it would be much simpler to just link directly to bugzilla
> > and have them paste the code into the devtools on bugzilla... at least links
> > to bugzilla will work and users can just click them, unlike links to
> > about:about...
>
> So you're saying that CORS policy is useless and reflected XSS issues are
> not issues ...
No, I'm saying your premise here is you can:
1) convince the user to open arbitrary URL
2) without being able to remotely modify the content of the page they open, convince them to open the devtools
3) without being able to control devtools contents on the resulting page, convince them to bypass the paste protection and/or input your custom JS.
All of those are required for your "exploit" using a chrome-privileged-page-in-a-tab.
All of those are *also* required to do exactly the same on a bugzilla page, which would also work. So even if we "fixed" CORS for system-privileged things, your "exploit" would still apply - but directly to the third-party page.
Worse, while it's impossible to frame chrome-privileged pages, if websites aren't served with the appropriate CSP / x-frame-options headers etc., you could frame them, which would make it a lot easier to convince the user to do things (because you can tell them what to do on your page that they're still looking at!).
> In addition to retrieving content from a response, not having
> CORS implies that NO preflight requests are made for PUT, DELETE, CONNECT,
> OPTIONS and PATCH requests. Therefore if you are connected to a WebDAV server
> it is possible to dump everything to a remote server and then delete all
> content!
Sure, and again, if you can convince the user to load an arbitrary page (like a chrome-privileged one shipped with Firefox) and execute JS on it, you can convince them to do this on a page served by said webdav server, which would have exactly the same effect.
(In reply to :Gijs from comment #7)
> This doesn't really make sense. The implication would be that if you typed a URL in the URL bar, we would have to send it as
> a CORS request and send chrome://browser/content/browser.xul as the Origin string, or something. That would break websites.
> Internally, there is no difference between running stuff in the devtools from a page in a tab that has system/chrome
> privileges, to doing the same request "as a browser" at the top level of the browser.
> We could try to make such a difference, but that wouldn't really help. The JS in the content process can simply send messages
> to the parent instructing it to load any URL it liked in a separate tab, that was same-origin with the desired site, and then
> execute its requests in that scope.
> You don't seem to get the fundamental point here: once you run system-privileged JS, especially in the parent process, you're
> stuffed. There's no point "enforcing" anything.
I get your point about system-privileged JS vs "as a browser". Still, even if it's "by design", personally I find the impact of tricking the user in a system-privileged JS tab worse than doing it on, let's say, a Bugzilla tab as you described. Why? Because when you do it in system-privileged JS, you can instantly reach every single website the user is connected to. By contrast, doing it on Bugzilla only gives you access to Bugzilla content. From a business perspective, I think this is really bad, since one could harvest any internal website the user is connected to.
Comment 9•7 years ago
(In reply to FLR from comment #8)
> I get your point about system-privileged JS vs "as a browser". Still, even
> if it's "by design", personally I find the impact of tricking the user in a
> system-privileged JS tab worse than doing it on, let's say, a Bugzilla tab
> as you described. Why? Because when you do it in system-privileged JS, you
> can instantly reach every single website the user is connected to.
Sure, but you could do that even if we restricted CORS in some way. I don't know how many other ways I can explain this.
- if you're after a specific target, and CORS were restricted on chrome scopes, you could just load the specific target instead and do a same-origin request
- if the premise is that you run system-privileged script, and CORS were restricted, you'd just get a list of history and use XPCOM APIs to fetch cookies, credentials etc., and send them to evil.com. Or you'd open tabs for each site from history and do same-origin requests there. You could still do whatever you liked.
Restricting CORS in chrome scopes does nothing to help anyone here.
The real issue, if anything, is going to be with making it harder to trick users into running arbitrary JS on other pages. As I noted all the way back in comment #1, this is already on file.
Status: UNCONFIRMED → RESOLVED
Closed: 7 years ago
Resolution: --- → INVALID
Comment 10•7 years ago
Note that if we make this public we need to hide comment #6, which is really about an unrelated (but more serious) bug...
Updated•7 years ago
Group: firefox-core-security
Flags: needinfo?(dveditz)
Flags: needinfo?(abillings)
Comment 11•7 years ago
CORS is designed for the interaction between web pages. It does not govern the behavior internal to user agents. That some parts of the Firefox browser "look like" web technology (chrome:// URLs) doesn't change the fact that they are browser internals.