Bypass directory traversal protection in chrome: URLs.
Categories
(Core :: DOM: Core & HTML, defect)
People
(Reporter: dveditz, Unassigned)
Details
(Keywords: sec-moderate, Whiteboard: btpp-fixlater)
Comment 2•9 years ago
Comment 4•6 years ago
Hi Gijs, from my quick glance over the bug history, bug 1432870 looks very related to this one. Could you help me understand whether there's any action left here after bug 1432870 was fixed? Also, I'm not sure DOM: Core & HTML is the best component; do you have an idea for the proper component? Thank you.
Comment 5•6 years ago
(In reply to Hsin-Yi Tsai [:hsinyi] from comment #4)
> Hi Gijs, from my quick glance over the bug history, bug 1432870 looks very related to this one. Could you help me understand whether there's any action left here after bug 1432870 was fixed?
In principle, yes, because it allows bypassing a security mitigation. The fix in bug 1432870 only removes the ability to use / as the start of the relative path after the initial chrome://[packagename]/content/ preface. It doesn't affect the use of .. in a later path component.
> Also, I'm not sure DOM: Core & HTML is the best component; do you have an idea for the proper component? Thank you.
I think it's in this component because the normalization of the URL happens in fetch/URL code. If we want that normalization to happen differently or not at all for chrome URIs (if that's even reasonably possible) then that's probably where it needs fixing.
However... I'm not sure there's actually a security issue here. As far as I can tell, the ../ segments get removed when parsing the string input to fetch as a URL, which means we will try to load the "canonical" URL anyway, and it is that canonical URL which is run through the security checks - and those will fail.
Put differently, web content can load images, styles and scripts from web accessible packages without any "tricks", cf. https://jsbin.com/jiyalorimu/edit?html,output which loads chrome://browser/skin/info.svg as an image. (Fixing this is bug 443400 - and I think there's a more recent copy of that bug somewhere)
But contentaccessible as a "permission" is chrome-package-wide. That is, it applies to all of chrome://browser/ at a time. And using ../ does not let you remove browser and replace it with e.g. devtools, like what happened in bug 1432870, because browser is considered the "host" in the URL parsing sense. So you cannot traverse your way out of the specific chrome package.
You can't even traverse your way back to exploiting bug 1432870, because something like chrome://browser/content/foo/..// gets normalized to chrome://browser/content// and then we apply the checks from bug 1432870, so the URL parser still throws, as it should.
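The containment argument above can be illustrated outside of Gecko: in any RFC 3986-style parser, dot segments are resolved within the path component only, so ../ can never alter the authority ("host") part of the URL. A minimal sketch using Python's urllib, with a hypothetical http host standing in for a chrome package name (urllib does not treat chrome:// as a hierarchical scheme, so http is used purely for illustration):

```python
from urllib.parse import urljoin, urlsplit

# Hypothetical host standing in for a chrome package name like "browser".
base = "http://browser.example/content/a/"

# Attempt to traverse out of the "package" with excess ../ segments.
resolved = urljoin(base, "../../../x")
print(resolved)  # http://browser.example/x

# The dot segments were consumed during resolution, clamped at the path
# root; the authority ("host") component is untouched.
assert urlsplit(resolved).netloc == "browser.example"
assert "/.." not in resolved
```

The extra ../ segments beyond the path root are simply discarded, mirroring why ../ cannot turn chrome://browser/ into chrome://devtools/.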
I expect that still doesn't necessarily mean we can get rid of the checks in nsChromeRegistry::Canonify because not all paths are guaranteed to go through the URL/fetch-related normalization. But perhaps this can be resolved as invalid?
Boris and Dan, can you check my logic here? I feel like I'm probably missing something but I'm not sure what. :-)
Comment 6•5 years ago
That all sounds possibly correct to me.
Per the URL spec, /../ in paths is resolved at URL parse time. So by the time there's an nsIURI or nsIURL around, there should be no /../ in the URL path, afaict.
What I'm not clear on is whether we implement that part of the spec, either in Necko or in rust-url (and which parser we use for chrome:// URLs, for that matter).
Comment 7•5 years ago
(In reply to Boris Zbarsky [:bzbarsky, bz on IRC] from comment #6)
> That all sounds possibly correct to me.
> Per the URL spec, /../ in paths is resolved at URL parse time. So by the time there's an nsIURI or nsIURL around, there should be no /../ in the URL path, afaict.
> What I'm not clear on is whether we implement that part of the spec, either in Necko or in rust-url (and which parser we use for chrome:// URLs, for that matter).
Chrome URLs use the nsStandardURL parser, which resolves all instances of ./, ../, %2e./, etc. by calling net_CoalesceDirs.
So I think Gijs' arguments in comment 5 hold for chrome URLs.
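For readers unfamiliar with net_CoalesceDirs, its effect is essentially the remove_dot_segments algorithm from RFC 3986 section 5.2.4. A simplified Python sketch (not Necko's actual code, and ignoring the %2e-encoded forms the real function also coalesces) showing why comment 5's chrome://browser/content/foo/..// example normalizes to a path ending in content// before any chrome-specific checks run:

```python
def remove_dot_segments(path: str) -> str:
    """Simplified RFC 3986 section 5.2.4 dot-segment removal.

    A sketch of the normalization net_CoalesceDirs performs; unlike the
    real code, it does not handle %2e-encoded dots.
    """
    output = []
    while path:
        if path.startswith("../"):
            path = path[3:]
        elif path.startswith("./"):
            path = path[2:]
        elif path.startswith("/./"):
            path = "/" + path[3:]
        elif path == "/.":
            path = "/"
        elif path.startswith("/../") or path == "/..":
            path = "/" + path[4:]
            if output:
                output.pop()  # /../ consumes the preceding segment
        elif path in (".", ".."):
            path = ""
        else:
            # Move the first segment (through the next "/") to the output.
            cut = path.find("/", 1)
            if cut == -1:
                output.append(path)
                path = ""
            else:
                output.append(path[:cut])
                path = path[cut:]
    return "".join(output)

# Comment 5's example: the ../ collapses but the empty segment survives,
# so the bug 1432870 check still sees the double slash and can reject it.
print(remove_dot_segments("/content/foo/..//"))      # /content//

# Traversal is clamped at the path root; it never reaches the host.
print(remove_dot_segments("/content/a/../../../x"))  # /x
```

Because the algorithm only ever pops segments it has already emitted, surplus ../ segments at the root are dropped rather than escaping the path.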
Comment 8•5 years ago
The original issue bug 413250 was solving was for "flat" add-ons (unzipped to disk) using file directory traversal to get out of the add-on and into other arbitrary parts of your disk. Things seem to be different now (comment 7), but back then we were seeing chrome: URLs being mapped to their file: equivalents before the dots were resolved by the URL parser. That was 12 years ago and a lot has changed!
If we can't traverse out of the chrome component (the syntactic "host" in a chrome URL) anymore then this can't be a problem.