Closed Bug 1420049 (CVE-2018-5118) Opened 7 years ago Closed 7 years ago

Activity Stream screenshots can be coerced to (attempt to) load File:// URIs

Categories

(Firefox :: New Tab Page, defect, P1)

defect

Tracking


RESOLVED FIXED
Firefox 59
Iteration:
1.25
Tracking Status
firefox-esr52 --- unaffected
firefox57 --- wontfix
firefox58 + fixed
firefox59 + fixed

People

(Reporter: pauljt, Assigned: ursula)

References

Details

(Keywords: regression, sec-moderate, Whiteboard: [adv-main58+][post-critsmash-triage])

Attachments

(1 file, 3 obsolete files)

In bug 1393924 we introduced functionality to pull image references from meta tags, so that we show better screenshots on the new tab page. I'm not sure if the screenshot code is new or existing, but as a consequence, web pages can coerce the content process into attempting a load of a local file. Luckily the sandbox stops the load, but it seems bad that the load is even attempted.

STR:
Create a page with a tag like:
<meta name="twitter:image" content="file:///Users/paul/Pictures/dog.jpg">

Expected results:
Nothing loads.

Actual results:
Error message observable in the system console:
getScreenshot error: Error: page-thumbnail:error

After turning on sandbox logging (set security.sandbox.logging.enabled to true), I noticed the content process failing to access a file, using the OSX console:
"SandboxViolation: plugin-container(39465) deny file-read-metadata /Users/paul/Pictures/dog.jpg"

The code in ContentMetaHandler [1] doesn't seem to do any validation of the data pulled out of the content page. That sounds dangerous to me, but I'm not sure if the expectation is that sanitization/validation happens later.

[1] https://searchfox.org/mozilla-central/source/browser/modules/ContentMetaHandler.jsm#74

== Impact ==
Allowing web pages to coerce chrome-privileged requests (I assume this is chrome-privileged, since the file load occurs) is dangerous. I don't _think_ there is any way to exploit this directly, since a) the content doesn't come back to the web and b) sandboxing adds a layer of defense. But not fixing this seems like it would be asking for future trouble if this bug could be chained with something else.
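For context, a hedged paraphrase of the pattern at [1] (attribute and variable names taken from the review excerpts later in this bug; not the exact tree code):

  // Hedged sketch of the unvalidated path being criticized:
  const content = metaTag.getAttributeNS(null, "content");
  const value = new URL(content, url).href; // url = the page's address; no checkLoadURI-style check
  // `value` (possibly a file:, chrome:, moz-icon:, ... URL) is stored in
  // Places as preview_image_url and may later be requested by the thumbnailer.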
Places stores all sorts of URI specs, including file:, place:, and about: schemes, so it might be acceptable to store potentially unused file: preview_image_url entries (ones that either activity stream or the thumbnailer decides not to use). The saving of the file URI as preview_image_url to Places happens before any attempt to thumbnail it, but not all preview_image_url entries result in a thumbnail request, which would happen at some later time from some other content process.
(In reply to Ed Lee :Mardak from comment #1)
> Places stores all sorts of uri specs including file: place: about: schemes,
> so it might be acceptable to store potentially unused file:
> preview_image_url entries (where either activity stream or the thumbnailer
> decides not to use)

I think the defense-in-depth solution here would be:

- add a checkLoadURI check in https://searchfox.org/mozilla-central/rev/33c90c196bc405e628bc868a4f4ba29b992478c0/browser/modules/ContentMetaHandler.jsm#98-105, and ignore tags that have URLs that the webpage shouldn't be allowed to link to.
- in AS / the thumbnailer, where we read/load preview_image_url entries, ensure that the request has the correct triggering/loadingPrincipal, so that gecko/necko/caps automatically says "no" a second time.

But we really ought to do at least one of these. file: things are sandboxed, but chrome: (and moz-icon: and ...) things are not, and just allowing any old request might well bite us at some point, even if right now all we do is try to load it in an image element or as a background-image.
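A minimal sketch of the first suggestion (mayLinkTo is a hypothetical name; assumes Services.scriptSecurityManager is in scope, and the actual patch may differ):

  // Reject meta-tag URLs the page shouldn't be allowed to link to.
  function mayLinkTo(pagePrincipal, urlString) {
    const ssm = Services.scriptSecurityManager;
    try {
      ssm.checkLoadURIStrWithPrincipal(pagePrincipal, urlString,
                                       ssm.DISALLOW_INHERIT_PRINCIPAL);
      return true;  // caps allows the page to link to this URL
    } catch (ex) {
      return false; // caps said no: drop this meta tag's URL
    }
  }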
We do have bug 1397800 on our radar about being smarter when handling meta tags. We could potentially do this work there, along with a bunch of other improvements to handle these kinds of cases gracefully.
Priority: -- → P1
Assignee: nobody → usarracini
Sounds good, Ursula - is bug 1397800 already prioritized? If not, could I ask that it be considered as soon as possible?

Frederik Braun pointed out that one tangible attack here is to inject a UNC network path (\\machine\share\). This could potentially be used to induce the victim's machine to connect remotely over an insecure protocol (e.g. SMB) and enable a number of attacks. Marking as sec-high for this reason.

It's also possible that this bug is the cause of bug 1419790, which is why I was looking at this code in the first place.
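To make the UNC variant concrete, a hedged illustration (hypothetical host and path):

  // In a page: <meta name="twitter:image" content="file://attacker/share/pixel.jpg">
  const u = new URL("file://attacker/share/pixel.jpg");
  // u.host === "attacker"; on Windows a file: URL with a host denotes the UNC
  // path \\attacker\share\pixel.jpg, so merely attempting the load can trigger
  // an outbound SMB connection to a machine the attacker controls.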
Keywords: sec-high
See Also: → 1419790
f? just as a sanity check for the principal stuff (make sure I'm doing that right)
Attachment #8931487 - Flags: review?(gijskruitbosch+bugs)
Attachment #8931487 - Flags: feedback?(ptheriault)
Comment on attachment 8931487 [details] [diff] [review]
Bug 1420049 - Do not store preview images with incorrect principals.patch

Review of attachment 8931487 [details] [diff] [review]:
-----------------------------------------------------------------

The use of checkLoadURIStrWithPrincipal looks great. However, I do have some other comments, see below.

Note that as a sec-high, the tests would have to land separately, and you'll need sec-approval before landing this (once it has r+ etc.).

Are you still planning to also pass a principal to the loadURI call inside the thumbnailer code? Did you run into issues with that that I could help with? (Only) When it comes to security, I'm partial to a belt + braces approach...

::: browser/base/content/test/metaTags/browser.ini
@@ +1,4 @@
> [DEFAULT]
> support-files =
>   head.js
> +  bad_meta_tags.html

Nit: you can add this specifically to the test where you use a particular file. This makes it easier to (re)move tests if that's ever necessary. So:

[browser_bad_meta_tags.js]
support-files = bad_meta_tags.html

(it'll "inherit" the head.js from [DEFAULT])

::: browser/base/content/test/metaTags/browser_bad_meta_tags.js
@@ +1,5 @@
> +/* Any copyright is dedicated to the Public Domain.
> + * http://creativecommons.org/publicdomain/zero/1.0/
> + */
> +
> +const URL = "https://example.com/browser/browser/base/content/test/metaTags/bad_meta_tags.html";

Nit: I like relative URLs to make tests portable, see https://dxr.mozilla.org/mozilla-central/source/toolkit/components/reader/test/browser_readerMode.js for an example.

Nit: this clobbers the "URL" global, so I'd use "TEST_URL" or something as a variable name, but I don't really care too much.

@@ +16,5 @@
> +
> +  // Wait until places has stored the page info
> +  const pageInfo = await waitForPageInfo(URL);
> +  is(pageInfo.description, "description", "did not collect a og:description because meta tag was malformed");
> +  is(pageInfo.previewImageURL.href, "twitter:image", "did not collect og:image because of invalid loading principal");

Sorry, so are we setting .href for something to "twitter:image"? That doesn't seem right...

::: browser/modules/ContentMetaHandler.jsm
@@ +48,5 @@
> +function checkLoadURIStr(url) {
> +  try {
> +    let ssm = Services.scriptSecurityManager;
> +    let principal = ssm.createNullPrincipal({});
> +    ssm.checkLoadURIStrWithPrincipal(principal, url, ssm.DISALLOW_INHERIT_PRINCIPAL);

So, bizarre protocols are allowed (hence the twitter:image example in browser_bad_meta_tags.js).

Would you be opposed to enforcing that the target URL must be an http/https URI? If that sounds OK, you could modify this method to take a URL object and, before calling checkLoadURIStrWithPrincipal, do something like:

if (!["http:", "https:"].includes(urlObj.protocol)) { return false; }

@@ +100,5 @@
>  };
>  
> +  // Malformed meta tag - do not store it
> +  const content = metaTag.getAttributeNS(null, "content");
> +  if (!content || content === "") {

If the content is really === "", then it will also match the first half of the condition, so this can just be:

if (!content) {

@@ +112,3 @@
>    } else if (shouldExtractMetadata(PREVIEW_IMAGE_RULES, tag, entry.image)) {
>      // Extract the preview image
> +    const value = new URL(content, url).href;

new URL can throw if the contents aren't a valid URL. I guess we should catch errors?
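Putting the review suggestions together, a hedged sketch of what the revised helper might look like (the final landed patch may differ):

  function checkLoadURIStr(url) {
    try {
      const urlObj = new URL(url); // throws on unparseable input
      // Per the review above: only allow http(s) targets.
      if (!["http:", "https:"].includes(urlObj.protocol)) {
        return false;
      }
      const ssm = Services.scriptSecurityManager;
      // A null principal is maximally restrictive: anything it may not
      // link to is rejected.
      const principal = ssm.createNullPrincipal({});
      ssm.checkLoadURIStrWithPrincipal(principal, urlObj.href,
                                       ssm.DISALLOW_INHERIT_PRINCIPAL);
      return true;
    } catch (ex) {
      return false;
    }
  }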
Attachment #8931487 - Flags: review?(gijskruitbosch+bugs) → feedback+
Comment on attachment 8931487 [details] [diff] [review]
Bug 1420049 - Do not store preview images with incorrect principals.patch

Review of attachment 8931487 [details] [diff] [review]:
-----------------------------------------------------------------

This looks safe, but I'm interested to know why we use a null principal here.

::: browser/modules/ContentMetaHandler.jsm
@@ +47,5 @@
>  
> +function checkLoadURIStr(url) {
> +  try {
> +    let ssm = Services.scriptSecurityManager;
> +    let principal = ssm.createNullPrincipal({});

Is this the correct principal? I was expecting that we would use the page's principal here. Are we using a null principal because, at this point, we don't have access to the actual principal of the page which contained the meta tag? I guess that is OK - but I think this will be more restrictive than if we passed the page's principal - I wonder if this will break the feature? (Though I'd probably trust Gijs's judgement here, and he didn't flag this line.)
Attachment #8931487 - Flags: feedback?(ptheriault)
Attachment #8931487 - Flags: feedback+
Just a note re: the sec-high here - that may actually be a bit overrated, but I was being conservative because:

- this bug allows a webpage to trigger a file load
- I _think_ this could be used with a UNC path to cause the user's machine to make an insecure load, EXCEPT that
- our sandbox blocks this (at least on OSX)

The reason why I marked this high, despite the sandbox, is that read access isn't totally locked down on Windows, and I don't know how the Windows sandbox interacts with UNC paths (what integrity level would they get? Something I need to test...). Even if it is possible, I think it would actually be very difficult to exploit, but I'm just being cautious, especially since the fix is relatively straightforward.

TL;DR: Al, when you are looking at sec-approval, I don't think we have to be TOO cautious about this, and I also think it's pretty unlikely that this would be exploited, so I wouldn't suggest we rush a fix out for this. (But happy to be overruled if I'm missing an attack vector here.)
(In reply to Paul Theriault [:pauljt] from comment #7)
> > +function checkLoadURIStr(url) {
> > +  try {
> > +    let ssm = Services.scriptSecurityManager;
> > +    let principal = ssm.createNullPrincipal({});
> 
> Is this the correct principal? I was expecting that we would use the page's
> principal here. Are we using null principal, because at this point we don't
> have access to the actual principal of the page which contained the meta
> tag? I guess that is OK - but I think this will be more restrictive, than if
> we passed the page's principal - I wonder if this will break the feature?
> (Though I'd probably trust Gijs judgement here, and he didnt flag this line)

Yeah, this is more restrictive. We do actually have the original principal, I think, from the meta tag's nodePrincipal (in the caller of this utility method, so we could pass it in). Anyone can link to http/https images, I think, including null principals, so I think being more restrictive is OK? We could use the node principal if that seems preferable.
(In reply to :Gijs from comment #9)
> Yeah, this is more restrictive. We do actually have the original principal,
> I think, from the meta tag's nodePrincipal (in the caller of this utility
> method, so we could pass it in). Anyone can link to http/https images, I
> think, including null principals, so I think being more restrictive is OK?
> We could use the node principal if that seems preferable.

No, totally agree with more restrictive, just wanted to make sure I understood the decision, thanks.
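For comparison, a hedged sketch of the alternative just discussed - checking against the meta tag's nodePrincipal instead of a null principal (checkLoadURIWithNodePrincipal is a hypothetical name; per the discussion above, the patch keeps the more restrictive null principal):

  function checkLoadURIWithNodePrincipal(metaTag, url) {
    const ssm = Services.scriptSecurityManager;
    try {
      // The page's own principal: less restrictive than a null principal,
      // e.g. a file: page would be allowed to reference other file: URLs.
      ssm.checkLoadURIStrWithPrincipal(metaTag.nodePrincipal, url,
                                       ssm.DISALLOW_INHERIT_PRINCIPAL);
      return true;
    } catch (ex) {
      return false;
    }
  }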
(In reply to :Gijs from comment #6)
> Note that as a sec-high, the tests would have to land separately and you'll
> need sec-approval before landing this (once it has r+ etc.)

Ah, should I revert the changes to the tests, file a follow-up bug, and land them after this lands?

> Are you still planning to also pass a principal to the loadURI call inside
> the thumbnailer code? Did you run into issues with that that I could help
> with? (Only) When it comes to security, I'm partial to a belt + braces
> approach...

I can do that as well. I thought I could do that in a separate patch that I could test, mostly to keep this one contained to the metadata parsing module. Does that sound ok?

> @@ +16,5 @@
> > +
> > +  // Wait until places has stored the page info
> > +  const pageInfo = await waitForPageInfo(URL);
> > +  is(pageInfo.description, "description", "did not collect a og:description because meta tag was malformed");
> > +  is(pageInfo.previewImageURL.href, "twitter:image", "did not collect og:image because of invalid loading principal");
> 
> Sorry, so are we setting .href for something to "twitter:image"? That
> doesn't seem right...

This was just the easiest way to test whether it got the correct content for the tag it parsed. Technically new URL("twitter:image") is actually valid, so checking the href on it works for the sake of the test.

> ::: browser/modules/ContentMetaHandler.jsm
> @@ +48,5 @@
> > +function checkLoadURIStr(url) {
> > +  try {
> > +    let ssm = Services.scriptSecurityManager;
> > +    let principal = ssm.createNullPrincipal({});
> > +    ssm.checkLoadURIStrWithPrincipal(principal, url, ssm.DISALLOW_INHERIT_PRINCIPAL);
> 
> So, bizarre protocols are allowed (hence the twitter:image example in
> browser_bad_meta_tags.js).
> 
> Would you be opposed to enforcing that the target URL must be an http/https
> URI? If that sounds OK, you could modify this method to take a URL object
> and, before calling checkLoadURIStrWithPrincipal, do something like:
> 
> if (!["http:", "https:"].includes(urlObj.protocol)) { return false; }

Sure, can do.
(In reply to Ursula Sarracini (:ursula) from comment #11)
> (In reply to :Gijs from comment #6)
> > Note that as a sec-high, the tests would have to land separately and
> > you'll need sec-approval before landing this (once it has r+ etc.)
> 
> Ah, should I revert the changes to the tests, file a follow-up bug, and
> land them after this lands?

I'd just split out the test changes to a separate patch and attach it separately, then consult with the security team about when to land.

> > Are you still planning to also pass a principal to the loadURI call
> > inside the thumbnailer code? Did you run into issues with that that I
> > could help with? (Only) When it comes to security, I'm partial to a
> > belt + braces approach...
> 
> I can do that as well. I thought I could do that in a separate patch that
> I could test, mostly to keep this one contained to the metadata parsing
> module. Does that sound ok?

Sure!
Attached patch Bug 1420049.patch (obsolete) — Splinter Review
Updated patch with Gijs's feedback. Contains some test fixes to get them passing again. The remaining tests are attached in a secondary patch.
Attachment #8931487 - Attachment is obsolete: true
Attached patch Tests for Bug 1420049.patch (obsolete) — Splinter Review
Comment on attachment 8931745 [details] [diff] [review]
Bug 1420049.patch

[Security approval request comment]
How easily could an exploit be constructed based on the patch?
Not easily.

Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
Not really.

Which older supported branches are affected by this flaw?
None - this was introduced in release 57.

If not all supported branches, which bug introduced the flaw?
Bug 1393924.

Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
N/A

How likely is this patch to cause regressions; how much testing does it need?
Not likely to regress - tests are attached in a secondary patch.
Attachment #8931745 - Flags: sec-approval?
Attachment #8931745 - Flags: review?(gijskruitbosch+bugs)
Attachment #8931746 - Flags: review?(gijskruitbosch+bugs)
(In reply to Ursula Sarracini (:ursula) from comment #16)
> Try run:
> https://treeherder.mozilla.org/#/jobs?repo=try&revision=4ab221889917e309631330b02e876f057f33078f

OK, so, um, it would have been a good idea to read https://wiki.mozilla.org/Security/Bug_Approval_Process. Please don't push patches like this to try (which is public), certainly not patches that have full descriptions of what the issue is. People watch our checkins, especially for sec bugs landing. :-(
Maybe try should reject patches with a secbug in them.
Attachment #8931745 - Flags: review?(gijskruitbosch+bugs) → review+
Sorry, I had no idea!! (first security bug). It would have been helpful to know these resources existed beforehand.
Attachment #8931746 - Flags: review?(gijskruitbosch+bugs) → review+
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #18)
> Maybe try should reject patches with a secbug in them.

This is a good idea. I think in our new lando/phabricator world, we'll have a private place for sec patches, so hopefully/maybe also a private try repo? X-ref bug 1136954. I would support having a push hook in the interim, though. I'll file a bug.

(In reply to Ursula Sarracini (:ursula) from comment #19)
> Sorry, I had no idea!! (first security bug). It would have been helpful to
> know these resources existed beforehand.

Right, sorry for not bringing it up earlier, and for the tone of comment 17 - I should have been more constructive. :-( I thought this page used to be linked from the sec-approval request template that you filled in, but I must have imagined that, and I don't remember how else I would have found out about it; it's been too long.

As Paul said in comment 8, so far we don't have a concretely exploitable thing because the sandbox saves our ..., so I hope/assume we'll be OK.
Filed bug 1420510 for the push hook.
(In reply to :Gijs from comment #21)
> Filed bug 1420510 for the push hook.

There's also a warning on (some) sec bugs below the attachment table that says approval is required, and links to the page in comment 17. Unfortunately the warning doesn't appear in enough cases (bug 1404173), and it's not obvious that it's a link.
(In reply to Daniel Veditz [:dveditz] from comment #22)
> There's also a warning on (some) sec bugs below the attachment table that
> says approval is required, and links to the page in comment 17.
> Unfortunately the warning doesn't appear in enough cases (bug 1404173), and
> it's not obvious that it's a link.

Ahhhh. I did spend some time looking for this warning and didn't see it anywhere. I'm glad I'm not entirely crazy (though it's sad it doesn't appear consistently).
sec-approval+ for trunk for the patch *without* any tests. Do NOT check tests into mozilla-central. Tests need to not be checked in until well after we ship Firefox 58 with this fix. This is so we don't 0day ourselves any more than we may already have. Let's get the patch nominated for beta as well so we fix it everywhere.
Attachment #8931745 - Flags: sec-approval? → sec-approval+
Attachment #8931745 - Attachment description: Bug 1420049 - Do not store preview images with incorrect principals.patch → Bug 1420049.patch
Attachment #8931745 - Attachment filename: Bug 1420049 - Do not store preview images with incorrect principals.patch → Bug 1420049.patch
Keywords: leave-open
Attached patch 1420049.patchSplinter Review
Updated patch with correct commit message, carrying over r+ and sec-approval+. Will request beta uplift for 58 in a couple days when this patch lands.
Attachment #8931745 - Attachment is obsolete: true
Attachment #8932207 - Flags: review+
Spoke with Al over IRC; informally carrying over sec-approval+ for the patch :) Thanks!
Pulsebot doesn't comment on security bugs, so for the record: https://hg.mozilla.org/integration/mozilla-inbound/rev/b991da77be92
(In reply to :Gijs from comment #27)
> Pulsebot doesn't comment on security bugs, so for the record:

Yes, we have to do that manually after the training of the Softvision sheriffs.
Status: NEW → ASSIGNED
Iteration: --- → 1.25
This merged on Tuesday morning (UTC). https://hg.mozilla.org/mozilla-central/rev/b991da77be92
Status: ASSIGNED → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → Firefox 59
Comment on attachment 8932207 [details] [diff] [review]
1420049.patch

Approval Request Comment
[Feature/Bug causing the regression]: bug 1393924
[User impact if declined]: potential to read files (stopped by the sandbox in e10s, but not otherwise)
[Is this code covered by automated tests?]: generally yes. For this specific issue, there are tests, but we won't land them until this is released, because it's a sec-high sec bug.
[Has the fix been verified in Nightly?]: not yet
[Needs manual test from QE? If yes, steps to reproduce]: Probably good, given the lack of automated testing in the tree right now. See comment 0. After setting security.sandbox.logging.enabled to true, it should be possible to verify that prior to the patch, sandbox errors are logged (if e10s is enabled). After the patch, there should be no sandbox errors for the file embedded in the page that AS is loading.
[List of other uplifts needed for the feature/fix]: n/a
[Is the change risky?]: not really
[Why is the change risky/not risky?]: adding necessary security checks to some utility code. This shouldn't affect well-meaning webpages, ever.
[String changes made/needed]: none.
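For the manual QE step above, a one-line user.js snippet (the pref can equally be flipped in about:config):

  // user.js - enable sandbox violation logging for manual verification
  user_pref("security.sandbox.logging.enabled", true);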
Attachment #8932207 - Flags: approval-mozilla-beta?
This bug has a leave-open flag, did you mean to close it regardless?
Flags: needinfo?(gijskruitbosch+bugs)
The leave-open is just so that we remember to land the second patch with the tests later. Is that ok?
Flags: needinfo?(gijskruitbosch+bugs)
(In reply to Ursula Sarracini (:ursula) from comment #32)
> The leave-open is just so that we remember to land the second patch with the
> tests later. Is that ok?

Personally, I think it would be easier to track the tests in a separate bug, because the target milestone and per-release affected/fixed/verified flags are going to be more sane (rather than either being wrong for the patch or wrong for the tests). Up to you though, you're the bug assignee. If we want to leave this open, we can do, but then we may need to chase people for uplift requests.
Comment on attachment 8932207 [details] [diff] [review] 1420049.patch Fix a sec-high. Beta58+.
Attachment #8932207 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
Filed Bug 1422325 for landing tests
Attachment #8931746 - Attachment is obsolete: true
Keywords: leave-open
Blocks: 1422325
Group: firefox-core-security → core-security-release
Lowering the severity a bit. The attacking page can trigger Activity Stream to attempt loading a URL, but the content of that URL doesn't get passed back to the page. There's no same-origin information leak, even if the file: did get loaded.

A bigger worry to me is if some chrome: URL or moz-extension: URL (assuming you can guess the UUID; currently possible in some cases) performs some action that would be dangerous just by loading it (not necessarily rendering it). I can't think of any off-hand, but some legacy extensions had security bugs by exposing custom protocol handlers, so it's not out of the question. Maybe that's already protected against on the Activity Stream side? Probably not, because if we were loading file: URLs (without the sandbox) then we weren't calling checkLoadURI, and I'm not sure what else would filter out chrome:.
Keywords: sec-high → sec-moderate
Whiteboard: [adv-main58+]
Alias: CVE-2018-5118
Flags: qe-verify-
Whiteboard: [adv-main58+] → [adv-main58+][post-critsmash-triage]
Flags: in-testsuite+
Group: core-security-release
Component: Activity Streams: Newtab → New Tab Page