It turns out Collin's 2007 paper on DNS rebinding already mentioned this attack. Chrome won't fix this issue. I'm unsure what you guys are going to do... In case you can access it, here is the Chrome thread: http://code.google.com/p/chromium/issues/detail?id=98357
Alok, who on the Chrome team are you working with on this issue? I would like to contact them. Also, can you name your points of contact at MS, Apple, and Opera?
Brian, did you get any further on this issue? I cannot read the link from comment #1... To avoid this, wouldn't we have to enforce that the current DNS record for the host is the same as it was when we fetched the cached entry (as indicated in comment #0)? It should be possible to implement this by attaching the IP address at load time as metadata on the cache entry and verifying it in nsCrossSiteListenerProxy::CheckRequestApproved(). That might reduce the cache-hit rate, but only for cross-site requests, no..?
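The check sketched above could look roughly like this (an illustrative Python sketch with made-up names, not actual Gecko code): record the resolved IP when the response is cached, and treat the entry as a miss on a later cross-site use if the host now resolves elsewhere.

```python
# Hypothetical sketch of comment 3's idea (not Gecko code): pin the
# resolved IP address as cache metadata and verify it before letting a
# cross-site request reuse the cached entry.
import socket

cache = {}  # url -> {"body": ..., "ip": ...}

def store(url, host, body, resolve=socket.gethostbyname):
    # Attach the IP the host resolved to at load time as cache metadata.
    cache[url] = {"body": body, "ip": resolve(host)}

def fetch_for_cross_site(url, host, resolve=socket.gethostbyname):
    entry = cache.get(url)
    if entry is None:
        return None  # not cached: go to the network
    if resolve(host) != entry["ip"]:
        return None  # DNS record changed since caching: force a refetch
    return entry["body"]
```

The `resolve` parameter is only there so the sketch is testable without real DNS; in the browser the comparison would of course use the live resolver result.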
As I mentioned previously on this thread, Chrome's response is won't fix. I will need to look up the status with other browsers. I have not reported this issue to Opera.
I can reproduce this and can talk to Opera about it if nobody has done that yet. The initial idea from comment #3 seems feasible, although there is (as always) a complication in the details. Current thinking is as follows: when loading "secret.txt", and after nsXMLHttpRequest::CheckChannelForCrossSiteRequest() has decided that this is a x-site request, we find the URI of our *principal*, look up its cache entry (if any), and mark it as "containing" a cross-site request (e.g. by attaching the IP address to the cache meta-info). Upon subsequent retrieval, nsHttpChannel::CheckCache() can look for this meta-info and verify that it matches the current IP of the host. This will limit cache misses to principals whose IP address changes and that also contain cross-site requests.
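The refinement over comment #3 is that only entries actually involved in cross-site requests pay the IP check. A minimal sketch of that logic (hypothetical names, not Gecko code):

```python
# Hypothetical sketch of comment 5's refinement: only cache entries that
# have served as the principal of a cross-site request carry IP
# meta-info, so the IP comparison only affects those entries.
class CacheEntry:
    def __init__(self, body):
        self.body = body
        self.meta = {}  # e.g. {"x-site-ip": "1.2.3.4"}

def mark_cross_site(entry, resolved_ip):
    # Called when a document loaded from this entry issues a x-site request.
    entry.meta["x-site-ip"] = resolved_ip

def check_cache(entry, current_ip):
    pinned = entry.meta.get("x-site-ip")
    if pinned is None:
        return True              # never involved in cross-site loads: hit
    return pinned == current_ip  # IP changed => miss, revalidate from network
```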
I'd be curious to know why Chrome decided not to fix this. Let's contact someone on the Chrome security team and see if they can fill us in, or let us access the bug. If they think the bug isn't worth fixing I wonder why they are continuing to hide it.
Adam, in the paper you co-wrote, "Protecting Browsers from DNS Rebinding Attacks", there is this statement: "To prevent this attack, objects in the cache must be retrieved by both URL and originating IP address. This degrades performance when the browser pins to a new IP address, which might occur when the host at the first IP address fails, the user starts a new browsing session, or the user’s network connectivity changes. These events are uncommon and are unlikely to impact performance significantly."

I am surprised by the last sentence quoted. I suspect that, at least in 2011, it is quite common for users' connectivity to change in such a way that performance would be impacted significantly--especially on mobile, but even with laptops. When people are travelling, I suspect it is both more likely that they would be impacted by such a change (because of geo-aware DNS) and more critical that the cache hit rate be as high as possible, as often the network you are on is very poor.

Note that there is no reason to have a same-IP restriction for cached resources retrieved over SSL. Instead, a same-public-key check should be done. A same-public-key check is more secure (AFAICT) AND would be better for the cache hit rate for most sites.

Note that the paper at http://crypto.stanford.edu/dns/ mentions *several* issues that (AFAICT) may still be unaddressed in Firefox, besides this one.
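The same-public-key idea amounts to pinning a fingerprint of the server's public key instead of its IP: the cache hit survives IP changes (CDNs, geo-DNS) as long as the key is unchanged. A minimal sketch, with hypothetical names and a raw byte string standing in for a real DER-encoded key:

```python
# Hypothetical sketch: allow an HTTPS cache hit iff the server presents
# the same public key (compared by SHA-256 fingerprint) as when the
# resource was cached.
import hashlib

def key_fingerprint(der_public_key_bytes):
    # Fingerprint of the server's public key as seen in its certificate.
    return hashlib.sha256(der_public_key_bytes).hexdigest()

def https_cache_hit_allowed(pinned_fingerprint, current_der_key):
    return pinned_fingerprint == key_fingerprint(current_der_key)
```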
> I am surprised by the last sentence quoted.

We don't have any data to back up that statement. :)
I'd also recommend that Firefox not fix this issue. It's not feasible for the browser to protect the user from DNS rebinding attacks. Servers need to protect themselves by validating the Host header and firewalls need to protect themselves by preventing external names from resolving to internal IP addresses.
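The server-side mitigation mentioned above is just strict Host-header validation: a rebound DNS name still arrives carrying the attacker's hostname in the Host header, so the server can refuse it. A tiny illustrative sketch (the allowed-host names are made up):

```python
# Hypothetical server-side check: reject any request whose Host header
# does not name a host this server actually serves. This defeats DNS
# rebinding regardless of browser behavior.
ALLOWED_HOSTS = {"intranet.example.com", "intranet.example.com:8080"}

def host_header_ok(headers):
    # headers: dict of request header names to values.
    return headers.get("Host", "").lower() in ALLOWED_HOSTS
```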
I think we should change the sg rating to sg:low and make this bug public while we decide whether or not we want to fix this. This clearly isn't sg:moderate or higher at this point and discussion will benefit from this being made public.
While I understand that it's something browsers might not always be able to address, it would be nice if browsers tried to: a) mitigate the risk as much as possible, and b) help web developers understand that the solutions currently in place only block a subset of rebinding attacks and that the flaw needs to be addressed at a different level. Adam: don't you think checking for the same public key for resources retrieved over SSL would be a good thing?
> Adam: don't you think checking for same-public-key for resources retrieved over ssl would be a good thing?

That won't work for deployments that use a server farm with different public keys for each host, as required when using extended validation certificates. To your larger question, in some cases providing less protection is actually better because it's clear where the responsibility lies. Protecting against these attacks in the browser is infeasible. Protecting against them at the server or at the firewall is pretty easy.
I'm going to bow to the reality of wontfix on this one, matching Chromium. The security model for the web requires HTTPS to enforce origin semantics.