
Cross-Protocol Theft from non-HTTP services via DNS rebinding + "HTTP/0.9"

Status: NEW
Product: Core
Component: Networking
Priority: P3
Severity: normal
Reported: 2 years ago
Last modified: a month ago
People

(Reporter: Jordan Milne, Assigned: mcmanus)

Tracking

Keywords: sec-moderate
Version: 45 Branch
Points: ---
Firefox Tracking Flags: Not tracked

Details

(Whiteboard: [Don't 0day other browsers][necko-backlog])

Attachments

(1 attachment)

(Reporter)

Description

2 years ago
Created attachment 8738124 [details]
Using DNS rebinding + HTTP/0.9 to dump a Redis DB bound to localhost

## Issue

DNS rebinding attacks can be mounted against non-HTTP services to steal their responses cross-protocol. A non-HTTP service's response can be read via XHR because browsers interpret its response stream as `HTTP/0.9`.
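To illustrate the fallback at the heart of the issue, here is a simplified sketch (not Necko's actual parser; the function name is mine): a client that supports HTTP/0.9 typically decides based on whether the response begins with an `HTTP/` status line, and anything else becomes a version-less HTTP/0.9 response whose entire stream is the body.

```python
def classify_response(raw: bytes):
    """Simplified sketch of HTTP/0.9 fallback logic.

    Real browser network stacks are more involved; this only shows why
    arbitrary non-HTTP bytes become a readable response body.
    """
    if raw.startswith(b"HTTP/"):
        # A real parser would now parse the status line and headers.
        head, _, body = raw.partition(b"\r\n\r\n")
        return ("HTTP/1.x", body)
    # No recognizable status line: treat the whole stream as an
    # HTTP/0.9 body, with an assumed 200 status and no headers.
    return ("HTTP/0.9", raw)
```

Because the fallback branch returns the raw bytes wholesale, whatever a non-HTTP service wrote to the socket lands in `xhr.responseText`.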

This is an issue because, HFPA attacks aside[0], many services that bind to localhost don't require authentication and haven't factored the possibility of browsers reading their responses into their threat model.

For example: MySQL *cannot* be attacked via a standard HFPA attack, as it closes the connection due to the handshake failing while the browser is still writing the HTTP headers; however, being able to read the server's half of the handshake tells us a lot about the server itself[1]. 

Memcached *can* have its contents manipulated via a standard HFPA attack, but this behaviour allows actually leaking the database as well.

Likely any service not bound to a blacklisted port can be attacked this way if it has a "fault tolerant" parser, or if it leaks sensitive data before or during the handshake.

## Reproducing

* To simulate a non-HTTP service bound to the loopback run `python -c 'print("\x00\x01I am not an HTTP response\r\n\r\nfoo\x00")' | nc -l 127.0.0.1 11212` on the same machine as your browser
* Start up a webserver on another host on port `11212` and have it serve the following document:

		<pre id="output"></pre>
		<script>
		var outputElem = document.getElementById("output");
		outputElem.textContent = "Waiting 120s to fetch";
		setTimeout(function() {
		  var xhr = new XMLHttpRequest();
		  xhr.open("GET", "http://" + window.location.host + "/?a=" + (new Date()).getTime());
		  xhr.onload = function() {
			outputElem.textContent = xhr.responseText;
		  };
		  xhr.send();
		}, 120000);
		</script>

* Create an entry for "rebinding.example.com" in your `/etc/hosts` file that points to the host running your webserver
* Load "http://rebinding.example.com:11212/" in your browser
* To simulate DNS rebinding, edit the "rebinding.example.com" entry in your `/etc/hosts` file and make it point to `127.0.0.1`
* After 120 seconds the XHR should fire and the `<pre>` should contain `\x00\x01I am not an HTTP response\r\n\r\nfoo\x00`.
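For anyone without `nc` handy, the listener in the first step can be sketched in Python as a one-shot socket server that sends the same bytes (the trailing newline mirrors what `print` appends; the function name and port keyword are mine):

```python
import socket

# Same bytes the `nc -l` one-liner above serves, including the
# trailing newline that `print` appends.
PAYLOAD = b"\x00\x01I am not an HTTP response\r\n\r\nfoo\x00\n"

def serve_once(host="127.0.0.1", port=11212):
    """Accept a single connection and send the non-HTTP payload,
    simulating a non-HTTP service bound to the loopback."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.sendall(PAYLOAD)
```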

For convenience, I've hosted a version of this PoC that uses a custom DNS server[2] to perform the rebinding 3 seconds after the first resolution of a unique subdomain at <http://(unique_string).bondage.computersareca.ca:11212/tests/simplest_rebinding.html>.

I've also made PoCs that abuse this behaviour to dump local Memcached[3] and Redis[4] databases, as well as the MySQL[5] server handshake. Screenshots are attached. 

The rebinding method used in these PoCs takes about 5 seconds to execute reliably, though faster, somewhat more complex methods are also available. I won't go into great detail about bypassing the internal DNS cache's 60s TTL as it's been better documented elsewhere. The short version is that returning a TCP RST appears to invalidate the domain's DNS cache entry and force a fresh DNS lookup. I do that by hitting an endpoint that adds an iptables filter to return RSTs to the client for the next 15 seconds.
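The time-based half of the rebinding can be sketched as follows (the IPs, delay, and names are my own; real tools like FakeDns[2] are configuration driven): answer with the attacker's address on the first lookup, then switch to the victim-local address once a short delay has elapsed.

```python
import time

ATTACKER_IP = "203.0.113.10"   # documentation/example address, not a real host
TARGET_IP = "127.0.0.1"
REBIND_DELAY = 3.0             # seconds after first resolution

_first_seen = {}

def resolve(name, now=None):
    """Sketch of time-based rebinding: serve the attacker's IP until
    REBIND_DELAY has passed since the name was first looked up, then
    serve the victim-local address."""
    now = time.monotonic() if now is None else now
    first = _first_seen.setdefault(name, now)
    return ATTACKER_IP if now - first < REBIND_DELAY else TARGET_IP
```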

## Remediation

Mitigations for DNS rebinding attacks on HTTP services are well understood: having the server force the use of HTTPS or verify the `Host` header[6] on each request mitigates the worst issues. What's less clear is how non-HTTP services bound to non-blacklisted[7] ports should protect themselves.

Previous attacks on non-HTTP services via DNS rebinding relied on now-fixed vulnerabilities in plugins and their TCP socket APIs[8]. This is still an issue due to browsers interpreting anything that doesn't specifically declare itself as HTTP to be `HTTP/0.9`.

I think the best thing would be for DNS servers to filter responses containing "private" addresses, including loopback addresses; however, these filters are not commonly enabled and are generally insufficient: many that I've audited do not block loopback addresses, ignore IPv6, or mishandle its edge cases.

With that in mind, I think the *most reasonable* place to mitigate DNS rebinding attacks on non-HTTP services in particular is in the browser. Since updating the port blacklist for every new service that comes out isn't feasible, we should instead restrict where `HTTP/0.9` is allowed.

The easiest fix would be to disallow HTTP/0.9 on "unusual" ports. If that were too restrictive, we could disallow HTTP/0.9 only when the request is for a subresource or child document over an "unusual" port; loading `HTTP/0.9` at the top level should still be safe.
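The proposed policy can be sketched like this (names are mine, and the default-port mapping is an assumption for illustration):

```python
DEFAULT_PORTS = {"http": 80, "https": 443}

def allow_http09(scheme, port, is_top_level):
    """Sketch of the proposed mitigation: always permit HTTP/0.9 on
    the scheme's default port, and on "unusual" ports only for
    top-level loads (the weaker variant described above).  The strict
    variant would simply drop the top-level exception."""
    if port == DEFAULT_PORTS.get(scheme):
        return True
    return is_top_level
```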

I'd be interested to hear your thoughts on how this should be mitigated, or whether mitigation is the browser's responsibility at all, as this behaviour is present in multiple browsers and I couldn't find any previous mentions of DNS rebinding + HTTP/0.9 attacks.

I've reported this separately to the Chromium team as CRBug 600352. As this affects all the browsers I tested (Firefox, Chrome, Safari and Edge) we should coordinate on how best to mitigate this.

[0]: https://www.jochentopf.com/hfpa/hfpa.pdf
[1]: https://dev.mysql.com/doc/internals/en/connection-phase-packets.html#packet-Protocol::Handshake
[2]: https://github.com/JordanMilne/FakeDns
[3]: http://computersareca.ca/tests/rebinding_frame.html?memcached
[4]: http://computersareca.ca/tests/rebinding_frame.html?redis
[5]: http://computersareca.ca/tests/rebinding_frame.html?mysql
[6]: https://bugzilla.mozilla.org/show_bug.cgi?id=689835#c9
[7]: https://mxr.mozilla.org/mozilla-central/source/netwerk/base/nsIOService.cpp#100
[8]: https://bugzilla.mozilla.org/show_bug.cgi?id=389625 and http://www.adambarth.com/papers/2009/jackson-barth-bortz-shao-boneh-tweb.pdf
Component: Security → Networking
Patrick, maybe you could take a look at this? Thanks.
Flags: needinfo?(mcmanus)
(Assignee)

Comment 2

2 years ago
This is interesting - thanks. Clearly a problem.

A somewhat similar bug was bug 667907. As a result of that one, we disabled content sniffing for 0.9 responses on non-default ports. We've been running like that for 5 years, which shows that while we can't ban 0.9 outright, we can restrict it in some ways.

I would be happy to gather data on 0.9 via non-default ports, and we can break that down into top-level vs. not. With that info we can assess the pain of banning a subset. If the pain looked reasonable and Chrome were also willing to move forward, then I think it would be reasonable to implement the proposed mitigation.
Flags: needinfo?(mcmanus)
(Assignee)

Comment 3

2 years ago
see bug 1262572 (I won't mark a blocking relationship due to the security screen on this bug)
(Reporter)

Comment 4

2 years ago
The assignee for the Chrome bug indicated that they're looking at either blocking `HTTP/0.9` entirely or going with a whitelist of ports where `HTTP/0.9` is allowed no matter what initiated the request. 

I'm not super clear on how access to same-origin windows opened via `window.open("/", "windowname")` works in Firefox (I know lcamtuf has mentioned some weirdness there in the past), but blocking all non-whitelisted ports regardless of the initiator would be safest, unless a lot of people are really serving documents over `HTTP/0.9` on unusual ports.
Group: core-security → network-core-security
Status: UNCONFIRMED → NEW
Ever confirmed: true
Whiteboard: [Don't 0day other browsers]
(Assignee)

Updated

2 years ago
Assignee: nobody → mcmanus
Whiteboard: [Don't 0day other browsers] → [Don't 0day other browsers][necko-active]
Keywords: sec-moderate
(Reporter)

Comment 5

2 years ago
If I'm reading the telemetry data for the beta channel correctly, most HTTP/0.9 responses are for top-level documents on the default port, but ~10% of the responses are for sub-resources over a non-default port.

Do we have any way to tell what the non-default port was? Allowing common alternate HTTP ports like 8000, 8080, etc should be fine.
(Assignee)

Comment 6

2 years ago
We don't have per-port data. I agree that, in general, the data is discouraging, but let's let it see the release audience before drawing any conclusions.
(Reporter)

Comment 7

a year ago
Hi folks, just wanted to give you a heads-up that I intend to publicly disclose this as part of some related inter-protocol exploitation research in 3 months (August 31st).

If you wish to co-ordinate with the other vendors then the updated bug references are: CRBug 600352, Apple ProdSec Followup #639367943, MSRC Case 33254.
(Reporter)

Comment 8

a year ago
This was rediscovered and publicly disclosed at http://bouk.co/blog/hacking-developers/, can probably remove the sensitive flag.
(Reporter)

Comment 9

a year ago
Also, seems like both Apple and Google are attempting to remove HTTP/0.9: https://bugs.chromium.org/p/chromium/issues/detail?id=624462
So, what's the status of this?
Should we revisit disabling HTTP/0.9 support?
See Also: → bug 1299881

Comment 11
https://bugzilla.mozilla.org/show_bug.cgi?id=1262128 is discussing blocking the ports, just like we do for FTP, Telnet, IRC etc.
(Reporter)

Comment 12

a year ago
Both WebKit and Chromium ended up restricting HTTP/0.9 to ports 80 and 443 for HTTP and HTTPS, respectively, which seems reasonable to me:

https://trac.webkit.org/changeset/201895/trunk/Source
https://chromium.googlesource.com/chromium/src.git/+/a7da6da864dce77fe1c931653635c3ac757219cb

Chromium also added a Group Policy setting to relax that restriction, so maybe Firefox could have an option in `about:config` as well?
The reason they backed off from fully disabling it is incompatibility with some home routers:
https://bugs.chromium.org/p/chromium/issues/detail?id=624462#c13
Group: network-core-security
(In reply to Frederik Braun [:freddyb] from comment #11)
> https://bugzilla.mozilla.org/show_bug.cgi?id=1262128 is discussing blocking
> the ports, just like we do for FTP, Telnet, IRC etc.

That should have been bug https://bugzilla.mozilla.org/show_bug.cgi?id=1299881

Comment 15

11 months ago
Chrome 55 has disabled HTTP/0.9 for other ports and this has effectively killed listening to any Shoutcast radio station on the internet. 

See more details here: https://bugs.chromium.org/p/chromium/issues/detail?id=669800&can=2&start=0&num=100&q=audio&colspec=ID%20Pri%20M%20Stars%20ReleaseBlock%20Component%20Status%20Owner%20Summary%20OS%20Modified&groupby=&sort=#

Comment 16

11 months ago
IMO this should be handled in a way that does not affect such services.
(Reporter)

Comment 17

11 months ago
(In reply to Andrei Boros from comment #16)
> IMO this should be handled in a way that does not affect such services.

I don't believe blocking HTTP/0.9 in Firefox would break Shoutcast 1.x; Necko treats ICY responses as if they were HTTP/1.0: https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpResponseHead.cpp#1093
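The rewrite the linked code performs can be sketched roughly as follows (a simplified illustration, not Necko's actual implementation; the function name is mine):

```python
def normalize_status_line(line: str) -> str:
    """Sketch: Shoutcast 1.x servers answer with `ICY 200 OK` instead
    of an HTTP status line.  Treating that prefix as `HTTP/1.0` keeps
    such streams parseable as real HTTP even if bare HTTP/0.9 is
    blocked.  (Simplified; not Necko's actual code.)"""
    if line.startswith("ICY "):
        return "HTTP/1.0 " + line[len("ICY "):]
    return line
```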

Comment 18

10 months ago
That is good news. Thank you.
(Assignee)

Updated

6 months ago
Whiteboard: [Don't 0day other browsers][necko-active] → [Don't 0day other browsers][necko-backlog]
Bulk change to priority: https://bugzilla.mozilla.org/show_bug.cgi?id=1399258
Priority: -- → P1
Bulk change to priority: https://bugzilla.mozilla.org/show_bug.cgi?id=1399258
Priority: P1 → P3