Closed Bug 1014600 Opened 10 years ago Closed 10 years ago

HTTP/2 header decompression fails after CONNECT request failure via HTTP/2 secure proxy

Categories

(Core :: Networking: HTTP, defect)

Platform: x86_64 Linux
Type: defect
Priority: Not set
Severity: normal

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla33

People

(Reporter: tatsuhiro.t, Assigned: u408661)

Details

(Whiteboard: [spdy][http2release])

Attachments

(2 files)

User Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.132 Safari/537.36

Steps to reproduce:

0. I am using Firefox Nightly 32.0a1 (2014-05-22).

1. Run the nghttpx HTTP/2 proxy with the following command line:

    nghttpx server.key server.crt -b127.0.0.1,3128 -LINFO -o -s

    server.key and server.crt are the SSL/TLS private key and certificate, respectively, for the nghttpx proxy.

    nghttpx is included in the nghttp2 repository: https://github.com/tatsuhiro-t/nghttp2
    It is also packaged in Debian sid now, so you can install it with: apt-get install nghttp2

2. nghttpx uses Squid as its backend, so run Squid on port 3128.

3. Configure Firefox to use nghttpx as a secure proxy, with something like the following proxy.pac:

    function FindProxyForURL(url, host) {
        return "HTTPS localhost:3000";
    }

4. Enable the HTTP/2 (h2-12) feature by turning on network.http.spdy.enabled.http2draft.

5. Visit https://nghttp2.org:999/
    The port number is chosen so that the CONNECT request will fail.

6. Visit any HTTP site (e.g., bbc.co.uk).


Actual results:

HTTP/2 header decompression is corrupted when reading the response headers for step 6.
There are several symptoms: either a header decompression error, or a hugely corrupted response header that mixes in header fields from the previous CONNECT response. You can see what happens in the decompressor in the HTTP log.


Expected results:

HTTP/2 header decompression should complete normally, without errors.
Component: Untriaged → Networking: HTTP
Product: Firefox → Core
Whiteboard: [spdy]
Whiteboard: [spdy] → [spdy][http2release]
After fighting nghttpx on Mac OS X (it crashes, fun times!) and finally retreating to Linux, I can definitely confirm that the compression state is WAY messed up. An initial look seems to indicate that something is going wrong processing literals with an indexed name, as the decompressor table is filled with odd pairings such as "vary: 728" and other such fun nonsense. I haven't done a full log dig to determine exactly where we're getting off the rails yet, but I have a (big) log in front of me to dig through.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Attached patch part 1Splinter Review
This patch is just some cleanups to the HPACK logging I added while debugging this issue, to make it easier to see the full state of things and not spam the logs with stuff that doesn't matter.
Assignee: nobody → hurley
Status: NEW → ASSIGNED
Attachment #8453917 - Flags: review?(mcmanus)
Attached patch part 2Splinter Review
And here's the fix. Turns out, we weren't truncating the decompression buffer when a tunnel stream comes back with a non-200, so when we started trying to decompress the headers we received for the next stream, the buffer still had the failed stream's headers at the beginning.

I moved the truncation to the session's handling of RecvHeaders and RecvPushPromise, since the session is what actually owns the buffer, and it makes more sense to me to truncate when we know we're going to need a clear buffer, instead of willy-nilly after we've finished processing the headers (but then only sometimes!).
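
In rough terms, the shape of the change looks something like this minimal sketch (hypothetical, simplified names, not the actual Http2Session/Http2Stream code or the patch itself):

    #include <cstddef>
    #include <cstdint>
    #include <string>

    // Illustrative sketch only: the session owns a single buffer that
    // accumulates header block fragments for the shared HPACK decompressor.
    // If a failed tunnel (CONNECT) stream leaves its header block behind, the
    // next stream's headers end up appended after the stale bytes and the
    // decompressor reads garbage.
    class SessionSketch {
    public:
      // Called when a HEADERS (or PUSH_PROMISE) frame starts a new header block.
      void RecvHeaders(const uint8_t *aFragment, size_t aLen) {
        // Truncate here, where we know a fresh header block is starting,
        // rather than relying on every consumer to clean up on every exit
        // path (including the non-200 tunnel case this bug hit).
        mDecompressBuffer.clear();
        AppendFragment(aFragment, aLen);
      }

      // CONTINUATION frames keep appending to the same block until END_HEADERS.
      void AppendFragment(const uint8_t *aFragment, size_t aLen) {
        mDecompressBuffer.append(reinterpret_cast<const char *>(aFragment), aLen);
      }

    private:
      std::string mDecompressBuffer; // per-session header block accumulator
    };

The real patch does the equivalent inside the session's frame handlers; the sketch just illustrates why the start of a new header block is the robust place to truncate.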

Try run: https://tbpl.mozilla.org/?tree=Try&rev=84364a91b3c1
Attachment #8453920 - Flags: review?(mcmanus)
Interesting - good catch.

This might conflict with the proxy auth patches I'm just about to upload... stay tuned!
Comment on attachment 8453920 [details] [diff] [review]
part 2

Review of attachment 8453920 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks, Nick! lgtm, assuming you're ok with the comment.

::: netwerk/protocol/http/Http2Stream.cpp
@@ -874,5 @@
>        aHeadersIn.Length() * 100 / aHeadersOut.Length();
>      Telemetry::Accumulate(Telemetry::SPDY_SYN_REPLY_RATIO, ratio);
>    }
>  
> -  aHeadersIn.Truncate();

Let's keep this Truncate() (in addition to adding the new ones) just for footprint reasons.
Attachment #8453920 - Flags: review?(mcmanus) → review+
Attachment #8453917 - Flags: review?(mcmanus) → review+
https://hg.mozilla.org/mozilla-central/rev/e28d922d5d17
https://hg.mozilla.org/mozilla-central/rev/9ded0b5313b4
Status: ASSIGNED → RESOLVED
Closed: 10 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla33
Tatsuhiro, if you want to try confirming this fix, you can use the latest Firefox 33 Beta from ftp://ftp.mozilla.org/pub/mozilla.org/firefox/nightly/33.0b1-candidates/build1/.