Closed Bug 1279483 Opened 4 years ago Closed 3 years ago
Issues proxying h2 using apache mod_proxy
User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:43.0) Gecko/20100101 Firefox/43.0 SeaMonkey/2.40
Build ID: 2016011800

Steps to reproduce:
1) Configure Apache 2.4.20 with mod_proxy
2) Configure Firefox to use this proxy (tested with localhost and port 80)
3) Use the addon "Live HTTP headers"
4) Open the SSL version of Facebook (https://www.facebook.com/)
5) Visit some pages, click and scroll

Actual results:
- some images will not load
- the site is not correctly updated dynamically
- some clicks result in an empty page

Expected results:
Facebook should work normally, as it does without the Apache proxy.

In the "Live HTTP headers" log I find this header:
X-Firefox-Spdy: h2

Facebook also works normally if I set "network.http.spdy.enabled" to false.
Hi Bjoern, thanks for filing the bug report. Could you help us by uploading the logs you record while reproducing the bug? See https://developer.mozilla.org/en-US/docs/Mozilla/Debugging/HTTP_logging#Linux and use the following when logging:

export NSPR_LOG_MODULES=timestamp,nsHttp:5,proxy:5,nsSocketTransport:5,nsStreamPump:5,nsHostResolver:5
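For reference, a minimal sketch of that logging setup as a shell session. The `NSPR_LOG_FILE` path is an assumption (any writable path works), and the Firefox launch is commented out since you may start it however you like, as long as it inherits these variables:

```shell
# HTTP log modules requested above; Firefox must be started from this
# shell (or otherwise inherit the environment) for them to take effect.
export NSPR_LOG_MODULES=timestamp,nsHttp:5,proxy:5,nsSocketTransport:5,nsStreamPump:5,nsHostResolver:5

# Where the log is written (example path, pick any writable location).
export NSPR_LOG_FILE=/tmp/firefox-http.log

# firefox &   # start Firefox, reproduce the bug, then attach the log file

echo "$NSPR_LOG_MODULES"
```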
The log file was too big for this bugzilla installation. I uploaded it here: https://www.magentacloud.de/share/bs891d3.i8

- firefox-facebook-apache-proxy-test.log.bz2: the log file with the options described in https://bugzilla.mozilla.org/show_bug.cgi?id=1279483#c1 (Firefox 47.0 from www.firefox.com, Linux, visited site: https://www.facebook.com/mozilla, unauthenticated)
- firefox-facebook-click-in-view-5-more-comments.png: the link "View 5 more comments" does not work; only some links have this problem
- firefox-facebook-missing-image.png: the image in this post is empty; some posts show this problem
- firefox-facebook-click-on-see-more.png: the link "show more" or similar at the bottom of the Facebook page opens a new empty page
Nick, can you take a look at this? Thanks!
So first of all, the title is a bit misleading - the X-Firefox-Spdy header is created internally because we're using h2 to connect to facebook (which is to be expected) and apache never sees that. I haven't dug into the logs yet, but the real issue seems to be somehow related to using a CONNECT tunnel over mod_proxy - that's how we'll connect to facebook, and is the only way we would use h2 through a proxy. I'll take a look at the logs and see if I can come up with anything.
Summary: Firefox and Seamonkey should not generate a X-Firefox-Spdy header, if Apache mod_proxy is used as proxy → Issues proxying h2 using apache mod_proxy
Assignee: nobody → hurley
(In reply to Nicholas Hurley [:nwgh][:hurley] from comment #4)
> So first of all, the title is a bit misleading - the X-Firefox-Spdy header
> is created internally because we're using h2 to connect to facebook (which
> is to be expected) and apache never sees that.
>
> I haven't dug into the logs yet, but the real issue seems to be somehow
> related to using a CONNECT tunnel over mod_proxy - that's how we'll connect
> to facebook, and is the only way we would use h2 through a proxy. I'll take
> a look at the logs and see if I can come up with anything.

You will find some HTTP/2 status codes in the logs, like this:

2016-06-12 10:58:30.962588 UTC - [Socket Thread]: I/nsHttp HTTP/2.0 200 OK

Yes, Apache does not see the X-Firefox-Spdy header because it is SSL-encrypted from the Apache proxy's point of view. But the Facebook servers seem to analyze this header and switch on HTTP/2 based on it, or based on other headers that are controlled by the Firefox config option network.http.spdy.enabled. I have never found any documentation about tunneling HTTP/2 SSL traffic through an HTTP/1.1 proxy server. I can imagine that it should work, but maybe there are multiplexing problems if multiple connections are running between Firefox, over Apache/mod_proxy, and the Facebook servers.
nick is on the right track. x-firefox-spdy is a response header injected by the browser. FB will never see it. no doubt there is some h2/spdy issue going on here, so when you change that pref it goes away.. but focusing on the header is likely a red herring. nonetheless this bears investigation.
So I've spent some time this week digging through the logs here. The tl;dr is that I probably need to try to reproduce this locally to get more info on exactly what's happening, but here's an update:

So far as I can tell, the problem is that the CONNECT tunnel hangs or gets silently closed by mod_proxy, which is causing the issue(s) being seen. I only looked directly at the requests related to the "load more comments" link, but what appears to be happening is that when the link is clicked, we send the XHR POST associated with that link just fine. Eventually, we get OnSocketReadable for the CONNECT tunnel, and when the Http2Session goes to try to read a 9-byte frame header from the socket, PR_Read returns 0, which is then translated into BASE_STREAM_CLOSED. As soon as the next XHR comes in, we open another CONNECT tunnel to facebook, which proceeds fine for some amount of time. The problem is, though, we can't retry a POST once we've sent it, and since we successfully sent the POST for that link, we don't retry on the new connection. Depending on how facebook has their js written, clicking on the link again may or may not actually retry the POST. One other thing to note: out of 8 or 9 "load more comments" POSTs in the log, only one fails.

Like I said, I haven't looked at the image issues yet, so I can't say if the issue there is the same/similar, or somehow different, though it seems on the face of it that the image issue might be a separate thing, given images usually aren't retrieved via POST. Facebook could be doing something I'm not expecting there, though.
OK, I got more time to look into this - the reason we don't retry the image when the proxy connection gets closed out from under us is that we are not shutting down cleanly in Http2Session::Shutdown (https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/Http2Session.cpp#151). Because mCleanShutdown == false, and we haven't received any data on the stream, we close with NS_ERROR_ABORT (https://dxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/Http2Session.cpp#159), which is an error we won't retry with (see the comment block above the if/else block). That seems like a reasonable course of action (since we don't know much, if anything, about the stream we're closing, whether the server did anything on its end, etc). The question now is: why is our socket being closed out from under us? Is there something we could do to make this less likely to happen, or is mod_proxy just being nasty to us for no good reason?
So I've been able to reproduce this with apache 2.4.18 on my os x 10.11 machine. There are a few lines of interest in the apache log (note that these appear for every CONNECT we do; this is just one particular example):

[Wed Jul 27 10:33:43.975675 2016] [proxy_connect:debug] [pid 40403] mod_proxy_connect.c(246): [client 127.0.0.1:49976] AH01019: connecting www.facebook.com:443 to www.facebook.com:443
<snip some lines>
[Wed Jul 27 10:34:06.152642 2016] [reqtimeout:info] [pid 40403] [client 127.0.0.1:49976] AH01382: Request header read timeout
[Wed Jul 27 10:34:06.152752 2016] [proxy_connect:debug] [pid 40403] mod_proxy_connect.c(181): (70007)The timeout specified has expired: [client 127.0.0.1:49976] AH01018: error on client - ap_get_brigade

If I disable the header/body read timeouts in the reqtimeout module with

<IfModule reqtimeout_module>
    RequestReadTimeout header=0 body=0
</IfModule>

then this problem goes away. So, it seems to me like this is an apache bug around interactions between reqtimeout and mod_proxy_connect. By default, these timeouts are 20s - note the difference between the initiation of the CONNECT and the "Request header read timeout" error in the log lines above: just over 20s (the extra time is probably accounted for by the first request - the CONNECT - taking a bit of time to succeed, which means apache has to restart its request header timeout).

This is borne out by looking at pcaps of this behavior - apache is the one that sends the first FIN both to the browser *and* to facebook for the TCP connection in question. So facebook isn't upset with anything firefox has done, nor is firefox upset with anything facebook has done; apache is the only one getting upset in this situation. Again, with the request read timeout disabled, apache doesn't get upset.
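For anyone wanting to test the same workaround, here is a sketch of how it might look in httpd.conf. The default values shown in the comment are from stock Apache 2.4 documentation; note that zeroing these timeouts also disables the slow-request (slowloris) protection reqtimeout exists to provide, so this is a diagnostic setting, not a recommendation:

```
# mod_reqtimeout's stock Apache 2.4 default is effectively:
#   RequestReadTimeout header=20-40,MinRate=500 body=20,MinRate=500
# Diagnostic workaround - disable both timeouts entirely. This also
# removes slow-request protection, so don't leave it on in production.
<IfModule reqtimeout_module>
    RequestReadTimeout header=0 body=0
</IfModule>
```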
The reason I suspect this works if you disable h2/spdy is that we open more than one connection with http/1, and we'll keep opening more as they get closed, which means it's more likely that either (1) firefox won't have reached its per-host connection limit, and will just open a new connection that won't have timed out yet, or (2) firefox will place your request on a connection that has not yet been timed out by apache. I suspect you actually *could* hit this bug with http/1, but it's significantly less likely.

I'm going to mark this as INVALID, as it does not appear to be a firefox bug in the least, and I'm not seeing any mitigations we could perform to make this problem go away on our end. Reporter, I recommend filing a bug against apache, and pointing them here for more information.
Status: UNCONFIRMED → RESOLVED
Closed: 3 years ago
Resolution: --- → INVALID
Thanks for the investigation. I will try to get a solution from the Apache devs by reporting the issue on the Apache bug tracker. Before that, I will try to reproduce the issue with other browsers. Does anybody know if browsers like Chrome, Chromium or Edge use h2 over proxies? In the best case it would still take months until Firefox/Apache users see an Apache fix. Would it be easily possible on the Mozilla Firefox side to blacklist h2 if the browser is connected through an Apache proxy? The browser sees the Apache signature, so it can detect an Apache proxy.
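As a stopgap, the pref already mentioned in comment 0 can be made persistent via a user.js file in the Firefox profile directory. A sketch (note this disables HTTP/2 and SPDY for *all* connections, not just those going through the Apache proxy, so it is a blunt instrument to remove once Apache ships a fix):

```
// user.js - stopgap only: disables HTTP/2/SPDY everywhere, not just
// behind the Apache proxy. Remove once the Apache fix is deployed.
user_pref("network.http.spdy.enabled", false);
```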
Just a note from emails with the apache devs: a fix for this has landed in apache, in http://svn.apache.org/r1754391 and http://svn.apache.org/r1754399