Thanks Yann - it's certainly a development we're all going to have to deal with, but I'm totally convinced this is good for the web. We currently time out at 5 seconds, if that helps you, and that generally seems like a good value (no constant is ever good for everyone :)). You'll see more speculative connections from everyone - latency is not improving (and really can't) the way other resources (bandwidth, CPU, RAM, etc.) are, and if we don't build algorithms around evading it, latency will stagnate the web. Speculative connections are good value - they cost a tiny amount of bandwidth and RAM (which scale and keep getting cheaper) and some fraction of them save a round trip's worth of latency (which doesn't scale). I'm going to close the bug, just because this isn't really a bug per se - but I appreciate the comment; it remains linked to bug 634278.
Status: UNCONFIRMED → RESOLVED
Last Resolved: 7 years ago
Resolution: --- → INVALID
I think there's a legitimate question here, but it seems like there's plenty one could do on the ad agency's side. Each browser opens a deterministic number of preconnections, right? So you can use your HTTP logs to calculate the expected number of preconnections that will come along with each real connection, based on how popular each browser is. Suppose Firefox opens 2 preconnections and is 40% of your traffic, and Chrome opens 3 preconnections and is 60% of your traffic; then your expected number of preconnections per real connection is 0.4 * 2 + 0.6 * 3 = 2.6. Now look at your TCP logs. Suppose you have 30 successfully completed connections and 200 failures. Divide 230 by (2.6 + 1) to get the number of non-preconnections, 64 in this case. All of the successful connections are due to real connections (because you only serve one file per domain), so your rate of connection success is 30/64 ≈ 47%. Now you can experiment with changing your timeout and have a (rough) idea of how it affects the real connections to your server. Does that solve your problem?
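The arithmetic above can be sketched in a few lines. The browser shares and per-browser preconnect counts here are the illustrative figures from the comment, not measured values:

```python
# Rough estimate of real (non-preconnect) connections from TCP logs,
# following the approach described above. All inputs are the example
# numbers from the comment, not real measurements.

browser_share = {"firefox": 0.40, "chrome": 0.60}  # fraction of traffic
preconnects   = {"firefox": 2,    "chrome": 3}     # speculative conns per real one

# Expected preconnections accompanying each real connection:
expected_pre = sum(browser_share[b] * preconnects[b] for b in browser_share)
# 0.4 * 2 + 0.6 * 3 = 2.6

completed, failed = 30, 200            # from TCP logs
total = completed + failed             # 230 observed connection attempts
real = total / (expected_pre + 1)      # ~64 real connections
success_rate = completed / real        # ~47%

print(f"expected preconnects: {expected_pre}, "
      f"real connections: {real:.0f}, success rate: {success_rate:.0%}")
```

With these inputs the estimate matches the comment: about 64 real connections and a ~47% success rate, which you can then watch change as you tune the server-side timeout.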
Hello Justin, I see what you mean - indeed, that would help get a very rough idea, if we had perfect numbers. But I fear it might be too approximate to be useful - and figuring out the expected number of preconnects from the user agent might be quite a challenge... Plus Chromium uses a speculative number of preconnects: http://code.google.com/p/chromium/issues/detail?id=64246 - so sometimes it might open one preconnect (or several), and the next time none at all, from what I understand. A way for the server to hint to the browser how many preconnect sessions to open would help - the server could tell the browser to open 10 TCP preconnect sessions, or none. But I wouldn't be sure how to get this done, considering we're at the TCP level.
> sometimes it might open one preconnect (or several) - and the next time not
> any, from what I understand.

I see, that's tricky. Can you do something with the ordering of the connections? Like, is the first SYN received usually the real connection, while the other ones are usually the preconnections?
One thing which crosses my mind -- this wouldn't necessarily help Yann, but it could help us -- is if Firefox used historical data to tune how many preconnections to create. Are preconnections at all a scarce resource on the client side? If so, we could probably improve our allocation of them by remembering which domains benefit from preconnection and which ones don't.
(In reply to Justin Lebar [:jlebar] from comment #5)
> One thing which crosses my mind -- this wouldn't necessarily help Yann, but
> it could help us -- is if Firefox used historical data to tune how many
> preconnections to create.
>
> Are preconnections at all a scarce resource on the client side? If so, we
> could probably improve our allocation of them by remembering which domains
> benefit from preconnection and which ones don't.

We'll probably do something like that when we really implement this. (There is a reason the linked bug is still open :).) We don't actually implement extra preconnections explicitly yet; we've just got a couple of algorithms that effectively create some of the same thing as an artifact.

Syn-retry is one of them. If we don't get a handshake quickly, we'll start another connection in parallel. If they both eventually complete, we keep the "extra" one around for a few seconds to see if it would be useful - given that everybody has done all the work for it already :).

Transaction cancellations can have the same effect: if we open a connection for a pending transaction and that transaction gets cancelled before the connection is complete, we don't abandon the connection - we just put it in the pool for a few seconds to see if it gets matched up with another soon-to-arrive transaction.

We're also going to start racing connection establishment against some cache lookups, because the I/O and locking involved in the cache can be significant depending on the host. It's possible we'll make too many connections for what we really need in the end; those go into the pool too.

The theme here is that the handshake is a real bottleneck, and the penalty involved in wasting a few connections is something we (in the internet sense) should be willing to architect around. nginx claims the server-side state per connection is 250 bytes of userspace plus 250 bytes of kernel space - not a big deal. (Yann has a different issue, I understand. It's just something that should be considered.)
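The syn-retry behavior described above can be sketched as a small toy: start a connection, and if the handshake hasn't finished within a short window, race a second attempt and take whichever wins, parking the loser in an idle pool in case a later transaction can use it. The delay value and pool handling are illustrative assumptions, not Firefox's actual implementation:

```python
import asyncio

# Assumed delay before racing a backup connection (illustrative only).
SYN_RETRY_DELAY = 0.25

async def connect_with_retry(host, port, idle_pool):
    """Return (reader, writer); a completed-but-unused extra connection
    is appended to idle_pool instead of being discarded."""
    first = asyncio.ensure_future(asyncio.open_connection(host, port))
    done, _ = await asyncio.wait({first}, timeout=SYN_RETRY_DELAY)
    if done:
        return first.result()          # handshake was fast: no race needed

    # Handshake is slow: start a second attempt in parallel.
    second = asyncio.ensure_future(asyncio.open_connection(host, port))
    done, pending = await asyncio.wait(
        {first, second}, return_when=asyncio.FIRST_COMPLETED)
    winner = done.pop().result()

    # Park any still-pending attempt; if it completes within a few
    # seconds it could serve a future transaction rather than be wasted.
    for task in pending:
        task.add_done_callback(
            lambda t: idle_pool.append(t.result()) if not t.exception() else None)
    return winner
```

Against a responsive server the fast path wins and no extra connection is made; the race only kicks in when the handshake stalls, which is exactly when the "wasted" parallel connection is most likely to pay for itself.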
Well, getting away from my initial issue, but let me throw in one idea too :) Cross-domain preconnect (intelligent preconnect IP pooling?). I'm not familiar with Firefox's preconnect implementation (maybe this is already implemented?), and it wouldn't help in my specific case, as my different domains listen on different IP addresses. However, imagine a website like OpenStreetMap or Google Maps that uses several different virtual hosts (a.tiles.*, b.tiles.*, ...), all pointing to the same proxies. I believe this used to be done to get around the network.http.pipelining.maxrequests limit, which used to be fairly low. It could be useful to use the same preconnect connections for all the domains, if Firefox realizes the different domains point to the same proxy. It's quite tricky as a webmaster: you're trying to get around the maxrequests limit by putting your tiles on several domains, but by doing this you might now be killing TCP optimisations!

This could get funny (continuing with the OpenStreetMap example) if they had 20 proxies in DNS round robin. Say a.tile.openstreetmap.org resolves to 184.108.40.206, 220.127.116.11, 18.104.22.168, 22.214.171.124, and b.tile.openstreetmap.org resolves to 126.96.36.199, 188.8.131.52, 77.77.77, 184.108.40.206. Loading a file from a.tile.openstreetmap.org, Firefox opens two TCP connections to 220.127.116.11 (one regular, one preconnect). A few milliseconds later it needs to load a file from b.tile.openstreetmap.org - it could notice that it already has a preconnect connection open to one of the IPs b.tile.openstreetmap.org resolves to, and reuse that connection. Not sure if it would be worth the CPU & RAM spent, though - but with this, and a speculative number of preconnects like mentioned above, things like map tiles could load significantly faster.
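The lookup at the heart of this idea is small: before opening a fresh preconnect for a new hostname, resolve it and check whether an idle connection to any of its IPs already exists. This is a toy sketch under stated assumptions - the hostnames are placeholders, and real cross-domain coalescing would also need to verify TLS certificate coverage, which this ignores entirely:

```python
import socket

def resolve(host):
    """Return the set of IPv4 addresses a hostname resolves to."""
    try:
        return {info[4][0]
                for info in socket.getaddrinfo(host, 80, socket.AF_INET)}
    except socket.gaierror:
        return set()

def find_reusable(host, idle_pool):
    """idle_pool maps peer IP -> an idle open connection.

    If the new hostname resolves to an IP we already hold an idle
    preconnect for, hand that connection back instead of opening a
    new one; otherwise return None and let the caller connect fresh.
    """
    for ip in resolve(host):
        if ip in idle_pool:
            return idle_pool.pop(ip)
    return None
```

With the round-robin example above, a preconnect opened to 184.108.40.206 for a.tile.openstreetmap.org would be found and reused when b.tile.openstreetmap.org resolves to an overlapping address set.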
What you are referring to is IP coalescing. That's not something HTTP is currently designed for, but SPDY does it.