A pretest HTTP transaction can be used to detect basic pipelining problems with forward proxies on the main internet path for the UA. "Detect and recover" is still needed at runtime for reverse proxies near the origin, but something more thorough can be used as a global litmus test when getting started.
Created attachment 484784 [details]
A sample server
A client-side sanity test is going to require a server to run against. This is a ~250 line proof-of-concept server in C++. The basic outline is pretty scalable, but a number of tweaks should be made to it if it is desired for a production scenario - the idea of including it here is just as a reference implementation right now. An HTTP framework that operates at the transaction level isn't apropos for implementing this, as you can see it plays fast and loose with connection semantics.
Created attachment 484789 [details] [diff] [review]
Adds a global "pipeline pre-test" to test out the network compatibility with pipelining between the client and whatever server is running the pretest-server (presumably pipeline.mozilla.com by default). The URL to test against and whether to use the test are defined by prefs:
pref("network.http.pipelining.testURL" , "http://pipeline.ducksong.com:9111/sanity-test");
pref("network.http.pipelining.testURL.enabled" , true);
The test works like this:
1] It opens a connection
2] It sends down that connection a burst of 3 requests, each of which is labeled with a serial number
3] Until the first response is seen, it pipelines an additional 3 requests down the same pipe (with new serial numbers) every 33ms
4] After the first response is received, it continues to send a burst of 3 requests down the same pipe (with new serial numbers) every 33ms for the next 300ms
5] After the 300ms timer is complete, it sends no more requests but waits to receive all the responses
Each response contains a header correlating it to the request that generated it. Both the client and server confirm that they are all received in the correct order and on the same connection and that a full window of transactions can be maintained.
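The burst schedule in steps 1]-5] can be sketched as a deterministic simulation; this is illustrative only (the function name and structure are invented, not taken from the patch), but it makes the timing rules concrete:

```cpp
#include <cassert>
#include <vector>

// Hypothetical model of the client-side request schedule described above.
// Given the time (in ms) at which the first response arrives, return the
// serial numbers of every request the client would send.
std::vector<int> ScheduleSerials(int firstResponseMs)
{
    std::vector<int> serials;
    int next = 0;

    // 1] + 2] open the connection and send an initial burst of 3 requests.
    for (int i = 0; i < 3; ++i)
        serials.push_back(next++);

    // 3] until the first response is seen, pipeline 3 more every 33ms.
    int t = 33;
    for (; t <= firstResponseMs; t += 33)
        for (int i = 0; i < 3; ++i)
            serials.push_back(next++);

    // 4] after the first response, keep bursting every 33ms for 300ms more.
    for (int end = t + 300; t < end; t += 33)
        for (int i = 0; i < 3; ++i)
            serials.push_back(next++);

    // 5] then send nothing further; the caller just drains the responses.
    return serials;
}
```

With an instant first response this yields 33 requests; with a 100ms first response, 42 - in the same ballpark as the ~44 transactions seen in the broadband capture below.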
The HTTP responses are a variety of 200, 100 + 200, and 304 responses - designed to give intermediaries with a loose sense of framing fits. The server does not generate any responses until at least 3 requests have arrived - validating that there is not an intermediary breaking the pipeline.
The content length of each response generally varies to match the low 5 bits of the serial number, so it can be verified on the client side, except for the first response, which carries a list of pipeline-blacklist host names to be used by bug 603506.
There are some open issues - primarily what server to run against, and when to run the test (right now it is on every startup, which isn't right).
Any reason why we can't extend httpd.js to test pipelining? Ought to be easier to write/maintain than a C++ server. Or maybe not--just want to make sure it's on your radar.
(In reply to comment #3)
> Any reason why we can't extend httpd.js to test pipelining? Ought to be easier
Well, the real answer is "I don't know - the server here is just a PoC".. but my inclination on httpd.js is no..
* need to work at the connection level, not the transaction level.. can httpd.js let me inspect transaction n+3 before responding to n while still correlating that they are on the same connection? At first glance, I don't think so - but I'm not certain. This is the fundamental problem with reusing any framework I've stumbled across.
* the "test" in question is a runtime topology test, not a quality-assurance test - is httpd.js scalable to the demands of all those parallel users? (I don't know the answer.) Other than some RAM and silly copies in the C++ version that could be fixed with a little effort, the C++ version is very lightweight and, aside from serialization at the OS level on an accept() call, lockless but still multi-threaded.
* the PoC server is just 250 lines long and has no dependencies; it doesn't even speak (nor need to speak) full HTTP.
but then again, while it has some good qualities, the crude thing that is there is almost certainly just a PoC. I'm torn on what to really do both here and for bug 603513.. for the moment I'm happy to focus on whether the test being performed integrates ok into FF and accomplishes what it sets out to do (i.e. identify obstructed paths).
Created attachment 485029 [details]
a sample server (r2)
The server should not respond until req 4 is seen (not req 3 as previously written) - this ensures a significantly more challenging test, as the outstanding requests now cross different batches (and most certainly different TCP segments).
Created attachment 485030 [details]
passed sanity test packet capture
If anyone is interested - this is a capture of what the test ends up looking like over a normal broadband connection.. ~44 transactions taking just 15KB in about 400ms, including all network overhead.. all done in the background on FF. Nice overlap in sending from both directions at the same time, which is what we want to see to feel confident everything is working.
Created attachment 485041 [details] [diff] [review]
update this to actually verify the response bodies and drain the response queue
Jeff, I remember you wanted to add support for pipelining to httpd.js in the past - is that still relevant? You could cooperate with Patrick on making that change. He seems to have a good picture of what's needed for that.
mnot sends along this: http://gist.github.com/569672 .. which is written in Twisted Python and is a nice illustration of processing a pipeline of requests ahead of generating any responses..
Created attachment 485293 [details] [diff] [review]
update to implement weakref to correctly be an observer
Created attachment 490100 [details] [diff] [review]
pipeline pretest v4
fix compiler failure under different build opts
Created attachment 495142 [details] [diff] [review]
pipeline pretest v5
update bitrot, conform better to style guide, updates based on experience (i.e. bugs and tweaks), etc..
Can we merge the pipelining pretest with captive portal detection and/or build pipelining pretest on top of captive portal detection?
I added a dependency on bug 340548. Either we could disable pipelining when we have detected a captive portal or else we need to retry the pipelining pretest after we've gotten past the captive portal.
I meant 562917. Sorry for the spam.
Created attachment 503352 [details] [diff] [review]
pipeline pretest v6
Created attachment 513671 [details] [diff] [review]
pipeline pretest v7
bitrot due to syn retry deps.. putting feedback? instead of r? on this because we need to resolve the hosting considerations..
Created attachment 542083 [details] [diff] [review]
pipeline pretest v8
update bitrot to reflect larch
Comment on attachment 542083 [details] [diff] [review]
pipeline pretest v8
Review of attachment 542083 [details] [diff] [review]:
If this is still accurate:
"The test works like this
1] it opens a connection
2] it sends down that connection a burst of 3 requests, each of which is
labeled with a serial number
3] Until the first response is seen it pipelines an additional 3 requests down
the same pipe (with new serial numbers) every 33ms
4] After the first response is received it continues to send a burst of 3
requests down the same pipe (with new serial numbers) every 33ms for the next
5] after the 300ms timer is complete it sends no more requests but waits to
receive all the responses"
..then you really don't need that much new code and modification for it.
Let me outline my idea:
Regarding the server: I would like to use something existing. PHP will probably not be enough (however, my knowledge is not deep enough to say for certain) to wait for 3 requests to come to the server and only then send out responses. If that gets confirmed, we might want to find or write a simple Apache mod. I would rather put effort that way than invent new server code, with potential security holes, running on a public production machine.
Let's start talking about how to implement the pretest in a simpler way than this patch does. For now I'm dropping the review flag. Patrick, next time we should talk first before you write so much code..
I've decided to put this patch into cold storage for a while until a demonstrated need for it is established. I do this because:
* experience with mobile pipelining (in FF and other browsers) indicates to me this is not a pervasive problem.
* any issues the pre-test discovers should be discoverable through other means
* privacy concerns
* services concerns and lack of progress on server side production implementation