Closed
Bug 165060
Opened 22 years ago
Closed 9 years ago
RFE Decide whether to use pipelining based on "server" return
Categories: Core :: Networking: HTTP (enhancement, P5)
Status: RESOLVED WONTFIX
Target Milestone: Future
People
(Reporter: zzxc, Unassigned)
Details
Mozilla should maintain a list of known servers that do support pipelining, and as an option, users should be able to enable pipelining only for these servers. This would prevent bugs caused by servers that don't correctly implement pipelining, while speeding up requests to those that do. (Mozilla would simply not use pipelining if the server response isn't confirmed to support it.) Technically, this shouldn't be needed: if a server returns HTTP/1.1, that should be all that is required. But until servers behave correctly, there must be a workaround.
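The whitelist idea described above can be sketched as a check of the `Server` response header against a list of known-good servers. This is a minimal illustration, not Mozilla code; the function name and whitelist entries are made up for the example:

```cpp
// Hypothetical sketch: gate pipelining on the Server response header.
#include <cassert>
#include <set>
#include <string>

// Servers believed to pipeline correctly; entries are illustrative only.
static const std::set<std::string> kPipelineWhitelist = {
    "Apache/2.4",
    "nginx",
};

// Return true only if the Server header begins with a whitelisted token.
bool MayPipeline(const std::string& serverHeader) {
    for (const std::string& good : kPipelineWhitelist) {
        if (serverHeader.compare(0, good.size(), good) == 0)
            return true;
    }
    // Unknown server: fall back to plain (non-pipelined) HTTP/1.1 requests.
    return false;
}
```

An unrecognized `Server` value simply means requests are sent one at a time, so the cost of a conservative default is only lost speed, never breakage.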
Updated•22 years ago
Status: UNCONFIRMED → NEW
Ever confirmed: true
Comment 1•22 years ago
determining that a server fails to support pipelining correctly is non-trivial. the server may simply appear to take a long time to respond to the second request. we could have a timeout to catch such cases, but this would not result in a very pleasant user experience. still, other servers simply return garbage, or the wrong document!! those cases are even harder to detect. even worse, sometimes it's not even the server that is at fault, but rather a transparent proxy. mozilla has no hope of reliably detecting the presence of a transparent proxy because it is by definition supposed to be transparent to the client!
Severity: normal → enhancement
Status: NEW → ASSIGNED
Priority: -- → P5
Summary: Decide whether to use pipelining based on "server" return → RFE Decide whether to use pipelining based on "server" return
Target Milestone: --- → Future
Reporter
Comment 2•22 years ago
That's why it would be a good idea to provide an option so that if the server responds with a Server header that hasn't been tested for pipelining support, Mozilla assumes it doesn't support pipelining. Any such list used by Mozilla should allow servers to be added to this "whitelist". For proxy servers, there's that handy option to disable pipelining for them.

Most users should never be turning this on. Those who would venture as far as to fool with "advanced HTTP networking" options would most likely know what they were doing. Also, if things start acting screwy after an "advanced HTTP networking" change, something should ring a bell. I'm sure Netscape's release will have words of warning to keep people away from this.

Also, I'm curious: which web servers return HTTP/1.1 WITHOUT supporting pipelining correctly? A blacklist of servers that don't support it correctly might also work. When a user files a bug about a site that doesn't load, after confirming that the server release really doesn't support pipelining, that server should be added to the blacklist. Whitelist or blacklist, something needs to be done, or pipelining is worthless. (Why can't web servers just support the standard?)
Comment 3•22 years ago
we currently have a hard coded blacklist. see http://lxr.mozilla.org/seamonkey/source/netwerk/protocol/http/src/nsHttpConnection.cpp#188 but like i said, a black-list is not a sufficient solution. it does not address the problem of transparent proxies, which are the worst offenders in my experience!
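The hard-coded blacklist mentioned here amounts to a prefix match against known-bad `Server` strings. A rough sketch in that spirit follows; the entries and function name are illustrative, not the actual list from nsHttpConnection.cpp:

```cpp
// Illustrative blacklist check; entries are examples, not Mozilla's list.
#include <cassert>
#include <cstring>

static const char* const kBadServers[] = {
    "Netscape-Enterprise/3.",  // example entry
    "WebLogic",                // example entry
};

bool SupportsPipelining(const char* serverHeader) {
    if (!serverHeader)
        return false;  // no Server header at all: play it safe
    for (const char* bad : kBadServers) {
        if (std::strncmp(serverHeader, bad, std::strlen(bad)) == 0)
            return false;  // known-broken server: never pipeline
    }
    return true;
}
```

As the comment notes, this catches only misbehaving origin servers; a broken transparent proxy never shows up in the `Server` header at all.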
Reporter
Comment 4•22 years ago
I wasn't aware that snippet of code was in there. Still, that doesn't suffice; there needs to be an easily updatable list of these servers. (99.9% of users can't apply a patch and recompile.) A whitelist mode would help people who want to load pages faster from known-to-work web servers without risking a server that doesn't correctly support pipelining. The whitelist, if implemented, should of course have an option with a UI. With transparent proxies, there's nothing that can be done besides trying to get their developers to make them compatible with HTTP/1.1. What have these transparent proxies been known to do with attempted pipelining? (Cutting the connection short, routing it wrong, just plain not supporting it?) This bug is just for unique server replies that are known to mess things up. A transparent proxy would keep the "Server" header intact, so this workaround won't work on them. A user workaround for transparent proxies would be to allow users to specify sites not to use pipelining with. (See bug 165350.)
Comment 5•22 years ago
The only thing I can think of that might work is support for pulling a list from some place, sort of the way PAC does. This would probably need to be different from PAC, because you would want to subscribe to the list from a source you trust for quality data. Ideally, the list would come from some group or organization that is dedicated to reporting and updating the list. Some groups have a similar setup to fight spam: they collectively gather, evaluate, and publish data that their members subscribe to.
Comment 6•22 years ago
such a list would have to be consulted for each and every new connection. if the list ever grew long, it might seriously impact the performance of the browser.
Comment 7•22 years ago
Transparent proxies that do not support pipelining are not transparent because they do not support pipelining :) Would it be sufficient to assume you have a transparent proxy if you find a server that you know should pipeline but doesn't? Then the many users who are not in this situation could get their significantly faster surfing without breaking the browser for others :) How you know could be based on a whitelist or on HTTP/1.1, or some other combo of heuristics.
Comment 8•22 years ago
the problem comes w/ determining that a server does not support pipelining. it is often difficult to associate an error condition with that particular cause. coding up a guessing algorithm would be tricky :-/
Comment 9•22 years ago
Well, it's time-consuming but if I understand it, there is a sure-fire algorithm to determine if a server pipelines or not: you try to pipeline, and then if the connection closes after the first file, it doesn't pipeline. Thus the algorithm would have to be "If the server should support pipelining, try to pipeline, and then if pipelining fails, assume we have a crappy transparent proxy." You don't want to be doing this often (it's going to be a bad user experience whenever you try to pipeline on a server that doesn't support it because you won't open simultaneous connections), but if you turn it off quickly after realizing what's happening, even users that have bad transparent proxies will only get this once.
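The "try once, then give up" heuristic proposed in this comment can be sketched as a small per-host tracker: pipelining is attempted until the first observed failure for that host, then disabled. The class and method names here are hypothetical, not Mozilla APIs:

```cpp
// Sketch of per-host pipelining state: attempt pipelining until the
// first failure for that host, then permanently disable it there.
#include <cassert>
#include <map>
#include <string>

enum class PipelineState { Untested, Working, Broken };

class PipelineTracker {
public:
    // Pipeline unless this host has already been observed to fail.
    bool ShouldPipeline(const std::string& host) const {
        auto it = mState.find(host);
        return it == mState.end() || it->second != PipelineState::Broken;
    }

    // Record the outcome of a pipelined attempt, e.g. ok = false when
    // the connection closed after the first response.
    void Record(const std::string& host, bool ok) {
        mState[host] = ok ? PipelineState::Working : PipelineState::Broken;
    }

private:
    std::map<std::string, PipelineState> mState;
};
```

As the next comment points out, the hard part is not storing the verdict but deciding reliably that a failure was really a pipelining failure.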
Comment 10•22 years ago
> Well, it's time-consuming but if I understand it, there is a sure-fire
> algorithm to determine if a server pipelines or not: you try to pipeline, and
> then if the connection closes after the first file, it doesn't pipeline. Thus
> the algorithm
nope.. sorry, no such thing as a sure-fire algorithm. generally, what happens
is the server ignores subsequent requests. the client sees this as the server
taking a long time to respond to subsequent requests. or, sometimes the server
returns junk responses as a result of pipelined requests. for example, it might
have an algorithm for reading from the network such that it reads everything
available and then extracts 1 request from that, discarding whatever other bytes
it may have read. such an algorithm totally horks pipelining.
if we only considered the case of servers that drop subsequent requests, we
could possibly use a timeout-based algorithm, but what would the timeout be?
what if a server is just slow to respond? what is a reasonable timeout? hard
to say. some servers can take minutes to respond.
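The timeout heuristic under discussion can be sketched as a simple deadline check; the struct, function name, and values are illustrative. The sketch also makes the objection concrete: a fixed deadline cannot distinguish a dropped pipelined request from a merely slow server, so any false positive depends entirely on the chosen timeout:

```cpp
// Sketch of the timeout heuristic: presume a pipelined request was
// dropped once a deadline passes with no response. A slow server past
// the deadline produces a false positive, which is the core weakness.
#include <cassert>
#include <chrono>

using Clock = std::chrono::steady_clock;

// Hypothetical per-transaction record for a pipelined request.
struct PipelinedRequest {
    Clock::time_point sent;
    bool responded = false;
};

bool PresumedDropped(const PipelinedRequest& req, Clock::time_point now,
                     std::chrono::seconds deadline) {
    return !req.responded && (now - req.sent) > deadline;
}
```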
Comment 11•22 years ago
This is just a heuristic, remember. A 30s timeout on individual sites should be sufficient to handle most pipelining sites, and if they don't respond in time, we fall back to non-pipelined requests. This is where a whitelist containing known-good servers comes in. Of course, one bastard pretending to be a pipelining server, or who stalls on the second request, could spoil the surfing party for everyone, but you can invent more complex heuristics for that ("if you've surfed other sites successfully, and you've never surfed this one before, blacklist this particular site for pipelining"). Anything we do is going to require heuristics anyway. An alternative, which could at least detect the crappy transparent proxy earlier, is to have a "pipelining test" dialog that goes to a known pipelining server we set up @mozilla.org. If the first request succeeds and the second fails, we turn off pipelining forevermore. This "pipelining test" could come up the first time we suspect a failure to pipeline (and therefore would only affect users on crappy transparent proxies). Just ideas. Given the speed gains I've heard about, it might be worth it to be able to turn it on by default.
Comment 12•22 years ago
well, there was a suggestion to introduce something like "Connection: pipeline" to indicate that the server is explicitly OK with pipelining. it might be better to evangelize the use of such a header instead of trying to fix the very broken world.
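If such an opt-in token existed, a client could test for it by scanning the comma-separated token list that the Connection header carries (per RFC 2616). The "pipeline" token itself is hypothetical and was never standardized; the helper below is only a sketch:

```cpp
// Sketch: case-insensitive lookup of a token in a Connection-style
// comma-separated header value. The "pipeline" token is hypothetical.
#include <cassert>
#include <cctype>
#include <sstream>
#include <string>

bool HasConnectionToken(const std::string& header, const std::string& token) {
    std::istringstream ss(header);
    std::string item;
    while (std::getline(ss, item, ',')) {
        // Trim surrounding whitespace from the token.
        size_t b = item.find_first_not_of(" \t");
        size_t e = item.find_last_not_of(" \t");
        if (b == std::string::npos)
            continue;
        std::string t = item.substr(b, e - b + 1);
        // Lowercase for a case-insensitive comparison.
        for (char& c : t)
            c = static_cast<char>(std::tolower(static_cast<unsigned char>(c)));
        if (t == token)
            return true;
    }
    return false;
}
```

The next comment explains why this still would not survive a transparent proxy: Connection is a hop-by-hop header, so any intermediary is entitled to rewrite it.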
Comment 13•22 years ago
Sounds good to me on the single-server detection end (maybe that plus a small whitelist would be enough to give users a huge benefit)--though we'll have to do something for transparent proxies anyway :(
Comment 14•22 years ago
well, the connection header is hop-by-hop, so it is likely that transparent proxy servers are already re-writing the connection header (or maybe that would violate the transparent-ness of the proxy... i'm not sure yet).
Updated•18 years ago
Assignee: darin → nobody
Status: ASSIGNED → NEW
Comment 15•16 years ago
so, 6 years later, have things changed in a way that would enable this to happen in some way?
Updated•9 years ago
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → WONTFIX