Closed Bug 264354 Opened 20 years ago Closed 10 years ago

Enable HTTP pipelining by default

Categories

(Core :: Networking: HTTP, enhancement)

enhancement
Not set
normal

Tracking


RESOLVED WONTFIX

People

(Reporter: bryansmiley78, Unassigned)

References

Details

(Whiteboard: [pipelining][depends on fallback strategy][p-opera],[sec-assigned:curtisk:749235 ])

User-Agent:       Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)
Build Identifier: 

network.http.pipelining should be set to true by default, not false.

Reproducible: Always
Several people keep reporting problems when pipelining is switched on, for
example bug 200298, bug 192929, bug 205686, bug 145392, ... So maybe it's not
yet possible to switch it on by default.

Note: it works for me; I always set network.http.pipelining and
network.http.proxy.pipelining to true.
The pref panel reads
"WARNING: Pipelining is an experimental feature, designed to improve page-load
performance, that is unfortunately not well supported by some web servers and
proxies"

So I think that as long as Mozilla has no automatic detection method coupled to a
fallback to "non-pipelining", this pref should REMAIN off by default. That
would make this a WONTFIX.

Just my thoughts, though.
no, i'll accept this bug as valid.  the requirement for fixing this bug is
indeed implementing a fallback strategy.  the fallback has to be nearly perfect,
though, and that's the challenge.

there's quite a few bugs on file about how different servers misbehave when
confronted with multiple (pipelined) requests.
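For context, this is what a pipelined exchange looks like on the wire (hostname and paths are invented): the client writes several requests back-to-back without waiting for responses, and per RFC 2616 the server must answer them, in order, on the same connection.

    GET /style.css HTTP/1.1
    Host: www.example.com
    Connection: keep-alive

    GET /logo.png HTTP/1.1
    Host: www.example.com
    Connection: keep-alive

    HTTP/1.1 200 OK            <- first response: /style.css
    Content-Length: 1234
    ...
    HTTP/1.1 200 OK            <- second response: /logo.png
    Content-Length: 5678
    ...

A server that mishandles this typically answers the first request and then stalls, closes the connection, or emits bytes that the client misparses as the second response.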
Status: UNCONFIRMED → ASSIGNED
Ever confirmed: true
Keywords: helpwanted
Summary: network.http.pipelining - set to TRUE by default → Enable HTTP pipelining by default
Whiteboard: [pipelining][depends on fallback strategy]
Target Milestone: --- → Future
OS: Windows 2000 → All
Hardware: PC → All
This bug could be used to track progress on other blocking pipelining bugs too.

I also agree that it would be wise to revise the current batch pipelining code in
order to make it a bit more robust by fixing the most embarrassing bugs
(it could also be made smarter, to make pipelining more effective).

RFC 2616 states that on persistent connections (HTTP/1.1) clients MAY send batch
pipelined requests and that they HAVE to be prepared to resend NON-pipelined
requests if something goes wrong (e.g. partial responses because the connection is
closed prematurely, etc.).

Unfortunately RFC 2616 does not define a new keyword for HTTP headers that would let
servers explicitly tell HTTP clients they support pipelining (yes, we all know
that many HTTP/1.1 web servers evolved from non-standard HTTP/1.0 + Keep-Alive).

Because of this, and of other wrong assumptions made by many software engineers
(we live in a broken world), HTTP clients now have to *guess* whether a server
supports pipelining (even if it sends HTTP/1.1 messages).

So here are a few hints / wishes (probably nothing new):

1) Make a proposal to the Networking Working Group (w3c) to add
   an explicit "pipelining" keyword (backward compatible) to HTTP headers;

   all new versions of web servers will include it,
   small patches could be distributed to fix current good web servers;

   NOTE 1: this is not too scary because right now there are very few
           users who enable pipelining and probably only 2 or 3 browsers
           (not including MSIE) that properly implement it.

   NOTE 2: using 4 persistent connections (instead of 2)
           through an ADSL line is often
           as fast as using pipelining (guess why).

2) If the above working group does not react quickly,
   you could also ask web-server vendors / maintainers
   to add a custom HTTP header like this:

         X-Connection: pipelining

   (most of them will be happy to implement this).

3) While steps 1) and/or 2) are in progress,
   also make a public call to all web server vendors / maintainers
   to send their blacklists (with their known-bad server versions)
   to the Mozilla team.

   NOTE 3: hopefully this move will noticeably improve relationships
           between those guys and Mozilla developers / users.

   NOTE 4: the black-list should be considered a temporary hack.

4) Load the black-list from an external (configurable) file
   instead of leaving it hardcoded.
 
   NOTE 5: the loading of this list should be very fast !
           (a simple editable text format with one server name per line
           should suffice).

   NOTE 6: maybe using wildcard characters ( *, ?, [...] ) could help
           keep the number of entries low (a minimal matching sketch
           follows this list).

   NOTE 7: new (updated) versions of Mozilla black-list
           could be easily downloaded from Mozilla sites.
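A purely illustrative sketch of points 3) and 4), assuming a plain-text file with one server pattern per line and shell-style wildcards; this is hypothetical code, not anything in Necko, and the names are made up:

    // pipelining_blacklist.cpp -- hypothetical sketch, not actual Necko code.
    // Loads a plain-text blacklist (one Server-header pattern per line,
    // '#' starts a comment) and matches with shell-style wildcards.
    #include <fnmatch.h>
    #include <fstream>
    #include <string>
    #include <vector>

    static std::vector<std::string> LoadBlacklist(const std::string& path) {
      std::vector<std::string> patterns;
      std::ifstream in(path);
      std::string line;
      while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;  // skip blanks / comments
        patterns.push_back(line);
      }
      return patterns;
    }

    static bool IsBlacklisted(const std::vector<std::string>& patterns,
                              const std::string& serverHeader) {
      for (const std::string& p : patterns) {
        if (fnmatch(p.c_str(), serverHeader.c_str(), 0) == 0)
          return true;  // matched a known-bad pattern: do not pipeline here
      }
      return false;
    }

The client would call IsBlacklisted() with the value of the Server response header (e.g. "Apache/1.3.*" patterns) before allowing further requests to be pipelined on that connection.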
Thanks for the proposed solutions, but what about the problem of intermediate
transparent proxies?  I've seen cases where Apache 2.x is situated behind a
broken transparent proxy that does not even send a Via header.  So, blacklisting
servers is a bad solution. 

Moreover, any new header (X-Connection) would be problematic because proxies
would not understand that the new header is really hop-by-hop.  So, the only
viable "header" solution is to extend the Connection (and Proxy-Connection)
headers with a new keyword.

Also, error detection is challenging in practice.  We have to deal with servers
that either ignore the pipelined requests or send back junk.  The junk case can
be difficult because that junk might look like a valid response!  The other
case can be dealt with using a timer with some sort of heuristically determined
timeout (perhaps based on past response times).
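A hedged sketch of that timer idea (purely illustrative, not the actual Necko implementation): keep a smoothed per-host estimate of past response times, roughly the way TCP's retransmission timer does, and treat a pipelined response that exceeds a generous multiple of the estimate as the signal to tear down the pipeline and reissue the remaining requests unpipelined.

    // Hypothetical per-host response-time tracker for a pipelining fallback
    // timer; constants are illustrative, in the spirit of TCP's
    // retransmission-timeout smoothing.
    #include <cmath>

    struct ResponseTimeEstimator {
      double srt = 0.0;     // smoothed response time, in ms
      double var = 0.0;     // smoothed deviation, in ms
      bool   seeded = false;

      void Sample(double ms) {
        if (!seeded) { srt = ms; var = ms / 2; seeded = true; return; }
        var = 0.75 * var + 0.25 * std::fabs(srt - ms);
        srt = 0.875 * srt + 0.125 * ms;
      }

      // Give up on a pipelined response after this many milliseconds and
      // fall back to sending the remaining requests one at a time.
      double FallbackTimeoutMs() const {
        return seeded ? srt + 4 * var + 250.0 : 3000.0;  // cold-start floor
      }
    };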
(In reply to comment #5)
> Thanks for the proposed solutions, but what about the problem of intermediate
> transparent proxies?  I've seen cases where Apache 2.x is situated behind a
> broken transparent proxy that does not even send a Via header.  So, blacklisting
> servers is a bad solution.

It is a poor but not too bad solution if an intermediate transparent proxy
supports HTTP/1.1 + pipelining and the end HTTP/1.1 server does NOT support
pipelining.

It is a problem if the end server is good whereas the intermediate proxy is
broken, completely transparent (i.e. it does not add any proxy header, nor does it
rewrite the HTTP version in the status line) and it supports persistent / keep-alive connections.

I guess that a sort of Web Police should shut down those broken proxies.

Netcraft could also monitor web servers and proxies that are out of specification.

> 
> Moreover, any new header (X-Connection) would be problematic because proxies
> would not understand that the new header is really hop-by-hop.

I know there is this risk, but the same happened and is happening with the
"Connection: keep-alive" header: some HTTP/1.0 proxies forward it even if they
don't support keep-alive (web servers have to deal with this problem; Apache
detects it by looking at proxy headers).

If the intermediate proxy is not completely transparent then this is not a
problem, because the browser can detect that case.

> So, the only
> viable "header" solution is to extend the Connection (and Proxy-Connection)
> headers with a new keyword.

Yes, but reusing "Connection: new-keyword" could break existing intermediate
proxy servers because many of them implement the RFC recommendations poorly
(e.g. they assume that every "Connection:" header not containing "keep-alive" is
equivalent to "Connection: close").

Maybe well-behaved web servers should send the two Connection headers only if they
received the same pair of Connection headers from the client, i.e.:

   Connection: pipelining
   Connection: keep-alive

Unaware / broken proxies, other browsers, etc. usually give priority to the last
occurrence of a repeated header; in this case they could interpret "Connection:
pipelining" as "unknown" or "close", and the second "Connection: keep-alive" should
override the prior header.

Of course a client or a server could, but in practice SHOULD NOT, use:

    Connection: keep-alive, pipelining

RANT: the only requirement to support pipelining is to preserve unused - but
already read - content in server buffers, so I really miss the point of why so
many HTTP/1.1 servers (especially proxies) are broken and, most of all, why they
are not fixed yet (the HTTP/1.1 specification is more than 5 years old).

> Also, error detection is challenging in practice.  We have to deal with servers
> that either ignore the pipelined requests or send back junk.  The junk case is
> can be difficult because that junk might look like a valid response!  The other
> case can be dealt with using a timer with some sort of heuristically determined
> timeout (perhaps based on past response times).

The error detection work is indeed really challenging; there are also cases with
wrong Content-Length values.

W3C should disambiguate all those cases stating clearly what to do.
> I guess that a sort of Web Police should shut-down those broken proxies.

I'm not sure who these web police are, or what power they have to shut down
major e-commerce sites ;-)


> Netcraft could also monitor web servers and proxies out of specifications.

I'm not too familiar with Netcraft's capabilities, but you seem to suggest that
we shouldn't worry too much about this situation because it can be dealt with
through evangelism efforts.  I'm not convinced.  I know from my experience that
bad virtual hosting servers are not uncommon.  At one point (maybe still),
macys.com had a virtual hosting system that directed image requests to an
HTTP/1.0 server and all other requests to an HTTP/1.1 server.  A pipelined
request could actually result in a mix of HTTP/1.1 and HTTP/1.0 responses! 
Mozilla has code to deal with that particular scenario.  The point is that
server infrastructure can be quite a mix of old and new software.


> I know there is this risk, but the same happened and is happening with the
> header "Connection: keep-alive", some HTTP/1.0 proxies resend it even if they
> don't support keep-alive (web servers have to deal with this problem, Apache
> detects this by looking at proxy headers).
> 
> If the intermediate proxy is not completely transparent then this is not a
> problem because browser can detect this case.

Do you mean: by sniffing for a Via header?


> (i.e. they assume that every "Connection:" header not containing "keep-alive" 
> is equivalent to "Connection: close").

ic... that makes things more fun doesn't it? :-/


> already read - contents in server buffers, so I really miss the point of why 
> so many HTTP/1.1 servers (specially proxies) are broken and, most of all, why 
> they are not fixed yet (HTTP/1.1 specifications are more than 5 years old).

I have never studied the source code for the badly implemented servers, but I
can imagine that they might read everything on the wire and assume that it is a
single request and then discard whatever is left in the input buffer (assuming
it could only be junk or should be empty).  The fact remains that most people do
not test their servers with a client that issues pipelined requests, so these
problems slip through the HTTP/1.1 "conformance" tests.
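A hedged illustration of that guess (hypothetical pseudocode for a naive server loop; Request, ParseOne() and Respond() are assumed helpers, and this is not any real product's code): the broken variant parses one request out of whatever it read and throws the rest of the buffer away, while a correct loop keeps the unconsumed tail for the next parse.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <string>

    struct Request { std::string method, target, version; };
    bool ParseOne(const std::string& in, Request* out, size_t* consumed);  // assumed
    void Respond(int fd, const Request& r);                                // assumed

    void ServeConnection(int fd) {
      char buf[8192];
      std::string pending;               // bytes received but not yet parsed
      for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n <= 0) break;
        pending.append(buf, n);          // may now hold several pipelined requests

        // BROKEN pattern: parse one request, then drop the rest of the buffer:
        //   Request r; size_t c; ParseOne(pending, &r, &c); Respond(fd, r);
        //   pending.clear();            // pipelined requests silently lost

        // Correct pattern: drain every complete request, keep the leftover tail.
        Request r; size_t consumed = 0;
        while (ParseOne(pending, &r, &consumed)) {
          Respond(fd, r);                // RFC 2616: respond in request order
          pending.erase(0, consumed);
        }
      }
    }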


> W3C should disambiguate all those cases stating clearly what to do.

Isn't RFC 2616 an IETF specification?  How is the W3C involved with HTTP?
(In reply to comment #7)
>
> I'm not sure who these web police are, and what power do they have to shut-down
> major e-commerce sites ;-)

Of course here I was joking :-)

> 
> > Netcraft could also monitor web servers and proxies out of specifications.
> 
> I'm not too familiar with Netcraft's capabilities, but you seem to suggest that
> we shouldn't worry too much about this situation because it can be dealt with
> through evangelism efforts.  I'm not convinced.

In the long term evangelism should work, but the efforts must be combined;
Netcraft already monitors critical / performance parameters of networks and web
servers (ping time, latency, number of HTTP errors, up-time, etc.), so
monitoring the pipelining capabilities of big ISPs could be a small add-on to their
services.

Anyway I agree that this situation is similar to that of "bad HTML code";  I
guess that the best results could be achieved by involving server vendors /
developers and publishing open-source test programs specific to pipelining;
things should then improve a lot in a couple of years.

> I know from my experience that
> bad virtual hosting servers are not uncommon.  At one point (maybe still),
> macys.com had a virtual hosting system that directed image requests to a
> HTTP/1.0 server and all other requests to a HTTP/1.1 server.  A pipelined
> request could actually result in a mix of HTTP/1.1 and HTTP/1.0 responses! 
> Mozilla has code to deal with that particular scenario.  The point is that
> server infrastructure can be quite a mix of old and new software.

Yes, out there it's a jungle but HTTP clients are supposed to be smarter than
servers.

> 
> Do you mean: by sniffing for a Via header?
> 

Yes, at least until a new "pipelin{e|ing}" keyword is added to HTTP headers
(or the broken servers die off, maybe after 2015).

The reasoning is that if there are one or more proxies between the client and
the end server(s), then the real network latency / congestion between the client
and the first proxy should be (in theory) somewhat lower / better than that of the
end-to-end path; in that case disabling pipelining should not degrade
performance too much, with the advantage that it could prevent obscure problems
generated by intermediate web caches / proxies.  Admittedly, in many situations
proxies are deployed really near the end servers, but that is just how things are.

> I have never studied the source code for the badly implemented servers, but I
> can imagine that they might read everything on the wire and assume that it is a
> single request and then discard whatever is left in the input buffer (assuming
> it could only be junk or should be empty).  The fact remains that most people do
> not test their servers with a client that issues pipelined requests, so these
> problems slip through the HTTP/1.1 "conformance" tests.

Yes, of course this is the reason, but it is so banal / embarrassing that
spreading the simple test programs mentioned above, able to detect problems with
pipelined requests, could be helpful; after all, ISPs are supposed to upgrade
their software at least once a year (e.g. for security fixes, etc.).

> 
> > W3C should disambiguate all those cases stating clearly what to do.
> 
> Isn't RFC 2616 an IETF specification?  How is the W3C involved with HTTP?

Sorry, I didn't explain well what I meant.

Of course I know RFCs are IETF things, but here we are facing something that is
not only a technical problem (indeed generated by laziness and non-conformance
to standards) but something that touches the speed and usability of the WWW and
that will become more and more noticeable as speed and web traffic
increase while network latency stays high (well over 50 - 60 milliseconds).

W3C is involved in promoting the best usage of the web; it has developed libwww
(http://w3c.org/Library/) and performed lots of tests with HTTP/1.0 and HTTP/1.1
to prove the efficiency of those protocols.

W3C (or even some other well-known organization) could certainly, directly or
indirectly, reopen a general discussion on this issue, stating that it's not as
marginal as it seems and that fixing it would be good (in order to make the web
experience fast enough to be enjoyable).

    See also: (really outdated but maybe still valid)
               http://www.apps.ietf.org/w3c/index.html

In any case some IETF members involved in the HTTP area should then be convinced
that a small new revision of the HTTP/1.1 specification, aimed at fixing the broken
world, would be welcome.
As this bug seems to track the overall progress on pipelining (how to enable it in a safe way, etc.), I have added bug 329977 to discuss small technical details about how pipelining could be smarter in sending batch requests (to enhance performance and avoid some pitfalls).
Assignee: darin → nobody
Status: ASSIGNED → NEW
Target Milestone: Future → ---
*** Bug 359697 has been marked as a duplicate of this bug. ***
According to http://www.die.net/musings/page_load_time/, Opera has pipelining enabled by default.
Keywords: perf
Whiteboard: [pipelining][depends on fallback strategy] → [pipelining][depends on fallback strategy][p-opera]
I initially jumped in on the wrong bug and got redirected here. I've been working on web server code, and was rather surprised to see pipelining not being used despite the RFC saying that the server must support it.

1) If pipelining is turned off in the browser by default, it effectively acknowledges broken pipelining as the de facto standard. In this case servers should be provided with a way to state that they do support pipelined requests.

2) Or turn pipelining on by default, allowing the user to disable it on a per-site basis. It's likely the browser can detect a probable pipelining error and allow the user to disable it for the given site. Complaints to the admins will likely result in rapid server upgrades.
I'm for enabling

network.http.pipelining
network.http.proxy.pipelining

by default, maybe leaving

network.http.pipelining.ssl = false

as caution for sensitive sites.

This could be planned for Firefox 4.0 and it could be one of the steps in order to make it feel "faster".

As a personal experience, I have set network.http.pipelining to true for years and I've never had any major problem. However, if you say there could be problems, maybe you should add a recovery mechanism for problematic servers.

Also, I seem to remember (I'm not sure though) that browsers like Opera have been using pipelining for years (maybe this is one of the reasons why Opera seems faster to some users).
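For anyone who wants to experiment locally, a minimal user.js sketch of the opt-in described above (network.http.pipelining.maxrequests is listed as an assumption; check about:config on your build for the exact pref names and defaults):

    // user.js sketch -- opt-in pipelining prefs discussed in this bug.
    user_pref("network.http.pipelining", true);
    user_pref("network.http.proxy.pipelining", true);
    user_pref("network.http.pipelining.ssl", false);      // stay cautious over TLS
    user_pref("network.http.pipelining.maxrequests", 4);  // keep pipelines short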
I very much doubt Mozilla will opt to enable that by default as the broken transparent proxies can't be identified without a lot of effort. A clever hacker will use http pipelining stress tests to find them, though. And since these servers are obviously running buggy software, they are prime targets for such hackers.

But outside the hacker community, there isn't enough of a return to globally find these things, or even get around them.

Even changing the major version to HTTP/2.0 would not get around these issues. At this point in time, HTTP is frozen with concrete shoes. How it was used years ago is how it will continue to be used. Perhaps in another six years that might shift. Maybe.

It will be more likely that HTTP gets thrown out the window first. Perhaps replaced by SPDY, but who really knows. Maybe WAKA (that was a joke).

Maybe HTTP (or SPDY) over SCTP instead of TCP. All of these are more likely than http pipelining getting used by default.

...in other words... this is a dead horse. ;)
Pipelining is the default for HTTP/1.1, which is 11 (11!) years old. I think it's time to get over it and enable pipelining. If there are broken or misconfigured servers or proxies, it's normal that they won't work, as they are "broken or misconfigured"; moreover, I have never had problems with any site in years, and Opera already enables pipelining by default. We can't wait forever and I'd say 11 years are enough.
Maybe you can enable pipelining by default and add a checkbox into the connection settings page in order to enable/disable it.
(In reply to comment #15)
> I very much doubt Mozilla will opt to enable that by default as the broken
> transparent proxies can't be identified without a lot of effort. . . .
> 
> But outside the hacker community, there isn't enough of a return to globally
> find these things, or even get around them.

Opera seems to think otherwise, no?

(In reply to comment #16)
> We can't wait forever and I'd say 11 years are enough.

We *can* wait forever -- many features are specced but never deployed, sometimes because of backward-compatibility issues.  In the real world, if the new version of Firefox doesn't work for a lot of users, they're going to stop using it and switch to another browser.  They don't care if you blame it on broken proxies, they're going to observe that it didn't happen on the old Firefox version and so it must be Mozilla's fault.

(In reply to comment #17)
> Maybe you can enable pipelining by default and add a checkbox into the
> connection settings page in order to enable/disable it.

Even extremely sophisticated users won't be able to jump from "hey, this site doesn't work" to "it must be because of intermediate proxies that don't support HTTP pipelining correctly", so that wouldn't help much.  It needs to work correctly out of the box or not at all.
> We *can* wait forever -- many features are specced but never deployed,
> sometimes because of backward-compatibility issues.  In the real world, if the
> new version of Firefox doesn't work for a lot of users, they're going to stop
> using it and switch to another browser.  They don't care if you blame it on
> broken proxies, they're going to observe that it didn't happen on the old
> Firefox version and so it must be Mozilla's fault.

In the real world, Firefox has a large share (plus Opera's share), so admins will probably fix their errors (which are not Firefox or Opera errors). Just look at the IE-only sites: they are disappearing because people ask for Firefox and standards compliance. Also, I think it wouldn't be "a lot of users" but a very small minority (just look at the popularity of extensions like "Fasterfox").

> Even extremely sophisticated users won't be able to jump from "hey, this site
> doesn't work" to "it must be because of intermediate proxies that don't support
> HTTP pipelining correctly", so that wouldn't help much.  It needs to work
> correctly out of the box or not at all.

At least you'd give them a chance; also there could be an article in the Knowledge Base (yeah, not everybody will google for solutions, but again you give them a chance).
using the existing md5 function is one of many possible heuristics to detect when we have gone off the rails..
Depends on: 232030
Based on comments across the pipelining bugs, I've added some more depends, including some RFEs which most people would want before this is enabled for all users.
Depends on: 593386, 486769, 585939, 165350
Depends on: 363109
Depends on: 603503
Why not announce that Firefox will adopt pipelining by default in 6 months.  That gives admins time to change.  They're not going to switch until there is a reason to do so.

I've been using it for many years with pipelining enabled and I haven't run into any sites that seem to break because of it.
(In reply to comment #23)
> Why not announce that Firefox will adopt pipelining by default in 6 months. 

fyi I am working fiercely at getting code robust enough to have pipelining on by default into the tree.

> That gives admins time to change.  They're not going to switch until there is a
> reason to do so.
> 

admins really aren't the issue. If they knew they had a problem, they would fix it. For the most part they do not know. This isn't a feature they implement, it is generally a latent bug they don't know about.

That being said, as you indicate below, the web has come a long way in the last 5 years - outright breakage is pretty rare. However there are a number of cases where a too-simple pipelining policy interacts badly with the web and can actually slow things down... so I am working to make the Firefox one react to those scenarios robustly.

please see bug 603503



> I've been using it for many years with pipelining enabled and I haven't run
> into any sites that seem to break because of it.
No longer depends on: 232030
Whiteboard: [pipelining][depends on fallback strategy][p-opera] → [pipelining][depends on fallback strategy][p-opera],[sr:curtisk]
I'm glad pipelining is moving forward. :)

I also think X-Connection: pipelining or something like it might be good so big sites can do a worldwide pipelining check day, sorta like IPv6 day. Well, after Patrick's fixes so pipelining use is always faster... :)
What would that do?
Whiteboard: [pipelining][depends on fallback strategy][p-opera],[sr:curtisk] → [pipelining][depends on fallback strategy][p-opera],[secr:curtisk]
I think it's really funny: Everyone fears HTTP pipelining, even though it is an official and open standard; and a very old one known for ages. I've been using it for maybe 4 years and it never turned out to be the cause of any problem; when something was not working, it was not working after disabling it either. People fear broken web servers, broken HTTP proxies (especially transparent ones), anti-virus software, firewalls and so on. Even though it would clearly be the fault of the vendor of all these "broken things", we fear that breaking one out of ten million HTTP requests might cause Firefox to lose market share.

And then comes Google and suggests SPDY, everyone cheers, and Mozilla is really considering supporting it in future releases. As if there were any guarantee that SPDY implementations will be more bug-free than those of HTTP/1.1, or that SPDY will not cause issues with some kind of (transparent) proxy servers or anti-virus or firewall software. I think the chance that deploying SPDY breaks something and possibly disappoints users is maybe 100 times higher than in the case of HTTP/1.1 pipelining, yet we cheer on SPDY and frown at HTTP pipelining. And today Google made some even stranger proposals on how to speed up the web, proposals that a combination of HTTP/1.1 Keep-Alive and pipelining should make obsolete anyway.

HTTP pipelining is like IPv6. Servers don't support it because customers don't use it. Customers don't use it because their network environment (company, school, DSL/cable provider) has no support for it. Their network environment has no support for it because, without servers supporting it, the admins see no need: they say customers won't need it. In the end everyone is waiting on everyone else to make the first step, and this for how many years now?

Just my two cents on this topic.
(ignore me, I will not further "spam" this bug report, at least not until Chrome finally officially has pipelining enabled by default)
(In reply to TGOS from comment #29)
> I think it's really funny: Everyone fears HTTP pipelining,

hi TGOS, thanks for your comments. I don't fear pipelining - though I respect its dangers greatly.

You cite concerns about breakage. That's a real, but minor, problem; in the vast majority of cases we can detect and work around breakage. Though see 
http://tech.vg.no/2011/12/14/safari-on-ios-5-randomly-switches-images/ for the one known case where that isn't true. (and no, one case shouldn't hold back the feature).

the bigger issue has frankly been head of line blocking and its impact on performance and robustness. Naive pipelines simply make performance worse. I've spent a lot of time trying to attack that.

but we're still trying.
Opera on mobile and desktop, the Android browser, and Safari on iOS 5 enable pipelining by default; this should give more assurance for enabling it by default in Firefox.

Here are some interesting articles about pipelining implementations:
http://www.blaze.io/mobile/http-pipelining-big-in-mobile/
http://www.blaze.io/technical/http-pipelining-request-distribution-algorithms/
Whiteboard: [pipelining][depends on fallback strategy][p-opera],[secr:curtisk] → [pipelining][depends on fallback strategy][p-opera],[sec-assigned:curtisk:749235 ]
It's worth noting that Fennec and B2G *do* enable pipelining by default, which makes the status of this bug a bit strange (and sad).
Looks like Chrome is interested in taking this on now, ‘HTTP pipelining has been enabled for 10% of Chrome’s dev-channel users.’ ‐ http://src.chromium.org/viewvc/chrome?view=rev&revision=132444
From a dev-evang and PR perspective, it might be worth doing at the same time, so could this pref be enabled on Nightly? :)
FYI, I've had HTTP Pipelining enabled in Nightly builds for a couple of years now and I can honestly say I haven't run into any issues where I've had to disable it due to any problems. I don't understand the newest about:config entries related to pipelining though, so I haven't touched those, but I don't think there would be a problem with enabling it on Nightly. On a related note, there is a 'SPDY Indicator' extension for Firefox that lets you know when SPDY is enabled on a website. Is there a way to do something like this for pipelining? Maybe show an icon when it is working correctly, another icon when it has to fall back to another method, etc.
> Is there a way to do something like this for pipelining? 
Good question, maybe as part of some telemetry work but I couldn't find any bug reports on this.
Quick update:
http://src.chromium.org/viewvc/chrome?view=rev&revision=134439
As of Sat Apr 28 03:46:03 2012 Chrome pipelining trial has been expanded to 100% of dev channel.
Flags: sec-review+
Benchmarking shows HTTP Pipelining is faster than using our current default

http://benjaminkerensa.com/2012/11/12/http-pipelining-benchmark-firefox-16
That's not a benchmark, it's an anecdotal measurement on a single site. Please read up on the (long) history of this issue before making blanket proclamations.
I'm not making blanket proclamations....

275 Million people use Opera which has Pipelining by default.

Millions more use mobile browsers like Safari and Chrome which also have it enabled, and my understanding is that Firefox OS itself uses pipelining.

Nothing anecdotal about the benefits at all, unless you're suggesting that the IETF and the people who created the "standard" have it all wrong and that there really is no benefit.
Just adding my $0.02 here.

I've had pipelining enabled in my own builds based on the Mozilla code for 2+ years now, in active use by an estimated 100-150k users world-wide. The only complaints about this feature have been for particularly old (broken) proxies that don't like it (mostly used in the Russian Federation and other similar countries), so HTTP pipelining by default is *on* in my builds, but it's off for proxies so as not to impact the fair number of users behind broken web proxies (I should probably revisit this proxy flag sometime; it's possible proxy vendors have shaped up by now). Networking performance and page downloads do not suffer from the feared head-of-line issue at all, in my experience; in practice the delays caused by packet loss in the sequential download of elements, and/or the overhead of setting up a much larger number of concurrent connections (murder for wireless/residential gateways), are much worse than head-of-line delays. Across the board, from slow dialup to fast fiber, pipelining seems to work very well.
I too have had pipelining enabled on my personal machines for 5+ years without issues, and have enabled it on 600+ production workstations for a few years now in both Firefox and SeaMonkey without any user complaints or issues. My users are 12,000 university students and professors, so they beat the *hell* out of our 100Mb connection. If something is wonky, they let us know right away. It's been really quiet since deployment, so AFAIAC it works and works well. Seeing as how pipelining has been part of SeaMonkey (what I use primarily) for close to a decade, I'd hate to see Mozilla be late to the game on a feature they've had for this long. Pipelining's been around way longer than SPDY. Now or never, guys.
What is the procedure to enable HTTP pipelining by default? Can I help in some way?
this bug should have been closed a while ago - pipelines have been passed by multiplexing approaches such as spdy and http/2. The inherent head of line blocking challenges in pipelines make them perform too inconsistently.
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → WONTFIX
(In reply to Patrick McManus [:mcmanus] from comment #44)
> this bug should have been closed a while ago - pipelines have been passed by
> multiplexing approaches such as spdy and http/2. The inherent head of line
> blocking challenges in pipelines make them perform too inconsistently.

I know that stream multiplexing is an alternative approach to what pipelining achieves.

So, because people are too dumb to implement HTTP/1.1 (RFC 2616, which clearly specifies pipelining) correctly, let's create HTTP/2.0 and let Google force it. VERY VERY VERY WISE.
HTTP/2 = not implemented by anyone, not even a standards proposal yet.
SPDY = HTTPS only, and having its own set of pitfalls.

Regular HTTP traffic (what, 95+% of all traffic on the web?) could definitely use the advantages of pipelining, so marking this WONTFIX in light of a standard that hasn't even been proposed yet is incredibly short-sighted, IMHO. Pipelining is implemented in Gecko, it works (and works very well), it's part of the HTTP/1.1 standard, so use it already?
re comment 45 - that's not a correct reading of the situation.

pipelines inherently suffer from hol problems even when implemented bug free.

To use them correctly you need to know what the rtt is going to be, the size of the object being transferred, and the effective bandwidth of the pipe at that moment. That's pretty unknowable.

If you get that wrong, you have the potential for delaying important things behind unimportant things - especially as the notion of "important" evolves during page layout. We see this with pipelining all the time - unless you have consistently very bad rtts it is a marginal improvement in the common case and pretty bad for you in the worst case, where you have to handle all kinds of errors (and again, I don't mean bugs). It's also a disaster when you want to cancel anything on a navigation event.
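To put rough, invented numbers on that cost: suppose a 5 KB stylesheet gets pipelined behind a 2 MB image on a 10 Mbit/s link with a 100 ms RTT.

    image transfer time         ~ 2 MB * 8 / 10 Mbit/s          ~ 1.6 s
    CSS on its own connection   ~ 1 RTT + 5 KB * 8 / 10 Mbit/s  ~ 0.104 s
    CSS pipelined behind image  ~ 1.6 s + 0.004 s               ~ 1.6 s

Getting the ordering wrong turns a ~100 ms fetch into a ~1.6 s one, which is exactly the rescheduling problem described above.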

So desktop isn't really a good match at this time.. I'm going to leave it on for mobile for now as the higher rtts tip the balance in its favor - but only by a little bit. trying to refine it further is not a great plan when multiplexing approaches are fixing its fundamental problems - we're investing in http/2 instead as the path forward on this problem.

I don't agree with your assessment of google's role here.
(In reply to Patrick McManus [:mcmanus] from comment #47)

> If you get that wrong, you have the potential for delaying important things
> behind unimportant things - especially as the notion of "important" evolves
> during page layout. We see this with pipelining all the time - unless you
> have consistently very bad rtts it is a marginal improvement in the common
> case and pretty bad for you in the worst case where you have to handle all
> kinds of errors (and again, I don't mean bugs). Its also a disaster when you
> want to cancel anything on a navigation event.

Indeed, there are issues with pipelining and it's not a perfect solution for all problems. However, it has been specified by the standard, and in the average case it would offer good performance improvements. I agree that it's not a very usable solution and that multiplexing (along with other improvements in HTTP/2.0) is a better solution.

But: Given that it's not even possible to implement HTTP/1.1 (which includes pipelining) correctly within 10+ years (on the server as well as on the client side), how should it EVER be possible to implement HTTP/2.0 correctly, especially considering that large parts of HTTP/2 incorporate HTTP/1.1 syntax?

> So desktop isn't really a good match at this time.. I'm going to leave it on
> for mobile for now as the higher rtts tip the balance in its favor - but
> only by a little bit. trying to refine it further is not a great plan when
> multiplexing approaches are fixing its fundamental problems - we're
> investing in http/2 instead as the path forward on this problem.

I hope that you will be successful, but in my humble opinion it won't be possible, because HTTP/2.0 is NOT human-readable and is MUCH more complex, and as you can see, it's not even possible to implement HTTP/1.1 correctly within 10+ years. Logging HTTP/2.0 connections will be a complex task (now it's just reading a text file), as will generating and parsing requests. I fear that only connections to the large sites will work, because Chrome and maybe Firefox <-> gmail.com will be tested, but every other combination will have tons of bugs because of the complexity of the involved standards.

> I don't agree with your assessment of google's role here.

HTTP/2.0 is based on SPDY, which is only successful because Google created it and enforced it via Chrome, isn't it?
(In reply to RH from comment #48)
> 
> HTTP/2.0 is based on SPDY, 

HTTP/2 is indeed based on the positive experiences garnered with spdy.

>which is only successful because Google created
> it and enforced it via Chrome, isn't it?

Spdy is successful because it is a high quality solution to real problems and has an open definition - so multiple independent parties adopted it. And that yielded a lot of positive experience. Google deserves credit for doing that work and illustrating it in the chrome/google.com combo - but I don't see that as forcing anyone else to adopt. Firefox and twitter talk spdy to each other because we think its the best strategy - not because of anything google is making us do.

Standardization of something like that in a broader public forum is how the process works best - http/2 is not a rubber stamp of spdy at all but it certainly is based around the approach spdy showed to work because the participants in the effort believe that to be the right approach. Importantly, google does not have change control of http/2 - that belongs to the IETF.
(In reply to RH from comment #48)
> (In reply to Patrick McManus [:mcmanus] from comment #47)
> 
> > If you get that wrong, you have the potential for delaying important things
> > behind unimportant things - especially as the notion of "important" evolves
> > during page layout. We see this with pipelining all the time - unless you
> > have consistently very bad rtts it is a marginal improvement in the common
> > case and pretty bad for you in the worst case where you have to handle all
> > kinds of errors (and again, I don't mean bugs). Its also a disaster when you
> > want to cancel anything on a navigation event.
> 
> Indeed, there are issues with pipelining and it's not a perfect solution for
> all problems. However, it has been specified by the standard and in the
> average case, it would offer good performance improvements. I agree that
> it's not a very useable solution and multiplexing (along with other
> improvements by HTTP/2.0) are better solutions.
> 
> But: Given that it's not even possible to implement HTTP/1.1 (which includes
> pipelining) correctly within 10+years (on server as well as on client side),
> how sould it EVER be possible to implement HTTP/2.0 correctly, especially
> considering that large parts of HTTP/2 incorporate HTTP/1.1 syntax?
> 
> > So desktop isn't really a good match at this time.. I'm going to leave it on
> > for mobile for now as the higher rtts tip the balance in its favor - but
> > only by a little bit. trying to refine it further is not a great plan when
> > multiplexing approaches are fixing its fundamental problems - we're
> > investing in http/2 instead as the path forward on this problem.
> 
> I hope that you will be successful, but in my humble opionion it won't be
> possible because HTTP/2.0 is NOT human-readable and MUCH more complex, and
> as you can see, it's not even possible to implement HTTP/1.1 within 10+
> years. Logging HTTP/2.0 connections will be a complex task (now it's just
> reading a text file), as well as generating and parsing requests. I fear
> that only connections to the large sites will work because Chrome and maybe
> Firefox <-> gmail.com will be tested, but every other combination will have
> tons of bugs because of the complexity of the involved standards.
> 
> > I don't agree with your assessment of google's role here.
> 
> HTTP/2.0 is based on SPDY, which is only successful because Google created
> it and enforced it via Chrome, isn't it?

SPDY is successful because it works well, and everyone who is already implementing it in their web server software (Apache, nginx, etc.) is allowing users to reap the benefits. Pipelining is a woulda-coulda-shoulda solution that should have had its day in the sun years ago, but something better has come along. On paper, pipelining is great. In practice, it's a fustercluck. Too many chefs have spoiled the pot.

I'd been using it enabled at my org for many years, but SPDY is just the next natural evolution of what pipelining was *supposed* to be. The odds of web servers out there actually using pipelining are slim to none. Even Microsoft is conceding their answer to SPDY / pipelining is no match for SPDY. SPDY/4 will likely be what HTTP/2.0 becomes and we'll all be the better for it. There's nothing complex about it; it's a simple implementation. But please do not make the argument that SPDY is a Google-only baby; it's not. Anyone can implement it now, and it will be an open and ratified standard by, likely, 2015. It's pointless to waste man-hours on a dead standard.
Sales pitch aside, in Feb 2014, use of SPDY was down from 1% in April 2013 to 0.6/0.7% now. If it was that fantastic, you'd expect greater adoption. So no, it's neither a widely implemented protocol nor an increasingly used one, despite Google's pushing.

As for pipelining's potential hol issues, the Firefox implementation has implemented rescheduling of elements to a different pipeline and fallback to non-pipelined requests for a reason, in case there are "holdups". If you set the pipelining parameters in a sane way (keep pipes short and use several in parallel) it results in a big practical win, even with short rtts.

In all practical respects enabling pipelining, even if not a "holy grail" solution, will be beneficial for everyone except a very small percentage running into technical problems (usually proxy related or talking to ancient servers that don't implement http/1.1 properly and are pretty much broken).

I think you should be careful not to have tunnel-vision here. "But http/2 will be awesome" is nice, but doesn't help us now, nor for some years to come.
(In reply to Mark Straver from comment #51)
> Sales pitch aside, in Feb 2014, use of SPDY was down from 1% in April 2013
> to 0.6/0.7% now. 

That does not reflect our userbase.

> As for pipelining's potential hol issues, the Firefox implementation has
> implemented rescheduling of elements to a different pipeline and fallback to
> non-pipelined

I know better than anyone. I wrote it. It's good - but not good enough.
(In reply to Mark Straver from comment #51)
> In all practical respects enabling pipelining, even if not a "holy grail"
> solution, will be beneficial for everyone except a very small percentage
> running into technical problems (usually proxy related or talking to ancient
> servers that don't implement http/1.1 properly and are pretty much broken).

Just for an example, activating pipelining for https will immediately make my ISP's account management interface completely broken. And it happens to be the largest ISP in Australia.

Saying that they're stupid might be truthful, but doesn't change that fact.

------

I agree that SPDY is not perfect - annoyingly it only works for servers that have a verifiable identity on the internet (ie. the server has to know its global address and be able to prove it), which is a larger scope than HTTP and not all HTTP servers work like that.

But if pipelining doesn't work, then it doesn't work :(
(In reply to Patrick McManus [:mcmanus] from comment #52)
> > Sales pitch aside, in Feb 2014, use of SPDY was down from 1% in April 2013
> > to 0.6/0.7% now. 
> 
> That does not reflect our userbase.

http://w3techs.com/technologies/details/ce-spdy/all/all

So, either your metrics are way wrong, or somehow Firefox users only visit SPDY-enabled sites. Which of the two is more likely?

> I know better than anyone. I wrote it. Its good - but not good enough.

Is it better than plain HTTP's one-element-at-a-time? I think not. And just brute-forcing it by opening a different TCP connection for each element is hardly a solution, either (and probably slower, anyway, for small elements).

(In reply to Radek 'sysKin' Czyz from comment #53)
> Just for an example, activating pipeling for https will immediately make my
> ISP's account management interface completely broken. And it happens to be
> the largest ISP in Australia.
> 
> Saying that they're stupid might be truthful, but doesn't change that fact.

If the fact is that they need to update their management interface's webserver to something that supports a 10+ year old standard, or configure it properly, then I'm not saying it isn't a fact. I'm saying that that is something that can be fixed easily by affected providers, and should be fixed. They may not be aware. And I think it would take very little time for them to correct that -- but that's hardly something that should hold back a good technological solution and standard.

> But if pipelining doesn't work, then it doesn't work :(
Pipelining DOES work. And it works very well. See my post (and others) before where pipelining as-implemented in Firefox is and has been enabled for years already, with barely any ill effects.
I think not -> I think so.
(In reply to Mark Straver from comment #54)
> If the fact is

Sorry, I wasn't clear there: the "fact" I was talking about was that this specific website will not work with pipelining on.

And for a consumer product like Firefox, it is not acceptable to release a version where this doesn't work, when it worked on the previous version. (this is obviously not a fact but my opinion).

If you can reconcile those two statements (without referring to what third parties "should") then I'm all for pipelining. In fact I'm a developer of an HTTP-based application server (the kind you install and then open http://localhost/ to use) and I'd *love* to be able to use something more modern than non-pipelined HTTP/1.1 (SPDY cannot be used because the server can't authenticate itself).
> Sorry, I wasn't clear there: the "fact" I was talking about was that this
> specific website will not work with pipelining on.

By the way, what does "will not work" mean? As far as I understand pipelining, the client would "enqueue" more than one request to the server, which then responds. If pipelining is not supported, the server would just respond to the first request and then drop the connection.

What exactly does the server of your ISP do when you send pipelined requests there?

Would you support accommodating other standards violations too, just to keep the admins of your ISP happy? (And to repeat it one more time: pipelining is a MUST according to RFC 2616: "A server MUST send its responses to those [pipelined] requests in the same order that the requests were received.")

> And for a consumer product like Firefox, it is not acceptable to release a
> version where this doesn't work, when it worked on the previous version.
> (this is obviously not a fact but my opinion).

So Firefox' CSS rendering should have never switched from quirks mode to standards mode? There were so many Web sites that stopped "working"...

> If you can reconcile those two statements (without referring to what third
> parties "should") then I'm all for pipelining. In fact I'm a developer of a
> HTTP-based application server (the kind you install and then open
> http://localhost/ to use) and I'd *love* to be able to use something more
> modern than non-pipelined HTTP 1.1 (SPDY is cannot be used because the
> server can't authenticate itself).

I think the HTTP/2.0 approach is interesting, but as I said above: How should HTTP/2.0 in all its complexity be used correctly by all involved parties when even pipelining is such a problem? And secondly, HTTP/2.0 isn't even released and will take some time to come (if it ever comes), while HTTP/1.1 is ready and Firefox' pipelining support is ready, too.
(In reply to Radek 'sysKin' Czyz from comment #56)
> And for a consumer product like Firefox, it is not acceptable to release a
> version where this doesn't work, when it worked on the previous version.
> (this is obviously not a fact but my opinion).

By that reasoning, any non-compliant server would be a blocker for any product progress. That's not how things work. Regressions do happen, and if they are caused by broken servers, then the servers need to be fixed, not the browser.
Pipelining is part of the standard. If the management server breaks, the server is not RFC compliant.

So, I certainly don't agree with that. But this is starting to go off on a tangent that isn't really related to the problem at hand. The fact that it doesn't work properly on a very small percentage of non-compliant/broken servers (and that is indeed a fact) should not mean everyone else has to be disadvantaged.
(In reply to RH from comment #57)
> By the way, what does "will not work" mean? As far as I understand
> pipelining, the client would "enqueue" more than one request to the server,
> which then responds. If pipelining is not supported, the server would just
> respond to the first request and then drop the connection.

Bah, just when you asked they started an overnight maintenance and shut it down...

But the symptom is as follows: the website just "loads", spinning and spinning and spinning. Half an hour later you might see the result of the initial HTML request, without CSS, images, or JavaScript.

I can only guess that the server responds to the first request but then doesn't return the others, and also doesn't close the connection (like with keep-alive).

Since it's HTTPS I can't debug with Wireshark to go any deeper.
Having said that, I never understood why Mozilla's pipelining logic attempts to maintain a blacklist of known broken servers rather than a whitelist of a few popular good ones.

Maybe that's all that's needed.
Please stop spamming this bug.

It is very simple:

Pipelining has been an undeniable pain in the ass. Nobody has gotten it working properly without hacks and even then problems pop up. It should work, but it is nonetheless a mess. It would be nice if we could get it running now, but obviously that hasn't happened. THERE IS NO POINT IN DEBATING THIS. Yes, servers should be fixed, but they aren't. Yes, heuristics to get it working are possible, but they're still not idiot-proof. There is nothing productive in rambling on this topic here.

The guy in charge made the decision that we failed to get this to the point where it could be turned on by default. That's his decision. Mozilla has limited resources. Working on a coherent strategy to fix the issues pipelining aims to fix through a standard that doesn't have the same issues is the way forward. Yeah, it kinda sucks that we won't get this on by default in the meantime, but that's just the way it's going to be. You can still turn it on for yourself if you want. Nobody is talking about yanking it yet.

SPDY is an option and it does work now. Yeah, it isn't as widely used, though the tiny fraction of the Internet that does use it has a LOT of users, so there is a disproportionate benefit from it. Yeah, it's not ideal either.

Unless you are a peer on this project there is nothing productive you can post here. Please stop. You're just wasting people's time and flooding inboxes. If you want to debate this at length, do so somewhere else.
@Dave, if you don't want to read what is very much valid discussion on this feature, you're free to not CC yourself to this bug and save yourself a "flooded inbox".

If peer discussion from people with *practical* experience with pipelining is not desired by the bug owners, and they want it restricted to the "short list" of Mozilla-employed developers, then by all means use the restrict-comments feature, as is so often done on these Bugzilla bugs when people don't want to admit a valid counter-argument has been made. I thought Firefox development was supposed to be community- and peer-driven. I guess I was wrong.

In the meantime, I'll go elsewhere and let you shoot yourself in the foot.
(In reply to Mark Straver from comment #62)
> @Dave, if you don't want to read what is very much valid discussion on this
> feature, you're free to not CC yourself to this bug and save yourself a
> "flooded inbox".
> 
> If peer discussion from people with *practical* experience with pipelining
> is not desired by the bug owners and want it to be restricted to the "short
> list" of Mozilla-employed developers, then by all means use the
> restrict-comments feature like is so often done on these bugzilla bugs when
> people don't want to admit a valid counter-argument is made. I thought
> Firefox development was supposed to be community- and peer-driven. I guess I
> was wrong.
> 
> In the meantime, I'll go elsewhere and let you shoot yourself in the foot.

I think you're raising valid concerns, as we're often faced with the problem that a feature could break a lot of websites (for example, third-party cookie blocking or click-to-play plugins).
I guess one possibility would be to test the N most visited websites, and see how many break.
The thing is, Bugzilla isn't the right place to discuss this. Bugzilla is a bug tracking system, if you're not happy with a decision someone has made on Bugzilla, you should write on a mailing list (mozilla-dev-platform maybe?).
(In reply to Mark Straver from comment #62)
> @Dave, if you don't want to read what is very much valid discussion on this
> feature, you're free to not CC yourself to this bug and save yourself a
> "flooded inbox".
> 
> If peer discussion from people with *practical* experience with pipelining
> is not desired by the bug owners and want it to be restricted to the "short
> list" of Mozilla-employed developers, then by all means use the
> restrict-comments feature like is so often done on these bugzilla bugs when
> people don't want to admit a valid counter-argument is made. I thought
> Firefox development was supposed to be community- and peer-driven. I guess I
> was wrong.
> 
> In the meantime, I'll go elsewhere and let you shoot yourself in the foot.

They have to spend time with useful features like the Social API, not useless features like HTTP pipelining and WebP.
Folks - please stop the squabbling about non-pipeline-related topics - you're embarrassing yourselves: things such as who is allowed to post here, conspiracy theories about bug flags, and the importance of other Mozilla initiatives. This is a pipeline configuration policy bug and you're more than welcome to talk about that here if you like. If the comment trail can't stay on topic then I'll just mute the bug from my notifications.

As has been mentioned, I've made this decision as the module owner. It pains me as I believe firefox has the most sophisticated pipelining algorithm ever built, but the fundamental approach is simply flawed in ways that multiplexing is not. Let's put energy into making multiplexing a success.

The goal with any dispatch strategy is to minimize queuing time. Basically pipelines improve things at the median a bit, but out at the 75th or greater percentile they actually start to make things worse (reschedules due to head of line blockers, etc.) and then you still need to deal with any interop problems. The major interop problem actually often comes from anti-virus software. The curves look better on mobile because rtts are so much worse in that environment - and there aren't the same number of scanners injecting themselves into the path - which is why we have enabled it for Android. I also have a suspicion that people simply tolerate the errors a bit better on mobile because mobile environments are known for being cruddy... that's not something I want to perpetuate.

on the other hand, because SPDY combines both multiplexing and priority it is able to achieve constant queue times of ~0 and still not suffer from head of line problems, thanks to the priority bits. The telemetry data shows this - it's not just anecdotal data. http/2 has refined this with a much more flexible priority strategy and a smaller quantization, which makes it even more responsive. It's not a perfect protocol - but it's a good step forward.

This is a decision for this point in time based on what we know. I invite folks who disagree to try and address the issues with substantive patches instead of comments and to do controlled experiments to see if they achieve broad results. I'm more than open to A/B tests in nightly using our telemetry infrastructure - I just don't support working on those projects over other initiatives. But of course you should feel free to scratch a different itch if that's your priority.
>and there aren't the same amount of scanners injecting themselves into the path. which is why we have enabled for android.

How about enabling pipelining on Linux and Mac OS X? They suffer from the "on-access scanner" problem only in very rare cases.
This could also catch many sysadmins, who could finally discover that their sites are broken and add an nginx/Apache server doing reverse proxying to fix the problem.
I speak for myself: I am responsible for several sites, but I have no idea whether any of them has a pipelining problem, because I don't have it enabled and no one has complained.
Chromium recently removed HTTP pipelining support; the removal will be in Milestone 37. The review URL is https://codereview.chromium.org/275953002 and the bug is https://code.google.com/p/chromium/issues/detail?id=364557

Commit message: Remove HTTP pipelining support.

It's been a couple years since anyone worked on it, and there
are no plans to enable it by default.

Cached pipelining-related server information will automatically
be cleared when server information is next saved, so there are
no issues on that front.

Just dropping this information here; I don't mean to derail or spam the bug.