Bug 68517 - RFE: Improved support for HTTP/1.1 transfer codings (transfer-encoding)
Status: RESOLVED WONTFIX
Keywords: perf
Product: Core
Classification: Components
Component: Networking: HTTP
Version: Trunk
Hardware: All All
Importance: P5 enhancement, 9 votes
Assigned To: Nobody; OK to take it and work on it
URL: http://www.w3.org/Protocols/rfc2616/r...
Duplicates: 167905, 198926
Depends on: 59464
Blocks: 68414 95314
Reported: 2001-02-11 12:51 PST by Andreas M. "Clarence" Schneider
Modified: 2015-11-30 13:59 PST (History)

Description Andreas M. "Clarence" Schneider 2001-02-11 12:51:51 PST
Mozilla should support other transfer codings than "identity" and "chunked".

If we want compression to speed up a connection, we should include a
"TE: trailers, gzip, compress, deflate" header field or something similar in
every HTTP request.
Of course we must be able to decode actually applied transfer codings
indicated with "Transfer-Encoding: ..." (see bug 59464 for that).

Currently, we only support *content* codings (by sending
"Accept-Encoding: ..."). Content codings are not meant to speed up a
connection. They indicate a property of the resource. Therefore such resources
should keep their content coding when they are saved to disk.

See bug 35956 (File extension not changed but gzipped files expanded when
saving) for the confusion caused by using *content* codings instead of
*transfer* codings. Maybe problems with some HTTP/1.0 servers will remain,
but we should do our best to ensure good interoperability with conforming
HTTP/1.1 servers.
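
As a rough sketch of what the decoding side involves (Python purely for illustration; this is not Necko code, and both helper names are made up), a receiver undoes the codings listed in Transfer-Encoding from right to left, with "chunked" always the outermost layer:

```python
import gzip
import io

def decode_chunked(raw: bytes) -> bytes:
    """De-chunk an HTTP/1.1 chunked body (hex chunk-size lines, CRLF-delimited)."""
    out = bytearray()
    buf = io.BytesIO(raw)
    while True:
        size_line = buf.readline().strip()
        size = int(size_line.split(b";")[0], 16)  # ignore chunk extensions
        if size == 0:
            break  # last-chunk; any trailer fields would follow here
        out += buf.read(size)
        buf.read(2)  # consume the CRLF that terminates each chunk's data
    return bytes(out)

def decode_transfer_encoding(body: bytes, te_header: str) -> bytes:
    """Undo the codings named in a Transfer-Encoding header, rightmost first."""
    for coding in reversed([c.strip() for c in te_header.split(",")]):
        if coding == "chunked":
            body = decode_chunked(body)
        elif coding in ("gzip", "x-gzip"):
            body = gzip.decompress(body)
        elif coding == "identity":
            pass  # no transformation applied
        else:
            raise ValueError("unsupported transfer coding: " + coding)
    return body
```

For "Transfer-Encoding: gzip, chunked" this de-chunks first and then gunzips, mirroring the order in which the sender applied the codings.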

Some Links to RFC 2616:
 3.5  Content Codings
      http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5
 3.6  Transfer Codings
      http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.6
 4.3  Message Body
      http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.3
14.3  Accept-Encoding
      http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
14.11 Content-Encoding
      http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11
14.39 TE
      http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.39
14.41 Transfer-Encoding
      http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.41
Note the changes from RFC 2068 regarding transfer codings:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.6.3
Comment 1 Darin Fisher 2001-02-12 08:48:48 PST
Agreed.  This should be fixed.
Comment 2 Darin Fisher 2001-06-05 22:16:25 PDT
-> mozilla 1.0
Comment 3 Vicente Salvador 2001-08-14 07:44:12 PDT
With Build ID 2001081303, Mozilla says it accepts encoding gzip, but when the
server sends a gzipped response, Mozilla doesn't show anything.

Mozilla should get rid of Accept-Encoding: gzip if it doesn't support it!!!

You can take it out now and set it again when gzipped content is working.
Comment 4 Bradley Baetz (:bbaetz) 2001-08-14 08:27:28 PDT
Accept-Encoding has nothing to do with transfer encodings. If you have a problem
with gzip content encodings, then please file a new bug, giving a URL where the
problem can be observed.
Comment 5 Darin Fisher 2001-11-27 12:04:55 PST
not sure if this can be squeezed in for mozilla 1.0, but it sure would be nice.
planning optimistically for now.
Comment 6 Darin Fisher 2001-12-18 13:21:13 PST
punt -> 0.9.9
Comment 7 Darin Fisher 2002-02-06 18:32:53 PST
-> moz 1.0
Comment 8 Darin Fisher 2002-03-01 12:05:30 PST
-> no time for this during the 1.0 cycle, pushing out to mozilla 1.1
Comment 9 Darin Fisher 2002-05-17 16:22:57 PDT
mass futuring of untargeted bugs
Comment 10 Boris Zbarsky [:bz] 2002-09-10 23:52:53 PDT
So as a tiny nudge... what is the answer to the following question:

  Should the data in the cache be encoded or decoded when TE is used?

The RFC does not really address the issue as far as I can tell...
Comment 11 Boris Zbarsky [:bz] 2002-09-11 00:15:41 PDT
*** Bug 167905 has been marked as a duplicate of this bug. ***
Comment 12 Ben Bucksch (:BenB) 2002-09-11 00:35:38 PDT
Changing severity enh -> normal, because this is part of the spec (although not
as a MUST) and apparently indirectly creates significant problems with
Content-Encoding.
Comment 13 Darin Fisher 2002-09-11 09:45:07 PDT
ben: uhm.. please explain the significant problems before changing the severity
of this bug.  as you said this is not a mandatory part of HTTP/1.1, so this is
an enhancement / feature request.

bz: it probably makes sense to keep the content compressed in the cache as we do
w/ Content-Encoding: gzip, since that saves on space.
Comment 14 Ben Bucksch (:BenB) 2002-09-11 21:37:46 PDT
> please explain the significant problems

See bug 55690 comment 253 - 259.

Basically, because the Transfer-Encoding header is supported nowhere, servers send
Content-Encoding instead. But there are semantic differences:
Transfer-Encoding should (usually) be decompressed by the browser, and
Content-Encoding usually should not. Because this distinction is missing, we
don't know what to do.

Supporting Transfer-Encoding won't solve the problem immediately (servers need
to send it), but would be a first step, probably *the* necessary first step to
get Transfer-Encoding used.
Comment 15 Darin Fisher 2003-04-09 00:27:01 PDT
*** Bug 198926 has been marked as a duplicate of this bug. ***
Comment 16 Jon Hanna 2007-02-14 05:10:08 PST
"Should the data in the cache be encoded or decoded when TE is used?"

It should *appear* to have been decompressed (this is more important in the case of proxy caches than local caches, because TE is point-to-point not client-to-server).

There's no reason why it shouldn't be stored compressed, but access to the cache should appear to be access to uncompressed data.

File saves should definitely be uncompressed.
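
A toy sketch of that store-compressed, serve-decoded idea (hypothetical Python; the class and its methods are invented here, not the actual cache code):

```python
import gzip

class TransferCodedCache:
    """Toy cache that stores entries gzip-compressed to save space, but
    always hands callers the decoded bytes, so access to the cache
    appears to be access to uncompressed data."""

    def __init__(self):
        self._entries = {}  # url -> compressed bytes

    def put(self, url: str, decoded_body: bytes) -> None:
        # Transfer codings are hop-by-hop, so they are stripped before
        # caching; re-compressing here is purely a storage optimization.
        self._entries[url] = gzip.compress(decoded_body)

    def get(self, url: str) -> bytes:
        # Callers never see the internal compression.
        return gzip.decompress(self._entries[url])
```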
Comment 17 Jon Hanna 2007-02-14 05:11:05 PST
It's also not true that Transfer-Encoding is supported nowhere, Opera has support (though they had issues with it at one point IIRC).
Comment 18 Terry 2009-10-17 20:52:25 PDT
I'm a programmer, but not a web programmer, and I just learned about this whole transfer-encoding issue from a wikimedia feature request: https://bugzilla.wikimedia.org/show_bug.cgi?id=4947

There is a need for Transfer-Encoding functionality that the current content-type hack can't fill, but browsers have to act first. Has this really been sitting here for over 8 years? O.o

If I understand the situation correctly, then support for legit transfer-encoding could be added without screwing up sites that still use the content-type hack. Couldn't you... continue to treat compression content-types as transfer-encodings, just as they are now, unless a transfer-encoding header is present, in which case honor it and interpret content-type strictly according to the standard...?

Given backward-compatible support and time, the less-functional hack could be phased out without site hosts having to make any special effort.

If I had the necessary knowledge, I'd happily put my money where my mouth is and try to submit a patch. (I know that I don't. :-/) Anyone willing to step up and try?
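
That backward-compatible dispatch could be sketched as follows (Python for illustration; `decode_plan` and its return shape are invented for this sketch, not a real API):

```python
def decode_plan(headers: dict) -> tuple:
    """Return (codings to undo on receipt, whether Content-Encoding
    survives a save-to-disk), following the backward-compatible scheme:
    honor Transfer-Encoding when present, otherwise fall back to the
    legacy behavior of undoing a compression Content-Encoding."""
    te = headers.get("Transfer-Encoding", "")
    ce = headers.get("Content-Encoding", "")
    if te:
        # Honor TE strictly; CE stays a property of the saved resource.
        return ([c.strip() for c in te.split(",") if c.strip()], True)
    if ce in ("gzip", "x-gzip", "deflate", "compress"):
        # Legacy kludge: treat the compression CE as if it were TE,
        # so it is undone before display and before saving.
        return ([ce], False)
    return ([], True)
```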
Comment 19 Michael A. Puls II 2010-06-09 07:13:27 PDT
(In reply to comment #17)
> It's also not true that Transfer-Encoding is supported nowhere, Opera has
> support (though they had issues with it at one point IIRC).

This may be of interest:

"Should we try to handle sites that gzip twice?" <http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-that-gzip-twice>

See this comment:
<http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-that-gzip-twice#comment30720892>
Comment 20 Jon Hanna 2012-08-14 02:13:05 PDT
We started using content-encoding in cases where transfer-encoding is what we really want in the 1990s as a temporary kludge until the browsers added support, which would be any moment now...
Comment 21 Jon Hanna 2012-08-14 02:21:53 PDT
(In reply to Michael A. Puls II from comment #19)
> "Should we try to handle sites that gzip twice?"
> <http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-
> that-gzip-twice>
> 
> See this comment:
> <http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-
> that-gzip-twice#comment30720892>

While I've no doubt that a mishandling of both CE and TE that applied both and then failed to add one of the headers needed is a cause some of the time, I think the case is over-stated.

I've seen this sort of server-side bug several times when that wasn't the case, seen it sent to browsers (like Firefox, for example) that don't support TE, and I'll put up my hand and admit that I have indeed written code with that exact bug more than once myself while only using the content-encoding kludge (it's normally a quick catch, though, which is a reason in itself not to have the browsers "fix" it for me if I make that mistake). Where I've most often seen it is when a process internal to the server's handling calls another handler (often for an error page or some other "special case") and both handlers apply the compression.

Of course, supporting both CE and TE requires handling a double-compressed stream that is labelled as such (sub-optimal but valid behaviour), but that's a different matter.

I don't think this is relevant unless I see that there are a large number of cases that only affect Opera, and that those cases are indeed trying to use both and doing so incorrectly (as opposed to only doing TE and merely getting that wrong, which is exactly the same as getting the current CE kludge wrong).
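
For the labelled double-compression case, the decode order is fixed: the hop-by-hop Transfer-Encoding layer comes off first, then the Content-Encoding layer. A hypothetical Python sketch (gzip-only for brevity; the function name is made up):

```python
import gzip

def decode_labelled_layers(body: bytes, te: str, ce: str) -> bytes:
    """Sub-optimal but valid double compression: the server applies
    Content-Encoding: gzip to the entity, then Transfer-Encoding: gzip
    on the wire, and labels both. The receiver undoes TE before CE."""
    if "gzip" in te:
        body = gzip.decompress(body)  # hop-by-hop layer, stripped on receipt
    if "gzip" in ce:
        body = gzip.decompress(body)  # resource-level layer
    return body
```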
Comment 22 Patrick McManus [:mcmanus] 2015-11-30 13:59:52 PST
The web has definitely centered on content-encoding rather than T-E, so for h1 we won't invest further in this. h2 extensions, inherently hop-to-hop, could play this role if need be.
