Mozilla should support transfer codings other than "identity" and "chunked". If we want compression to speed up a connection, we should include a "TE: trailers, gzip, compress, deflate" header field (or something similar) in every HTTP request. Of course, we must be able to decode any transfer codings actually applied, as indicated by "Transfer-Encoding: ..." (see bug 59464 for that).

Currently we only support *content* codings (by sending "Accept-Encoding: ..."). Content codings are not meant to speed up a connection; they indicate a property of the resource. Such resources should therefore keep their content coding when they are saved to disk. See bug 35956 (File extension not changed but gzipped files expanded when saving) for the confusion caused by using *content* codings instead of *transfer* codings.

Maybe problems with some HTTP/1.0 servers will remain, but we should do our best to ensure good interoperability with conforming HTTP/1.1 servers.

Some links to RFC 2616:

3.5 Content Codings
http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.5
3.6 Transfer Codings
http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.6
4.3 Message Body
http://www.w3.org/Protocols/rfc2616/rfc2616-sec4.html#sec4.3
14.3 Accept-Encoding
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.3
14.11 Content-Encoding
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.11
14.39 TE
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.39
14.41 Transfer-Encoding
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.41

Note the changes from RFC 2068 regarding transfer codings:
http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html#sec19.6.3
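For illustration only, here is a minimal Python sketch (not Necko code) of what decoding a response with "Transfer-Encoding: gzip, chunked" involves on the wire. Per RFC 2616 section 3.6, the chunked coding is always applied last, so the receiver de-chunks first and then gunzips; chunk extensions and trailers are ignored in this toy version:

```python
import gzip

def encode_chunked(data: bytes, chunk_size: int = 8) -> bytes:
    """Apply the HTTP/1.1 chunked transfer coding to a byte string."""
    out = bytearray()
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        out += b"%x\r\n" % len(chunk) + chunk + b"\r\n"
    out += b"0\r\n\r\n"  # final zero-length chunk, no trailers
    return bytes(out)

def decode_chunked(data: bytes) -> bytes:
    """Remove the chunked transfer coding (extensions/trailers ignored)."""
    out = bytearray()
    pos = 0
    while True:
        nl = data.index(b"\r\n", pos)
        size = int(data[pos:nl].split(b";")[0], 16)  # strip chunk extensions
        if size == 0:
            break
        out += data[nl + 2:nl + 2 + size]
        pos = nl + 2 + size + 2  # skip chunk data plus its trailing CRLF
    return bytes(out)

body = b"Hello, Transfer-Encoding!"
# "Transfer-Encoding: gzip, chunked": gzip applied first, chunked last
wire = encode_chunked(gzip.compress(body))
# the receiver undoes the codings in reverse order
assert gzip.decompress(decode_chunked(wire)) == body
```

The point is that transfer codings are a property of the message on this hop, not of the resource, so this unwrapping always happens before the data is used or saved.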
Agreed. This should be fixed.
-> mozilla 1.0
With Build ID 2001081303, Mozilla says it accepts encoding gzip, but when the server sends a gzipped response Mozilla doesn't show anything. Mozilla should get rid of "Accept-Encoding: gzip" if it doesn't support it! You can take it out now and set it again once gzipped content is working.
Accept-Encoding has nothing to do with transfer encodings. If you have a problem with gzip content encodings, please file a new bug, giving a URL where the problem can be observed.
not sure if this can be squeezed in for mozilla 1.0, but it sure would be nice. planning optimistically for now.
punt -> 0.9.9
-> moz 1.0
-> no time for this during the 1.0 cycle, pushing out to mozilla 1.1
mass futuring of untargeted bugs
So as a tiny nudge... what is the answer to the following question: should the data in the cache be encoded or decoded when TE is used? The RFC does not really address the issue as far as I can tell...
*** Bug 167905 has been marked as a duplicate of this bug. ***
Changing severity enh -> normal, because this is part of the spec (although not as a MUST) and apparently indirectly creates significant problems with Content-Encoding.
ben: uhm.. please explain the significant problems before changing the severity of this bug. as you said, this is not a mandatory part of HTTP/1.1, so this is an enhancement / feature request.

bz: it probably makes sense to keep the content compressed in the cache as we do w/ Content-Encoding: gzip, since that saves on space.
> please explain the significant problems

See bug 55690 comment 253 - 259. Basically, because the Transfer-Encoding header is supported nowhere, servers send Content-Encoding instead. But there are semantic differences: Transfer-Encoding should (usually) be uncompressed by the browser, and Content-Encoding usually should not. Because this distinction is missing, we don't know what to do. Supporting Transfer-Encoding won't solve the problem immediately (servers need to send it), but it would be a first step, probably *the* necessary first step to get Transfer-Encoding used.
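To make the semantic difference concrete, here is a hypothetical helper (Python sketch, not actual Mozilla code) deciding whether a gzip wrapper should survive a save-to-disk. A gzip *content* coding is a property of the resource and stays with the file; a gzip *transfer* coding is hop-by-hop and is always removed first:

```python
def keep_compressed_on_save(headers: dict) -> bool:
    """Should a saved file keep its gzip wrapper?

    Hypothetical decision helper: Content-Encoding describes the
    entity itself, so gzip there survives a save; Transfer-Encoding
    is removed on receipt, so gzip there never reaches the disk.
    """
    te = headers.get("Transfer-Encoding", "").lower()
    ce = headers.get("Content-Encoding", "").lower()
    # transfer codings are always decoded before the data is used;
    # content codings stay with the entity when it is saved
    return "gzip" in ce and "gzip" not in te

# a .tar.gz served with Content-Encoding stays gzipped on disk
assert keep_compressed_on_save({"Content-Encoding": "gzip"}) is True
# compression done purely for the connection leaves no trace
assert keep_compressed_on_save({"Transfer-Encoding": "gzip, chunked"}) is False
```

This is exactly the distinction bug 35956 trips over when everything is funneled through Content-Encoding.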
*** Bug 198926 has been marked as a duplicate of this bug. ***
"Should the data in the cache be endoded or decoded when TE is used?" It should *appear* to have been decompressed (this is more important in the case of proxy caches than local caches, because TE is point-to-point not client-to-server). There's no reason why it shouldn't be stored compressed, but access to the cache should appear to be access to uncompressed data. File saves should definitely be uncompressed.
It's also not true that Transfer-Encoding is supported nowhere, Opera has support (though they had issues with it at one point IIRC).
I'm a programmer, but not a web programmer, and I just learned about this whole transfer-encoding issue from a Wikimedia feature request: https://bugzilla.wikimedia.org/show_bug.cgi?id=4947

There is a need for Transfer-Encoding functionality that the current content-encoding hack can't fill, but browsers have to act first. Has this really been sitting here for over 8 years? O.o

If I understand the situation correctly, support for legitimate transfer-encoding could be added without breaking sites that still use the content-encoding hack. Couldn't you continue to treat compression content-encodings as transfer-encodings just as they are now, unless a Transfer-Encoding header is present, in which case honor it and interpret content-encoding strictly according to the standard? Given backward-compatible support and time, the less-functional hack could be phased out without site hosts having to make any special effort.

If I had the necessary knowledge, I'd happily put my money where my mouth is and try to submit a patch. (I know that I don't. :-/) Anyone willing to step up and try?
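The backward-compatible rule proposed above could be sketched like this (hypothetical Python, just to pin down the proposal, with made-up header parsing): if Transfer-Encoding is present, honor it and treat Content-Encoding strictly; otherwise keep the legacy behaviour of decoding compression content codings as if they were transfer codings:

```python
COMPRESSORS = {"gzip", "x-gzip", "deflate", "compress"}

def effective_codings(headers: dict) -> dict:
    """Split response codings into 'decode now' (connection-level)
    and 'keep with entity' (resource-level), per the proposed
    backward-compatible rule. Purely illustrative."""
    te = [t.strip().lower()
          for t in headers.get("Transfer-Encoding", "").split(",") if t.strip()]
    ce = [c.strip().lower()
          for c in headers.get("Content-Encoding", "").split(",") if c.strip()]
    if te:
        # a Transfer-Encoding header is present: interpret both
        # headers strictly according to RFC 2616
        return {"decode_now": te, "keep_with_entity": ce}
    # legacy hack: compression content codings are decoded as if
    # they were transfer codings, as browsers do today
    return {"decode_now": [c for c in ce if c in COMPRESSORS],
            "keep_with_entity": [c for c in ce if c not in COMPRESSORS]}

# today's hack: gzip Content-Encoding alone is undone on receipt
assert effective_codings({"Content-Encoding": "gzip"}) == \
    {"decode_now": ["gzip"], "keep_with_entity": []}
# strict mode: TE present, so Content-Encoding stays with the entity
assert effective_codings(
    {"Transfer-Encoding": "gzip, chunked", "Content-Encoding": "gzip"}) == \
    {"decode_now": ["gzip", "chunked"], "keep_with_entity": ["gzip"]}
```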
(In reply to comment #17)
> It's also not true that Transfer-Encoding is supported nowhere, Opera has
> support (though they had issues with it at one point IIRC).

This may be of interest: "Should we try to handle sites that gzip twice?" <http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-that-gzip-twice>

See this comment: <http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-that-gzip-twice#comment30720892>
We started using content-encoding in cases where transfer-encoding is what we really want in the 1990s as a temporary kludge until the browsers added support, which would be any moment now...
(In reply to Michael A. Puls II from comment #19)
> "Should we try to handle sites that gzip twice?"
> <http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-that-gzip-twice>
>
> See this comment:
> <http://my.opera.com/hallvors/blog/2010/06/08/should-we-try-to-handle-sites-that-gzip-twice#comment30720892>

While I've no doubt that mishandling of both CE and TE (applying both and then failing to add one of the needed headers) is the cause some of the time, I think the case is overstated. I've seen this sort of server-side bug several times when that wasn't the cause, seen it sent to browsers (like Firefox, for example) that don't support TE, and I'll put up my hand and admit that I have indeed written code with that exact bug more than once myself while only using the content-encoding kludge (it's normally a quick catch though, which is a reason in itself not to have the browsers "fix" it for me when I make that mistake). Where I've most often seen it is when a process internal to the server's handling calls another handler (often for an error page or some other "special case") and both handlers apply the compression.

Of course, supporting both CE and TE requires handling a double-compressed stream that is labelled as such (sub-optimal but valid behaviour), but that's a different matter.

I don't think this is relevant unless I see that there are a large number of cases that only affect Opera, and that those cases are indeed trying to use both and doing so incorrectly (as opposed to only doing TE and merely getting that wrong, which is exactly the same as getting the current CE kludge wrong).
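For the record, the "double-compressed but correctly labelled" case mentioned above is easy to demonstrate (Python sketch, illustrative only). With both "Content-Encoding: gzip" and "Transfer-Encoding: gzip" declared, the wire bytes really are gzipped twice, and a conforming receiver simply removes the transfer coding first and the content coding second:

```python
import gzip

body = b"doubly compressed but correctly labelled"

# valid (if sub-optimal) message:
#   Content-Encoding: gzip   (applied first, property of the entity)
#   Transfer-Encoding: gzip  (applied second, property of this hop)
wire = gzip.compress(gzip.compress(body))

# the receiver undoes the transfer coding, then (for display or a
# decoded save) the content coding
assert gzip.decompress(gzip.decompress(wire)) == body
```

That is entirely different from guessing at an *unlabelled* extra gzip layer, which is the behaviour being debated.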
The web has definitely centered around content-encoding rather than t-e, so for h1 we won't invest further in this. h2 extensions, inherently hop-to-hop, could play this role if need be.