
Synchronize Accept-Encoding request header with Content-Encoding response header for range-requests

Status: RESOLVED WONTFIX
Product: Core
Component: Networking: HTTP
Reported: 7 years ago
Modified: 2 years ago
Reporter: bjarne
Assignee: Unassigned
Version: Trunk
Points: ---
Firefox Tracking Flags: (Not tracked)
Attachments: 3 attachments, 1 obsolete attachment

Description (Reporter, 7 years ago)
This comes from bug #612135, comment #16, second paragraph.

When we want to complete cached content-encoded data with a range-request, it seems like a good idea to request the same encoding in the range-request as we have for the cached data. I.e. make sure the Accept-Encoding request header in the ranged request matches the Content-Encoding header of the cached response.
Comment 1 (Reporter, 7 years ago)
See bug #612135, comment #77 

Scope of this bug has been extended to enable range-requests also for content-encoded cache-entries. When doing so, we must set up the request to insist on the encoding of the cached entry. See RFC 2616 section 14.3 for how to do this. We must consequently also handle 406 response to range-requests properly.
Updated (Reporter, 7 years ago)

Assignee: nobody → bjarne
Comment 2 (Reporter, 7 years ago)
Created attachment 512065 [details] [diff] [review]
Proposed solution

This is a proposal for how to re-introduce range-requests for content-encoded partial cache-entries. Handling 406 responses is added, but we might want to make sure we don't do this in a loop.

Passes tryserver.
Attachment #512065 - Flags: review?(honzab.moz)

Comment 3

Short note: according to bug 637657, Apache servers also return un-encoded content for range requests.
Comment 4 (Reporter, 7 years ago)
Comment on attachment 512065 [details] [diff] [review]
Proposed solution

Requesting additional review by bz to get this ball rolling...
Attachment #512065 - Flags: review?(bzbarsky)

Comment 5

Bjarne, in a nutshell, your patch does the following:

- if the cache entry we only have partial content for doesn't have a Content-Encoding, we fill the Accept-Encoding header with 'identity'
- otherwise we force the server to respond ONLY with the coding the cache entry holds, and we should then never receive a 206 with a Content-Encoding different from the cached one
- it also makes the server respond with 406 when it cannot satisfy our request, which we handle by replacing the channel after we doom the cache entry


I applied the patch and tested on my local setup of IIS 7.5.  The second page load (after a partial load) fails with our favorite "Content Encoding Error" page.  I tried to change *;q=0.0 to identity;q=0.0 - no change, the server responds with identity encoding (implicit, i.e. no Content-Encoding header).

I don't think we may rely on servers to handle Accept-Encoding correctly, even though it is correct to deploy the header.

According to the patch:
- when creating the Accept-Encoding header, don't use new/PR_snprintf/delete ; rather use nsPrintfCString and higher value for the buffer
- when handling 406: you should probably also call CloseCacheEntry(PR_FALSE) after you doom it
- I don't think it is a good idea to replace the whole channel ; we don't carry custom headers, we play with URIs assigned to the channel and with RedirectionLimit in HttpBaseChannel::SetupReplacementChannel
- I would rather go the way of restarting (replacing) the transaction here, as in case of auth
- we have to do that also in case of content-encoding mismatch on 206 when the server doesn't handle the Accept-Encoding requirement

Comment 6

Created attachment 517931 [details] [diff] [review]
restart by transaction

I have spent some time trying to restart the transaction on a content-encoding mismatch.  The patch has flaws: it sometimes hangs on reload.  I am probably missing something, but I had only a little time to play with it.

Bjarne, it would be great if you could finish it and integrate it into your patch.  But only if Boris also agrees with restarting just the transaction rather than the whole channel.
Comment 7 (Reporter, 7 years ago)
(In reply to comment #5)
> I don't think we may rely on servers to handle Accept-Encoding correctly, even
> though it is correct to deploy the header.

The question is: are we doing worse than the current code? I.e., do we get more content-encoding mismatches with this patch applied, or would we get these anyway?

> - when creating the Accept-Encoding header, don't use new/PR_snprintf/delete ;
> rather use nsPrintfCString and higher value for the buffer

Ok.

> - when handling 406: you should probably also call CloseCacheEntry(PR_FALSE)
> after you doom it

Possibly, but this will be done in OnStopRequest anyway. Since the entry is doomed it will not be reused by a new channel.

> - I don't think it is a good idea to replace the whole channel ; we don't carry
> custom headers, we play with URIs assigned to the channel and with
> RedirectionLimit in HttpBaseChannel::SetupReplacementChannel
> - I would rather go the way of restarting (replacing) the transaction here, as
> in case of auth

Hmm...  I see your point and it is tempting to follow. Boris - any insight?
Comment 8 (Reporter, 7 years ago)
Comment on attachment 517931 [details] [diff] [review]
restart by transaction

Just a question: I guess there are good reasons why you don't use nsHttpTransaction::Restart() in this code?

Comment 9

(In reply to comment #8)
> Comment on attachment 517931 [details] [diff] [review]
> restart by transaction
> 
> Just a question: I guess there are good reasons why you don't use
> nsHttpTransaction::Restart() in this code?

Transaction restart is used only on a network failure, when we didn't receive or send any data (or sent data but the connection had already been used before).  That method also increases mRestartCount and disallows pipelining.
Comment 10 (Reporter, 7 years ago)
Created attachment 519152 [details] [diff] [review]
Restarting by transaction instead of by channel

This is a merge (pretty much) of the initial proposal and Honza's suggestion from comment #6. Passes local testing; sent to tryserver.

Requesting review by Honza since some of his logic is slightly modified and sr by Christian to verify the approach.
Attachment #519152 - Flags: superreview?(cbiesinger)
Attachment #519152 - Flags: review?(honzab.moz)
Comment 11 (Reporter, 7 years ago)
Created attachment 519389 [details] [diff] [review]
Restarting by transaction instead of by channel (removed junk in patch)

The previous patch had some binary junk at the end; otherwise this is unchanged. The tryserver run seems fine.
Attachment #519152 - Attachment is obsolete: true
Attachment #519389 - Flags: superreview?(cbiesinger)
Attachment #519389 - Flags: review?(honzab.moz)
Attachment #519152 - Flags: superreview?(cbiesinger)
Attachment #519152 - Flags: review?(honzab.moz)

Comment 12

Comment on attachment 519389 [details] [diff] [review]
Restarting by transaction instead of by channel (removed junk in patch)

>diff --git a/netwerk/protocol/http/nsHttpChannel.cpp b/netwerk/protocol/http/nsHttpChannel.cpp
>+    case 406:
>+        if (mCachedContentIsPartial) {
>+            // We sent a ranged request but the server cannot satisfy it. See
>+            // RFC2616 section 14.3 and bug #613159. Re-request the resource
>+            if (NS_FAILED(ResumeFromPartialContentFatalError()))
>+                rv = ProcessNormal();

You need to cancel the previous transaction before you call ResumeFromPartialContentFatalError() here.  However, when that method fails you can no longer use ProcessNormal - there is probably no functioning transaction in that case to load the 406 page content (if there is one).  ResumeFromPartialContentFatalError() probably needs modification to handle the transaction replacement/revert.  This is different from DoAuthRetry(), which loads the 401/7 content and displays or discards it depending on the user's auth decision (to auth or not to auth).  Here we are canceling the transaction before we load the content data, so we have to take extra care to replace the transaction correctly.

>@@ -1682,16 +1691,33 @@ nsHttpChannel::SetupByteRangeRequest(PRU
>+    // Align requested encoding with what the partial entry in cache has.
>+    // See bug #613159 and RFC2616 section 14.3
>+    const char *enc =
>+        mCachedResponseHead->PeekHeader(nsHttp::Content_Encoding);

Maybe rather |contentEncoding| instead of |enc|?

>+    nsCString buf2 =
>+        nsPrintfCString(20 + PL_strlen(enc), "*;q=0.0, %s;q=1.0", enc);
>+    LOG(("nsHttpChannel::SetupByteRangeRequest[this=%p entry=%p]: "
>+         "Accept-Encoding=%s", this, mCacheEntry.get(), buf2.get()));   

Hm... rather forget about nsPrintfCString as well.  We should probably do:

nsCAutoString acceptEncodingHeader;
acceptEncodingHeader.AssignLiteral("*;q=0.0, ");
acceptEncodingHeader.Append(contentEncoding);
acceptEncodingHeader.AppendLiteral(";q=1.0");

Using a hard-coded length (20) is dangerous: when someone changes the format string and doesn't correctly change the buffer length, what happens is unpredictable.

BTW: you can do nsPrintfCString buf2(buflen, "format string", args...); and then use buf2 as it would be a normal ACString class instance.

>+    const char *cached =
>+        mCachedResponseHead->PeekHeader(nsHttp::Content_Encoding);
>+    const char *now =
>+        mResponseHead->PeekHeader(nsHttp::Content_Encoding);

Maybe rather |cachedContentEncoding| and |receivedContentEncoding| ?

>+    if (PL_strcasecmp(cached, now) != 0 &&
>+        (cached != nsnull || PL_strcasecmp("identity", now) != 0))

This condition means: if...

   cached != received && cached != null

   OR

   cached != received && "identity" != received

...then cancel the current transaction because we detected a content-encoding mismatch.  I can see the comment you have added in this condition.  I am just not sure why it is necessary to add that condition.  What if there is a server that doesn't send any Content-Encoding header with a 206?  Doesn't that mean the content is in identity encoding, which is the same as we have in the cache?  Actually - shouldn't we continue loading partial content when cachedEncoding == null && receivedEncoding == null?

>+nsHttpChannel::ResumeFromPartialContentFatalError()
>+{
>+    if (NS_FAILED(rv)) 
>+        return rv;
>+    rv = mAuthProvider->Init(this);

Missing rv check.  Again, double check what actually happens when this method fails.

>+
>+    // Invalidate the cached data we have...
>+    mCacheEntry->SetExpirationTime(0);

Please double check we should not rather use Doom() here.

>+
>+    // Clean up mRequestHead (ensure this code reverses the effect of
>+    // calling SetupByteRangeRequest() )
>+    mCachedContentIsPartial = PR_FALSE;
>+    mRequestHead.ClearHeader(nsHttp::Range);
>+    mRequestHead.ClearHeader(nsHttp::If_Range);
>+    mRequestHead.SetHeader(nsHttp::Accept_Encoding,
>+                           gHttpHandler->mAcceptEncodings);

See my comments below on mAcceptEncodings.

>diff --git a/netwerk/protocol/http/nsHttpHandler.h b/netwerk/protocol/http/nsHttpHandler.h
>+    nsCString mAcceptEncodings;
> private:
>-    nsCString mAcceptEncodings;

I'd rather see this made a different way:
- create a new public method of nsHttpHandler called AddStandardContentEncodingHeader(..)
- move these lines to it: http://hg.mozilla.org/mozilla-central/annotate/02e5836b10c2/netwerk/protocol/http/nsHttpHandler.cpp#l382
- call that method from AddStandardRequestHeaders (at that place ^)
- and also from ResumeFrom... method

>diff --git a/netwerk/test/unit/test_range_requests.js b/netwerk/test/unit/test_range_requests.js
>-//   4) the cached entry does not have a Content-Encoding (see bug #613159)
>+//   4) N/A anymore

Is this test no longer valid, or is what it tests covered by the new test?

>diff --git a/netwerk/test/unit/test_range_requests_with_encoding.js b/netwerk/test/unit/test_range_requests_with_encoding.js
>+// Simple mechanism to keep track of tests and stop the server 
>+var numTestsFinished = 0;
>+function testFinished() {
>+  if (++numTestsFinished == 1)

There are actually 6 tests.

>+function run_test() {
>+  /*
>+  var chan = make_channel("http://localhost:4444/gzip");
>+  chan.asyncOpen(new MyListener(received_partial_gzip), null);
>+
>+  var chan = make_channel("http://localhost:4444/identity");
>+  chan.asyncOpen(new MyListener(received_partial_identity), null);
>+
>+  var chan = make_channel("http://localhost:4444/no_encoding");
>+  chan.asyncOpen(new MyListener(received_partial_no_encoding), null);
>+
>+  var chan = make_channel("http://localhost:4444/habahaba");
>+  chan.asyncOpen(new MyListener(received_partial_habahaba), null);
>+
>+  var chan = make_channel("http://localhost:4444/gzip_406");
>+  chan.asyncOpen(new MyListener(received_partial_gzip_406), null);
>+*/

Why are these commented out?

When I uncomment all of them I'm getting:

###!!! ASSERTION: Unexpected request: 'request == mCachePump || request == mTransactionPump', file D:/mozilla/mozilla-central/_obj-browser-debug/netwerk/protocol/http/../../../../netwerk/protocol/http/nsHttpChannel.cpp, line 3923


I changed my test environment to use a local bandwidth proxy, instead of a complicated rinetd setup on a different machine, to make the IIS bandwidth limiting work, and I do not experience any hangs.  So the transaction replacement seems to work correctly.

r- for the test failures.
Attachment #519389 - Flags: review?(honzab.moz)
Comment 13 (Reporter, 7 years ago)
Still waiting for comments/input to the approach of restarting by transaction or not - Christian...?

After some discussions with bsmith during the work-week it seems clear that we must also take the vary-header into account here: If "Accept-Encoding" is listed in the "Vary" response-header we cannot send a different "Accept-Encoding" request-header since that could theoretically give us a different representation of the resource. Bsmith tells me that servers often use "Vary: Accept-Encoding" so we need to take this into account in order to follow the spec.

It is not entirely clear to me, however, why setting "Vary: Accept-Encoding" is very useful. I see the usefulness of e.g. "Vary: Accept-Language", because different translations of a resource are clearly also different representations, but "Accept-Encoding" only determines which form the resource takes when transferred over the wire. Hints on the net suggest using it to prevent caching proxies from serving encoded content to UAs which cannot handle it, which seems like a neat hack, although I doubt there are many UAs out there today which don't handle encoded content.

Comment 14

> "Accept-Encoding" only determines which form the resource
> takes when transferred over the wire. 

That is exactly when you need a Vary. When you have a compressed and an uncompressed representation of the same resource, you need to vary over all the request header fields you inspect to choose which representation to send, and those representations cannot share a (strong) ETag. The only time a server doesn't "need" to send Vary: Accept-Encoding is when it will never choose the Content-Encoding based on the Accept-Encoding. But many servers get this wrong, so we should assume that Vary always includes Accept-Encoding.

Comment 15

(In reply to comment #14)
> That is exactly when you need a Vary. When you have a compressed and an
> uncompressed representation for the same resource, you need to vary over all
> the request header fields you inspect to choose which representation to send,
> and those representations cannot share an (strong) ETag. The only time a 
> server doesn't "need" to send Vary: Accept-Encoding is when it will never 
> choose the Content-Encoding based on the Accept-Encoding. But, many servers
> get this wrong so we should assume that Vary always includes Accept-Encoding.

The main consequence of this is that we must use the same Accept-Encoding for resuming the download that we used when we downloaded the initial part. As long as we send the same request headers, and as long as the server is operating correctly and deterministically, then we will get back the same representation (compressed or uncompressed) for both parts unless it has changed; in that case, the spec requires the server to change the validator (ETag and/or Last-Modified), and if the server is operating correctly, the server will return the entire resource according to the specified semantics of If-Range.

Comment 16

> and as long as the server is operating correctly

Just to be clear, this bug exists because they don't.

Comment 17

(In reply to comment #16)
> > and as long as the server is operating correctly
> 
> Just to be clear, this bug exists because they don't.

Right, but when the server is operating out of spec (matching a strong validator for two different representations of a resource), it is safer and better to be conservative, assume the server is broken, and re-request the whole resource unconditionally (no range request, no If-*).

I am also skeptical about servers' ability to correctly interpret range requests after doing content-coding rather than before; any server that gets this wrong will silently corrupt the resource. Unless/until we have evidence that resumption of content-coded downloads is a performance problem for users, I think we should just avoid doing range requests for non-identity-coded resources.

Comment 18

Hmm.  How often are videos served content-encoded?  Or do we already not send Accept-Encoding there?  Apart from redownloads, the media cache is the biggest user of range requests we have.

Comment 19

(In reply to comment #18)
> Hmm.  How often are videos served content-encoded?  Or do we already not send
> Accept-Encoding there?  Apart from redownloads, the media cache is the biggest
> user of range requests we have.

Another question is whether we would have better performance with or without "Accept-Encoding: deflate, gzip" for videos and audio.

Comment 20

Which is why we might have switched it off, yes.
Comment 21 (Reporter, 7 years ago)
Seems to me like we disagree on a few basic questions:

1) Should we bother at all to do range-requests for content-encoded resources? (Bug #612135 disabled this after much discussion and this bug was set up to re-introduce it.)

2) Assuming "yes" to (1): How do we set up Accept-Encoding?

I guess part of my problem is that I don't see any reason why applying content-encoding to a resource would create a different variant of the resource. I agree that the Vary header has to be taken into account, but content-encoding in itself should not make a difference (which is what I tried to express in comment #13). If there are parts of RFC2616 which prove or indicate otherwise, please point them out. In fact, a selected part of section 14.11, 3rd paragraph, reads

"[..snip..] a non-transparent proxy MAY modify the content-coding if the new coding is known to be acceptable to the recipient [..snip..]"

How would this work in our context?

Comment 22

The exact case we are considering is used as an example in HTTPbis. See http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-14#section-2.2.5

"Content codings are a property of the representation, so therefore an entity-tag of an encoded representation must be distinct from an unencoded representation to prevent conflicts during cache updates and range requests.  In contrast, transfer codings (Section 6.2 of [Part1]) apply only during message transfer and do not require distinct entity-tags."

> "[..snip..] a non-transparent proxy MAY modify the
> content-coding if the new coding is known to be
> acceptable to the recipient [..snip..]"

Such non-transparent proxies need to be intelligent about range requests (e.g. stripping If-Range from requests) to avoid creating problems like this.

Comment 23

> 1) Should we bother at all to do range-requests for
>    content-encoded resources?
> (Bug #612135 disabled this after much discussion and
> this bug was set up to re-introduce it.)

I understand the motivation for changing Accept-Encoding to try to ensure that the server will use the same content-encoding that was used for the original response. But do servers actually do that? It appears to me that the problem we ran into is that some servers always use identity content-encoding for 206 responses, regardless of Accept-Encoding. For these servers, nothing we do to the Accept-Encoding header will have any positive effect, AFAICT.
Comment 24 (Reporter, 7 years ago)
(In reply to comment #22)
> The exact case we are considering is used as an example in HTTPbis. See
> http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-14#section-2.2.5

Fair enough - I guess this is the answer we want. And I suppose this also means that content-encoding is implicit in an eTag, right?

But then comment #3 worries me a little, as does bug #612135, comment #68. Do Apache/IIS see this differently? Or could it be *us* who miss the point that content-encoding is implicit in an ETag, and hence we believe there is a mismatch (when there really isn't one)?

(The followup-question will be: What if we use last-modified as the strong validator? What do we know about content-encoding?  ;)  )

Comment 25

To compare two variants, take all the entity-header-fields (normalized for whitespace and whatnot) and the entity-body of each, and compare them byte-for-byte. If they are equal, then they are allowed to have the same ETag; otherwise they aren't. There is nothing special about Content-Encoding (it is just an entity header) as far as the spec is concerned.

Apache and other servers have bugs in their handling of ETags vs Content-Encoding, because their automatic Content-Encoding processing (e.g. mod_deflate) doesn't consider all the possibilities. My hypothesis is that changing our Accept-Encoding header for range requests will not help, based on comment 3 and bug 612135 comment 68--for the simple reason that many servers assume (and don't check) that identity encoding is always acceptable.

What *may* be helpful would be to use one of the following approaches:

(a) If we are resuming a download, and the downloaded part was using non-identity content-encoding, then restart the download without an Accept-Encoding and without an If-Range. If we are resuming a download, and the downloaded part was using identity content-encoding, then download the rest without an Accept-Encoding but with an If-Range header.

(b) If we are resuming a download, just resume it with the same Accept-Encoding we normally use. If the result is a partial response with the same entity-headers and the length of the result matches what we expect, then combine the responses. If the result is a partial response with a different Content-Encoding or the combined result's length wouldn't be what we'd expect it to be (based on the Content-Length given in the original response) or other entity headers are different or the length is different than the previously-downloaded part, then restart the download from the beginning.

Comment 26

Just tossing out an idea (c) to build on Brian's:

adjust the range so that it overlaps 512 bytes or so with data in the cache so we can memcmp them as an integrity check of sorts.

Comment 27

Patrick's idea seems great to me, regardless of Content-Encoding.

Another question is priority: is this something we need to do now, or can we do it later? Improving download-resume performance wasn't even mentioned in our planning meetings.

Comment 28

Another thing we can do for gzip content-encoding is to check the checksum after combining gzip-encoded parts. (This would be an argument for having Accept-Encoding list gzip before deflate.)

Comment 29

The cached RESPONSE Vary header only says to re-validate a cached response with the server when one of the listed REQUEST headers in the new request is different from those we sent to the server during the last request (which is now cached); see [1].  So the patch must consider it - good catch, Brian.

But, the HTTP spec says nothing about how UA or cache should behave when the RESPONSE Content-encoding header is different from the cached one in a 206 response.

HTTPbis does not do any better; it just proposes that servers consider a single resource represented in different content-encodings to be distinct, with different ETags.  This is what we do not observe with IIS and Apache these days.

So, we have to handle these "broken" 206 responses on the client side.


(In reply to comment #25)
> (a) 
This seems to me like what we currently do (without patch for this bug).
> (b) 
That seems to me like what the patch for this bug is trying to achieve.
(In reply to comment #26)
> (c)
This goes beyond that and is kind of a check whether the server works correctly.  It is also a bit of a weak check (not 100% reliable).  We have lived without it for as long as the range-request code has existed, so I don't think we need it now.
(In reply to comment #25)
Looks more promising, though.


I'd do some metrics research on how often we throw away partially cached data, and how much, based on the current algorithm (no range requests for content-encoded data).  If this happens often for larger data, let's fix it.  Until we know, I wouldn't push on fixing this bug.


[1] http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6, paragraph 7

Comment 30

(In reply to comment #20)
> Which is why we might have switched it off, yes.

See bug 614760: 

  aChannel->SetRequestHeader(NS_LITERAL_CSTRING("Accept-Encoding"),
                             NS_LITERAL_CSTRING(""), PR_FALSE);

Comment 31

Comment on attachment 519389 [details] [diff] [review]
Restarting by transaction instead of by channel (removed junk in patch)

Review of attachment 519389 [details] [diff] [review]:
-----------------------------------------------------------------

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1706,5 @@
> +        enc = "identity";
> +
> +    // make sure we insist on the same encoding we have in the cache
> +    // overwriting values from nsHttpHandler::AddStandardRequestHeaders()
> +    nsCString buf2 =

I guess I'd call this str, but I'm fine with either way.

@@ +1707,5 @@
> +
> +    // make sure we insist on the same encoding we have in the cache
> +    // overwriting values from nsHttpHandler::AddStandardRequestHeaders()
> +    nsCString buf2 =
> +        nsPrintfCString(20 + PL_strlen(enc), "*;q=0.0, %s;q=1.0", enc);

PL_strlen -> strlen

I have a slight preference for not using nsPrintfCString because it sucks having to specify the string length, but if you want to keep it that's ok.

Also, I'd prefer the more desirable value to come first. So you can just do:

  nsCString str = enc + NS_LITERAL_CSTRING(";q=1.0, *;q=0");

@@ +1738,5 @@
> +    const char *cached =
> +        mCachedResponseHead->PeekHeader(nsHttp::Content_Encoding);
> +    const char *now =
> +        mResponseHead->PeekHeader(nsHttp::Content_Encoding);
> +    if (PL_strcasecmp(cached, now) != 0 &&

(note to myself: PL_strcasecmp returns 0 when both arguments are null)

@@ +1835,5 @@
> +        return rv;
> +    rv = mAuthProvider->Init(this);
> +
> +    // Invalidate the cached data we have...
> +    mCacheEntry->SetExpirationTime(0);

Do you want ->Doom()?
Attachment #519389 - Flags: superreview?(cbiesinger) → superreview+

Comment 32

I didn't read all the comments in this bug... let me know if there's something in there you want my input on.

Also my apologies that the review took this long.
Comment 33 (Reporter, 7 years ago)
(In reply to comment #32)
> let me know if there's
> something in there you want my input on.

I was actually most interested in your opinion on whether to restart this by transaction or by replacing the channel (see comment #6). But your detailed review is obviously also deeply appreciated! :)
Comment 34 (Reporter, 7 years ago)
(In reply to comment #29)
Honza, thanks for summing this up and phrasing it in your own words. It's often fruitful to see things from a different angle, or phrased differently.

> The cached RESPONSE Vary header only says to re-validate a cached response
> with the server when one of the listed REQUEST headers in the new request is
> different from those we have sent to the server during the last request
> (that is now cached), see [1].  So, the patch must consider it, good catch
> Brian.

Yes. (As stated in comment #13, second paragraph.)

Now, during the discussion above (in particular comment #25, first paragraph) we've established that an eTag must be different for different content-encodings. In other words, when doing a conditional request with an eTag, the expected content-encoding is implicitly defined. This is also true for range-requests. However, as Honza observes...

(In reply to comment #29)
> This is what we do not observe with IIS and Apache
> these days.  

...servers may not see it this way. Furthermore, bug #648429, comment #25 suggests that changing Accept-Encoding has no impact on the response from the server (that study is not in the context of a range request, but it is still interesting). Hence, doing anything with Accept-Encoding may not have a short-term effect, but IMO this is no argument for not implementing our part of the spec as well as possible.

Note also that if we submit a range-request based on a Last-Modified header (i.e. if there is no eTag-header) the expected encoding is *not* implicit.

So, summing up (in my words):

1) We need a mechanism to recover from 406 and from mismatching content-encodings on 206. This does not depend on any headers or encodings. The current approach is to restart the transaction. Could be split into a separate bug.

2) If we have an eTag we can (theoretically) rely on it to specify the expected content-encoding and (again theoretically) drop the Accept-Encoding REQUEST header (unless it's part of the Vary header, in which case we must leave it unchanged). OTOH, aligning the Accept-Encoding header with the expected content-encoding should not *hurt* (again, subject to any Vary header) and does not break anything from the protocol pov, afaics.

3) If we don't have an eTag, it is IMO reasonable to use the Accept-Encoding header to try to insist on a certain content-encoding (again, subject to the Vary header). If the server screws this up, we rely on (1) to recover.

Does this make sense?

> (In reply to comment #25)
> > (a) 
> This seems to me like what we currently do (without patch for this bug).

Yes.

> (In reply to comment #25)
> > (b) 
> That seems to me like what the patch for this bug is trying to achieve.

Yes.

> (In reply to comment #25)
> > (c)
> > This goes beyond that and is kind of a check whether the server works
> > correctly.  It is also a bit of a weak check (not 100% reliable).  We have
> > lived without it for as long as the range-request code has existed, so I
> > don't think we need it now.

Separate bug?
Out of curiosity: Are there any reasons not to leave data-verification to the decompressors? (Leaking data between layers?)

Comment 35

(In reply to comment #34)
> Note also that if we submit a range-request based on a Last-Modified header
> (i.e. if there is no eTag-header) the expected encoding is *not* implicit.

> 1) We need a mechanism to recover from 406 and mismatching content-encodings
> on 206. This is not depending on any headers or encodings. Current approach
> is to restart by the transaction.

+1. We should do what Patrick suggested (compare an overlapping range).

> 2) If we have an eTag we can (theoretically) rely on it to specify the
> expected content-encoding and (again theoretically) drop the Accept-Encoding
> REQUEST header (unless it's part of the Vary-header, in which case we must
> leave it unchanged).

In practice, very many servers vary on Accept-Encoding without including it in the Vary header, because many server app developers don't understand Vary. We should act as though every response has Vary: Accept-Encoding.

> OTOH, aligning the Accept-Encoding header with the
> expected content-encoding should not *hurt* (again, subject to any
> Vary-header) and does not break anything from the protocol pow, afaics.

We have seen that it also does not help. It makes the code more complicated, and exposes us to server parsing errors for "identity;q=0". 

> 3) If we don't have an eTag, it is IMO reasonable to use the Accept-Encoding
> header to try to insist on a certain content-encoding (again, subject to the
> Vary-header). If the server screws this up we rely on (1) to recover.

As you noted, we have evidence that this doesn't work. Either the server would get everything right even when we don't change the Accept-Encoding header, or we would have to rely on (1) to recover.
 
> > (In reply to comment #25)
> > > (c)
> > This goes beyond and is kind of a check if the server works correctly.  Also
> > is a bit weak check (not 100% reliable).  We lived without it all the time
> > the range request code exists, so I don't think we need it now.
> 
> Separate bug?
> Out of curiosity: Are there any reasons not to leave data-verification to
> the decompressors? (Leaking data between layers?)

That check can/should be done even for uncompressed content.
(Reporter)

Comment 36

7 years ago
Brian, I really need to understand your standpoint here: Are you advocating that we should implement wrong or insufficient protocol behaviour, against our better judgment, in order to make our code simpler and "help" faulty servers hide the fact that they mess things up?

The priority of this bug can clearly be discussed, but that's a different topic.

Patrick's suggestion (c) is nice and I like it, but as Honza notes: we have lived without it so far. IMO it can be applied as an additional verification for range responses, but I'd suggest waiting on it until we believe we need it.
(In reply to comment #36)
> Brian, I really need to understand your standpoint here: Are you advocating
> that we should implement wrong or insufficient protocol-behaviour, against
> better knowledge, in order to make our code simpler and "help" faulty
> servers hide the fact that they mess things up?

My standpoint is that for now we don't need to do anything. Eventually, we should try to implement Patrick's solution (c) and start doing range requests for partially cached content-encoded responses. This is not wrong or insufficient, AFAICT. Our current behavior is correct, but a little bit inefficient in some cases where we need to restart a download. The eventual solution that Patrick suggested would work for any server that correctly implements the spec and it would also allow us to detect when we need to retry (without an If-Range) to deal with servers with buggy validation.
(Reporter)

Comment 38

7 years ago
So your arguments are actually about priority? Please just say so. Saves time and energy.

About solution (c): It has the potential to improve how we *detect* that we received bad content. This may be useful when we re-enable ranged requests, although we already do detection which somewhat works (as Honza notes). And if servers did the right thing this trick would not be necessary at all, right?

The idea of this bug is to re-enable ranged requests, trying to use the HTTP protocol to force the server to either send us the correct encoding or a 406. As a consequence we must handle the 406. Even though servers at the moment deal badly with this, I see no reason not to implement a scheme which follows the protocol to achieve something we want. Servers may change, and Firefox is in a position to push a change in the right direction IMO. Priority is another matter - agreed.
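The recovery flow this implies (and the loop concern from comment 2) might be summarized as follows - a hypothetical Python sketch of the decision logic only, with invented names, not the actual patch:

```python
def next_step(status, content_encoding, expected_encoding, already_retried):
    """After a range request that insisted on the cached encoding:
    a matching 206 lets us append to the cache entry; a 406, or a
    206 with the wrong encoding, means we drop the partial entry and
    restart with a plain unconditional full request.  The
    already_retried flag keeps us from looping on repeated 406s."""
    if status == 206 and content_encoding == expected_encoding:
        return "append-to-cache"
    if status in (206, 406):
        return "give-up" if already_retried else "restart-full-request"
    if status == 200:
        return "replace-cache-entry"  # server ignored the Range header
    return "handle-normally"
```

The restarted full request would no longer carry the restrictive Accept-Encoding, which is what makes the retry safe.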
(In reply to comment #38)
> And if servers did the right thing this trick would not be necessary
> at all, right?

Right.

> trying to use the http-protocol to force the server to either send
> us the correct encoding or a 406. As a consequence we must handle
> the 406. Even though servers at the moment deals badly with this
> I see no reasons not to implement a scheme which follows the
> protocol to achieve something we want. Servers may change and
> Firefox is in the position to push a change in the right direction IMO.

My point is that it seems unrealistic to expect a server with range support, broken validator (ETag) support, and broken Vary support to correctly handle "Accept-Encoding: gzip, deflate, identity;q=0", considering that "identity;q=0" is a pretty obscure part of the spec and no browser in history has ever sent it. If we find there are a significant number of such servers then I agree that it might make sense to move forward with changing the Accept-Encoding header.
(Reporter)

Comment 40

7 years ago
I agree that the data and feedback indicate there may be no short-term gain from this, hence I agree on low priority. However...

(In reply to comment #39)
> If we find there are a significant number of such
> servers then I agree that it might make sense to move forward with changing
> the Accept-Encoding header.

... by pushing our stuff to use advanced/obscure details we may be able to increase this number of servers. Chicken-and-egg problem: one of them must exist first.

(This obviously assumes that we believe the advanced/obscure details make things better in the long run, which I believe we do in this case.)
Bjarne, do you still want review on the "Proposed solution" patch?
(Reporter)

Comment 42

6 years ago
Comment on attachment 512065 [details] [diff] [review]
Proposed solution

Clearing requests for review due to consensus about low pri.
Attachment #512065 - Flags: review?(honzab.moz)
Attachment #512065 - Flags: review?(bzbarsky)
(Reporter)

Comment 43

6 years ago
(In reply to Honza Bambas (:mayhemer) from comment #29)
> I'd do some metrics research for how often we throw away a partially cached
> data, and how much, based on the current algorithm (no range request for
> content encoded data).  If this happens often for larger data, let's fix it.
> Until we know, I wouldn't push on fixing this bug.

See also bug #701188. Is anyone aware of results in this area, or should I add this to the (fast-growing) list of telemetry probes we would like to see?
We probably won't move forward with this bug until we have the numbers. Bjarne, feel free to add a new probe; I can review it.
(Reporter)

Comment 45

6 years ago
-> default owner
Assignee: bjarne → nobody
Status: NEW → RESOLVED
Last Resolved: 2 years ago
Resolution: --- → WONTFIX