Closed Bug 612135 Opened 14 years ago Closed 13 years ago

Content Encoding Error (partial content request) on galaxys.samsungmobile.com

Categories

(Core :: Networking: HTTP, defect)

Platform: x86 macOS
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla2.0b12
Tracking Status
blocking2.0 --- betaN+

People

(Reporter: ehsan.akhgari, Assigned: bjarne)


Details

(Keywords: regression, Whiteboard: [softblocker])

Attachments

(3 files, 14 obsolete files)

23.23 KB, patch
Details | Diff | Splinter Review
10.54 KB, patch
mayhemer: review+
Details | Diff | Splinter Review
10.48 KB, patch
mayhemer: review+
Details | Diff | Splinter Review
When I go to http://galaxys.samsungmobile.com/, I get a Content Encoding Error page:

Content Encoding Error
          The page you are trying to view cannot be shown because it uses an invalid or unsupported form of compression.

  Please contact the website owners to inform them of this problem.

Here's what the network inspection tool in the Web Console shows:



  Request URL:
  http://galaxys.samsungmobile.com/

  Request Method:
  GET

  Status Code:
  HTTP/1.1 206 Partial Content

  
    Request Headers
    12:00:29.818
  
    Accept:text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.7
Accept-Encoding:gzip, deflate
Accept-Language:en-us,en;q=0.5
Cache-Control:max-age=0
Connection:keep-alive
Host:galaxys.samsungmobile.com
If-Range:"1d62-4cb81958"
Keep-Alive:115
Range:bytes=0-
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0b8pre) Gecko/20101113 Firefox/4.0b8pre

    Response Headers
    Δ942ms
  
    Cache-Control:max-age=0, no-cache, no-store
Connection:keep-alive
Content-Length:7522
Content-Range:bytes 0-7521/7522
Content-Type:text/html
Date:Sun, 14 Nov 2010 17:00:30 GMT
Etag:"1d62-4cb81958"
Expires:Sun, 14 Nov 2010 17:00:30 GMT
Last-Modified:Fri, 15 Oct 2010 09:05:28 GMT
Pragma:no-cache
Vary:Accept-Encoding


The same page works fine on 3.6, chrome, opera, and safari.
blocking2.0: --- → ?
press shift+reload
Ehsan, it sounds like you have a cached gzipped response for that url, while the 206 response is not gzipped....  Which can't work, because the byte offsets don't match up.

That said, in this case this looks safe, because the 206 starts with byte 0.  So maybe we should relax our check to allow that?

I really doubt this is a regression; more likely just different cached data in your different browsers.
Bug 523859 contains more content-encoding dupes. It's a browser bug in my opinion; Gecko should throw away the cached data in such a case.
We can't "throw away the cached data" in the general case, because this is a _partial_ content response.  In other words, the server only sent _part_ of the data and expects us to combine what it sent with what we already have cached to put together the full response.  But this is only possible if byte offsets in the two sets of data mean the same thing, which is not the case if one is gzipped and the other is not.

The one exception I can think of is this case, in which the server claims "partial content" but sends starting at byte 0.  That's why I left this bug open.
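To make the offset mismatch concrete (an illustrative comparison using the lengths that appear in this bug: the 206 above carries 7522 identity-encoded bytes, while the cache-entry dump further down shows a gzipped body of 3348 bytes):

  Cached 200:  Content-Encoding: gzip    Content-Length: 3348
  Later 206:   (no Content-Encoding)     Content-Range: bytes 0-7521/7522

Byte N of the gzipped stream and byte N of the uncompressed stream are different data, so the two streams can never be merged at a non-zero offset; running the resulting spliced stream through the gzip decoder is how the Content Encoding Error arises. A 206 starting at byte 0, as here, re-sends everything and is the exception referred to above.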
It would be interesting to see the cached headers. Do you still have them? (Open about:cache, list disk-cache entries, search for the url, view the entry and dump the headers here.)
Fwiw, loading this page with a clean m-c debug build crashes in an assertion.

###!!! ASSERTION: cannot SetMetaDataElement: 'NS_SUCCEEDED(rv)', file /home/bjarne/Work/Mozilla/CleanFireFox/mozilla-central/content/html/document/src/nsHTMLDocument.cpp, line 945

The cache is clean and the server responds with 200 and gzipped data, yet it still crashes.
I don't have an entry for galaxys.samsungmobile.com in my disk cache, but I do have a zero size entry in my memory cache:

Cache entry information
key: 	http://galaxys.samsungmobile.com/
fetch count: 	13
last fetched: 	2010-11-15 14:36:59
last modified: 	2010-11-15 14:36:34
expires: 	1969-12-31 19:00:00
Data size: 	0
file on disk: 	none
Security: 	This document does not have any security info associated with it.
Client: 	HTTP
request-method: 	GET
request-Accept-Encoding: 	gzip, deflate
response-head: 	HTTP/1.1 200 OK
Content-Type: text/html
Last-Modified: Fri, 15 Oct 2010 09:05:28 GMT
Etag: "1d62-4cb81958"
Accept-Ranges: bytes
Vary: Accept-Encoding
Content-Encoding: gzip
Expires: Sun, 14 Nov 2010 16:53:58 GMT
Cache-Control: max-age=0, no-cache, no-store
Pragma: no-cache
Date: Sun, 14 Nov 2010 16:53:58 GMT
Content-Length: 3348
necko:classified: 	1

Looking at the response headers: I'd say that we shouldn't have cached this in the first place.  There are also discrepancies in the expiration date, content length, etc.
> I'd say that we shouldn't have cached this in the first place.

We should have; we just shouldn't necessarily _use_ it (except for "from cache only" loads).

Certainly the fact that that cache entry gets in the way here is a bug...
OK, is there any more information that I could provide here to help diagnose this?  I'm afraid that this cache entry will go away as soon as I restart my browser...
It would be interesting to know how you ended up with a 0-length entry in the first place because that is what tricks necko into believing the entry is partial. But otherwise I think this case is well documented.

For the record, the assertion I mentioned in comment #6 was probably due to a misconfigured Flash player on the box I was using at the time.

Boris, I believe we can modify the logic in http://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpChannel.cpp#2450 . If we for some reason have a zero-length cache entry we can probably drop the range-request, no?
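A minimal sketch of that idea (hypothetical shape only, not the patch that eventually landed):

  // Sketch: bail out of the byte-range path when the cached entry
  // holds no data worth resuming, and let the normal validation
  // path issue a full request instead.
  PRUint32 size = 0;
  nsresult rv = mCacheEntry->GetDataSize(&size);
  NS_ENSURE_SUCCESS(rv, rv);

  if (size == 0) {
      mCachedContentIsValid = PR_FALSE;  // no Range/If-Range headers
      return NS_OK;
  }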
Assignee: nobody → bjarne
That would make sense to me.
(In reply to comment #10)
> It would be interesting to know how you ended up with a 0-length entry in the
> first place because that is what tricks necko into believing the entry is
> partial. But otherwise I think this case is well documented.

Unfortunately I don't really remember.  I got this error the first time that I navigated to that page.  And if it matters, I navigated there from a Google search results page.  I don't believe that I have ever visited this website before.
Fair enough. :)  I still think we're in good shape on this one.
The 8-byte limit is pretty random and I'm open to suggestions. It essentially means we will only re-use a partial cache-entry if it contains more than 8 bytes.

Pushed to tryserver.
Attachment #490719 - Flags: review?(jduell.mcbugs)
I still think we should make range responses starting at 0 work even in the face of a Content-Encoding mismatch, btw.
You mean as part of this bug, or should we spin it off in a different one? I assume you mean range-responses starting at 0 and containing the complete content? (Since I really don't see how we can combine part of data in one encoding with another part in another encoding...)

Another fact dawning on me while writing this: If we do send a range-request, we should probably make sure the requested encoding is the same as the one we have in the cache, e.g. by matching the Accept-Encoding request-header with the cached Content-Encoding response header. This is not done afaics.
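A rough sketch of that matching (illustrative only; as it turns out further down, the cached Content-Encoding header is not always available at this point):

  // Sketch: only resume when the encodings line up.  An unencoded
  // cache entry is always safe to extend; an encoded one is only
  // safe if we still advertise that encoding to the server.
  const char *cachedEnc =
      mCachedResponseHead->PeekHeader(nsHttp::Content_Encoding);
  const char *acceptEnc =
      mRequestHead.PeekHeader(nsHttp::Accept_Encoding);

  PRBool compatible =
      !cachedEnc || (acceptEnc && PL_strcasestr(acceptEnc, cachedEnc));

  if (!compatible)
      return NS_OK;  // skip the range request; do a full fetch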
Attached patch Simplified and updated version (obsolete) — Splinter Review
Also includes code to implement suggestion from comment #16. Pushed to tryserver.
Attachment #490719 - Attachment is obsolete: true
Attachment #490869 - Flags: review?(jduell.mcbugs)
Attachment #490719 - Flags: review?(jduell.mcbugs)
Forgot "hg qref"..  :(
Attachment #490869 - Attachment is obsolete: true
Attachment #490870 - Flags: review?(jduell.mcbugs)
Attachment #490869 - Flags: review?(jduell.mcbugs)
> You mean as part of this bug, or should we spin it off in a different one?

Spinoff is good.

> I assume you mean range-responses starting at 0 and containing the complete
> content?

I mean whatever range responses don't involve merging.  Starting at 0 seems like a necessary condition; if we need to add more conditions too, that's ok.  We should do it.

> we should probably make sure the requested encoding is the same as the one we
> have in the cache

Good idea.  Yet another bug?
FWIW, I ran into this bug just now with the latest nightly on 2010/11/16, using the Apple campus's public wifi (hi Steve!). Several reloads didn't fix it, but then a few minutes later reloading did fix it.

Dave
(In reply to comment #17)
> Created attachment 490869 [details] [diff] [review]
> Simplified and updated version
> 
> Pushed to tryserver.

Hmmm..  some of our unit-tests for partial content use pretty short strings, causing necko to re-request the resource instead of requesting a range as the test expects. These are:

netwerk/test/unit/test_bug482601.js
netwerk/test/unit/test_gzipped_206.js
netwerk/test/unit/test_resumable_truncate.js

I need to change these tests to make this patch fit in.
Is there a good reason for nsHttpResponseHead to discard the Content-Encoding header?
Bjarne, what do you mean?
blocking2.0: ? → betaN+
http://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpResponseHead.cpp#459

I interpret this as not storing the header. Maybe to save space (since it isn't used anyway)?
UpdateHeaders is only called for ProcessNotModified and ProcessPartialContent.  So in that situation Content-Encoding needs to match anyway for us to even reach this code.
Should have been more specific about the issue: I don't find Content-Encoding in SetupByteRangeRequest(). Will look closer...
Ahh... header is cleared (pretty early) in ClearBogusContentEncodingIfNeeded()
We should be doing that for partial responses before checking against the cache.  Are we not?
Not sure what you mean. I'm trying to match up accept-encoding for the range-request with whatever content-encoding our cached content has (comment #16), but the content-encoding header for the cached response is not available - it was stripped off by ClearBogusContentEncodingIfNeeded().

The context is test_gzipped_206.js which in fact sends a bad combination of content-type and content-encoding (the exact combination ClearBogusContentEncodingIfNeeded deals with). It actually makes me doubt its value as a test, but I'll verify such suspicions later.

I'll drop this matching for now and focus on the initial problem, which is that in the presence of a cache-entry whose cached length is less than the Content-Length header, necko decides to re-use the cached content, set up a range-request for the rest and skip all other checks in CheckCache(). This is why Ehsan's no-cache entry is used.
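The problematic shape of that code path, reduced to a sketch (the actual condition, quoted later in this bug, is |nsInt64(size) < contentLength|; the rest is illustrative):

  // Sketch of the pre-patch CheckCache() flow being described: a
  // short entry goes straight to a range request, bypassing the
  // no-cache/no-store/expiration (doValidation) checks below it.
  if (nsInt64(size) < contentLength) {
      rv = SetupByteRangeRequest(size);
      NS_ENSURE_SUCCESS(rv, rv);
      mCachedContentIsValid = PR_TRUE;
      return NS_OK;   // the remaining validation is never reached
  }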
> It actually makes me doubt its value as a test

Uh...  Did you look at the log?  That test is there _precisely_ to make sure that we call ClearBogusContentEncodingIfNeeded() before comparing Content-Encoding values.  ;)

> I'll drop this matching for now

Why?

> necko decides to re-use the cached content, set up a range-request for the
> rest

That seems to be the right thing to do, in general....
And in particular, might be worth checking why we skip the rest of the checks.  There might be good reasons.  The version control system is your friend!
(In reply to comment #30)
> Uh...  Did you look at the log?  That test is there _precisely_ to make sure
> that we call ClearBogusContentEncodingIfNeeded() before comparing
> Content-Encoding values.  ;)

Didn't read log, no. Probably should have done that as it restores some of my confidence... :)

However, the test cancels the channel in OnDataAvailable(), and (on my systems at least) the cached entry never has any content (size=0). Hence this test only (on my systems at least) checks 206 for the special case where we request partial data starting at 0. IMO we should add coverage by splitting the gzipped stream somewhere in the middle, which I do in the upcoming patch.

> > I'll drop this matching for now
> 
> Why?

Because the Content-Encoding header is not available at the time I need it in the cases handled by ClearBogusContentEncodingIfNeeded(). And to keep separate issues separated - I've filed bug #613159 to address it.

> > necko decides to re-use the cached content, set up a range-request for the
> > rest
> 
> That seems to be the right thing to do, in general....

Yes. But shouldn't cached partial content be subject to the same verifications as other content (no-cache, expired etc)? Possibly with the exception of cache-entries created "inside" the same load i.e. if we trigger a range-request automatically upon receiving too little data. I cannot see where we might do that, however, so if you have any hard facts about it I'd appreciate a pointer.

(In reply to comment #31)
> And in particular, might be worth checking why we skip the rest of the checks. 
> There might be good reasons.  The version control system is your friend!

I definitely have ambitions of staying on friendly terms with the logs, but I cannot find anything which seems related in the mercurial-log. I must admit that my mozilla-log-fu could be stronger - is it possible to also see the older cvs-logs? Maybe there is a web-portal for it?
Attached patch Updated logic and failing tests (obsolete) — Splinter Review
This reflects current thinking (ref previous comment). I also modified three tests since they used very short strings to test the ranges. I believe I have managed to keep the original logic and intentions in the tests but corrections are most welcome. Pushed to tryserver...
Attachment #490870 - Attachment is obsolete: true
Attachment #491488 - Flags: review?(bzbarsky)
Attachment #490870 - Flags: review?(jduell.mcbugs)
(In reply to comment #32)
> Possibly with the exception of
> cache-entries created "inside" the same load i.e. if we trigger a range-request
> automatically upon receiving too little data. I cannot see where we might do
> that, however, so if you have any hard facts about it I'd appreciate a pointer.

In fact, I guess it wouldn't hurt to make use of |mInitedCacheEntry| in any case...
> IMO we should add coverage by splitting the gzipped stream somewhere in the
> middle

Sounds good.

> I cannot see where we might do that, however, so if you have any hard facts
> about it I'd appreciate a pointer.

I think the mResuming case is such a situation, albeit manually.

But more importantly, we only make the range request on partial cached content if it satisfies certain criteria which ensure that if the server is HTTP-compliant then the response can be used.  See nsHttpResponseHead::IsResumable (which conditions the use of the range request in the situation in this bug: partial cached content).
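For orientation, a condensed sketch of what those criteria amount to (not the verbatim nsHttpResponseHead source; treat the exact set of headers checked as an assumption):

  // Sketch: resuming only makes sense when the server advertised
  // byte-range support and gave us a strong validator that a later
  // If-Range header can be anchored on.
  PRBool IsResumableSketch(nsHttpResponseHead *aHead)
  {
      PRBool hasStrongValidator =
          aHead->PeekHeader(nsHttp::ETag) != nsnull ||
          aHead->PeekHeader(nsHttp::Last_Modified) != nsnull;
      PRBool acceptsByteRanges =
          aHead->PeekHeader(nsHttp::Accept_Ranges) != nsnull;
      return hasStrongValidator && acceptsByteRanges;
  }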
(In reply to comment #33)
>Pushed to tryserver...

Passes tryserver but I would like to use |mInitedCacheEntry| and extend the test to check how the content-decoders behave when receiving partially encoded content. Will also make the test-string a little longer, since the 25-byte one we use currently tends to be decoded on the first chunk if it's > 16 bytes.
In case it wasn't clear, I'm waiting on an answer to comment 35 before reviewing, since I think some of the changes that were made are unnecessary.
Ahh - didn't interpret comment #35 as a question. :)  Thanks for notifying.
The proposed patch doesn't relax the conditions of triggering a ranged request, but rather strengthens them by also requiring a certain amount of cached content for re-use. It skips all the doValidation checks in nsHttpChannel::CheckCache() if the cache-entry was created as part of the same load (although I still don't see where/how this can be done - shouldn't hurt though).

Three unit-tests have been updated to comply with the added requirement, and "test_gzipped_206.js" now uses a longer test-string since the previous one seemed to be decoded by the 17-byte partial data currently sent by the first request. With this longer string, the content-decoder is unable to finish its job prior to the 206 response. In addition, this test now does a second pass using a content-type which doesn't trigger the logic in nsHttpChannel::ClearBogusContentEncodingIfNeeded().

Pushed to tryserver.
Attachment #491488 - Attachment is obsolete: true
Attachment #491823 - Flags: review?(bzbarsky)
Attachment #491823 - Flags: approval2.0?
Attachment #491488 - Flags: review?(bzbarsky)
Of course, a unit-test for Ehsan's particular case is missing, but it should be pretty simple to set up and I'll try to do that asap (partial cached content with a no-cache header, which IMO should be validated by the server and not re-used).
There - finally got the logic in CheckCache() right. Added a third pass in test_gzipped_206.js which tests Ehsan's case (i.e. tests that Ehsan's cache-entry would not be used). Pushed to tryserver.
Attachment #491823 - Attachment is obsolete: true
Attachment #491981 - Flags: review?(bzbarsky)
Attachment #491823 - Flags: review?(bzbarsky)
Attachment #491823 - Flags: approval2.0?
It looks like that still changes the logic to do validation on the byte-range cases... why?
Seems to pass tryserver.

(In reply to comment #42)
> It looks like that still changes the logic to do validation on the byte-range
> cases... why?

Not sure I follow...  IMO, the fundamental issue here is that the content of Ehsan's partial cache-entry is *used* when it has the no-cache header. The fact that using it gives a content-encoding error is also an issue, of course, but it is not addressed here.

Furthermore, IMO, the correct way to handle this fundamental issue is to perform normal validation-checks on partial cache-entries. Hence, the logic for validating partial entries has been changed, yes. (Or rather, it has been added since it was pretty much non-existent previously - CheckCache() just set up the range-request and returned.)

Additionally, I think it makes sense to drop ranged requests unless there is a certain amount of (valid!) content stored in the cache, simply because of the overhead in header-data necessary for a ranged request. (I.e. it doesn't make sense to re-use 1 byte from the cache if we transfer 30 bytes extra in the headers.) I have set this "certain amount" to 16 bytes, but it might be better exposed as a pref. (Its optimal value may be different on e.g. mobiles.)

If I've done something stupid in the code, feel free to point it out. :) My intention is to have the above reasoning properly implemented.

There are also some take-aways from this: bug #613159 to make sure we request the same encoding for the range-request as we re-use from the cache, and the issue from comment #15 (your first answer in comment #19 indicates you would spin off a separate bug for it). Additionally, I observe in the logs that when a converter is installed by HttpBaseChannel::ApplyContentConversions, it writes to the cache 1 byte at a time. This is pretty suboptimal since each write spins off an event on the cache-io thread.
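To make the header-overhead argument above concrete (byte counts approximate, reusing the validator from this bug's report; a 15-byte entry is hypothetical):

  Range: bytes=15-             ~18 bytes
  If-Range: "1d62-4cb81958"    ~27 bytes

plus a Content-Range line in the 206 reply. Resuming a 15-byte entry thus costs more in header bytes than it saves, which is the rationale for a small threshold (and for exposing it as a pref).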
To be more explicit here:

(In reply to comment #35)
> See
> nsHttpResponseHead::IsResumable (which conditions the use of the range request
> in the situation in this bug: partial cached content).

RFC 2616 section 13.5.4 defines *when* we can use cached partial content. nsHttpResponseHead::IsResumable only checks for the existence of an ETag or Last-Modified, and the current code immediately sets up the range-request if these are set and returns. No comparison is done, afaics, and that must be wrong. It may be conservative to apply the full validation-check but it makes intuitive sense to me, and at least we don't re-use something we shouldn't.
Hmm...  it just struck me that the comparison I'm missing in the previous comment would actually be the responsibility of the *server*...  kind of breaks my argument. :(

Maybe we could resolve this problem by simply calling mCachedResponseHead->NoCache() (possibly also NoStore()) before going for a range-request..?
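A sketch of that proposal (the NoCache()/NoStore() accessors are named in the comment above; the exact placement and surrounding code are illustrative assumptions):

  // Sketch: refuse to resume from entries whose stored response
  // forbids reuse, before any Range/If-Range header is added; fall
  // through to a normal full request instead.
  if (mCachedResponseHead->NoCache() ||
      mCachedResponseHead->NoStore()) {
      mCachedContentIsValid = PR_FALSE;
      return NS_OK;
  }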
Implemented proposal in previous comment. Passes local testing including the extended test_gzipped_206.js.

(If this approach is preferred I think we should add the extra check for "reasonable sized content" also, btw.)
Attachment #492292 - Flags: review?(bzbarsky)
Anything we need to sort out before moving fwd on this one?
No, I just need time to convince myself that all the various invasive changes are right....

I still think it would be better to have one patch for the problem at hand and a separate patch for the validation changes so that the risk is isolated.  In fact, I think in this bug we should be making the minimal changes that fix the bug in its blocking form; I would _really_ prefer the invasive stuff happen post-2.0 if we do it.
Also, what's with the two patches?  Are both needed, or just one?
They are independent. The first one does a full re-validate of the partial cache-entry (as it would have done with complete entries) before deciding whether to re-use it in a range-request. After seeing the light in comment #45 I came up with the second patch which avoids this by using the fact that a strong validator is implied by IsResumable() and that the server is required to validate it.

Both patches have the extended test which includes Ehsan's particular case (no-store/no-cache headers), has a longer compressed string (see comment #39, last paragraph) and sends 17 bytes instead of 0 for partial data (see comment #32, third paragraph).

The first patch is left as a reference, but I believe the second patch is the way to go. I also suggest to improve it by adding a lower limit controlled by a preference to how much data we re-use (see comment #43, fifth paragraph).
Comment on attachment 491981 [details] [diff] [review]
Updated logic in patch, added new aspect to test

OK, removing the review request from this patch, then.
Attachment #491981 - Flags: review?(bzbarsky)
(In reply to comment #48)
> I think in this bug we should be making the minimal changes that fix the
> bug in its blocking form; I would _really_ prefer the invasive stuff happen
> post-2.0 if we do it.

I'm not sure I'm able to divide the current patch into minimal-fix and invasive-fix versions (most of the patch consists of changes to the test). Any input on this would be appreciated...
Comment on attachment 492292 [details] [diff] [review]
Simpler approach, keeping extended test

That doesn't match the other code in CheckCache (e.g. the fact that we'll do conditional requests for cached no-cache content; see the |if (doValidation)| block at the end of the function).
Attachment #492292 - Flags: review?(bzbarsky) → review-
One other thing.  The test bits don't make that much sense to me.  Max-age is in seconds, so I don't see why you think that |max-age=360| necessarily won't expire during the test.  That's making assumptions about the test timeout that I don't think we should be making.
(In reply to comment #54)
> One other thing.  The test bits don't make that much sense to me.  Max-age is
> in seconds, so I don't see why you think that |max-age=360| necessarily won't
> expire during the test.  That's making assumptions about the test timeout that
> I don't think we should be making.

Very well - we can increase max-age by a factor of 100 (making it 10 hours). I'd be very surprised if one xpcshell-test were allowed to run for more than that, and also if anyone would run a debug-session for longer than this. (I basically agree with your point about not making such assumptions but I feel this is a reasonable and pragmatic one, kind of like assuming that the cpu won't corrode while the test runs...  :)  )

(In reply to comment #53)
> That doesn't match the other code in CheckCache (e.g. the fact that we'll do
> conditional requests for cached no-cache content; see the |if (doValidation)|
> block at the end of the function).

IMO, adding conditional request-headers to a range-request (which carries a validator in any case) doesn't make sense. Moreover, if we do find a difference between the cached length and the content-length header, I'd argue that we should make the request *un*conditional on purpose. Hence, the code in the |if (doValidation)|-block should not be relevant in our case.
Whiteboard: [hardblocker]
Comment on attachment 492292 [details] [diff] [review]
Simpler approach, keeping extended test

Ref comment #55. Requesting review again, adding request for review from Honza.
Attachment #492292 - Flags: review?(honzab.moz)
Attachment #492292 - Flags: review?(bzbarsky)
Attachment #492292 - Flags: review-
> Very well - we can increase max-age by a factor of 100 (making it 10 hours).

Why not just set it to a few years and be done with it?  I'd really prefer that unless there's a reason.  What's the drawback to doing so?

My point is that we will use the cached content for no-cache responses, in general, after validating it... unless we're in this range request situation, with your patch.  And my question is why this is a desirable behavior difference.  Why do you need the no-cache check there?
I'm sure your Socratic debating-style causes you a lot of frustration...  :)

Not being that sophisticated (nor pedagogic), I'll bluntly sum up what I believe you mean:

1) When validating complete no-cache responses with strong validators we add conditional request-headers (if-block from comment #53). Content may be re-used if the entity is unchanged and the server returns 304.

2) Candidates for ranged requests implicitly have strong validators in the partial response (ref comment #45). If the server returns a 206, the entity must be unchanged (RFC 2616, section 14.27) and we can re-use content like in (1).

You ask why we don't set up a range-request under the same rules as setting up a conditional request. And I must admit you're right - that is probably more correct. I also in fact believe we should take |mCustomConditionalRequest| into account, and I'll add a check for minimum cached length as well since I think it makes sense.
Attachment #492292 - Flags: review?(honzab.moz)
Attachment #492292 - Flags: review?(bzbarsky)
Comment 58 is a lot clearer than my fuzzy thoughts were, for what it's worth... if nothing else because I think you understand HTTP a lot better than I do.  ;)

I think your summary is what I meant, yes.
I'm glad our thoughts seem to have converged (and don't worry about that rooster  ;) ).
Follows comment #57 and comment #58
Attachment #492292 - Attachment is obsolete: true
Attachment #504004 - Flags: review?(bzbarsky)
Attachment #504004 - Flags: review?(honzab.moz)
Comment on attachment 504004 [details] [diff] [review]
Simple approach w/ extended test, v2

Pushed to tryserver (although the tree seems rather funky at the moment)
Comment on attachment 504004 [details] [diff] [review]
Simple approach w/ extended test, v2

Argh! Forgot to increase the no-cache value in the test....  will do in final version.
This took a little time to think through, so I want to sum up (maybe once again) what's going on; feel free to jump below the line:

Full response with no-cache:
- we deploy a first request for a resource, not having cached data for it
- the server responds with 200 OK, no-cache, and either an ETag or Last-Modified
- we load and cache the whole content

- on the next request for the resource we check that we have an ETag or Last-Modified
- we send the request with either If-None-Match or If-Modified-Since
- the server responds with 304 when our cached data is valid against the data on the server, or
- the server responds with 200 OK, telling us the cache entry we hold is no longer valid, and serves newer content
=> the validation is done by the server based on the If-None-Match or If-Modified-Since header


Partial response with no-cache:
- we deploy a first request for a resource, not having cached data for it
- the server responds with 200 OK, no-cache, and either an ETag or Last-Modified
- we cache only a part of the content (network reset, or whatever)

- on the next request for the resource we check that we have an ETag or Last-Modified header and also some other info needed to create a partial request (IsResumable())
- we send a request with a Range header telling the server the offset of the part of the content we want, AND also with an If-Range header that stands in for the non-partial request headers If-None-Match or If-Modified-Since
- the server responds with 206 Partial, telling us the partial cached content we have is valid, and serves us the rest of the data we are missing, or
- the server responds with 200 OK, telling us the partial cached content is not valid, and serves the full newer content
=> the validation is done through the If-Range header

Also from RFC 2616, 14.27: "The If-Range header allows a client to "short-circuit" the second request"
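For illustration, the short-circuit with the validator from this bug (the continuation offset is hypothetical):

  Range: bytes=3348-
  If-Range: "1d62-4cb81958"

  -> 206 Partial Content  (entity unchanged; server sends the missing tail)
  -> 200 OK               (entity changed; server sends the full new content)

so a single request both validates the cached part and resumes the transfer.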


The line :)
-------------------------------------------------------------

So, there is no need for a dramatic change to the validation logic.

What I don't understand is:
- why a 16-byte threshold?  why not just > 0?  also, according to comment 15, I don't think it's the way we would like to go anyway
- also, how does this fix the original problem of the content-encoding mismatch?  what if we cache 17 bytes of data with gzip encoding and later receive a 206 with no encoding?
- why the no-store change?  why not use memory-cached data?

I agree with bypassing range requests when a custom header has been specified by the consumer.  We probably cannot simply morph the user condition from one of If-* headers to the single If-Range header.

In an ideal world, when we find out that the content-encoding is wrong in the partial response, we should doom the cache entry and restart the request, similarly to what we do for authentication.  We didn't call the consumer's OnStart/Stop/Data, so we may freely restart.  But it seems not to be a simple patch.

Re comment 15: Boris, why do you want to make "0-" range requests work rather than do a non-range request when we have zero-length data?  If you argue for doing it that way: when we encounter the encoding mismatch, let's check whether we by accident have a zero-length cache entry.  If so, let's continue with the load as there cannot be a conflict.  This, of course, doesn't fix my second complaint.
(In reply to comment #64)
> => the validation is done through the If-Range header
> Also from RFC 2616, 14.27: "The If-Range header allows a client to
> "short-circuit" the second request"

Exactly - this was the point I was missing prior to comment #58. IsResumable() implies that the cached entry has a strong validator (ETag or Last-Modified) and If-Range is a shortcut capable of using this validator.

> - why a 16-byte threshold? 

It's a proposal. IMO it doesn't make sense to re-use 2 bytes of data if it means adding header-payload in the range of 20 bytes (or whatever it is). I also think this value should be made a pref, ref comment #50, last paragraph. The rationale for this is that this value *may* be subject to advanced users' preferences, but also that Fx for devices other than desktop may want to tweak it.

> - also, how does this fix the original problem of the content-encoding mismatch? 

It doesn't fix the content-encoding mismatch, ref comment #43, third paragraph (starting with "Not sure I follow.."). But Ehsan's cache-entry should not have been re-used because it was marked "no-store" (I said "no-cache" previously, which is too weak, but the point still stands). I hope fixing bug #613159 (mentioned in comment #32) may help avoid content-encoding mismatches.

> - why the no-store change?  why not use memory-cached data?

See comment in the |if (doValidation)| block towards the end of CheckCache().

> I agree with bypassing range requests when a custom header has been specified
> by the consumer.  We probably cannot simply morph the user condition from one
> of If-* headers to the single If-Range header.

Phew...  good! A point where we agree... :)
(In reply to comment #64)
> Re comment 15: Boris, why do you want to make "0-" range requests work rather
> than do a non-range request when we have zero-length data?  

Btw, the 16-byte (or some pref-value) limit effectively implements Honza's view here.
(In reply to comment #65)
> > - why the no-store change?  why not use memory-cached data?
> 
> See comment in the |if (doValidation)| block towards the end of CheckCache().

I see.
So, I have spent some time on trying to resume after a different content encoding is detected in the 206 response.  

I discovered that IIS 7.5 behaves the same way: with a large default document, download throttled to 1 kb/s and runtime compression enabled, if I cancel the page load in the middle and then reload, the server gives me a 206 w/o a Content-Encoding header => I get the Content Encoding Error page.

Aren't our general expectations about content encoding and partial requests wrong?  Isn't it normal that compressed items can't be resumed?

Therefore, I would tend not to send "0-" range requests but do a normal non-range request for zero-length cache entries; otherwise we lose the advantage of having a compressed item in the cache.
(In reply to comment #68)
> I discovered that IIS 7.5 behaves the same way: with a large default document,
> download throttled to 1 kb/s and runtime compression enabled, if I cancel the
> page load in the middle and then reload, the server gives me a 206 w/o a
> Content-Encoding header => I get the Content Encoding Error page.

To clarify: What mime-type is your document, and where do you observe the headers? (See comment #29 first paragraph and comment #30 first part).

> otherwise we lose the advantage
> of having a compressed item in the cache.

Not following...  could you explain?
This version passes tryserver with no clear signs of introducing new problems.
Attachment #504004 - Attachment is obsolete: true
Attachment #504826 - Flags: review?(bzbarsky)
Attachment #504004 - Flags: review?(honzab.moz)
Attachment #504004 - Flags: review?(bzbarsky)
Attachment #504826 - Flags: review?(honzab.moz)
(In reply to comment #69)
> To clarify: What mime-type is your document, and where do you observe the
> headers? (See comment #29 first paragraph and comment #30 first part).

text/html

NSPR log and wireshark

Why exactly is this important?

> 
> > otherwise we lose the advantage
> > of having a compressed item in the cache.
> 
> Not following...  could you explain?

The server responds to a range request with non-compressed data (IIS 7.5).  It must cross the network and is stored in the cache that way.  For non-range requests the server sends the content compressed (gzip/whatever).  The data is stored encoded in the cache.
Comment on attachment 504826 [details] [diff] [review]
Simple approach w/ extended test and some , v3

I see you have modified two tests quite a lot.  Those tests are important and already very complicated; this makes them even more unreadable because of the 16-byte threshold, which I'm not personally sure we want.

I would vote for checking just size>0 and do not modify the test.  This patch with whatever threshold wouldn't fix the problem described in comment 68 anyway.

Boris?
That seems reasonable to me, yes.

(And yes, this is on my to-review-today list.)
(In reply to comment #71)
> Why exactly is this important?

Because the combination of content-type=application/gzip and a compressed content-encoding will make Firefox remove the latter header from its cached response. See the ClearBogusContentEncodingIfNeeded() method.
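(For context, the bogus combination that method clears looks like

  Content-Type: application/x-gzip
  Content-Encoding: gzip

i.e. the entity itself is a gzip file rather than gzip-transfer-encoded content, so the Content-Encoding header is dropped from the stored response head. Exact header spellings per my reading of the method; treat this as a sketch.)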

> The server responds to a range request with non-compressed data (IIS 7.5).

Ahh - *that* is pretty useful information! I didn't see that point at all. :(  

Need some time to think about that one...  seems like you're right about the general assumptions for range-requests being wrong.  (We *could* refuse the identity-encoding in the range-request though...  see RFC2616 section 14.3. But then we would have to deal with the possible 406 response...)

(In reply to comment #72)
> I would vote for checking just size>0 and do not modify the test.  This patch
> with whatever threshold wouldn't fix the problem described in comment 68
> anyway.

In fact, based on the discussion above, we cannot safely re-use a partial cache-entry at all if it is encoded, right? Checking for size would thus be useless because if size==0 there is nothing to re-use and we might as well do a normal request.

We could re-use content-encoded cache-entries and use a threshold (together with changing the tests) if we can successfully implement refuse-identity-encoding e.g. in bug #613159. Another bug...?
(In reply to comment #74)
> Checking for size would thus be
> useless because if size==0 there is nothing to re-use and we might as well do a
> normal request.

That did not come out very well..  :}  Please forget...

My point was that size doesn't matter if the partial response is encoded since we cannot use it anyway.
Bjarne, I believe your patch is correct; just remove the 16-byte threshold to make it as simple as needed to fix this one issue.

This issue is IMO also present in older versions of the platform, so there is no need for complicated fixes.

The true fix is to retry when the content encoding in a 206 response is found to be different from what we have in the cache.  I have no real evidence that every single server implementation returns 206 content unencoded.  Maybe search the RFCs to judge this?  But I don't think you'll find a definite rule; also, the fix wouldn't happen for Fx4 anyway.
(In reply to comment #76)
> Bjarne, I believe your patch is correct; 

This may sound a little weird, but I disagree...  :)

IMO we should avoid range-requests if the cached response has content-encoding.

Your nice experiment from comment #68 shows that servers may respond with the identity-encoding (aka no encoding) to ranged requests, and according to RFC 2616 section 14.3 they are entitled to do so. The exception is when the request asks for a particular encoding and also explicitly refuses the identity-encoding (see RFC 2616 section 14.3). The downside of refusing identity-encoding is that the server is allowed to respond with 406, but I believe recovering from a 406 is pretty much the same as recovering from a 206 with mismatching encodings.
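Concretely, refusing identity in a range request would look something like this (a sketch following RFC 2616 section 14.3, not what we currently send; offset hypothetical):

  Range: bytes=3348-
  If-Range: "1d62-4cb81958"
  Accept-Encoding: gzip;q=1.0, identity;q=0

  -> 206 with Content-Encoding: gzip   (safe to splice onto the cached entry)
  -> 406 Not Acceptable                (recover with a full non-range request)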

So I suggest that we

1) use this patch to drop the range request also if the cached response has content-encoding (in addition to the other criteria already in the patch and discussed)

2) use bug #613159 to re-introduce range-requests w/ content-encoding, but make sure to set up the request properly and handle the possible 406

3) add another bug to introduce the threshold, expose it as a pref and add the modified tests

1 may give a performance-hit but will ensure we never get mismatching encodings (see comment #3 and bug #523859). 2 should recover any performance loss from 1. 3 will be an enhancement to consider later.

With this approach we avoid mismatching encodings and we also avoid situations where we request a range, receive data which we drop due to mismatching encodings, and then re-request the resource entirely.

Thoughts?
That plan seems fine.  Should I still be reviewing the attached patch?
The more the merrier...  :)  No - I'll upload another version asap and request review on that one instead. It will be pretty similar though...  just changing the criteria for setting up a range-request to match comment #77.
Attached file Updated according to comment #77 (obsolete) —
Attachment #504826 - Attachment is obsolete: true
Attachment #505540 - Flags: review?(bzbarsky)
Attachment #504826 - Flags: review?(honzab.moz)
Attachment #504826 - Flags: review?(bzbarsky)
Attachment #505540 - Flags: review?(honzab.moz)
Attached patch Updated according to comment #77 (obsolete) — Splinter Review
Forgot to set the "patch" content-type...
Attachment #505540 - Attachment is obsolete: true
Attachment #505544 - Flags: review?(bzbarsky)
Attachment #505540 - Flags: review?(honzab.moz)
Attachment #505540 - Flags: review?(bzbarsky)
Attachment #505544 - Flags: review?(honzab.moz)
jst: based on the summary of this bug, I have a hard time understanding why it's a hardblocker, so removing that marking for now. Feel free to return that status if you feel I've done this in error.
Whiteboard: [hardblocker]
IIRC I marked this a hardblocker mostly based on the lack of understanding of what was going on here at the time, and the fact that it was an intermittent content-loading error of unknown frequency, but now that this is understood I'm fine treating this as a softblocker. We still need to fix this before release IMO, but softblocker is fine.
Whiteboard: [softblocker]
Comment on attachment 505544 [details] [diff] [review]
Updated according to comment #77

>+                PRBool hasContentEncoding =
>+                    mCachedResponseHead->PeekHeader(nsHttp::Content_Encoding);

Please change this to mCachedResponseHead->PeekHeader(nsHttp::Content_Encoding) != nsnull.  In this form it does not compile on Windows (type-cast error).


test_resumable_truncate.js and test_gzipped_206.js failed locally for me (windows debug).
Attachment #505544 - Flags: review?(honzab.moz) → review-
Comment on attachment 505544 [details] [diff] [review]
Updated according to comment #77

Nix the extra spaces after those '!'.  And the PeekHeader-to-PRBool conversion, as above.  And check on the test failures?

r=me apart from those concerns.
Attachment #505544 - Flags: review?(bzbarsky) → review+
Sigh... I forgot that even the size>0 test breaks two tests. In fact, these tests have up till now *only* been checking the special case of a zero-byte partial entry...  my changes make the response-body longer and use a different mechanism to truncate the response.

Hope things are OK now. I'm re-requesting reviews since tests were added...
Attachment #505544 - Attachment is obsolete: true
Attachment #506559 - Flags: review?(bzbarsky)
Attachment #506559 - Flags: review?(honzab.moz)
(In reply to comment #77)
> 3) add another bug to introduce the threshold, expose it as a pref and add the
> modified tests

Bug #628607 has been registered for this.
Comment on attachment 506559 [details] [diff] [review]
Updated with comments from reviewers

Passes tryserver without introducing more noise afaics.
The test_resumable_truncate changes no longer cancel the channel, where we used to cancel it.  Was that canceling not part of the test?

Actually, the same question for test_gzipped_206.  Why do we need to do more there than just increase the length of the test data?
When we cancel the channel as in these tests, the partial cache-entries end up with 0 bytes. By aborting the transfer as done in the patch (i.e. controlling the output from the handler and sending a limited amount of data before closing the channel) we can make the partial cache-entry have any number of bytes. IMO this is particularly important for the gzip-test.

It *may* be that canceling the channel is an explicit part of the test, but I don't believe so.
test_gzipped_206.js, introduced in bug 426273: the test checks that we clear the bogus encoding headers also when receiving 206 in ProcessPartialContent.  We get into this method only when we deploy a Range request.  That is not possible with this patch as the cache entry length is 0.

=> I checked we enter the method and we clear the headers.  The change is OK for me.


test_resumable_truncate.js, introduced in bug 448712: if I understand, it checks that a cache entry that is loaded a second time from a server, allocated in a cache block file, and never written (i.e. size changed to zero - request canceled) is correctly deallocated from the block file.  But when I back the bug 448712 fix out, the test doesn't fail.  So it doesn't check the original problem.  Also the logs w/ and w/o the bug 448712 fix seem equal...

=> However, this is an important test and cannot be changed this way; it needs to cancel the load before we cache any data.


Using the unmodified test and the C++ fix, I can see the final response is 304 instead of a 206 response to the Range: 0- request.  Actually, this could be a real-life scenario: we have a zero-length cache entry with an ETag or other strong validator.  We deploy a conditional request, the server responds with 304, and we are f***ed - we don't have the cache entry data to serve from the cache.  The test failure catches a real problem; it is not a reason to modify the test.

This could be the reason why Boris wanted to allow Range: 0- requests for zero length items in the cache...

So, I suggest not sending If- validation headers when we have a zero-length cache entry, at least, to avoid the 304.

Makes sense?
(In reply to comment #91)
> 448712 fix out, the test doesn't fail.  So it doesn't check the original
> problem.  Also the logs w/ and w/o the bug 448712 fix seem equal...

Ouch! :(

> => However, this is an important test and cannot be changed this way;

Even if it doesn't test what it is supposed to test? :)

The test makes three requests:

1) from do_run_test(): do304 is false, no Range-header. Response is 200 with full content from the handler - it's stuffed into the cache.

2) from start_canceler(): do304 is still false, no Range-header. The Canceler cancels in OnStartRequest(). The content of the 200-response has 0 bytes and the cache-entry is updated with this.

3) from start_cache_read(): do304 is true, unmodified necko adds the If-Range and Range headers. The handler responds with 206 since the Range header is set and tested for first.

> Using the unmodified test and the C++ fix, I can see the final response is 304
> instead of a 206 response to the Range: 0- request.

Assuming "the C++ fix" means the proposed patch, are you sure the request (with the proposed patch applied) has a Range-header?

The proposed patch should affect step (3) in that necko no longer adds Range/If-Range headers (since the cached entry has size==0). If we use the unmodified test, the handler will return 304 because do304 is true and Range-header is not present. If it is present, the patch doesn't do what it's supposed to do and we'd have to fix it.

>  Actually, this could be a real-life
> scenario: we have a zero-length cache entry with an ETag or other strong
> validator.  We deploy a conditional request
[ snip ]
> So, I suggest not sending If- validation headers when we have a zero-length
> cache entry, at least, to avoid the 304.

But we don't. :)  We return early from CheckCache() with |mCachedContentIsValid| false.

I'll try to repeat your experiment later.
(In reply to comment #92)
> Even if it doesn't test what it is supposed to test? :)

I meant to write "original fix" and not "problem".  This is a valid regression test; we could break what it tests by other future changes.

> But we don't. :)  We return early from CheckCache() with
> |mCachedContentIsValid| false.

You are right, I had overlooked that.  So the test is actually wrong.  It must not return 304 when there is no If- header.  The question is how the test actually works.  We should know exactly, before we modify it.  The only thing it IMO tests is that there is no assertion failure during the execution (bug 448712).
Honza, Bjarne, can you please let me know once you sort out the test situation?  

For the part of it I know something about, looking at test_gzipped_206 the reason we did the canceling business was to ensure that our cache entry size would be smaller than the response body size, so that necko would in fact make a range request and so forth.  As long as the modified test still triggers that codepath, things are good.
(In reply to comment #93)
> I meant to write "original fix" and not "problem".  This is a valid regression
> test; we could break what it tests by other future changes.

You're a careful man, Honza...  :)  I think I see what you say and I agree. To ensure we're in sync:

Unmodified, this test passes even if you back out the code-fix it was supposed to test (like you say in comment #91). So it seems like other things have happened in the codebase that fix the original issue by other means. You say that we should leave this test around to guard against changes in the codebase which may re-introduce the original issue, right?

>   The question is how the test actually
> works.  We should know exactly, before we modify it.  The only thing it IMO
> tests is that there is no assertion failure during the execution (bug 448712).

It also checks that the NS_BINDING_ABORTED given to Cancel() is preserved for OnStopRequest(). A quick search does not reveal any other tests which check preservation of NS_BINDING_ABORTED in exactly this way, so we might as well preserve this aspect also.

I suggest removing the do304 flag and never returning 304 from the handler, since this is what makes the test fail with the proposed patch for this bug. Will attach an updated patch soon...
Attachment #506559 - Attachment is obsolete: true
Attachment #508105 - Flags: review?(honzab.moz)
Attachment #506559 - Flags: review?(honzab.moz)
Attachment #506559 - Flags: review?(bzbarsky)
Comment on attachment 508105 [details] [diff] [review]
Updated according to comment #95

Thanks, r=honzab.
Attachment #508105 - Flags: review?(honzab.moz) → review+
Test to verify that we drop range-requests in (IMO) all relevant cases. Asking for review and approval but I believe we can check in the patch without this test. I will use it when working on the followup-bugs to this one.

All sub-tests fail independently without the patch. The first sub-test is commented out because it asserts - left for reference and completeness.
Attachment #508730 - Flags: review?(honzab.moz)
Attachment #508730 - Flags: approval2.0?
Oh - almost forgot: With the current r+'ed patch we don't do range-requests for responses with explicit "Content-Length: identity". I don't think that's crucial and I suggest to leave it and handle it in bug #613159.
(In reply to comment #99)
> Oh - almost forgot: With the current r+'ed patch we don't do range-requests for
> responses with explicit "Content-Length: identity". I don't think that's
> crucial and I suggest to leave it and handle it in bug #613159.

Mean "Accept-Encoding" rather than "Content-Length"?


About the patch:

-    self.env["XPCOM_DEBUG_BREAK"] = "stack-and-abort"
+    #self.env["XPCOM_DEBUG_BREAK"] = "stack-and-abort"

Why this change?  Actually, you shouldn't ever do this.  It is set to catch breaks as failures!
And one more thing: could you please add a comment to the top of the test about what the test exactly does?  It helps to understand purpose and functionality in the future.  Thanks.
(In reply to comment #100)
> Mean "Accept-Encoding" rather than "Content-Length"?

Of course...  :}

> About the patch:
> 
> -    self.env["XPCOM_DEBUG_BREAK"] = "stack-and-abort"
> +    #self.env["XPCOM_DEBUG_BREAK"] = "stack-and-abort"
> 
> Why this change?  Actually, you shouldn't ever do this.  It is set to catch
> breaks as failures!

Yeah - it is not supposed to be there, I qref'ed without removing this.... :(  (the script clobbers my shell-setting, preventing me from checking case-1 in the test so I hacked around it...)  Will update patch!

(In reply to comment #101)
> And one more thing: could you please add a comment to the top of the test about
> what the test exactly does?  It helps to understand purpose and functionality
> in the future.  Thanks.

Uhhh..  you mean beyond what's already written?
Ah, there are 6 cases in which we must not send a range request.  Those 6 handlers and the 6 asyncOpen's against them cover it all, is that so?  If yes, then please just add a short note that there is one request + handler for each of the 6 cases.  Thanks.
Added more comments, removed change to runxpcshelltests.py
Attachment #508730 - Attachment is obsolete: true
Attachment #508889 - Flags: review?(honzab.moz)
Attachment #508730 - Flags: review?(honzab.moz)
Attachment #508730 - Flags: approval2.0?
Comment on attachment 508889 [details] [diff] [review]
Exhaustive test for range-requests (updated based on comments)

>+function setStdHeaders(response, length) {
>+  response.setHeader("Content-Type", "text/plain", false);
>+  response.setHeader("ETag", "Just testing");
>+  response.setHeader("Cache-Control", "max-age: 360000");
>+  response.setHeader("Accept-Ranges", "bytes");
>+  response.setHeader("Content-Length", "" + length);
>+}
...
>+
>+const partial_data_length = 4;
>+
>+var case_1_request_no = 0;
>+function handler_1(metadata, response) {
>+  switch (case_1_request_no) {
>+    case 0:
>+      do_check_false(metadata.hasHeader("Range"));
>+      setStdHeaders(response, partial_data_length);
>+      response.setHeader("Content-Length", "" + partial_data_length);

No need for this as setStdHeaders does this for you?

>+      response.processAsync();
>+      response.bodyOutputStream.write(clearTextBody, clearTextBody.length);
>+      response.finish();

However, the whole test doesn't seem to me right, in the first case (0) you set Content-length to 4 and send the whole body of 31 bytes, in the second case (1) you send the whole response (31 bytes), Content-length = 31.  You are changing the content length header while you preserve other headers, including ETag.  I see this as a faulty server implementation; that's also why you get the assertion failure NS_ASSERTION(progress <= progressMax, "unexpected progress values");

IMO the test should just check that if we have the complete cache entry - for which all conditions other than the first apply - we don't deploy a range request.


>+// Simple mechanism to keep track of tests and stop the server 
>+var numTestsFinished = 0;
>+function testFinished() {
>+  if (++numTestsFinished == 1)

Shouldn't the number in the condition be 5 or 6, depending on the previous comment?  You stop the server too soon, not running all the tests.

>+    httpserver.stop(do_test_finished);
>+}



I did change the test according to my comments and it fails on line 198 of your original patch:

// due to identity-encoding in previous req we should see a range-request
do_check_true(metadata.hasHeader("Range"));

it is in test 4, case 2.
Nice with fresh eyes! :)

(In reply to comment #105)
> 
> No need for this as setStdHeaders does this for you?

Correctomundo...

> 
> >+      response.processAsync();
> >+      response.bodyOutputStream.write(clearTextBody, clearTextBody.length);
> >+      response.finish();
> 
> However, the whole test doesn't seem to me right, in the first case (0) you set
> Content-length to 4 and send the whole body of 31 bytes,

Well... subtest-0 is there to ensure we only do a range-request if the cached size is smaller than c-l. I try to trigger the opposite: making the cached size larger than c-l. However, it looks like there is logic to assert that we don't read more data than c-l (at least I interpret the assertion that way), so subtest-0 crashes before we get that far (remember the forgotten hack in runxpcshelltests.py). That's why it's commented out.

> in the second case (1)
> you send the whole response (31 bytes), Content-length = 31.  You are changing
> the content length header while you preserve other headers, including ETag. 

This is just to respond properly to the non-range request we should send when we find an entry with cached-size > c-l.

> >+// Simple mechanism to keep track of tests and stop the server 
> >+var numTestsFinished = 0;
> >+function testFinished() {
> >+  if (++numTestsFinished == 1)
> 
> Shouldn't the number in the condition be 5 or 6, depending on the previous comment? 
> You stop the server too soon, not running all the tests.

*blush* You're right - I left this after debugging the test...

> I did change the test according to my comments and it fails on line 198 of your
> original patch:
> 
> // due to identity-encoding in previous req we should see a range-request
> do_check_true(metadata.hasHeader("Range")); 
> 
> it is in test 4, case 2.

Yes - this worked for me because the server was stopped too early. It is actually the case described in comment #99. With this patch, we drop the range-request if Content-Encoding is *set*. If it's set to "identity", there is effectively no encoding and we should allow it. However, as mentioned in comment #99, let's leave that to bug #613159. I'll take this case out of the test.
A few corrections
Attachment #508889 - Attachment is obsolete: true
Attachment #509267 - Flags: review?(honzab.moz)
Attachment #508889 - Flags: review?(honzab.moz)
(In reply to comment #106)
> Well... subtest-0 is there to ensure we only do a range-request if the cached
> size is smaller than c-l. I try to trigger the opposite: making the cached size
> larger than c-l. However, it looks like there is logic to assert that we don't
> read more data than c-l (at least I interpret the assertion that way), so
> subtest-0 crashes before we get that far (remember the forgotten hack in
> runxpcshelltests.py). That's why it's commented out.


Isn't testing with Content-Length == actual cache entry length enough?  The negation of a < b is a >= b, and it is enough to test with a == b, IMO, for this case.

I don't agree with leaving in a test that will always be commented out because of an assert triggered by its malformed response.
(In reply to comment #108)
> Isn't testing with Content-Length == actual cache entry length enough? 

Necko would never consider making a range-request in that case I believe

http://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpChannel.cpp#2500

> I don't agree with leaving in a test that will always be commented out because
> of an assert triggered by its malformed response.

That's fair enough - I can remove it. It was added for completeness and it's a (albeit weird) case not otherwise tested afaics. The code is pretty clear otoh, so I don't think it's a problem in real life.

So - remove case 0 and we're good to go?
(In reply to comment #109)
> Necko would never consider making a range-request in that case I believe
> 
> http://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpChannel.cpp#2500

Aha, you are actually testing this condition: (nsInt64(size) < contentLength).  Now I see.  But that has been there for a long time already.

> So - remove case 0 and we're good to go?

Yes, please. I don't think we need to test it, nor that we can have a good test for it.
Removed the case-0 test and added a comment that it is not tested
Attachment #509267 - Attachment is obsolete: true
Attachment #509595 - Flags: review?(honzab.moz)
Attachment #509267 - Flags: review?(honzab.moz)
Comment on attachment 509595 [details] [diff] [review]
Exhaustive test for range-requests (update 3 removed case 0 test)

Thanks!

r=honzab
Attachment #509595 - Flags: review?(honzab.moz) → review+
Thanks yourself. :)  Requesting check-in (patch + test).
Keywords: checkin-needed
http://hg.mozilla.org/mozilla-central/rev/35b9f3cffefc
Status: NEW → RESOLVED
Closed: 13 years ago
Flags: in-testsuite+
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → mozilla2.0b12
I backed this patch out as part of this pushlog <http://hg.mozilla.org/mozilla-central/pushloghtml?changeset=7e12e3e16e6c> because of the oranges it caused <http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1296852850.1296854923.2345.gz&fulltext=1>.

I'm not sure which one of the bugs was at fault, so I just backed them all out.  The assignee needs to investigate and make sure that his patch has not been the culprit, and then re-add checkin-needed.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Relanded all but one of the patches in that push.  For this bug, that's:
http://hg.mozilla.org/mozilla-central/rev/637ec623065e


... but did anyone actually check in the patch, or only the test?
Er, rather, it looks like only the patch but not the test was checked in.
(But if you want attachment 509595 [details] [diff] [review] to be checked in, you should provide it in patch form.)
The changeset referred to in comment #116 contains the test as well. It seems like both test and patch have landed - is this resolved, then?
(In reply to comment #119)
> The changeset referred to in comment #116 contains the test as well. It seems
> like both test and patch have landed - is this resolved, then?

Yes.  I landed both the patch and the test attached to this bug in a single changeset, which dbaron relanded.  Sorry for the confusion!
Status: REOPENED → RESOLVED
Closed: 13 years ago13 years ago
Resolution: --- → FIXED
No need to be sorry! Thanks for the check-in. :)