Closed Bug 247334 Opened 20 years ago Closed 9 years ago

Handle unexpected 206 responses better [mod_gzip: range request on compressed document is returned uncompressed with same ETag]

Category: Core :: Networking: HTTP (defect)
Severity: major
Status: RESOLVED FIXED
People: Reporter: darin.moz; Assignee: Unassigned
Attachments: 1 file, 1 obsolete file

Handle unexpected 206 responses better [mod_gzip: range request on compressed document is returned uncompressed with same ETag]. See bug 241085 for oodles of context.
Status: NEW → ASSIGNED
Target Milestone: --- → mozilla1.8alpha2
Target Milestone: mozilla1.8alpha2 → mozilla1.8beta
biesi: do you have time to help with this one?
Keywords: helpwanted
So, according to bug 241085 comment 65, blogspot should be fixed, and thus that URL is no longer a testcase.

What should we do with a 206 that has unexpected compression? Re-request the document without a Range: header? Cancel() the request (this seems suboptimal)? Or could we try to recover? Given how stream converters work, we probably don't have the original listener anymore, so that may be hard... unless we save that listener in order to restore it later. That may actually be easiest.

Is there a testcase available anywhere? Is a normal mod_gzip installation sufficient to test this? (The problem is that we reload the document because the charset is missing from the HTTP header but a <meta http-equiv> tag is present, right?)
I don't think we should necessarily try to recover from this bad situation; I think we should just bail gracefully. It is a server bug. Right now we end up spewing garbage into the browser window, and that's bad. The solution here should just be to sanity check the response headers: if the 206 response corresponds to the same entity as the original document, then it should have the same content encodings applied to it.
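The sanity check described above can be sketched as follows. This is a hypothetical helper, not the actual necko code: on a 206, compare the partial response's ETag and Content-Encoding against what was stored with the cached entity, and refuse to splice the bytes together on a mismatch.

```python
def partial_response_is_consistent(cached_headers, partial_headers):
    """Sanity-check a 206 response against the cached entity it extends.

    A 206 for the same entity (same ETag) must carry the same
    Content-Encoding as the original response; the mod_gzip bug is that
    it returns a range of the *uncompressed* document under the
    original ETag.
    """
    same_etag = cached_headers.get("ETag") == partial_headers.get("ETag")
    same_encoding = (cached_headers.get("Content-Encoding") ==
                     partial_headers.get("Content-Encoding"))
    # Only accept the partial body if both entity and encoding match;
    # otherwise the caller should bail (necko ends up reporting
    # NS_ERROR_UNEXPECTED in this situation).
    return same_etag and same_encoding


cached = {"ETag": '"abc123"', "Content-Encoding": "gzip"}
broken_206 = {"ETag": '"abc123"'}  # mod_gzip bug: same ETag, no gzip
good_206 = {"ETag": '"abc123"', "Content-Encoding": "gzip"}

print(partial_response_is_consistent(cached, broken_206))  # → False
print(partial_response_is_consistent(cached, good_206))    # → True
```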
A standard Apache 1.3 with mod_gzip will reproduce this problem. For blogspot, we patched Apache to never return Accept-Ranges: bytes. Thus, Mozilla will never attempt a range fetch, and (thus) never trigger the busted behavior of mod_gzip.
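The "garbage in the browser window" failure mode is easy to reproduce offline without Apache: splice an uncompressed byte range onto a cached gzip prefix, as the buggy server effectively makes the browser do, and try to inflate the result. A minimal illustration with synthetic data (not an actual mod_gzip capture):

```python
import gzip
import zlib

document = b"<html>" + b"x" * 5000 + b"</html>"
compressed = gzip.compress(document)

# The browser cached the first half of the *compressed* stream...
cached_prefix = compressed[: len(compressed) // 2]
# ...then the buggy server answers the range request with bytes taken
# from the *uncompressed* document, under the same ETag.
bogus_tail = document[len(cached_prefix):]

spliced = cached_prefix + bogus_tail
decompressor = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # gzip framing
try:
    output = decompressor.decompress(spliced)
    print("decoder produced", len(output), "bytes (not the original document)")
except zlib.error as err:
    print("decoder choked:", err)
```

Either branch is "garbage" from the browser's point of view: the decoded bytes cannot match the original document, and the gzip CRC check fails even if the deflate stream happens to parse.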
OK, taking. I'm having a somewhat hard time creating a testcase, though: I can't get Mozilla to send a Range: header on the reload it does; the cache entry gets doomed for some reason I haven't yet found out...
Assignee: darin → cbiesinger
Status: ASSIGNED → NEW
Attached patch maybe this? (obsolete) — Splinter Review
So, I still could not create a testcase, but this may be a possibility: it does not send Range: headers when resuming from cache if the original response was gzipped. The downside, of course, is that we would re-request the entire document even if the server would handle the range correctly. (Note: this does not affect resumed loads via nsIResumableChannel.)
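The approach in this (later rejected) patch amounts to a one-line policy check when resuming from cache. A hypothetical sketch, not the actual necko diff:

```python
def should_send_range_header(cached_headers):
    """Skip byte-range resumption when the cached response was
    content-encoded: some servers (mod_gzip on Apache 1.3) answer the
    Range request against the *decoded* entity, corrupting the spliced
    stream. The cost is re-fetching the whole document even from
    servers that handle ranges on encoded entities correctly — which
    is why review preferred validating the 206 response instead.
    """
    encoding = cached_headers.get("Content-Encoding", "identity").lower()
    return encoding in ("", "identity")


print(should_send_range_header({"Content-Encoding": "gzip"}))  # → False
print(should_send_range_header({}))                            # → True
```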
Comment on attachment 164098 [details] [diff] [review] maybe this? No, I'm pretty sure we don't want to kill range requests here; this is allowed by the spec and broken in only a few servers. Instead, we should check for the problem when processing a 206 response.
Attachment #164098 - Flags: review-
Meant to attach this before, but got distracted... This should do it, then. Unfortunately I'm still unable to test it...
Attachment #164098 - Attachment is obsolete: true
Comment on attachment 164104 [details] [diff] [review] alternative patch (checked in) yes, that looks correct to me. r=darin
Attachment #164104 - Flags: review+
OK, creating a testcase for this sucks, I can tell you. Here's how a testcase looks:
- Have an Apache with mod_gzip activated.
- Make sure it does not send a charset in the Content-Type header (this is important).
- Create an HTML page. It must have a <meta http-equiv="content-type" content="text/html; charset=XXX"> with an XXX not equal to the browser's default charset (I think).
- That <meta> must occur late-ish in the file (probably after the first 1K or 4K, not sure; codewise, not in the first OnDataAvailable).
- The file should probably be large-ish, or your network needs to be slow, or you need to hack necko to make the network slow (a PR_Sleep(PR_MillisecondsToInterval(20)); in nsSocketInputStream::Read helps).
- Now load the file in Mozilla. If it works, press Shift+Reload. Eventually you will see garbage where the bottom of the file should be.

For some reason, this only reproduces the bug every second time I try it...
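The manual Apache recipe above can be approximated with a tiny test double that deliberately mimics the mod_gzip bug: full responses are gzipped, but a Range request is served from the raw, uncompressed file under the same ETag. This is a hypothetical stand-in for testing, not real Apache behavior:

```python
import gzip
from http.server import BaseHTTPRequestHandler

DOCUMENT = (b"<html><head>" + b"<!-- padding -->" * 300 +
            b'<meta http-equiv="content-type" '
            b'content="text/html; charset=koi8-r">'
            b"</head><body>hello</body></html>")
ETAG = '"deadbeef"'


class BrokenModGzip(BaseHTTPRequestHandler):
    """Mimics the mod_gzip bug: 200 responses are gzipped, but a Range
    request is answered with a slice of the *uncompressed* document,
    still carrying the same ETag and no Content-Encoding."""

    def do_GET(self):
        rng = self.headers.get("Range")
        if rng is None:
            body = gzip.compress(DOCUMENT)
            self.send_response(200)
            self.send_header("Content-Encoding", "gzip")
        else:
            start = int(rng.split("=")[1].split("-")[0])
            body = DOCUMENT[start:]  # bug: range of the decoded bytes
            self.send_response(206)
            self.send_header(
                "Content-Range",
                "bytes %d-%d/%d" % (start, len(DOCUMENT) - 1, len(DOCUMENT)))
        self.send_header("ETag", ETAG)
        self.send_header("Accept-Ranges", "bytes")
        # No charset in the Content-Type header: this is what forces the
        # <meta>-driven reload described in the recipe above.
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


# Usage (blocking):
#   from http.server import HTTPServer
#   HTTPServer(("", 8000), BrokenModGzip).serve_forever()
```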
Comment on attachment 164104 [details] [diff] [review] alternative patch (checked in) Yep, this gives the right behaviour, at least from necko's point of view. I don't get an alert for that error, though, just a silent failure, a blank page, and this dump on the console: Error loading URL http://wintermute/~chb/gziptest.html : 8000ffff (NS_ERROR_UNEXPECTED)
Attachment #164104 - Flags: superreview?(bzbarsky)
Status: NEW → ASSIGNED
Keywords: helpwanted
Target Milestone: mozilla1.8beta → mozilla1.8alpha5
Comment on attachment 164104 [details] [diff] [review] alternative patch (checked in) sr=bzbarsky. Note that nsWebShell::EndPageLoad whitelists the errors it propagates to the user, so that's why there is no alert. A better error code, with some changes to that method, would help.
Attachment #164104 - Flags: superreview?(bzbarsky) → superreview+
Comment on attachment 164104 [details] [diff] [review] alternative patch (checked in) This patch is checked in; leaving the bug open for a follow-up patch to give the user some feedback.
Attachment #164104 - Attachment description: alternative patch → alternative patch (checked in)
Status: ASSIGNED → NEW
Target Milestone: mozilla1.8alpha5 → ---
We now give feedback (treating this as a content-encoding decompression error). Can we close this bug?
For diagnosing cases like bug 501953, it would be really useful to hint at the 206 response (as opposed to 200) in the error message.
Assignee: cbiesinger → nobody
I'm going to call this fixed enough.
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → FIXED