Closed Bug 742174 Opened 13 years ago Closed 13 years ago

Allow empty Location headers again

Categories

(Core :: Networking: HTTP, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla11
Tracking Status
firefox12 + fixed
firefox13 + fixed

People

(Reporter: jduell.mcbugs, Assigned: jduell.mcbugs)

References

Details

(Keywords: regression)

Attachments

(3 files, 3 obsolete files)

Users are reporting they can get into sites with Chrome that Firefox blocks as "Corrupted Content" because there is a blank Location header in the response. I've investigated, and indeed, Chrome allows a single blank Location header w/o error. (Like Firefox, it fails if multiple Location headers with different values are present.) Specifically:

- For 200 responses, Chrome ignores the Location header (whether it's blank or not).
- For 3xx responses, Chrome displays the body of the 3xx response if the Location header is blank (or missing).

While Firefox currently fails if Location is empty, if we remove that restriction (i.e. back out bug 716801), we'll do the following:

- For 200 responses, like Chrome, ignore the Location header (whether it's blank or not).
- For 3xx responses, treat an empty Location as a relative URI, and wind up redirecting to the same URI that redirected us. (This is allegedly also IE's behavior: see bug 716801 comment 0.) This was apparently our behavior for many years until bug 716801 landed (I've verified it in FF 10). This results in our requesting the same URI again, after which we give up (for reasons I haven't managed to figure out yet), without any page navigation visible to the user (i.e. we stay on the previous page). Not pretty, but we don't need to behave prettily for redirects, and it's not IMO a security issue, especially since we don't render the 3xx body (which would be under the control of a CRLF attacker).
- For 3xx with a *missing* Location header, we render the 3xx response body. (That's the case whether we back out or not: just mentioning it FYI.)

I propose that we back out bug 716801 for beta/aurora/nightly, and allow empty Location headers again. This is causing a significant web compat hit (apparently including logging into servers using PeopleSoft: see bug 699502 comment 10). I'm not convinced that there's actually a good case why empty Location headers are a special case that deserves banning. In particular, bug 716801 comment 8 claims that empty Location deserves special treatment since an empty Location will result in the (CRLF attacker's) response body being rendered. But as shown above that's not actually true for us for 3xx responses, and it's not worse than any other CRLF injection in the 200 case.

One last wrinkle: since bug 716801 landed, we also added additional tracking of certain empty headers in bug 704227. In my backout I'm going to add Location to the list of tracked headers, which will mean that responses that include

Location:
Location: http://...

will fail. This logic is also in aurora, but not beta, so beta won't fail this case. But I don't think it's a significant one. (For a reference point, the ESR branch declined to take the patch for bug 716801 in the first place.) FWIW all branches will fail the opposite case, where the empty Location comes 2nd.
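For anyone who wants to reproduce this locally, here is a minimal sketch (assuming Python 3; the file name, port, and body are made up for illustration and are not part of any patch) of a server that answers every request with a 302 and an empty Location header:

# repro_empty_location.py -- hypothetical local repro, not part of the patch.
# Answers every request with a 302 and an empty Location header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EmptyLocationHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", "")   # the blank Location under discussion
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<p>3xx body that Chrome renders and we do not</p>")

if __name__ == "__main__":
    HTTPServer(("localhost", 9876), EmptyLocationHandler).serve_forever()

Loading http://localhost:9876/ in a build with bug 716801 shows the Corrupted Content error; with the backout we re-request the same URI and then give up, as described above.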
Attached patch Fix for central (obsolete) — Splinter Review
I misspoke--the fix for bug 704227 is only on central.
Assignee: nobody → jduell.mcbugs
Attachment #612094 - Flags: review?
Attached patch test for central (obsolete) — Splinter Review
Test includes the blank Location 1st, then a non-blank one 2nd, so it can't land on beta/aurora w/o tweaking. I suspect it's enough to land it on central.
Attachment #612096 - Flags: review?
Attached patch Fix for beta/aurora (obsolete) — Splinter Review
For aurora/beta this is just a straight backout of 716801.
Attachment #612099 - Flags: review?
Blocks: 738122
No longer blocks: 738122
Target Milestone: mozilla11 → mozilla12
Blocks: 738122
Jason, you didn't set a reviewer on your review request. It seems like this patch wouldn't be a pure backout, so make sure that it isn't labeled or described as such.

(In reply to Jason Duell (:jduell) from comment #0)
> - For 3xx responses, treat an empty Location as a relative URI, and wind up
> redirecting to the same URI that redirected us. (This is allegedly also IE's
> behavior: see bug 716801 comment 0).
> This was apparently our behavior for
> many years until 716801 landed (I've verified it in FF 10).

This would also be consistent with HTTPbis and RFC 3986. HTTPbis says that Location is a URI-Reference, and RFC 3986 says that a relative-ref is a URI-Reference, and an empty value is a valid relative-ref. So, an empty Location value is a valid value and we should treat it like IE does--as a redirect to the current location.

> place). FWIW all branches will fail the opposite case, where the empty
> Location comes 2nd.

I agree that, at least for response status codes where Location is relevant, every case where there are multiple differing values of the Location header should cause the corrupted content error, regardless of the order. But we should avoid emitting the corrupted content error in the case where we should be ignoring the Location header.

A more comprehensive way to fix this bug would be to:

1. Only cause SetHeaderFromNet to return a corrupted content error when the Location header is relevant for the given status code (sketched below).
2. For the methods that return headers, add an NS_WARNING (or even MOZ_ASSERT) when they are used to retrieve the value of the Location header for a status code that isn't a redirect code we know about or a 201 (if we process 201 in any useful way).
3. Audit the code to ensure that we only inspect the value of the Location header after checking that the status code is in the set whitelisted in #1.

This should minimize false positives (no corrupted content errors for 200 responses for GETs, in particular) without making us any less safe, AFAICT.
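A rough sketch of the whitelist idea in step 1 above (Python pseudocode only; the real logic would live in the C++ of nsHttpHeaderArray/nsHttpChannel, and the names and status-code sets here are assumptions, not Gecko APIs):

REDIRECT_CODES = {300, 301, 302, 303, 307}    # codes where Location drives a redirect
LOCATION_RELEVANT = REDIRECT_CODES | {201}    # 201 carries Location informationally

def should_flag_corrupted_content(status_code, location_values):
    # Fail only when Location matters for this status code and the
    # supplied values actually conflict with each other.
    if status_code not in LOCATION_RELEVANT:
        return False                          # e.g. a plain 200: just ignore Location
    distinct = set(location_values)
    return len(distinct) > 1                  # multiple differing Location headers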
Target Milestone: mozilla12 → mozilla11
Attachment #612094 - Flags: review? → review?(mcmanus)
Comment on attachment 612096 [details] [diff] [review]
test for central

Patrick, I'm assigning these to you since you've poked around the setHeader code recently. If you're too busy fixing pipelining, assign to someone else.
Attachment #612096 - Flags: review? → review?(mcmanus)
Attachment #612099 - Flags: review? → review?(mcmanus)
> Jason, you didn't set a reviewer on your review request.

Thanks. So odd--I remember assigning them all to Patrick. Ghosts!

> HTTPbis says... an empty Location value is a valid value and we
> should treat it like IE does--as a redirect to the current location.

So we should enter an infinite redirect loop because the spec (unintentionally AFAICT) doesn't realize that's what it's specifying? Or perhaps there's some value in redirecting to the same URI (it doesn't need to redirect again): that's definitely an idiom that some sites use (but generally with a non-empty Location). Anyway, I'm fine leaving "empty Location == redirect to self", since it gives us better security characteristics (we won't render a 3xx body that might be attacker-controlled), and since I have zero sympathy for this usage pattern on the web. But it's not going to be portable (Chrome renders the 3xx body instead). Whatevs :)

> A more comprehensive way to fix this bug

Is there a security risk specifically that you can think of that your fix buys us? My sense is that we don't get enough to make it worth doing the architectural changes (making set/getHeader know about HTTP status codes). With this fix we 1) ignore Location for non-3xx responses (true, we haven't audited whether the Location header is accessed somehow, but we don't audit lots of semantically non-sensible header usage: is there a risk case you're worried about?); and 2) fail any 3xx that has more than one Location header with a different value.

> This should minimize false positives (no corrupted content errors for
> 200 responses for GETs, in particular)

I agree that we could eliminate a small class of false positives (200s with multiple, different Location headers), but Chrome is already failing those too, so I expect they will disappear from the web (as they should). I don't want to overfix this bug.
Summary: Allow empty Location headers again (Backout bug 716801) → Allow empty Location headers again
Group: core-security
Keywords: regression
Although Kohei didn't say when he added the tracking-firefox12 nomination, this is a regression from Firefox 11 that we haven't shipped anywhere yet and is breaking sites (see bug 738122 for example). For aurora and beta the plan is to back-out the regressing fix.
(In reply to Jason Duell (:jduell) from comment #6)
> > HTTPbis says... an empty Location value is a valid value and we
> > should treat it like IE does--as a redirect to the current location.
>
> So we should enter an infinite redirect loop because the spec
> (unintentionally AFAICT) doesn't realize that's what it's specifying? Or
> perhaps there's some value in redirecting to the same URI (it doesn't need
> to redirect again): that's definitely an idiom that some sites use (but
> generally with a non-empty Location).
> But it's not going to be portable (Chrome renders the 3xx body
> instead). Whatevs :)
> multiple, different Location headers), but Chrome is already failing those
> too, so I expect they will disappear on the web (as they should). I don't
> want to overfix this bug.

We should maximize compatibility with Chrome AND IE here (i.e. be as permissive as IE when IE is more permissive, and as permissive as Chrome when Chrome is more permissive), unless we've identified some real security problem that comes with that permissiveness. I don't think we need to be more permissive than either of them. Regardless, we should communicate with them to come to some kind of agreement.

That said, we can get to that point in multiple stages. It seems like your patch is an improvement that gets us closer to that ideal, if we're not there already.
Blocks: 742508
> we should communicate with them to come to some kind of agreement

I'd be happy if HTTPbis decided to formally make multiple Location: headers erroneous behavior (if it hasn't already: I haven't parsed the specs). Absent that, I don't think it's worth adding logic to support {multiple, varying} Location headers for non-3xx responses unless there's evidence it'll break a significant number of sites. I haven't heard of any cases where that's an issue. We're definitely not going to support 3xx responses with conflicting Locations, whether the spec allows it or not, for CRLF-avoidance reasons.

The only remaining permissiveness/spec issue I'm aware of is whether it would be better to render the 3xx response when there's a blank Location (like Chrome), keep our current behavior (wind up ignoring the navigation), or display some sort of "infinite redirect" error page. I've filed bug 742508 for that.
FWIW, what the spec says about "" as a URI is not really the most interesting thing here - the vulnerability is that the server expects a script to set a URI to redirect to, and this is a hole (from XSS?) that allows the script to not get redirected and to display the smuggled content instead. If that's happening, we need to stop it IMO.

Anyhow, it sounds like we never displayed the content or executed the JS. And I've confirmed that's true now with Jason's patch here - though I'm afraid it is really just by circumstance rather than design. Jason, can you explain how that all squares with the original report of bug 716801 comment 0? Did we misunderstand the original issue?

I'll approve the patches because they IMO do what you want them to do.
Here is a resource, http://ducksong.com/cgi-bin/nph-742174, that sends an empty Location header with a 302 and a JavaScript body.
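(For reference, that resource behaves roughly like the following nph-style CGI sketch; Patrick's actual script isn't shown in this bug, so the body and wording below are invented for illustration.)

#!/usr/bin/env python3
# Sketch of an nph-style CGI that emits a 302 with an empty Location header
# and a JavaScript body -- illustrative only, not the script linked above.
import sys

sys.stdout.write(
    "HTTP/1.1 302 Found\r\n"
    "Location: \r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body><script>document.write('smuggled 3xx body');</script></body></html>"
)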
Attachment #612094 - Flags: review?(mcmanus) → review+
Attachment #612096 - Flags: review?(mcmanus) → review+
Attachment #612099 - Flags: review?(mcmanus) → review+
(In reply to Daniel Veditz [:dveditz] from comment #7)
> Although Kohei didn't say when he added the tracking-firefox12 nomination,
> this is a regression from Firefox 11 that we haven't shipped anywhere yet
> and is breaking sites (see bug 738122 for example). For aurora and beta the
> plan is to back-out the regressing fix.

Thanks Dan. We were tracking bug 738122, but we'll track this instead now.
(In reply to Jason Duell (:jduell) from comment #6)
> > Jason, you didn't set a reviewer on your review request.
> Thanks. So odd--I remember assigning them all to Patrick. Ghosts!

Bug 372539?
Patrick,

Thanks a lot for the test case, and the suggestion to revisit the original test cases in bug 716801. So, the plot thickens, and not happily...

We must have done something since aurora forked that changes how we handle redirects. Firefox 10 is vulnerable to the attack in comment 11 as well as to the two attacks in bug 716801 comment 0. So are beta (FF 11) and aurora (FF 12) if I back out and allow empty Location headers. Only central is fixed with my patch :( I also tested Chrome 17/18, which are mostly vulnerable (the 2nd attack in bug 716801 results in no navigation; the other two attacks succeed).

Tomorrow I will look into what we've changed since the last aurora fork, to see if I can pinpoint the differences here and whether there's a simple fix we can apply to beta/aurora.

P.S. :emk--yes, that was it! I hadn't cc'd Patrick yet on the bug when I assigned the reviews to him. Good to know...
(In reply to Jason Duell (:jduell) from comment #6)
> ...
> So we should enter an infinite redirect loop because the spec
> (unintentionally AFAICT) doesn't realize that's what it's specifying? Or
> ..

No, it's totally intentional. RFC 3986 defines the URI-reference syntax, and HTTPbis uses that for Location. The empty value is an edge case, but as far as I can tell, it's a simple one. If you believe there's something wrong with HTTPbis, please by all means report this to the Working Group.

> ...
> > This should minimize false positives (no corrupted content errors for
> > 200 responses for GETs, in particular)
>
> I agree that we could eliminate a small class of false positives (200s with
> multiple, different Location headers), but Chrome is already failing those
> too, so I expect they will disappear on the web (as they should). I don't
> want to overfix this bug.

+1
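(As a concrete illustration of that edge case: resolving an empty relative reference against a base URI per RFC 3986 simply yields the base URI again, which is the "redirect to self" reading. Python's urllib.parse follows RFC 3986, so a quick check looks like this.)

from urllib.parse import urljoin

# An empty relative reference resolves to the current (base) URI, which is
# why an empty Location header on a 3xx means "redirect to self".
base = "http://example.com/some/page?x=1"
assert urljoin(base, "") == base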
(In reply to Jason Duell (:jduell) from comment #9)
> > we should communicate with them to come to some kind of agreement
>
> I'd be happy if HttpBis decided to formally make multiple Location: headers
> an erroneous behavior (if it's not already: haven't parsed the specs).
> ...

http://greenbytes.de/tech/webdav/draft-ietf-httpbis-p1-messaging-19.html#rfc.section.3.2.p.7
> can you explain how that all squares with the original report of
> bug 716801 comment 0?

OK, I've figured out the issue. Prior to bug 704227 we weren't keeping track of empty headers, and that fix hasn't landed on aurora/beta. So once I stopped failing empty Location headers, nsHttpHeaderArray ignored them, and our redirect logic saw the equivalent of there being *no* Location header present (and so it rendered the 3xx content, as we do for a missing Location, and fell for the CRLF attack).

If I apply bug 704227 to aurora/beta, and then also apply my patch for central in this bug, we fix this issue completely: empty Locations are allowed, we treat them as "redirect to self", and wind up failing the page for doing too many redirects (see the sketch below the option list).

Drivers: at this point we have 3 options for aurora/beta:

1) Land both bug 704227 and this patch on aurora/beta. They're simple small patches, apply cleanly, and we get the best result. I vote for this.
2) Land only the backout of bug 716801 on aurora/beta. This will allow empty Locations (and fix a lot of websites), but re-expose the CRLF attack. We would do no worse than Chrome 18, which is still vulnerable today.
3) Do nothing, keep the security fix, but break lots of sites. This is the worst option IMO.
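A client-side sketch of why the "redirect to self" path ends in a failed page rather than rendered content (Python, with a made-up fetch callback; the limit of 20 is only an assumption about our redirection limit, not a quote of the pref):

from urllib.parse import urljoin

def follow_redirects(start_uri, fetch, max_redirects=20):
    # `fetch` is a hypothetical stand-in returning (status, location_or_None, body).
    uri = start_uri
    for _ in range(max_redirects):
        status, location, body = fetch(uri)
        if status // 100 != 3 or location is None:
            return body                        # no (further) redirect: render the body
        uri = urljoin(uri, location)           # "" resolves back to `uri` itself
    raise RuntimeError("too many redirects")   # an empty Location loops until this fires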
Depends on: 704227
[Approval Request Comment]
Regression caused by (bug #):
User impact if declined:
Testing completed (on m-c, etc.):
Risk to taking this patch (and alternatives if risky):
String changes made by this patch:

This rolls up the fix and test, and will apply on all branches if we land bug 704227 on aurora/beta first.
Attachment #612094 - Attachment is obsolete: true
Attachment #612096 - Attachment is obsolete: true
Attachment #612099 - Attachment is obsolete: true
Attachment #613072 - Flags: review+
Attachment #613072 - Flags: approval-mozilla-beta?
Attachment #613072 - Flags: approval-mozilla-aurora?
(In reply to Jason Duell (:jduell) from comment #0)
> - For 3xx responses, Chrome displays the body of the 3xx response if the
> Location header is blank (or missing)

Although I think we already agreed not to do so, for completeness' sake I will mention one problem that we would run into if we were to copy the Chrome behavior: according to nsHttpChannel.cpp, we do not cache the body of a 3xx response. If true, that means that, without changes, we would not be able to emulate the Chrome behavior for 3xx responses served from the cache.
Comment on attachment 613072 [details] [diff] [review]
test and fix for all branches: requires bug 704227 for aurora/beta

[Triage Comment]
Approved for Aurora 13 and Beta 12 to prevent web regressions.
Attachment #613072 - Flags: approval-mozilla-beta?
Attachment #613072 - Flags: approval-mozilla-beta+
Attachment #613072 - Flags: approval-mozilla-aurora?
Attachment #613072 - Flags: approval-mozilla-aurora+
One last piece: xpcshell test, so we don't unintentionally change our semantics again.
Attachment #613376 - Flags: review?(mcmanus)
Attachment #613376 - Flags: review?(mcmanus) → review+
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Is there something QA needs to do to verify this fix? Is the xpcshell test enough?
> Is there something QA needs to do to verify this fix? Is the xpcshell test enough?

The xpcshell test doesn't cover the "HTTP 200 status code plus empty Location header" case (we just ignore the empty Location now instead of failing). I tested that case by hand, though.
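(If QA wants to repeat that by-hand check, something along these lines works; the script name and port are arbitrary, and this is not part of the landed xpcshell test.)

# check_200_empty_location.py -- hypothetical by-hand check, not the xpcshell test.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Location", "")   # should now be ignored, not treated as an error
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"PASS: empty Location ignored on a 200\n")

HTTPServer(("localhost", 9877), Handler).handle_request()   # serve a single request

Loading http://localhost:9877/ should show the body; a corrupted-content error means the regression is back.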
Thanks Jason. What version of Firefox did you test? Can we mark this verified for Firefox 13?
Attached file nc script to test with
I didn't test FF 12/13, but I'd be very surprised if they didn't work. If we want to test them again, just take the attached file, run

nc -l 9876 < TEST_ME

and open "http://localhost:9876/" in the browser. If you see a page, it's fixed. If you get an error about corrupted content, it's broken.
