Closed Bug 461413 Opened 16 years ago Closed 14 years ago

Firefox interprets Refresh header on 302 response and ignores location

Categories

(Core :: Networking, defect, P2)

x86
Windows XP
defect

Tracking


RESOLVED WONTFIX
mozilla1.9.1

People

(Reporter: bugzilla_mozilla_org.20.ngc, Assigned: bjarne)

References


Details

(Whiteboard: [sg:moderate])

Attachments

(1 file, 2 obsolete files)

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3

This redirector on NYTimes.com echoes newlines, which allows me to inject a Refresh HTTP header with a javascript: URL that lets me steal the cookies. This is an XSS attack that only works in Firefox (not in IE 7, Safari 3.1 or Opera 9.5); the other browsers did not execute the Refresh in the presence of a Location header on a 302.

Reproducible: Always

Steps to Reproduce:
1. Go to http://www.nytimes.com/login?URI=http://www.nytimes.com:%20http://%0ARefresh:%200;%20url=javascript:alert(document.cookie);%0A%0Ahi

Actual Results: Alert pops up with cookie. Cookie could be sent to malicious domain.

Expected Results: Should not execute javascript.
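The injection mechanism described above can be sketched as follows; this is an illustrative Python reconstruction of what a vulnerable redirector does with the reported URL, not actual NYTimes or Mozilla code:

```python
from urllib.parse import unquote

# Attacker-controlled query-string value from the report. A vulnerable
# redirector URL-decodes this and echoes it into the Location header of
# its 302 response (behavior reconstructed from the reported URL).
uri_param = ("http://www.nytimes.com:%20http://"
             "%0ARefresh:%200;%20url=javascript:alert(document.cookie);%0A%0Ahi")

location_value = unquote(uri_param)

# The embedded %0A newlines split the echoed value into extra header
# lines, smuggling a Refresh header (and a body, after the blank line)
# into the response.
lines = location_value.split("\n")
injected = [l for l in lines if l.lower().startswith("refresh:")]

print(lines[0])     # the (malformed) Location value the browser sees
print(injected[0])  # the injected Refresh header
```

Only the first line reaches the Location header; everything after the first newline is parsed by the client as further headers and body.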
I don't see an alert when I visit the cited URL.
I got the alert with the cookie, FF 3.0.3.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Whiteboard: [sg:high]
Flags: wanted1.9.0.x+
Flags: blocking1.9.0.5?
Component: Security → Networking
Product: Firefox → Core
QA Contact: firefox → networking
Flags: blocking1.9.0.5? → blocking1.9.0.5+
Steps to reproduce are incomplete, you also have to log in. I get the alert on trunk.
OK, so the reason this works is that the Location header is kind of malformed. The header we get is:

Location: http://www.nytimes.com: http://^M

(not sure what's up with that extra CR at the end). We end up being unable to create a channel for that URI, so Necko just hands off the content of the page to the parser etc., which then interprets the Refresh header. What I don't understand is why this is a security bug in our code. Seems like this is nytimes's issue.
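A quick way to see why no channel can be created is to try to parse a port out of that authority; this Python sketch uses the standard-library URL parser rather than Necko's, but fails for the same reason:

```python
from urllib.parse import urlsplit

# The malformed Location value quoted above (trailing CR dropped).
location = "http://www.nytimes.com: http://"

parts = urlsplit(location)
print(parts.hostname)  # everything up to the first ':' in the authority

# The text after that ':' (" http:") is not a valid port number, so no
# usable URI (and hence no channel) can be built from this header.
try:
    parts.port
    port_ok = True
except ValueError:
    port_ok = False
print(port_ok)
```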
What do other UAs do on encountering malformed Location headers? Perhaps we should just show an error page instead of doing ProcessNormal?
Whiteboard: [sg:high] → [sg:high][needs owner]
Maybe we should differentiate between "New channel creation/opening failed" and "Security check for redirect failed"
How would that help this bug?
(In reply to comment #5)
> What do other UAs do on encountering malformed Location headers? Perhaps we
> should just show an error page instead of doing ProcessNormal?

It is the illegal port which causes the problem (the port is interpreted as 20 in the original URL). In fact, constructing a perfectly legal Location header pointing to localhost port 1 (for example) triggers the same issue. Try for example this:

http://www.nytimes.com/login?URI=http://localhost:1%0ARefresh:%200;%20url=javascript:alert(document.cookie);%0A%0A

For the above URL, Opera 9.62 on my Ubuntu system displays an error page stating that access to the port (1) is disabled for security reasons. The original URL is actually handled properly by Opera, probably because it applies some guesswork when parsing the malformed Location header.

The patch below can be applied to disable our check for safe ports in AsyncOpen(). This causes FF to hang for a while (timing out) while trying the port and then end up with an error page stating that it could not establish a connection to the server.

--------------------------------------------------------------------
diff -r 7d7d075d58a3 netwerk/protocol/http/src/nsHttpChannel.cpp
--- a/netwerk/protocol/http/src/nsHttpChannel.cpp Tue Nov 11 01:44:54 2008 -0800
+++ b/netwerk/protocol/http/src/nsHttpChannel.cpp Tue Nov 11 15:20:27 2008 +0100
@@ -4000,19 +4017,19 @@ nsHttpChannel::AsyncOpen(nsIStreamListen
     LOG(("nsHttpChannel::AsyncOpen [this=%x]\n", this));

     NS_ENSURE_ARG_POINTER(listener);
     NS_ENSURE_TRUE(!mIsPending, NS_ERROR_IN_PROGRESS);
     NS_ENSURE_TRUE(!mWasOpened, NS_ERROR_ALREADY_OPENED);

     nsresult rv;

-    rv = NS_CheckPortSafety(mURI);
-    if (NS_FAILED(rv))
-        return rv;
+//    rv = NS_CheckPortSafety(mURI);
+//    if (NS_FAILED(rv))
+//        return rv;

     // Remember the cookie header that was set, if any
     const char *cookieHeader = mRequestHead.PeekHeader(nsHttp::Cookie);
     if (cookieHeader)
         mUserSetCookieHeader = cookieHeader;

     // fetch cookies, and add them to the request header
     AddCookiesToRequest();
--------------------------------------------------------------------

However, I'm not certain this is the right way to resolve the problem. First, the check for safe ports has been put there for a reason, and second (as bz indicates in comment #5) calling ProcessNormal() in ProcessResponse() unconditionally if a redirection failed may be wrong. Instead, we might check the actual error code (NS_ERROR_PORT_ACCESS_NOT_ALLOWED in this case) and handle particular issues (like this one) more specifically. (I guess the reason for calling ProcessNormal() in the first place is to fall back and show the content of the page if redirection fails, but this may be done in a more controlled way.)
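For context, the safe-port check being disabled in that diff can be sketched roughly as follows; the port set here is a small illustrative subset, not Mozilla's actual NS_CheckPortSafety table:

```python
# Sketch of the idea behind Necko's NS_CheckPortSafety: refuse to open
# channels to well-known non-HTTP service ports, so a hostile redirect
# cannot make the browser speak HTTP to, say, an SMTP server.
UNSAFE_PORTS = {1, 7, 9, 11, 13, 15, 17, 19, 20, 21, 22, 23, 25, 514}

def check_port_safety(port, default=80):
    """Return True if a channel may be opened to this port."""
    if port is None:          # no explicit port: use the scheme default
        port = default
    return port not in UNSAFE_PORTS

# Port 1 (tcpmux) is blocked, so the localhost:1 redirect above fails
# the check and AsyncOpen() returns an error before connecting.
print(check_port_safety(1), check_port_safety(80))
```

That early error return is exactly what sends ProcessResponse() down the ProcessNormal() fallback path discussed in this comment.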
Hmm. I guess 3xx responses can in fact have response bodies. In which case, comment 6 is spot-on and we should just do that.
Do browsers honor Refresh headers on 4xx responses?
(In reply to comment #10)
> Do browsers honor Refresh headers on 4xx responses?

A simple test with 403 (Forbidden) and 404 (Not Found) gives exactly the same effect for FF: the Refresh header is used and can contain JS. These responses can have bodies, so it makes sense for the browser to display them.

Reading the code for ProcessResponse() shows that ProcessNormal() is called for all 4xx responses except for 401 (Auth Required) and 407 (Proxy Auth Required), where ProcessAuthentication() is called. However, if ProcessAuthentication() returns with a failure code, ProcessNormal() is called like for the 3xx responses, probably for the same reason as indicated in comment #8.

(In reply to comment #9)
> Hmm. I guess 3xx responses can in fact have response bodies. In which case,
> comment 6 is spot-on and we should just do that.

What exactly from comment #6 do you propose to do?
> These responses can have bodies, so it makes sense for the browser to display
> them.

3xx responses can also have bodies, of course. And I know _we_ process the Refresh header there; I'm wondering what other browsers do. In some ways, it might make sense to only process the Refresh header on 2xx responses.

As far as comment 6 goes, I'd say that if the redirect fails due to a security check we should show an error page instead of showing the body. But I'm not convinced that's useful for "fixing" this bug as originally filed.
Refresh is a way for the server to set up a client to request a new resource after loading the current resource; it's not a fall-back mechanism. This IMHO argues for only processing Refresh on 2xx responses, as suggested by bz. However, you never know how creatively people out there make use of it... :)

"Processing Refresh on 2xx responses only" could e.g. be handled in nsDocument::SetHeaderData() if there is any way to determine at that point whether loading was successful, or it could be handled in nsHttpChannel::ProcessResponse() by simply removing any Refresh headers prior to calling ProcessNormal() in the non-2xx cases. (Or if there is a special method for the "security check for redirect", as indicated in comment #6, that would be a natural place to handle this :) )
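The "remove any Refresh headers prior to calling ProcessNormal()" option can be sketched as follows; the function and variable names are hypothetical, not Necko's:

```python
# Illustrative gating sketch: before falling back to normal processing
# of a response body, drop the Refresh header unless the status code is
# in the 2xx class, so a failed redirect can never trigger a refresh.
def strip_refresh_if_not_2xx(status, headers):
    if not 200 <= status <= 299:
        headers = {k: v for k, v in headers.items()
                   if k.lower() != "refresh"}
    return headers

resp = {"Location": "http://localhost:1/",
        "Refresh": "0; url=javascript:alert(document.cookie)"}

print(strip_refresh_if_not_2xx(302, resp))  # Refresh removed
print(strip_refresh_if_not_2xx(200, resp))  # Refresh kept
```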
Attached patch v1 (obsolete) — — Splinter Review
This patch implements the suggested approach and handles the reported URL. Some error message or something might be in order (see TODO). I don't really know who should review this, but bz has been active, so I ask him... :) A test case is missing... I'll try to whip up something.
Attachment #347995 - Flags: review?(bzbarsky)
I'd really like us to figure out the behavior we want (including testing what other UAs do) before we start changing code....
Very well... Note btw that nytimes.com uses Sun-ONE-Web-Server/6.1, which seems to provide this redirection as a service, meaning that there may be other sites out there with similar issues. See http://secunia.com/advisories/26326/

I set up httpd.js with three handlers responding "200 Ok", "302 Found" and "404 Not Found" respectively. All response bodies are "content" and all responses set the following headers:

Set-Cookie: SECRET
Location: http://localhost:1/
Refresh: 0; url=javascript:alert(document.cookie)

Then I loaded these pages in FF 3.0.3 on Ubuntu, Opera 9.6 on Ubuntu and MSIE 7.0 on Windows XP Home (the only Windoze I have available). Results:

200 OK:
FF: displays "content" + alert containing "SECRET"
Opera: displays "content" + alert containing "SECRET"
MSIE: displays "content", no alert

302 Found:
FF: displays "content" + alert containing "SECRET"
Opera: displays error page "Error! The server tried redirecting to an invalid address. Please report this to the webmaster of the site."
MSIE: displays error page stating that the resource could not be loaded

404 Not Found:
FF: displays "content" + alert containing "SECRET"
Opera: displays "content" + alert containing "SECRET"
MSIE: displays error page stating that the resource could not be found

Although it feels really funny to say this, I must admit that MSIE seems to be the most secure implementation with the most reasonable behaviour. Honoring the response code even if there is content and a Refresh header available makes sense and might be considered a strategy in ProcessResponse() as well.

Furthermore, for the "200 OK" response I changed the content so that it contains a meta tag with "javascript:alert(document.cookie)". MSIE does not run this either, although the same JS in a body.onload does run. I.e. it looks like MSIE does not allow JS in the Refresh meta tag or header. I'll leave conclusions to more knowledgeable people though.
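The test setup described above can be approximated with a small Python handler in place of httpd.js; the route paths here are hypothetical and the header values are the ones from the comment:

```python
from http.server import BaseHTTPRequestHandler

# The hostile header set shared by all three handlers in the experiment.
COMMON_HEADERS = [
    ("Set-Cookie", "SECRET"),
    ("Location", "http://localhost:1/"),
    ("Refresh", "0; url=javascript:alert(document.cookie)"),
]

class Handler(BaseHTTPRequestHandler):
    # Three handlers that differ only in status code (paths are made up).
    ROUTES = {"/ok": 200, "/redir": 302, "/missing": 404}

    def do_GET(self):
        status = self.ROUTES.get(self.path, 404)
        self.send_response(status)
        for name, value in COMMON_HEADERS:
            self.send_header(name, value)
        body = b"content"
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

print(Handler.ROUTES)
```

Serving this with `http.server.HTTPServer` and loading the three paths in each browser reproduces the matrix above.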
Comment on attachment 347995 [details] [diff] [review] v1 Removed request for review
Attachment #347995 - Flags: review?(bzbarsky)
Bjarne, thanks for looking into that! One other question: how does behavior differ (if at all) if the Refresh target is not a javascript: URL?
Assignee: nobody → bjarne
Whiteboard: [sg:high][needs owner] → [sg:high]
Repeating the experiment from comment #16, replacing

Refresh: 0; url=javascript:alert(document.cookie)

with

Refresh: 0; url=http://localhost:4444/bad

gives the following:

200 OK:
FF: loads (and displays?) "content" + redirects immediately to bad url
Opera: loads (and displays?) "content" + redirects immediately to bad url
MSIE: loads (and displays?) "content" + redirects immediately to bad url

302 Found:
FF: loads (and displays?) "content" + redirects immediately to bad url
Opera: displays error page "Error! The server tried redirecting to an invalid address. Please report this to the webmaster of the site."
MSIE: displays error page stating that the resource could not be loaded

404 Not Found:
FF: loads (and displays?) "content" + redirects immediately to bad url
Opera: loads (and displays?) "content" + redirects immediately to bad url
MSIE: displays error page stating that the resource could not be found

Thus, it looks like the tested UAs treat JS: urls and non-JS: urls principally in the same way. However, MSIE does not seem to allow JS in a Refresh tag.

IMHO, in the long term, FF should be more aligned with what MSIE does in this area, but this should be filed, implemented and reviewed as a larger improvement in how Necko deals with server responses. In the short term, IMHO a quick, low-impact fix is better, and it can even be removed once a more fundamental improvement is implemented and reviewed. The fix suggested in comment #14 (enhanced to report some error message, as mentioned) is one way to handle this vulnerability with very low code impact and small potential for regressions. Implementing behaviour more aligned with MSIE (if this becomes the decision) will most likely impact bigger parts of the code and thus represents a larger potential for regressions, in addition to fundamentally changing the behaviour of FF on non-2xx responses.
Btw, look up this archived post http://www.hpl.hp.com/personal/ange/archives/archives-96/http-wg-archive/2725.html and search the document for "Yeat another neat trick". The paragraph seems to suggest that in fact a Refresh should be honored also in e.g. a 3xx response, which complicates things a little. But I would leave this issue for the suggested bigger improvement.
I see no reason to honor Refresh in 3xx based on something someone wrote with no arguments to back it up. For example, I see nothing about Refresh at all in the HTTP specification, which to me means behavior is UA-defined. So it sounds to me like we should do what IE does as regards the Refresh header (and that this is what the attached patch does, right?). IE's handling of non-2xx statuses in general is not something we want to duplicate, though.
The Refresh header is certainly not in any HTTP spec, although it has been considered for addition to future versions (see e.g. http://www.hpl.hp.com/personal/ange/archives/archives-96/http-wg-archive/1657.html), and it also comes as a meta tag (http://en.wikipedia.org/wiki/Meta_tag#Redirects). With the current dynamic capabilities of modern browsers and W3C's recommendations against the refresh construct (e.g. http://www.w3.org/TR/WCAG10-HTML-TECHS/#meta-element and http://www.w3.org/TR/WAI-WEBCONTENT/#tech-no-periodic-refresh), I guess there is a small chance of it being added to HTTP. However, it is a de-facto standard supported by all modern browsers (http://en.wikipedia.org/wiki/URL_redirection#Refresh_Meta_tag_and_HTTP_refresh_header), which means we must relate to it. Changing the way it's handled too much should be done carefully.

Getting back to the point: if I interpret bz correctly, we'll go for a solution where we ignore Refresh headers for non-2xx responses. I guess this should also apply to refresh in a meta tag, indicating a patch similar to the one in comment #14. I'll come up with a better patch later today.

I do think, however, that it would be wise to rethink our handling of non-2xx responses in general. The code falls back to ProcessNormal() on failures for all types of responses, which means that anyone able to trigger an error in the intended processing will make FF process the content as if it was retrieved with a "200 Ok" response. Doesn't feel very secure to me, and I'm pretty sure this pattern is not intentional.
The pattern is intentional. If we can't handle the response, at least show the user the response body. I don't see what's insecure about it. Then again, I also don't think this bug is a security problem; I just think we should align behavior with other UAs better on principle. Oh, and this needs a test, of course.
(In reply to comment #22)
> The pattern is intentional. If we can't handle the response, at least show the
> user the response body. I don't see what's insecure about it.

Let's not waste too much energy arguing about this. :) If I was coding in the context of ProcessNormal() I would (unconsciously, at least) assume things are normal and quite probably not take into account the fact that I may be called as a fallback because something already failed. That's what IMHO makes this pattern (reusing the function) insecure, and that's also the reason for this bug.
Attached patch Mochitest v1 (obsolete) — — Splinter Review
Simple mochitest for the exact issue seen in this bug, i.e. checking whether a JS: url in a Refresh header is executed on a 302 response when the Location header is bad. Applying the patch from comment #14 makes the test pass; the patch must be extended with some reasonable error feedback to be generally useful, though...

I would like to extend the test to also cover 2xx and 4xx responses, but I don't know enough about mochitests at the moment to emulate more than one response (using the ^headers^ file). If anybody could enlighten me on this topic, I'd be grateful.
Attachment #348744 - Flags: review?(bzbarsky)
https://developer.mozilla.org/en/Mochitest#How_do_I_write_tests_that_check_header_values.2c_method_types.2c_etc._of_HTTP_requests.3f This assumes you're fine with passing state via the query string or similar; if you're not, you want bug 463327 to preserve state across requests. I'll likely be landing that once the current beta cycle ends, so if that's a restriction it shouldn't be around for very long.
(In reply to comment #25)
> https://developer.mozilla.org/en/Mochitest#How_do_I_write_tests_that_check_header_values.2c_method_types.2c_etc._of_HTTP_requests.3f
>
> This assumes you're fine with passing state via the query string or similar; if
> you're not, you want bug 463327 to preserve state across requests. I'll likely
> be landing that once the current beta cycle ends, so if that's a restriction it
> shouldn't be around for very long.

Thanks a bundle! I've sent you a followup email...
Since this is still in progress will have to make the next train.
Flags: blocking1.9.0.5+ → blocking1.9.0.6+
Attached patch Proposed patch and mochitest — — Splinter Review
Suggested patch and test case. Many thanks to biesi for constructive hints!

The current mochitest only handles the particular case reported here. I crashed into bug #444165 when trying to create a more extensive test, so I had to fall back to this simple one. When bug #444165 is fixed I can come back with a better test if necessary.

Not sure who should review this; bz has been active so I try him first... :) In particular, I think the error messages can be improved.
Attachment #347995 - Attachment is obsolete: true
Attachment #348744 - Attachment is obsolete: true
Attachment #349165 - Flags: review?(bzbarsky)
Attachment #348744 - Flags: review?(bzbarsky)
OK, so after thinking about this, should we really make this change? What if a host uses a 404 page to tell the user that something is now gone, but also uses a refresh to redirect them to their homepage or something?
E.g. http://googlewebmastercentral.blogspot.com/2008/08/now-that-weve-bid-farewell-to-soft-404s.html The Yahoo! example from there uses a <meta> instead of a header, but I don't think we should treat those differently from each other.
A variation of this solution could be to block refresh for the 3xx responses only. Or another way to deal with this particular issue might be to disallow JS in the refresh. But I bet there is some site out there which depends on this as well... :)
So, how do we move forward with this one?
I think that since the setup in comment 29 doesn't work in any other browser we shouldn't worry about it too much. But Christian is the guy who knows this code best, so it's his call.
Attachment #349165 - Flags: review?(bzbarsky) → review?(cbiesinger)
Comment on attachment 349165 [details] [diff] [review] Proposed patch and mochitest Changed reviewer
Anything else I can do here to move this forward...?
biesi: Do you have time to review this?
Whiteboard: [sg:high] → [sg:high][needs r=biesi]
Flags: blocking1.9.2?
Flags: blocking1.9.1?
Flags: blocking1.9.0.7+
Flags: blocking1.9.0.6+
So Bjarne, could you test what browsers do with a 404 page with a valid Refresh: header? Preferably one that's same-origin with the 404 page.
(In reply to comment #37)
> So Bjarne, could you test what browsers do with a 404 page with a valid
> Refresh: header? Preferably one that's same-origin with the 404 page.

I didn't state it explicitly in comment #19, but the url in the Refresh header is valid (it loaded a page containing the word "bad"). Suboptimal use of the word "bad" in comment #19... my fault. Anyway, the test setup is described in comment #16. Comment #19 is the result of bz's request in comment #18 to check whether JS and non-JS urls are treated differently (it turned out they weren't).

IM(H)O, the fundamental issue here is not the content of the Refresh header, but the fact that FF processes it for non-2xx responses. The reason for FF behaving this way is that ProcessResponse() calls ProcessNormal() if anything (note: anything) unexpected happens. The original bug report is one (note: only one) example of this.

In the 4xx case, FF may be excused (ref your comment #29 and comment #30). The patch is easily modified to include or exclude 4xx responses. In the 3xx case, a bad Location header (which can be injected via the request, as demonstrated in the original bug report) makes FF use the Refresh header (which can also be injected via the request, as demonstrated in the original bug report). Although obscure, this may be exploited, in particular in light of the first paragraph of comment #16. In any case I see no reason why FF should be allowed to fall back to using a Refresh header at all if it fails to handle a 3xx response (see also comment #20).

Now... in order to move forward on this one, someone empowered to do so should decide whether FF should fall back to using a Refresh header, in which cases this should happen, and possibly whether to distinguish between JS and non-JS refresh targets. Then we can implement a final solution. (Not sure how to distinguish between JS and non-JS targets, but it's probably possible.) Background data for the decision can be found in comment #16 and comment #19.
The currently suggested patch prevents FF from using the Refresh header in all non-2xx cases, but it only handles Refresh headers in the http response (as opposed to meta tags in the document) and does not distinguish between JS and non-JS targets.

Other things which may make sense to test in this context include: 3xx with no Location header (as opposed to a bad one), how other UAs behave when the refresh is a meta tag (as opposed to a response header), and how other UAs handle 1xx and 5xx responses containing a Refresh header (2xx, 3xx and 4xx have been tested).
Flags: blocking1.9.1? → blocking1.9.1+
Priority: -- → P1
Target Milestone: --- → mozilla1.9.1
So a couple of comments here. First of all, isn't NYTimes hosed no matter what we do here? If you can insert headers, what's to stop you from inserting a body with a <script> as well? Just use %0A%0A and go. Second, what scares me more than the refresh header here is the fact that we allow javascript urls to be used. Is there a reason to?
(In reply to comment #39)
> First of all, isn't NYTimes hosed no matter what we do here? If you can insert
> headers, what's to stop you from inserting a body with a <script> as well? Just
> use %0A%0A and go.

Good point. I don't know whether this will work in practice, but it sounds plausible. I guess it depends on the degree of scanning/parsing done on the input (i.e. rejecting input containing <script> and similar).

> Second, what scares me more than the refresh header here is the fact that we
> allow javascript urls to be used.

I don't disagree; see also comment #31.

> Is there a reason to?

I don't have enough experience with Mozilla to come up with hard facts about this. But if I had to guess, I'd say it's unintentional and an effect of processing this header exactly the same way as the refresh meta tag. Just my 2c.

What we need to move on with this defect is a policy decision. As stated above, I'm not the right person to make this decision so, please, anyone...
bz: can we disallow javascript-uris in refresh headers? As I'm fairly certain we do with location headers.
That could be done, yes. Does IE run javascript: URIs in <meta http-equiv="refresh">? If not, I'm happy to kill them off. Bjarne, I'm not sure anyone really wants to be owning necko right this second, which is part of the reason the policy decision isn't quite happening....
> That could be done, yes. Does IE run javascript: URIs in <meta
> http-equiv="refresh">? If not, I'm happy to kill them off.

I don't know what IE does with the refresh meta tag and I won't be able to check it for a day or two. Feel free. (And I guess Safari and Opera should be checked while we're at it...)

> Bjarne, I'm not sure anyone really wants to be owning necko right this second,
> which is part of the reason the policy decision isn't quite happening....

Well, if we settle on killing off JS URIs for refresh in general, Necko wouldn't be involved at all, would it? :) So that's nice. In my book this is more of a product decision, though, and not connected to any particular module.

For this reason, my choice would be to make FF behave more like IE and block all processing of a refresh received as an http header for 3xx responses, and then possibly move on to kill off JS URIs in refresh headers in general (depending on how IE handles this).
I think Jason is technically the module owner for Necko now? Though I'm not sure if he's comfortable making policy decisions yet.

I'm not really sure I see the reason why we wouldn't want to allow Refresh headers together with a 3xx response. Unless there is a security issue with doing so, but so far it seems like there isn't (in the header-injection case here, inserting a Refresh is just one of many, many evil things an attacker can do, so we're not really helping anyone). I can see use cases for supporting Refresh for a non-200 response though, such as redirecting to the homepage after 10 seconds for a 404, or attempting to reload the URI after a 5xx.

So I think this bug, as filed, should be a WONTFIX. We should instead file a bug on the fact that we allow javascript URIs in refresh headers and fix that.

Also, has anyone contacted NYTimes.com and asked them to fix this issue? Seems like they have a cross-browser XSS hole here.
(In reply to comment #44)
> I think Jason is technically the module owner for necko now?

Uh, really? It's not listed on http://www.mozilla.org/owners.html yet. Typically people earn the module owner role, no? (But that's probably off-topic for this bug.)
> So I think this bug, as filed, should be a WONTFIX. We should instead file a
> bug on allowing javascript URIs in refresh headers and fix that.

Disallowing JS URIs for refresh tags in general is probably a good thing, and I support it fully. (And it will take care of this particular defect.) Would be interesting to know how MSIE handles it, though.

Off-topic: IMHO, the way FF handles responses in ProcessResponse() is likely to cause more issues like this one (see also the last paragraph of comment #21 and comment #23). Whenever Necko gets an owner, I'd suggest rethinking the code pattern to only process content based on explicit decisions in the code (as opposed to processing content as a fallback action because something/anything went wrong). But this is clearly another discussion. :)
Attachment #349165 - Flags: review?(cbiesinger) → review-
Comment on attachment 349165 [details] [diff] [review] Proposed patch and mochitest I don't think we should take this patch. For the reasons outlined in the third paragraph of comment 44.
So IE does *not* support refresh to javascript: URIs, neither in a Refresh: header nor in a <meta http-equiv="refresh">. So I think we should block that. We probably want to block data: as well, since that would also run with the principal of the original page.
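The check argued for here can be sketched as follows; the function name and the exact Refresh-value parsing are illustrative assumptions, not the eventual Mozilla implementation:

```python
from urllib.parse import urlsplit

# Schemes that would run with the principal of the refreshed page and
# should therefore be refused as refresh targets.
BLOCKED_SCHEMES = {"javascript", "data"}

def refresh_target_allowed(value):
    """value is a Refresh value like '0; url=javascript:alert(1)'."""
    _, _, url = value.partition(";")
    url = url.strip()
    if url.lower().startswith("url="):
        url = url[4:].strip()
    scheme = urlsplit(url).scheme.lower()
    return scheme not in BLOCKED_SCHEMES

print(refresh_target_allowed("0; url=javascript:alert(document.cookie)"))
print(refresh_target_allowed("10; url=http://example.com/"))
```

The same predicate would apply to both the Refresh header and the meta-tag form, since the two share one processing path.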
OK. Do we just want to block that in the nsIRefreshURI impl in the docshell? That would cover all the different refresh attempts.... The other option, of course, would be the content sink.
I don't care much, but nsIRefreshURI sounds like the safest solution.
Lowering this to P2 as fixing this does not seem very risky.
Priority: P1 → P2
Whiteboard: [sg:high][needs r=biesi] → [sg:high]
Filed bug 475636 on fixing redirects to not allow javascript URIs. So I propose that we close this bug as WONTFIX, unless someone wants to make sure that we prioritize the Location header over the Refresh header for 3xx responses? In any case this bug is not a blocker.
Flags: wanted1.9.0.x+
Flags: blocking1.9.2?
Flags: blocking1.9.2-
Flags: blocking1.9.1-
Flags: blocking1.9.1+
Flags: blocking1.9.0.7+
The problem was not that we didn't prioritize the Location header (though we should); it was that if the Location header was malformed we didn't report that error and instead went on with the rest of the document (including, but not limited to, the Refresh header). On a side note, it looks like the NYTimes has fixed their site.
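The prioritization mentioned here can be sketched as follows; the function name and fallback order are illustrative assumptions about the desired behavior, not actual Necko code:

```python
# Illustrative sketch: on a 3xx response carrying both headers, the
# Location redirect should win and the Refresh should never be
# consulted; Refresh is only meaningful once a response is processed
# normally.
def pick_redirect(status, headers):
    if 300 <= status <= 399 and "Location" in headers:
        return ("location", headers["Location"])
    if "Refresh" in headers:
        return ("refresh", headers["Refresh"])
    return None

hdrs = {"Location": "http://example.org/",
        "Refresh": "0; url=http://evil.example/"}

print(pick_redirect(302, hdrs))                      # Location wins on 3xx
print(pick_redirect(200, {"Refresh": "5; url=/next"}))
```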
Flags: wanted1.9.0.x+
Flags: wanted1.8.1.x+
Whiteboard: [sg:high] → [sg:moderate]
(In reply to comment #53)
> The problem was not that we didn't prioritize the Location header, though we
> should, it was that if the Location header was malformed we didn't report that
> error and instead went on with the rest of the document (including, but not
> limited to, the Refresh header).

This is a nice way to rephrase the point from comment #21, comment #23 and comment #46. The code pattern in ProcessResponse() can be described like this: "if something unexpected happens (e.g. a malformed Location header), just process the rest of the document, including Refresh headers". Receiving a 302 with a malformed (or possibly missing?) Location header is one special case causing problems with this pattern. There are bound to be other cases, and they will have to be found and handled individually.

The alternative approach is of course to change the code pattern to be rigid about what it accepts, and then allow certain well-understood exceptions. (Yes, I am aware that this is likely to cause lots of regressions in current real-web behaviour, but I think it's worth discussing.)
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → WONTFIX
Group: core-security