Closed Bug 655389 Opened 13 years ago Closed 13 years ago

CRLF Injection and the parsing of HTTP headers

Categories

(Core :: Networking, defect)

Version: 2.0 Branch
Platform: x86
OS: All
Type: defect
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
mozilla7
Tracking Status
firefox5 --- wontfix
firefox6 - wontfix
firefox7 + fixed
blocking1.9.2 --- .23+
status1.9.2 --- .23-fixed
status1.9.1 --- wontfix

People

(Reporter: ladamski, Assigned: jduell.mcbugs)

References

Details

(Whiteboard: [sg:moderate] [landed m-c 7/05] [nom'd by dveditz c#41][qa-])

Attachments

(4 files, 3 obsolete files)

My name is Ian Graham and I’m a Senior Security Engineer working with Citrix Online. Recently, we’ve been doing some research on CRLF injection (HTTP Response Splitting) attacks, and we found a behavior in Firefox that facilitates a specific type of attack. 

Consider a web application which is vulnerable to CRLF injection via an external redirect. In other words, the application is configured to redirect requests for http://www.vulnerable.com/webmail/* to http://webmail.vulnerable.com/*, or something similar. Provided that the application is not correctly filtering CRLF characters, an attacker could craft a malicious URI that, when visited, redirects the victim to a website of the attacker's choice (in this case, the venerable badgerbadgerbadger.com).

For example, the request:

http://www.vulnerable.com/webmail/path%0d%0aLocation:%20http://www.badgerbadgerbadger.com

will yield a response that looks something like:

HTTP/1.1 302 Found
Location: http://webmail.vulnerable.com/path
Location: http://www.badgerbadgerbadger.com
Content-Type: text/html
Content-Length: 156
Date: Thu, 05 May 2011 20:13:07 GMT

The URL has moved <a href="http://webmail.vulnerable.com/path
Location: http://www.badgerbadgerbadger.com">here</a>

And the client will automatically be redirected to http://www.badgerbadgerbadger.com. This is a pretty standard CRLF injection attack, but the interesting thing here is that it only works on Firefox (Tested on 4.0.1). I’ve tested with various versions of Chrome and IE, and they do not redirect the user to the malicious link. 

I’m not super familiar with the internals of Firefox, but I’m guessing that this is because Firefox parses headers in a slightly different way than IE or Chrome (i.e. in the case where there are multiple headers with the same name, IE/Chrome use the first header in the response, while Firefox apparently uses the last).

I don’t know if this particular behavior was intended, or if it’s just a side effect of Firefox’s implementation, and I’m not claiming that this is a Security Vulnerability per se. I just wanted to bring this behavior to the attention of the team, and point out that it facilitates a specific class of attacks.
I thought we did take the first instance of a header. Did we change that along the way or am I misremembering?
It does look like we have code that tries to prevent using a second Location: header (and a few others):
https://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/nsHttpHeaderArray.cpp#235

That should be called while we're parsing the response headers in nsHttpResponseHead.cpp
Would love a testcase here.
Boris, I'm not sure if this is what you're after, but here's a fairly easy way to reproduce the issue (This is almost identical to the attack described in one of the original HTTP Response splitting papers at http://dl.packetstormsecurity.net/papers/general/whitepaper_httpresponse.pdf): 

1. Download and install Tomcat 4.1.24
2. Create a redirect.jsp page in webapps/ROOT that sends a 302 redirect, including a parameter: 

<html>
    <%-- Echoes the unvalidated 'param' value straight into the Location header of the 302 --%>
    <%
        response.sendRedirect("/index.jsp?param=" + request.getParameter("param"));
    %>
</html>

3. Using the latest version of Firefox, submit a request for the page you just created, including an HTTP Header injection attack in the parameter: 

http://server:8080/redirect.jsp?param=%0d%0aLocation:%20http://www.badgerbadgerbadger.com

This will automatically send you to www.badgerbadgerbadger.com. 

4. Try the same attack using IE9 or Chrome; it does not work.
(In reply to comment #2)
> It does look like we have code that tries to prevent using a second
> Location: header (and a few others):
> https://mxr.mozilla.org/mozilla-central/source/netwerk/protocol/http/
> nsHttpHeaderArray.cpp#235
> 
> That should be called while we're parsing the response headers in
> nsHttpResponseHead.cpp

That code just makes sure that you can't _append_ to the header. Normally headers get merged by concatenating them with commas; for headers like Location that makes no sense.
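For illustration only (not from the patch): a list-valued header merges sensibly with commas, Location does not.

    Cache-Control: no-cache
    Cache-Control: no-store
        -> merged as:  Cache-Control: no-cache, no-store

    Location: http://webmail.vulnerable.com/path
    Location: http://www.badgerbadgerbadger.com
        -> a comma-merged value is not a valid URI, so merging makes no sense here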
test case: use 

   nc -l 8080 <locationlocation

and open browser to localhost:8080.  If you see google, we've taken the 2nd location header.  If it's my page on people.mozilla.org, then we've taken the 1st.

Firefox currently fails and uses the 2nd Location header.
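(Not the actual attachment--just my sketch of what a locationlocation file like that could contain: a canned 302 whose first Location points at the people.mozilla.org page and whose second points at Google.)

    HTTP/1.1 302 Found
    Location: http://people.mozilla.org/
    Location: http://www.google.com/
    Content-Length: 0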
Assignee: nobody → jduell.mcbugs
Status: NEW → ASSIGNED
> I thought we did take the first instance of a header. Did we change that 
> along the way?

Nope--since 2001 SetHeader() has replaced existing values, i.e. used the last header seen:

    http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/netwerk/protocol/http/src/nsHttpHeaderArray.cpp&rev=1.16

I looked at the Chrome source; when duplicate, non-mergeable headers are added, they store each one as a separate entry in the underlying array.  Since their (and our) GetHeader implementation returns the first entry found, the duplicates get ignored by most code (unless it uses visitResponseHeaders, in which case the dupes will show up during iteration). But they're also available if we want to implement things like "ignore all Content-Disposition headers if multiple are seen with differing values", etc. (which some of the HTTPbis work may ask us to do).
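(A rough sketch of that storage/lookup model--neither Chrome's nor Necko's actual code, types simplified:)

    #include <string>
    #include <vector>

    // Sketch only: duplicates are kept as separate entries; lookup returns
    // the first match, so later dupes are invisible to GetHeader() callers
    // but still reachable by a visitor that walks the whole array.
    struct Entry { std::string name; std::string value; };

    const std::string* GetFirstHeader(const std::vector<Entry>& headers,
                                      const std::string& name)
    {
        for (const Entry& e : headers)       // first match wins
            if (e.name == name)
                return &e.value;
        return nullptr;                      // header not present
    }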

Alas, we can't just change SetHeader to start adding dupe entries instead of replacing values, because we rely on that behavior in a lot of places.  In particular, nsIHttpChannel::SetResponseHeader() specifies that passing in 'false' for 'merge' results in replacing any existing value.

So what I've done in this patch is add another parameter to nsHttpHeaderArray::SetHeader, 'noreplace': if a value isn't merged, and noreplace is set, we do the chrome thing and add it as an additional entry.  The only place that calls SetHeader with that semantic right now is ParseHeaderLine, which AFAICT is used only when we're reading headers off the 'net (I don't *think* it's used when loading from cache, but I could be wrong: but it shouldn't matter?)
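(Sketch of the 'noreplace' semantics just described--not the actual diff; Entry, FindFirst and the rest are simplified stand-ins for the Necko types, and IsSingletonHeader is the existing predicate:)

    void SetHeader(std::vector<Entry>& headers, const std::string& name,
                   const std::string& value, bool merge, bool noreplace)
    {
        Entry* existing = FindFirst(headers, name);    // hypothetical lookup
        if (!existing) {
            headers.push_back({name, value});          // first occurrence
        } else if (merge && !IsSingletonHeader(name)) {
            existing->value += ", " + value;           // normal comma-merge
        } else if (noreplace) {
            headers.push_back({name, value});          // keep the dupe, Chrome-style
        } else {
            existing->value = value;                   // old behavior: last value wins
        }
    }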

Ah, crap--I think I see a problem with keeping dupe headers--in nsHttpResponseHead::UpdateHeaders() we iterate over a list of headers and call SetHeader with their values, in order to replace cache header values.  If we keep dupes, we'd wind up having the last header "win" in that case.

I'll change the patch to just drop the 2nd and later headers for incoming network traffic.
OK, here's a version that drops the dupes instead.

bz, feel free to reassign to a likely suspect (dveditz? biesi? honza?), and/or other folks feel free to volunteer to review if you feel competent.
Attachment #536325 - Attachment is obsolete: true
Attachment #536335 - Flags: review?(bzbarsky)
P.S. in the long run it'd still be nice to be able to detect whether duplicate headers were sent, but I figure the security hole needs fixing ASAP, hence the quick and dirty "drop" fix.
Time to land bug 597706 and enhance it to also disallow modification of the Location header.
Comment on attachment 536335 [details] [diff] [review]
Drop duplicate headers from net, instead of storing as dupes

This needs merging to the patch in bug 597706, and I'd like to see the result of the merge.
Attachment #536335 - Flags: review?(bzbarsky) → review-
Blocks: 646507
Jason, can you please suggest a sg: severity rating here per https://wiki.mozilla.org/Security_Severity_Ratings
Whiteboard: [sg:moderate]
This patch renames test_double_content_length.js to test_duplicate_headers.js, adds more tests for Content-Length, and adds tests for Location and Content-Disposition (which assumes we want to ignore all values when we see conflicting Content-Dispositions, rather than throwing an error or picking one).

The most interesting test is probably the one that shows that you can subvert the fix in bug 597706 by first passing in a blank Content-Length (which clears the header value), then the "evil" one.
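(Illustrative header sequence--my paraphrase of what that test sends, not its literal bytes: the blank value wipes the stored header, so the later "evil" value is treated as the only one seen.)

    Content-Length: 156
    Content-Length:
    Content-Length: 999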

(One of the redirection tests--the one that should succeed, test 7, leaks memory so I've disabled it.  Not sure if there's a real bug there or just a bug in my xpcshell code.  We have a test that redirects OK w/o leaking--test_redirect_passing--so I'm not too worried).

Also adds an e10s wrapper for the test, and makes some necko NS_ERROR constants visible to JS so xpcshell can test for them.
Attachment #536281 - Attachment is obsolete: true
Attachment #538665 - Flags: review?(mcmanus)
(In reply to comment #4)
> Boris, I'm not sure if this is what you're after, but here's a fairly easy
> way to reproduce the issue (This is almost identical to the attack described
> in one of the original HTTP Response splitting papers at
> http://dl.packetstormsecurity.net/papers/general/whitepaper_httpresponse.
> pdf): 
> 
> 1. Download and install Tomcat 4.1.24
> 2. Create a redirect.jsp page in webapps/ROOT that sends a 302 redirect,
> including a parameter: 
> 
> <html>
>     <%
>         response.sendRedirect("/index.jsp?param=" +
> request.getParameter("param"));
>     %>
> </html>
> ...

Just to clarify: that's a bug in Tomcat. Has it been raised already?
I'm unconvinced that using the first header field is any better than using the second.

The right thing to do here seems to be similar to what's being done with Content-Length: detect that this is an invalid message, and either drop the whole message or maybe just ignore the Location header field.

(accepting multiple identical header fields seems harmless, but it would be good to know whether it's necessary to be "nice").
(In reply to comment #16)
> Just to clarify: that's a bug in Tomcat. Has it been raised already?

Yes, that's correct. In that case Tomcat *should* be validating the redirect parameter and filtering newline characters. 4.1.24 is a fairly old version, and the bug was fixed in a subsequent release.
This patch fixes this bug, closes a flaw in our fix for bug 597706, and also fixes bug 480100.

Basically, this bug just requires that we give that same treatment to Location: as we do to Content-Length in bug 597706 (throw error if we see multiple different values).  But I noticed some issues with the approach there:

- I wanted to add support for ignoring all values when multiple conflicting values of a header are seen (Content-Disposition in particular, to support bug 480100--but others may follow?).

- For headers that we do allow to have multiple (non-mergeable) values, I wanted us to take the first value seen (for compat with other browsers).

- I noticed the sneaky "pass empty header to reset" attack mentioned in last comment

- I started thinking that we needed a different API for adding a header from the network, since they have different semantics than other calls to SetHeader.  We promise that nsIHttpChannel.SetResponseHeader() will clear or overwrite an existing value, and we also rely on that behavior in nsHttpResponseHead::UpdateHeaders, but that's not at all what we want when adding headers from the net.  We really want "merge, noreplace" for network headers, with some added checks for forbidden/ignored dupes.  So that's what I've implemented with the new SetHeaderFromNet() function, which is called only by nsHttpHeaderArray::ParseHeaderLine (are there any other places we add headers from the network?  I didn't find any).

- I needed to add the idea of "ignored" entries to support 1) ignoring multiple, conflicting Content-Disposition headers and 2) detecting whether a blank forbidden header was passed before a value appeared in a later header.  The latter is an edge case that's pretty minor, so if we prefer to always keep the first Content-Disposition seen (rather than ignore all values if a different value is later seen), we can rip out all the 'ignored' code.  But it's pretty well encapsulated (the only exception is PeekHeaderAt, which implicitly exposes ignored headers by returning NULL for them: but the call sites already check for NULL, so we're ok).

- I took out the mType field from nsHttpHeaderArray, since I don't think it's needed once the rest of this logic is here.

- I removed the nsHttpHeaderArray::Headers() method I misguidedly added a while ago, and switched HttpChannelParent to use the more standard nsIHttpHeaderVisitor interface (we really should have added a 'context' that's passed into each call to VisitHeader(): instead I've had to add a kludgy member variable to HttpChannelParent to collect all the headers to send to the child).

I still suspect that in the long run we'll want a data structure that keeps all headers seen, to support Firebug and any other developer plugins that wants to see what actually came in from the network.  But that's a more drastic re-write than this.

> I'm unconvinced that using the first header field is any better than using the
> second.  The right thing to do here seems to be similar to what's being done with
> Content-Length: detect that this is an invalid message, and either drop the
> whole message or maybe just ignore the Location header field.

Right. Another way of putting this is to ask whether we want to add more headers that are in IsSingletonHeader() to either IsSuspectDuplicateHeader() or IgnoreIfConflictingHeader().  I.e. which headers do we want to fail on if there are dupes, which do we want to ignore all values for, and which (if any) do we want to just take the first value of?  So far I'm being conservative here--we can land this patch and close the security issue here, then experiment with being stricter about other singleton headers.
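(To make that concrete, a rough sketch of the SetHeaderFromNet() decision tree as described above--not the literal patch; the Is*Header() predicates are the ones named here, while PeekHeader/IgnoreHeader are hypothetical helper names:)

    nsresult SetHeaderFromNet(nsHttpAtom header, const nsACString &value)
    {
        const char *existing = PeekHeader(header);
        if (!existing)
            return SetHeader(header, value);            // first occurrence: store it

        if (!IsSingletonHeader(header))
            return SetHeader(header, value, PR_TRUE);   // list header: comma-merge

        if (!value.Equals(existing)) {
            if (IsSuspectDuplicateHeader(header))
                return NS_ERROR_CORRUPTED_CONTENT;      // e.g. conflicting Content-Length: fail
            if (IgnoreIfConflictingHeader(header))
                return IgnoreHeader(header);            // drop all values for this header
        }
        return NS_OK;                                   // dupe of a singleton: keep the first value
    }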

> (accepting multiple identical header fields seems harmless, but it would be good
> to know whether it's necessary to be "nice").

Another fine question.  I can change the code to make any dupe--even with the same value--have the same effect as a conflicting value.
Attachment #538712 - Flags: review?(bzbarsky)
(In reply to comment #19)
> I still suspect that in the long run we'll want a data structure that keeps
> all headers seen, to support Firebug and any other developer plugins that
> wants to see what actually came in from the network.

+1 - this is a long-standing and annoying limitation of firebug, livehttpheaders et al.
The more I think about it, the more it seems like ignoring all values for Content-Disposition when there are conflicting ones is a security risk (if you can inject a C-D header as an attacker, you can override the server's directive to treat content as an attachment, and force it to get treated as inline: bad).

So I'm thinking I'll make multiple conflicting values of C-D an error (like content-length), remove all the "ignore" infrastructure here (since C-D was the only header that uses it currently), and assume that we're ok with either barfing on conflicting headers, or are ok taking the first.

Objections?
(In reply to comment #21)
> The more I think about it, the more it seems like ignoring all values for
> Content-Disposition when there are conflicting ones is a security risk (if
> you can inject a C-D header as an attacker, you can override the server's
> directive to treat content as an attachment, and force it to get treated as
> inline: bad).
> 
> So I'm thinking I'll make multiple conflicting values of C-D an error (like
> content-length), remove all the "ignore" infrastructure here (since C-D was
> the only header that uses it currently), and assume that we're ok with
> either barfing on conflicting headers, or are ok taking the first.
> 
> Objections?

Content-Disposition or Location?
Comment on attachment 538665 [details] [diff] [review]
Improve xpcshell tests coverage of duplicate headers

these tests are significantly bulked up - that's great... but based on comment 21 (which I agree with), I'll drop the r? and wait for the updated version. Let me know if I misunderstood that.
Attachment #538665 - Flags: review?(mcmanus) → review-
> Content-Disposition or Location?

In the current patch, multiple Location throws an error, and C-D ignores all values.  I'm proposing to make 2 C-Ds also throw an error.
(In reply to comment #24)
> > Content-Disposition or Location?
> 
> In the current patch, multiple Location throws an error, and C-D ignores all
> values. I'm proposing to make 2 C-Ds also throw an error.

How does throwing an error affect the page behavior? For C-D I believe the right thing is to ignore the header, but to process the remainder of the message...
> How does throwing an error affect the page behavior?

Channel is cancelled.  For document-level loads, user sees a "Malformed content" error message.

> For C-D I believe the right thing is to ignore the header, but to
> process the remainder of the message

This is what I'm asserting is unsafe, since if we ignore all C-D headers when >1 is seen, a CRLF attack can cause the browser to treat content the server specified as "attachment" as "inline" (since inline is the default semantic when no C-D is present).  Sites (like Bugzilla) that allow users to upload arbitrary files often want to allow the files to be downloaded, but with attachment semantics, not as inline, in case the file contains some attack vector when rendered.   Make sense?
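(Concrete illustration, mine rather than anything in the patch: the server sends

    Content-Disposition: attachment; filename="report.html"

and the attacker injects a second header

    Content-Disposition: inline

If we respond to the conflict by ignoring both, the default is inline rendering--exactly what the attacker wants.)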

Perhaps I'm being too paranoid?

(Slightly offtopic:  I suspect this is also why we've generally erred toward treating most malformed C-D header values as == attachment: it's the safer semantic).
(In reply to comment #26)
> > How does throwing an error affect the page behavior?
> 
> Channel is cancelled.  For document-level loads, user sees a "Malformed
> content" error message.
> 
> > For C-D I believe the right thing is to ignore the header, but to
> > process the remainder of the message
> 
> This is what I'm asserting is unsafe, since if we ignore all C-D headers when
> >1 is seen, a CRLF attack can cause the browser to treat content the server
> specified as "attachment" as "inline" (since inline is the default semantic
> when no C-D is present).  Sites (like Bugzilla) that allow users to upload
> arbitrary files often want to allow the files to be downloaded, but with
> attachment semantics, not as inline, in case the file contains some attack
> vector when rendered.   Make sense?

It's a bit drastic. But I'm certainly not going to argue against draconian error handling :-) Give it a try!

> Perhaps I'm being too paranoid?
> 
> (Slightly offtopic:  I suspect this is also why we've generally erred toward
> treating most malformed C-D header values as == attachment: it's the safer
> semantic).

Yes, I heard that but wasn't totally convinced; downloading instead of displaying inline might just carry different kinds of risks...
> Perhaps I'm being too paranoid?

You're not.  I've mentioned this to Julian before, as he notes.  ;)

> downloading instead of displaying inline might just carry different kinds of risks...

It shifts risk away from the _server_, period.  Risk to the user becomes a different sort of risk.
Comment on attachment 538712 [details] [diff] [review]
Fix v1: fix this and some other header bugs

>+  , mVisitingHeadersTmp(0)

nsnull?

And maybe call it mHeadersBeingSyncedToChild, since that's what it's used for?

>+HttpChannelParent::VisitHeader(const nsACString &header, const nsACString &value)
>+{
>+  NS_ABORT_IF_FALSE(mVisitingHeadersTmp,
>+                    "mVisitingHeadersTmp must be set at this point!");

You can't guarantee that; anyone could QI the channel to the visitor interface and call this method.  Just NS_ENSURE_STATE instead.

>+  // used while visiting headers, to send them to child
>+  RequestHeaderTuples *mVisitingHeadersTmp;

Document that it's null otherwise?

>+                    ((entry.header == nsHttp::Proxy_Authorization) ||
>+                    (entry.header == nsHttp::Proxy_Connection))))

The == tests here are overparenthesized, and the indent on the second line is wrong: it should be indented one more space.

Flatten() needs a mode in which it shows ignored headers too, I think (possibly marking them as ignored?).  Or at least the logging in nsHttpTransaction::HandleContentStart needs to be fixed to more accurately log what's going on...  Ideally it would log enough for us to tell exactly what happened.

This is still ignoring multiple content-disposition headers.  I believe that's not what we want here.

r=me with those issues fixed.
Attachment #538712 - Flags: review?(bzbarsky) → review+
Blocks: 669259
http://hg.mozilla.org/integration/mozilla-inbound/rev/a733e437f95a
http://hg.mozilla.org/integration/mozilla-inbound/rev/cc405052ac0a

We now fail on Content-Disposition conflicts too.

All comments fixed, except 

> Flatten() needs a mode in which it shows ignored headers too

As mentioned in comment 21 I've removed all notion of "ignored" headers.

> the logging in nsHttpTransaction::HandleContentStart needs to be fixed 
> to more accurately log what's going on.

We're back to where we were:  the logging is AFAICT accurate except for 1) we ignore headers with blank values (they don't show up in the flattened log); 2) we merge headers as they come in, so Firebug, etc. sees the merged header instead of the separate headers; and 3) we don't show the 2nd, 3rd, etc. values of non-mergeable headers (this is the same as before this patch, except we used to show only the last, now we show the first).  I've opened up bug 669259 for those issues (though it's not high on my list right now).
Summary: CRLF Injection and the parsing of HTTP Location headers → CRLF Injection and the parsing of HTTP headers
Whiteboard: [sg:moderate] → [sg:moderate][inbound]
http://hg.mozilla.org/mozilla-central/rev/a733e437f95a
http://hg.mozilla.org/mozilla-central/rev/cc405052ac0a
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Whiteboard: [sg:moderate][inbound] → [sg:moderate]
I honestly care less about what Firebug logs than our internal log.  We use that all the time for debugging, and having it be inaccurate is a serious pain.

Can we just store all the info in some separate data structure that we use for just that log call but don't expose to necko consumers for now?  This is really really important....
And to be clear, if I say "r=me with those issues fixed", that doesn't mean "r=me even if you don't fix them"....  :(
Or is the point that we now use the first Content-Disposition we see?  If so, that's probably ok for now.
> r=me even if you don't fix them...

My sincere apologies if what I committed did not meet the intent of your +r.

> we now use the first Content-Disposition we see?

For headers in IsSuspectDuplicateHeader() (including C-Dispo) we now fail if there are conflicting dupes.  That was the plan as of comment 24.  For other singleton headers that are not in IsSuspectDuplicateHeader() we now return the first header seen instead of the last.

> the logging in nsHttpTransaction::HandleContentStart needs to be fixed to more
> accurately log what's going on...  Ideally it would log enough for us to tell
> exactly what happened.

This patch didn't change our (imperfect) header logging in any way, except that for duplicate, non-suspect singleton headers, we now log the 1st header instead of the last one.  Otherwise it has the same flaws it had before (doesn't log blank headers, or show dupes, and shows merged headers instead of what came in on the wire).  It seemed/seems like feature creep to try to fix all that in this bug, which is also time sensitive.  My apologies for getting it done last-minute in the release cycle, so we didn't have time to discuss that.  Anyway, let's move the discussion of how to fix the logging to bug 669259.  I have some ideas.
bz: Do we also need this patch to fix this bug?  I.e. do we want to also check the trailers of chunked HTTP responses for suspect singleton headers, and fail the transaction if we see them?
Attachment #544117 - Flags: review?(bzbarsky)
> we now fail if there are conflicting dupes.

Ah, ok.  Could we just log _that_ (ideally log the header names and the two values, plus enough info to tie this back to the channel in question)?  That would be enough to tell when a load fails because of this issue.  A followup bug is fine for this.  Or bug 669259, either way.

For trailers.... do we use them for anything?  If not, then it seems like we don't care that much how they're parsed.
Attachment #544117 - Attachment is obsolete: true
Attachment #544117 - Flags: review?(bzbarsky)
Yes, one of the flaws in the new NS_ERROR_CORRUPTED_CONTENT logic is that we're not logging the request fully.  Will fix in 669259.
(In reply to comment #36)
> Created attachment 544117 [details] [diff] [review] [review]
> Also check chunked trailers for suspect dupes.
> 
> bz: Do we also need this patch to fix this bug?  I.e. do we want to also
> check the trailers of chunked HTTP responses for suspect singleton headers,
> and fail the transaction if we see them?

Trailers aren't relevant to message delimitation; no need to worry about them.
(In reply to comment #37)

> For trailers.... do we use them for anything?  If not, then it seems like we
> don't care that much how they're parsed.

Trailers are not used for anything. I have a patch that will use Content-MD5 out of them, but that's it.
This fix landed in mozilla7. Is it worth landing in Firefox 6 beta or for 3.6.20? Since we don't know of a specific exploitable web app it's probably safest not to rush this in.
blocking1.9.2: --- → ?
Target Milestone: --- → mozilla7
Whiteboard: [sg:moderate] → [sg:moderate] [landed m-c 7/05] [nom'd by dveditz c#41]
Given the sg: rating and the previous comment we won't be rushing this into 6.
(In reply to comment #41)
> This fix landed in mozilla7. Is it worth landing in Firefox 6 beta or for
> 3.6.20? Since we don't know of a specific exploitable web app it's probably
> safest not to rush this in.

I can't provide any details due to confidentiality requirements, but I'm aware of at least one web application server that is exploitable. It's not nearly as widely deployed as Apache or Tomcat, but it is fairly popular, and the vulnerability is present in the latest releases.
blocking1.9.2: ? → needed
Since there are exploitable servers out there and we're going to announce this fix when we release Firefox 7, we need it fixed in the 3.6.21 release as well.
blocking1.9.2: needed → .21+
FWIW users are complaining about fallout from this (bug 681140).
Depends on: 681140
Daniel,

This bug depends on bug 597706, so if we want it on 1.9.2 we'll need to merge that too.  The code changes involved are not small (though not horribly complicated AFAICT: but it will take a non-trivial amount of work).  Let me know if we still want to proceed (and maybe mark bug 597706 blocking1.9.2 too?)
Blocks: 681140
Depends on: 597706
No longer depends on: 681140
No longer blocks: 681140
Depends on: 681140
OK, here's a 1.9.2 patch.  It has the fixes for bug 597706 as well (in many ways this was a fixup patch for that bug, and some of its changes were nullified by this bug, so the combined patch is actually smaller than doing them separately).

I've tested locally (linux64).  I tried to push to try, but too much build infrastructure seems to have changed--I think any 1.9.2 push will fail there now.  I assume I should just land this and backout as needed.

I don't think this needs review.

Let me know if I should land this--I haven't gotten an answer about whether we're OK incorporating the 597706 changes.
Attachment #561009 - Flags: approval1.9.2.23?
Comment on attachment 561009 [details] [diff] [review]
1.9.2 version of patch: includes fixes for bug 597706 as well

Yes, we want this...please land on releases/mozilla-1.9.2 ASAP! Thanks for the patch.
Attachment #561009 - Flags: approval1.9.2.23? → approval1.9.2.23+
Landed, and I'm watching the tree (which seems to be ok with perma-red for linux64 and win-debug?)

 https://hg.mozilla.org/releases/mozilla-1.9.2/rev/398ffaf81015

I don't have the bugzilla-Fu to mark this .23-fixed so I assume someone will do it for me.
Yikes--our automated build infrastructure seems fragile for 1.9.2 builds.  I don't see any actual errors after I pushed, but it's not comforting that many of the builds don't even have logs, so I can't see if oranges are real errors or not:

  https://tbpl.mozilla.org/?tree=Firefox3.6&rev=398ffaf81015

Let me know if there's anything else for me to do here.
Thanks so much! I'll look into the permared.
And I think we're good from your end, I'll hunt down RelEng.
Can we fix this in a more compatibility-friendly way? Denial of access to the site when they have a duplicated header seems...extreme. Can we poke a hole for when the domain is the same or something similar? I'm very worried about the user impact of this as the error message doesn't give them any indication what is going on and that the problem is server-side.
We only barf at the moment on three headers: Content-Length, Location, and Content-Disposition, and the values differ.   I'm not clear on how we could safely display content with different lengths, or a redirect with two different locations--I suppose we could just take the first (or the smallest length?  Or hosts that have a similar domain?)

Content-Disposition: we could accept multiple headers provided they have the same semantics (i.e. both attachment, or both inline).  I wrote a patch for it, but it was badly received by the peanut gallery, err, my esteemed colleagues:

  https://bugzilla.mozilla.org/show_bug.cgi?id=683250

FWIW we've gotten fairly few complaints about the dupe header issue so far.  But then again, we haven't shipped it in a release yet...
s/and the values differ/when the values differ/

I.e. we're ok if multiple Content-Lengths appear with the same length.
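(For anyone testing the distinction, my own examples rather than the test suite's: a response containing

    Content-Length: 42
    Content-Length: 42

is still accepted, while

    Content-Length: 42
    Content-Length: 17

gets rejected with the corrupted-content error.)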
(In reply to Jason Duell (:jduell) from comment #55)
> I'm not clear on how we could
> safely display content with different lengths, or a redirect with two
> different locations--I suppose we could just take the first (or the smallest
> length?  Or hosts that has similar domain?)

I don't see how any of those things address the security implications, sadly. If we could come up with some kind of a stand-in for safeness (such as the equiv check on content-length) then I'm fine with letting it go by.
(In reply to Christian Legnitto [:LegNeato] from comment #54)
> Can we fix this in a more compatibility-friendly way? Denial of access to
> the site when they have a duplicated header seems...extreme. Can we poke a
> hole for when the domain is the same or something similar? I'm very worried
> about the user impact of this as the error message doesn't give them any
> indication what is going on and that the problem is server-side.

It isn't just a duplicated header. It is the website telling us that a critical security-sensitive value has two different values. It is fine to allow exact matches for Location, but anything other than that is unsafe, AFAICT.

I commented in bug 681140 about better documenting this to help website authors fix their code. If we think the error page itself should be reworded, then let's discuss that in bug 681140 too.
qa?: is there a minimized testcase QA can use to verify this fix?
Whiteboard: [sg:moderate] [landed m-c 7/05] [nom'd by dveditz c#41] → [sg:moderate] [landed m-c 7/05] [nom'd by dveditz c#41][qa?]
netwerk/test/unit/test_duplicate_headers.js is about as minimized as I can think of, as it is part of every xpcshell run we do.
I'm going to call this qa- based on xpcshell coverage.
Whiteboard: [sg:moderate] [landed m-c 7/05] [nom'd by dveditz c#41][qa?] → [sg:moderate] [landed m-c 7/05] [nom'd by dveditz c#41][qa-]
FYI - Chromium's William Chan says that Chrome is intending to implement the same restrictions as Firefox in Chrome 16 or 17. (The Chrome bug is security-restricted.)
Group: core-security