Closed Bug 380418 (CVE-2009-0357) Opened 17 years ago Closed 16 years ago

XMLHttpRequest allows reading HTTPOnly cookies

Categories

(Core :: Networking: Cookies, defect, P1)

Tracking

VERIFIED FIXED
mozilla1.9.1b2

People

(Reporter: jwkbugzilla, Assigned: bjarne)

References

Details

(Keywords: fixed1.8.1.21, verified1.9.0.6, verified1.9.1, Whiteboard: [sg:want P2])

Attachments

(7 files, 6 obsolete files)

10.24 KB, patch (dveditz: review-)
12.27 KB, patch
5.25 KB, application/x-javascript
7.33 KB, patch
6.17 KB, patch (sicking: review+, superreview+)
2.51 KB, patch
2.73 KB, patch
XMLHttpRequest subverts the idea of HTTPOnly cookies since it allows sending a request and reading out the Set-Cookie header of the response - even if it has the HTTPOnly flag set. There are quite a few web applications out there that will send out a Set-Cookie header with the session cookie for every request just in case, thus rendering the HTTPOnly flag ineffective.

To test you can go to http://www.yabbforum.com/community/ and use this bookmarklet:

javascript:var req = new XMLHttpRequest();req.open("GET", "/community/", false);req.send(null);alert(req.getAllResponseHeaders());alert(req.getResponseHeader("Set-Cookie"))

You will see a list of all response headers including Set-Cookie, and then the result of req.getResponseHeader("Set-Cookie"). I think Set-Cookie headers should be excluded from the list, and req.getResponseHeader("Set-Cookie") should always return null. This protection is only necessary for unprivileged scripts; the implementation can be the same as in setRequestHeader.
Good catch, does this work on other browsers?
It works in Opera as well - but on the other hand, I don't have an Opera build with HTTPOnly support yet. It also works in IE6. However, according to http://ha.ckers.org/blog/20070511/bluehat-errata/ this is fixed in IE7 (I couldn't verify this).
Flags: blocking1.9?
Flags: blocking1.8.1.5?
Flags: blocking1.8.1.4?
Jesse, bug 178993 will only land after 1.8.1.4 (if approved), is there a point in having blocking1.8.1.4 here?
Flags: blocking1.8.1.4?
Moved the code used to verify request headers into a separate function so that it can be used for response headers as well. There was one functional change: instead of throwing on security manager errors I simply fall back to the "unprivileged script" case, which should be a sane thing to do here. The other relevant test case for this patch is http://lxr.mozilla.org/mozilla/source/content/base/test/test_bug308484.html
Assignee: xml → trev.moz
Status: NEW → ASSIGNED
Attachment #264674 - Flags: superreview?(sayrer)
Attachment #264674 - Flags: review?(jst)
Flags: blocking1.9? → blocking1.9+
Comment on attachment 264674 [details] [diff] [review]
Proposed patch and mochitest testcase

r=jst
Attachment #264674 - Flags: review?(jst) → review+
Comment on attachment 264674 [details] [diff] [review]
Proposed patch and mochitest testcase


> 
>+// Request headers not allowed to be set by unprivileged scripts
>+const char *kInvalidRequestHeaders[] = {
>+  "host", "content-length", "transfer-encoding", "via", "upgrade", nsnull
>+};

note: I'm not an sr. However, this list needs to match the list given by the W3C:
http://www.w3.org/TR/XMLHttpRequest/#dfn-setrequestheader
Attachment #264674 - Flags: superreview?(sayrer) → superreview-
Robert, this list goes back to bug 302263 and the discussion there mentions the WHATWG spec (that is on W3C now). However, some of the forbidden headers actually seem harmless.
(In reply to comment #7)
> Robert, this list goes back to bug 302263 and the discussion there mentions the
> WHATWG spec (that is on W3C now). However, some of the forbidden headers
> actually seem harmless.
> 

Given that discussion, I agree that changing the list doesn't need to happen in this bug. And you need a real sr. :)
Attachment #264674 - Flags: superreview- → superreview?(jonas)
Flags: blocking1.8.1.5? → blocking1.8.1.5+
Blocks: 178993
Comment on attachment 264674 [details] [diff] [review]
Proposed patch and mochitest testcase

I'd prefer that the nsHeaderVisitor ctor took a |const char **aInvalidHeaders| argument rather than having a separate setter function. Otherwise it's very easy to forget to call the setter, which is hard to notice but bad security-wise.

sr=me with that
Attachment #264674 - Flags: superreview?(jonas) → superreview+
So much for the "no-argument constructor" COM policy - but yes, fine with me, it makes the code much cleaner.

Requesting approval1.8.1.5 for this patch - bug 178993 is landing on 1.8.1 branch so this should as well.
Attachment #264674 - Attachment is obsolete: true
Attachment #270288 - Flags: approval1.8.1.5?
Whiteboard: [checkin needed]
Comment on attachment 270288 [details] [diff] [review]
Patch to be checked in

This patch makes me nervous -- it looks like you stop exposing any cookies, not just the httponly ones mentioned in the summary. I wish this had landed for 1.9a6 so we could get some real-world testing on it, and failing that I don't want to approve it until it's gotten as much trunk nightly baking as possible.

Not minusing, but clearing the request until it's gotten some baking. Please re-request branch approval after landing on trunk.
Attachment #270288 - Flags: approval1.8.1.5?
The visitor isn't an XPCOM object, which makes it ok IMHO. In general we're moving away from restricting ourselves to all the rules of XPCOM except when it makes sense.

The ctor for all DOM nodes takes arguments.

The case when you don't want ctor arguments is when the object is being instantiated using XPCOM through do_CreateInstance.
Keywords: checkin-needed
Whiteboard: [checkin needed]
Without trunk landing I'm too nervous about breaking AJAX sites that rely on the ability to read non-httponly cookies. Maybe there are none, but we don't know that until we try it.
Flags: blocking1.8.1.5+ → blocking1.8.1.6?
Comment on attachment 270288 [details] [diff] [review]
Patch to be checked in

Just tested IE7 and results were what I expected: Given a page that returns a mixture of httponly and regular cookies, IE7 does in fact let XMLHttpRequest see the regular Set-Cookie headers as in the past and strips out only the httponly ones.

I'm afraid I have to give this patch an r-minus


>+  PRBool privileged = PR_FALSE;
>+  if (privilege) {
>+    nsIScriptSecurityManager *secMan = nsContentUtils::GetSecurityManager();
>+    if (secMan) {
>+      secMan->IsCapabilityEnabled(privilege,
>+                                  &privileged);
>+    }
>+  }

Why not replace the above with nsContentUtils::IsCallerTrustedForRead() (or Write()) depending on a boolean flag passed rather than a string?

But I don't think UniversalBrowserRead should be good enough to read a header that is explicitly saying "script should never see this". UniversalXPConnect/chrome, sure, but not just "can access windows from other domains". And I likewise don't think UniversalBrowserWrite is good enough to set headers we've said are dangerous for content to set.

In that case nsContentUtils::IsCallerChrome() ought to cover it (ok, that only checks for true chrome and not the equivalent UniversalXPConnect, but you can argue that everywhere else it's used is similarly broken and we should add the UniversalXPConnect check to IsCallerChrome and fix them all at once).
Attachment #270288 - Flags: review-
Keywords: checkin-needed
I take back all I said about IE7 working; I have no idea what RSnake's article is talking about.

getAllResponseHeaders() in IE7 gets _all_ response headers, including the httponly cookies.

getResponseHeader() returns the _first_ header of a given name. At first I thought this was working because I only had two cookies and the httponly one was second. But when I added a third non-httponly one, or swapped them, or made 5 cookies, it became clear it was simply returning the first cookie set, whether it was httponly or not.

Loading the same URL as a web page in IE7 and running javascript:alert(document.cookie) showed me the correct number of non-httponly cookies and none of the httponly ones, so I know my Set-Cookie syntax matched what IE expected.
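For reference, a minimal sketch of this kind of probe, assuming a test page at a hypothetical /mixed-cookies URL that sets both an httponly and a regular cookie:

  var req = new XMLHttpRequest();
  req.open("GET", "/mixed-cookies", false);  // hypothetical page setting httponly + regular cookies
  req.send(null);
  // Dump every header the browser exposes, then probe the single-header accessor.
  alert(req.getAllResponseHeaders());
  alert(req.getResponseHeader("Set-Cookie"));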
Keywords: checkin-needed
Whiteboard: [sg:want]
hm... if IE only shows one cookie of many then it's hard to imagine AJAX apps relying on XHR cookie access... Maybe we can afford to just drop the header entirely after all.

I'd feel better if we got some buy-in from other browser vendors/web app developers on the webapi mailing list and got some consistent restrictions embedded in the spec (http://www.w3.org/TR/XMLHttpRequest/)
IMHO we should drop HTTPOnly cookies and let other cookies through. That seems like the most logical thing. I'll bring this up with the webapi wg
Drive-by note: you can set custom headers on Mochitest pages without having to do channel gunk, which is much easier to read.  I didn't look too hard, but I don't think you actually need anything special that can't be provided by statically-set headers:

http://developer.mozilla.org/en/docs/Mochitest#How_do_I_change_the_HTTP_headers_or_status_sent_with_a_file_used_in_a_Mochitest.3F

Do note that you can't set multiple headers that way, which unfortunately might be a problem here.  :-\  The server running this stuff was written in the (perhaps too ideal) world where multiple headers of the same name can always be combined into one header of the same name whose value is the comma-joined header values, but I can change that if you really need it.
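For illustration, a minimal sketch of such a statically-set header file, assuming the ^headers^ naming convention from the page above; the file name, cookie name and value are made up, and only a single Set-Cookie line is shown since only one header of a given name can be set this way:

  test_httponly.html^headers^ (hypothetical file name):

    Set-Cookie: secret=1; HttpOnly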
Flags: blocking1.8.1.7? → wanted1.8.1.x+
Whiteboard: [sg:want] → [sg:want P2]
Wladimir, are you working on a new patch here? If not, Dan, do you think you could write one up?

I don't care much what access is required to get to the HTTPOnly cookies; I'm fine with never exposing them.

Btw, has anyone notified microsoft of this problem since it seems like IE suffers from it too?
Priority: -- → P4
No, I am not working on this bug - not much time lately, sorry about that.
Assignee: trev.moz → nobody
Status: ASSIGNED → NEW
QA Contact: ashshbhatt → xml
Assignee: nobody → sayrer
Flags: wanted1.9.0.x+
Flags: tracking1.9+
Flags: blocking1.9-
Are there any plans to move forward on this bug?
The patch was minused so we would need a new patch for it.
This is a fairly fundamental security flaw in Firefox that IE7 gets right. I would be thrilled to see any movement on this...
Jim: do you mean that we should follow IE7 in exposing http-only cookies in getAllResponseHeaders, but only show the first header in getResponseHeader?  Unless Dan's analysis in comment 15 is mistaken -- test cases demonstrating such are very welcome -- it seems like IE also exposes HttpOnly cookies here.
Mike: No. IE7 quietly ignores requests to read pre-existing HttpOnly cookies via the js calls getAllResponseHeaders and getResponseHeader as it rightly should. However, the set-cookie header still exposes HttpOnly cookies in all browsers (see http://ha.ckers.org/httponly.cgi )

Mike: I will set up a verbose test case to demonstrate what I think I'm talking about. 
This bug, from comment 0 on, is about reading Set-Cookie, no?

No browser-sent cookies should be in the response headers, since Cookie: is part of the _request_, not the response.  I look forward to your test case!
Mike: My apologies, I'm moving too fast. You are right, of course. Set-Cookie headers should only ever be included in a response, not a request. And you are right again that RSnake's post saying that IE7 fixes the XMLHttpRequest HttpOnly cookie exposure issue is wrong. Regardless, can we agree that this should be fixed in Firefox? :) And last, I will set up a verbose test page that the Firefox team can use to illustrate this vulnerability.
It should be fixed in Firefox, which is why this bug isn't WONTFIX or INVALID.  I think the ha.ckers.org page is ample for developing against, though if you wanted to write a mochitest or two (see waldo's comment 18 for some limitations) that would be helpful.
http://www.codinghorror.com/blog/archives/001167.html asserts that this is indeed fixed in IE7, and makes me wonder if we should be moving this up to blocking status on 1.9.1
Flags: blocking1.9.1?
Bjarne, could you take a look at this?
Assignee: sayrer → nobody
Component: XML → Networking: Cookies
QA Contact: xml → networking.cookies
Bjarne, could you take a look at this?
Assignee: nobody → bjarne
I just tested; IE 7.0.6001.18000 still exposes HTTPOnly cookies via set-cookie headers in XMLHttpRequest.getAllResponseHeaders()

From the OWASP WebGoat HTTPOnly lab: 
Results: 
* FAILURE: Your browser does not prevent an XMLHTTPRequest read for the 'unique2u' cookie.

However, I would be thrilled to see Firefox lead the charge and be the first browser to truly implement HttpOnly correctly and completely.
Indeed. We're trying to supply a better alternative to IE, not something that is no worse.
Flags: blocking1.9.1? → blocking1.9.1+
Priority: P4 → P1
Status: NEW → ASSIGNED
This is a proof-of-concept which handles the example at http://ha.ckers.org/httponly.cgi . I'm attaching it to get feedback on the basic idea, as well as some issues described below.

The basic idea is to strip off httpOnly-cookies from each Set-Cookie header when nsHeaderVisitor::VisitHeader() assembles the array of response-headers. However, no checks for UniversalBrowserRead or similar (see e.g. comment #14 above) are done. Please advise whether this is necessary or not. Moreover, if nsHeaderVisitor::VisitHeader() is used in other contexts, the implementation may have to be improved. Please advise.

Some more challenging test-cases should also be provided to ensure that it works properly, in particular the situation where there are only httpOnly cookies in a header: should we just drop the header or should we return an empty Set-Cookie header?

Finally, referring to the TODO in the patch, if it's ok to introduce a dependency on the netwerk/cookie module, we could use the static method nsCookieService::GetTokenValue() instead of copying the code. Please advise.
Attachment #339447 - Flags: review?(jonas)
In the case where there are only httpOnly cookies in a set-cookie header, I suggest the entire set-cookie header be removed from JS visibility.
Is this REALLY a beta 1 blocker?  We'd push back 3.1 for this?
I would prefer to keep this on the blocker list yes.
Attached patch Improved patch (obsolete) — Splinter Review
I completely agree with comment #36, and the patch implements this behavior.

Moreover, I find no code other than GetAllResponseHeaders() referring to nsHeaderVisitor, and I find no calls to GetAllResponseHeaders() in the codebase either. Thus, it seems safe to assume that this change in nsHeaderVisitor::VisitHeader() will not introduce regressions. However, this is obviously just based on reading the code and must be verified during normal test-procedures.

Awaiting review as well as comments on these open questions:

1) Referring to the TODO in the patch, is it ok to introduce a dependency on the netwerk/cookie module? If so, we could use the static method nsCookieService::GetTokenValue() instead of copying code into nsXMLHttpRequest

2) Do we need checks for UniversalBrowserRead or similar (see e.g. comment #14), or do we trust the claim about not introducing regressions? Or, put a different way: are there any scripts that SHOULD be allowed to see httpOnly-cookies?

3) Tests are needed. I can whip up some unit-tests or something one of these days, but if other people are working on this (see comment #26 and comment #28), I'd be happy to wait.
Attachment #339447 - Attachment is obsolete: true
Attachment #340906 - Flags: review?(jonas)
Attachment #339447 - Flags: review?(jonas)
Setting to P2 as this doesn't need to be in beta1.
Priority: P1 → P2
Should I proceed and create some tests for this?
Yes, tests are needed.
Ok, I'll get started on them on Monday. If anyone has a starting-point or ideas, feel free to contact me.  :)
You are only fixing getAllResponseHeaders; you need to fix getResponseHeader as well.

So I'm not sure that we want to do things this way. It seems very complicated. I think we should simply deny access to the cookie and auth headers entirely.
(In reply to comment #44)
> You are only fixing getAllResponseHeaders, need to fix getResponseHeader as
> well.

I can quickly fix getResponseHeader using the same mechanism on Monday.

> So I'm not sure that we want to do things this way. It seems very complicated.

The fundamental idea is to return a copy of the cookies, exactly like the original code does, but strip off those marked as httponly. This, imho, is not highly complicated. :)

The patch itself contains a lot of code, I agree, but most of it is cut'n'paste from another module since I didn't want to mess around with the Makefile-magic necessary to access those methods from nsXMLHttpRequest. See open question 1 in comment #39 and comment in the patch. Thus, most of the code can be removed if it's ok to introduce a dependency like that - I'm awaiting a decision or recommendation on this one. 

> I think we should simply deny access to the cookie and auth headers entirely.

About (other) auth headers: it's the first time I've seen that mentioned in this defect. Is this an issue as well? Is there a discussion or defect about that somewhere? Test-case?

My understanding from the discussion in this defect is that the desired behaviour is to allow access to cookies except for the httponly ones. This has been implemented for getAllResponseHeaders and I can also do it for getResponseHeader. If this is *not* the desired behaviour, however, I'd be happy to implement something else. Just let me know what it should be. :)
Attached file Proposed test-skeleton (obsolete) —
This is a starting-point for a test-case. Comments are welcome.

The idea is to extend the test by adding entries to the global cookies-array, each combining the cookies to be set by the server with the expected value of the Set-Cookie header.
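A rough sketch of what such entries might look like; the field names are illustrative and may not match the skeleton exactly:

  // Each entry: the Set-Cookie value(s) the server should send,
  // and the value script is expected to see afterwards.
  cookies.push({
    serverCookies: ["normal=1", "secret=2; httponly"],
    expectedHeader: "normal=1"          // httponly cookie stripped out
  });
  cookies.push({
    serverCookies: ["secret=2; httponly"],
    expectedHeader: null                // header dropped entirely, per comment #36
  });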
I see from comments in nsCookieService::ParseAttributes() that the cookie-implementation explicitly does not handle comma-separated cookies in one header, but rather requires \n or \r to separate them. Must be fixed in the test-code...
Challenge: combine the statement in comment #47 with these facts about how nsHttpServer sets header values

1) If the value in the call response.setHeader(name, value) contains \n or \r, an NS_ERROR_INVALID_ARG exception is thrown

2) Splitting the value around \n or \r and calling response.setHeader(name, value, true) for each cookie results in a header with a list of comma-separated values (as stated in RFC 2616, section 4.2)

Summing up: the (xpcshell) unit-test cannot use commas to separate cookies since the code from nsCookieService does not handle comma-separated cookies. Moreover, response.setHeader() in nsHttpServer will either construct a comma-separated list of cookies, or throw an exception if the cookies are separated with \n or \r.

I'll change the copied code to handle comma-separated cookies...
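A minimal SJS-style sketch of the two behaviours described above, assuming the standard handleRequest(request, response) entry point; cookie names and values are illustrative:

  function handleRequest(request, response) {
    // 1) A value containing \n or \r throws NS_ERROR_INVALID_ARG:
    //    response.setHeader("Set-Cookie", "a=1\nb=2; httponly");
    // 2) Setting the header twice with merge=true yields one comma-joined value,
    //    roughly "Set-Cookie: a=1, b=2; httponly" on the wire:
    response.setHeader("Set-Cookie", "a=1", true);
    response.setHeader("Set-Cookie", "b=2; httponly", true);
    response.write("ok");
  }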
So I absolutely don't think we should duplicate this code. This is security critical code so it's important that we keep it consistent everywhere.

So either we should do the simple thing and block all 'Set-Cookie' headers completely. Or we should muck around with the APIs such that you can call into the same parsing function as the real cookie code is using.
(In reply to comment #49)
> So I absolutely don't think we should duplicate this code. This is security
> critical code so it's important that we keep it consistent everywhere.
> 
> So either we should do the simple thing and block all 'Set-Cookie' headers
> completely. Or we should muck around with the APIs such that you can call into
> the same parsing function as the real cookie code is using.

I couldn't agree more. :)  If the decision is to strip off httponly cookies and show the rest, the only sensible solution is to tap into the code used in the cookie-module. The other strategy (block Set-Cookie headers) is dead simple to implement...

However, I'm almost there with a patch to handle commas. I'll attach it together with a working unit-test to verify that the concept is sound. Are there existing tests for cookies somewhere that I can use to verify my code in addition to my unit-test?

(Btw, there is also bug #370639 which points out an interesting issue, although not 100% related to this one.)
(In reply to comment #49)
> So I absolutely don't think we should duplicate this code. This is security
> critical code so it's important that we keep it consistent everywhere.

I agree with sicking here - no way should this be copy-paste. If we really do want a subset of cookies, let's API it up.

Why aren't we just removing them altogether (comment 16 and comment 17)? Sicking, did anything come of the webapi wg?

(In reply to comment #50)
> However, I'm almost there with a patch to handle commas. I'll attach it
> together with a working unit-test to verify that the concept is sound. Are
> there existing tests for cookies somewhere that I can use to verify my code in
> addition to my unit-test?

Yes, there's a TestCookie binary unit test in netwerk/test or somesuch, and there are cookie mochitests based in extensions/cookie/test{s}.
This is work-in-progress. I want to run this through the other cookie-tests as well as the attached unit-test.

(Comments are very welcome, though  :) )
Attachment #340906 - Attachment is obsolete: true
Attachment #340906 - Flags: review?(jonas)
Attached file Improved unit-test
This includes some variations. Feel free to add other tricky cases...
Attachment #342871 - Attachment is obsolete: true
So this looks like it's still duplicating the cookie code rather than reusing it, right?
(In reply to comment #54)
> So this looks like it's still duplicating the cookie code rather than reusing
> it, right?

Yes, by all means. :) But the cookie code I found and copied does not handle comma-separated cookies. This patch handles commas, and because of this I can now run the unit-test and verify that the strategy for removing httponly-cookies works.

(If there is other code somewhere which does handle cookies separated by commas, please point me to it...)
I spent the day patching nsCookieService::GetTokenValue(). All tests in the TestCookie binary as well as cookie-mochitests run successfully with it, and I'll attach it a little later. The same code pasted into nsXMLHttpRequest also successfully run the previously attached unit-tests for this defect.

Now, I would like to outline three different strategies for handling this defect. Deciding which one to follow is not up to me, but I assume the right stakeholder(s) will make the decision.

Listed from simplest to most complex, I see these alternatives:

Alt #1: We block access to the Set-Cookie header from getAllResponseHeaders() and getResponseHeader(). This is simple to do, but may not be the most helpful strategy for the users.

Alt #2: We use the existing nsCookieService::GetTokenValue(), and in order to get a proper unit-test I fix nsHttpServer to handle cookies separated by \n and \r. This is safe and should not be too hard to do, but I need some assistance to expose the method from the cookie-module. (I can make it compile, but the linker complains, so I guess the method must be annotated somehow to export it from the lib.)

Alt #3: We patch nsCookieService::GetTokenValue() to handle comma-separated cookies and expose it in order to use it from nsXMLHttpRequest. The unit-test for this defect will work with the existing nsHttpServer, but I still need assistance to expose the method from the cookie-module. This is by far the riskiest alternative, replacing code in the cookie-module, but if there are reasons for actually following RFCs 2109 and 2965, allowing cookies to be separated by commas, then this is the right alternative.

I'll await a decision and attach the mentioned patch for information a little later when I have cleaned up the code.
Referring to alternative #3 above, this is the necessary patch to the cookie-parser.
Why would we want to change how the cookie service parses anything at all? That scares me a lot. Comma-separated cookies result in an ambiguous syntax, as I recall, since the expires date can contain commas. Parsing cookies in a web compatible way (which is very different from any specs) is hard and very quirky. And security critical.

*if* we want to expose Set-Cookie at all I think we should reuse the existing code exactly as it is. Just make the changes needed to expose the existing parsing functionality through APIs that the XHR object can reach.

However, the more I think about it the more I think we should just block getting Set-Cookie at all. Cookies are way too charged with security; we have no idea what future updates to cookies might bring. And the value of getting Set-Cookie seems dubious at best.


So unless someone opposes, let's just block getting Set-Cookie entirely.
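For illustration, the kind of ambiguity meant above, with made-up cookie names - once an Expires attribute is present, a naive comma split cannot tell a cookie separator from the comma inside the date:

  Set-Cookie: a=1; expires=Wed, 09 Jun 2021 10:18:14 GMT, b=2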
Currently, scripts with chrome-privileges are allowed to see the Set-Cookie header. Whether this is desirable is an open question. Whether to block the Set-Cookie2 header as well is also an open question.

I have a hard time setting up a unit-test for this since xpcshell-scripts run with chrome-privileges and so do mochitests (according to the docs). Thus, there is no unit-test submitted and the fix has only been verified using the ha.ckers.org webpage. The code-change is trivial, though... If somebody can show me how to run a test *without* chrome-privileges I'd be happy to write one for this defect.
Attachment #343383 - Flags: review?(jonas)
(In reply to comment #58)
> Why would we want to change how the cookie service parses anything at all? That
> scares me a lot.

Me too, for sure. :)

> Comma separated cookies results in a ambiguous syntax as I
> recall it since the expire date can contain commas.

This seems to be covered by some of the existing cookie-tests, and it is handled by the patch.

> Parsing cookies in a web
> compatible way (which is very different from any specs) is hard and very
> quirky. And security critical.

We don't disagree here. :)

> *if* we want to expose Set-Cookie at all I think we should reuse the existing
> code exactly as it is. Just make changes needed to expose the existing parsing
> functionality though APIs that the XHR object can reach.

I'm leaning towards this one (alternative 2 in comment #56). However, if there is any desire for supporting comma-separated cookies among whoever is working with the cookie-module, then the submitted patch handles commas and passes all existing cookie-tests. It's still a bit scary, though... :)

> However the more I think about it the more I think we should just block getting
> Set-Cookie at all. Cookies are way too charged with security, we have no idea
> what future updates to cookies might bring. And the value of getting Set-Cookie
> seems dubious at best.
> 
> 
> So unless someone oppose, lets just block getting Set-Cookie entirely.

Very well. See patch above. :)
Btw, while working on this I found some possibly related (not duplicate) defects: defect #250859 and defect #358320. Defect #370639 has already been mentioned in comment #50, although this is about constructing the cookie-string to send.
How about un-bitrotting attachment 270288 [details] [diff] [review] and replacing security checks by nsContentUtils::IsCallerChrome()? It has a unit test as well ;)
(In reply to comment #62)
> How about un-bitrotting attachment 270288 [details] [diff] [review] and replacing security checks by
> nsContentUtils::IsCallerChrome()?

Good point...  :)  If the decision really is to follow the simplest strategy and just block the header, your original patch with the suggested modification will probably do the trick. Sicking...?  Dveditz...?

> It has a unit test as well ;)

... which seems to be just what I was asking for...  wish I'd discovered it earlier today...  :-}
If you use IsCapabilityEnabled("UniversalXPConnect") rather than nsContentUtils::IsCallerChrome you can easily test for that using mochitest directly. See

http://developer.mozilla.org/en/Mochitest#How_can_I_get_around_the_error_.22Permission_denied_to_get_property_XPCComponents.classes.22.3f
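A minimal sketch of what that looks like inside a mochitest, assuming the test server sends Set-Cookie with the value "test":

  // Escalate privileges for this function, then check the privileged path:
  netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
  is(request.getResponseHeader("Set-Cookie"), "test",
     "privileged code can still read Set-Cookie");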
Validations in setRequestHeader() resulting from bug #302263 are not touched by this patch. However, it should be pointed out that UniversalBrowserWrite is sufficient to pass these validations, which may not be good enough (see comment #14).

The unit-test is a clean copy of the (excellent) one by Wladimir Palant.

For the record: I still think strategies 2 and 3 from comment #56 will result in an overall better product, but this is not my call. :)
Attachment #343383 - Attachment is obsolete: true
Attachment #343525 - Flags: review?(jonas)
Attachment #343383 - Flags: review?(jonas)
Comment on attachment 343525 [details] [diff] [review]
Patch implementing the simplest strategy, honoring comments from reviewer and including unit-test by Wladimir Palant

This looks great!! Thanks for doing this.

An alternative to getting the channel and calling setResponseHeader is to use ^headers^ files to actually get those headers set. See

http://developer.mozilla.org/en/Mochitest#How_do_I_change_the_HTTP_headers_or_status_sent_with_a_file_used_in_a_Mochitest.3f
Attachment #343525 - Flags: superreview+
Attachment #343525 - Flags: review?(jonas)
Attachment #343525 - Flags: review+
Comment on attachment 343525 [details] [diff] [review]
Patch implementing the simplest strategy, honoring comments from reviewer and including unit-test by Wladimir Palant

So I was told that we should block Set-Cookie2 as well since some browsers implement that.
Attachment #343525 - Flags: review+ → review-
Also modified unit-test according to comments from reviewer.
Attachment #343525 - Attachment is obsolete: true
Attachment #345336 - Flags: review?(jonas)
Comment on attachment 345336 [details] [diff] [review]
Patch also blocking Set-Cookie2 header
[Checkin: Comment 85]

>+	// Try reading headers in privileged context
>+	netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect UniversalBrowserRead");
>+	is(request.getResponseHeader("Set-Cookie"), "test", "Reading Set-Cookie response header in privileged context");
>+	is(request.getResponseHeader("Set-Cookie2"), "test2", "Reading Set-Cookie2 response header in privileged context");
>+	is(request.getResponseHeader("X-Dummy"), "test", "Reading X-Dummy response header in privileged context");
>+	
>+	ok(/\bSet-Cookie:/i.test(request.getAllResponseHeaders()), "Looking for Set-Cookie in all response headers in privileged context");
>+	ok(/\bSet-Cookie2:/i.test(request.getAllResponseHeaders()), "Looking for Set-Cookie2 in all response headers in privileged context");
>+	ok(/\bX-Dummy:/i.test(request.getAllResponseHeaders()), "Looking for X-Dummy in all response headers in privileged context");
>+	
>+	// Try reading headers in unprivileged context
>+	setTimeout(function() {
>+	  is(request.getResponseHeader("Set-Cookie"), null, "Reading Set-Cookie response header in unprivileged context");
>+	  is(request.getResponseHeader("Set-Cookie2"), null, "Reading Set-Cookie2 response header in unprivileged context");
>+	  is(request.getResponseHeader("X-Dummy"), "test", "Reading X-Dummy response header in unprivileged context");
>+	  
>+	  ok(!/\bSet-Cookie:/i.test(request.getAllResponseHeaders()), "Looking for Set-Cookie in all response headers in unprivileged context");
>+	  ok(!/\bSet-Cookie2:/i.test(request.getAllResponseHeaders()), "Looking for Set-Cookie2 in all response headers in unprivileged context");
>+	  ok(/\bX-Dummy:/i.test(request.getAllResponseHeaders()), "Looking for X-Dummy in all response headers in unprivileged context");
>+	
>+	  SimpleTest.finish();
>+	}, 0);

Nit: If you first check that Set-Cookie is normally blocked, *then* call netscape.security.PrivilegeManager.enablePrivilege, and then check that you now can get to it, you won't have to mess with setting a timeout.

r/sr=me either way.
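I.e., roughly this ordering (a sketch only, reusing the names from the patch above):

  // Unprivileged first: the header should be hidden from content.
  is(request.getResponseHeader("Set-Cookie"), null, "Set-Cookie blocked for content");
  // Then escalate and check the privileged path - no setTimeout needed.
  netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect UniversalBrowserRead");
  is(request.getResponseHeader("Set-Cookie"), "test", "Set-Cookie visible to privileged code");
  SimpleTest.finish();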
Attachment #345336 - Flags: superreview+
Attachment #345336 - Flags: review?(jonas)
Attachment #345336 - Flags: review+
Added request for check-in (I think).
Keywords: checkin-needed
Anything else needed to get this landed...?
Nope, just gotta wait until the tree reopens.
Supposedly, Microsoft fixed that issue in MS08-069 released today (http://www.microsoft.com/technet/security/bulletin/ms08-069.mspx). "MSXML Header Request Vulnerability" describes a different issue however.
Sigh, don't we have anyone watching the checkin-needed flag these days? Or is the tree simply too backed up with bigger things needing to land?
Added to the 1.9.1 metered checkin list
(In reply to comment #73)
> Supposedly, Microsoft fixed that issue in MS08-069 released today
> (http://www.microsoft.com/technet/security/bulletin/ms08-069.mspx). "MSXML
> Header Request Vulnerability" describes a different issue however.

Does anyone know how they handle it? Do they just block the header, like the patch for this defect does, or do they filter out only httponly-cookies? (I don't have a MS-system available to test it myself...)
Just checked with IE6 after the update - request.getResponseHeader() displays the cookie without the httponly flag, and request.getAllResponseHeaders() displays the Set-Cookie header for the unprotected cookie but not for the cookie with the httponly flag. So IE really only filters out httponly-cookies.
If you have a mixture of both "normal" and httponly cookies, do they only block the httponly ones, or do they block all of them?

Either way, I think we should try our solution due to its simplicity.
I tested on http://ha.ckers.org/httponly.cgi - yes, they filter out the httponly cookie while letting the normal one through. And of course I prefer the simple solution as well.
I've removed this from the b2 checkin queue because it is not a b2 blocker nor does it have approval-1.9.1b2
Comment on attachment 345336 [details] [diff] [review]
Patch also blocking Set-Cookie2 header
[Checkin: Comment 85]

Seems to me that we need to take this for b2 if we're going to take it at all...
Gavin: Why? The main reason I see for taking this for beta is so that we can get wide trunk testing sooner in order to port it to the branch. Had a discussion with damon about this at lunch and he was in favor of just letting the branch porting wait.
Because it looked to me like a change that should get wide testing before we ship it. You're in a better position to judge that than I am, though - feel free to cancel the approval request!
Yeah, that's actually true; if we're not having a beta3 then I think this really should go in for beta2. Talked with jst and he thinks we should get it in now, so setting as beta2 blocker and re-adding to metered landings.
Priority: P2 → P1
Target Milestone: --- → mozilla1.9.1b2
Checked in, marking FIXED.
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Looks like IE's recent patch (http://www.microsoft.com/technet/security/bulletin/ms08-069.mspx) now stops Set-Cookie exposure of HTTPOnly cookies via JavaScript - but not Set-Cookie2 HTTPOnly exposure. http://ha.ckers.org/httponly.cgi was updated to include set-cookie and set-cookie2 tests today, btw.
Note that IE does not combine Set-Cookie headers into a single header. As such, getResponseHeader('Set-Cookie') now only returns the first non-httponly Set-Cookie header. But that probably made it simpler for them to just filter out the httponly Set-Cookie headers.
Also note, combining Set-Cookie headers with a comma separator has issues, since a comma is a legal character within an individual Set-Cookie header value.

I doubt there are useful apps out there that rely on being able to parse Set-Cookie headers from getAllResponseHeaders(). If there are, we might already be breaking more of them by combining multiple Set-Cookie headers into a single one.
Attachment #345336 - Attachment description: Patch also blocking Set-Cookie2 header → Patch also blocking Set-Cookie2 header [Checkin: Comment 85]
Whiteboard: [sg:want P2] → [c-n: has baked for 1.9.1] [sg:want P2]
Wasn't this checked in before we branched? If so, why the status whiteboard?
(In reply to comment #88)
> Wasn't this checked in before we branched? If so, why the status whiteboard?

You're right: I was misled by the leftover 'checkin-needed' keyword...
Flags: in-testsuite+
Keywords: checkin-needed
Whiteboard: [c-n: has baked for 1.9.1] [sg:want P2] → [sg:want P2]
Comment on attachment 345336 [details] [diff] [review]
Patch also blocking Set-Cookie2 header
[Checkin: Comment 85]

3.0.6 might be too aggressive, but we do want this on 3.0 after we've gotten enough real-world 3.1beta testing.
Attachment #345336 - Flags: approval1.9.0.6?
Whiteboard: [sg:want P2] → [sg:want P2][needs baking; check on 12/17]
Flags: blocking1.9.0.6?
Flags: blocking1.9.0.6? → blocking1.9.0.6+
Whiteboard: [sg:want P2][needs baking; check on 12/17] → [sg:want P2]
C'mon, be brave - push it out! IE already fixed this problem with http://www.microsoft.com/technet/security/bulletin/ms08-069.mspx - SO CLOSE but no cigar - IE still leaks set-cookie2 cookie headers. If you push this out, Firefox will truly be the first browser to implement HTTPOnly completely. It will impress the girls!
Attachment #345336 - Flags: approval1.9.0.6? → approval1.9.0.6+
Comment on attachment 345336 [details] [diff] [review]
Patch also blocking Set-Cookie2 header
[Checkin: Comment 85]

Approved for 1.9.0.6, a=dveditz for release-drivers.
Attached patch for 1.8.1Splinter Review
Attached patch for 1.8.0Splinter Review
Checked into 1.9.0 branch
Keywords: fixed1.9.0.6
Comment on attachment 355351 [details] [diff] [review]
for 1.8.1

please approve for 1.8 branches.
Attachment #355351 - Flags: approval1.8.1.next?
Attachment #355351 - Flags: approval1.8.0.next?
Comment on attachment 355351 [details] [diff] [review]
for 1.8.1

a=dveditz for 1.8.1-branch
Attachment #355351 - Flags: approval1.8.1.next?
Attachment #355351 - Flags: approval1.8.1.next+
Attachment #355351 - Flags: approval1.8.0.next?
Comment on attachment 355353 [details] [diff] [review]
for 1.8.0

a=dveditz for 1.8.0 branch, but I don't have permission to set the '+' flag.
Attachment #355353 - Flags: approval1.8.0.next?
Attachment #355353 - Flags: approval1.8.0.next? → approval1.8.0.next+
Comment on attachment 355353 [details] [diff] [review]
for 1.8.0

a=asac for 1.8.0
committed attachment 355351 [details] [diff] [review] to MOZILLA_1_8_BRANCH:

Checking in content/base/src/nsXMLHttpRequest.cpp;
/cvsroot/mozilla/content/base/src/nsXMLHttpRequest.cpp,v  <--  nsXMLHttpRequest.cpp
new revision: 1.156.2.22; previous revision: 1.156.2.21
done
Keywords: fixed1.8.1.21
Verified this for 1.9.0.6 with the original case in comment 0 using Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.0.6pre) Gecko/2009011304 GranParadiso/3.0.6pre.
Alias: CVE-2009-0357
You are all champions! Firefox 3.0.6 is the first browser to truly offer complete and comprehensive HTTPOnly support! http://manicode.blogspot.com/2009/02/firefox-3006-httponly-champion.html

My hat's off to you all; beer is on me!
Robert Hansen was not fond of the fact that ALL cookies were removed from the XHR response headers via JS access.  I'm not sure I agree, but felt it prudent to pass his thoughts your way. What do you think?

(from Robert Hansen re: The fact that all set-cookie/2 calls were removed from response headers) 

That's kind of a bug (or at minimum a bad feature).  XHR should allow any headers to be viewed that don't have the HTTPOnly flag.  Who knows, maybe some web developers use XMLHTTPRequest to read the cookie (which might contain something benign like last seen date) and use it to display it back on the page?  Now HTTPOnly breaks that functionality.  The last thing you want is people removing HTTPOnly to re-enable functionality that already worked.

Firefox overstepped the whole point of the security feature.  You don't start disabling random other things just because you disable one.  Think about it from a developer's perspective.  You're sitting in a conference room, and you're not that brilliant and you say, "I heard that if you use HTTPOnly sometimes you won't be able to read non-HTTPOnly cookies."  And your less-than-brilliant developer friend says, "Whelp, I guess we had better not consider it, we need to read those cookies, by golly."  And henceforth you have lower adoption.

	HTTPOnly has had enough problems without introducing extremely hard to understand "features" (hard to understand unless you are intimately familiar with the tech).  Plus, I have been known to replay cookies using XHR before too, because I want the exact syntax (including path name and expiration), and not the slimmed down document.cookie version of the actual header.  It's not all that uncommon.  It would be a nightmare to debug if I didn't know that Firefox had done that (on a different cookie, no less).  I'd bet a dollar most developers wouldn't have a clue as to why it was happening.
Robert misunderstands, I believe.  We never show set-cookie in XHR results, regardless of whether there is an HTTPOnly cookie in the response or not, so use of HTTPOnly doesn't affect anything here.  Ergo, there's no reason to avoid it in favour of more functionality.

(That's my interpretation of the patch, at least, I haven't tested exhaustively.)
Right, and Robert's claim is that developers need the ability to read non-HTTPOnly cookies from the XHR response headers, which IE 7.x currently allows, but Firefox 3.0.6 does not.
See comment #87 regarding how useful Set-Cookie response headers are. You can parse the non-httponly cookies from document.cookie more reliably. And the spec says implementations are free to drop transient headers, and security-related ones like Set-Cookie are called out specifically. The current 3.0.6 behavior is just fine.
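For example, a rough sketch of that kind of parsing; cookie-name handling is deliberately simplistic:

  // document.cookie only ever contains the non-httponly cookies.
  var cookies = {};
  var pairs = document.cookie.split("; ");
  for (var i = 0; i < pairs.length; i++) {
    var eq = pairs[i].indexOf("=");
    cookies[pairs[i].substring(0, eq)] = pairs[i].substring(eq + 1);
  }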
Re: comment 103 -- removing HttpOnly from your cookies will not bring the functionality back. That won't be a temptation. If we had hidden only headers with HTTPOnly cookies then developers *might* be tempted to remove that attribute. Developers might get pissed at us for our aggressive approach here, but it cannot be accused of discouraging adoption of HttpOnly since the functionality is gone whether or not you use it.

I've filed bug 476977 on enhancing the DOM with the additional cookie data that developers might be trying to get out of manually parsing headers from XHR.
Blocks: 477165
(from Robert Hansen) 

I read the thread, and that makes more sense.  I thought XHR was disabling all because one cookie had HTTPOnly.  I didn’t realize it was independent of that feature.  That’s less bad.  Still, it’s going to cause some people to riot in the streets when they’re trying to get code to work regardless of how secure their cookies are, but at least it won’t harm HTTPOnly adoption, they’re correct.
As far as developers go, a much better solution for them is to use tools like the LiveHTTPHeaders or Firebug extensions.
    Built the bookmarklet from comment 0 and results returned null.  

    Verified fix on Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US;
    rv:1.9.2a1pre) Gecko/20090205 Minefield/3.2a1pre
    and Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b3pre)
    Gecko/20090205 Shiretoko/3.1b3pre Ubiquity/0.1.5
Status: RESOLVED → VERIFIED