Bug 146094 - Stealing third-party cookies through a proxy
Status: VERIFIED FIXED
Product: Core
Classification: Components
Component: Networking
Version: Trunk
Platform: All
OS: All
Importance: P4 normal
Target Milestone: mozilla1.0.1
Assigned To: Darin Fisher
QA Contact: Tom Everingham
Mentors:
Depends on:
Blocks:
Reported: 2002-05-21 21:36 PDT by Mitchell Stoltz (not reading bugmail)
Modified: 2002-09-05 17:42 PDT
CC: 19 users


Attachments
v1 patch (2.43 KB, patch)
2002-05-22 16:10 PDT, Darin Fisher
no flags Details | Diff | Splinter Review
cookie module patch (648 bytes, patch)
2002-05-22 16:37 PDT, Stephen P. Morse
waterson: review+
darin.moz: superreview+
chofmann: approval+
Details | Diff | Splinter Review
v2 patch (for the trunk and mozilla 1.0.1) (3.20 KB, patch)
2002-05-22 17:30 PDT, Darin Fisher
dougt: review+
rpotts: superreview+
jud: approval+
Details | Diff | Splinter Review

Description Mitchell Stoltz (not reading bugmail) 2002-05-21 21:36:50 PDT
From Bennett Haselton:
-----------------------
Using Netscape 4.79:

1) Make sure you have a yahoo.com cookie set (check the cookies.txt file in
your user directory).  If you don't have one, go to http://www.yahoo.com/ and it
will set one.
2) Set your browser to use proxy.quixnet.net port 8487 as a proxy.
3) Go to:
    http://www.peacefire.org/security/nsproxyspaces/demo.html

This causes your yahoo.com cookie to be sent to the peacefire.org server and
displayed back at you.  (It might take a second; it has to go through two
redirects first.)

This works by redirecting the browser to the URL
    http://www.peacefire.org www.yahoo.com/

The browser sends out the yahoo.com cookie in an HTTP header when it makes the
request, and the proxy server takes the request and the cookie header and sends
it to www.peacefire.org, so CGI on the peacefire.org server can see the cookie
header.  (JavaScript code on a page accessed in this way, however, cannot access
the yahoo.com cookie.)

This works with most HTTP proxies that I tested, including UNIX (Apache) and
Windows (IIS) variants:

s561.BESS.Net port 9308 (can't tell what this one is running)
205.252.59.233 port 80 (this is running Microsoft IIS Server)
www1.mca.com port 80 (this is running Apache)
etc.
You can get a list of known open proxies from:
http://tools.rosinstrument.com/cgi-bin/fp.pl/showlines?lines=100&sortor=3

It did *not* work with this proxy, however:
    206.30.227.104 port 8080
which is running SQUID 2.2.

But either way, I think this is a browser bug and not a proxy server bug,
because the header sent out by the browser:
    GET http://www.peacefire.org www.yahoo.com/ HTTP/1.0
is improperly formatted and should never be sent out in the first place -- so
the proxy server can't be faulted for guessing how to handle the request.
Comment 1 Mitchell Stoltz (not reading bugmail) 2002-05-21 21:39:02 PDT
Both Steve and I have reproduced this with some of the proxies. I reproduced it
with www1.mca.com port 80 inside Netscape's firewall, and Steve reports that
proxy.quixnet.net port 8487 works, but only from outside the firewall.
Comment 2 Stephen P. Morse 2002-05-22 06:58:07 PDT
The problem is occurring on line 704 of nsCookies.cpp when it calls 
ExtractUrlPart.  The value being returned is "www.peacefire.org www.yahoo.com", 
with the embedded space character.

Problem is essentially the same as in bug 110418 (stealing cookies using %00).  
I'm trying to remember how we fixed that one, since several fixes were checked 
in and immediately backed out.   cc'ing darin.
Comment 3 Stephen P. Morse 2002-05-22 08:02:40 PDT
Here's a fix that can be applied to the cookie module.  However, the problem is 
really in the netwerking module and should be fixed there.  This fix can be 
considered a second line of defence -- a belt-and-suspenders approach to 
security.

Index: nsCookies.cpp
===================================================================
RCS file: /cvsroot/mozilla/extensions/cookie/nsCookies.cpp,v
retrieving revision 1.53
diff -u -r1.53 nsCookies.cpp
--- nsCookies.cpp	16 May 2002 18:00:38 -0000	1.53
+++ nsCookies.cpp	22 May 2002 14:53:31 -0000
@@ -706,9 +706,15 @@
   if (NS_FAILED(result)) {
     return nsnull;
   }
+  if (host.RFindChar(' ') != -1) {
+    return nsnull;
+  }
   result = ioService->ExtractUrlPart(nsDependentCString(address),
                                      nsIIOService::url_Path, path);
   if (NS_FAILED(result)) {
+    return nsnull;
+  }
+  if (path.RFindChar(' ') != -1) {
     return nsnull;
   }
 
Comment 4 Stephen P. Morse 2002-05-22 08:09:02 PDT
There's something very strange going on with haselton's test.  I ran it using 
4.7x and it indeed showed me my yahoo cookie.  Then I tried it with the latest 
mozilla trunk build and thought I saw my yahoo cookie as well.  But now I 
realize that it was showing me my 4.7 yahoo cookie and not my mozilla yahoo 
cookie (I have separate profiles).  Furthermore, if I delete and acquire a new 
cookie in 4.7, the 4.7 run still keeps displaying the old 4.7 yahoo cookie.

Yes, I've tried clearing my cache but that made no difference.  And it couldn't 
be the cache because the cache would not account for a 4.7 cookie appearing in a 
mozilla run.

I discovered this because I noticed that after I made the patch shown above, I 
was still getting a cookie displayed, even though I am no longer sending a 
cookie to his server.

My suspicion is that the cookie might be cached at the proxy (I'm using the 
same proxy for my 4.7 run and my mozilla run).  Is that possible?
Comment 5 Darin Fisher 2002-05-22 11:09:11 PDT
morse: nsIOService::ExtractUrlPart doesn't do any unescaping.. are you certain
that the input to ExtractUrlPart hasn't already been unescaped somehow?
Comment 6 Stephen P. Morse 2002-05-22 13:28:53 PDT
Forget whether netwerking is doing escaping or unescaping.  What's significant is 
that it is returning a string with a space in it when I ask for the host.  
That's what it shouldn't do.

Yes, the input to ExtractUrlPart already had the space in it.  But I would hope 
that the routine could detect that and possibly give an error return in that 
case rather than returning a seemingly valid result with an embedded space.
Comment 7 Darin Fisher 2002-05-22 13:36:42 PDT
ok, so junk in equals junk out... maybe the URL parsers should treat invalid
characters like spaces as errors... or maybe they should escape them... or
maybe some other piece of code should be responsible for that.  or lastly, maybe
we only want to make hostname parsing more strict.  hmm...
Comment 8 Darin Fisher 2002-05-22 15:46:41 PDT
i think a good solution to this bug would be to simply make the URL parsers fail
if a space exists in the hostname.  this would prevent this bad URL from being
loaded at all... we wouldn't even get near the cookie module.  working on a patch.
Comment 9 Stephen P. Morse 2002-05-22 16:03:17 PDT
Follow-up to comment 4.  Yes, there is some sort of caching on the proxy which 
caused the odd results that I was seeing.

Tried a different proxy.  This time I didn't have that caching problem.  And I 
was able to verify that my patch fixes this bug.
Comment 10 Darin Fisher 2002-05-22 16:10:50 PDT
Created attachment 84690 [details] [diff] [review]
v1 patch

ok, this patch prevents the redirect by making the URL parser reject the
malformed URL.	the only added check is for a space character in the
hostname... perhaps we should check for other illegal characters as well?!  the
bulk of the patch simply adds nsresult checking to ensure that the error
propagates upwards.
Comment 11 Darin Fisher 2002-05-22 16:12:19 PDT
reassigning to myself.  need some folks to review this patch... very simple...
any takers?
Comment 12 Darin Fisher 2002-05-22 16:16:17 PDT
one concern i have with this patch is that some folks reuse nsStandardURL for
things that don't really have a hostname.  that is, they might use the hostname
field to mean something else.  that makes it very difficult to place
restrictions on the valid set of characters allowed in this field.  restricting
the space character seems safe to me, but i might be missing some of the ways in
which nsStandardURL is (re)used.

cc'ing mailnews folks to verify that this change wouldn't break anything in
mailnews.
Comment 13 Chris Waterson 2002-05-22 16:24:32 PDT
r=waterson on morse's change. You should comment why you're checking for spaces,
and you should probably assert, too, when/if darin's changes land. darin should
probably sr=.
Comment 14 Mitchell Stoltz (not reading bugmail) 2002-05-22 16:26:37 PDT
Looks good, but I have to ask: Are there any other illegal characters we can
filter out while we're at it? Is space the only illegal character in the URL
spec? Is it feasible/safer to specify a range of valid characters rather than
searching for invalid ones?

Where does the escaping and unescaping occur? I'm worried about the possibility
of getting a space with the high bit set, such that it will be recognized as a
space in some circumstances but not others. What about %20? At what point does
that get changed into a space?
Comment 15 Chris Waterson 2002-05-22 16:28:15 PDT
We should probably test with other whitespace characters as well; e.g., TAB.
Comment 16 Stephen P. Morse 2002-05-22 16:36:24 PDT
Based on conversation with darin and waterson, I'm modifying my patch to:
- test for tab as well
- forget the test that I added for path since it has no bearing here

The eventual solution should be my patch as well as darin's.  However, for safety 
at this point, my patch alone will solve the exploit and has zero risk of 
introducing regressions.
Comment 17 Stephen P. Morse 2002-05-22 16:37:35 PDT
Created attachment 84694 [details] [diff] [review]
cookie module patch
Comment 18 Chris Waterson 2002-05-22 16:42:58 PDT
Comment on attachment 84694 [details] [diff] [review]
cookie module patch

r=waterson
Comment 19 Darin Fisher 2002-05-22 16:44:48 PDT
Comment on attachment 84694 [details] [diff] [review]
cookie module patch

>Index: nsCookies.cpp

>+  if ((host.RFindChar(' ') != -1) || (host.RFindChar('\t') != -1)) {
>+    return nsnull;
>+  }

nit: RFindCharInSet would be faster.

either way, sr=darin
Comment 20 Darin Fisher 2002-05-22 16:45:37 PDT
Comment on attachment 84690 [details] [diff] [review]
v1 patch

this patch is actually incorrect... it does not fix the bug if there is an
explicit port appended to the hostname... doh!
Comment 21 Mitchell Stoltz (not reading bugmail) 2002-05-22 17:25:02 PDT
Darin and I noticed that Necko already checks for \t, so we don't actually need
to check for that here, but belt-and-suspenders never hurts. r=mstoltz on
Steve's patch, FWIW.
Comment 22 Darin Fisher 2002-05-22 17:30:35 PDT
Created attachment 84702 [details] [diff] [review]
v2 patch (for the trunk and mozilla 1.0.1)

i think this is the patch that we should land on the trunk and branch post 1.0
Comment 23 Stephen P. Morse 2002-05-22 17:58:32 PDT
Cookie module patch has been checked in on the trunk.  Not closing this out 
because darin's netwerking fix will also need to eventually get checked in.  But 
the cookie fix alone will stop the attack.
Comment 24 David :Bienvenu 2002-05-22 18:05:39 PDT
AFAIK, mailnews doesn't do anything funky with the hostname - I believe we
pretty much use it for hostnames :-)
Comment 25 chris hofmann 2002-05-22 18:14:11 PDT
ok, just want to be sure here.  It looks like the proposal is to take
the cookie patch 
http://bugzilla.mozilla.org/attachment.cgi?id=84694&action=view
for RC3/1.0

and then continue to land the additional patch on the network side
http://bugzilla.mozilla.org/attachment.cgi?id=84702&action=view
later after more testing and possible regression finding can be done.

Comment 26 Darin Fisher 2002-05-22 18:24:09 PDT
attachment 84694 [details] [diff] [review] is the patch we want for 1.0/RC3 (morse, waterson, mstoltz, and
i discussed this and arrived at this conclusion together).  i want to land my
patch (attachment 84702 [details] [diff] [review]) after 1.0/RC3.
Comment 27 chris hofmann 2002-05-22 18:34:17 PDT
Comment on attachment 84694 [details] [diff] [review]
cookie module patch

a=chofmann,brendan,scc for the 1.0 branch RC3/1.0.  we will take darin's patch
later.
Comment 28 scottputterman 2002-05-22 19:31:15 PDT
adt1.0.0+ for approval for branch checkin.
Comment 29 scottputterman 2002-05-29 23:32:45 PDT
changing to adt1.0.1+ for checkin to the 1.0 branch for the Mozilla1.0.1
milestone.  Please get drivers approval before checking in.
Comment 30 Stephen P. Morse 2002-05-30 01:03:19 PDT
This bug should not have been given the adt1.0.1+ designation.  The crucial part of 
it has already been checked in on the trunk and branch, and that stops the 
attack.  The remaining part is no longer of high priority and is of higher risk.
Comment 31 Jaime Rodriguez, Jr. 2002-05-30 11:00:58 PDT
The ADT was simply responding to, and carrying over, the "+" based on your
initial request for approval on 05/22. Should you feel (and the Security team
agree) that the remaining work is too risky to take for 1.0.1, and not needed
to prevent the exploit at this time, pls feel free to: 1) remove the adt1.0.1+,
and 2) set this bug to a milestone other than 1.0 or 1.0.1. thanks!
Comment 32 Darin Fisher 2002-05-30 16:43:04 PDT
no, i strongly feel that we should work toward fixing the problem from within
necko.  even though this particular problem is already solved, the root cause of
the problem still remains.  we don't know for sure that the problem in necko
couldn't be tickled some other way (perhaps in some other protocol) to cause a
security hole.
Comment 33 scottputterman 2002-06-11 18:19:24 PDT
Is the v2 patch (for the trunk and mozilla1.0.1) the right patch and does it
still need to be landed on the branch?
Comment 34 Darin Fisher 2002-06-12 09:49:00 PDT
i need to spend some time verifying the v2 patch before checking it in.  i'm
hoping to make some progress on that this week.  i think it'd be good to take
the patch on the branch as well as the trunk because it gives us an additional
layer of security (belt-and-suspenders as they say).
Comment 35 Doug Turner (:dougt) 2002-06-13 06:44:48 PDT
Comment on attachment 84702 [details] [diff] [review]
v2 patch (for the trunk and mozilla 1.0.1)

r=dougt

let's make sure that this gets well tested on the trunk before porting it to the
moz1.0.x branch.
Comment 36 rpotts (gone) 2002-06-18 22:53:10 PDT
Comment on attachment 84702 [details] [diff] [review]
v2 patch (for the trunk and mozilla 1.0.1)

sr=rpotts@netscape.com
Comment 37 Jaime Rodriguez, Jr. 2002-06-19 06:00:08 PDT
Removing "+" for now, until the new patch has landed on the trunk and been
verified by QA. thanks!
Comment 38 Darin Fisher 2002-06-19 22:57:09 PDT
landed v2 patch on the trunk.  marking FIXED.
Comment 39 Paul Wyskoczka 2002-06-20 09:03:42 PDT
tever, can you verify this fix on the trunk?  Thx.
Comment 40 Tom Everingham 2002-06-20 18:27:45 PDT
using the testcase from comment #1:

- re-checked that cookies were still not being stolen with builds from a few
days ago.
- tested 06/20 trunk builds that the malformed url (with spaces) is blocked. 
- tested with various proxies listed in comment 1

verified trunk - 06/20/02 builds - win NT4, linux rh6, mac osX
Comment 41 Jaime Rodriguez, Jr. 2002-06-20 20:31:09 PDT
adt1.0.1 (on ADT's behalf) approval for checkin to the 1.0 branch, pending
drivers' approval. pls check this in asap, then add the "fixed1.0.1" keyword.
Comment 42 Judson Valeski 2002-06-21 08:48:56 PDT
please checkin to the 1.0.1 branch. once there, remove the "mozilla1.0.1+"
keyword and add the "fixed1.0.1" keyword.
Comment 43 Darin Fisher 2002-06-21 16:47:18 PDT
fixed1.0.1
Comment 44 lchiang 2002-07-01 19:25:43 PDT
tever - pls verify on the branch when you get a chance. Thanks.
Comment 45 Tom Everingham 2002-07-11 15:54:54 PDT
verified branch - 07/11/02 builds - win NT4, linux rh6, mac osX
Comment 46 Jaime Rodriguez, Jr. 2002-07-19 12:57:38 PDT
tever: Tom can you verify this on the 1.0 branch?
Comment 47 Tom Everingham 2002-07-23 17:28:22 PDT
oops, forgot to add verified1.0.1 keyword, this was already tested on the branch
