Bug 146094: Stealing third-party cookies through a proxy
Status: Closed, VERIFIED FIXED
Opened 23 years ago; closed 22 years ago
Component: Core :: Networking (defect, P4)
Target Milestone: mozilla1.0.1
Reporter: security-bugs; Assignee: darin.moz
Attachments (2 files, 1 obsolete file)
- patch, 648 bytes: waterson: review+, darin.moz: superreview+, chofmann: approval+
- patch, 3.20 KB: dougt: review+, rpotts: superreview+, jud: approval+
From Bennett Haselton:
-----------------------
Using Netscape 4.79:
1) Make sure you have a yahoo.com cookie set (check your cookies.txt file in
your user directory). If you don't have one, go to http://www.yahoo.com/ and it
will set one.
2) Set your browser to use proxy.quixnet.net port 8487 as a proxy
3) Go to:
http://www.peacefire.org/security/nsproxyspaces/demo.html
This causes your yahoo.com cookie to be sent to the peacefire.org server and
displayed back at you. (It might take a second; it has to go through two
redirects first.)
This works by redirecting the browser to the URL
http://www.peacefire.org www.yahoo.com/
The browser sends out the yahoo.com cookie in an HTTP header when it makes the
request, and the proxy server takes the request and the cookie header and sends
it to www.peacefire.org, so CGI on the peacefire.org server can see the cookie
header. (JavaScript code on a page accessed in this way, however, cannot access
the yahoo.com cookie.)
This works with most HTTP proxies that I tested, including UNIX (Apache) and
Windows (IIS) variants:
s561.BESS.Net port 9308 (can't tell what this one is running)
205.252.59.233 port 80 (this is running Microsoft IIS Server)
www1.mca.com port 80 (this is running Apache)
etc.
You can get a list of known open proxies from:
http://tools.rosinstrument.com/cgi-bin/fp.pl/showlines?lines=100&sortor=3
It did *not* work with this proxy, however:
206.30.227.104 port 8080
which is running SQUID 2.2.
But either way, I think this is a browser bug and not a proxy server bug,
because the header sent out by the browser:
GET http://www.peacefire.org www.yahoo.com/ HTTP/1.0
is improperly formatted and should never be sent out in the first place -- so
the proxy server can't be faulted for guessing how to handle the request.
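A minimal sketch of the proxy-side behavior described above, written as standalone C++ with std::string (not any real proxy's code): a lenient proxy that splits the request line on whitespace takes the first token after the method as the forwarding target, so the request, together with the Cookie header the browser attached for yahoo.com, ends up at www.peacefire.org.

// Sketch only: how a lenient proxy might read the malformed request line.
#include <iostream>
#include <sstream>
#include <string>

int main() {
    // Request line the browser emits for the URL with the embedded space.
    std::string requestLine =
        "GET http://www.peacefire.org www.yahoo.com/ HTTP/1.0";

    std::istringstream tokens(requestLine);
    std::string method, target;
    tokens >> method >> target;   // target becomes "http://www.peacefire.org"
    // The rest of the line (" www.yahoo.com/ HTTP/1.0") is effectively ignored,
    // so the request (and its Cookie header) is forwarded to peacefire.org.

    std::cout << "proxy forwards to: " << target << "\n";
    return 0;
}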
Comment 1 (Reporter) • 23 years ago
Both Steve and I have reproduced this with some of the proxies. I reproduced it
with www1.mca.com port 80 inside Netscape's firewall, and Steve reports that
proxy.quixnet.net port 8487 works, but only from outside the firewall.
Comment 2 • 23 years ago
Problem is occurring on line 704 of nsCookies.cpp when it calls upon
ExtractUrlPart. The value being returned is "www.peacefire.org www.yahoo.com"
with the embedded space character.
Problem is essentially the same as in bug 110418 (stealing cookies using %00).
I'm trying to remember how we fixed that one, since several fixes were checked
in and immediately backed out. cc'ing darin.
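A simplified stand-in (plain C++ for illustration, not the actual nsCookies.cpp logic) for why a host string with an embedded space can still pick up the yahoo.com cookie: a suffix-style domain match sees a string that ends in ".yahoo.com".

// Hypothetical tail-match, standing in for the cookie module's domain check.
#include <iostream>
#include <string>

static bool TailMatches(const std::string& host, const std::string& domain) {
    return host.size() >= domain.size() &&
           host.compare(host.size() - domain.size(), domain.size(), domain) == 0;
}

int main() {
    // Host as returned by ExtractUrlPart for the malformed URL (comment 2).
    std::string host = "www.peacefire.org www.yahoo.com";
    std::cout << std::boolalpha
              << TailMatches(host, ".yahoo.com") << "\n";   // prints: true
    return 0;
}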
Comment 3 • 23 years ago
Here's a fix that can be applied to the cookie module. However the problem is
really in the netwerking module and should be fixed there. This fix can be
considered as a second line of defence -- a belt-and-suspenders approach to
security.
Index: nsCookies.cpp
===================================================================
RCS file: /cvsroot/mozilla/extensions/cookie/nsCookies.cpp,v
retrieving revision 1.53
diff -u -r1.53 nsCookies.cpp
--- nsCookies.cpp 16 May 2002 18:00:38 -0000 1.53
+++ nsCookies.cpp 22 May 2002 14:53:31 -0000
@@ -706,9 +706,15 @@
if (NS_FAILED(result)) {
return nsnull;
}
+ if (host.RFindChar(' ') != -1) {
+ return nsnull;
+ }
result = ioService->ExtractUrlPart(nsDependentCString(address),
nsIIOService::url_Path, path);
if (NS_FAILED(result)) {
+ return nsnull;
+ }
+ if (path.RFindChar(' ') != -1) {
return nsnull;
}
Comment 4 • 23 years ago
There's something very strange going on with Haselton's test. I ran it using
4.7x and it indeed showed me my yahoo cookie. Then I tried it with the latest
mozilla trunk build and thought I saw my yahoo cookie as well. But now I
realize that it was showing me my 4.7 yahoo cookie and not my mozilla yahoo
cookie (I have separate profiles). Furthermore, if I delete and acquire a new
cookie in 4.7, the 4.7 run still keeps displaying the old 4.7 yahoo cookie.
Yes, I've tried clearing my cache but that made no difference. And it couldn't
be the cache because the cache would not account for a 4.7 cookie appearing in a
mozilla run.
I discovered this because I noticed that after I made the patch shown above, I
was still getting a cookie displayed, even though I am no longer sending a
cookie to his server.
My suspicion is that the cookie might be cached at the proxy (I'm using the
same proxy for my 4.7 run and my mozilla run). Is that possible?
Comment 5 (Assignee) • 23 years ago
morse: nsIOService::ExtractUrlPart doesn't do any unescaping.. are you certain
that the input to ExtractUrlPart hasn't already been unescaped somehow?
Comment 6 • 23 years ago
Forget whether netwerking is doing escaping or unescaping. What's significant is
that it is returning a string with a space in it when I ask for the host.
That's what it shouldn't do.
Yes the input to ExtractUrlPart already had the space in it. But I would hope
that the routine could detect that and possibly give an error return in that
case rather than returning a seemingly valid result with an embedded space.
Comment 7 (Assignee) • 23 years ago
ok, so junk in equals junk out... maybe the URL parsers should treat invalid
characters like spaces as errors... or maybe they should escape them... or
maybe some other piece of code should be responsible for that. or lastly, maybe
we only want to make hostname parsing more strict. hmm...
Comment 8 (Assignee) • 23 years ago
i think a good solution to this bug would be to simply make the URL parsers fail
if a space exists in the hostname. this would prevent this bad URL from being
loaded at all... we wouldn't even get near the cookie module. working on a patch.
Comment 9 • 23 years ago
Follow-up to comment 4. Yes, there is some sort of caching on the proxy which
caused the odd results that I was seeing.
Tried a different proxy. This time I didn't have that caching problem. And I
was able to verify that my patch fixes this bug.
Comment 10 (Assignee) • 23 years ago
ok, this patch prevents the redirect by making the URL parser reject the
malformed URL. the only added check is for a space character in the
hostname... perhaps we should check for other illegal characters as well?! the
bulk of the patch simply adds nsresult checking to ensure that the error
propagates upwards.
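An illustration of the approach described here, not the actual patch (attachment 84702): hostname parsing fails when it sees a space, and the failure is propagated upward so the malformed URL is never loaded. Plain C++ with a simple enum standing in for nsresult.

// Sketch only; ParseResult stands in for nsresult-style error propagation.
#include <iostream>
#include <string>

enum ParseResult { PARSE_OK, PARSE_MALFORMED_URI };

ParseResult ParseHostname(const std::string& host) {
    if (host.find(' ') != std::string::npos)   // reject an embedded space
        return PARSE_MALFORMED_URI;
    return PARSE_OK;
}

int main() {
    // Callers check the result and bail out, so the bad redirect is rejected
    // before the cookie module ever sees the URL.
    std::cout << (ParseHostname("www.peacefire.org www.yahoo.com") == PARSE_OK)
              << "\n";                                                  // 0
    std::cout << (ParseHostname("www.yahoo.com") == PARSE_OK) << "\n";  // 1
    return 0;
}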
Comment 11 (Assignee) • 23 years ago
reassigning to myself. need some folks to review this patch... very simple...
any takers?
Assignee: morse → darin
Status: ASSIGNED → NEW
Component: Cookies → Networking
Comment 12 (Assignee) • 23 years ago
one concern i have with this patch is that some folks reuse nsStandardURL for
things that don't really have a hostname. that is, they might use the hostname
field to mean something else. that makes it very difficult to place
restrictions on the valid set of characters allowed in this field. restricting
the space character seems safe to me, but i might be missing some of the ways in
which nsStandardURL is (re)used.
cc'ing mailnews folks to verify that this change wouldn't break anything in
mailnews.
Comment 13 • 23 years ago
r=waterson on morse's change. You should comment why you're checking for spaces,
and you should probably assert, too, when/if darin's changes land. darin should
probably sr=.
Component: Networking → Cookies
Comment 14 (Reporter) • 23 years ago
Looks good, but I have to ask: Are there any other illegal characters we can
filter out while we're at it? Is space the only illegal character in the URL
spec? Is it feasible/safer to specify a range of valid characters rather than
searching for invalid ones?
Where does the escaping and unescaping occur? I'm worried about the possibility
of getting a space with the high bit set, such that it will be recognized as a
space in some circumstances but not others. What about %20? At what point does
that get changed into a space?
Component: Cookies → Networking
Comment 15 • 23 years ago
We should probably test with other whitespace characters as well; e.g., TAB.
Comment 16 • 23 years ago
Based on conversation with darin and waterson, I'm modifying my patch to:
- test for tab as well
- forget the test that I added for path since it has no bearing here
The eventual solution should be my patch as well as darin's. However for safety
at this point, my patch alone will solve the exploit with zero risk of
introducing regressions.
Comment 17 • 23 years ago
Comment 18 • 23 years ago
Comment on attachment 84694 [details] [diff] [review]
cookie module patch
r=waterson
Attachment #84694 - Flags: review+
Comment 19 (Assignee) • 23 years ago
Comment on attachment 84694 [details] [diff] [review]
cookie module patch
>Index: nsCookies.cpp
>+ if ((host.RFindChar(' ') != -1) || (host.RFindChar('\t') != -1)) {
>+ return nsnull;
>+ }
nit: RFindCharInSet would be faster.
either way, sr=darin
Attachment #84694 - Flags: superreview+
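In plain C++ terms, the check quoted above, rewritten as the single-scan form suggested in the review, would look roughly like this; std::string's find_first_of is only a rough analogue of RFindCharInSet, so this is an illustration, not the Mozilla string API.

// Illustration: reject a host containing a space or a tab in one scan.
#include <iostream>
#include <string>

static bool HostIsAcceptable(const std::string& host) {
    return host.find_first_of(" \t") == std::string::npos;
}

int main() {
    std::cout << std::boolalpha
              << HostIsAcceptable("www.yahoo.com") << "\n"                      // true
              << HostIsAcceptable("www.peacefire.org www.yahoo.com") << "\n";   // false
    return 0;
}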
Comment 20 (Assignee) • 23 years ago
Comment on attachment 84690 [details] [diff] [review]
v1 patch
this patch is actually incorrect... it does not fix the bug if there is an
explicit port appended to the hostname... doh!
Attachment #84690 - Flags: needs-work+
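A small hedged test sketch (hypothetical, plain C++; not taken from either patch) of the lesson here: whatever form the check takes, it should be exercised against hostnames both with and without an explicit port, since the v1 patch reportedly missed the port case.

// Hypothetical test harness: run a space/tab check over host strings with
// and without an explicit port; both malformed forms should be rejected.
#include <iostream>
#include <string>
#include <vector>

static bool HostIsAcceptable(const std::string& host) {
    return host.find_first_of(" \t") == std::string::npos;
}

int main() {
    std::vector<std::string> hosts = {
        "www.yahoo.com",                          // well formed
        "www.peacefire.org www.yahoo.com",        // malformed, no port
        "www.peacefire.org www.yahoo.com:8080",   // malformed, explicit port
    };
    for (const auto& h : hosts)
        std::cout << h << " -> "
                  << (HostIsAcceptable(h) ? "accepted" : "rejected") << "\n";
    return 0;
}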
Comment 21 (Reporter) • 23 years ago
Darin and I noticed that Necko already checks for \t, so we don't actually need
to check for that here, but belt-and-suspenders never hurts. r=mstoltz on
Steve's patch, FWIW.
Comment 22 (Assignee) • 23 years ago
i think this is the patch that we should land on the trunk and branch post 1.0
Attachment #84690 - Attachment is obsolete: true
Comment 23 • 23 years ago
Cookie module patch has been checked in on the trunk. Not closing this out
because darin's netwerking fix will also need to eventually get checked in. But
the cookie fix alone will stop the attack.
Comment 24 • 23 years ago
AFAIK, mailnews doesn't do anything funky with the hostname - I believe we
pretty much use it for hostnames :-)
Comment 25 • 23 years ago
ok, just want to be sure here. It looks like the proposal is to take
the cookie patch
http://bugzilla.mozilla.org/attachment.cgi?id=84694&action=view
for RC3/1.0
and then continue to land the additional patch on the network side
http://bugzilla.mozilla.org/attachment.cgi?id=84702&action=view
later, after more testing can be done and possible regressions found.
Comment 26 (Assignee) • 23 years ago
attachment 84694 [details] [diff] [review] is the patch we want for 1.0/RC3 (morse, waterson, mstoltz, and
i discussed this and arrived at this conclusion together). i want to land my
patch (attachment 84702 [details] [diff] [review]) after 1.0/RC3.
Comment 27 • 23 years ago
Comment on attachment 84694 [details] [diff] [review]
cookie module patch
a=chofmann,brendan,scc for the 1.0 branch RC3/1.0. we will take darin's patch
later.
Attachment #84694 - Flags: approval+
Comment 28 • 23 years ago
adt1.0.0+ for approval for branch checkin.
Updated • 23 years ago
Comment 29 • 22 years ago
changing to adt1.0.1+ for checkin to the 1.0 branch for the Mozilla1.0.1
milestone. Please get drivers approval before checking in.
Comment 30 • 22 years ago
This bug should not have been given the adt1.0.1+ designation. The crucial part of
it has already been checked in on the trunk and branch, and that stops the
attack. The remaining part is no longer of high priority and is of higher risk.
Comment 31 • 22 years ago
The ADT was simply responding to, and carrying over, the "+" based on your
initial request for approval on 05/22. Should you feel (and the Security team
agree) that the remaining work is too risky to take for 1.0.1, and not needed
to prevent the exploit at this time, pls feel free to: 1) remove the adt1.0.1+,
and 2) set this bug to a milestone other than 1.0 or 1.0.1. thanks!
Comment 32 (Assignee) • 22 years ago
no, i strongly feel that we should work toward fixing the problem from within
necko. even though this particular problem is already solved, the root cause of
the problem still remains. we don't know for sure that the problem in necko
couldn't be tickled some other way (perhaps in some other protocol) to cause a
security hole.
Status: NEW → ASSIGNED
Updated (Assignee) • 22 years ago
Target Milestone: mozilla1.0 → mozilla1.0.1
Updated (Assignee) • 22 years ago
Priority: -- → P4
Comment 33 • 22 years ago
Is the v2 patch (for the trunk and mozilla1.0.1) the right patch and does it
still need to be landed on the branch?
Keywords: mozilla1.0.1
Comment 34 (Assignee) • 22 years ago
i need to spend some time verifying the v2 patch before checking it in. i'm
hoping to make some progress on that this week. i think it'd be good to take
the patch on the branch as well as the trunk because it gives us an additional
layer of security (belt-and-suspenders as they say).
Comment 35 • 22 years ago
Comment on attachment 84702 [details] [diff] [review]
v2 patch (for the trunk and mozilla 1.0.1)
r=dougt
let's make sure that this gets well tested on the trunk before porting it to
the moz1.0.x branch.
Attachment #84702 - Flags: review+
Comment 36 • 22 years ago
Comment on attachment 84702 [details] [diff] [review]
v2 patch (for the trunk and mozilla 1.0.1)
sr=rpotts@netscape.com
Attachment #84702 - Flags: superreview+
Comment 37 • 22 years ago
Removing "+" for now, until the new patch has landed on the trunk and been
verified by QA. thanks!
Comment 38 (Assignee) • 22 years ago
landed v2 patch on the trunk. marking FIXED.
Status: ASSIGNED → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED
Comment 39 • 22 years ago
trever, can you verify this fix on the trunk? Thx.
Updated • 22 years ago
Whiteboard: [adt1 RTM] [ETA 06/20] → [adt1 RTM] [ETA 06/21]
Comment 40 • 22 years ago
using the testcase from comment #1:
- re-checked that cookies were still not being stolen with builds from a few
days ago.
- tested 06/20 trunk builds that the malformed url (with spaces) is blocked.
- tested with various proxies listed in comment 1
verified trunk - 06/20/02 builds - win NT4, linux rh6, mac osX
Status: RESOLVED → VERIFIED
Keywords: verifyme
Whiteboard: [adt1 RTM] [ETA 06/21] → [adt1 RTM] [ETA 06/21] [verified-trunk]
Comment 41 • 22 years ago
adt1.0.1 (on ADT's behalf) approval for checkin to the 1.0 branch, pending
drivers' approval. pls check this in asap, then add the "fixed1.0.1" keyword.
Updated • 22 years ago
Attachment #84702 - Flags: approval+
Comment 42 • 22 years ago
please checkin to the 1.0.1 branch. once there, remove the "mozilla1.0.1+"
keyword and add the "fixed1.0.1" keyword.
Keywords: mozilla1.0.1 → mozilla1.0.1+
Comment 44 • 22 years ago
tever - pls verify on the branch when you get a chance. Thanks.
Comment 45 • 22 years ago
verified branch - 07/11/02 builds - win NT4, linux rh6, mac osX
Whiteboard: [adt1 RTM] [ETA 06/21] [verified-trunk] → [adt1 RTM] [ETA 06/21] [verified-trunk][verified-branch]
Comment 46 • 22 years ago
tever: Tom can you verify this on the 1.0 branch?
Comment 47 • 22 years ago
oops, forgot to add verified1.0.1 keyword, this was already tested on the branch
Keywords: verified1.0.1
Updated (Reporter) • 22 years ago
Group: security?
Keywords: approval, fixed1.0.1
Whiteboard: [adt1 RTM] [ETA 06/21] [verified-trunk][verified-branch]