Closed Bug 413737 Opened 17 years ago Closed 16 years ago

UI for failed safebrowsing gethash responses

Categories: Toolkit :: Safe Browsing (defect, P2)

Tracking: RESOLVED FIXED

People: Reporter: dcamp; Assigned: johnath

If the gethash request for a partial hash match fails, we can't verify the match, and we don't know whether the site we're visiting is a phishing/malware site.  We need to decide the right response here and any necessary UI for it.
Flags: blocking-firefox3?
Failing open (letting the connection succeed) creates an attack (phish + google DoS) that wouldn't be there if we failed closed, but I'm not sure how realistic that attack is.

The obvious "compromise" solution that comes to mind is to dump to the error page, but with a clickthrough in this case.  Of course, that presumes the ability to click through, which is bug 400731 and currently sized as "annoying, but doable."  Once we have that ability, we get to revisit the whole "should we always allow clickthrough?" question, but that's fine; we should make that call on preferred behaviour, not technical capability anyhow.

If "reload with clickthrough" ends up being too much pain, then we need to find a way to talk to the user during pageload, which probably reduces us to dialog boxing them.  This really ought to be an edge case, so if dialoging is significantly less work, I'd be okay with it.

The crappiest solution (short of failing open) would be to just have different error page text for this case, explaining in more detail how to turn it off in preferences.  But given that that's a global setting, it seems pretty unpleasant.  
With 4 billion possible 32-bit hash prefixes and maybe a million phishing database entries, 1 in 4000 legitimate page loads will trigger a partial hash match.  I don't think we want 1 in 4000 page loads to trigger a vague security warning when Google is unreachable.  It might be better to simply allow the load.
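
(A quick back-of-the-envelope check of that figure; the prefix-space and list-size numbers below are rough assumptions, not measurements:)

    # Rough collision-rate estimate for the local safebrowsing prefix DB.
    # Assumptions (illustrative only): 2**32 possible 32-bit prefixes,
    # roughly one million entries in the phishing/malware lists.
    PREFIX_SPACE = 2 ** 32
    DB_ENTRIES = 1_000_000

    collision_rate = DB_ENTRIES / PREFIX_SPACE
    print("about 1 in", round(1 / collision_rate), "legitimate loads")  # ~1 in 4295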

We should probably ask Google what they think, though.  They might not be happy with a decision that makes them an attractive DoS target.
Depends on: 413938
Agree that we need to block for a decision.

I tend to agree with Jesse. If we can't verify a partial match, we should probably just allow the load.

Is there a bug for what we do when we can't get in touch with the service? I think we might want/need to alert users to that fact. Or maybe not. I guess we can't do a lot for them in that case :(
Flags: blocking-firefox3? → blocking-firefox3+
Assignee: nobody → johnath
Priority: -- → P2
Okay, I'm coming around to failing open here too, after more conversation with dcamp on the subject.

The local DB in this version of the safebrowsing protocol feels like a performance/privacy optimization: it allows us to rapidly (locally) confirm that most sites are not badware, and cues us to double-check when there might be a problem or, as Jesse points out, a harmless hash collision.  Without that double-check, a 32-bit hash really isn't enough to be confident we can mark a site as bad.
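
To make that two-step check concrete, here's a minimal sketch of the lookup flow; the function shape and the gethash callable are placeholders, not the actual urlClassifier API:

    import hashlib

    def url_prefix(url: str) -> bytes:
        # First 4 bytes (32 bits) of the full URL hash; the real protocol
        # canonicalizes the URL and checks several host/path expressions.
        return hashlib.sha256(url.encode()).digest()[:4]

    def is_badware(url: str, local_prefixes: set, gethash) -> bool:
        # Step 1: the local DB answers "definitely not listed" for most sites.
        prefix = url_prefix(url)
        if prefix not in local_prefixes:
            return False
        # Step 2: a prefix hit only means "maybe"; ask the server for the
        # full hashes behind this prefix and compare against the full URL hash.
        full_hashes = gethash(prefix)
        return hashlib.sha256(url.encode()).digest() in full_hashes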

As Dave put it, if the getHash servers go down for an hour, failing open risks some infections that failing closed would have prevented, but failing closed would also mark as malware the 1/5000th of the internet that happens to collide with a malware hash.  I don't think there are words we can put around that distinction that don't a) confuse users and b) unfairly demonize legitimate sites.

Another way to look at this (again, credit to Dave): if the gethash server is down for an hour and we fail open, that's basically equivalent to the DB update server being down for an hour and never serving us the 32-bit hash in the first place.

This is a you-must-be-connected feature, imho.  The nature of the attack demands it.  The vast majority of the time, FF3 with malware protection will keep users safer than they otherwise would have been, but it does require an online check, because these sites last for hours or days, not weeks or months where a static DB might be fine.  If that check fails, it is preferable to me to lose the added protection temporarily than to mark any page unfortunate enough to hash collide as an attack site.
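
Concretely, failing open just means treating an unanswered gethash request the same as a non-match.  A hypothetical sketch (placeholder names, not the real classifier code):

    import hashlib

    def classify(url: str, local_prefixes: set, gethash) -> str:
        full = hashlib.sha256(url.encode()).digest()
        if full[:4] not in local_prefixes:
            return "allowed"              # no prefix match: definitely not listed
        try:
            full_hashes = gethash(full[:4])
        except OSError:                   # gethash server down, timeout, etc.
            return "allowed"              # fail open: lose the extra protection briefly
        return "blocked" if full in full_hashes else "allowed"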

If we agree this is the way to proceed, there is no UI per se, but we would likely need changes in the safebrowsing code to allow the passthrough to happen.  First I'd like to know if anyone sees grave flaws in my logic here.
 
Dave - let's do this - how much pain is it to have the urlClassifier let failed updates fall through?
That's how it works now, we should be fine.
That makes things easy, then.
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Product: Firefox → Toolkit