Closed Bug 400731 • Opened 17 years ago • Closed 17 years ago

Should phishing UI allow passthrough?

Categories: Toolkit :: Safe Browsing, defect, P2
Status: VERIFIED FIXED
Target Milestone: Firefox 3 beta3
People: (Reporter: johnath, Assigned: johnath)
Whiteboard: [has patch][has reviews]
Attachments (3 files, 3 obsolete files)
- image/png, 71.28 KB
- patch, 5.10 KB (johnath: review+, johnath: ui-review+)
- patch, 10.33 KB
In FF2, the safebrowsing bubble allowed users to "Ignore this warning" and when it moves to the error page in bug 399233 (and more importantly, to a docshell error that blocks load) we will lose this ability. Getting it back would seem to need an API that says "load this (previously blocked) URL and don't ask the url-classifier this time."
Comment 1 • 17 years ago
Need to figure out if we're comfortable having phishing errors be "no way, jose, full stop" for beta, which they will be if we take bug 399233 but do not fix this one.
Flags: blocking-firefox3?
Target Milestone: --- → Firefox 3 M9
Comment 2 • 17 years ago
I think we can ship like this in beta, after all, with a relnote. We'll want to address it for post-M9, though.
Flags: blocking-firefox3? → blocking-firefox3+
Target Milestone: Firefox 3 M9 → Firefox 3 M10
Comment 3 • 17 years ago
Can this bug be extended to also cover the same addition to the malware UI?
Updated • 17 years ago
Target Milestone: Firefox 3 M10 → Firefox 3 M11
Comment 4 • 17 years ago
No. We are not currently planning to allow passthrough if we believe the site contains malicious code. Unlike phishing sites, which users can choose to not enter data into, malware sites don't rely on user actions, and as a result clicking the button may result in a user being compromised. That's the type of button we shouldn't have in the UI.
Assignee: nobody → johnath
Priority: -- → P3
Comment 5 • 17 years ago
What mechanisms are there by which a user would get infected with malware as a direct result of simply visiting a malware site? If it's indeed possible for a site to infect a user with no interaction, then we should fix such security holes right?
I was under the impression that malware sites would encompass sites such as those that host cracks for games, the archives of which often carry additional payloads such as viruses or trojans.
But in such cases the user would have to disregard the firefox warning, find and download their crack, and then unarchive it and run it. So in the same way that users don't _have_ to enter data into phishing sites, users don't _have_ to download files from malware sites.
I guess I don't understand the distinction between the two, and think we should either allow passthrough for both kinds, or deny passthough for both.
Comment 6 (Assignee) • 17 years ago
(In reply to comment #5)
> What mechanisms are there by which a user would get infected with malware as a
> direct result of simply visiting a malware site? If it's indeed possible for a
> site to infect a user with no interaction, then we should fix such security
> holes right?
If they are holes with Firefox, we absolutely should, and as you know, we do whenever we discover one, either on our own, or in the wild. But many malware sites use a shotgun approach: hundreds of iframes each loading some malicious snippet - some target browsers, some target flash or java or quicktime, or even the underlying OS. And in the (comparatively rare) case where they do target Firefox directly, it may still be the case that a blacklist entry buys us time to code and test a fix for the problem while not leaving users exposed in the interim.
There's little cost to page authors to add another attack, but it rapidly becomes impossible for us to be at all confident that the page can be visited safely. A user that loads that page doesn't get to decide what actions to take, they have been taken by the time the page renders.
Comment 7 • 17 years ago
(In reply to comment #4)
> No. We are not currently planning to allow passthrough if we believe the site
> contains malicious code.
I can understand this, I just only hope that there wouldn't be many false positives :) (this is rather the main reason for allowing passthrough)
IMHO, the best argument for not allowing passthrough on either the phishing or malware error pages is that it only takes a maximum of 6 clicks (Tools > Options > Security > uncheck relevant box > OK > Reload) to see the site in question if you want to ignore Firefox's warning and visit it anyway.
For me, the ability to turn off the protection just like that negates any need for a passthrough. The only possible problem to this approach that I can see (and it is perhaps a big one) is that after you visit the site that you wanted to get to, you might forget to turn the phishing or malware protection back on.
Comment 8 • 17 years ago
In regards to comment #7:
"I can understand this, I just only hope that there wouldn't be many false
positives :) (this is rather the main reason for allowing passthrough)"
I think the best way overall to deal with false positives is implementing bug 401645 and ensuring that sites erroneously marked as phishing sites can be quickly reported to Google and thus removed from the blacklist.
Comment 9 (Assignee) • 17 years ago
Dave C - last I remember us discussing it, allowing passthrough in this way was technically difficult, because the API where one would pass such a flag had used up all the optional bits. Is it still the case that this is technically infeasible, or can you see a way to do it? Malware aside, this is something we used to allow for phishing (albeit through a less safe mechanism) and imo we should avoid removing the functionality unless it's really not practical to re-introduce it.
Comment 10 • 17 years ago
I think it's annoying, but doable.
Comment 11 • 17 years ago
Is there a plan to give some indication in the UI to remind you after you click the passthrough link that you're on a suspected phishing site, like in IE7, where the address bar turns red and it says 'Phishing Site' in the location bar?
Comment 12 • 17 years ago
As I said to Monsieur Beltzner in the "Internet Relay Chat", I feel the unverified SSL cert model is a good one to apply here. When a browser can't verify a cert (be it invalid, from a CA the browser doesn't know, etc.), the browser gives you three choices; accept for this session, accept forever, or skip this whole mess. I think on the warning page (which is excellent and very clear, very jarring to the user as it should be, "hey, look out!") there should be somewhere at the bottom, maybe under the "Get me out of here!" button, the options to visit the site anyway, or to whitelist it. "I understand this site might be a loade-- er, dangerous site, but I wanna visit anyway, you warned me!" and "I'm an evil haxx0r, let me whitelist this site for the future."
FTR, I feel the same way about Google's malware site "interstital" which isn't Inter at all, since you can't stital past it.
Comment 13 • 17 years ago
As an interesting datapoint, http://www.downthemall.net is apparently secured (or at least claims to be) but not yet removed from Google's blacklist. It can be a pretty confusing story for users, especially since the stopbadware.org report says that they couldn't find evidence of badware, but the site is blocked.
At this point, as a user, I might wonder why I'm not allowed to proceed.
I know/see the merits of not ever allowing passthroughs, but think that a small, somewhat obscurely worded option might be in order.
Comment 14 • 17 years ago
Ok, what prompted me to comment here was the block for http://www.downthemall.net/. Today they posted a notice on their site about the block Google slapped them with, and why:

"Google flagged our site as a potential cause of malware infection last week. After a complete check up of the site structure, we’ve found that an attacker had exploited a WordPress vulnerability to inoculate unauthorized code into our theme. This code contained links to a site which tried to install malicious code on visitor’s computer.

As far as we know this malware operates on an old vulnerability of Windows 98, Windows Millennium Edition and Windows 2000. A patch was released in 2005. You can read details here.

This site has been secured, and you can visit it with confidence.

Even though nobody is known to be infected by this software, we apologize with our users and visitors for any trouble we might have caused."
Now, I'd never have seen that with the current warning, so I'd forever think that DTA was bad. So, a positive use case for an "I'm really really sure I want to do this stupid thing" option is false positives. Currently, the only way around it is to add an exception (tedious) or turn off checking (a simpler process, thus more likely): "bah, I don't want to type all that in right now, I'll just disable it and turn it back on later," and we all know we generally forget, which is why antivirus checkers tend to have "Disable this for x minutes" options, so when we have to disable it, we won't forget to reenable it. Having an "I'm really sure" link will help dissuade users from turning off the malware blockage.
Thank you, and good night.
Comment 15 • 17 years ago
The bug is about phishing, not malware. For malware, I am fundamentally opposed to a Whatever button, because if there is a lurking exploit I'm not secure against, I just got owned. It's like having a button on a candy machine that says "Danger, do not press, electric shock may occur." There is no sane reason for adding such a button, unless you believe that the potential for candy is worth the electric shock.
Updated • 17 years ago
Priority: P3 → P4
Comment 17 (Assignee) • 17 years ago
Add an "Ignore this warning" link bottom-right; on clickthrough, add a notify bar as a reminder.
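The reminder bar described here landed as an appendNotification call in browser.js (the review in comment 31 quotes it). A rough, self-contained sketch of that shape follows; the notification box here is a stand-in object, since real chrome code uses gBrowser.getNotificationBox(), and the priority value and label text are illustrative assumptions rather than the exact values in the patch:

```javascript
// Sketch: after the user clicks "Ignore this warning", append a reminder
// notification. The real toolkit API is
// appendNotification(label, value, image, priority, buttons).
function remindAfterClickthrough(notificationBox, blockedPageTitle) {
  return notificationBox.appendNotification(
    blockedPageTitle,             // the "bad site" text, not the page's real title
    "blocked-badware-page",       // internal ID, as discussed in comment 26/27
    null,                         // no icon in this sketch
    notificationBox.PRIORITY_CRITICAL_HIGH,
    []                            // no extra buttons
  );
}

// Minimal stand-in for testing outside the browser; records what was appended.
const fakeBox = {
  PRIORITY_CRITICAL_HIGH: 9,      // assumed constant value, for the sketch only
  notifications: [],
  appendNotification(label, value, image, priority, buttons) {
    const note = { label, value, image, priority, buttons };
    this.notifications.push(note);
    return note;
  },
};
```

Calling `remindAfterClickthrough(fakeBox, someTitle)` records one notification with the `blocked-badware-page` ID, mirroring how the landed code reuses the blocked page's title as the reminder text.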
Comment 18 (Assignee) • 17 years ago
Comment 19 (Assignee) • 17 years ago
(In reply to comment #15)
> The bug is about phishing, not malware. For malware, I am fundamentally
> opposed to a Whatever button, because if there is a lurking exploit I'm not
> secure against, I just got owned. It like having a button on a candy machine
> that says "Danger, do not press, electric shock may occur." There is no sane
> reason for adding such a button, unless you believe that the potential for
> candy is worth the electric shock.
Mike - you know I agree with this sentiment - and that I've pounded on this table plenty. This patch adds click-through for both phishing and malware, though it would be easy to turn it off for malware, if that's what we decided.
I'm becoming persuaded though, that it's better to warn aggressively and allow a way out than to push the persistently curious into disabling the protection. I think the changes in bug 420751 will give most deadly-curious users an out, with that big juicy "Why was this site blocked?" button, and anecdotal information is that google sees low (<1%) click-through rates on their own interstitial, which is much less scary than ours.
Basically, for me, it comes down to whether we can be really confident that allowing click-through or not allowing it is the safer alternative. If we can make a better choice than our users here, and be confident that our false positive rates are vanishingly low, then we should only allow clickthrough for phishing. If we think that the hard lockout on malware is likely to push more people into disabling it wholesale, or if we're at all worried about false-positives, then we should allow a release valve.
What do you think?
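The question above was eventually settled in favor of click-through for phishing only (the phishing-only patch requested for review in comment 30). A minimal JavaScript sketch of that decision logic; the error-code strings here are hypothetical names for illustration, not identifiers from the actual patch, which distinguishes the two cases by which blocked-site page is shown:

```javascript
// Sketch only: decide whether the blocked-site error page offers an
// "Ignore this warning" link. Phishing allows click-through because the
// user can still choose not to enter data; malware stays a hard stop
// because merely rendering the page may compromise the user.
function allowClickthrough(errorCode) {
  switch (errorCode) {
    case "phishingBlocked":
      return true;
    case "malwareBlocked":
      return false;
    default:
      return false;   // unknown block reasons stay locked down
  }
}
```

Flipping malware to `true` later, as comment 30 notes, would be a one-line change if the release-valve argument wins out.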
Comment 20 (Assignee) • 17 years ago
try server builds here, for anyone who wants to test the behaviour:
https://build.mozilla.org/tryserver-builds/2008-03-10_13:35-jnightingale@mozilla.com-jn-clickthrough/
Status: NEW → ASSIGNED
Comment 21 • 17 years ago
(In reply to comment #19)
> anecdotal
> information is that google sees low (<1%) click-through rates on their own
> interstitial, which is much less scary than ours.
There's at least one context where I don't think that number is comparable... If the user navigates to a site they know -- say, via a bookmark or home button -- I would expect them to be much more likely to ignore our Malware warning (compared to clicking a Google result for some random search).
That's probably the hardest case to deal with (ie, user is most compelled to get there), but it's also probably only a small fraction of malware hits.
Comment 22 (Assignee) • 17 years ago
Adding the phishing-only version as well
Comment 23 • 17 years ago
(In reply to comment #19)
> [snip] ... anecdotal
> information is that google sees low (<1%) click-through rates on their own
> interstitial, which is much less scary than ours.
I'm confused; on the Google interstitial page I see (http://www.google.com/interstitial?url=http://www.area51warez.info/ for example), there isn't a click-through link, so how can Google collect statistics on how many people click through?
Comment 24 • 17 years ago
I wondered the same thing. I've never seen a click through, even though the page text indicates one should be able to click through...
Comment 25 • 17 years ago
(In reply to comment #24)
> I wondered the same thing. I've never seen a click through, even though the
> page text indicates one should be able to click through...
I assume Google decided that it isn't a good idea to provide a click-through link to a site reported to be hosting malware, so instead they give you the full URL as text so if you really do want to visit the site, you can select, copy and paste into the location bar to get to the site. (Obviously this approach wouldn't work with Firefox...)
Comment 26 • 17 years ago
(In reply to comment #22)
> Created an attachment (id=308496) [details]
> Same First Pass patch, but only allow clickthrough for phishing
>
> Adding the phishing-only version as well
>
The notification box still gets value "blocked-badware-page". This should probably become "blocked-phishing-page". I'm not sure what the value is used for, though.
Comment 27 (Assignee) • 17 years ago
(In reply to comment #23)
> (In reply to comment #19)
> > [snip] ... anecdotal
> > information is that google sees low (<1%) click-through rates on their own
> > interstitial, which is much less scary than ours.
>
> I'm confused; on the Google interstitial page I see
> (http://www.google.com/interstitial?url=http://www.area51warez.info/ for
> example), there isn't a click-through link, so how can Google collect
> statistics on how many people click through?
It is anecdotal - mentioned in passing by someone who used to work on the team, so I don't think we should put too much stock in it, nor try to delve too deeply into how they collect it. I mostly put it out there as a coarse measure of expectation, in case it helped people get an order-of-magnitude feel for things.
(In reply to comment #26)
> (In reply to comment #22)
> > Created an attachment (id=308496) [details] [details]
> > Same First Pass patch, but only allow clickthrough for phishing
> >
> > Adding the phishing-only version as well
> >
>
> The notification box still gets value "blocked-badware-page". This should
> probably become "blocked-phishing-page". I'm not sure what the value is used
> for, though.
The value is just an internal ID, so I think leaving it generic is fine (badware does sound more like malware - but for an ID like this it doesn't really matter; if I think of something more generic, I'll substitute that in.)
Comment 28 • 17 years ago
(In reply to comment #27)
> (In reply to comment #23)
> > (In reply to comment #19)
> > > [snip] ... anecdotal
> > > information is that google sees low (<1%) click-through rates on their own
> > > interstitial, which is much less scary than ours.
> >
> > I'm confused; on the Google interstitial page I see
> > (http://www.google.com/interstitial?url=http://www.area51warez.info/ for
> > example), there isn't a click-through link, so how can Google collect
> > statistics on how many people click through?
>
> It is anecdotal - mentioned in passing by someone who used to work on the team,
> so I don't think we should put too much stock in it, nor try to delve too
> deeply into how they collect it. I mostly put it out there as a coarse measure
> of expectation, in case people it helped people get an order-of-magnitude feel
> for things.
Thanks for the clarification on where the data comes from; my point was just that it doesn't seem like Google is actually providing a click-through on their interstitial, so I was confused by how they could know how many people are clicking through if there is no way to click through. (Perhaps they were referring to the phishing bubble click-through in Fx 2?)
Anyway, I don't have a strong opinion one way or the other on a click-through link so I'll shut up now; I do think the new warning text is clearer though :)
Comment 30 (Assignee) • 17 years ago
Comment on attachment 308496 [details] [diff] [review]
Same First Pass patch, but only allow clickthrough for phishing
Requesting reviews on the phishing-only patch, since
a) it's less contentious and the actual topic of this bug
b) it's trivial to flip on malware later, if we see additional need.
dcamp isn't a browser peer, but he's arguably the most familiar with the code in question. Tagging gavin as well, in case dcamp has an allergy to reviewing CSS.
Attachment #308496 - Flags: ui-review?(beltzner)
Attachment #308496 - Flags: review?(gavin.sharp)
Attachment #308496 - Flags: review?(dcamp)
Comment 31 • 17 years ago
Comment on attachment 308496 [details] [diff] [review]
Same First Pass patch, but only allow clickthrough for phishing
>Index: browser/base/content/browser.js
>+ var notificationBox = gBrowser.getNotificationBox();
>+ notificationBox.appendNotification(
>+ errorDoc.title,
Would be nice to add a comment that this is the "bad site" text and not the page's actual title.
>Index: browser/components/safebrowsing/content/blockedSite.xhtml
>+ <style type="text/css">
>+ /* Style warning button to look like a small text link in the bottom right */
>+ #ignoreWarningButton {
>+ -moz-appearance: none;
>+ background: transparent;
>+ border: none;
>+ color: white;
Add a comment that this hardcoded color is OK, because this will always be on an element with a hardcoded dark red background in netError.css?
Looks like the referer-sending code is non-functional, so we can just remove it.
Attachment #308496 - Flags: review?(gavin.sharp) → review+
Comment 32 (Assignee) • 17 years ago
(In reply to comment #31)
> (From update of attachment 308496 [details] [diff] [review])
> >Index: browser/base/content/browser.js
>
> >+ var notificationBox = gBrowser.getNotificationBox();
> >+ notificationBox.appendNotification(
> >+ errorDoc.title,
>
> Would be nice to add a comment that this is the "bad site" text and not the
> page's actual title.
Done.
> >Index: browser/components/safebrowsing/content/blockedSite.xhtml
>
> >+ <style type="text/css">
> >+ /* Style warning button to look like a small text link in the bottom right */
> >+ #ignoreWarningButton {
> >+ -moz-appearance: none;
> >+ background: transparent;
> >+ border: none;
> >+ color: white;
>
> Add a comment that this hardcoded color is OK, because this will always be on
> an element with a hardcoded dark red background in netError.css?
Done. I also elaborated the comment at the top of that style block to explain *why* it makes more sense to drastically restyle a button instead of just using a text link.
>
> Looks like the referer-sending code is non-functional, so we can just remove
> it.
Done. Thanks, Gavin.
Attachment #308496 - Attachment is obsolete: true
Attachment #308705 - Flags: ui-review?(beltzner)
Attachment #308705 - Flags: review?(dcamp)
Attachment #308496 - Flags: ui-review?(beltzner)
Attachment #308496 - Flags: review?(dcamp)
Updated • 17 years ago
Attachment #308705 - Flags: review?(dcamp) → review+
Comment 33 • 17 years ago
Comment on attachment 308705 [details] [diff] [review]
Phishing only, review comments addressed
I think the UI is right here, but as discussed over IRC, we should have the button hidden by default using CSS, and show it in the phishing case, which allows a userChrome.css hack to override the hiddenness for people who want a Russian Roulette button in their UI
Attachment #308705 - Flags: ui-review?(beltzner) → ui-review+
Updated • 17 years ago
Whiteboard: [has patch][has reviews]
Comment 34 (Assignee) • 17 years ago
I think carrying over the reviews is appropriate, but I'll get someone to take a look before landing anyhow.
Attachment #308705 - Attachment is obsolete: true
Attachment #308868 - Flags: ui-review+
Attachment #308868 - Flags: review+
Comment 35 • 17 years ago
(In reply to comment #15)
(In reply to comment #33)
Meet me in bug 422410, let's land this change for now as it's definitely more
better.
Comment 36 (Assignee) • 17 years ago
(In reply to comment #35)
> Meet me in bug 422410, let's land this change for now as it's definitely more
> better.
That bug also has an XPI to turn the click-through on for malware. Or userContent.css, if you prefer. Obviously they both depend on the code in this bug being landed.
Comment 37 (Assignee) • 17 years ago
Identical patch, but with tests to confirm that a strange external styling change (in, e.g., netError.css) doesn't have the effect of changing clickthrough visibility.
Attachment #308471 - Attachment is obsolete: true
Comment 38 (Assignee) • 17 years ago
Gavin has reviewed the tests, and I do not believe this new use of the "Ignore this warning" string constitutes a late-l10n change, since it's actually just fixing a regression from FF2 which hid the string when the phishing UI changed to an error page. Landing it.
Comment 39 (Assignee) • 17 years ago
Checking in browser/base/content/browser.js;
/cvsroot/mozilla/browser/base/content/browser.js,v <-- browser.js
new revision: 1.1001; previous revision: 1.1000
done
Checking in browser/components/safebrowsing/Makefile.in;
/cvsroot/mozilla/browser/components/safebrowsing/Makefile.in,v <-- Makefile.in
new revision: 1.8; previous revision: 1.7
done
Checking in browser/components/safebrowsing/content/blockedSite.xhtml;
/cvsroot/mozilla/browser/components/safebrowsing/content/blockedSite.xhtml,v <-- blockedSite.xhtml
new revision: 1.4; previous revision: 1.3
done
RCS file: /cvsroot/mozilla/browser/components/safebrowsing/content/test/Makefile.in,v
done
Checking in browser/components/safebrowsing/content/test/Makefile.in;
/cvsroot/mozilla/browser/components/safebrowsing/content/test/Makefile.in,v <-- Makefile.in
initial revision: 1.1
done
RCS file: /cvsroot/mozilla/browser/components/safebrowsing/content/test/browser_bug400731.js,v
done
Checking in browser/components/safebrowsing/content/test/browser_bug400731.js;
/cvsroot/mozilla/browser/components/safebrowsing/content/test/browser_bug400731.js,v <-- browser_bug400731.js
initial revision: 1.1
done
Status: ASSIGNED → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
Comment 40 • 17 years ago
Verified Mozilla/5.0 (Windows; U; Windows NT 6.0; fr; rv:1.9b5pre) Gecko/2008031304 Minefield/3.0b5pre ID:2008031304
Updated (Assignee) • 17 years ago
Flags: in-testsuite+
Comment 41 • 17 years ago
This unit test is failing in the PGO build unit test runs:
http://tinderbox.mozilla.org/showlog.cgi?log=MozillaTest/1206388465.1206403127.30304.gz&fulltext=1#err127
FAIL - Timed out - chrome://mochikit/content/browser/toolkit/mozapps/downloads/tests/browser/browser_bug_406857.js
Comment 42 • 17 years ago
Sorry, should have mentioned this in the previous comment.
The bug tracking the failing PGO tests is https://bugzilla.mozilla.org/show_bug.cgi?id=424925
Comment 43 • 17 years ago
Don't worry about this for now.
It seems the chrome tests are showing different failures for different runs on the same build.
We'll continue tracking in bug 420073.
Comment 44 • 17 years ago
Based on comment #40, I'm marking status verified.
Status: RESOLVED → VERIFIED
Updated • 11 years ago
Product: Firefox → Toolkit