Closed Bug 422410 Opened 16 years ago Closed 16 years ago

anti malware warning UI should allow pass through / show ignore link

Categories: Toolkit :: Safe Browsing, defect
Severity: normal
Status: VERIFIED FIXED
Target: Firefox 3
People: Reporter: beltzner; Assigned: johnath
Attachments: 2 files

(spun off from bug 400731 which adds an "Ignore this warning" link to the anti-phishing page)

I'm hereby advocating for adding a similar link to the anti-malware warning page. My rationale for doing so comes as a reply to bug 400731 comment 15.

(In reply to bug 400731 comment #15)
> The bug is about phishing, not malware.  For malware, I am fundamentally

What's better? Someone clicking that button or someone turning off the feature?

We are seeing _real_ cases where people have turned the feature off - unlikely to be turned back on - because they simply didn't believe the malware case.

> opposed to a Whatever button, because if there is a lurking exploit I'm not

I, too, am opposed to a "Whatever" button. That's why the intentional design, which Johnathan and I spent a non-trivial amount of time wrestling with, and which Johnathan spent an even less-trivial amount of time implementing in a non-chrome-privileged and docShell-friendly manner, doesn't have a whatever button. It has two very prominent buttons, reflecting standard "OK/Cancel" functionality (where OK = why was this blocked? and Cancel = get me out of here). 

The design uses a small link which allows users to ignore the warning, but then continues to alert them when they hit the page by calling a <notificationbox>.

> secure against, I just got owned.  It like having a button on a candy machine
> that says "Danger, do not press, electric shock may occur."  There is no sane
> reason for adding such a button, unless you believe that the potential for
> candy is worth the electric shock.

If I thought that's what the effect of adding this link would be, I'd be fully opposed to it as well. Believe me, this is not a decision that we came to lightly, but rather one based on what we've seen through beta and nightly usage. There's a lot going on here, and there's a real and valid concern that presently we've designed a feature which users will simply turn off as an annoyance.

The click-through phenomenon has been shown to be based on frequency of exposure to a UI pattern. The standard UI pattern for security confirmations is:

 - user says "I want this" 
 - system asks "Did you want that?" or says "Warning! That's dangerous, are you sure?"
 - user answers "Yes, whatever"

Note that this pattern still exists in our products in cases where the effect of saying "Yes, whatever" isn't by itself harmful (ie: about:config, the install EULA, add-on confirmation, running a downloaded file). We changed this UI pattern for SSL exceptions to *great* effect (based on the number of people who've told us it's now "unintuitive" or "difficult to figure out" how to add exceptions for unsafe certificates).

In anti-phishing (bug 400731) we once again changed that pattern, giving the user two very clear actions which are safe, and then allowing them to ignore the warning. By that point, though, the user has already decided that they don't believe our warning, and is already actively looking for ways to work around it. All the link does is provide a way for them to work around it that one time, as opposed to turning off the feature entirely.

It is my ardent belief and hope that the "Why was this site blocked" button will satisfy those who don't believe that a site like http://www.example.com can get owned. On the chance that it doesn't, though, I'd rather risk exposing the user to some spyware or potentially-plugged* JS exploit than have them turn off the feature entirely.

(* ironically the malware that's blocked by the stopbadware.org list often won't actually affect users running Firefox 3)
Flags: blocking-firefox3?
(In reply to comment #0)
> It is my ardent belief and hope that the "Why was this site blocked" button
> will satisfy those who don't believe that a site like http://www.example.com
> can get owned. On the chance that it doesn't, though, I'd rather risk exposing
> the user to some spyware or potentially-plugged* JS exploit than have them turn
> off the feature entirely.

Why do you think that will end up being better? If they're the type of user that doesn't believe our warning and actively wants to get past it, is the enabled-but-always-clicked-through state actually better than the disabled state?
I'm still conflicted on what the best outcome is here, but I will offer an additional thought:

Even a terminally curious user, who is going to click through every malware warning, is better protected by a click-through than by turning it off, since we'll still block malware loads in invisible iframes for that person.

That has to be weighed against the number of passively curious users who would have clicked through, but are content not to if there's no option beyond "Why was this site blocked?"

Which of those is the larger population?  The answer to that is, I think, the answer to what we do here.
(In reply to comment #1)
> Why do you think that will end up being better? If they're the type of user
> that doesn't believe our warning and actively wants to get past it, is the
> enabled-but-always-clicked-through state actually better than the disabled
> state?

The lack of belief in our warning is tightly related to the relationship users have with established domains. The problems we've seen are for popular sites which users expect to be "safe", like joehewitt.org, example.com, downthemall.net. In all those cases, the sites were pwn3d without the admins even knowing. 

By turning off the feature, the user is turning off all protection for malware and phishing. While the user may have disbelieved us for some domain they frequent, they might not be so sure about some random domain they got to by clicking a link.
The implementation in bug 400731 made it deliberately straightforward to add the clickthrough to the malware page, either as a resolution to a bug like this, or just for expert users.  Using a pref is hard, since the page is now unprivileged, but an addon like this can do it by just unhiding the button.

This can also be accomplished by adding the following line to your userContent.css:

#errorPageContainer > #ignoreWarning > #ignoreWarningButton {
  display: -moz-box !important;
}

(deliberately hyper-specific selector to avoid stomping all over other content)
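Johnath's point that an add-on can enable the clickthrough "by just unhiding the button" can be sketched. This is a hedged model: only the element id "ignoreWarningButton" comes from the selector above; the function names and the stand-in document are illustrative, not the actual add-on code.

```javascript
// Sketch of what an add-on's chrome script might do: reveal the hidden
// clickthrough button on the blocked-site page. Only the element id
// "ignoreWarningButton" comes from the selector above; everything else
// here is illustrative.
function unhideIgnoreButton(doc) {
  const button = doc.getElementById("ignoreWarningButton");
  if (!button) {
    return false; // not a blocked-site page; nothing to do
  }
  // Same effect as the userContent.css rule: override the hiding style.
  button.style.display = "-moz-box";
  return true;
}

// Minimal stand-in for a document, so the sketch runs outside the browser.
function makeFakeDoc(hasButton) {
  const button = { style: { display: "none" } };
  return {
    getElementById: (id) =>
      hasButton && id === "ignoreWarningButton" ? button : null,
    _button: button,
  };
}
```

Against the fake document, the helper flips the button's display style from "none" to "-moz-box"; on a page that isn't the warning page, it is a no-op.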
(In reply to comment #3)

My hunch is that the potential fallout caused by unsure users clicking the "ignore this warning" button just because it exists is greater than the potential fallout caused by people determined to get past the warning turning off the feature entirely. I think the latter group is much smaller than the former, yet more likely to be vocal beta testers.

I think we should try to head off the risk of users disabling the feature for their "favorite sites" by clearly explaining why we're blocking the favorite site, not by allowing them to just click through despite our warning. I think the improved "Why is this site blocked?" warning is a good step in that direction, and I know you have plans and ideas for further improvement there.
With IE View Extension installed the user has a very easy and very unsafe way to get to the malware page. Right click and select "View this page in IE". Just think of what will happen when he views a malware page in Internet Explorer...
(In reply to comment #5)
> My hunch is that the potential fallout caused by unsure users clicking the
> "ignore this warning" button just because it exists is greater than the
> potential fallout caused by people determined to get past the warning turning
> off the feature entirely. I think the latter group is much smaller than the
> former, yet more likely to be vocal beta testers.

What do you base this hunch on?

We have data (from a controlled user-study, I can dig up the link if you want) showing that the click-through button available in the anti-phishing warning bubble was used less than 1% of the time. Again, this is in a case where the click-through is in a non-standard position as compared to the normal click-to-confirm UI pattern.

> I think we should try to head off the risk of users disabling the feature for
> their "favorite sites" by clearly explaining why we're blocking the favorite
> site, not by allowing them to just click through despite our warning. I think
> the improved "Why is this site blocked?" warning is a good step in that
> direction, and I know you have plans and ideas for further improvement there.

I agree that more information is better, and indeed that's why the button changed to "Why was this site blocked?" since that's the question that users will actually be asking. I think that will also be the button that users click when frustrated. And while I am working with stopbadware.org to get better information placed on that report page, if the user still isn't satisfied I would rather they have a one-time clickthrough at their disposal than the alternatives, which are turning off the feature or loading the page in another browser.

Other possible mitigation strategies that have been considered are:

 - spring load the safebrowsing pref, so it always defaults back to on in new sessions
 - add a new state to the pref which has "allow clickthroughs"
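Neither strategy is specified further in the bug; as a sketch of the proposed semantics (the constants and names below are hypothetical, not actual Firefox prefs):

```javascript
// Toy model of the two mitigation ideas above. The values and names are
// hypothetical; they are not actual Firefox prefs.
const PREF_OFF = 0;             // malware blocking disabled
const PREF_ON = 1;              // block, no clickthrough (current behavior)
const PREF_ON_CLICKTHROUGH = 2; // block, but offer the "ignore" link

// Second idea: a third pref state decides whether the link is shown.
function shouldShowIgnoreLink(pref) {
  return pref === PREF_ON_CLICKTHROUGH;
}

// First idea: "spring-loading" means every new session starts with
// blocking on, whatever the user set last session.
function effectivePrefAtSessionStart(savedPref) {
  return savedPref === PREF_OFF ? PREF_ON : savedPref;
}
```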
(In reply to comment #7)
> (In reply to comment #5)
> > My hunch is that the potential fallout caused by unsure users clicking the
> > "ignore this warning" button just because it exists is greater than the
> > potential fallout caused by people determined to get past the warning turning
> > off the feature entirely. I think the latter group is much smaller than the
> > former, yet more likely to be vocal beta testers.
> 
> What do you base this hunch on?
> 
> We have data (from a controlled user-study, I can dig up the link if you want)
> showing that the click-through button available in the anti-phishing warning
> bubble was used less than 1% of the time. Again, this is in a case where the
> click-through is in a non-standard position as compared to the normal
> click-to-confirm UI pattern.

Do you think that will carry over to the malware-on-a-hacked-site warning?  I'd imagine that telling the user "this isn't the site you meant to visit" probably causes a different reaction than "this site you visit regularly for content you want has suddenly been labeled a malicious site".

What dcamp said raises the concerns I have.  I believe that in the phishing case, where all of the other indicators (Location bar, etc) would show something random, or an IP address, there's plenty of information presented that would back up the assertion that this isn't where you're trying to go.

On the other hand, if you know that foo.com is where you get your content, and we suddenly tell you that foo.com is unsafe, there is no additional information which supports the assertion.  It's a "you're going to have to trust us" message, rather than a "look around, you're not in Kansas anymore" message like antiphishing was and continues to be.
(In reply to comment #8)
> Do you think that will carry over to the malware-on-a-hacked-site warning?  I'd
> imagine that telling the user "this isn't the site you meant to visit" probably
> causes a different reaction than "this site you visit regularly for content you
> want has suddenly been labeled a malicious site".

That's a good point. And I actually do think the change made in bug 400731 adding a "Why was this site blocked?" button will deflect a great deal of the "frustrated user" class, as the report page says:
 - why a page was blocked
 - who has been informed

So you're right, I don't know if the clickthrough rates from studies about phishing are transferable.

However, I would ask you this question:

Do you really think that not including a click through will actually stop someone who was going to a site where they normally get their content? Using mconnor's example, if foo.com is where they normally go, and we suddenly say foo.com is unsafe, do we really think that a frustrated user will stop there? Or do we think that they would pick one of the following alternatives:

 - open the site in another browser
 - turn off the security feature

Do we think those other alternatives are better or worse than a one-time click through?
(In reply to comment #10)
> However, I would ask you this question:
> 
> Do you really think that not including a click through will actually stop
> someone who was going to a site where they normally get their content? Using
> mconnor's example, if foo.com is where they normally go, and we suddenly say
> foo.com is unsafe, do we really think that a frustrated user will stop there?
> Or do we think that they would pick one of the following alternatives:
> 
>  - open the site in another browser
>  - turn off the security feature
> 
> Do we think those other alternatives are better or worse to a one-time click
> through?

I do think it would stop a significant percentage of those users, yeah.  It wouldn't stop all, and the ones it doesn't stop will skew early-adopter-tech-savvy and make a disproportionate amount of noise about it, but yes, I am basically certain that it will stop some significant portion of them, even on sites they have a prior relationship with.

I also think that comparing "turning it off" to a "one-time click through" mis-states things a little here. The argument is that these people will do this whenever sites they think they "can trust" are blocked, so it's sort of an every-time-clickthrough.  As I mention above, even a serial click-through-addict still gets protected on invisible iframes, but we shouldn't make this decision based on a false dilemma like "is it better to let one bad site through or let them all through."

Again, if we turn clickthroughs on, some number of people N who wouldn't have visited the page, now will.  And without the clickthrough, some other number of people, M, will turn off the protection or use another browser.  We can shrink M somewhat with the addon/userContent to turn the clickthrough back on, but at the end of the day, we should ask ourselves whether N or M is the group whose needs carry the day, or find some way to make both sides happy.

If a site is blocked and clicked through, there's no "you just got 0wned!" message to teach them that they shouldn't have clicked through.  At best they'll notice an infection at some point and hopefully correlate it to the blocked page that they clicked through.  At worst they'll never notice it (or maybe that particular exploit didn't even affect their specific browser/os combination, and there's nothing TO notice).

So if joehewitt.org is blocked and the user decides to skip through, there will be no immediate feedback that they did the wrong thing.  At worst, they might conclude that the site was incorrectly blocked, and take the next warning even less seriously.

I'm not sure if that really matters to this bug (the same thing would happen if they disabled the check globally), but might be worth considering.
(In reply to comment #12)
> If a site is blocked and clicked through, there's no "you just got 0wned!"
> message to teach them that they shouldn't have clicked through.  At best

Actually, there would be. Bug 400731 added a <notificationbox> which would report to users that the site had been reported as an attack site.
(In reply to comment #13)
> (In reply to comment #12)
> > If a site is blocked and clicked through, there's no "you just got 0wned!"
> > message to teach them that they shouldn't have clicked through.  At best
> 
> Actually, there would be. Bug 400731 added a <notificationbox> which would
> report to users that the site had been reported as an attack site.

That's not a "you just got owned" message, that's a "you ignored our warning" message, and I don't think it will "teach them that they shouldn't have clicked through" when they encounter problems later on (they're likely to remember the initial warning that they clicked through more than the notification anyways, I think).

I think dcamp's point is valid - if there are no immediate "bad" effects other than "Firefox freaking out", some users will probably assume that it was just a false alarm, and might conclude that the malware protection is just a nuisance. I'm not sure how much relevance it has to the debate about whether or not to allow click-through, though - I suspect that the set of people who would jump to that conclusion are likely to get there whether it requires clicking through or disabling the protection entirely, though it may be that the added work required to disable the protection vs. clicking a link will make them stop and think it through a bit more.
I think, to use johnath's groupings, we need to target group N more than M.  I think the M group is far more likely to skew towards having up-to-date software (Firefox, plugins, OS patches) and is less likely to get burned by not having malware protection active, as a general but not universal rule.

As for the notificationbox, if it's a real attack site, the notification box is really just a "there's a nonzero chance your machine is now compromised.  Have a nice day." message.  And if there's no obviously successful attack, it'll be perceived as a false positive even if the user got exploited, because odds are they won't notice anything short of triggering an exploitable crash, and even then how many users understand that crash bugs can be exploited?  It's not like phishing, where you can realize it's a fake and walk away; here you just rolled the dice and you probably won't know what you rolled.
I wonder if adjusting the wording of the malware warning (and click-thru, if any) would help here... [Apologies if I'm veering off into the weeds here.]

The problem this bug is about stems from the cognitive dissonance experienced by users when we tell them that snuggle-kittens.com is an "attack site". Kittens couldn't possibly be evil, and the user may have visited the site before, so the Firefox warning may sound alarmist/wrong... *clickthru*

I think the original wording was written in the mindset that malware protection saves users from accidentally visiting designed-to-be-evil sites, and the bit about good sites being compromised reads like an afterthought. That distinction wouldn't really be needed if the warning described the site in a less-judgmental way.

As a rough example, consider a message modeled on a "virus detected" warning, or disease/infection in general. That might be a more familiar concept to novices, and avoids judging the site -- maybe it's good, maybe it's bad, but it has a case of the flu and you probably don't want to touch it. (Bonus points for a warning icon with animated mucus.) A small "infect me anyway" [sic] click-thru link would be an option; all the issues raised in this bug still apply, but it feels like this would be a higher mental hurdle for users than a generic "ignore this warning".
Justin - before digging in too deeply on wording, have you seen the page since bug 420751 landed recently, with updated strings, or are you working off the older strings?  The new strings for malware (attachment 307316 [details]) do say something along the lines of "Some sites intentionally distribute malware, but many sites are compromised without the knowledge of their owners", which I think gets to some of what you're saying around making it clear that even snuggle-kittens.com can be attacked.
(In reply to comment #15)
> I think, to use johnath's groupings, we need to target group N more than M.  I
> think the M group is far more likely to skew towards having up to date software
> (Firefox, plugins, OS patches) and are less likely to get burned by not having
> malware protection active, as a general but not universal rule.

I take it you're asserting that the people who are less likely to take a browser warning at its word are those who consider themselves a little bit more "hax0r"-ish. I would tend to agree with that assertion, and it meshes with the early reports we've seen (media, technical folk) but I'm not sure it's an entirely fair mapping.

I think, for example, this will happen a lot:

<scenario>
Joe reads an online message board about video gaming every day. That message board allows arbitrary HTML content. One day a malicious person uses the arbitrary HTML content loophole to install some malware. Joe goes to that site as part of his daily routine, and is blocked.
</scenario>

I sincerely, sincerely hope that Joe will click "Why was this site blocked" and believe that the website had been compromised. I think that, like we've seen with the malware warning, Joe will believe the warning page and choose not to ignore our warning.

I worry that Joe will simply state that the malware feature is broken, and I worry that he'll cast about for workarounds. I worry only because the workarounds are to load the site in another web browser, statistically likely to be IE, and statistically likely to be *less secure* than Firefox 3, or to turn off the feature.

> As for the notificationbox, if its a real attack site, the notification box is
> really just a "there's a nonzero chance your machine is now compromised.  Have
> a nice day." message.  And if there's no obviously successful attack, it'll be

True. It's just more reminder that users should be cautious on the site. We block sites hosting many types of attacks, not just ones that host attacks that occur onload; some require clicking, and I do believe it's worth having the reminder there.

What dcamp was asking for ends up being impossible. If we could show a message whenever a user got exploited, we would also be able to prevent the exploit in the first place, and hopefully would do that instead of showing a "you got pwn3d" message. :)

(In reply to comment #14)
> or disabling the protection entirely, though it may be that the added work
> required to disable the protection vs. clicking a link will make them stop and
> think it through a bit more.

I fully agree that the added work will give someone pause. It will also keep the feature off forever, meaning that they'll only have to take that pause once. The click through ensures that they'll have to make that decision each and every time.

(In reply to comment #11)
> I also think that comparing "turning it off" to a "one-time click through"
> mis-states things a little here. The argument is that these people will do 
> this
> whenever sites they think they "can trust" are blocked, so it's sort of an

Yes, this is true, but I feel pretty comfortable in asserting that a large portion of malware exists on sites which people get to by third-party links, web searches, IM, email, etc.

So far I haven't seen much in this debate that goes beyond:

providing a click through
... is good because it means users keep the feature on, don't use other browser
... is bad because it will increase the number of people visiting sites reported as having malware

not providing a click through
... is good because it makes it very difficult to continue on to that page
... is bad because the only workarounds are worse than viewing the site in Firefox*

(* based on the belief that Firefox is more secure than other browsers)

Does anyone have a different way of slicing this, or other information that can be brought to bear?
(In reply to comment #18)
> Does anyone have a different way of slicing this, or other information that can
> be brought to bear?

I do, I do!

Re-reading this bug, I wonder if the stopbadware.org guys would let us put the XPI that Johnath posted somewhere on their site. Or link to a page we could host with that XPI and more disclosure about what it allows and why we think users need to be VERY REALLY CAREFUL about using it.
(In reply to comment #17)
> Justin - before digging in too deeply on wording, have you seen the page since
> bug 420751 landed recently, with updated strings

Yes. I think that is an improvement, but that there is room for more. Perhaps an unexpected warning page will be read by users, but I'd suspect there will be a tendency to just see  "Attack site blah blah blah". My suggestion was 1/2 that something like "Virus blah blah blah" would carry more weight, and 1/2 that something like "Infect me anyway" is a succinct, somber hurdle.

There are compelling arguments for and against the click-thru, so I was trying to think of ways to shake up the warning so that either (1) users wouldn't *want* a click-thru (thus eliminating the need to provide one) and/or (2) the click-thru becomes a very unpalatable choice to the naive.

(In reply to comment #18)


(In reply to comment #15)

> Does anyone have a different way of slicing this, or other information that
> can be brought to bear?

It's probably worth noting that there doesn't seem to be evidence that false positives are a significant problem (which would argue for adding a click-thru). Although, to be fair, the vagueness of the stopbadware.org reports makes evidence one way or the other hard to come by.
(That last bit was a reply to comment 18, I don't know where the "comment 15" thing came from!)
(In reply to comment #18)
> > As for the notificationbox, if its a real attack site, the notification box is
> > really just a "there's a nonzero chance your machine is now compromised.  Have
> > a nice day." message.  And if there's no obviously successful attack, it'll be
> 
> True. It's just more reminder that users should be cautious on the site. We
> block sites hosting many types of attacks, not just ones that host attacks that
> occur onload; some require clicking, and I do believe it's worth having the
> reminder there.
> 
> What dcamp was asking for ends up being impossible. If we could show a message
> whenever a user got exploited, we would also be able to prevent the exploit in
> the first place, and hopefully would do that instead of showing a "you got
> pwn3d" message. :)

I don't think dcamp is asking for a "let the user know when they've been hacked" message, he's pointing out that the impossibility of that task makes it very easy for users to misinterpret their situation, and lose confidence in malware protection, because "everything looks okay."  *Because* we can't have a "you got pwn3d" message, any clickthrough needs to anticipate the next result, which is almost inevitably, "Stupid Firefox, this site looks fine."

> (In reply to comment #11)
> > I also think that comparing "turning it off" to a "one-time click through"
> > mis-states things a little here. The argument is that these people will do 
> > this
> > whenever sites they think they "can trust" are blocked, so it's sort of an
> 
> Yes, this is true, but I feel pretty comfortable in asserting that a large
> portion of malware exists on sites which people get to by third-party links,
> web searches, IM, email, etc.

Right - there seems to be a good mix, and I agree that on sites users *don't* have a trust relationship with, they are much less likely to click through the warning, since there's much less competing information there.

> 
> So far I haven't seen much in this debate that goes beyond:
> 
> providing a click through
> ... is good because it means users keep the feature on, don't use other browser
> ... is bad because it will increase the number of people visiting sites
> reported as having malware
> 
> not providing a click through
> ... is good because it makes it very difficult to continue on to that page
> ... is bad because the only workarounds are worse than viewing the site in
> Firefox*
> 
> (* based on the belief that Firefox is more secure than other browsers)
> 
> Does anyone have a different way of slicing this, or other information that can
> be brought to bear?

With the exception of a second footnote, that we can provide the XPI as a workaround that is not substantially worse than viewing the site in Firefox, I would agree with this, which is another way of stating what I was saying with my N and M groups (who could arguably be better named).
Maybe the action can be changed, so that instead of opening the link in the same tab, another Firefox instance is opened with the flags -no-remote and -safe-mode, and with unsafe plugins, JavaScript, and authenticated sessions disabled.
> I don't think dcamp is asking for a "let the user know when they've been
> hacked" message, he's pointing out that the impossibility of that task makes it
> very easy for users to misinterpret their situation, and lose confidence in
> malware protection, because "everything looks okay."  *Because* we can't have a
> "you got pwn3d" message, any clickthrough needs to anticipate the next result,
> which is almost inevitably, "Stupid Firefox, this site looks fine."

And in many cases, for that user it _is_ fine.  The attacks will be targeted at browsers other than the one they're using, for most of our users: older Firefoxes or non-Firefox browsers.  "You can't browse to this page because if you were using another browser it would hurt you" doesn't feel like something that we should offer without an option other than "take the batteries out of the smoke detector".

If we had Johnath's fork-and-exec private browsing mode, and we were running on an operating system that supported rights restriction, then we could do a "process in defensive configuration" thing, which is probably the right capability to offer the user.  (It might also make the user think that the page is safe, because they got a different ad in the hacked ad-network rotation, and the new one doesn't hurt.)
(In reply to comment #24)
> Firefoxes or non-Firefox browsers.  "You can't browse to this page because if
> you were using another browser it would hurt you" doesn't feel like something
> that we should offer without an option other than "take the batteries out of
> the smoke detector".

Indeed. And I fear an even more common path will be for the user to fire up one of those less secure browsers, at which point they are more likely to be owned than if they had just been let through.

Building on my earlier suggestion, we could have the target of "Ignore this warning" be the AMO page which features the add-on that turns that button into a true ignore-warning button. That feels a little cyclical to me, though. :)
[Implementation note: if we decide to do this, we'll need to update the tests here, too:  http://mxr.mozilla.org/mozilla/source/browser/components/safebrowsing/content/test/browser_bug400731.js ]
How about a whitelist?
The UI could be similar to the ssl_error UI, where there are a few scary warnings and an "Add an exception" link.
I'm not sure a whitelist is appropriate, as the time window in which a site is affected is important. That you trust yourfavoritesite.com isn't really the issue here, it's whether or not you want to risk getting infected with malware, which yourfavoritesite.com has been reported to feature at that particular moment in time.

I honestly do go back and forth on this (Johnath, Gavin, Madhava and I had a pretty long debate about it yesterday) and am not sure what the right answer is. My instinct is to include the small, unobtrusive warning, though a third pref state for malware (ie: on/allow clickthrough/off) may be the best way to expose that.

My goal is to give a less-dangerous workaround than "open a less secure browser" or "turn off the protection completely" to the user who has decided, come hell or high water, that they will go to the site despite our warning. The cost is potentially allowing someone who *would* have been stopped a route to infecting themselves. It's a tricky tradeoff.
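The time-window objection to a whitelist above can be made concrete with a toy model (all names here are illustrative): a permanent exception granted during one listing keeps bypassing the warning if the site is later cleaned and then re-infected, while a one-time clickthrough is consumed on use, so the next listing warns again.

```javascript
// Toy model contrasting a permanent whitelist with a one-time
// clickthrough against a blocklist that changes over time.
// All names are illustrative.
const blocklist = new Set();     // sites currently reported as attack sites
const whitelist = new Set();     // permanent "Add an exception" entries
const oneTimePasses = new Set(); // clickthrough grants, consumed on use

function mayLoad(url) {
  if (!blocklist.has(url)) return true;       // not currently listed
  if (whitelist.has(url)) return true;        // permanent exception applies
  if (oneTimePasses.delete(url)) return true; // one-time pass, now spent
  return false;                               // show the warning page
}
```

In this model, whitelisting foo.com during one infection silently bypasses every later listing of foo.com, whereas a one-time pass for bar.com lets exactly one load through.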
(In reply to comment #28)
> I honestly do go back and forth on this (Johnath, Gavin, Madhava and I had a
> pretty long debate about it yesterday) and am not sure what the right answer
> is. My instinct is to include the small, unobtrusive warning, though a third
> pref state for malware (ie: on/allow clickthrough/off) may be the best way to
> expose that.
> 
> My goal is to give a less-dangerous workaround than "open a less secure
> browser" or "turn off the protection completely" to the user who has decided,
> come hell or high water, that they will go to the site despite our warning. The
> cost is potentially allowing someone who *would* have been stopped a route to
> infecting themselves. It's a tricky tradeoff.

It is, and I go back and forth on this a lot too.  It's a closer call than it seems at first blush.  And if it is such a close call, if conscientious people trying to do the right thing can't come down strongly on either side, then I think I might be leaning towards saying "we should not make decisions for the user when we can't decide it any better than they can."  If we can't be reasonably confident that blocking the clickthrough is a net-safer alternative, then maybe we need to let the users make the call, after arming them with the information.

The lingering question I have is whether it would be enough to make the click-through available for users who seek it out, e.g. via an add-on, versus including it by default.  I don't think linking to the add-on from the vanilla product really flies - it creates the same temptation to clickthrough in people who otherwise might not.  But if we thought it was viable to just trust frustrated and technology-confident users to find the XPI on AMO, then we could use that as our pressure-release.

If we don't think that's enough, then it feels like a clickthrough (possibly mediated by a pref) might be the way to go, though I'm not sure how a pref would work on an unprivileged page.
At this time we're not going to block final release on this.
Flags: blocking-firefox3? → blocking-firefox3-
(In reply to comment #24 et al)
> And in many cases, for that user it _is_ fine.  The attacks will be targetted
> at browsers other than the one they're using, for most of our users: older
> Firefoxes or non-Firefox browsers.  "You can't browse to this page because if
> you were using another browser it would hurt you" doesn't feel like something
> that we should offer without an option other than "take the batteries out of
> the smoke detector".

Is it not possible to only block a site when it's known that *this* version of Firefox on *this* platform is affected?

I think if users were aware that this was the case – for example, if instead of saying “this site has been reported as an attack site…”, the warning read “this site has been reported as affecting Firefox 3.0.2 running on Linux…” – then they'd trust the warning a lot more and wouldn't want to click through.
I'm in favour of a click-through of some sort; however, I wonder if it's worth considering the user's History and Bookmarks when determining exactly what to show them. In the case of a site they've never visited before, the click-through might ask for confirmation, then let them proceed.

In the case of a site they've previously visited, however, there would be an additional warning along the following lines: 

"Although you've previously visited this site, it has now been listed as a source of malware. It may have been hacked without the site owner's knowledge. It is recommended that you visit a cached version of this page, or try again at a later date"

There should be a link to the Google (or other) cache for the page (where possible, and assuming the cached version is known to be safe), another chance to get to the stopbadware.org page, as well as an option to really continue to the site if the user wants to.

By adding an extra step such as this the user will be made more explicitly aware that the previously safe snuggle-kittens.com may have been compromised. If they're only looking for information that is on the site, a cached version may meet their requirements.
Idea: can we ship Firefox with everything in place for providing the link, except for a CSS rule which hides it? Then, if we get lots of negative feedback (if, for example, we find our malware site source is providing too many false positives) we can ship the removal of that CSS rule in a security patch as a low-code-risk change.

It's easier to start strict and loosen up than it is to start loose and tighten up later.

Gerv
(In reply to comment #33)
> Idea: can we ship Firefox with everything in place for providing the link,
> except for a CSS rule which hides it? Then, if we get lots of negative feedback
> (if, for example, we find our malware site source is providing too many false
> positives) we can ship the removal of that CSS rule in a security patch as a
> low-code-risk change.

Good news - that's what's currently happening.  :)

Bug 400731 implemented the clickthrough generically, but used a CSS rule to hide it in the malware case.  The extension linked to in this bug unhides it.
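The shape of that approach, as Gerv suggested and as bug 400731 implemented it, is roughly the following. The selector and class names here are assumptions for illustration; the actual markup and rules in blockedSite.xhtml may differ.

```css
/* Hypothetical sketch: the clickthrough link exists in the shared
   blocked-site page for both phishing and malware, and a rule like
   this hides it in the malware case only. Unhiding it (as the
   extension does, or as this bug's patch does) is then a
   low-code-risk change. Selector names are invented. */
#ignoreWarning {
  display: block;
}

body.malware #ignoreWarning {
  display: none;
}
```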
This is the code to always show the clickthrough.  Public response, although obviously selection bias exists, is pro-clickthrough, and I've ruminated plenty on it myself as well ( http://blog.johnath.com/index.php/2008/03/17/should-malware-warnings-have-a-clickthrough/ ).  I don't love it, but I think we should do it.
Assignee: nobody → johnath
Status: NEW → ASSIGNED
Re-nominating, since it's not just a discussion question any more.
Flags: blocking-firefox3- → blocking-firefox3?
Attachment #313705 - Flags: review?(gavin.sharp)
I wish the text on the page were clearer about the fact that simply *visiting* the site is dangerous.
Both pages that are used to demonstrate malware and phishing are too much alike.

Maybe you could make the attack page more "friendly": not saying "I'm a big bad site", but rather "I'm your standard blog site / favorite homepage; I seem very friendly and look much the same as yesterday, but today I infected your computer with some bad trojan."

I think this might help demonstrate the difference between malware and phishing. Judging by the sample pages, it looks like you're talking about the same thing. The http://www.mozilla.com/firefox/its-an-attack.html page even says "It's a trap!", copied from the http://www.mozilla.com/firefox/its-a-trap.html page.
Attachment #313705 - Flags: review?(gavin.sharp) → review+
(In reply to comment #38)
> Both pages that are used to demonstrate malware and phishing are too much
> alike.

Onno, you may want to take these comments to bug 423912.


(In reply to comment #37)
> I wish the text on the page were clearer about the fact that simply *visiting*
> the site is dangerous.

I agree.  I think there are a couple of wording improvements we could make.  We're past string freeze now; I don't suppose I could impose on you to open a new bug to track that?
Filed bug 427665, "Malware warning page should make it clear that merely *visiting* the page is dangerous".
After a lot of debate, mconnor and I decided to take this for final. User choice ends up trumping.

I agree with bug 427665, but it feels late to take a string change without fully negotiating with the l10n community. Maybe for 3.0.1?
Flags: blocking-firefox3? → blocking-firefox3+
Marking this bug checkin-needed since it has the necessary reviews/approvals, but I am travelling and may not have a contiguous block of time to land it and watch the tree.  If someone gets to this before I can, thanks!
Keywords: checkin-needed
Checking in browser/components/safebrowsing/content/blockedSite.xhtml;
/cvsroot/mozilla/browser/components/safebrowsing/content/blockedSite.xhtml,v  <--  blockedSite.xhtml
new revision: 1.5; previous revision: 1.4
done
Checking in browser/components/safebrowsing/content/test/browser_bug400731.js;
/cvsroot/mozilla/browser/components/safebrowsing/content/test/browser_bug400731.js,v  <--  browser_bug400731.js
new revision: 1.2; previous revision: 1.1
done
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Keywords: checkin-needed
Resolution: --- → FIXED
Target Milestone: --- → Firefox 3
This is still broken - you can't view source. A site of mine (reddragdiva.co.uk) got hit by toxic comment spam, so has been flagged. I'm trying to clean it up using Minefield. I can go to pages with the "Ignore this" clickthrough ... but when I try to view source, it won't show it to me - just the two buttons and the clickthrough as on the page, but none of them actually do anything. Try it and see!

Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9pre) Gecko/2008041204 Minefield/3.0pre
David Gerard, see bug 397937.
Verified FIXED using:

Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9pre) Gecko/2008042705 Minefield/3.0pre

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.4; en-US; rv:1.9pre) Gecko/2008042704 Minefield/3.0pre

-and-

Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9pre) Gecko/2008042704 Minefield/3.0pre
Status: RESOLVED → VERIFIED
Flags: in-litmus?
Just my 2 cents as a user: I just don't want such a feature in my favorite browser if I am not able to control it. Why?

Not because I am biased. Or maybe being biased just means that I want to be in control, and that is not really what biased means, is it?

But simply because I don't trust Google on this. That corporation is actively collaborating with censorship authorities in places like China or the EU. Just look for notices like the following on any European Google site:

" En réponse à une demande légale adressée à Google, nous avons retiré 8 résultat(s) de cette page. Si vous souhaitez en savoir plus sur cette demande, vous pouvez consulter le site ChillingEffects.org."

Translation of the above message: "In response to a legal request submitted to Google, we have removed 8 result(s) from this page. If you wish to learn more about this request, you can visit the site ChillingEffects.org."

That search has nothing to do with terrorist or pedophile activity; it is purely about politics. So it is pure political censorship, here in Switzerland where I live, and not in China! And all the other European Google sites give exactly the same censored results.

The same search on google.com (USA) shows all the results, without censorship. In consequence, and since I am not living in the American dream (dixit Madonna), I just don't trust Google in my country, any more than in many other countries.

So, I just don't want to hand control of my browser over to someone else.

Many thanks for the good work, and please, just don't break it with features that take control away from the user.

Getting a warning with an explanation is a very good feature. But just let the users stay in control. I am happy that this is what will happen.
(In reply to comment #48)
> Just my 2 cents contrib as user. I just don't want such a feature in my
> favorite browser if I am not able to control it. 

Kudos, Dominique, for your comments. Sorry to come in late. I am a grateful user and not an expert, but I just happened to see Johnathan Nightingale's blog pointing here, and a W3C draft in progress. It would be scary if absolute blocks were even thought to be required (RFC 2119 MUST). I also read tonight that Google has 2.5 years of experience with malware. I would guess and hope they have the help of people with 25 years of experience. Good luck to the Firefox team on your upcoming release.
Implemented in litmus test case 6988 awhile back.

Link: https://litmus.mozilla.org/show_test.cgi?id=6988
Flags: in-litmus? → in-litmus+
(In reply to comment #18)
> Does anyone have a different way of slicing this, or other information that
> can be brought to bear?

There is a huge amount of anecdotal data suggesting that certain sites (such as breitbart.com) are constantly being falsely reported to Google and antivirus product vendors as hosting malware or phishing, as a form of DoS attack by their political opponents.  Some of their users are starting to respond in kind by reporting the same falsehoods about left-leaning sites.  This is why this fix was needed, and a whitelist would be even better.
Product: Firefox → Toolkit