Closed Bug 468313 Opened 16 years ago Closed 13 years ago

Having to click 'Ignore this warning' for every page on the suspected 'Attack Site' is seriously annoying

Categories

(Toolkit :: Safe Browsing, defect)

Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED
Tracking Status
status2.0 --- wanted

People

(Reporter: hogwaump, Assigned: mmm)

References

Details

(Whiteboard: [sg:want?])

Attachments

(4 files, 1 obsolete file)

User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4

I can see where this might be a useful feature for some situations, but it is causing me a problem. I use a particular site that Google has labeled as dangerous, apparently because some third-party sites that place ads or whatever on the main site distributed malware in the distant past. I had to click on 'Ignore this warning' and wait for your incredibly stupid slow-scroll-up warning page exit for every page that I navigated to on the site. This is ridiculous. If I want to use the site in question and still use the feature, then I should be able to do so. You need to dump the slow-scroll nonsense and add a whitelist feature.

Reproducible: Always

Steps to Reproduce:
1. Go to any site listed by Google as a reported attack site
Actual Results:  
Full-page, highly obnoxious warning rather than display of the site in question

Expected Results:  
Given that I have NoScript installed, it would have been better to pop up a warning ONCE ONLY and never, ever implement anything that does a painfully slow scroll. If there had been no option to kill the attack site warning altogether, I would have switched back to Internet Explorer (yuck).
The "slow-scroll nonsense" is covered by other bugs, such as bug 355965 (which is already fixed for Firefox 3.1) and bug 456620.

The rest of your complaint seems valid to me.  Making you click through a warning for each page you load from the site doesn't really improve your security.  At best, it makes it more likely that you'll contact the webmaster to get them to clean up the site, or discourages you from clicking "Ignore this warning" in the future.  At worst, as you mentioned, you'll disable malware protection entirely, making it more likely that *another* site will own you.

Maybe "Ignore this warning" should add the site to a temporary whitelist that goes away at the end of the Firefox session.

Btw, I think you overestimate how well NoScript protects you.  It won't prevent a web page from exploiting a memory safety bug in Firefox's layout code or image decoders.  A smart attacker might not have to use Flash or JavaScript at all in taking over your computer.
Status: UNCONFIRMED → NEW
Component: General → Phishing Protection
Ever confirmed: true
OS: Windows XP → All
QA Contact: general → phishing.protection
Hardware: PC → All
Summary: New 'Attack Site' feature is seriously annoying to the point that I have turned it off → Having to click 'Ignore this warning' for every page on the suspected 'Attack Site' is seriously annoying
Whiteboard: [sg:want?]
We could perhaps store the specific hash in an in-memory whitelist. Mostly the hashes match at the domain level (as in this case) so whitelisting it will cover the site. But if there are multiple badnesses on there we'd want to make sure the user knows about them, so I'd want the user to have to whitelist the hash of each thing we're objecting to, not just let him loose on the site.

Of course the user shouldn't see or know about the hashes; that's an implementation detail. It might be helpful, however, if the malware blocking page included the URI we were blocking on. That way, if there's injected content and the user's saying 'But I know BigSite or MyFriendsBlog is not evil', seeing that our warning says "Page tries to load malicious content from hackers.ru" might stop them from clicking through.

That UI suggestion is totally tangential to this bug and I don't mean to hijack it, but if we did implement such a UI then it would be somewhat easier to implement a sensible whitelisting strategy. Otherwise on a multiply-infected site the user will be annoyed, thinking "I just whitelisted this site!"
(1) The version of Firefox I am using is 3.0.4. Using the built-in 'check for updates' feature does not find any updates available. I imagine 3.1 is in beta, perhaps?

(2) Regarding generally OK sites that have third-party bad actors hanging about, one protection strategy is having the good sense to never click on ads and such: basically, stay on the main site and avoid any links that go off-site. Informing the site owner/operator of the problem is a good idea also, but is not always possible. At any rate, the site in question apparently was flagged by Google because it had a couple of ads pointing to other sites hosting malware. This is more a problem with Google, really, than with Firefox, I suppose. At any rate, the temporary whitelist sounds like a good idea. FYI, the main site in question was already on the whitelist; changing the whitelist does not appear to affect the malware warning in its present incarnation on my machine. I don't suppose it would be very feasible to try to pick through Google's report to see which sites are actually causing the problem and to decide whether or not the report warrants a malware warning at the site root level. Is it really possible for the simple displaying of an ad to cause a malware infection? Is this something that typical antivirus software would/should catch? How might one determine whether or not an infection has occurred?

Excerpt from the Google report in question:

    Of the 107 pages we tested on the site over the past 90 days, 2 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2008-12-05, and the last time suspicious content was found on this site was on 2008-11-07.
    Malicious software includes 3 exploit(s). Successful infection resulted in an average of 0 new processes on the target machine.
    Malicious software is hosted on 3 domain(s), including mmcounter.com/, filmmultimediaonline.cn/, bestpicturemedia.cn/.
    2 domain(s) appear to be functioning as intermediaries for distributing malware to visitors of this site, including vxhost.cn/, filmmultimediaonline.cn/.
    Over the past 90 days, empornium.us did not appear to function as an intermediary for the infection of any sites.
(In reply to comment #3)
> Using the built-in 'check for updates' feature does not find any updates
> available. I imagine 3.1 is in beta, perhaps?

It is; beta 2 was released this week. If you want to help us test betas you need to get one explicitly; we're not going to update regular users from a stable release to a beta.

> (2) Regarding generally OK sites that have third-party bad actors hanging
> about, one protection strategy is having the good sense to never click on
> ads and such, basically stay on the main site and avoid any links that go
> off-site.

That is a losing strategy: the malware warning is not given because of mere hyperlinks to bad sites. If it were, you'd only be blocked when you tried to load those links.

> FYI, the main site in question was already on the whitelist; changing the
> whitelist does not appear to affect the malware warning in its present
> incarnation on my machine.

What whitelist? I was proposing one; we don't currently support one.

> Is it really possible for the simple displaying of the ad to cause a
> malware infection?

Yes.

Even in the case of a static image it's theoretically possible (there have been image exploits in-the-wild for IE in the past, and potentially exploitable image bugs fixed in Firefox), and there's been a downright epidemic of exploits for active content.

> Is this something that typical antivirus software would/should catch?

Not always, but often. But not everyone realizes the need for antivirus software, or can afford it, or keeps it up to date.

> Excerpt from the Google report in question:
> 
>     Of the 107 pages we tested on the site over the past 90 days, 2 page(s)
> resulted in malicious software being downloaded and installed without user
> consent.

There you go: it wasn't simply a matter of following links off-site; the infections happened while visitors were on that site. What they don't say is what the vulnerable software was, which makes me sad. I don't know that they've ever found a Firefox exploit; it might only affect IE users, or people with old copies of Flash. On the other hand, if a site was compromised and contained known malware, then even if patched software was not vulnerable, the site could well also be infected with NEW malware that no one has learned to detect yet and which could infect you.

> The last time Google visited this site was on 2008-12-05, and the last
> time suspicious content was found on this site was on 2008-11-07.

Keeping a site on the blacklist a month after it's clean seems punitive. I'd definitely follow the stopbadware.org appeal process and complain to Google.

> Malicious software is hosted on 3 domain(s), including mmcounter.com/,
> filmmultimediaonline.cn/, bestpicturemedia.cn/.

This does not mean you have to click a link to go to those sites; it means that's where the infected site gets the malware. It might be an iframe, a script tag, or the source for plugin content.

>     Over the past 90 days, empornium.us did not appear to function as an
> intermediary for the infection of any sites.

That means if you're on another site you won't catch something from empornium; it doesn't mean surfing empornium itself is safe.
> I imagine 3.1 is in beta, perhaps?

Correct.

> Is it really possible for the simple displaying of the ad to cause a
> malware infection?

Yes.  Many ads are loaded using iframes, script inclusion, or Flash.  Any of those is sufficient to redirect you to another site.

I believe Google actually loads the site in a "client honeypot", and flags the site if any new processes appear in the virtual machine.  So if Google flags a site, that means that simply displaying the ad was sufficient for infection for that specific ad.

> Is this something that typical antivirus software would/should catch?

Antivirus software is more or less useless against new attacks.  Web-based attacks can evolve more quickly than antivirus software can update its blacklists.

> At any rate, the site in question apparently was flagged
> by google because it had a couple of ads pointing to other sites having
> malware. This is more a problem with google really than with FireFox, I
> suppose.

Consider these two scenarios:
1) Site A includes iframes from evil site B.
2) Site A includes iframes from legitimate ad server C, which has been hacked.

In the first scenario, we have a pretty strong indication that site A is itself evil.  If we just block site B, site A will likely switch to another hostname or another tactic.

In the second scenario, it might be ideal to only block C, but there are three major problems with this:

* It's hard to distinguish this from the first scenario, especially if you are trying to do so automatically and without bias.

* It's arguably more important to get the site fixed, protecting users who do not have malware protection, than it is for users with malware protection to be able to access the site while the site is dangerous.

* We'd have to fix bug 413733 for this to work in many real-world situations.

> At any rate, the temporary whitelist sounds like a good idea. FYI, the
> main site in question was already on the whitelist; changing the whitelist does
> not appear to affect the malware warning in its present incarnation on my
> machine.

Which whitelist are you referring to here?
Many thanks to Daniel Veditz and Jesse Ruderman for answering all my possibly silly questions. I don't know where I saw a whitelist, and I did go looking for it. Perhaps it was in NoScript rather than Firefox.

Update: After turning attack site notification back on, the site in question was blocked after a refresh. I clicked through and viewed the source, looking for '.cn' inclusions, and found none. Subsequently, the site apparently no longer triggers a warning. Killing Firefox and restarting did not regenerate the warning, so the behavior of the warning system appears to have changed for whatever reason. I was attempting to see whether I could view the source and search for bad actors in the page while the site was still blocked, before clicking through; that would be a very useful feature for persistent site users who want to check for possible problems first. Since the site appears to no longer be blocked, I cannot test that possibility at this time.

I agree that Google is deficient in not specifying which browser they are referring to. I suspect that their findings relate to Internet Explorer, but who knows. At any rate, as Firefox continues to take more of their market share, attackers will no doubt start writing malware specific to Firefox. I know it was starting to happen with Netscape back when I abandoned that software in favor of Firefox and Thunderbird.

If there is any testing that I can do that will help, please do let me know. I will do whatever I can. I was a computer systems programmer and consultant for 20 years, so I am not exactly a noobie. But I am 53 years old, inclined to be confused easily, and sadly out of date at this point. For the present, I have left the attack site warning turned on as per your advice. I will be interested to see if the site becomes blocked again in the future.
(In reply to comment #2)

> That UI suggestion is totally tangential to this bug and I don't mean to hijack
> it, but if we did implement such a UI then it would be somewhat easier to
> implement a sensible whitelisting strategy. Otherwise on a multiply-infected
> site the user will be annoyed, thinking "I just whitelisted this site!"

Yeah, but I don't mind annoying people in that shrinking edge case.  Or at least, fixing this bug with a straightforward "ignore this hash match for this session" should sufficiently mitigate the problem that we can tackle edge cases in a different bug.  :)

DCamp, what do you think? Is this as easy as making a note somewhere any time a load happens with LOAD_FLAGS_BYPASS_CLASSIFIER, and then exempting those hashes from future lookups? Does the url classifier even get TOLD when such a load happens?  Or is it, you know, bypassed?  :)
(In reply to comment #4)
> If you want to help us test betas you need to get it explicitly, we're not
> going to update regular users from a stable release to a beta.

I hasten to add, we would greatly appreciate the help if you did switch to using our new betas. I didn't mean that to come out snippy, as if we didn't think mere users were good enough to try our betas. It's simply not fair to foist unfinished products on unsuspecting people. We want constructive feedback from people who know going in there might be some rough spots and who know they can switch back to the last release if they get into trouble.
(In reply to comment #6)
> I don't know where I saw a whitelist, and I did go looking for
> it. Perhaps it was in NoScript rather than FireFox.

NoScript definitely has a whitelist, as do a few features in Firefox proper, but not the malware/phishing feature.

> Update: After turning attack site notification back on, the site in question
> was blocked after a refresh. I clicked through and viewed the source, looking
> for '.cn' inclusions, and found none. Subsequently, the site apparently no
> longer triggers a warning.

When you turned off the notifications you also turned off the database updates. When you turned them back on we noticed the data was outdated and started updating it. Eventually you got fresh data that cleared the site.

> I was attempting to see if I could view the source and search for bad actors
> in the page source

There will be some gap between when a site cleans up its act and when it gets rescanned and taken off the list (just as there was a gap between when a site got hacked and when it was noticed and added to the list). That's just the nature of scanning-based detection.

> I agree that google is deficient in not specifying which browser they are
> referring to. I suspect that they are working relevant to Internet Explorer,

The newest malware isn't going to be detected by any scans, so they work on the reasonable theory that a compromised site 1) might have more bad stuff than they found, and 2) might be updated with new malware at any time as new attacks are discovered. It's like the many places that don't like to hire ex-cons: it takes a while to regain trust.

It's not a judgement on the site owner's intentions; in fact, most of the time they're perfectly legitimate sites which have been compromised. The site is a victim too, but our responsibility is to protect our users as much as we can.
(In reply to comment #7)
> (In reply to comment #2)
> DCamp, what do you think? Is this as easy as making a note somewhere any time a
> load happens with LOAD_FLAGS_BYPASS_CLASSIFIER, and then exempting those hashes
> from future lookups? Does the url classifier even get TOLD when such a load
> happens?  Or is it, you know, bypassed?  :)

It would probably be better to add a method to nsIUrlClassifierDBService to temporarily whitelist the hashes that would block a given URI, and call that before reloading with LOAD_FLAGS_BYPASS_CLASSIFIER.  I'll look into that.
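
For illustration only, a caller-side sketch of that proposal might look roughly like the following. The method name and the blockedURI variable are hypothetical placeholders (no such method exists on nsIUrlClassifierDBService); only the contract ID and the load flag are real:

  // Hypothetical sketch of the proposed flow, from the blocked-page code:
  var dbService = Components.classes["@mozilla.org/url-classifier/dbservice;1"]
                    .getService(Components.interfaces.nsIUrlClassifierDBService);
  // Invented method: ask the classifier to temporarily whitelist whatever
  // hashes would block this URI for the rest of the session.
  dbService.skipHashesFor(blockedURI);  // hypothetical, not an existing API
  // ...then reload the page with LOAD_FLAGS_BYPASS_CLASSIFIER as before.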
> I hasten to add, we would greatly appreciate the help if you did switch to
> using our new betas. 

Well, I am not exactly a novice and I am getting the benefit of a free browser that beats the other ones out there. I will download 3.1 and participate in the bug reporting process.
Clicking "Ignore this warning" link causes Firefox to load a site but in a visually limited way. I think you should either not allow loading it or load it fully.
It may be worth revisiting this issue. The lack of a whitelist or "remember my choice" option has been a repeated complaint from users. I think an interface similar to the one used for certificate exceptions, which requires you to explicitly click "I accept the risk" and lets you make the exception permanent with an additional checkbox, would be ideal.

It's an interesting question whether it should be done using the relevant hash or by URL (with wildcards available). The latter provides more user control, so the question is how many cases there are where using the hash would be insufficient to produce the desired result. I might be able to help test this if someone implements the feature.
Attached patch Patch v1. (obsolete) — Splinter Review
This patch sort of solves the problem using the permission manager.
It stores a permission under the title "safe-browsing" for the session. Right now, it seems to cause the notification bar to show up and then disappear right away.
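
As a rough sketch of the approach (not the exact patch; the example URL and surrounding code are assumptions), the "Ignore this warning" handler can record a session-scoped permission, and the lookup path can test for it later:

  // Sketch only: record that the user ignored the warning for this host,
  // scoped to the current browser session via EXPIRE_SESSION.
  Components.utils.import("resource://gre/modules/Services.jsm");
  var uri = Services.io.newURI("http://www.example.com/", null, null);
  Services.perms.add(uri, "safe-browsing",
                     Components.interfaces.nsIPermissionManager.ALLOW_ACTION,
                     Components.interfaces.nsIPermissionManager.EXPIRE_SESSION);

  // Later, before showing the blocking page again, check the permission:
  var ignored = Services.perms.testPermission(uri, "safe-browsing") ==
                Components.interfaces.nsIPermissionManager.ALLOW_ACTION;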
Assignee: nobody → mars.martian+bugmail
Status: NEW → ASSIGNED
Sounds like progress. It's probably worth considering a complete revamping of the feature & workflow for a future version, but a working patch would be a useful step.
(In reply to comment #15)
> Sounds like progress. It's probably worth considering a complete revamping of
> the feature & workflow for a future version, but a working patch would be a
> useful step.

Yeah, there are plans to rewrite the back-end component in JS and solve some storage problems at the same time.
I was considering tacking another column onto the DB to solve this problem, but we may move to a Bloom filter (as Chromium has done), and the permission manager makes it easy to store data for the session only.
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

It seems the notification bar problem wasn't caused by this patch; after updating to current mozilla-central, the issue went away.

Not sure whom to flag as reviewer or if tests are needed.
Attachment #516615 - Attachment description: WIP patch. → Patch v1.
Attachment #516615 - Flags: review?
(In reply to comment #16)
 
> Yeah, there are plans to rewrite the back-end component in JS and solve some
> storage problems at the same time.

Great. It might make sense to think about the UI and messaging, too. I'm not a dev, but along with QA testing any tech changes, the front end is an area where I and the rest of the StopBadware team can help out.
I think that the current dialog might need some changing to reflect that we'll be ignoring the warning on the entire domain.

Limi: any opinions on the matter? I've attached a screenshot of the current dialog above.
Whiteboard: [sg:want?] → [sg:want?][ux-wanted]
That is not the current malware dialog; it's the current phishing dialog.
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

As this is a patch to both toolkit/ and front-end, seems like Shawn would be an appropriate reviewer.
Attachment #516615 - Flags: review? → review?(sdwilsh)
In browser.js, you can just use makeURI() and Services.perms.
Attachment #516615 - Flags: ui-review?(limi)
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

r=sdwilsh assuming you address comment 23
Attachment #516615 - Flags: review?(sdwilsh) → review+
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

LGTM.
Attachment #516615 - Flags: ui-review?(limi) → ui-review+
Whiteboard: [sg:want?][ux-wanted] → [sg:want?]
I generated these percentiles based on 45,000 lookups to the UrlClassifier DB over about a week.

The times were generated with a build of Firefox 4 RC with extra logging.
These percentiles were generated the same way as the last attachment: the log was first run through a parser (to strip out unnecessary information and determine the start and end of a lookup with respect to a URL) and then through R to generate the percentiles. 26,000 lookups were used for this. A permission was added for www.mozilla.com about 20,000 lookups into the study.

Based on these numbers, it looks like adding this check to the lookup path affects the times little, if at all.
(In reply to comment #24)
> r=sdwilsh assuming you address comment 23

Last time we discussed this, you said that you would prefer that I include some tests with this patch.

I've started work on them, but it might take a while. Would you mind if I committed this (with the changes) and filed a follow-up bug for tests?
(In reply to comment #28)
> Last time we discussed this, you said that you would prefer if I included some
> tests with this patch.
> 
> I've started work on it but it might take a while. Would you mind if I
> committed this (with the changes) and file a followup bug for tests?
comment 23 doesn't say anything about tests, so yes.
Attachment #516615 - Attachment is obsolete: true
http://hg.mozilla.org/mozilla-central/rev/79497dd8d244
Status: ASSIGNED → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
Depends on: 655884
This has caused a regression in our Mozmill tests, which assume the Ignore and Get Me Outta Here buttons are available on every page load:

GetMeOuttaHere: bug 655885
Ignore: bug 655884

What's the expected change to behaviour?
(In reply to comment #32)
> What's the expected change to behaviour?

The previous behaviour was that _anytime_ you visited a bad page, you would see the suspected attack site warning.
The current behaviour is that you only see the suspected attack site warning the first time you visit a bad page on _that domain_ (assuming you hit "Ignore this warning", of course).

The change is that warnings will not be shown for pages on the bad domain if it has been ignored previously in that browser session.
So, in other words, our automation will fail for any tests that run after triggering Ignore on the *.mozilla.org test sites.

Is this behaviour controlled through a pref?
It is not controlled through a pref but instead as a permission.

If you clear the "safe-browsing" permission after each test, the warning will show up again the next time.
(In reply to comment #35)
> It is not controlled through a pref but instead as a permission.
> 
> If you clear permissions for "safe-browsing" after each test, this would
> cause the warning to show up again the next time.

Is there a way to do that programmatically without having to go through the Preferences dialog?
(In reply to comment #36)
> Is there a way to do that programmatically without having to go through the
> Preferences dialog?

From JS you can remove the permission using nsIPermissionManager for the specific host and type. (In this case host would probably be "www.mozilla.org" or "mozilla.org" and type would be "safe-browsing".)
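
For example (a sketch only; the exact host depends on the test site, and it assumes a chrome-privileged scope):

  // Remove the session "safe-browsing" permission so the warning page
  // is shown again on the next visit to the test site.
  Components.utils.import("resource://gre/modules/Services.jsm");
  Services.perms.remove("www.mozilla.org", "safe-browsing");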
I did some user testing on this via the nightly build of 6.0a1 (2011-05-16). A few observations:

1. It addresses the core concern. Clicking "Ignore this warning" once whitelists the domain and allows use of the site without repeated warnings.

2. Permissions for a domain appear to be temporary (per browser session) rather than permanent. This seems ideal.

3. After clicking "Ignore this warning," there is an information bar displayed across the top indicating the site is an attack site. However, if you reload the page, navigate to a different page, or return to the original page from another page, that information bar disappears. I wonder if it should remain visible while viewing any page that is blacklisted, even if the domain has been ignored.

4. Possibly worthy of a separate bug, there is an inconsistency in the language between the interstitial page ("Reported attack page") and the information bar ("Reported attack site" and "This isn't an attack site").
RE: previous comment, item number 3:
As a user, I think the information bar should persist on that tab and any other tabs one might open on the same site. People can be forgetful, especially if they have a lot of tabs open.
(In reply to comment #38)
> I did some user testing on this via the nightly build of 6.0a1 (2011-05-16).
> A few observations: [...]

I would suggest filing new bugs for #3 and #4. Otherwise we'll lose track, since this bug (the core issue) is fixed.
+1

This is still very annoying with FF 17.
First, I have to find this small link on the warning page, then I need to click "yes, I really want to" in the toolbar that pops out. And yet, even though Firefox has plagued me with a two-step process, it opens up an additional website.

However, even worse, Firefox does not allow me to whitelist specific sites in a persistent manner. So I get annoyed *every* time I try to browse that web page in a new session.

I got so annoyed, I had to disable phishing protection entirely.
I gave up on this issue a long while back. Just disable phish protection and install the WOT addon. NoScript and RequestPolicy are pretty good, too, as long as you don't mind manually configuring each site that you visit regularly.
Product: Firefox → Toolkit