Bug 468313 - Having to click 'Ignore this warning' for every page on the suspected 'Attack Site' is seriously annoying
Status: RESOLVED FIXED
Whiteboard: [sg:want?]
Product: Toolkit
Classification: Components
Component: Safe Browsing
Version: unspecified
Platform: All All
Importance: -- normal with 1 vote
Target Milestone: ---
Assigned To: Mehdi Mulani [:mmm] (I don't check this)
Mentors:
Depends on: 655884 705182
Blocks:
Reported: 2008-12-07 02:11 PST by Charles Gilbert
Modified: 2014-05-27 12:25 PDT
CC: 14 users
See Also:
Crash Signature:
QA Whiteboard:
Iteration: ---
Points: ---
Has Regression Range: ---
Has STR: ---
Flags: wanted


Attachments
Patch v1. (3.17 KB, patch) - 2011-03-03 09:14 PST, Mehdi Mulani [:mmm] (I don't check this) - sdwilsh: review+, limi: ui-review+
Screenshot of current dialog. (129.56 KB, image/png) - 2011-04-14 16:44 PDT, Mehdi Mulani [:mmm] (I don't check this)
Percentile of lookup times before patch was applied. (23.85 KB, text/plain) - 2011-05-02 10:10 PDT, Mehdi Mulani [:mmm] (I don't check this)
Percentile of lookup times after patch was applied. (25.81 KB, text/plain) - 2011-05-02 10:19 PDT, Mehdi Mulani [:mmm] (I don't check this)
Patch as checked in. (2.97 KB, patch) - 2011-05-04 07:18 PDT, Mehdi Mulani [:mmm] (I don't check this)

Description Charles Gilbert 2008-12-07 02:11:29 PST
User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.4) Gecko/2008102920 Firefox/3.0.4

I can see where this might be a useful feature for some situations, but it is causing me a problem. I use a particular site that Google has labeled as dangerous, apparently because some third-party sites that have ads or whatever on the main site have distributed malware in the distant past. I had to click on 'Ignore this warning' and wait for your incredibly stupid slow-scroll-up warning page exit for every page that I navigated to on the site. This is ridiculous. If I want to use the site in question and still use the feature, then I should be able to do so. You need to dump the slow-scroll nonsense and add a whitelist feature.

Reproducible: Always

Steps to Reproduce:
1.Go to any site listed by google as a reported attack site
2.
3.
Actual Results:  
Full page highly obnoxious warning rather than display of site in question

Expected Results:  
Given that I have noscript installed, it would have been better to pop up a warning ONCE ONLY and never, ever implement anything that does a painfully slow scroll. If there had been no option to kill the attack site warning altogether, I would have switched back to internet explorer (yuck).
Comment 1 Jesse Ruderman 2008-12-10 17:11:27 PST
The "slow-scroll nonsense" is covered by other bugs, such as bug 355965 (which is already fixed for Firefox 3.1) and bug 456620.

The rest of your complaint seems valid to me.  Making you click through a warning for each page you load from the site doesn't really improve your security.  At best, it makes it more likely that you'll contact the webmaster to get them to clean up the site, or discourages you from clicking "Ignore this warning" in the future.  At worst, as you mentioned, you'll disable malware protection entirely, making it more likely that *another* site will own you.

Maybe "Ignore this warning" should add the site to a temporary whitelist that goes away at the end of the Firefox session.

Btw, I think you overestimate how well NoScript protects you.  It won't prevent a web page from exploiting a memory safety bug in Firefox's layout code or image decoders.  A smart attacker might not have to use Flash or JavaScript at all in taking over your computer.
Comment 2 Daniel Veditz [:dveditz] 2008-12-10 18:53:52 PST
We could perhaps store the specific hash in an in-memory whitelist. Mostly the hashes match at the domain level (as in this case) so whitelisting it will cover the site. But if there are multiple badnesses on there we'd want to make sure the user knows about them, so I'd want the user to have to whitelist the hash of each thing we're objecting to, not just let him loose on the site.

Of course the user shouldn't see or know about the hashes, that's an implementation detail. It might be helpful, however, if the malware blocking page included the URI we were blocking on. That way if there's injected content and the user's saying 'But I know BigSite or MyFriendsBlog is not evil' seeing that our warning says "Page tries to load malicious content from hackers.ru" might stop them from clicking through.

That UI suggestion is totally tangential to this bug and I don't mean to hijack it, but if we did implement such a UI then it would be somewhat easier to implement a sensible whitelisting strategy. Otherwise on a multiply-infected site the user will be annoyed, thinking "I just whitelisted this site!"
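The per-hash, in-memory whitelist proposed in this comment could be sketched roughly as follows (purely illustrative JavaScript; `shouldWarn` and `ignoreWarning` are hypothetical names, not Gecko APIs):

```javascript
// Illustrative sketch of a per-hash session whitelist, as proposed
// above: the user whitelists each hash we objected to, not the site.
const ignoredHashes = new Set();

// classifierMatches: the hashes the url-classifier flagged for this
// load. Returns true if the warning page should still be shown.
function shouldWarn(classifierMatches) {
  return classifierMatches.some(hash => !ignoredHashes.has(hash));
}

// Called when the user clicks "Ignore this warning": remember only
// the specific hashes that triggered this block.
function ignoreWarning(classifierMatches) {
  for (const hash of classifierMatches) {
    ignoredHashes.add(hash);
  }
}
```

On a multiply-infected site, a match on a second, not-yet-ignored hash would still raise the warning, which is the behaviour argued for above.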
Comment 3 Charles Gilbert 2008-12-10 23:40:09 PST
(1) The version of FireFox I am using is 3.0.4 - Using the built-in 'check for updates' feature does not find any updates available. I imagine 3.1 is in beta, perhaps?

(2) Regarding generally OK sites that have third-party bad actors hanging about, one protection strategy is having the good sense to never click on ads and such, basically stay on the main site and avoid any links that go off-site. Informing the site owner/operator of the problem is a good idea also, but is not always possible. At any rate, the site in question apparently was flagged by google because it had a couple of ads pointing to other sites having malware. This is more a problem with google really than with FireFox, I suppose. At any rate, the temporary whitelist sounds like a good idea. FYI, the main site in question was already on the whitelist; changing the whitelist does not appear to affect the malware warning in its present incarnation on my machine. I don't suppose it would be very feasible to try to pick through google's report and see which sites are actually causing the problem and how to decide whether or not the report warrants a malware warning at the site root level. Is it really possible for the simple displaying of the ad to cause a malware infection? Is this something that typical antivirus software would/should catch? How might one determine whether or not an infection has occurred?

Excerpt from the Google report in question:

    Of the 107 pages we tested on the site over the past 90 days, 2 page(s) resulted in malicious software being downloaded and installed without user consent. The last time Google visited this site was on 2008-12-05, and the last time suspicious content was found on this site was on 2008-11-07.
    Malicious software includes 3 exploit(s). Successful infection resulted in an average of 0 new processes on the target machine.
    Malicious software is hosted on 3 domain(s), including mmcounter.com/, filmmultimediaonline.cn/, bestpicturemedia.cn/.
    2 domain(s) appear to be functioning as intermediaries for distributing malware to visitors of this site, including vxhost.cn/, filmmultimediaonline.cn/.
    Over the past 90 days, empornium.us did not appear to function as an intermediary for the infection of any sites.
Comment 4 Daniel Veditz [:dveditz] 2008-12-11 00:07:52 PST
(In reply to comment #3)
> Using the built-in 'check for updates' feature does not find any updates
> available. I imagine 3.1 is in beta, perhaps?

It is, beta 2 was released this week. If you want to help us test betas you need to get it explicitly, we're not going to update regular users from a stable release to a beta.

> (2) Regarding generally OK sites that have third-party bad actors hanging
> about, one protection strategy is having the good sense to never click on
> ads and such, basically stay on the main site and avoid any links that go
> off-site.

That is a losing strategy; the malware warning is not given because of mere hyperlinks to bad sites. If it were, you'd only be blocked when you tried to load those links.

> FYI, the main site in question was already on the whitelist; changing the
> whitelist does not appear to affect the malware warning in its present
> incarnation on my machine.

What whitelist? I was proposing one, we don't currently support one.

> Is it really possible for the simple displaying of the ad to cause a
> malware infection?

Yes.

Even in the case of a static image it's theoretically possible (there have been image exploits in-the-wild for IE in the past, and potentially exploitable image bugs fixed in Firefox), and there's been a downright epidemic of exploits for active content.

> Is this something that typical antivirus software would/should catch?

Not always, but often. But not everyone realizes the need for antivirus software, or can afford it, or keeps it up to date.

> Excerpt from the Google report in question:
> 
>     Of the 107 pages we tested on the site over the past 90 days, 2 page(s)
> resulted in malicious software being downloaded and installed without user
> consent.

There you go: it wasn't simply a matter of following links off-site; the infections happened while being on that site. What they don't say is what the vulnerable software was, which makes me sad. I don't know that they've ever found a Firefox exploit; it might only affect IE users, or people with old copies of Flash. On the other hand, if a site was compromised and contained known malware, then even if patched software was not vulnerable, the site could well also be infected with NEW malware that no one has learned to detect yet and which could infect you.

> The last time Google visited this site was on 2008-12-05, and the last
> time suspicious content was found on this site was on 2008-11-07.

Keeping a site on the blacklist a month after it's clean seems punitive. I'd definitely follow the stopbadware.org appeal process and complain to Google.

> Malicious software is hosted on 3 domain(s), including mmcounter.com/,
> filmmultimediaonline.cn/, bestpicturemedia.cn/.

This does not mean you have to click a link to go to those sites, it means that's where the infected site gets the malware. It might be an iframe, a script tag, or the source for plugin content.

>     Over the past 90 days, empornium.us did not appear to function as an
> intermediary for the infection of any sites.

That means if you're on another site you won't catch something from empornium, it doesn't mean surfing empornium itself is safe.
Comment 5 Jesse Ruderman 2008-12-11 00:35:31 PST
> I imagine 3.1 is in beta, perhaps?

Correct.

> Is it really possible for the simple displaying of the ad to cause a
> malware infection?

Yes.  Many ads are loaded using iframes, script inclusion, or Flash.  Any of those are sufficient to redirect you to another site.

I believe Google actually loads the site in a "client honeypot", and flags the site if any new processes appear in the virtual machine.  So if Google flags a site, that means that simply displaying the ad was sufficient for infection for that specific ad.

> Is this something that typical antivirus software would/should catch?

Antivirus software is useless against new attacks, more or less.  Web-based attacks can evolve more quickly than antivirus software can keep up its code blacklists.

> At any rate, the site in question apparently was flagged
> by google because it had a couple of ads pointing to other sites having
> malware. This is more a problem with google really than with FireFox, I
> suppose.

Consider these two scenarios:
1) Site A includes iframes from evil site B.
2) Site A includes iframes from legitimate ad server C, which has been hacked.

In the first scenario, we have a pretty strong indication that site A is itself evil.  If we just block site B, site A will likely switch to another hostname or another tactic.

In the second scenario, it might be ideal to only block C, but there are three major problems with this:

* It's hard to distinguish this from the first scenario, especially if you are trying to do so automatically and without bias.

* It's arguably more important to get the site fixed, protecting users who do not have malware protection, than it is for users with malware protection to be able to access the site while the site is dangerous.

* We'd have to fix bug 413733 for this to work in many real-world situations.

> At any rate, the temporary whitelist sounds like a good idea. FYI, the
> main site in question was already on the whitelist; changing the whitelist does
> not appear to affect the malware warning in its present incarnation on my
> machine.

Which whitelist are you referring to here?
Comment 6 Charles Gilbert 2008-12-11 01:04:20 PST
Many thanks to Daniel Veditz and Jesse Ruderman for answering all my possibly silly questions. I don't know where I saw a whitelist, and I did go looking for it. Perhaps it was in NoScript rather than FireFox.

Update: After turning attack site notification back on, the site in question
was blocked after a refresh. I clicked through and viewed the source, looking
for '.cn' inclusions, and found none. Subsequently, the site apparently no
longer triggers a warning. Killing Firefox and restarting did not regenerate
the warning, so behavior of the warning system appears to have changed for
whatever reason. I was attempting to see if I could view the source and search
for bad actors in the page source even though the site was blocked and prior to
clicking through, which would be a very useful feature for persistent site
users who want to preview for possible problems prior to clicking through.
Since the site appears to no longer be blocked, I cannot test that possibility
at this time.

I agree that google is deficient in not specifying which browser they are
referring to. I suspect that they are working relevant to Internet Explorer,
but who knows. At any rate, as FireFox continues to take more of their market
share, attackers will no doubt start writing malware specific to FireFox. I
know it was starting to happen with Netscape back when I abandoned that
software in favor of FireFox and Thunderbird.

If there is any testing that I can do that will help, please do let me know. I
will do whatever I can. I was a computer systems programmer and consultant for
20 years, so I am not exactly a noobie. But, I am 53 years old, inclined to be
confused easily, and sadly out of date at this point. For the present, I have
left attack site warning turned on as per your advice. I will be interested to
see if the site becomes blocked again in future.
Comment 7 Johnathan Nightingale [:johnath] 2008-12-11 05:55:41 PST
(In reply to comment #2)

> That UI suggestion is totally tangential to this bug and I don't mean to hijack
> it, but if we did implement such a UI then it would be somewhat easier to
> implement a sensible whitelisting strategy. Otherwise on a multiply-infected
> site the user will be annoyed, thinking "I just whitelisted this site!"

Yeah, but I don't mind annoying people in that shrinking edge case.  Or at least, fixing this bug with a straightforward "ignore this hash match for this session" should sufficiently mitigate the problem that we can tackle edge cases in a different bug.  :)

DCamp, what do you think? Is this as easy as making a note somewhere any time a load happens with LOAD_FLAGS_BYPASS_CLASSIFIER, and then exempting those hashes from future lookups? Does the url classifier even get TOLD when such a load happens?  Or is it, you know, bypassed?  :)
Comment 8 Daniel Veditz [:dveditz] 2008-12-11 08:38:02 PST
(In reply to comment #4)
> If you want to help us test betas you need to get it explicitly, we're not
> going to update regular users from a stable release to a beta.

I hasten to add, we would greatly appreciate the help if you did switch to using our new betas. I didn't mean that to come out snippy, as if we didn't think mere users were good enough to try our betas. It's simply not fair to foist unfinished products on unsuspecting people. We want constructive feedback from people who know going in there might be some rough spots and who know they can switch back to the last release if they get into trouble.
Comment 9 Daniel Veditz [:dveditz] 2008-12-11 09:26:12 PST
(In reply to comment #6)
> I don't know where I saw a whitelist, and I did go looking for
> it. Perhaps it was in NoScript rather than FireFox.

NoScript definitely has a whitelist, as do a few features in Firefox proper but not the malware/phishing feature.

> Update: After turning attack site notification back on, the site in question
> was blocked after a refresh. I clicked through and viewed the source, looking
> for '.cn' inclusions, and found none. Subsequently, the site apparently no
> longer triggers a warning.

When you turned off the notifications you also turned off the database updates. When you turned it back on we noticed the data was outdated and started updating it. Eventually you got fresh data that cleared the site.

> I was attempting to see if I could view the source and search for bad actors
> in the page source

There will be some gap between when a site cleans up its act and when it gets rescanned and taken off the list (just as there was a gap between when a site got hacked and when it was noticed and added to the list). That's just the nature of scanning based detection.

> I agree that google is deficient in not specifying which browser they are
> referring to. I suspect that they are working relevant to Internet Explorer,

The newest malware isn't going to be detected by any scans, so they work on the reasonable theory that a compromised site 1) might have more bad stuff than they found, and 2) might be updated with new malware at any time as new attacks are discovered. It's like the many places that don't like to hire ex-cons: it takes a while to regain trust.

It's not a judgement on the site owner's intentions; in fact, most of the time they're perfectly legitimate sites which have been compromised. The site is a victim too, but our responsibility is to protect our users as much as we can.
Comment 10 Dave Camp (:dcamp) 2008-12-11 11:17:21 PST
(In reply to comment #7)
> (In reply to comment #2)
> DCamp, what do you think? Is this as easy as making a note somewhere any time a
> load happens with LOAD_FLAGS_BYPASS_CLASSIFIER, and then exempting those hashes
> from future lookups? Does the url classifier even get TOLD when such a load
> happens?  Or is it, you know, bypassed?  :)

It would probably be better to add a method to nsIUrlClassifierDBService to temporarily whitelist the hashes that would block a given URI, and call that before reloading with LOAD_FLAGS_BYPASS_CLASSIFIER.  I'll look into that.
Comment 11 Charles Gilbert 2008-12-11 12:45:59 PST
> I hasten to add, we would greatly appreciate the help if you did switch to
> using our new betas. 

Well, I am not exactly a novice and I am getting the benefit of a free browser that beats the other ones out there. I will download 3.1 and participate in the bug reporting process.
Comment 12 Roman R. 2009-07-02 10:27:23 PDT
Clicking the "Ignore this warning" link causes Firefox to load the site, but in a visually limited way. I think you should either not allow loading it or load it fully.
Comment 13 Maxim Weinstein 2010-07-02 14:10:44 PDT
It may be worth revisiting this issue. The lack of a whitelist or "remember my choice" option has been a repeated complaint of users. I think using an interface similar to the one used for certificate exceptions, which requires you to explicitly click "I accept the risk" and allows you to make the exception permanent with an additional checkbox would be ideal.

It's an interesting question whether it should be done using the relevant hash, or by URL (with wildcards available). The latter provides more user control, so the question is how many cases are there where using the hash would be insufficient to produce the desired result. I might be able to help test this if someone implements the feature.
Comment 14 Mehdi Mulani [:mmm] (I don't check this) 2011-03-03 09:14:06 PST
Created attachment 516615 [details] [diff] [review]
Patch v1.

This patch sort of solves the problem using the permission manager.
It stores a permission under the title "safe-browsing" for the session. Right now, it seems to cause the notification bar to show up and then disappear right away.
Comment 15 Maxim Weinstein 2011-03-03 09:58:28 PST
Sounds like progress. It's probably worth considering a complete revamping of the feature & workflow for a future version, but a working patch would be a useful step.
Comment 16 Mehdi Mulani [:mmm] (I don't check this) 2011-03-03 10:06:02 PST
(In reply to comment #15)
> Sounds like progress. It's probably worth considering a complete revamping of
> the feature & workflow for a future version, but a working patch would be a
> useful step.

Yeah, there are plans to rewrite the back-end component in JS and solve some storage problems at the same time.
I considered tacking another column onto the DB to solve this problem, but we are considering moving to a Bloom filter (as Chromium has done), and the permission manager makes it easy to store data for the session only.
Comment 17 Mehdi Mulani [:mmm] (I don't check this) 2011-03-03 11:07:41 PST
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

Seems the notification bar problem wasn't because of this patch, updated to current mozilla-central and issue went away.

Not sure whom to flag as reviewer or if tests are needed.
Comment 18 Maxim Weinstein 2011-03-04 06:08:32 PST
(In reply to comment #16)
 
> Yeah, there are plans to rewrite the back-end component in JS and solve some
> storage problems at the same time.

Great. It might make sense to think about the UI and messaging, too. I'm not a dev, but along with QA-testing any tech changes, the front end is an area where I and the rest of the StopBadware team can help out.
Comment 19 Mehdi Mulani [:mmm] (I don't check this) 2011-04-14 16:44:24 PDT
Created attachment 526157 [details]
Screenshot of current dialog.
Comment 20 Mehdi Mulani [:mmm] (I don't check this) 2011-04-14 16:48:51 PDT
I think that the current dialog might need some changing to reflect that we'll be ignoring the warning on the entire domain.

Limi: any opinions on the matter? I've attached a screenshot of the current dialog above.
Comment 21 Maxim Weinstein 2011-04-15 06:14:42 PDT
That is not the current malware dialog, it's the current phishing dialog.
Comment 22 Mehdi Mulani [:mmm] (I don't check this) 2011-04-19 16:18:14 PDT
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

As this is a patch to both toolkit/ and front-end, seems like Shawn would be an appropriate reviewer.
Comment 23 :Gavin Sharp [email: gavin@gavinsharp.com] 2011-04-19 20:42:55 PDT
In browser.js, you can just use makeURI() and Services.perms.
Comment 24 Shawn Wilsher :sdwilsh 2011-04-21 12:01:30 PDT
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

r=sdwilsh assuming you address comment 23
Comment 25 Alex Limi (:limi) — Firefox UX Team 2011-04-22 16:21:27 PDT
Comment on attachment 516615 [details] [diff] [review]
Patch v1.

LGTM.
Comment 26 Mehdi Mulani [:mmm] (I don't check this) 2011-05-02 10:10:58 PDT
Created attachment 529503 [details]
Percentile of lookup times before patch was applied.

I generated this percentile based on 45,000 lookups to the UrlClassifier DB over about a week.

The times were generated with a build of Firefox 4 RC with extra logging.
Comment 27 Mehdi Mulani [:mmm] (I don't check this) 2011-05-02 10:19:09 PDT
Created attachment 529506 [details]
Percentile of lookup times after patch was applied.

These percentiles were generated the same way as the last attachment: first run through a parser (to strip unnecessary information from the log and determine the start and end of a lookup with respect to a URL) and then through R to generate the percentiles. 26,000 lookups were used for this. A permission was added for www.mozilla.com about 20,000 lookups into the study.

Based on these numbers, it looks like adding this check to the lookup path affects the times little, if at all.
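As an illustration of the percentile methodology (the actual analysis used a log parser and R, neither of which is attached here), an empirical quantile roughly approximating R's quantile(type = 1) might be sketched as:

```javascript
// Illustrative sketch only, not the actual analysis script.
// Returns the smallest lookup time (ms) such that at least a
// fraction p of the data lies at or below it (inverse empirical CDF).
function percentile(times, p) {
  const sorted = [...times].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil(p * sorted.length) - 1);
  return sorted[idx];
}
```

Run over the before/after lookup logs, a function like this produces the attached percentile tables.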
Comment 28 Mehdi Mulani [:mmm] (I don't check this) 2011-05-02 10:26:10 PDT
(In reply to comment #24)
> r=sdwilsh assuming you address comment 23

Last time we discussed this, you said that you would prefer if I included some tests with this patch.

I've started work on it but it might take a while. Would you mind if I committed this (with the changes) and filed a follow-up bug for tests?
Comment 29 Shawn Wilsher :sdwilsh 2011-05-02 10:47:57 PDT
(In reply to comment #28)
> Last time we discussed this, you said that you would prefer if I included some
> tests with this patch.
> 
> I've started work on it but it might take a while. Would you mind if I
> committed this (with the changes) and file a followup bug for tests?
comment 23 doesn't say anything about tests, so yes.
Comment 30 Mehdi Mulani [:mmm] (I don't check this) 2011-05-04 07:18:24 PDT
Created attachment 529995 [details] [diff] [review]
Patch as checked in.
Comment 31 Mehdi Mulani [:mmm] (I don't check this) 2011-05-04 07:18:40 PDT
http://hg.mozilla.org/mozilla-central/rev/79497dd8d244
Comment 32 Anthony Hughes (:ashughes) [GFX][QA][Mentor] 2011-05-17 09:13:33 PDT
This has caused a regression in our Mozmill tests, which assume the Ignore and Get Me Outta Here buttons are available on every page load:

GetMeOuttaHere: bug 655885
Ignore: bug 655884

What's the expected change to behaviour?
Comment 33 Mehdi Mulani [:mmm] (I don't check this) 2011-05-17 12:44:00 PDT
(In reply to comment #32)
> What's the expected change to behaviour?

The previous behaviour used to be that _anytime_ you visited a bad page, you would see the suspected attack site warning.
The current behaviour is that you only see a suspected attack site warning the first time you visit a bad page on _that domain_ (if you hit "Ignore this warning" of course).

The change is that warnings will not be shown for pages on the bad domain if they have been ignored previously for that browser session.
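A minimal model of that behaviour (illustrative only; the actual patch records a session-scoped "safe-browsing" permission via the permission manager, keyed by host, rather than using a plain Set):

```javascript
// Rough model of the fixed behaviour: once "Ignore this warning" is
// clicked for a host, further warnings for that host are suppressed
// for the rest of the session.
const ignoredHosts = new Set();

// A warning is shown only if the classifier flagged the load AND the
// host has not already been ignored this session.
function shouldShowWarning(url, isFlagged) {
  const host = new URL(url).hostname;
  return isFlagged && !ignoredHosts.has(host);
}

// Called when the user clicks "Ignore this warning".
function ignoreThisWarning(url) {
  ignoredHosts.add(new URL(url).hostname);
}
```

Clearing the recorded host (in the real browser, removing the "safe-browsing" permission for that host) makes the warning reappear, which is what automated tests would need to do between runs.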
Comment 34 Anthony Hughes (:ashughes) [GFX][QA][Mentor] 2011-05-17 12:46:38 PDT
So in other words, our automation will fail for any tests which follow triggering Ignore on the *mozilla.org test sites.

Is this behaviour controlled through a pref?
Comment 35 Mehdi Mulani [:mmm] (I don't check this) 2011-05-17 12:48:16 PDT
It is not controlled through a pref, but through a permission.

If you clear permissions for "safe-browsing" after each test, this would cause the warning to show up again the next time.
Comment 36 Anthony Hughes (:ashughes) [GFX][QA][Mentor] 2011-05-17 12:49:33 PDT
(In reply to comment #35)
> It is not controlled through a pref but instead as a permission.
> 
> If you clear permissions for "safe-browsing" after each test, this would
> cause the warning to show up again the next time.

Is there a way to do that programmatically without having to go through the Preferences dialog?
Comment 37 Mehdi Mulani [:mmm] (I don't check this) 2011-05-17 13:02:01 PDT
(In reply to comment #36)
> Is there a way to do that programmatically without having to go through the
> Preferences dialog?

From JS you can remove the permission using nsIPermissionManager for the specific host and type. (In this case host would probably be "www.mozilla.org" or "mozilla.org" and type would be "safe-browsing".)
Comment 38 Maxim Weinstein 2011-05-24 11:36:03 PDT
I did some user testing on this via the nightly build of 6.0a1 (2011-05-16). A few observations:

1. It addresses the core concern. Clicking "Ignore this site" once whitelists the domain and allows use of the site without repeated warnings.

2. Permissions for a domain appear to be temporary (per browser session) rather than permanent. This seems ideal.

3. After clicking "Ignore this warning," there is an information bar displayed across the top indicating the site is an attack site. However, if you reload the page, navigate to a different page, or return to the original page from another page, that information bar disappears. I wonder if it should remain visible while viewing any page that is blacklisted, even if the domain has been ignored.

4. Possibly worthy of a separate bug, there is an inconsistency in the language between the interstitial page ("Reported attack page") and the information bar ("Reported attack site" and "This isn't an attack site").
Comment 39 Charles Gilbert 2011-05-24 11:57:37 PDT
RE: previous comment, item number 3:
As a user, I think the information bar should persist on that tab and any other tabs one might open on the same site. People can be forgetful, especially if they have a lot of tabs open.
Comment 40 Justin Dolske [:Dolske] 2011-05-26 17:25:16 PDT
(In reply to comment #38)
> I did some user testing on this via the nightly build of 6.0a1 (2011-05-16).
> A few observations: [...]

I would suggest filing new bugs for #3 and #4. Otherwise we'll lose track, since this bug (the core issue) is fixed.
Comment 41 Clemens Eisserer 2012-12-28 13:56:36 PST
+1

This is still very annoying with FF 17.
First, I have to find this small link on the warning page, then I need to click "yes, I really want to" in the tool-bar which pops out. And yet, even though Firefox has plagued me with a two-step process, it opens up an additional website.

However, even worse, Firefox does not allow me to whitelist specific sites in a persistent manner. So I get annoyed *every* time I try to browse that web page in a new session.

I got so annoyed, I had to disable phishing protection entirely.
Comment 42 Charles Gilbert 2012-12-28 14:26:47 PST
I gave up on this issue a long while back. Just disable phish protection and install the WOT addon. NoScript and RequestPolicy are pretty good, too, as long as you don't mind manually configuring each site that you visit regularly.
