Closed Bug 380932 Opened 17 years ago Closed 17 years ago

Handle malware URIs with error page

Categories

(Toolkit :: Safe Browsing, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

Status: VERIFIED FIXED
Target Milestone: Firefox 3 alpha7
People

(Reporter: johnath, Assigned: johnath)

References

Details

(Whiteboard: SPI-003f)

Attachments

(6 files, 7 obsolete files)

147.72 KB, image/png
Details
86.21 KB, image/png
beltzner
: ui-review+
Details
705 bytes, image/png
Details
6.85 KB, image/png
Details
17.34 KB, patch
Details | Diff | Splinter Review
3.06 KB, patch
Biesinger
: review+
Details | Diff | Splinter Review
Attached image Malware error page mockup (obsolete) —
In addition to the existing anti-phishing protection, there is a line item in the Firefox 3 PRD which involves adding support for anti-malware protection.  This involves 3 pieces of work:

1. Updating the existing code which checks for anti-phishing updates to also pull down malware blacklists.
2. Implementing an API which will allow callers to determine whether a given URI is in the malware blacklist.  
3. Implementing hooks in the browser to check URIs using the API before page load, and presenting an appropriate error page when a hit is detected.

This bug will track only item 3.  Dave Camp is working on items 1 and 2, and the bugs that track it will be marked as blocking this one.
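The check-before-load flow described in item 3 can be sketched in plain JavaScript. This is an illustrative stand-in only: the blacklist contents, function names, and the exact error-page URL format are assumptions, not the real Firefox interfaces.

```javascript
// Hypothetical sketch of the lookup API from item 2: callers ask whether a
// URI is on the malware blacklist before the load proceeds.
const malwareBlacklist = new Set([
  "evil.foo",           // illustrative entries only
  "malware.example",
]);

function isBlockedURI(uriString) {
  // Blacklists are typically keyed by host, so compare the host portion.
  const host = new URL(uriString).hostname;
  return malwareBlacklist.has(host);
}

// Item 3's hook: consult the API before page load and divert to an error page.
function checkBeforeLoad(uriString) {
  if (isBlockedURI(uriString)) {
    return "about:neterror?e=malwareBlocked&u=" + encodeURIComponent(uriString);
  }
  return uriString; // safe: proceed with the normal load
}
```

In the real implementation the lookup would be asynchronous and keyed on the update service's list format, but the shape of the decision is the same.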

My thinking in terms of approach here is that malware URIs should be handled with an error page in the content area, like server not found and other network failures are today.  The approach used in the current anti-phishing code (darkening the content and overlaying a speech bubble) isn't appropriate here since we need the interception to happen before page load occurs.  I've attached a first-pass mockup.

Most Firefox users are accustomed to error pages from their day-to-day surfing. For the non-technical users most likely to be victimized by malware sites, this presents a safe approach (no "default accept" behaviour): even if they don't read the content (as users seldom do), they will likely treat the page as a dead end and go elsewhere for whatever they were intending to find.

Unlike most existing error dialogs, this one does not have a bulleted list of possible solutions, since there really aren't any solutions for most users other than "Get me out of here."  Technically the "ignore this warning" is another option, but I wanted that visually distinct and out of the way, not presented as another valid option in the list.

As a supplemental note, Google currently does something similar with an interstitial page for links that lead to known malware sites.  An example can be found here:  

http://www.google.com/interstitial?url=http://www.kohit.net/
I have two questions about what happens when the user has selected "Check by asking Google about each site I visit" rather than "Check using a downloaded list of suspected sites":

1. Does Firefox still download the blacklist, so it can block the navigation ~90% of the time before any browser-based attacks have a chance to run?

2. If a site does not appear in the blacklist but the response from Google says "this is a malware site", will Firefox stop showing the page immediately and load this error page in its place?
1. Currently the blacklist gets downloaded in either mode to use as a backup in case the remote lookup fails.  So even if phishing is in remote lookup mode, malware checking should work.

2. I think the plan for malware is to only use the local lists so we don't have to slow down page load or unload a page (which would probably be too late anyway).

I think this means that malware blocking should be tied to the global safe browsing pref (browser.safebrowsing.enabled) and not care whether we're in local or remote lookup mode.
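A minimal sketch of that pref logic, with a plain object standing in for the real pref service. The remoteLookups pref name is an assumption used only to illustrate that lookup mode is ignored; only browser.safebrowsing.enabled is named in the comment above.

```javascript
// Stand-in for the real preferences service (illustrative only).
const prefs = {
  "browser.safebrowsing.enabled": true,
  // true = "ask Google about each site", false = downloaded-list mode.
  // Assumed pref name, shown only to demonstrate that it is not consulted.
  "browser.safebrowsing.remoteLookups": false,
};

function shouldCheckMalware() {
  // Lookup mode is deliberately not consulted: the local list is downloaded
  // in either mode, so malware checks work regardless.
  return prefs["browser.safebrowsing.enabled"] === true;
}
```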
> 2. I think the plan for malware is to only use the local lists so we don't have
> to slow down page load or unload a page (which would probably be too late
> anyway).

For a JavaScript-based attack, it might be too late or it might not.  For a "please download this shiny malware and run it" page, it won't be too late.

We're already asking Google whether it's a phishing site, so why not ask Google "is this a phishing or malware site"?
This mockup moves the notification from content to chrome, see my notes in the comments for my rationale.
I think the malware notification needs to originate from chrome instead of content for three reasons:

1)  Messing with the user's mental model:  We need to make it clear to the user who is talking, the browser or the content.  While most users will have the correct mental model, we are not helping them out by occasionally swapping the locations in the interface where the browser and the page are able to communicate (an inverse example is javascript popups that on OS X appear as a sheet coming out of the browser window and contain a 64x64 Firefox icon.)

2)  Overstepping our bounds:  There is a big difference between not being able to show a site due to the network being down, versus choosing to not display a site and instead replacing it with something else.  We obviously mean well by choosing to intercept the page request and display different information in the content area, but I still think this a line we should avoid crossing.

3)  Taking responsibility:  By displaying the message in the content area, it is not clear who is responsible for determining that this was an attack site.  It could have been the user's proxy server, or ISP, or some other third party.  Or even the site itself, the user may think "maybe the site was hacked and this message is a joke."  It needs to be clear that Firefox blocked the page.
a bunch of thoughts

Alex's comments and mockup make sense to me for the cases where we are showing information about confirmed exploitation sites, but there are some subtle changes that might be considered.  In the cases we are talking about in comment 0, it's not "the browser" that has identified evil.foo, it's really "the Google malware detection service" or some such service working with the browser.  Should we make that distinction?

In a related topic, I've been kicking around a few ideas about rule-based systems built into the browser that might also detect the early stages of malware attacks in cases where a site has not yet been added to the web service blacklist.

The idea would be to alert users to possible danger as it unfolds.  If this becomes feasible, figuring out how to present this information has the same challenges listed in the comments above.

http://www.usenix.org/events/hotbots07/tech/full_papers/provos/provos.pdf
has some interesting ideas about web content patterns that are consistently used in the execution of exploits and might be predictors of exploit attempts.

For example: web content that systematically scans for vulnerable versions of unpatched browsers or plugins.  Firefox could see these patterns as the content is parsed and alert the user that an attempt to exploit the system might be underway.

Another common pattern is a series of assignment statements that build a string obfuscating location.replace("http://evil.com/run-drive-by-exploit.html"), followed by a statement that calls eval() on the obfuscated string.

This combination of eval and an obfuscated location.replace that redirects to a drive-by exploit site could be detected, and the user warned of potential danger, before the redirect even happens and the attempt to compromise the system executes.
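As a toy illustration of the pattern described in the last two comments (many string-building assignments feeding an eval), a naive detector might look like this. The regex and threshold are purely illustrative and nowhere near production quality; real analysis, as in the paper cited above, is far more involved.

```javascript
// Heuristic sketch: flag scripts that build up a string through many small
// assignments and then pass something to eval(). Illustrative only.
function looksLikeObfuscatedRedirect(scriptText) {
  // Count string-concatenation assignments like: s += "loc"; or s = s + "ati";
  const concatOps =
    (scriptText.match(/\w+\s*(\+=|=\s*\w+\s*\+)\s*["']/g) || []).length;
  const usesEval = /\beval\s*\(/.test(scriptText);
  // Arbitrary threshold: several concatenations plus an eval is suspicious.
  return usesEval && concatOps >= 5;
}
```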

In these cases it really would be "the browser" detecting the possible attack, as opposed to "the Google malware-finding web service", and it would also be a warning about a possible problem site, rather than a confirmed known exploitation site.  Maybe a yellow version of Alex's mockup that indicates the possibility of problems, as opposed to the red version for a site confirmed to be exploiting systems, would be appropriate if we got something like this off the ground.

Also, will the Google malware blacklist include sites that are known to be exploiting just Firefox, or IE, or all browsers?  Do we need to make that distinction and/or communicate it to the user so we don't overstep our bounds?

Also, the mockup shows
   "The website at evil.foo has been identified as an attack site"
The study above presents the case where social networking sites that do not protect against code insertion and XSS problems are often vectors for malware attacks.

Should the warning/error messages associate problems with "the website" or with "the webpage"?  I guess the question is: should we taint all of myspace.com if only a single user page on myspace has been injected with malware code?
Flags: blocking-firefox3?
I think the "bypass this warning" link should be removed, to be replaced with something like Ctrl-Shift-click on the Get Me Out Of Here button. If you are really a security professional, you will know about this. And if you don't know, you'll complain to your colleagues at anti-phishing.com, who will.

On the other hand, what we are actually doing here is giving Google veto power over any web page. Hmm...

Gerv
I think Alex's mockup is preferable for a few reasons:

1) It is consistent with the anti-phishing UE.  I can't think of any reason to treat phishing and malware as particularly different in that, for the user, they are both significant dangers to be avoided/protected against.  I expect that the average user neither knows nor particularly cares about the difference between phishing and malware, only that they are scary things on the internet.  I think that treating them as conceptually different would cause unnecessary confusion.

2) We have already been training users with the anti-phishing UE.  I think that building upon and reinforcing that training would be better than throwing a different interaction design at them.  Chances are pretty good that most users will never read all the text in the malware and phishing dialogs/pages.  With the dialog, the default user action will be to simply scan the text in the bigger font and click through on the "Get me out of here" button.  This is reasonable default behaviour for both phishing and malware sites.

3) I agree that the warnings should be part of chrome, not content.

4) I think that we should say where the warnings are coming from (for both phishing and malware) but that information can be included in what is essentially "the fine print" -- the smaller-font paragraph of text in the dialogs that will usually not be read.  If people are interested, the information is there, but it's not "in your face" enough to cause confusion when a user is simply doing the default "eek-scan-clickbutton" behaviour.

5) I'm not sure I agree with Gerv's suggestion to remove the "ignore this warning" link entirely since that feels like it may be overly restrictive of users' freedom (to hang themselves, but whatever).  I would, however, add a confirmation dialog with some extra explanation so users could cancel the action if they accidentally (or out of curiosity) clicked through.

UI nitpick:

Could we make the "Get me out of here" button bigger, with a slightly bigger font, and centered in the notification dialog?  Really make it clear what the user's click-to-run-away target should be.

A question:

If we're going to be putting up the malware warning before the page loads, is there any reason we shouldn't do the same with phishing warnings?  I understand that they don't present the same immediate danger, but if there's no technical reason, I think consistent behaviour would be preferable.
OS: Mac OS X → All
Hardware: PC → All
about consistent behavior between malware and phishing I agree.

Increasingly, phishing is the method used to get users to drive-by sites and install malware.  One of the first big public attacks like this was back in 2005, and the number of such attacks is on the increase.
http://www.sophos.com/pressoffice/news/articles/2005/09/va_katrina.html

If a phisher can entice you to a site it's not only worth their time to try and steal private information, they can also increase the economic gain by attempting to take control of your system while you are on the page.
Couple of comments:

- Yes, malware and anti-phishing UI should be consistent; most users aren't going to know, or care deeply, about the difference. I am not terribly convinced that we should even differentiate meaningfully on the message to users, which is: this site has been identified as evil, and you shouldn't go to it.

- Whatever solution we go with should not be a dead-end for users; we should be giving them some sort of option to complete the task that they thought they were about to complete. The simplest thing here would be a search box that allowed them to try to find the place they were trying to go to when they stumbled upon this malicious location. This could replace the "Get me out of here" link.

- If the user decides to continue they should first be asked if they wish to report the page as inappropriately marked as malicious; this solves both Gerv's concern that making it easy to click through the warning defeats the purpose, and also ensures that we're getting good metrics on how many people are skipping past the warnings for certain pages.

- attaching this message to the location bar isn't, to my mind, a priority

- Alex's design depends on being able to render error messages as he mocked up, which I'd love to see, but am not sure we can do. We need to scope & determine that feasibility pretty damned quickly, IMO. If it fails, I'm not entirely opposed to making this an error page that replaces content, simply because I don't see a lot of value in spoofing that type of error page; so far nobody's bothered to spoof our HTTP error pages (I bet Jesse's gonna step up and figure out why this is bad shortly, though!)

cheers,
mike
(not blocking on P2 PRD items, but really-really want!)
Flags: blocking-firefox3? → blocking-firefox3-
Whiteboard: SPI-003f [wanted-firefox3]
Attached image Scarier error page (obsolete) —
I like Alex's mockup as well, but I'm sensitive to beltzner's point that it is a larger question mark in terms of sizing.  While we figure that out, I've updated the error page mockup to take advantage of Alex's much scarier visuals.  I also followed Alex's lead and dropped the unnecessary "click here" in the ignore-warning text.

I've softened the text a bit, per Google's suggestion that we not be absolute about our warnings, and took Deb's suggestion to grow the button (though we'll see how that works in implementation-land, vs. just mockup).

The error page discussion should keep bug 327181 in mind as well (using error pages for SSL errors) since these are obviously related, though they speak to different problems (known maliciousness vs. broken SSL and the theoretical potential for maliciousness).
Attachment #265049 - Attachment is obsolete: true
In case anyone is curious what this particular bug would look like in the form of an article:

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9023520&pageNumber=1
Just trying to add my 2c here - what about sites that contain embedded references to "attack sites"? A lot of attacks contaminate websites with an iframe that points to a malicious site hosting the attack code. I understand how we can prevent users from browsing to attack sites (or at least alert them), but a mechanism is needed to examine code coming from a different domain as well.
More $.02: an error page can be displayed in a frame or Iframe quite nicely.
Just another 1c - usually these malicious iframes are non-displayable (zero size, style=visibility: hidden, etc...). That was the kind I was referring to - need to figure a way to display the user a prompt indicating that some content on the page is hosted on a server known to host attack code, and was blocked...
(In reply to comment #16)
> Just another 1c - usually these malicious iframes are non-displayable (zero
> size, style=visibility: hidden, etc...). That was the kind I was referring to -
> need to figure a way to display the user a prompt indicating that some content
> on the page is hosted on a server known to host attack code, and was blocked...

Do we though?  :)  I mean to say, do our users need a way to make visible a hidden, blocked, malware-flagged site?  Is it an impediment to any legitimate, majority use-case to just block this and fail to make alternate arrangements easily available?
The malware blacklist includes pages that have these invisible iframes so, hopefully, many of these will be blocked. The problem is that most of the pages that host these iframes are compromised to try dozens of potential vulnerabilities serially. If we happen to detect one, we might want to just block everything. This is the approach that we take when populating the blacklist.
In reply to comment 16: whether the "malicious" iframe is displayable or 
not, we defeat the malice by substituting the error page for the malicious 
content, even if the error page is not actually seen by the user.
True - although the error page, as per the initial design proposed here, would have been interactive... invisible pages should then be treated differently to present some kind of error (much like the initial proposal in attachment 265071, but with the text referring to "elements of the page...").
Rather than reinventing the wheel, this patch just adds some flexibility to netError.xhtml to allow callers to pass in an optional CSS class name, which it then sets on the elements of the page.  This allows error page callers to change the visual presentation using custom CSS rules in the general case.

The patch also includes specific styling for malware handling, making use of the above changes.  I'll attach a screenshot of this presentation.  

The patch requires two new icon files be added to pinstripe/winstripe themes in /toolkit/, also to be attached.

As beltzner says, Alex's mockup is another good way to present it, and his points are taken.  I think it could be implemented (or something similar-looking could) by embedding the necessary UI in a XUL popup.  However, in terms of sizing this item in contention with the other security UI priorities for FF3, re-using the existing framework for presenting users with reasons why a page won't be rendered seems like an agreeable alternative; particularly in the context of bugs like bug 327181 which already envision extending error pages to handle wont-show cases along with cant-show.  The page has no obvious spoofing value and, at the end of the day, will almost certainly engender the appropriate behaviour in our user, i.e. make them not visit the malware site.

Incidentally, this patch also resolves an outstanding issue with netError.xhtml.  The file currently has a comment that reads:

<!-- XXX this needs to be themeable -->

In relation to the page's favicon link-rel.  The additional style parameter added by this patch also allows custom favicon usage.  If the patch is landed as-is, bug 314416 can be marked fixed as well.
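A rough sketch of the class-passing mechanism the patch describes, assuming a query-string parameter named "c" (matching the sample URL quoted later in this bug). The helper name and sanitization rule here are illustrative, not the patch's actual code.

```javascript
// Hypothetical helper: pull an optional CSS class out of the error page's
// own URI, e.g. about:neterror?e=malwareBlocked&u=...&c=blacklist&d=...
function getCSSClassFromURI(uri) {
  const query = uri.split("?")[1] || "";
  for (const pair of query.split("&")) {
    const [name, value] = pair.split("=");
    if (name === "c" && value) {
      // Allow only a simple token so callers cannot inject markup or styles.
      return /^[-_a-zA-Z0-9]+$/.test(value) ? value : "";
    }
  }
  return ""; // no custom class: default error-page presentation
}

// The page would then do something like:
//   document.documentElement.className = getCSSClassFromURI(document.documentURI);
// letting theme CSS target e.g. html.blacklist selectors.
```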
Note that there is no Try Again button here as there would be in other error pages, since that is not meaningful in this context.  This is a fatal error - you shouldn't go here.

There is also no click-through "ignore this warning" link.  The more I thought about it, the more I felt that designing a last-chance-harm-prevention mechanism with a one-click override for the "security researcher" use case seemed misguided.  Security researchers can disable safebrowsing, or download the file with wget to their local machine.  The false report case is more convincing, but still not enough to justify such an easy workaround, in my opinion.
Attachment #266401 - Attachment is obsolete: true
A copy of this file is needed in
toolkit/themes/pinstripe/global/icons  and
toolkit/themes/winstripe/global/icons
A copy of this file is needed in
toolkit/themes/pinstripe/global/icons  and
toolkit/themes/winstripe/global/icons
If we're not allowing them any (easy) way to get to the site that's being blocked, are the words "This site may attempt" accurate? The site won't attempt that, though it could've if we hadn't saved them. Or am I understanding that wording wrong?
Comment on attachment 268881 [details]
Screencap of malware error page, as handled by first patch

Mike - mconnor is also aware of the progress on this bug, so if you want to delegate ui-r, he should be able to take it (esp. since he can't do code review for this component).
Attachment #268881 - Flags: ui-review?(beltzner)
(In reply to comment #5)
> I think the malware notification needs to originate from chrome instead of
> content for three reasons:
> 
> 1)  Messing with the user's mental model:  We need to make it clear to the user
> who is talking, the browser or the content.  

I second that. I just realized that, in the case of malware warnings, violating this principle is especially dangerous. Bogus "Your system has been infected!" warnings are already a common technique used by websites to trick users into downloading questionable software. By placing our malware warning entirely in the content area, we're encouraging users to trust these kinds of messages.

And what if someone spoofs Firefox's warning page, which users have already learnt to trust, slightly changing the text and adding a link to "download necessary protection software"?

Teaching users to trust messages coming from the content area is bad in general but it's _especially_ dangerous in this case.
(In reply to comment #25)
> If we're not allowing them any (easy) way to get to the site that's being
> blocked, are the words "This site may attempt" accurate? The site won't attempt
> that, though it could've if we hadn't saved them. Or am I understanding that
> wording wrong?

Heh - I take your point.  The language is in there in that context because we didn't want to talk in terms of absolutes, since we're not sure which of those things it will do, and yet did absolutely want to mention the kind of things that could go wrong, in user-meaningful terms, so that they understood why we blocked it.

What do you think of changing:

This site may attempt to install destructive programs on your computer which can steal private information, damage your computer, and use your computer to attack others.

...to...

Attack sites may attempt to install destructive programs on your computer which can steal private information, damage your computer, and use your computer to attack others.

...so that we're making a generic statement about attack sites, not a specific statement about this one, that we neutered?
(In reply to comment #27)
> I second that. I just realized that in the case of malware warnings violating
> this principle is especially dangerous. Bogus "Your system has been infected!"
> warnings are already a common technique used by websites to trick users into
> downloading questionable software. By placing our malware warning entirely in
> the content area we're encouraging users to trust these kind of messages.
> 
> And what if someone spoofs the Firefox's warning page, which users have already
> learnt to trust, slightly changing the text and adding a link to "download
> necessary protection software"?
> 
> Teaching users to trust messages coming from the content area is bad in general
> but it's _especially_ dangerous in this case.

Dangit, Adam.  That's a good point.  

I understand that it's probably the point Alex was trying to make all along, I'm sometimes slow that way.  I mean, really, it's not substantially different from a lot of other ways a malicious page could exploit the firefox trust relationship (e.g. spoof the regular page-load-error, spoof the notification bar) but it's still a possibility.

So the question is - is the protection afforded by malware-blocking-as-error-page useful enough to include immediately, or do we hold off until the places UI lands (which has a similar look and feel to Alex's malware mockup, unsurprisingly :) and then piggyback off of that?  

This patch has 3 pieces:
 - Modifications to make the error page more stylable.  These are actually independent of malware protection, and my feeling is that they should go in anyhow, though I would not be opposed to moving them to another bug.
 - Strings in browser for malware protection.  These are probably the strings we'd use for any other presentation as well (or similar ones).
 - CSS/Icons for the custom styling of netError in the malware case.

Landing the patch now gives us a working front end for malware, and almost all the pieces (save part 3) are useful even in a world where we migrate it to the places UI.  I don't really want to block this bug on places UI though, since that feels like a long deferral, and this is not something we want to introduce at the last second, imho.
IMHO, we need to display _something_ in the content area, so it's good to get this in as a measure for that. Consider the case where we block the content of one tab and the user dismisses the warning without closing the tab, then switches to a different tab - he still needs some indicator of what's up with that tab. Also, what if the tab is not being shown and is just in the background? What if we're loading a tabgroup with multiple malware tabs? I think we need to show some indication in the content area there. Probably some additional UI is a good idea though, maybe a notification bar, a red background in the urlbar, or something like that.
I'm not too convinced of those dialog-style menus hanging off buttons, esp. as someone could hide lots of toolbar elements through customizations, probably also the urlbar (I'm sure someone finds a good use for that) and I guess we still want to show some blocking UI in that case.
Additionally, embedded Gecko or other Gecko-based browsers probably want that functionality as well - or is this planned to be a Firefox-only feature? In that case, you should not touch a toolkit file like netError at all and other Gecko-based products need to duplicate code for such a feature.
Comment on attachment 268880 [details] [diff] [review]
Patch netError.xhtml to support broader theming flexibility, including malware specifically.

This is classed under phishing protection, but the principal code impact is on netError.xhtml, so I'm tapping a docshell reviewer to take a look.
Attachment #268880 - Flags: review?(cbiesinger)
(In reply to comment #28)
> Attack sites may attempt to install destructive programs on your computer which
> can steal private information, damage your computer, and use your computer to
> attack others.

Yeah, this works for me.
Attachment #268880 - Attachment is obsolete: true
Attachment #268975 - Flags: review?(cbiesinger)
Attachment #268880 - Flags: review?(cbiesinger)
Comment on attachment 268975 [details] [diff] [review]
Minor string change, and patch updated to apply cleanly on trunk

You also need to change dom/locales/en-US/chrome/netError.dtd

+        
+        var class = getCSSClass();
+        if (class) {
+        

Please don't add trailing whitespace; also, don't add that empty line after the opening brace
Attachment #268975 - Flags: review?(cbiesinger) → review+
On checkin please note the need to also land both the custom favicon and the custom page icon in global/icons/ of pinstripe & winstripe themes.
Attachment #268975 - Attachment is obsolete: true
Comment on attachment 268881 [details]
Screencap of malware error page, as handled by first patch

ui-r+ with two comments:

1. "Attack sites try to install programs that steal private information, use your computer to attack others, or otherwise damage your system."

2. a follow-up bug should be filed to size and attempt to get this into a chrome based warning, as per comment 5, comment 27, etc.
Attachment #268881 - Flags: ui-review?(beltzner) → ui-review+
"otherwise damage your system"? did I really write that? drop the "otherwise" :)
Attachment #269895 - Attachment is obsolete: true
Blocks: 385988
Status: NEW → ASSIGNED
Whiteboard: SPI-003f [wanted-firefox3] → SPI-003f [wanted-firefox3] [checkin needed]
Attachment #269933 - Attachment is obsolete: true
After discussion on IRC with Dao, Gavin and Biesi, I've updated the code in netError.xhtml to only set the className on document.documentElement.  This saves iterating needlessly through every child element setting className, at the expense of more complicated CSS selectors.
Attachment #270040 - Attachment is obsolete: true
mozilla/browser/locales/en-US/chrome/overrides/netError.dtd 	1.8
mozilla/docshell/resources/content/netError.xhtml 	1.21
mozilla/dom/locales/en-US/chrome/netError.dtd 	1.8
mozilla/toolkit/themes/pinstripe/global/jar.mn 	1.30
mozilla/toolkit/themes/pinstripe/global/netError.css 	1.4
mozilla/toolkit/themes/pinstripe/global/icons/blacklist_favicon.png 	1.1
mozilla/toolkit/themes/pinstripe/global/icons/blacklist_large.png 	1.1
mozilla/toolkit/themes/winstripe/global/jar.mn 	1.32
mozilla/toolkit/themes/winstripe/global/netError.css 	1.4
mozilla/toolkit/themes/winstripe/global/icons/blacklist_favicon.png 	1.1
mozilla/toolkit/themes/winstripe/global/icons/blacklist_large.png 	1.1
Status: ASSIGNED → RESOLVED
Closed: 17 years ago
Resolution: --- → FIXED
Whiteboard: SPI-003f [wanted-firefox3] [checkin needed] → SPI-003f [wanted-firefox3]
Target Milestone: Firefox 3 alpha6 → Firefox 3 beta1
Is bug 384941 the bug about making use of this error page? Or which are the other bugs mentioned in comment 0?
Blocks: 384941
(In reply to comment #42)
> Is bug 384941 the bug about making use of this error page? Or which are the
> other bugs mentioned in comment 0?

Yes, and thanks for the pickup, marking this as blocking bug 384941.  

The implementation has changed from what was described in comment 0.  Hooking into browser.js is too late to catch malware since we need to be certain it happens before any page content is loaded.  Dave is using 384941 to track the work necessary in docshell to catch malware pages sufficiently early.  

This bug tracks only the changes necessary to present the appropriate UI, which will, as you mention, be used by the code in bug 384941. 
Warning: class is a reserved identifier
Source File: about:neterror?e=netTimeout...
Line: 92, Column: 12
Source Code:
        if (class === -1) 

please don't use the word "class"
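For reference, the warning comes from "class" being a reserved word in JavaScript. A hypothetical minimal fix is simply a rename; getCSSClass here is a stand-in for the real helper in netError.xhtml, and the stub value is illustrative only.

```javascript
// Stand-in for the real helper in netError.xhtml (illustrative only).
function getCSSClass() {
  return "blacklist";
}

// was: var class = getCSSClass();  -- "class" is reserved, so rename it
var cssClass = getCSSClass();
var applied = "";
if (cssClass) {
  applied = cssClass; // apply the class to the document root as before
}
```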
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Attachment #271013 - Flags: review?(cbiesinger)
Attachment #271013 - Flags: review?(cbiesinger) → review+
Whiteboard: SPI-003f [wanted-firefox3] → SPI-003f [wanted-firefox3] [checkin needed]
Checking in docshell/resources/content/netError.xhtml;
new revision: 1.22; previous revision: 1.21
Status: REOPENED → RESOLVED
Closed: 17 years ago17 years ago
Resolution: --- → FIXED
Whiteboard: SPI-003f [wanted-firefox3] [checkin needed] → SPI-003f [wanted-firefox3]
I think we also need a smaller warning for pages which are not dangerous for Firefox users but nevertheless contain exploits for other browsers. Will file followup if needed.
(In reply to comment #47)
> I think we also need a smaller warning for pages which are not dangerous for
> Firefox users but nevertheless contain exploits for other browsers. Will file
> followup if needed.

It seems to me that it would be nearly impossible to ever determine this, since the malware could be attacking, e.g., Flash, Java, or QuickTime, instead of Firefox itself.  Furthermore, the most prevalent way of building a malware site like this is to stack hundreds of exploit iframes with the hope that some of them will get through[1], so the number of sites matching this criterion is likely to be small.  Keep in mind too that, at least for the moment, we're not talking about active page analysis, just checking the URL against known offenders, which would make the determination harder still.

It's hard for me, too, to see the value this would add - giving users the ability to visit known-malicious sites seems like the wrong thing to spend time on?  One expects that most users don't want to be there in the first place, even if we could make a meaningful determination about their safety.

Of course, if you're thinking about security researchers, I can understand that; but they have more sophistication at their disposal.  They can flip the pref, or wget themselves a local copy for forensic analysis - I don't think we need to create a main-stream path for them.

[1] http://www.usenix.org/events/hotbots07/tech/full_papers/provos/provos.pdf
Also, I'd be worried about "malicious but safe for Firefox" sites (1) changing so they do attack Firefox or (2) encouraging users to view them in other browsers (e.g. by having "broken" layout when viewed in Firefox).
Is there a URI that theme developers can enter to test this functionality (bring up the error page)?
(In reply to comment #50)
> Is there a URI that theme developers can enter to test this functionality
> (bring up the error page)?

The d= string might change in the final implementation of the malware checking backend, but for styling purposes, this should work:

chrome://global/content/netError.xhtml?c=blacklist&e=malwareBlocked&u=http://evil.foo/&d=The%20site%20at%20evil.foo%20has%20been%20reported%20as%20an%20attack%20site.


Depends on: 387524
Blocks: 387524
No longer depends on: 387524
Blocks: 388645
Flags: in-litmus?
Litmus Triage Team: Tomcat will cover the Litmus test case for this one.
Depends on: 396309
Depends on: 397937
Created testcase in Litmus; also verified fixed with Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.9a9pre) Gecko/2007101504 Minefield/3.0a9pre ID:2007101504
Status: RESOLVED → VERIFIED
Flags: in-litmus? → in-litmus+
Flags: wanted-firefox3+
Whiteboard: SPI-003f [wanted-firefox3] → SPI-003f
Product: Firefox → Toolkit