Closed Bug 329292 Opened 18 years ago Closed 18 years ago

add SafeBrowsing anti-phishing extension to trunk for evaluation

Component: Firefox :: General
Type: enhancement
Priority: Not set
Severity: normal
Status: RESOLVED FIXED
Target Milestone: Firefox 2 alpha2
Reporter: fritz
Assignee: fritz
Attachments: 2 files, 1 obsolete file

User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.1) Gecko/20060111 Firefox/1.5.0.1
Build Identifier: 

We'd like to land the SafeBrowsing anti-phishing extension on the trunk for consideration as the base for an anti-phishing feature in Firefox. It'll be a global extension, off by default.

More info:
http://developer.mozilla.org/en/docs/Safe_Browsing
http://developer.mozilla.org/en/docs/Safe_Browsing:_Design_Documentation

Reproducible: Always
-> fritz
Assignee: nobody → fritz
Status: UNCONFIRMED → NEW
Ever confirmed: true
Target Milestone: --- → Firefox 2 alpha2
There are some privacy issues that should be addressed:

From Google's FAQ
http://www.google.com/tools/firefox/safebrowsing/faq.html

"12. What information is sent to Google when I enable the Enhanced Protection Feature?

When enabled, the entire URL of the site that you're visiting will be securely transmitted to Google for evaluation. In addition, a very condensed version of the page's content may be sent to compare similarities between authentic and forged pages. For example, if the condensed 'fingerprint' of the page you are visiting matches the 'fingerprint' of a popular bank's site but the page's URL is different, that's a good sign that the page you are on is designed to mislead users.

If you disable Enhanced Protection, no information about the pages you visit will be sent to Google unless you visit a page Google Safe Browsing identifies as potentially unsafe. In this case, we will only send the action you choose to take to help refine our anti-phishing algorithms. Please note that enabling Enhanced Protection gives the Google Safe Browsing extension access to the most up-to-date fraud information about each page you visit."
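(For illustration only: a minimal sketch of the 'fingerprint' comparison the FAQ describes, assuming a toy hash of normalized page text. This is not Google's actual algorithm, and the names below are made up.)

  // Hypothetical sketch: reduce a page to a condensed "fingerprint" and
  // compare it against fingerprints of known, legitimate sites.
  function computeFingerprint(pageText) {
    // Normalize: lowercase and collapse whitespace before hashing.
    var normalized = pageText.toLowerCase().replace(/\s+/g, " ");
    // Toy 32-bit rolling hash standing in for a real similarity digest.
    var hash = 0;
    for (var i = 0; i < normalized.length; i++) {
      hash = ((hash << 5) - hash + normalized.charCodeAt(i)) | 0;
    }
    return hash;
  }

  function looksLikeForgery(pageText, pageHost, knownSites) {
    var fp = computeFingerprint(pageText);
    // If the fingerprint matches a popular site but the host differs,
    // that is a signal the page may be designed to mislead users.
    return knownSites.some(function(site) {
      return site.fingerprint === fp && site.host !== pageHost;
    });
  }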
guanxi, what privacy issues?  Addressed how?  The point of the extension being opt-in is that users who opt-in are asserting trust in the service to keep their suspect-site browsing traces private.  Are you looking for a stronger privacy policy statement somewhere?  Or some actual change in mechanism?

/be
There should be some visible question about whether to turn it on/off, along with clear information about what privacy concerns some could have with it.

This is a very important feature, especially for those who would most greatly benefit from the phishing filter: normal everyday users. If the filter is disabled by default, these people will probably either never find out about the feature until it is too late, or won't be able to figure out how to enable it.

Perhaps the question (such as "Would you like to enable protection against known scamming web sites?") along with the privacy information should be part of the Firefox installation process. That way, users can make an educated decision at the start.

I just feel we really need to let users know about this feature right from the start. Else the users that would benefit most will never have this turned on.
Chris: absolutely.  Whether this is an installer option remains to be decided, but however the optionality works, and *provided it works really well*, it needs to be (1) clear and compelling to average users; (2) clear about both how it works and the governing privacy policy.

/be
Blacklists/signatures should be downloaded on a regular basis (maybe integrated into Firefox's update routine?). Just like virus scanners work. URLs and contents of pages should not be transmitted to a remote server. Especially not to servers from a company that is known to work hand in hand with China's dictatorship.
(In reply to comment #6)
> Blacklists/signatures should be downloaded on a regular basis (maybe integrated
> into Firefox's update routine?).

We may get to that promised land, but not right away.  To avoid an unacceptable false positive rate, the extension uses a service that needs training data, and enough of it over time to train itself well.

> Just like virus scanners work. URLs and
> contents of pages should not be transmitted to a remote server. Especially not
> to servers from a company that is known to work hand in hand with China's
> dictatorship.

I don't like China's dictatorship either.  Too bad all the big companies are doing deals, even if only to avoid being left out.

No one is forcing you to use this extension.  That would be dictating a single anti-phishing service.  Likewise, Mozilla doesn't reject extensions such as this one.  That would dictate other outcomes.  We want user choice and market judgment.

/be
(In reply to comment #7)
> (In reply to comment #6)
> > Blacklists/signatures should be downloaded on a regular basis (maybe 
> > integrated
> > into Firefox's update routine?).
> 
> We may get to that promised land, but not right away.  To avoid unacceptable
> false positive rate, the extension uses as service that needs training data,
> and enough over time to train itself well.

(I think this feature is a fantastic idea, by the way: It's hard to think of anything that would provide more value to users.  I'm just focusing on one issue:)

The following is easily suggested by someone who isn't writing the code.  I just think user privacy should be paramount, and if this feature can be implemented while maintaining privacy we offer more value to users and set a good example for others:


Optimally, we should keep all user data on the client.

You say that at this point Google needs data to improve accuracy.  But if it's not accurate enough, then it's not ready for users, and if it is accurate enough, then the data could be used client-side (ignoring any technical issues).

First, a few questions:
  * Given the likely changing tactics of phishers (partly in reaction to this feature), how often will Google need to update its data?
  * Does Google need everyone's data?  Would a subset suffice? (See related suggestion below.)  Exactly how big a sample is needed?
  * Is it appropriate to use Firefox users to help Google develop a product?  That may sound provocative, but I can see arguments on both sides.  It should be addressed, though.

Also, some suggestions:
  * Google should anonymize all data they collect, delete it after processing and after a maximum period of retention (e.g. one week max) and the privacy policy should promise all that clearly.
  * Possibly over-budget, but:  Client sends encrypted phishing data to a proxy (Mozilla.com?); the proxy can't read the data but scrubs identifying info (e.g. IP address) and sends the now-anonymous data on to Google, which decrypts it. I see the encryption is already planned in the "Security of Remote Lookups" section. (See the sketch after this list.)
  * Google doesn't need live updates of each event. Data could be sent once/day or more, further improving privacy.
  * Opt-in on updating Google's data, otherwise keep all data on client:  They likely don't need data from everyone (as suggested above under questions).  When function is activated, ask 'Help us to help you! ... Privacy warning: ...'.
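(A minimal sketch of the proxy idea above, assuming the client encrypts the report with the provider's public key so the proxy only ever sees ciphertext plus the connection's IP address. The names are hypothetical and this is not part of the extension.)

  // Hypothetical proxy-side sketch: the client encrypts the report with the
  // provider's public key, so the proxy can strip identifying data (the IP
  // address) without being able to read the report itself.
  function scrubAndForward(incoming, forward) {
    // incoming: { clientIp: "...", ciphertext: "..." } as received by the proxy.
    // forward:  a function that delivers data to the provider.
    var anonymized = { ciphertext: incoming.ciphertext };  // drop clientIp
    forward(anonymized);
  }

  // Example: the provider receives only the ciphertext, never the client IP.
  scrubAndForward(
    { clientIp: "192.0.2.7", ciphertext: "BASE64-ENCRYPTED-REPORT" },
    function(data) { /* send data.ciphertext on to the provider */ }
  );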

Any plans to land this on the branch? It would be great to get it on the branch for Alpha 1 or, at the latest, Alpha 2...even if it's just dogfood. IE7 will have anti-phishing and we don't want them to have one up on us now, do we? Especially something so important to a lot of users. I just don't think we can wait for this in 3.0, which is over a year and maybe a year and a half away. Too much time for IE7 to regain users with their anti-phishing mechanism.
So, yes, we are aiming to have some sort of anti-phishing capability in place for Fx 2 in order to provide enhanced features to help people stay safe out there, and to be competitive. 

The plan of record to ensure that we have solutions that work well for Fx 2 is to land the safebrowsing code on the trunk as an extension (like DOM Inspector); we have all agreed to work through the details in parallel, including privacy implications/disclosure, branding, integration, etc.

The options for integrating a third-party anti-phishing capability are:
1. Bundled extension enabled by default 
2. Bundled extension(s) with explicit opt-in to preserve user choice
3. User initiated "wizard-y" streamlined extension installation to add an anti-phishing capability post-install
4. No bundled extensions but strong promotion on first run page, mozilla.com and/or start page

IMHO, #2 or #3 are the most likely outcomes that preserve user choice, provide a good user experience, ensure accessibility of the feature, and fit within our product philosophy.

We've also been looking at doing some very basic heuristic-based protection, like in Thunderbird, as a baseline, with possible hooks on positives at that point to suggest one or more enhanced solutions. 
("This is Firefox's basic anti-phishing protection service working to help keep you safe on the Web.  <a>Learn more about enhanced capabilities available through Firefox Extensions</a>")
(In reply to comment #8)
> The following is easily suggested by someone who isn't writing the code.  I
> just think user privacy should be paramount, and if this feature can be
> implemented while maintaining privacy we offer more value to users and set a
> good example for others:

This feature can, indeed, be implemented while maintaining privacy. Without "enhanced" mode enabled, the extension uses a locally cached copy of the anti-phishing data and seeks updates from Google when the browser starts up. 

Presently, when the extension is installed it offers the user a choice about enabling "Enhanced" protection mode (I'll attach a screencap). We'll definitely want to soften and clarify the language in that dialog -- right now it's pretty skewed towards people turning it on -- and perhaps even make the default option to not enable this mode.

> You say that at this point Google needs data to improve accuracy.  But if it's

Improving the data accuracy comes from the "report this site as phishing" and "report this site as OK" functions which are used to modify the master list of phishing sites. Both of those functions are directly initiated by the user. The data stream in "Enhanced" mode is just used to compare against the master list, and returns a single "good" or "bad" signal to the user's browser.
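(To make the two modes concrete, a minimal sketch; the function names are hypothetical, not the extension's actual internals.)

  // Hypothetical sketch of the two modes. In standard mode only the locally
  // cached blacklist is consulted; in "enhanced" mode the full URL is also
  // checked against the provider's up-to-date master list, which answers
  // with a single good/bad signal.
  function checkUrl(url, localBlacklist, enhancedEnabled, remoteLookup) {
    if (localBlacklist.indexOf(url) != -1) {
      return "bad";                       // hit in the locally cached list
    }
    if (enhancedEnabled) {
      return remoteLookup(url);           // provider returns "good" or "bad"
    }
    return "good";                        // standard mode: nothing sent out
  }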

>   * Given likely changing tactics of phishers (partly in reaction to this
> feature), how often will Google's need to update its data?

Every minute of every day, I'd imagine. The success of this extension rests on the ability to get submissions from users, reporting suspected phishing sites, and then filtering that list to strip out false positives. Fritz tells me that even with the relatively small user base that they have right now, they are getting a huge number of these submissions. I would actually like for the extension to extend the "Report Broken Website ..." dialog to include a mechanism for reporting suspected phishing websites.

>   * Is it appropriate to use Firefox users to help Google develop a product? 

Google is, as I understand things, donating this code to the open source community by landing it as an extension in the Mozilla tree. Whether or not it makes it into Firefox is a totally different question, and will be based on how that code can be made to fit with our principles of user choice and innovation. As cbeard hints in comment 10, this would include allowing users to pick their own provider of anti-phishing data, and allowing that provider to brand and identify themselves in the presentation layer. Fritz has been very receptive to this so far.

The first step is getting it on the trunk, and then we can start dealing with the other issues in parallel, as cbeard mentions.

>   * Google should anonymize all data they collect, delete it after processing
> and after a maximum period of retention (e.g. one week max) and the privacy
> policy should promise all that clearly.

Fritz? As I understand it, the data comes in totally anonymized already, as in, it's just a URL without any other information. Is that right?

>   * Possibly over-budget, but:  Client sends encyrpted phishing data to proxy
> (Mozilla.com?), proxy can't read data but scrubs identifying info (e.g. IP

Any proxy would have the same issues around potentially being able to tie a submission to an IP address, would it not? Given data anonymization and the default setting of enhanced mode being off, I think this might be overkill.

>   * Opt-in on updating Google's data, otherwise keep all data on client:  They
> likely don't need data from everyone (as suggested above under questions). 
> When function is activated, ask 'Help us to help you! ... Privacy warning:
> ...'.

Does a user really click on the same phish twice? Reporting something as a phish should certainly add it to the user's local cache if they've opted to work without "enhanced" protection, though, yes.
    Reading over this I've got two questions (pardon if they are redundant, I don't see the answers mentioned anywhere):

    1.  Will Google continue releasing the extension as part of Google Labs, or a product offering?  Is this going to be a fork of the product?  Or is it actually moving to the trunk?

    2.  Will the server side still connect to Google?  Or will mozilla.org host?  Is the server side released under the same license?  Or is the API under such a license that it would be possible to construct a new server side without infringing (assuming no Google data is used; I'm referring to the implementation over the naughty list)?
Right now there's a lot of text and the screen doesn't really explain the difference to the user between the two modes, but the option is there, and we can swizzle the text quite easily.
(In reply to comment #13)
> Created an attachment (id=214183) [edit]
> initial run screen for safe browsing extension

Suggestion:  In order to reduce that text even further, how about changing the radiobuttons to say something like:

(0) Send Google feedback data
(0) Do not send Google feedback data

or similar.  "Enhanced Protection" doesn't really hint what the feature does, which is why you have to add those extra paragraphs.
(In reply to comment #12)
>  1.  Will google continue releasing the extension as part of Google Labs, 
>  or a product offering?  Is this going to be a fork of the product?  Or is 
>  it actually moving to the trunk?

I can't speak for Google, but would hope/expect that they would continue to release the tool as part of their cross-browser Google Toolbar, and try to keep their code synched to that in our tree, perhaps with different default settings and some customizations.

>     2.  Will server side still connect to google?  Or will mozilla.org host? 

Please see comment #10. One of the conditions of this making it into the product will be ensuring that there is some easy way for a user to choose their own data provider for the safe browsing extension, and indeed, ensuring that the user has some method for choosing to use safe browsing or not at all.

If the code is submitted to the trunk, I would expect it to be subject to the MPL. I don't know if we can have code on trunk that is licensed otherwise.

> (In reply to comment #13)
> Suggestion:  In order to reduce that text even further, how about changing the
> radiobuttons to say something like:

We're not going to discuss that in this bug. Wait for the code to land on trunk, and we'll file bugs against it at that time. Patience, patience :)
(In reply to comment #15)
> Please see comment #10. One of the conditions of this making it into the
> product will be ensuring that there is some easy way for a user to choose their
> own data provider for the safe browsing extension, and indeed, ensuring that
> the user has some method for choosing to use safe browsing or not at all.
> 
> If the code is submitted to the trunk, I would expect it to be subject to the
> MPL. I don't know if we can have code on trunk that is licensed otherwise.
> 
OK, one of the reasons I ask is: if Mozilla.org hosts its own data provider, we could perhaps rig Reporter to contribute to that (add a new problem type of "phishing/scam").

Also could be fun to build a proxy ;-)

/should stop being so geeky
Mike, thanks for answering in detail in comment #11. Just a couple minor points:

> >   * Google should anonymize all data they collect, delete it after 
> > processing
> > and after a maximum period of retention (e.g. one week max) and the privacy
> > policy should promise all that clearly.
> 
> Fritz? As I understand it, the data comes in totally anonymized already, as 
> in,
> it's just a URL without any other information. Is that right?

Well, it must come with an IP address.  URL + IP address = log of user's browsing. 


> >   * Possibly over-budget, but:  Client sends encyrpted phishing data to 
> > proxy
> > (Mozilla.com?), proxy can't read data but scrubs identifying info (e.g. IP
> 
> Any proxy would have the same issues around potentially being able to tie a
> submission to an IP address, would it not? Given data anonymization and the
> default setting of enhanced mode being off, I think this might be overkill.

If the proxy can't read the browsing data (because of encryption), then all the proxy knows is the IP address.

Agreed, it might be overkill, but it might not.  It provides some technical assurance of privacy, which IMHO would be valuable, and a valuable precedent, but that's getting OT.


> Does a user really click on the same phish twice?

When's the last time you dealt with end-users?  ;)
Fritz, there seems to be some synergy between what you want to land here and what I want to do for Thunderbird 2 (see Bug 328749). 

It'd be great if we could break this bug into two separate parts.

1) A phishing service implementation based on local and remote URL blacklists, built on a phishing API that would allow other web service providers to supply their own implementations that could replace this one in Firefox and Thunderbird.

2) Firefox application UI for presenting the information returned by the phishing service. 

In other words, separate the service from the application UI so other consumers of the Mozilla code base can leverage the phishing service. I could foresee Camino, SeaMonkey and others also wanting access to the service without the Firefox UI portion.
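(A minimal sketch of what a provider-neutral service interface along these lines might look like; the names here are hypothetical, not an agreed-upon API.)

  // Hypothetical sketch of a provider-neutral phishing service interface that
  // Firefox, Thunderbird, or other consumers could call without depending on
  // any particular UI or data provider.
  var PhishingService = {
    _provider: null,                     // pluggable data provider

    setProvider: function(provider) {    // provider supplies lookup(url, cb)
      this._provider = provider;
    },

    // Asynchronously classify a URL; callback receives "good" or "bad".
    classifyUrl: function(url, callback) {
      if (!this._provider) {
        callback("good");                // no provider configured: fail open
        return;
      }
      this._provider.lookup(url, callback);
    }
  };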

Maybe we should create two bugs to make it easier to discuss each of these independent parts?
Attached patch patch, including new files (obsolete) — Splinter Review
This patch contains build system changes and all of the new files (from Fritz).  This extension will be disabled by default to start with.
Attachment #214209 - Flags: review?(darin)
I'd like to see us look into breaking this down into a safe-browsing firefox UI extension and a phishing service extension before we try to get this reviewed and landed on the trunk.
Assignee: fritz → nobody
Happy mouse clicks -- the commit button is too close to the re-assign option.
Assignee: nobody → fritz
Comment on attachment 214209 [details] [diff] [review]
patch, including new files

>Index: extensions/safe-browsing/Makefile.in
...
>@@ -0,0 +1,81 @@
>+DEPTH      = ../..
>+topsrcdir  = @top_srcdir@
>+srcdir     = @srcdir@
>+VPATH      = @srcdir@

It is commonplace to prepend a license header to a Makefile.


>Index: extensions/safe-browsing/content/close16x16.png
>Index: extensions/safe-browsing/content/dim.png
>Index: extensions/safe-browsing/content/logo.png
>Index: extensions/safe-browsing/content/phishing-afterload-warning-message.css

As I mentioned offline, these should all be moved into a skin package.
That is a requirement for proper skinning support, and it should probably
be an issue that blocks the enablement of this extension in a release.


>Index: extensions/safe-browsing/content/safebrowsing-overlay-bootstrapper.xul
...
>+<!-- This overlay inserts a js file that has the logic of whether or not we
>+     can run in this version of Firefox. If so, the js file dynamically 
>+     loads the real overlay and hooks this browser window up to the 
>+     application already running its own XPCOM context.
>+
>+     This separate bootstrapping step was necessary in the Google Toolbar
>+     version of the extension because the user might have the stand-alone
>+     extension, which shared many of the same names, ids, commands, and 
>+     so forth. It's not strictly necessary here because neither the 
>+     standalone extension nor the Google Toolbar with SafeBrowsing will
>+     claim compatibility with any Firefox later than 1.5.
>+
>+     But we still keep it because it prevents us from cluttering the browser's
>+     XUL with unnecessary crap when we're not running.
>+-->

This isn't really applicable anymore, right?


>Index: extensions/safe-browsing/content/safebrowsing.js

>+function SB_setStatus(msg) {
>+  document.getElementById("statusbar-display").label = msg;
>+}
>+
>+/**
>+ * Clear the status text
>+ */
>+function SB_clearStatus() {
>+  document.getElementById("statusbar-display").label = "";
>+}

Does this play nicely with other code that attempts to modify the
text of the statusbar?


>Index: extensions/safe-browsing/lib/application.js

>+    if (G_GDEBUG) {

Hmm... G_ prefixes.


That's a lot of code ;-)


r=darin for landing this disabled by default.  mscott: your suggested
refactoring makes a lot of sense.  After this patch lands, I think we
want to create a new bug for actually enabling this in FF2.  We should
use that bug as a meta bug, and file bugs that block the meta bug for each
of the issues that block using this in FF2.
Attachment #214209 - Flags: review?(darin) → review+
Status: NEW → RESOLVED
Closed: 18 years ago
Resolution: --- → FIXED
hang on, it's not checked in yet.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
(In reply to comment #20)
> I'd like to see us look into breaking this down into a safe-browsing firefox UI
> extension and a phishing service extension before we try to get this reviewed
> and landed on the trunk.

There should be no requirement that we abstract parts of this extension over Thunderbird as well as Firefox before checkin.

This feature is a Firefox extension, a well-known model for adding optional/early functionality such as anti-phishing.  We explicitly support and recommend Firefox extensions for both user choice and distributed R&D scaling reasons.  Holding it hostage to design for two apps before it is available for even one app is not a good idea.

Induction over two instances of anti-phishing should be based on our experience, including user testing, with one app.  Different apps have different interaction designs, too, and shouldn't couple serially in any schedule that is trying to get needed user-chosen anti-phishing features to market.

The Thunderbird bug 328749 has no technical content.  Is there a wiki page yet with design ideas, requirements, or sketches?

Again, I see no reason to make this work wait for generalization to multiple apps.

/be
And how do we enable this via about:config or when compiling?
Responding to comments in bulk...

> [snip many concerns about privacy]

As I called out explicitly in
http://developer.mozilla.org/en/docs/Safe_Browsing, exactly how this
feature would be enabled, and in what form, is totally up in the
air. Users should be clearly told what they're getting and what the
implications are for their privacy. No argument there.

> [snip many concerns about opt-in]

Again, this question is totally open: my main concern at this point
was getting the code in so people can read it over and play with
it. There are a number of serious questions that need to be answered
-- including how the user would opt for this feature, assuming that it
makes it in -- and I'll file bugs on most of these specific issues so
people can track them.

> [snip many concerns about alternate providers]

Users should definitely be able to swap providers out if they so
desire. The extension as it is doesn't currently support this, but as
long as potential providers are using the same protocols and data
formats, this should not be hard. I'm totally open to refining the
current protocols and formats to make this easier.

> [snip many comments about heuristics]

Sounds like a complementary approach. But also sounds a bit vaporific;
is there a design document or code we can look at?

> You say that at this point Google needs data to improve accuracy.

This is a fact, not some smoke-and-mirrors attempt to violate
everyone's privacy. If we're happy with the current level of coverage
-- and I most certainly am not -- then we don't need more data. But if
we want to make improvements on the server side then we need to. If
someone doesn't want to give us their data, fine. But we really want
to give those people who are ok with it the opportunity to do so.

Your distrust of a third party is just that: yours. It shouldn't
affect the choices of someone else who does trust that third
party. Which is why we've gone to great lengths to offer two modes,
one of which is completely privacy preserving.

> Given likely changing tactics of phishers (partly in
> reaction to this feature), how often will Google's need
> to update its data?

Frequently. Right now clients get updates on the order of once per
hour. But this might not be fast enough, which is why we give users
the option of enhanced protection.

> Does Google need everyone's data?

Probably not, which is why you can disable enhanced protection.

> Is it appropriate to use Firefox users to help
> Google develop a product?

Is it appropriate to look a gift horse in the mouth? I kid! I kid!
Seriously though, Firefox should feel free to not take this
feature. We're offering something we've spent man-months developing,
including paying third parties to license their data, deploying
machines to handle the lookup and update requests, paying for
bandwidth, and building internal workflow tools to help us manage user
submissions. But if it doesn't fit, it doesn't fit.

BTW, if you think about it, I suspect you'll come to the conclusion
that it's Google helping Mozilla build a product, not the other way
around.

> Google should anonymize all data they collect, delete it
> after processing and after a maximum period of retention

This is a good idea, but something I don't have a lot of control over
(the policy level stuff, anyway). We should try to reach some sort of
accommodation on this. I will file a bug to follow up and see what can
be done.

Incidentally, we strip cookies from the report requests and could
probably strip them from some other kinds of requests as well (it
would require changing a few lines).

> [snip mention of encryption]

Encryption is actually implemented, not planned.

> Google doesn't need live updates of each event.
> Data could be sent once/day or more, further
> improving privacy.

Are you suggesting that revealing the temporal characteristics of
these events is somehow a violation of privacy? I don't really see how
this helps...

BTW, I think there might be some confusion with regard to data
collection. There are two kinds of data that are important for us to
collect if we want to improve the service:

(1) explicit user reports of phishing sites (and false positives). As
Mike mentioned, this is a very valuable source of information for
us. We want some way for users to be able to easily submit this kind
of information to us. Right now the extension adds an item to do this
to the Tools menu, but something more appropriate is probably
warranted.

(2) automatic reports of "interesting" phishing-related events. In
enhanced protection mode (and only in enhanced protection mode) the
extension sends pings to the provider when certain things
happen. Right now these reports are generated when the user lands on a
blacklisted page, when the user accepts or declines the warning
dialog, and when the user navigates away from a phishing page. The
information transmitted is what happened, and the URL. We use this
information to understand what's happening to the users of this
feature (how often do people hit these sites? how often do they
actually heed the warning? etc). Note that we strip cookies from these
report requests.
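(A rough sketch of what such a report ping could look like; the endpoint and parameter names below are made up, and the real request format is described in the design documentation linked above.)

  // Hypothetical sketch of an enhanced-mode event report: only the event
  // type and the URL are sent, and cookies are stripped from the request.
  function sendPhishingEventReport(eventType, pageUrl) {
    // eventType: e.g. "blacklist-hit", "warning-accepted", "navigated-away"
    var reportUrl = "https://provider.example.com/report" +   // hypothetical endpoint
                    "?evt=" + encodeURIComponent(eventType) +
                    "&url=" + encodeURIComponent(pageUrl);
    var req = new XMLHttpRequest();
    req.open("GET", reportUrl, true);
    // In the extension the request has its cookies stripped; here we simply
    // note that no cookie header is added to this bare request.
    req.send(null);
  }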

We also are actively working on heuristics-based notifications, but
this is still up in the air, so I won't start that discussion until it
shakes out.

> Will google continue releasing the extension as part
> of Google Labs, or a product offering?

Great question. We're end-of-lifing the stand-alone extension as it is
released on Labs. Instead, we've integrated this feature into the
Google Toolbar for Firefox and it will go out in the next
release. Then one of two things happens. Case one is this feature (or
something like it) makes it into Firefox, in which case we rip it out
of the Toolbar and do all new development in the Moz CVS tree. Case two is
that this feature does not make it into Firefox, in which case we
continue to support it in the Toolbar.

So, to answer your question, we'd very much like active development to
move into the Moz CVS tree. But we won't force it.

> Will server side still connect to google?

Google is currently exposing these interfaces, but if someone else
wants to as well, that's fine with us.

> Is the server side released under the same license?

Anyone who wants to use our server-side interfaces should be able to,
_so_long_ as we can get useful information from them. That is, so long
as we can get user submissions from those using the data, we're all
for it. It's a virtuous circle: the more people who use our data, the
better it gets.

> It'd be great if we could break this bug into
> two separate parts.
> ...
>
> 1) A phishing service...
> ...
> 2) Firefox application UI...

Yeah, this sounds like a good idea. It would not be too hard to
separate the two. I'd be for making a change such as this.

I'm not trying to hold anything hostage for Thunderbird. 

I am trying to make sure we think for a few moments about how we want to architecturally structure this extension before it lands into the source tree so we have a plan for how the mozilla project can benefit from it as a whole. 

I think it is easier to think a little bit before hand about how we want to organize it, and to have a plan for getting there instead of just landing things and trying to sort them out later. Maybe that's just me thinking that way. 

In any case, while typing this reply I see that Fritz thinks it should be pretty easy to separate the front end from the back end portion of this extension which is something I was hoping to hear.
If this extension is more generically useful it should probably not be called "safe-browsing", rather "phishing", "phishing-protection" or some such. 

I understand this is probably all ultimately moot as the UI pieces will eventually merge with their respective apps, but doesn't hurt to get something like this right.
Fritz,

> This is a fact, not some smoke-and-mirrors attempt to violate
> everyone's privacy.

No questioning of your motives was intended!

As I said, this is a fantastic feature.  I truly appreciate your hard work and Google's support of it.

I only meant to discuss some specifications; unfortunately, privacy has been a sensitive issue recently!  The documentation said it was 'up in the air', and I took it to be an open issue.

> Is it appropriate to look a gift horse in the mouth? I kid! I kid!

That's frequently an issue, incidentally, with a product built by so many volunteers, and I can sympathize; I think the question is important and legitimate, and I tried to bring it up in a respectful, non-controversial way.  Apparently, I was not quite successful ... ;)

Cheers,
guanxi
> This isn't really applicable anymore, right?

It's not strictly necessary but is a nice thing to have in. Why overlay if we're not going to run?

> Does this play nicely with other code that attempts to modify the
> text of the statusbar?

Probably not. I can bugify it when I get to that point.
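(Not what the extension currently does; just a sketch of one way SB_setStatus/SB_clearStatus could save and restore the previous statusbar text so other code's messages aren't clobbered.)

  // Hypothetical sketch: remember the previous statusbar label and put it
  // back when we clear our own message.
  var SB_savedStatus = null;

  function SB_setStatus(msg) {
    var bar = document.getElementById("statusbar-display");
    if (SB_savedStatus === null)
      SB_savedStatus = bar.label;        // remember whatever was there before
    bar.label = msg;
  }

  function SB_clearStatus() {
    var bar = document.getElementById("statusbar-display");
    bar.label = SB_savedStatus || "";    // restore the previous message
    SB_savedStatus = null;
  }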

guanxi: was there something specific that you wanted me to comment on that I haven't yet (at least at a high level)? Or were you more generally just asking people to start thinking about this stuff?
(In reply to comment #26)
> > [snip many concerns about alternate providers]
> 
> Users should definitely be able to swap providers out if they so
> desire. The extension as it is doesn't currently support this, but as
> long as potential providers are using the same protocols and data
> formats, this should not be hard. I'm totally open to refining the
> current protocols and formats to make this easier.
> 
 Is there any documentation available on how this works (I'm too busy, and partially lazy to read source) in regards to protocols/formats?
Attached patch patch snapshot — Splinter Review
This patch has the makefile license added and the skin files separated out.  It also has a lot of debugging output, because we're trying to track down why the controller is not invoked for the menu commands before landing this.  We think something changed within the last 1-2 weeks that broke this.
Attachment #214209 - Attachment is obsolete: true
(In reply to comment #31)
>  Is there any documentation available on how this works (I'm too busy, and
> partially lazy to read source) in regards to protocols/formats?

Fritz pointed out that there's documentation on MDC (now linked to in the URL field of this bug). I'd imagine that will be moved to wiki.mozilla.org (or at least, think it should be) until the feature has landed and finalized in a Gecko branch, though.
Depends on: 329587
(In reply to comment #27)
> I'm not trying to hold anything hostage for Thunderbird. 

Holding up checkin of a Firefox extension was what I meant -- sorry for the "Die Hard" imagery!

> I am trying to make sure we think for a few moments about how we want to
> architecturally structure this extension before it lands into the source tree
> so we have a plan for how the mozilla project can benefit from it as a whole.

We're not doing the suite any longer.  One of the advantages of this change is that we don't have to figure out how N > 1 apps would use an extension designed and developed for 1 app, before any of that extension lands.

Sounds like the right things are happening in the right order, so I will stop fussing and typing.

/be
checked in.
Status: REOPENED → RESOLVED
Closed: 18 years ago
Resolution: --- → FIXED
You can't just move docs without first asking the author or giving their new location or a redirect. New locations are:

http://wiki.mozilla.org/Safe_Browsing
http://wiki.mozilla.org/Safe_Browsing:_Design_Documentation
what should the bug component be called, and where should it live?
SafeBrowsing? 

I'd have it live under Firefox I guess.
I did update the link in the bug's URL field, sorry if that wasn't sufficiently clear.  I wanted to move it off developer.mozilla.org before a lot of people started linking there, to avoid needing a redirect or eating the namespace there indefinitely.

(People can and do move and refactor docs on our wikis all the time, though; we have some templates for "please don't change this" in use in various places, and we can slap one on there if you don't want others mucking with it.)

I chose an unadorned "Safe_Browsing" by parallel to Places and because we don't yet know if it's going to be part of Firefox.  We can rename it if that's what people want, I just wanted to get it off MDC statim.
I added links to the new pages from the old pages. 

Delete if you think they're not appropriate, but given the fact that hundreds of people are already pointing there, they're probably helpful.

Sorry for landing in the wrong place initially -- I had the two confused.
Should we just focus on phishing, like Google SafeBrowsing does, or also use SiteAdvisor, which looks for infected downloads and also checks whether registering at a site leads to spam?
There are major privacy/security risks for user data involved in Google's SafeBrowsing:
1) Every request is transmitted to Google over HTTP (which may be used to generate a browsing record for the user and subsequently for targeted ads);
2) The extension sends the entire GET request to Google. If a web application were to send private information via GET parameters, this will now be transmitted to Google.
For details see: http://www.oreillynet.com/pub/wlg/8760
(In reply to comment #43)
> There are major privacy/security risks for user data involved in Google's
> SafeBrowsing:
> 1) Every request is transmitted to Google over HTTP (may be used to generate
> brwosing record for the user and subsequently for targeted ads);
> 2) The extension sends the entire GET request to Google. If a web application
> were to send private information via GET parameters, this will now be
> transmitted to Google.
> For details see: http://www.oreillynet.com/pub/wlg/8760
> 
1.  You can download the blacklist to prevent each request from being sent.
2.  There is encryption available.

btw, we have the code... as well as decent documentation, so it's pretty safe to say that anyone who needs to know has a very good idea of how things work.

Please don't scare monger... it's not productive.
What Robert said :) And, additionally, we're actively trying to work something out from the data retention side of things where we make best effort to throw away lookup queries in a timely fashion (say, after a few weeks). 

This is harder than it sounds for a number of technical, legal, and policy reasons. But we're trying.
Perhaps 'Privatize phishing detection' should be a new bug; this one is fixed: It's on the trunk for evaluation.

(In reply to comment #44)
> > [snip]
> > 2) The extension sends the entire GET request to Google. If a web
> > application
> > were to send private information via GET parameters, this will now be
> > transmitted to Google.
> > For details see: http://www.oreillynet.com/pub/wlg/8760
> > 

> [snip]
> 2.  There is encryption available.

That only protects it during transmission, no?  Whoever runs the server (it could be anyone; it's got nothing to do with a particular company) will decrypt it and have the private data.  Could we strip the transmitted data down to the host name (before transmission)?


> btw, we have the code... as well as decent documentation, so it's pretty safe
> to say, anyone who needs to know, has a very good idea how things work.

That's an advantage, but users need to understand it and no user can review documentation or code for every app they use, and average users can't review it at all.

 
> Please don't scare monger... it's not productive.

Look - Google is, right now, the media's/blogger's whipping boy (my sympathies, Fritz) and privacy is the hot topic.  So privacy in relation to a feature donated by Google will be a loaded issue.  But we can ignore that: The code is the code, and will follow its logic, and deliver privacy to users or not, regardless of what's on Slashdot tomorrow.  Let's focus on the code.

With that in mind, try re-reading the post: You might agree (or might not) that it's mostly just informative.  The GET request info was new, useful, and well documented (by the link).

If you still think the post whips up fear, how could the poster better present the info?  A suggestion would be productive; I know I tried to discuss the topic without provoking anyone ... and failed miserably.

But defensive responses are unproductive.  It's a hot-button topic, so it would pay to double-check reactions, stick to the technical points, and focus on the phishers and the Firefox users, not on companies donating code and Bugzilla posters.

cheers,
guanxi
> Could we strip the transmitted data down to the
> host name (before transmission)?

No. If you do widespread blacklisting of hosts instead of URLs you violate our first design goal (you have read the design doc, right?). You might suggest sending hashes of URLs instead but that doesn't work if you want to match against regular expressions (which you most certainly do). 

But basically you're missing the point: if you don't trust the provider, don't use enhanced protection.
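(To illustrate the regular-expression point: one blacklist pattern can cover a whole family of phishing URLs, which exact-match hashing cannot. The pattern below is made up.)

  // Hypothetical illustration: one blacklist regular expression can cover a
  // whole family of phishing URLs, which exact-match hashing cannot do.
  var blacklistPatterns = [
    /^http:\/\/[^\/]*\.example\.net\/.*login.*$/i   // made-up pattern
  ];

  function matchesBlacklist(url) {
    return blacklistPatterns.some(function(re) { return re.test(url); });
  }

  // matchesBlacklist("http://phish123.example.net/secure-login") -> true
  // A hash of that URL would match nothing unless the exact same string
  // had been hashed and listed beforehand.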

> With that in mind, try re-reading the post: You might agree (or might not) that
> it's mostly just informative.  The GET request info was new, useful, and well
> documented (by the link).

Actually not new (months old) and not useful (the flaw doesn't exist). But certainly well documented. 
Depends on: 339027
*** Bug 309291 has been marked as a duplicate of this bug. ***
No longer depends on: 1185933