Closed Bug 62178 Opened 24 years ago Closed 12 years ago

implement mechanism to prevent sending insecure requests from a secure context

Categories

(Core Graveyard :: Security: UI, defect, P1)

defect

Tracking

(Not tracked)

RESOLVED FIXED
mozilla18

People

(Reporter: neil, Assigned: bsterne)

References

(Blocks 2 open bugs)

Details

(6 keywords, Whiteboard: [kerh-coa][sg:want][psm-arch][parity-chrome][parity-ie])

Attachments

(3 files, 27 obsolete files)

59.13 KB, image/png — Details
16.00 KB, patch — tanvi: review+ — Details | Diff | Splinter Review
16.53 KB, patch — bsterne: review+ — Details | Diff | Splinter Review
build 20001205 on linux.

actual behavior:  the alert telling you that "you have requested an (in)secure
document..." has no "cancel" option.

expected behavior: it seems like the dialog should have an option to *not*
continue with the request ("cancel", "no way, jose", etc)

seems to occur on all GET/POST, from-ssl/to-ssl requests.

to wit:
filling out a form on an ssl page that submits to a non-ssl page leaves
you no apparent way to not submit, except to kill the browser.

see https://patty.rackle.com/ssl_to_non-ssl_form.html (low reliability host)
Looks like the "entering secure page" and "exiting secure page" have no cancel
options, but the "submitting insecure form" dialog does have a cancel button,
which works just fine. I tried this on Mac and NT. I'm not sure the entering and
leaving warnings need cancel buttons...but maybe it would be nice. David, what
do you think? I don't think this is a problem, but we could call it an
enhancement request.
Assignee: mstoltz → ddrinan
Status: UNCONFIRMED → NEW
Component: Security: General → Security: Crypto
Ever confirmed: true
QA Contact: ckritzer → junruh
Whiteboard: (probably invalid)
You should not get the "You are submitting insecure information" dialog box 
since you are at a secure site. See bug 59499. The warning you get next should 
be there, that you are requesting an insecure document. Nav 4.7X gives you the 
chance to cancel here, but Netscape 6 does not. Removing (probably invalid) from 
the status whiteboard.
OS: Linux → All
Hardware: PC → All
Whiteboard: (probably invalid)
ok, this looks like two separate issues:

1) (some other problem) I had set psm to "to show a warning and ask for
permission before: Sending unencrypted information to a site", but the pref
never got set.  reopening the security pref dialog, that pref remained
unchecked.  I assume this has something to do with having psm installed by root
and then chmod'ing around to get it to work for regular users, but I'll take a
look and see if there is a bug on this.

2) (this bug, probably now only a UI enhancement) there is a setting to warn you
when you navigate to or from SSL pages, but it doesn't have a "cancel" option.
maybe I'm missing something, but what's the point of a dialog if you have only
one option?  isn't an indication-only thing what the status bar is for?

It seems like 2) is certainly not a Security: Crypto bug, but perhaps a UI one?
*** Bug 64545 has been marked as a duplicate of this bug. ***
adding keywords from bug 64545
Assignee: ddrinan → javi
adding some people to cc for ui feedback
*** Bug 67979 has been marked as a duplicate of this bug. ***
Keywords: mozilla0.9
Keywords: mozilla0.8.1
Keywords: mozilla0.8
Mass changing of product. Browser:Security:Crypto --> PSM 2.0
Component: Security: Crypto → Client Library
Product: Browser → PSM
Version: other → 2.0
Milestone 2.0
Target Milestone: --- → 2.0
By the time the browser sends the "exiting secure page" dialog box, it is too 
late to stop the page load.  The URL has already been sent to the server.

Isn't that somewhat problematic?  The whole point of that alert is so that data
will not be sent insecurely somewhere.  If we send the data regardless of the
alert, what's the point??

See also bug 72509, which is related to this one based on John's observation.
Mass reassigning target to 2.1
Target Milestone: 2.0 → 2.1
Keywords: nsenterprise
P1
I want to get to the bottom of this.
If it's not a problem, then we'll change it back.
 
Priority: P3 → P1
I created a testcase at:
  https://www.kuix.de/misc/test5/
Make sure to access it with https.

If I press submit, a first warning appears. It gives the correct warning and
says: Information will be posted insecure.

This warning message has both OK and Cancel buttons. I used a network sniffer:
no data at all has been transferred by the time this dialog gets displayed, and if
Cancel is pressed, no data will be transferred either.

This warning appears in any case, even if I switch off all the warnings in SSL
preferences.
This means, the user has a chance to cancel.

However, the story continues:

As soon as I press ok, the network connection is started and data is sent over. 

Before the new page gets displayed, another warning message comes up. It says:
You have requested an insecure document. And indeed, this warning only contains
an OK button.

======

Discussion: The user has a chance to cancel. A second dialog, informational
only, is displayed, which does not and can not provide a cancel button.

This informational dialog is a very general thing. It contains a checkbox saying
"show this message next time", which means, a user can decide to know what it
means to switch between http and https sites, and turn this warning message off.
Once it's turned off, it will never be shown again. Applied to the above case:
the user will still be able to cancel, but will see only one warning box.

There are two general dialogs of that kind. Warn when leaving a secure site, the
above message, and "Warn when entering a secure site".

The latter has a matching preference in "Preferences / Privacy / SSL". However,
the former has none, even though the option gets saved to disk
(security.warn_leaving_secure). For consistency, both options should be
adjustable in preferences - they are in Communicator.

======

My opinion:

To make it clearer and to avoid confusion, I suggest being more specific in the
informational message that is displayed. It should state clearly that this is
information only, and not an alert (as it currently says in the title). It
should state that the intention is to inform the user about a mode change, and
that the user might decide to switch these mode change messages off, or continue
to receive them.

For the secure-to-insecure-transition, the current message is: "Security Warning
/ You have requested an insecure document. The document and any information you
send back could be observed by a third party while in transit.  [x] Show Me This
Alert Next Time."
I suggest a message similar to: "Connection Mode Reminder / Please note: The
connection is now public. Privacy mode is no longer active. Any information you
see and send back could be observed by a third party while in transit.   [x]
Show Me This Reminder Next Time."

To be consistent, I suggest changing the message for the
insecure-to-secure transition, too. The current message is: "Security Warning /
You have requested a secure document. The document and any information you send
back are encrypted for privacy while in transit."
I suggest a new message similar to: "Connection Mode Reminder / Please note: The
connection to the remote site switched to a privacy mode. While being transferred
between you and the remote site, any information you see and send back will be
protected using encryption technology.  [x] Show Me This Reminder Next Time."

Other things to do:
- add the mentioned checkbox to preferences.
- fix a problem with the existing preference setting: after turning it off in the
dialog, turning it back on in the preferences has no effect; the message
will not show up. The option needs to be changed in the configuration file to
appear again.
Kai: I think the "informational alert" should not be merely informational.  One
can be going from a secure to an insecure page via a link, not just via a form
submission.  And I feel that these secure-to-insecure transitions should be
cancel-able if the user decides to do so after seeing that the link is insecure.

For example, the secure page could construct a GET url that includes data gotten
from form elements in the DOM and then set window.location to that url.  This
would be equivalent to submitting the form with those elements, yet the first
alert would not come up.  The data (possibly including passwords) would be sent
in cleartext over the network.
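The GET-URL leak described above can be sketched in a few lines of script (all names here are invented for illustration; `example.com/collect` is not a real endpoint):

```javascript
// Illustration only: a script on an https page can copy form values into a
// GET query string and navigate to an http URL, sending the data in
// cleartext without triggering the form-submission warning.
function buildInsecureUrl(fields) {
  const query = Object.entries(fields)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");
  return "http://example.com/collect?" + query; // note: http, not https
}

// In a browser the page would then do something like:
//   window.location = buildInsecureUrl({ password: passwordField.value });
// which is equivalent to submitting a GET form, but bypasses the first alert.
```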
P3
Priority: P1 → P3
*** Bug 92689 has been marked as a duplicate of this bug. ***
*** Bug 92715 has been marked as a duplicate of this bug. ***
Upping to critical. This bug presents a risk to the security of credit card
numbers and other sensitive information. Without being fixed the browser is an
insecure product and this will seriously affect its marketability.

Although this only occurs with links and not post-data, links can just as easily
introduce security risks. For example, a processed page might link to
"http://site.com/?ccnum=xxxxxxxxxx&name=xxxxxxx&exp=xxxx"...
Severity: normal → critical
Mass assigning QA to ckritzer.
QA Contact: junruh → ckritzer
Another note... When going from full security to partial security (secure page
with some insecure content) there should be the option to cancel the request
entirely or load only the secure content (i.e. if there's an insecure frame or
image, don't load it). Once again, obviously, this prompt should occur *before*
any transmissions are made through insecure connections.
Move to future. Won't have time to fix these for 2.1
Target Milestone: 2.1 → Future
ssaux: This is a very serious security problem. Are you sure that's a good idea?
Depends on: 99252
QA > junruh. ssaux, please set milestone to 2.2.
QA Contact: ckritzer → junruh
Version: 2.0 → 2.1
Changing my e-mail address.
Removing old e-mail address.
marking nsenterprise-; will be reevaluated for nsenterprise in future release.

Keywords: nsenterprise-
Blocks: 104166
*** Bug 105024 has been marked as a duplicate of this bug. ***
Adding nsbeta1 keyword.
Keywords: nsbeta1
Removing old nsenterprise, adding dataloss. (Potentially, the lost data could be
a credit card or social security number!)
Keywords: nsenterprisedataloss
*** Bug 116084 has been marked as a duplicate of this bug. ***
Why is this futured? This is a critical-severity nsbeta1 bug that creates a
potentially hazardous situation with sensitive data that could be revealed over
an insecure connection.

Javi: Please reconsider your priorities and move this bug up to some reasonable
point in time where it can be fixed before it creates a problem. Or reassign the
bug to someone else.
nsbeta1+
Keywords: nsbeta1nsbeta1+
This is only a security risk if the site from which you come (SSL) is
maliciously including an http link with the data in it.

nsbeta1-
Keywords: nsbeta1+nsbeta1-
Never attribute to malice what stupidity will account for.  I see this problem
all the time (eg, half the time when logging into yahoo mail it does a redirect
to an insecure URL with the user/password info and there is no way to stop it).
Keywords: privacy
*** Bug 134711 has been marked as a duplicate of this bug. ***
I think this needs to be reconsidered for nsbeta1. We can't be ignoring serious
security issues like this and spending our time making pretty icons instead.
Renominated.
Keywords: nsbeta1-nsbeta1
*** Bug 99252 has been marked as a duplicate of this bug. ***
*** Bug 133455 has been marked as a duplicate of this bug. ***
Blocks: 100787
Currently, the security state tracking depends on web progress events. But those
events are sent out after the transfer has begun, and are therefore not appropriate
for implementing cancellation before any data has been transferred.

See the duplicates of this bug for more examples why it is a good idea to fix
this bug.

Note about the correct way to fix this bug:

We must find the correct place to hook the security code into the document
loading progress. Basically, the document loader needs to ask the security module
for permission before loading a document.

If, in the future, the security code is asked for permission and detects a change
in state because of a protocol change, it must ask the user (depending on
prompt configuration) and return the user's decision to the document loader.

Implementing it that way should resolve all bugs that complain about the wrong
order of security warnings.
*** Bug 138863 has been marked as a duplicate of this bug. ***
Blocks: 83134
I consider this to be a very real bug, and have a couple suggestions based on
some presuppositions that may not be true:

Is Mozilla currently only allowing secure connections when using the "https"
protocol in the URI?  If so, the warning could be given _before_ the events in
question as a result of clicking on a link that is not "https" (or vice-versa).

If this protocol assumption is true (that the browser knows at render-time that
a link will or will not _try_ to enable encryption), then mouse-over
highlighting of secure and insecure links also becomes a plausible option.
Yes, this is how it usually works.

If you are on an https page, and a link, redirect, or image points to a http
URL, the browser does not make the request until after prompting the user.
Mozilla currently makes the insecure request no matter what, compromising
security almost completely. The only time Mozilla offers the user the option to
halt the action in advance is with a form.
Furthermore, I think it only confirms if it is a POST form; a GET form offers
either no warning or an OK-only dialog notifying the user after the insecure data
has already been sent.
> Furthermore, I think it only confirms if it is a POST form

Bzzzt.  Wrong.  
Considering Skewer's reply, does anyone with their fingers in the code have any
ideas for patching Mozilla to allow for prompting _before_ a fetch is sent on
the basis that the protocol changes from secure<->insecure based on preferences?
It is possible to go from an https page to another https page and have the
second be insecure.  Unfortunately, there is no way to detect this situation
until the second page has been fetched from the server....
Is this an SSL bug, or is it true for TLS et al. as well?

Would it be possible to get around this by knowing the user's preferences before
entering encryption negotiation with the server? (That would be a separate bug
report / feature request, in my mind.)
darin: can you help us find a place for this silly event to live?



it seems to me that as soon as something decides it wants to load a non-https connection, it should be easy to run a security check and decide whether to throw up this silly dialog.
-> me
Assignee: javi → kaie
Trying to plan an implementation, some thoughts:

Suppose a page where the document contains 20 images: 10 are referenced with
https, 10 with http. Suppose our loading happens in an order where https and
http alternate. We should not show the user 20 security change warnings in such
a situation.

As a result, we should restrict the warning to the state of the top level
document. If the URI of the top level to be loaded changes from a secure to a
non-secure protocol (or vice versa), we should show the user the requested
warning, with the option to cancel.

While we continue the loading of the page's parts, we need to be aware of the
security state reported to the user.

I finally understand why Communicator had a warning "this page has insecure
content, do you want to view it?"

This warning helps to get rid of the potential 20 warnings explained above.

This warning would be shown if the toplevel document is secure, at the time the
first non-secure sub content is detected (such as an image having a non-secure URL).

Note that "show content" is not the only effect. Because the loading process
will access non-secure protocols, there is also the risk that user data will get
transferred over the non-secure protocol. Therefore, the warning message should
be stronger.

If the user decides to disallow accessing the insecure content, the document
loading process needs to remember that decision, as long as the current top
level document continues to be shown. Any non-secure content will automatically
get rejected without additional warnings shown to the user. The non-secure
content will not appear on the page.

If the user decides to allow accessing insecure content, the consequence is
similar: no additional warnings will be shown, and all requests to insecure
resources will be allowed, until the user finally navigates to a different top
level document.

Bug 59637 requests to implement such a warning, but we probably need to
incorporate that into the patch for this bug.
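The per-document decision described above — prompt at most once when the first insecure subresource of a secure top-level document is seen, then remember the answer until the next top-level navigation — can be sketched like this (class and method names are invented; this is not a Gecko API):

```javascript
// Sketch: remember the user's mixed-content decision per top-level document,
// so 20 insecure images produce one prompt instead of 20.
class MixedContentGate {
  constructor(promptUser) {
    this.promptUser = promptUser; // () => boolean: true = allow insecure loads
    this.decision = null;         // null until the first insecure subload
  }
  // Called when a new top-level document loads; forget the old decision.
  onTopLevelLoad() { this.decision = null; }
  // Called for each insecure subresource; returns true if it may load.
  allowInsecureSubload() {
    if (this.decision === null) this.decision = this.promptUser();
    return this.decision;
  }
}
```

With a prompt stub that denies, 20 subresource checks trigger exactly one prompt and every subload is rejected, matching the behavior described in the comment.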
Blocks: 59637
I suggest the following approach to catch the events.

When nsIURILoader::openURI gets called, the passed WindowContext is QI'd to
nsIURIContentListener. If that's successful, a call to
nsIURIContentListener::onStartURIOpen is made. That is the place where the
cancelable prompt could be shown. If this method returns FALSE, the load will
not start.

It is nsDocShell that seems to initiate the call to openURI. It passes a
pointer to itself as the window context, and on QI to nsIURIContentListener, it
gives out its own mContentListener, which is an instance of nsDSURIContentListener.

We should make nsSecureBrowserUIImpl implement nsIURIContentListener.

We should extend the implementation of nsDSURIContentListener::OnStartURIOpen to
also call nsSecureBrowserUIImpl::OnStartURIOpen. This could be done by looking
at the securityUI property of associated window and QIing it to
nsIURIContentListener.

I do not have much inside knowledge of the document/URI loader process; I reached
this conclusion by browsing the sources a bit. I hope this will notify the
security module about any URI load requests. If you know that more needs to be
done, please let me know!
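The veto hook described above can be sketched as follows (a deliberately simplified, invented shape; the real nsIURIContentListener::onStartURIOpen is an XPCOM method, not this JavaScript signature):

```javascript
// Sketch: before starting a load, the loader consults the window's content
// listener. A false return from onStartURIOpen cancels the load before any
// network traffic happens — which is the whole point of this bug.
function openURI(uri, listener, startLoad) {
  if (listener.onStartURIOpen && !listener.onStartURIOpen(uri)) {
    return false; // vetoed: the load never starts
  }
  startLoad(uri);
  return true;
}
```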
Implementing this (reasonable) enhancement request has another consequence, a
consequence for the logic that decides whether a secure or an insecure lock icon
is shown.

Currently, the "lock icon tracking code" uses a rather complicated approach. It
processes web progress notifications, and extracts the SSL security state.

That approach makes sense in order to obtain the information to be shown in the
lock icon tooltip (signed by ...) and the information shown in the page info window.

However, I don't understand why in the past the decision had been made that the
same approach is used to drive the decision, whether a secure or an insecure
lock icon is shown.

No other browser I know of seems to be doing that. Other browsers simply look at
the protocols involved. They look whether "http" or "https" is used to fetch
parts of the shown document.

As a result, other web browsers are very quick in informing the user that one is
navigating from http to https or vice versa. But we are slow, because we have to
wait until at least the first part of the message has been transfered.


My point is:
- If we want to throw up security warnings early in the process, we must rely on
the protocol being used
- if we have shown a warning that the security mode changes, and the user
allows, it would be bad if there were a delay until the lock icon changes
- as a result, the decision, which lock icon is shown, should also rely only on
the protocol being used, not on the SSL state of the content
- however, we still need to extract SSL state in progress notifications to
extract the information for page info and lock icon tooltips


As a result, the current logic to track the lock icon state would change
radically. (Unfortunately, we have just put a lot of work to ensure the lock
icon correctness, see the dependent bugs of bug 130949.)


So, before I proceed, I would like to hear more opinions on what I have said in
this and the previous comments.
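The protocol-only rule proposed in this comment can be sketched as a pure function of the schemes involved (function name invented for illustration):

```javascript
// Sketch: derive the lock state purely from the schemes of the top-level
// document and its subresources, not from SSL state extracted after transfer.
function lockState(topLevelUrl, subresourceUrls) {
  const isSecure = (u) => new URL(u).protocol === "https:";
  if (!isSecure(topLevelUrl)) return "insecure";
  return subresourceUrls.every(isSecure) ? "secure" : "mixed";
}
```

Because this needs only the URLs, the decision is available before any bytes are transferred, which is what makes early cancelable warnings possible.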
I wanted to add my personal opinion:

Although I think it is quite some work, I think we should make the change.

Two more arguments why the change seems reasonable:
- our lock icon behaviour is incompatible with Communicator with regard to
content changes done by JavaScript. Because we rely on knowing where the content
came from, we are basically unable to track this currently. To be on the safe
side, we decided a "false negative" lock icon report is less critical than a
"false positive" lock icon report. As a result, we change the lock icon to
insecure mode whenever JavaScript code modifies the shown content.
- it is possible to get around form submit warnings with JavaScript code
(a different issue). I was told that all JavaScript network activity goes through
URL-based protocols. If we had a channel from URL loading directly to security
tracking, we would be able to also track requests originating from JavaScript to
have an influence on the shown lock icon.


PS: I suspect even Communicator used the protocol-lock-icon-decision approach,
rather than our current SSL-state-extraction-lock-icon decision approach.
Doesn't your approach require doing both (i.e., the protocol is good to start,
but you still need to monitor the document being loaded to see if it's secure or
not).

How much work would this be?
> Doesn't your approach require doing both (i.e., the protocol is good to start,
> but you still need to monitor the document being loaded to see if it's secure or
> not).

Yes. The protocol would be used to decide about the security level, security
switches and lock icon changes. Whether a lock icon is shown or not would only
be influenced by protocol, never by content. The mixed content decision would
also be derived from protocol only.

In addition, SSL status would still have to be extracted, to check the crypto
strength level and server certificate trust level. That information would continue
to drive warnings about low encryption strength and untrusted servers, and the
information shown in tooltips and page info.


I just realize there is one more problem.

One of our warnings is the "connection to a site using weak crypto", i.e. small
secret key sizes. That information is not available by looking only at the
protocol (https).

With the above approach and hook location, we would still be unable to allow the
user to cancel a connection to such a site.

As a consequence we'd need one more additional hook. We need to find a way to
post a notification as soon as a SSL handshake is over, but before the first
application bytes (http request) are being transfered. (That hook can only be an
addon to what has been said previously, because it won't be reached when going
to a nonsecure site.)
It seems to me you must do more than check the protocol. All SSL sockets can
return a number of SSL and SEC errors, and these must absolutely be detected.
Blocks: 154616
Keywords: 4xp
*** Bug 187730 has been marked as a duplicate of this bug. ***
QA Contact: junruh → bmartin
I'm attaching a patch that fixes the problem for me.
I'll need to do some more testing, but I have already tested the various issues
tracked in bug 130949, and I did not find any regressions.

This patch fixes at least this bug and also bug 191212.

I have not yet tested, but I'm optimistic it will also fix Windows bug 119969.
cc'ing Jag, because I'm changing browser.xml
State of the patch:

It is now possible to cancel
- going from insecure to secure
- going from secure to insecure

It is now possible to suppress loading insecure objects that are shown in a
secure page. The warning about "mixed content" will be shown. Confirming allows
the insecure objects to load, and the lock icon will go to the mixed state.
Using cancel will not load any sub objects. This already works for <frame>s and
<iframe>s. Unfortunately, it does not yet work for images, see bug 135007.

In the future I think we should change the text for the mixed content warning.
However, since we are not yet done with the image problem, and because I want to
land this patch on the 1.4 branch, which does not allow locale changes
currently, I want to delay wording changes.

A remaining piece of work is the transition to weak crypto documents as
explained in comment 56. The weak transition is still a warning only, without
being able to cancel. However, IMHO, this is not a real problem. If one does not
want to allow weak crypto, one should use edit/prefs/privacy/ssl/edit ciphers
and disable the weak crypto ciphers.
Attached patch Patch v8 - for 1_4 branch (obsolete) — — Splinter Review
This patch applies to the 1.4 branch.
Attachment #127883 - Flags: superreview?(jaggernaut)
This patch is also usable on the trunk; there is just a small context change at
the end of nsDocShell.cpp that causes the apply failure.
Comment on attachment 127883 [details] [diff] [review]
Patch v8 - for 1_4 branch

I will try to produce a patch that uses nsIContentPolicy instead of  changing
nsDocLoader.
Attachment #127883 - Attachment is obsolete: true
Attachment #127883 - Flags: superreview?(jaggernaut)
A note about Mozilla 1.5 (Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US;
rv:1.5) Gecko/20031007); if it is already fixed, ignore my message.

If you think this should be a separate bug, say so.

In the preferences under "Privacy & Security" / SSL it says:
Set Mozilla to show a warning and ask permission before:
 - Loading a page that supports encryption
 - other options ...

The description text and actual behavior do not match, as no permission is
asked, the page gets loaded regardless of user input.
(Go to an insecure page, then to a secure page. A dialog will be shown, but the
secure page will be loaded regardless of what the user does. He does not have much
choice anyway: OK, or close the dialog with [X].)
David, you are absolutely right, what you describe is part of this bug.
Sigh. Half a year since my last comment and I still haven't been able to work on
it more :-(
*** Bug 240630 has been marked as a duplicate of this bug. ***
*** Bug 244880 has been marked as a duplicate of this bug. ***
(In reply to comment #64)
> I will try to produce a patch that uses nsIContentPolicy instead of  changing
> nsDocLoader.

You can't show prompts from nsIContentPolicy. the prompt to ask before loading
images used to do that, and it caused crashes. see bug 208867.
Product: PSM → Core
Blocks: 288693
*** Bug 295374 has been marked as a duplicate of this bug. ***
*** Bug 305589 has been marked as a duplicate of this bug. ***
Blocks: 304848
Attached patch Patch v14 (obsolete) — — Splinter Review
Suggested patch. Some notes:

- Two years ago it was suggested to use the nsIContentPolicy mechanism for the "load/do not load" decision. However, we need to prompt, and prompts do not work from there. So, this patch changes nsDocShell::DoURILoad.

- Currently (without the patch), the SSL/insecure transition warnings are shown when the user uses back/forward to change between pages. With the patch, these are no longer shown when using back/forward. This is equivalent to the behaviour of Netscape Communicator 4.x (which does not have bug 62178).
Attachment #201312 - Flags: review?(bzbarsky)
> However, we need to prompt, and prompts do not work from there. So, this patch
> changes nsDocShell::DoURILoad.

Are you sure DoURILoad doesn't have the same issues?  That is, is reentering DoURILoad safe?

Also, will this flag jar:https:// URIs as insecure?  It looks to me like it would...
Also, if I am on an https:// page and I click an http:// link to a file that we don't handle internally (so it's passed off to a helper app), should that change the security state of the page?  It seems to me like this patch would, no?
Comment on attachment 201312 [details] [diff] [review]
Patch v14

r- pending answers to my questions and because this doesn't do the right thing if DoURILoad is called while that prompt is up.
Attachment #201312 - Flags: review?(bzbarsky) → review-
Boris, you seem to be right with most statements. Thanks for your suggestions and test scenarios; I will work on a better patch.

Regarding "is DoURILoad reentrant?". Does it need to be? I traced what happens when we bring up the prompt, during the call to DoURILoad. While that triggers another call to DoURILoad, that happens in a separate "this", a separate docshell instance. So we should be safe?

Or can you think of a situation where DoURILoad is reentered with the same docshell instance?
If someone sets location on a window while its docshell has the prompt up, we will reenter DoURILoad for the new load.  Since the previous load hasn't actually started yet, we will not cancel it, and just start the new load.  Then when the user accepts the prompt we will start the old load -- at this point we will not only have two loads in flight for the same docshell, but they will be racing to see which completes first.  That's not really desirable.
> Will this flag jar:https:// URIs as insecure?  It looks to me like it would...

You are right. I wasn't even aware of that kind of URIs !

I created a test file here:
  jar:https://kuix.de/misc/test46/test.jar!/file.txt

It seems to me, there is no code in place in Mozilla to deal with "nested URIs" like this in general. Every module seems to deal with it on its own.

In the forthcoming patch, note how I am replacing the call to SchemeIs("https")
Is that ok?

The new patch (v15) seems to work.
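The nested-URI problem can be illustrated with a small sketch (string-level illustration only; this is not the actual Gecko nsINestedURI API, and a real fix would use the URI objects, not string parsing):

```javascript
// Sketch: recover the innermost scheme of a nested URI such as
// "jar:https://host/file.jar!/entry", so a security check based on
// SchemeIs("https") does not misclassify it as insecure.
function innermostScheme(uri) {
  // Peel wrapper schemes of the form "jar:inner!/path".
  while (/^jar:/i.test(uri)) {
    uri = uri.slice(4).split("!/")[0];
  }
  return uri.split(":")[0].toLowerCase();
}
```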
> If I am on an https:// page and I click an http:// link to a file that we
> don't handle internally (so it's passed off to a helper app), should that
> change the security state of the page?  It seems to me like this patch would,
> no?

You are right, here are two testcases.

https://kuix.de/misc/test47/index.php
http://kuix.de/misc/test47/index2.php

The new patch (v15) seems to deal with it well.

The idea is: at the time the user confirms, we set a "potential new state", but delay applying that state until we are sure that new content is indeed being loaded into our document.

I added two events that can trigger making the potential state the current state:
- OnStateChange: at the time we notice that data is being transferred for the toplevel document
- OnLocationChange
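The delayed-commit idea above can be sketched like this (names invented; in the real patch the commit would be driven by the OnStateChange/OnLocationChange events just listed):

```javascript
// Sketch: the state chosen at confirmation time only becomes the current
// state once we are sure new content is really being loaded.
class SecurityState {
  constructor() { this.current = "insecure"; this.potential = null; }
  // User confirmed the transition prompt: remember, but do not apply yet.
  onUserConfirmed(newState) { this.potential = newState; }
  // Data transfer or location change observed: apply the pending state.
  commit() {
    if (this.potential !== null) {
      this.current = this.potential;
      this.potential = null;
    }
  }
}
```

This keeps the shown state correct in the helper-app case: if the click hands the download to a helper application and the page never changes, commit() is never reached and the lock state stays as it was.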
Status: NEW → ASSIGNED
>> However, we need to prompt, and prompts do not work from there. So, this patch
>> changes nsDocShell::DoURILoad.
>
>Are you sure DoURILoad doesn't have the same issues?  That is, is reentering
>DoURILoad safe?
>
>If someone sets location on a window while its docshell has the prompt up, we
>will reenter DoURILoad for the new load.  Since the previous load hasn't
>actually started yet, we will not cancel it, and just start the new load.  Then
>when the user accepts the prompt we will start the old load -- at this point we
>will not only have two loads in flight for the same docshell, but they will be
>racing to see which completes first.  That's not really desirable.


Here is a test case:
(make sure you have secure/insecure transition warnings enabled)
1) go to https://kuix.de/
   (as a preparation, to make sure you have it in the URL bar dropdown list)
2) go to http://kuix.de/misc/test48/
3) as soon as 2) has loaded, quickly select https://kuix.de/ from the dropdown list
You will see a prompt now, do not confirm the prompt, but continue to wait.
(You can move the prompt window a bit, so you'll notice the second one more easily)
A few seconds later, the page that is still shown (from step 2) will try to load a secure page too, and a second prompt will come up.

With patch v15, while the confirmation prompter is shown, any further document load attempts in the current docshell will be aborted.

That is, with the above testcase, you will not get the second prompt.

Boris, what do you think about this approach? If you don't like it, do you have an alternative suggestion?
Attached patch Patch v15 (obsolete) — — Splinter Review
Attachment #201312 - Attachment is obsolete: true
Attachment #202141 - Flags: review?(bzbarsky)
> With patch v15, while the confirmation prompter is shown, any further document
> load attempts in the current docshell will be aborted.

Hmm....

Ideally, we'd just take the now-pointless prompt down and do the new load.  But I guess this is kinda hard, eh?
Attached patch Patch v15 with more context and -p (obsolete) — — Splinter Review
Darin, biesi, the issue we're trying to resolve here is that we want to prompt before calling AsyncOpen() on a channel, since if the user cancels we don't want to contact the server.  At the same time, we want to gracefully deal with a new load starting while the previous load's prompt is up...

The question is what would be reasonable behavior in that case.  Just ignoring the new load?  Putting up a second prompt?  Queueing up the new load and dealing after the prompt comes down?  I doubt we can take down the first prompt.

Also, how well will HTTP deal with a modal dialog being posed from OnRedirect?
note to self: Boris suggested to implement the nsIHttpEventSink.idl interface, in order to implement the warning in http protocol level redirection events. A test case is here: https://kuix.de/misc/test26/redirect.php
And he suggested to remove the lock, as docshell isn't threadsafe anyway.
(nsIHttpEventSink.idl is deprecated, use nsIChannelEventSink)

why can a new load start if the dialog is modal?
Because it's not app-modal and because JS can still be running in other windows, for example.
Blocks: 119969
*** Bug 302634 has been marked as a duplicate of this bug. ***
Blocks: 233184
Target Milestone: Future → mozilla1.9alpha
*** Bug 277806 has been marked as a duplicate of this bug. ***
Whiteboard: [kerh-coa]
> The question is what would be reasonable behavior in that case.  Just ignoring
> the new load?  Putting up a second prompt?  Queueing up the new load and
> dealing after the prompt comes down?  I doubt we can take down the first
> prompt.

Can you suspend the subsequent loads?  Or, how would a new load be generated if you have a modal event queue in place?


> Also, how well will HTTP deal with a modal dialog being posed from OnRedirect?

I'm not sure.  You'll have to test.
> Or, how would a new load be generated if you have a modal event queue in place?

Loads can be started from other windows...
Attachment #202141 - Flags: review?(bzbarsky) → review-
*** Bug 341412 has been marked as a duplicate of this bug. ***
Blocks: 321022
QA Contact: bmartin → ui
Flags: blocking1.9?
Pre-existing.  -'ing.  Patch is way old.  Please re-nom if an issue.
Flags: blocking1.9? → blocking1.9-
Blocks: 415705
Blocks: 418354
Summary: there is no way "cancel" browsing to/from ssl'ed requests → there is no way "cancel" browsing to an https link from an https page
Summary: there is no way "cancel" browsing to an https link from an https page → there is no way "cancel" browsing to an http link from an https page
Blocks: newlock
Blocks: 369503
Depends on: 432685
Version: psm2.1 → 1.0 Branch
Version: 1.0 Branch → Trunk
Flags: blocking1.9.1?
Whiteboard: [kerh-coa] → [kerh-coa][sg:want]
Updating the summary
Summary: there is no way "cancel" browsing to an http link from an https page → implement mechanism to block loading of insecure content in a secure context
I would prefer for the summary to say "prevent sending insecure requests".

The present dialog has the property that it appears after the request has
been sent, at the point where the response is about to be rendered.  
The damage is already done by that time.  It has to be stopped before the 
request is initiated.
Updating the summary once more. Nelson's proposal sounds good to me, because loading insecure content is preceded by sending some insecure request.
Summary: implement mechanism to block loading of insecure content in a secure context → implement mechanism to prevent sending insecure requests from a secure context
Any chance for progress on this in the next few days to try and make firefox 3.1?

otherwise it might be time to try for next release.
Flags: blocking1.9.1? → blocking1.9.1-
Depends on: 494940
Firefox should give the user the option to "view only the webpage that was
delivered securely" on partially encrypted webpages.  Internet Explorer 8 does this, so the user is sure which part of the site is secure.
I would like to add some text from Bug 508214.

From a user's point of view, when opening an encrypted page with some unencrypted content, the expected result should be:

Firefox should stop transmission of the page and pop up a warning with three choices:
1. continue without loading unencrypted content (preferred);
2. stop browsing (then jump to a special page with further actions, like the connection reset page, invalid certificate page, etc.);
3. proceed with everything, secure and insecure.
And a check box for whether to remember the choice (better if it could be site-specific).
Blocks: lockicon
Blocks: 509409
I think the priority should be raised to P1, because of bug 509409 comment 5.
Trying to reassign to nobody, as I have not worked on this recently.
Assignee: kaie → nobody
Priority: P3 → P1
Status: ASSIGNED → NEW
Flags: wanted1.9.2?
Flags: blocking1.9.2?
Target Milestone: mozilla1.9alpha1 → ---
-> me

I've already been preparing to work on this.
Assignee: nobody → honzab.moz
Status: NEW → ASSIGNED
Still really, really want this. Still won't block on it, though.
Flags: wanted1.9.2?
Flags: wanted1.9.2+
Flags: blocking1.9.2?
Flags: blocking1.9.2-
I'm not sure a modal dialog is an effective solution in the first place for the majority of users.  A better approach might be what Chrome does, which is to manage this as a security setting (enable, disable, or permit only images).  Though, apparently nobody really uses that setting either.
Re: comment 104

I would love for that option to load *only* the secure objects, without *any* of the insecure ones.  Right now, the only way I have of doing it (sometimes successfully) is a combination of NoScript & RequestPolicy extensions.
Re: comment 109

To clarify, an option to perhaps force anything insecure to secure, & fail the object load if unable to do so.

Shoulda mentioned that last time.  Mea culpa for bugspam.
(In reply to comment #106)
> -> me
> 
> I've already been preparing to work on this.

Hey Honza - has this fallen off the radar?
(In reply to comment #111)
> Hey Honza - has this fallen off the radar?

No, just dropped below on my todo list.

I need a good knowledge of a request context to solve this bug.  This context seems to be provided by load groups.  I'm studying the load group code to see if we can safely use it, simply update it, or whether we need a new mechanism.

I also want to have a list of detailed suggestions for e10s design to prevent any missing API making bugs like these hard to fix.

I'll comment on this bug about the progress.
Blocks: 558551
Checking in to see if this is still moving forward.  This issue was recently discussed during a security meeting and we are very much behind this change.  

More specifically, we'd like to see the following:

Whenever an HTTPS page attempts to load insecure content, the request to load insecure content is dropped by the browser.  The HTTP request is never actually sent.  The only exception would be if the HTTP request is for an image.  We realize the ultimate secure approach would be to also block HTTP requests for images, but this is a common use case that exposes little risk to the user.  The biggest concern is blocking HTTPS sites from requesting scripts or style sheets over HTTP.

Is this approach in alignment with your intended code fix?
This goes much against what I want to do.  Bug 509409 is a perfect example of why to also prevent the image requests.  The purpose is to prevent sending sensitive information out to an insecure server, information that can be carried simply in the image URL (w/o the user being notified/given a chance to disallow that).


As for the progress - I had some more or less serious but mainly time-consuming health problems that prevented me from actively working on this particular bug.  Now I'm pressed by e10s work and unfortunately I might not be able to move forward with this in the next month, but that is not the final word.
Are you against the approach of automatically blocking (i.e. not sending) HTTP requests for stylesheets and javascript?

-OR-

Are you against not automatically blocking HTTP requests for images?

If the latter, I'm ok with autoblocking HTTP image requests too. However, this might make it tougher to get the fix accepted, since so many sites use all HTTPS except for images (for example, gmail).

I read through bug 509409. It was a bit tough to understand the root issue / attack vector.  In the img example it is assumed that the attacker has the ability to control the img src URL. In this scenario the attacker could just point to their own HTTPS page for whatever purpose they are injecting the img src URL.
I want to block HTTP image requests AS WELL.

I understand it might be a problem in the case of sites like gmail to accept such a patch.  What about allowing HTTP image requests from an HTTPS context ONLY to the same domain (we cannot use the same-origin rule here)?

And what should we do in the case of an HTTP subrequest to a different domain?
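The same-domain allowance suggested in this comment could be sketched roughly as follows. This is an illustrative sketch, not the real Gecko API: `allowInsecureImage` and the host comparison are hypothetical names for the idea of tolerating http images only on the page's own domain.

```javascript
// Sketch of the suggestion above: permit an insecure (http) image load
// from a secure (https) page only when the image is hosted on the same
// domain as the page itself.
function allowInsecureImage(pageUrl, imageUrl) {
  const page = new URL(pageUrl);
  const image = new URL(imageUrl);

  // Only relevant when a secure page requests an insecure subresource.
  if (page.protocol !== "https:" || image.protocol !== "http:") {
    return true; // not a mixed-content case; nothing to block here
  }

  // Same-host requests are tolerated; cross-host ones are blocked.
  return page.hostname === image.hostname;
}

console.log(allowInsecureImage("https://mail.example.com/inbox",
                               "http://mail.example.com/logo.png")); // true
console.log(allowInsecureImage("https://mail.example.com/inbox",
                               "http://tracker.example.net/p.gif")); // false
```

Note this deliberately compares only the hostname, since (as the comment says) the same-origin rule cannot be used here: the scheme necessarily differs between the page and the subresource.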
In the case of gmail the images are often at other domains and referenced via rich html within emails. So any change to block images would affect sites such as gmail.

What about taking the hard line stance on all non-images (as we've both already agreed) and then adding the option to also block images as a configurable option (about:config)?  We can debate which way that option should default to.
Agree.  There should be some kind of an option because some sites will need to strictly allow all image loads (gmail) and some strictly disallow (ebanking).

I think of a bar similar to the window pop-up blocker bar with "remember for this site" (though, possibly spoofable).  This would allow us to set the option to 'block' by default.
(In reply to comment #114)
> This is much against what I want to do.  Bug 509409 is a perfect example of why
> to prevent also the image requests.  The purpose is to prevent sending of a
> sensitive information out to an insecure server that can be carried simply in
> the image URL (w/o user being notified/given a chance to disallow that).

I do not believe that we should be trying to prevent the use of covert channels here; it's simply not feasible, and it's against the purpose of SSL and the mixed-content warnings.

The purpose behind the mixed-content warning (or load blocking for CSS and script) is that the guarantees we make about the content being what was intended by the site with the Safety Blue in the URL bar may not hold if an HTTP subrequest is subverted.  If a server is generating an image URL that encodes sensitive data over HTTP to "leak" it, they could just communicate that data themselves (and they are likely storing it in plaintext in a database somewhere anyway).

If a site wishes, for some reason, to have the browser enforce a policy of no insecure subrequests for images as well, I think they should use CSP.  Making blocking the default will just numb the user -- how are they supposed to decide if they should let a given site make an HTTP request for an image? why is their browser asking them that? -- and lead to more people disabling the behaviour globally.

Bug 509409 is not a perfect example at all, especially since it somehow requires the attacker to get the site to interpolate the sensitive content into the generated URL.  At that point, they're basically evaluating code on the bank's web servers, and an attacker with that level of sophistication can easily point at an https server they own (or pwn).
Sending sensitive data over an unsecured TCP connection is not a "covert 
channel".  It's an OVERT channel.   It is certainly NOT the case that https
is intended to protect the responses but not the requests, as you seem to 
be suggesting, Mike.  When I'm visiting a site that has access to sensitive
information about me, I don't want my browser sending ANY requests to any
sites via http at all for any reason, including images.  That should be my
choice, not yours.
(In reply to comment #117)
> In the case of gmail the images are often at other domains and referenced via
> rich html within emails. So any change to block images would effect sites such
> as gmail.

In such cases (http images on an https site, like gmail showing html email that contains such images), IE asks the user whether he wants HTTPS-only or also http content.
Yes, it is a bit annoying surfing that way, compared to FF, which does not make a sound in this case, but security should be more important.
The bug to which Honza was referring, in the quoted part of my comment, is about using failed-HTTP requests as a covert channel to leak information out of a hacked site.  At least, as I understand the term!

I absolutely believe that SSL should protect the privacy of the request as well as the response, but that has nothing to do with how different types of non-SSL requests impact the integrity of the page, which we represent to users via characteristics of the top-level address.  If you enter a password on a page at https://mybank.com, you can trust -- to the extent that PKI can provide trust -- that the content in question came from mybank.com, and that it hasn't been subverted by an attacker while in transit, so it's doing what mybank.com wants it to.  If there is a script loaded via HTTP, you can't trust that, because a compromised script can rewrite the page.  Similarly, more or less, for CSS.  Compromising an image does not affect the behaviour of the page beyond the appearance of the image, and I believe that it is not a meaningful risk to the integrity of the page, or its "under control of mybank.com" state as we represent it in the URL bar.

I don't know how to condition our settings on whether the site has sensitive information about you or not.
Whiteboard: [kerh-coa][sg:want] → [kerh-coa][sg:want][psm-arch]
Attached file sujet:bug (obsolete) (deleted) —
sujet: bug
Attachment #463326 - Flags: ui-review+
Attachment #463326 - Flags: review+
Attachment #463326 - Flags: approval1.9.0.next+
The content of attachment 463326 [details] has been deleted by
    Dave Miller [:justdave] <justdave@mozilla.com>
who provided the following reason:

inappropriate content

The token used to delete this attachment was generated at 2010-08-05 16:52:53 PDT.
Depends on: 642675
Hey Honza, can we raise the priority of this bug?  There is a lot of discussion happening around improving our mixed content story, and it looks like many of the experiments we'd like to try depend on this.  Is there anything I can do to help unblock you so we can get a patch for this?  Thanks!
I kinda postpone fixing this as it seemed to me mitigated by CSP and mainly HSTS (bug 495115).

I did some research in the past on how we could fix this.  I think I might know where the right places to hook are, also I have other bugs on the list, postponed from the same reason as outlined above, I would like to fix along with this one.

One thing is to fix this bug (i.e. write the code); another is how to present this feature to the user.  Should this be an option?  Limited to just cross-site loads?  Or whatever other questions there may be?  We might want to raise this in some forum first.

I'm not familiar with the current state of how other browsers do this; that may be a good place to start.

I could add this as a Q3 networking goal, if desired, and work on this, but not just immediately now.  A realistic estimate might be to start intensively coding this in about one month, or maybe sooner.
I do not think this is a networking feature, it is a security & privacy feature. We (networking team) can make it a goal to help implement a design, but we have to have a design first. If Honza is going to be the one to implement this, then we need to keep Honza in the loop with all the design discussions that secteam is having here in Mountain View. I suggest that we document a design before anybody bothers to try to write any (more) code.

Regarding images: I do agree with others above (and the IE team and the Chrome team at Google) that we can't block HTTP images right now. But, also, we can't show the same security indicators when there are insecure images as we would for regular HTTPS.

One potential point of contention I see is that this feature purposefully makes the UX worse for sites that have mixed content, basically to coerce the website creator into fixing the security problem. But we want to make the Firefox UX better, not worse. The better we make the UX in this case, the more likely it is that websites will continue to implement HTTPS poorly.

I have an idea for an "HSTS-if-mixed-content-otherwise" feature, that would work as follows: If the document is served over HTTPS, and we are about to request http://example.org/x.img, then first we make an HTTPS request to https://example.org/x.img (as if the site was HSTS). There are a variety of ways that the site can indicate that the HTTPS-served resource is equivalent to the HTTP-served one--e.g. a simple "Secure-Version-of: http://example.org/x.img" response header field. This would allow us to help sites like YouTube silently upgrade hotlinked content to avoid mixed content warnings automatically without requiring them to be HSTS. We should see if any websites are interested in implementing this or something similar to it.
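The silent-upgrade idea above could be sketched like this. Everything here is hypothetical: the "Secure-Version-of" header is only the field proposed in the comment (not an existing standard), and the function names are illustrative.

```javascript
// Rewrite an insecure subresource URL to its speculative https twin,
// as if the host were HSTS.  Non-http URLs are left untouched.
function upgradeMixedRequest(subresourceUrl) {
  const url = new URL(subresourceUrl);
  if (url.protocol !== "http:") {
    return subresourceUrl; // already secure (or non-HTTP); nothing to upgrade
  }
  url.protocol = "https:";
  return url.href;
}

// The https response would only be used if the server vouches that it
// is equivalent to the http resource originally requested, via the
// proposed (hypothetical) "Secure-Version-of" response header.
function isEquivalent(responseHeaders, originalUrl) {
  return responseHeaders["secure-version-of"] === originalUrl;
}

console.log(upgradeMixedRequest("http://example.org/x.img")); // "https://example.org/x.img"
```

If the equivalence check fails (no header, or a mismatched URL), the browser would fall back to whatever the mixed-content policy dictates for the original http request.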
Whiteboard: [kerh-coa][sg:want][psm-arch] → [kerh-coa][sg:want][psm-arch][parity-chrome][parity-ie]
I believe this mechanism can be implemented quite easily using existing nsIContentPolicy infrastructure.

(In reply to comment #69)
> You can't show prompts from nsIContentPolicy. the prompt to ask before
> loading
> images used to do that, and it caused crashes. see bug 208867.

We don't need to put up a modal prompt in the middle of a shouldLoad call.  Instead, we could look up pref values, say security.allow_mixed_script and security.allow_mixed_display (to borrow Chrome's terminology), from shouldLoad and we could put up an info bar after the fact telling users that we blocked something (or allowed it?) and giving them the option to change the pref value and reload with the new setting.  My preference would be to set allow_mixed_script to false by default, and allow_mixed_display (images, iframes, fonts) to true.
If that approach sounds reasonable (bz, I'm looking at you) I'd be happy to take a swag at a patch, in fact, as it sounds like Honza's plate might have more on it than mine at the moment.
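The pref-driven decision described in the previous comment could be sketched as follows. The two pref names come from that comment; the split into "mixed script" (scripts, stylesheets, plugins) and "mixed display" (images, iframes, fonts) also follows it. The constants and the plain-object pref lookup are illustrative stand-ins, not the real nsIContentPolicy interface.

```javascript
// Illustrative return values mirroring the roles of
// nsIContentPolicy.ACCEPT / REJECT_TYPE.
const ACCEPT = 1;
const REJECT_TYPE = -2;

// Classification per the comment above.
const MIXED_SCRIPT_TYPES = new Set(["script", "stylesheet", "object"]);
const MIXED_DISPLAY_TYPES = new Set(["image", "subdocument", "font"]);

// Decide, for an insecure load requested from a secure context,
// whether to accept or reject based on the two prefs.
function shouldLoad(contentType, prefs) {
  if (MIXED_SCRIPT_TYPES.has(contentType)) {
    return prefs["security.allow_mixed_script"] ? ACCEPT : REJECT_TYPE;
  }
  if (MIXED_DISPLAY_TYPES.has(contentType)) {
    return prefs["security.allow_mixed_display"] ? ACCEPT : REJECT_TYPE;
  }
  return ACCEPT; // types not covered by either pref load as before
}

// Defaults suggested in the comment: block mixed script, allow mixed display.
const prefs = {
  "security.allow_mixed_script": false,
  "security.allow_mixed_display": true,
};
console.log(shouldLoad("script", prefs)); // -2 (blocked)
console.log(shouldLoad("image", prefs));  // 1 (allowed)
```

Because this runs synchronously inside shouldLoad, no prompt is needed at decision time; the info bar mentioned in the comment would be posted afterwards to offer a reload with the pref relaxed.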
That approach could work, sure.

Do we do content policy checks on XHR?
Yes, we do.
I'll start working on a patch if that's all right with you, Honza.
Brandon, feel free to take it.
Also, will your solution work for downloads (i.e. external handling) ?
Good point.  I will investigate if external handling can be routed through Content Policy.  We should definitely cover that case.
Brandon: will this also cover/block NPAPI requests?
My patch doesn't currently address NPAPI requests, but it easily could since I'm using nsIContentPolicy which has TYPE_OBJECT_SUBREQUEST.

To provide a status update, I have a patch that blocks mixed content loads of TYPE_SCRIPT, TYPE_STYLESHEET and TYPE_OBJECT.  The patch also blocks mixed content downloads before we create an external app handler, but does NOT prevent the request from going out, though, since we can't differentiate a download from a link click until we receive the response.

Right now, I'm working on the UI part of the patch in which we'll notify the user that something was blocked.  I see uiwanted is already set.  I'll speak to some of the UX teams to figure out the details.
Assignee: honzab.moz → bsterne
Attached patch contentpolicy approach (obsolete) — — Splinter Review
I just pushed this patch to Try:
https://tbpl.mozilla.org/?tree=Try&rev=415e1cd327b7

I will split the patch into pieces for review from appropriate individuals: Part of the patch is in browser.js and adds a listener for the "mixed content blocked" event to put up the info bar to allow the mixed content.  The bulk of the patch implements the content policy.  There's a testcase here that demonstrates the behavior:
http://people.mozilla.org/~bsterne/tests/62178/test.html

This patch is basically a strawman that implements the blocking and a very simple UI.  The "Allow" action is attached to the session history entry, so if you navigate back to the page, we'll re-allow the mixed content, but if you break the navigation chain and come back to the page, we'll again block.

This definitely needs some UX review... Limi?

I'll attach a screenshot of the current UI shortly.
Attached patch content/docshell patch v1 (obsolete) — — Splinter Review
nsIContentPolicy implementation, add flag to nsISHEntry to indicate mixed script is allowed
Attachment #202141 - Attachment is obsolete: true
Attachment #202142 - Attachment is obsolete: true
Attachment #579538 - Attachment is obsolete: true
Attachment #579878 - Flags: review?(bugs)
Attached patch browser patch v1 (obsolete) — — Splinter Review
adds listener for MixedContentBlocked event which puts up an info bar with the option to reload the page with the mixed script enabled
Attachment #579880 - Flags: review?(gavin.sharp)
Comment on attachment 579878 [details] [diff] [review]
content/docshell patch v1

>diff --git a/content/base/src/Makefile.in b/content/base/src/Makefile.in
>--- a/content/base/src/Makefile.in
>+++ b/content/base/src/Makefile.in
>@@ -156,6 +156,7 @@ CPPSRCS		= \
> 		ThirdPartyUtil.cpp \
> 		nsEventSource.cpp \
> 		FileIOObject.cpp \
>+		nsMixedContentBlocker.cpp \
> 		$(NULL)
> 
> # Are we targeting x86-32 or x86-64?  If so, we want to include SSE2 code for
>diff --git a/content/base/src/nsMixedContentBlocker.cpp b/content/base/src/nsMixedContentBlocker.cpp
>new file mode 100644
>--- /dev/null
>+++ b/content/base/src/nsMixedContentBlocker.cpp
>@@ -0,0 +1,262 @@
>+/* -*- Mode: C++; tab-width: 4; indent-tabs-mode: nil; c-basic-offset: 4 -*- */
>+/* ***** BEGIN LICENSE BLOCK *****
>+ * Version: MPL 1.1/GPL 2.0/LGPL 2.1
>+ *
>+ * The contents of this file are subject to the Mozilla Public License Version
>+ * 1.1 (the "License"); you may not use this file except in compliance with
>+ * the License. You may obtain a copy of the License at
>+ * http://www.mozilla.org/MPL/
>+ *
>+ * Software distributed under the License is distributed on an "AS IS" basis,
>+ * WITHOUT WARRANTY OF ANY KIND, either express or implied. See the License
>+ * for the specific language governing rights and limitations under the
>+ * License.
>+ *
>+ * The Original Code is mozilla.org code mixed content blocking code.
>+ *
>+ * The Initial Developer of the Original Code is
>+ *   Mozilla Corporation
>+ *
>+ * Contributor(s):
>+ *   Brandon Sterne <bsterne@mozilla.com>
>+ *
>+ * Alternatively, the contents of this file may be used under the terms of
>+ * either the GNU General Public License Version 2 or later (the "GPL"), or
>+ * the GNU Lesser General Public License Version 2.1 or later (the "LGPL"),
>+ * in which case the provisions of the GPL or the LGPL are applicable instead
>+ * of those above. If you wish to allow use of your version of this file only
>+ * under the terms of either the GPL or the LGPL, and not to allow others to
>+ * use your version of this file under the terms of the MPL, indicate your
>+ * decision by deleting the provisions above and replace them with the notice
>+ * and other provisions required by the GPL or the LGPL. If you do not delete
>+ * the provisions above, a recipient may use your version of this file under
>+ * the terms of any one of the MPL, the GPL or the LGPL.
>+ *
>+ * ***** END LICENSE BLOCK ***** */
>+
>+#include "nsMixedContentBlocker.h"
>+#include "nsContentPolicyUtils.h"
>+
>+#include "nsIContentViewer.h"
>+#include "nsIDOMEvent.h"
>+#include "nsIDOMEventTarget.h"
>+#include "nsINode.h"
>+#include "nsIDOMWindow.h"
>+#include "nsIDOMNode.h"
>+#include "nsIDOMDocument.h"
>+#include "nsCOMPtr.h"
>+#include "nsContentUtils.h"
>+#include "nsIDocShell.h"
>+#include "nsIDocShellTreeItem.h"
>+#include "nsIContentViewer.h"
>+#include "nsISHEntry.h"
>+#include "nsIWebNavigation.h"
>+#include "nsISHistory.h"
>+#include "nsIHistoryEntry.h"
>+#include "mozilla/Preferences.h"
>+
>+using namespace mozilla;
>+
>+// Is mixed script blocking (script, stylesheets, plugins) enabled?
>+bool nsMixedContentBlocker::sBlockMixedScript = true;
>+
>+// Fired at the document that attempted to load mixed content.  Causes the
>+// UI to display an info bar that offers the choice to reload the page with
>+// mixed content permitted.
>+class nsMixedContentBlockedEvent : public nsRunnable {
{ should be in the next line


>+public:
>+    nsMixedContentBlockedEvent(nsISupports *aContext)
>+        : mContext(aContext)
>+    {}
>+
>+    NS_IMETHOD Run() {
{ should be in the next line


>+        NS_ASSERTION(mContext,
>+                     "You can't call this runnable without a requesting context");
2 space indentation, please.


>+
>+        // We might not have a doc to fire the event at. This happens, for
>+        // example, during the first speculative load of the insecure content.
>+        // We block the load anyway even though there's no UI to update.
>+        nsCOMPtr<nsIDocument> doc;
>+        nsCOMPtr<nsIDOMDocument> domDoc;
>+        nsCOMPtr<nsINode> node = do_QueryInterface(mContext);
>+        if (node) {
>+            doc = node->OwnerDoc();
>+            nsCOMPtr<nsIDOMDocument> domDoc = do_QueryInterface(doc);
>+            if (doc && domDoc) {
>+                nsContentUtils::DispatchTrustedEvent(doc, domDoc,
>+                                                     NS_LITERAL_STRING("MixedContentBlocked"),
>+                                                     true, true);
>+            }
So this is sent to the web page? I don't think we want that. Should the event be dispatched to chrome only
(the target could still be doc)?
I think you could use nsPLDOMEvent (aDispatchChromeOnly=true), RunDOMEventWhenSafe
nsPLDOMEvent is about to be renamed to AsyncDOMEvent or some such.

>+    // We need aRequestingLocation to pull out the scheme. If it isn't passed
>+    // in, get it from the DOM node.
>+    if (!aRequestingLocation) {
>+        nsCOMPtr<nsIDocument> doc;
>+        nsCOMPtr<nsINode> node = do_QueryInterface(aRequestingContext);
>+        if (node) {
>+            doc = node->OwnerDoc();
>+        }
>+        if (!doc) {
>+            doc = do_QueryInterface(aRequestingContext);
>+        }
nsIDocument is an nsINode, so useless QI here.

>+    /* REMOVE debugging */
>+    nsCAutoString thisSpec, topSpec;
>+    aContentLocation->GetSpec(thisSpec);
>+    aRequestingLocation->GetSpec(topSpec);
>+    printf("\nBSTERNE  top url: %s\n        this url: %s\n     contentType: %d\n",
>+           topSpec.get(), thisSpec.get(), aContentType);
>+
>+    // Get the top-level scheme. If it is not an HTTPS page then mixed content
>+    // restrictions do not apply.
>+    nsCAutoString topScheme;
>+    aRequestingLocation->GetScheme(topScheme);
>+    if (!topScheme.LowerCaseEqualsLiteral("https")) {
>+        printf("BSTERNE exit 3\n");
>+        return NS_OK;
>+    }
Couldn't you use nsIURI::SchemeIs?


>+    // Get the scheme of the sub-document resource to be requested. If it is
>+    // not an HTTP load then mixed content doesn't apply.
What about FTP?
Attachment #579878 - Flags: review?(bugs) → review-
Comment on attachment 579880 [details] [diff] [review]
browser patch v1

It'd be good to get that UX feedback before diving too deep into the code specifics.

>diff --git a/browser/app/profile/firefox.js b/browser/app/profile/firefox.js

>+pref("security.mixed_content.block_active_content", true);

This looks unused in this patch - if it's supposed to control the backend code, it should be put in all.js instead.

>diff --git a/browser/base/content/browser.js b/browser/base/content/browser.js

>+var gMixedContentHandler = {

>+      var notification = notificationBox.getNotificationWithValue("mixed-content-blocked");
>+      if (notification) {
>+        notification.label = message;
>+      }

This doesn't look useful - any notification with that value will already have that label, given that it doesn't vary, right? You should probably just remove any existing notifications here.

>+          callback: function(aNotificationBar, aButton) {
>+            // Mark the SHEntry as allowed to run mixed script
>+            var webNav = getWebNavigation();
>+            var sessionHistory = webNav.sessionHistory;
>+            var entry = sessionHistory.getEntryAtIndex(sessionHistory.index, false)

This is going to operate on the session history entry that's active when you click the button, rather than the one that was active when you showed the notification. I suppose this may not be a problem in practice if the notification is guaranteed to not outlive the current SHEntry, but it would be best to not rely on that and get the reference to the SHEntry in the event handler (browser.webNavigation... etc.), and then just refer to it here via a closure. You can also use browser.reload() rather than BrowserReload().

This also seems like it might not handle subframes properly. In the case of a subframe using mixed content, setting the allowMixedScript attribute on the outer SHEntry and reloading it probably won't affect any of its subframes, right? If we go that route, it would be easier to fire the event on the content window (smaug can advise on the specifics here), from which you can get the relevant webNav and topmost browser using something like:

var webNav = contentWindow.QueryInterface(Ci.nsIInterfaceRequestor).getInterface(Ci.nsIWebNavigation);
var browser = webNav.QueryInterface(Ci.nsIDocShell).chromeEventHandler;
Attachment #579880 - Flags: review?(gavin.sharp) → review-
Comment on attachment 579878 [details] [diff] [review]
content/docshell patch v1

Review of attachment 579878 [details] [diff] [review]:
-----------------------------------------------------------------

::: content/base/src/nsMixedContentBlocker.cpp
@@ +14,5 @@
> + *
> + * The Original Code is mozilla.org code mixed content blocking code.
> + *
> + * The Initial Developer of the Original Code is
> + *   Mozilla Corporation

According to bug 507387, these license blocks should state Mozilla Foundation is the initial developer.
UI wise, this looks fine to me. Infobar is the right thing to be using here, unless we want to be even more subtle about it, but that seems like it could have some drawbacks of the "where is my content" form.
Keywords: uiwanted
Brandon, thank you very much for your great work on this bug! I'm very glad to see this progress!

I would like to request, please don't close this bug until we are able to block any kind of insecure content.

I see your patch currently:
- blocks scripts, objects, stylesheets
- doesn't block display content, images etc.


In my opinion, before we can call this bug fixed, we must have (at least) preferences, that can be used to fully block all insecure content - regardless of the default setting of the prefs.

Because this bug is a core platform bug, such preferences should live somewhere in the core code, not in the browser code. Maybe we could use netwerk/base/public/security-prefs.js which has other related prefs.


Now to the user interface discussion.

I like:
- the notification bar
- that content is blocked by default.
- that security indicators go away once the user allows insecure content

I don't like (yet):
- that you don't warn about display content, and continue to load it


I understand that your current intention is to allow loading of insecure display content, because you want to do the same thing as other browsers. Well, blocking *some* insecure content by default is an improvement compared to what we have today, even if we *still* load display content by default. I hope that we will be able to improve strictness in the future.


However, in my opinion, in order to call this bug solved, the following should be included in the work done to resolve this bug:

If the user switches a (core) pref meaning
        "block insecure display content by default"
then your user interface should handle that content, too.

In that scenario (prefs ask to block all insecure content) I would expect that a single user interface interaction "allow" has the effect to load all such content.


Another user interface related question (not specifically related to the core blocking ability requested in this bug) is:

   How frequently should the user be asked to reconfirm?


With your current patch, the user's override click is a one time remedy. Once I reload the page, things get blocked again. Is this a good idea, because it's more secure? Or will it annoy users and motivate them to disable blocking altogether?

A solution might be to implement permanent site specific permissions (e.g., in addition to the "allow" button, add another "always allow on this site" button).

But this means the user might need a kind of management interface to check and undo such a permanent setting. (Something like the management interface for "ability to install add-ons" which shows the list of exceptions, and gives the user the ability to remove exceptions.)
(In reply to Kai Engert (:kaie) from comment #149)
> I would like to request, please don't close this bug until we are able to
> block any kind of insecure content.
> 
> I see your patch currently:
> - blocks scripts, objects, stylesheets
> - doesn't block display content, images etc.

I agree with Kai. I would prefer to have a bug for blocking what these patches block, blocking this bug, with this bug becoming a tracking bug. Brandon, Jesse, Dan, and I at one point or another all discussed several additional things that we could do in the interim between landing Brandon's patch and fully blocking any mixed content, and we (I) should file bugs that block this tracking bug.

Do you agree Brandon?

> In my opinion, before we can call this bug fixed, we must have (at least)
> preferences, that can be used to fully block all insecure content -
> regardless of the default setting of the prefs.
>
> Because this bug is a core platform bug, such preferences should live
> somewhere in the core code, not in the browser code. Maybe we could use
> netwerk/base/public/security-prefs.js which has other related prefs.

Please file a new bug for that preference.

> I understand that your current intention is to allow loading of insecure
> display content, because you want to do the same thing as other browsers.
> Well, blocking *some* insecure content by default is an improvement compared
> to what we have today, even if we *still* load display content by default. I
> hope that we will be able to improve strictness in the future.

I agree. But, I don't think we should block Brandon's current work on the further improvements, including the pref you suggest. We have brainstormed many additional improvements that haven't been captured anywhere yet, besides the improvements you suggest.

> With your current patch, the user's override click is a one time remedy.
> Once I reload the page, things get blocked again. Is this a good idea,
> because it's more secure? Or will it annoy users and motivate them to
> disable blocking altogether?

This is also a concern I have, especially in the "reload" case.

> A solution might be to implement permanent site specific permissions (e.g.,
> in addition to the "allow" button, add another "always allow on this site"
> button).

I disagree with this. I think it might be OK to change it so that it would be less annoying to browse such a broken site, but I do not think we should allow a persistent multi-session override for any site, until we have data suggesting that the lack of this is causing users to disable the feature.
(In reply to Brian Smith (:bsmith) from comment #150)
> I agree with Kai. I would prefer to have a bug for blocking what this these
> patches block, blocking this bug, with this bug becoming a tracking bug.
> Brandon, Jesse, Dan, and I at one point or another all discussed several
> additional things that we could do in the interim between landing Brandon's
> patch and fully blocking any mixed content, and we (I) should file bugs that
> are blocking this tracking bug.
> 
> Do you agree Brandon?

I agree, and I'll file the two bugs: "Block mixed script" and "Block mixed display", and I'll make those bugs block this one.

> > In my opinion, before we can call this bug fixed, we must have (at least)
> > preferences, that can be used to fully block all insecure content -
> > regardless of the default setting of the prefs.
> >
> > Because this bug is a core platform bug, such preferences should live
> > somewhere in the core code, not in the browser code. Maybe we could use
> > netwerk/base/public/security-prefs.js which has other related prefs.

I agree, that seems to be a sensible place for these prefs.

> Please file a new bug for that preference.

I don't think it needs a new bug.  I'll just move the prefs in the next rev of the patch.

> > With your current patch, the user's override click is a one time remedy.
> > Once I reload the page, things get blocked again. Is this a good idea,
> > because it's more secure? Or will it annoy users and motivate them to
> > disable blocking altogether?
> 
> This is also a concern I have, especially in the "reload" case.

My patch handles the reload case.  If you click "Allow" and then reload the page, the mixed content gets loaded again.  This is because the "allow mixed content" flag is stored on the SHEntry (session history), so as the user navigates back and forward in their history, the preference for a given page is remembered.  You only get back into the "block and prompt" state for a given page if the navigation chain is broken, e.g. you manually type the URL or click a bookmark, etc.
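The per-SHEntry behavior described here can be modeled with a small standalone sketch (a simplified stand-in for nsISHEntry/session history, not the actual Gecko classes; the names and structure are illustrative assumptions):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch: the "allow mixed content" override lives on the session-history
// entry, so it survives reload/back/forward but not a fresh navigation
// (typed URL, bookmark, etc.), which creates a new entry with the flag reset.
struct SHEntry {
    std::string url;
    bool allowMixedContent = false;
};

struct SessionHistory {
    std::vector<SHEntry> entries;
    int index = -1;

    // Fresh navigation: truncate forward history, append a new entry.
    void Navigate(const std::string& url) {
        entries.resize(static_cast<size_t>(index + 1));
        entries.push_back({url, false});
        ++index;
    }
    void GoBack() { if (index > 0) --index; }
    void GoForward() { if (index + 1 < static_cast<int>(entries.size())) ++index; }
    SHEntry& Current() { return entries[static_cast<size_t>(index)]; }
};
```

Under this model, clicking "Allow" sets the flag on the current entry, and only a broken navigation chain returns the page to the block-and-prompt state.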

> > A solution might be to implement permanent site specific permissions (e.g.,
> > in addition to the "allow" button, add another "always allow on this site"
> > button).
> 
> I disagree with this. I think it might be OK to change it so that it would
> be less annoying to browse such a broken site, but I do not think we should
> allow a persistent multi-session override for any site, until we have data
> suggesting that the lack of this is causing users to disable the feature.

I agree with Brian, and that's why I chose not to implement the persistent, per-site option.  Frankly we want this to be a somewhat degraded experience because it puts pressure on sites to fix their mixed content issues.  If it turns out this is too much pressure, as Brian says, we can add the feature suggested by Kai.

I'm hoping to have a revised patch by Monday or so.
Depends on: 711141
Depends on: 711145
(In reply to Olli Pettay [:smaug] from comment #144)
> Comment on attachment 579878 [details] [diff] [review]
> content/docshell patch v1

Thanks for the review, Olli.  I've fixed everything in here, except for two things that I have questions about:

> >+                nsContentUtils::DispatchTrustedEvent(doc, domDoc,
> >+                                                     NS_LITERAL_STRING("MixedContentBlocked"),
> >+                                                     true, true);
> >+            }
> So this is sent to the web page? I don't think we want that. Should the
> event be dispatched to chrome only
> (target could be still doc).
> I think you could use nsPLDOMEvent (aDispatchChromeOnly=true),
> RunDOMEventWhenSafe
> nsPLDOMEvent is about to be renamed to AsyncDOMEvent or some such.

Is it permissible to use nsContentUtils::DispatchChromeEvent, or is there some property of nsPLDOMEvent that you like for this particular use case?

> >+    // Get the scheme of the sub-document resource to be requested. If it is
> >+    // not an HTTP load then mixed content doesn't apply.
> What about FTP?

Yeah, this is a fair question.  We almost certainly want to allow sftp: resources to load in this context.  Is that reasonable to allow just these two protocols?  What about secure websockets?  Where do we draw the line?

I've run into this question before.  Rather than hard code a bunch of protocols all over the place, we really need a way to ask the underlying channel "are you secure?".  Brian, is this what nsIChannel::securityInfo represents?
(In reply to Brandon Sterne (:bsterne) from comment #152)
> I've run into this question before.  Rather than hard code a bunch of
> protocols all over the place, we really need a way to ask the underlying
> channel "are you secure?".  Brian, is this what nsIChannel::securityInfo
> represents?

Actually, this isn't the right question to ask.  We need to ask the "are you secure" question of a URI, before we even create a channel to fetch the URI.
> Is it permissible to use nsContentUtils::DispatchChromeEvent, or is there
> some property of nsPLDOMEvent that you like for this particular use case?

Your nsMixedContentBlockedEvent gets dispatched as a scriptrunner.
nsPLDOMEvent already supports that, so you wouldn't need to have class nsMixedContentBlockedEvent at all, I think.
bsterne: does this do the wrong thing for jar:https: ? (comment 79)
(In reply to timeless from comment #155)
> bsterne: does this do the wrong thing for jar:https: ? (comment 79)

My patch doesn't currently allow that, but if we want I could change it to use GetInnermostURI and extract the protocol from that.  Can secure and non-secure protocols be nested together in the same URI?  If not, then this approach would seem to handle all such cases.
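The GetInnermostURI idea can be illustrated with a plain-string sketch (this is not the Gecko nsIURI API, just an approximation of peeling nested schemes like jar:https:):

```cpp
#include <cassert>
#include <string>

// Sketch: peel nested URI schemes (e.g. "jar:https://host/x.jar!/f") down to
// the innermost scheme, approximating what nsIURI::GetInnermostURI exposes.
std::string InnermostScheme(std::string uri) {
    for (;;) {
        size_t colon = uri.find(':');
        if (colon == std::string::npos) return "";
        std::string scheme = uri.substr(0, colon);
        std::string rest = uri.substr(colon + 1);
        // "//" after the colon means we've reached the innermost URI;
        // otherwise what follows is another scheme:... to recurse into.
        if (rest.rfind("//", 0) == 0) return scheme;
        uri = rest;
    }
}
```

If secure and non-secure schemes can't be nested together, checking the innermost scheme this way would cover the jar:https: case.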
(In reply to Brandon Sterne (:bsterne) from comment #153)
> (In reply to Brandon Sterne (:bsterne) from comment #152)
> > I've run into this question before.  Rather than hard code a bunch of
> > protocols all over the place, we really need a way to ask the underlying
> > channel "are you secure?".  Brian, is this what nsIChannel::securityInfo
> > represents?
> 
> Actually, this isn't the right question to ask.  We need to ask the "are you
> secure" question of a URI, before we even create a channel to fetch the URI.

We need to process HSTS rewriting insecure->secure before we ask the "are you secure" question of the URI, because HSTS may rewrite an insecure link to a secure one automatically. (The same applies to CSP; we must process HSTS before we enforce CSP on a URL.) The HSTS service already has to be able to know how to detect insecure vs. secure schemes. So, one option would be to add a new API to the HSTS service that rewrites the URL to the secure equivalent (e.g. http->https, https->https) based on the HSTS info, or fails if the URL is insecure and there is no HSTS info for it.

We can add in Necko a way to determine if a scheme implies TLS. I filed bug 718002 for this.
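The ordering requirement (HSTS upgrade first, security check second) can be sketched with hypothetical helpers; the mock HSTS store and function names below are assumptions, not the real nsIStrictTransportSecurityService:

```cpp
#include <cassert>
#include <set>
#include <string>

// Mock HSTS store: hosts known to require HTTPS.
static const std::set<std::string> kHstsHosts = {"example.com"};

// Sketch: upgrade http:// to https:// for HSTS hosts before any
// "is this load secure?" decision is made.
std::string ApplyHsts(const std::string& url) {
    const std::string insecure = "http://";
    if (url.rfind(insecure, 0) == 0) {
        std::string rest = url.substr(insecure.size());
        std::string host = rest.substr(0, rest.find('/'));
        if (kHstsHosts.count(host))
            return "https://" + rest;
    }
    return url;
}

bool IsSecureUrl(const std::string& url) {
    return url.rfind("https://", 0) == 0;
}
```

Asking IsSecureUrl before ApplyHsts would wrongly flag an HSTS-upgradeable link as mixed content; asking afterward gives the right answer.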

One thing that is unclear from the above discussion is whether we are planning to show the infobar when images from an insecure origin are embedded in a page with a secure origin. I think we should always show the infobar when there is mixed content (without any button), and the infobar should have the "Allow" button added when we block some kind of mixed content. However, I know that Jesse and maybe others strongly disagree about this. I think the infobar is the best choice as *the* mixed-content indicator, especially since it was decided to remove the site identity block for mixed content (see bug 588270).
Depends on: 718002
(In reply to Brian Smith (:bsmith) from comment #157)
> We need to process HSTS rewriting insecure->secure before we ask the "are
> you secure" question of the URI, because HSTS may rewrite an insecure link
> to a secure one automatically. (The same applies to CSP; we must process
> HSTS before we apply enforce CSP on a URL.) The HSTS service already has to
> be able to know how to detect insecure vs. secure schemes. So, one option
> would be to add a new API to the HSTS service that rewrites the URL to the
> secure equivalent (e.g. http->https, https->https) based on the HSTS info,
> or fails if the URL is insecure and there is no HSTS info for it.

Since my patch (and CSP) use nsIContentPolicy to implement the restrictions, the HSTS conversion will be taken into account.  I get called right before we're going to open the channel to make the request, so in the HSTS case I'll be called with the converted URI.

> We can add in Necko a way to determine if a scheme implies TLS. I filed bug
> 718002 for this.

Great.  I'll use that API once it's available.  Until then, I'll make a patch that hard codes the relevant schemes.

> One thing that is unclear from the above discussion is whether we are
> planning to show the infobar when images from an insecure origin are
> embedded in a page with a secure origin.

I don't have a strong opinion about this.  It's certainly an option to add a similar info bar for the mixed display case, and have separate preferences for block-and-infobar for mixed display and mixed script.

> I think we should always show the
> infobar when there is mixed content (without any button), and the infobar
> should have the "Allow" button added when we block some kind of mixed
> content. However, I know that Jesse and maybe others strongly disagree about
> this. I think the infobar is the best choice as *the* mixed-content
> indicator, especially since it was decided to remove the site identity block
> for mixed content (see bug 588270).

I don't think this is for me to decide. I'll implement whatever the product drivers want.
Brandon, not sure you are also aware of bug 308496.  Is your implementation taking care also of XHRs?
My patch could be made to block mixed XHR as part of the block-mixed-display preference, but I'm not touching security state indicators at all (I simply block or allow mixed content loads to proceed).  If we have bugs in our security state indication, my patch won't address them.
What is the status of this bug?
The attached content/docshell patch implements a nsIContentPolicy that 1) watches for a mixed content load, 2) prevents it (if blocking is enabled), and 3) fires an event at the document to notify that content was blocked.  The browser patch adds some rough UI that puts up an info bar when mixed content is blocked and allows the user to reload the current page with the mixed content allowed.

I don't have the time right now to drive a product decision, but, Tanvi, if you want to find out what the product people want this feature to look like, I can finalize the patch.  We're most of the way there, but there are some nuances that I never dealt with.

The feature page would be a good place to flesh out these details:
https://wiki.mozilla.org/Security/Features/Mixed_Content_Blocker
No longer depends on: 642675
(In reply to Brandon Sterne (:bsterne) from comment #162)
> The attached content/docshell patch implements a nsIContentPolicy that 1)
> watches for a mixed content load, 2) prevents it (if blocking is enabled),
> and 3) fires an event at the document to notify that content was blocked. 
> The browser patch adds some rough UI that puts up an info bar when mixed
> content is blocked and allows the user to reload the current page with the
> mixed content allowed.
> 
> I don't have the time right now to drive a product decision, but, Tanvi, if
> you want to find out what the product people want this feature to look like,
> I can finalize the patch.  We're most of the way there, but there are some
> nuances that I never dealt with.
> 
> The feature page would be a good place to flesh out these details:
> https://wiki.mozilla.org/Security/Features/Mixed_Content_Blocker

Thanks Brandon for the update!  It looks like most of the open questions are around the UX and the default state.  We have to figure out where to display that content is blocked (info bar ?) and what options the user has (load display content, load all content, don't load content ?).

We also have to determine whether we want to follow a heuristic for when decisions are remembered (in addition to state saved in the session history).  I think this could be something we wait until after we have some feedback from users.

I can talk to UX and see what they think.  Not sure who the right contact for product is, but I will find out, talk to them and get back to you.



I have a few questions:
* There are 3 blockers to this bug.  For 718002, you have a workaround.  What about 711141 and 711145?  Not sure if the preferences have been implemented as part of this bug.
* Speaking of the preferences, where are the preferences to change the global state (or where will they be).  Will they be in about:config?  What are they called (security.mixed_content.block_active_content and ?)?

Thanks!
(In reply to Tanvi Vyas from comment #163)
> I have a few questions:
> * There are 3 blockers to this bug.  For 718002, you have a workaround.

Yes, once bug 718002 is implemented, this feature should uptake that API to make it work in non-HTTP contexts.

> What about 711141 and 711145?  Not sure if the preferences have been
> implemented as part of this bug.

The browser patch does add security.mixed_content.block_active_content (as you point out below).  That's bug 711141.  I'll add security.mixed_content.block_display_content, or something to that effect, to address bug 711145.

> * Speaking of the preferences, where are the preferences to change the
> global state (or where will they be).  Will they be in about:config?  What
> are they called (security.mixed_content.block_active_content and ?)?

These are the two mentioned above.  They'll be in about:config and I doubt we'd add UI anywhere else to toggle the prefs.
(In reply to Brandon Sterne (:bsterne) from comment #164)
> Yes, once bug 718002 is implemented, this feature should uptake that API to
> make it work in non-HTTP contexts.

I think we can probably WONTFIX bug 718002. It turns out we don't support ftps:// anyway, so for web content the logic is pretty simple: "If it is the main page or a subresource and the scheme is 'https' then it is secure; otherwise, it is not secure," which is basically what you've done, IIRC. 

> These are the two mentioned above.  They'll be in about:config and I doubt
> we'd add UI anywhere else to toggle the prefs.

+1.

IMO, we should consider fonts active content. My understanding is that TTF and other advanced font formats have what amounts to a scripting language for defining the hinting/shaping. Also, webfonts are relatively new so there shouldn't be much legacy use of mixed-content webfonts (this is why we hard-block mixed-content websockets by default already). Finally, if we display the text in a fallback font because we blocked the non-TLS webfont...oh well, the vast majority of pages will still work fine and will still be readable.

I agree that it makes sense to do any work on the site identity block in separate bugs, if there is any work to do. Note that the new site identity block design just landed in Nightly, and some part of that work interacts with this work.
No longer depends on: 718002
Hi Brandon,

We are still trying to work out what the right UX should be for this --- a) block and make it impossible (or nearly impossible) to unblock, or b) block and make it pretty easy to unblock (what Chrome is currently doing).

In the meantime, is there a way you can land the part of your code that distinguishes between mixed script content and mixed display content?  That way, we can at least have site identity icons that distinguish the difference (see https://bugzilla.mozilla.org/show_bug.cgi?id=747090#c11).

What do you think?
Sure, I'm very happy to land the "plumbing" changes that monitor for mixed script and mixed display.  I still need to address Smaug's review comments (comment 144), but those are very straightforward.

So, to be clear, you don't want any blocking, but just a notification that is sent to some component that will update the site identity icons?  What API should I be calling in order for that to happen?
(In reply to Brandon Sterne (:bsterne) from comment #167) 
> So, to be clear, you don't want any blocking, but just a notification that
> is sent to some component that will update the site identity icons?  What
> API should I be calling in order for that to happen?

In /browser/base/content/browser.js, XULBrowserWindow.onSecurityChange calls gIdentityHandler.checkState with the new state of the identity block.

Here is the C++ call stack for when onSecurityChange is called:

> xul.dll!nsBrowserStatusFilter::OnSecurityChange(nsIWebProgress * aWebProgress, nsIRequest * aRequest, unsigned int aState) Line 275	C++
> xul.dll!nsDocLoader::OnSecurityChange(nsISupports * aContext, unsigned int aState) Line 1705	C++
> xul.dll!nsDocShell::OnSecurityChange(nsISupports * i_Context, unsigned int state) Line 230	C++
> xul.dll!nsSecureBrowserUIImpl::TellTheWorld(bool showWarning, nsSecureBrowserUIImpl::lockIconState warnSecurityState, nsIRequest * aRequest) Line 1552	C++
> xul.dll!nsSecureBrowserUIImpl::UpdateSecurityState(nsIRequest * aRequest, bool withNewLocation, bool withUpdateStatus, bool withUpdateTooltip) Line 1373	C++
> xul.dll!nsSecureBrowserUIImpl::EvaluateAndUpdateSecurityState(nsIRequest * aRequest, nsISupports * info, bool withNewLocation) Line 585	C++
> xul.dll!nsSecureBrowserUIImpl::OnLocationChange(nsIWebProgress * aWebProgress, nsIRequest * aRequest, nsIURI * aLocation, unsigned int aFlags) Line 1651	C++
(In reply to Brandon Sterne (:bsterne) from comment #167)
> So, to be clear, you don't want any blocking, but just a notification that
> is sent to some component that will update the site identity icons? 

Correct.  For now, lets just distinguish between script and display content.  If we could get the API into Firefox 15 (by 6/5/12), that would be great!

I'll update the bug on the UI once it's decided.  Thanks Brandon for your help on this.  We really appreciate it :)
Depends on: 737873
I found a bug in my patch:

The nsMixedContentBlockedEvent takes a context which it uses to grab the document to fire the event at.  Instead I need to make sure I grab the top level document to handle cases where the mixed content is blocked (or allowed) from a subframe.

I also will address Olli and Gavin's feedback, as well as swapping out the infobar for the identify block change.  I'm hoping to have a new patch by tonight or tomorrow.
Comment on attachment 579880 [details] [diff] [review]
browser patch v1

Obsoleting the UI portion of the patch for now, since there is review feedback to address and we're only going to land the mixed-script and mixed-display detection at this stage.
Attachment #579880 - Attachment is obsolete: true
per sec triage removing sec review flag as unnecessary
Hey Brandon,

How is the patch going?  I had one question that came up recently... how does the patch handle downloadable fonts?  Do they fall into the TYPE_STYLESHEET category, and hence block if they are fetched over http?


(In reply to Brandon Sterne (:bsterne) from comment #137)
> To provide a status update, I have a patch that blocks mixed content loads
> of TYPE_SCRIPT, TYPE_STYLESHEET and TYPE_OBJECT.  The patch also blocks
> mixed content downloads before we create an external app handler, but does
> NOT prevent the request from going out, though, since we can't differentiate
> a download from a link click until we receive the response.
>
Hi Tanvi,

The patch is coming along well.  I'll have a new patch by this weekend hopefully.  My patch handles TYPE_FONT as mixed display, but I'm willing to take direction on how to handle that case.

I went through nsIContentPolicy.idl and attempted to divide the world into mixed script vs. mixed display.  Here's how my patch currently divides things:

Mixed script: TYPE_SCRIPT, TYPE_XMLHTTPREQUEST, TYPE_STYLESHEET, TYPE_OBJECT

Mixed display: TYPE_IMAGE, TYPE_SUBDOCUMENT, TYPE_PING, TYPE_FONT, TYPE_MEDIA, TYPE_WEBSOCKET

Necko already blocks mixed websockets, so that case is probably redundant, but I didn't want people to wonder why it wasn't explicitly handled.

Some load types, like TYPE_XBL and TYPE_REFRESH, didn't appear to make sense in this context, so I ignored them.

Patch coming soon!
(In reply to Brandon Sterne (:bsterne) from comment #174)
> Mixed script: TYPE_SCRIPT, TYPE_XMLHTTPREQUEST, TYPE_STYLESHEET, TYPE_OBJECT
> 
> Mixed display: TYPE_IMAGE, TYPE_SUBDOCUMENT, TYPE_PING, TYPE_FONT,
> TYPE_MEDIA, TYPE_WEBSOCKET
> 
> Necko already blocks mixed websockets, so that case is probably redundant,
> but I didn't want people to wonder why it wasn't explicitly handled.
> 

Hey Brandon,
Thanks for the info!  The above looks okay with a couple changes:

TYPE_SUBDOCUMENT - should be MixedScript because it could contain references to scripts and contains inline scripts.

TYPE_WEBSOCKET - should be MixedScript (it should be with XHR).

TYPE_FONT - Fonts may have scripting in them, but they aren't able to run in the context of the page.  So they are okay where they are as mixed display.
(In reply to Tanvi Vyas from comment #175)
> TYPE_FONT - Fonts may have scripting in them, but they aren't able to run in
> the context of the page.  So the are okay where they are as mixed display.

I suggest (again) that we treat TYPE_FONT as script unless/until it is shown to be problematic. Cross-origin HTTPS->HTTP webfonts should be very rare as webfonts are relatively new.
(In reply to Tanvi Vyas from comment #175)
> TYPE_SUBDOCUMENT - should be MixedScript because it could contain references
> to scripts and contains inline scripts.

Are you sure? TYPE_SUBDOCUMENT is an iframe, so it may contain external references to script files or inline script itself, but if the iframe is mixed content, shouldn't it be in a different origin than the parent page since the protocol is different?

Also, WebKit treats iframes as mixed display, so I figured it would be good to try to be consistent with their behavior.

> TYPE_WEBSOCKET - should be MixedScript (it should be with XHR).

Sounds good. I'll make that change.

> TYPE_FONT - Fonts may have scripting in them, but they aren't able to run in
> the context of the page.  So the are okay where they are as mixed display.

I'm happy to move this to mixed script as well per Brian's request, but that would be another difference between the Gecko and WebKit implementation.  I'll defer to you guys.
(In reply to Brandon Sterne (:bsterne) from comment #177)
> (In reply to Tanvi Vyas from comment #175)
> > TYPE_SUBDOCUMENT - should be MixedScript because it could contain references
> > to scripts and contains inline scripts.
> 
> Are you sure? TYPE_SUBDOCUMENT is an iframe, so it may contain external
> references to script files or inline script itself, but if the iframe is
> mixed content, shouldn't it be in a different origin than the parent page
> since the protocol is different?

The subdocument can affect the content in the parent document if they communicate via postMessage; there's a transitive effect on the parent document from scripts on the subdocument.  I also think that on sites where the outer does mixed content safely, but the inner frame doesn't (and the inner frame is where a user does sign-in or something) there's a problem.  I think if we want to represent a page as "not mixed script" it needs to be "not mixed script" all the way down the document tree.

Fonts: I think parity with Chrome is fine here.  What's the worst that can be done with font scripting?  Is it just changing the font's display characteristics?
(In reply to Sid Stamm [:geekboy] from comment #178)
> What's the worst that can be done with font scripting?

Replace text?
(In reply to Brandon Sterne (:bsterne) from comment #177)
> (In reply to Tanvi Vyas from comment #175)
> > TYPE_SUBDOCUMENT - should be MixedScript because it could contain references
> > to scripts and contains inline scripts.
> 
> Are you sure? TYPE_SUBDOCUMENT is an iframe, so it may contain external
> references to script files or inline script itself, but if the iframe is
> mixed content, shouldn't it be in a different origin than the parent page
> since the protocol is different?
> 
> Also, WebKit treats iframes as mixed display, so I figured it would be good
> to try to be consistent with their behavior.

IE treats iframes as mixed script and blocks them.  We have to make a decision here on whether we want to block frames or not.  dveditz?
I'm very close here.  The only piece I haven't figured out is how to trigger a security state change in the identity block.

@Jared, given a document that the mixed content got blocked in, how, in browser.js, can I get to the relevant XULBrowserWindow?  Answer that and I'll have the patch.  Thanks.

@Tanvi, as far as the iframes go, I was compelled by your and Sid's argument for treating them as script, so I moved them to that group.  I also moved fonts to mixed script as well per Brian's argument.
AFAICT, there is only one XULBrowserWindow. At what place within browser.js are you trying to update the securityUI? I've been looking at browser.js, and I don't think there is a preexisting way to update the security UI for a browser within there.

I think it is mainly done through the onSecurityChange notification. Can you use the callstack that was posted in comment #168?
I did try to make use of the callstack you provided in comment 168, so thanks for that.  I'm a bit lost, though, because that callstack is initiated by a (presumed) navigation, which calls nsSecureBrowserUIImpl::OnLocationChange.  From there, the security state of the webpage is evaluated and the identity block is updated if there is mixed content or other relevant changes to the security state.

In my case, I am monitoring for mixed content loads, and I do differentiate between mixed script and mixed display, and I can either let the load go through or block it based on the user's pref for that type of content.

If I let the load go through, the right thing already happens and the identity block gets updated with the warning icon for mixed content.  I am stuck, though, in the blocking case.  Even if I call the stack used in the navigation case, e.g. EvaluateAndUpdateSecurityState, the security state is still secure, since no mixed content loads occurred.

So, it appears that I need to find a way to call nsDocLoader::OnSecurityChange directly, and I'm having trouble figuring out how to do that from within the content policy.  How is nsDocLoader related to nsDocShell/nsIDocShell?  Does this line imply that nsDocShell "implements" nsISecurityEventSink by forwarding those method calls to nsDocLoader:
http://hg.mozilla.org/mozilla-central/diff/9b2a99adc05e/docshell/base/nsDocShell.h

I think I've been staring at this too hard for too long.  Perhaps someone can give me a clue...
> How is nsDocLoader related to nsDocShell/nsIDocShell? 

nsDocShell inherits from nsDocLoader.  But there are some nsDocLoaders that are not nsDocShells (well, there is one per process: the root docloader).
Attached patch content patch v2 (obsolete) — — Splinter Review
Okay, new patch.  Here's a summary of what this patch does:

1. Adds two new prefs, both set to false by default (no blocking):
       security.mixed_content.block_active_content
       security.mixed_content.block_display_content
2. Defines mixed script as: fonts, plugin content, scripts, stylesheets,
   iframes, websockets, XHR
3. Defines mixed display as: images, audio, video, <a ping>
4. Adds a new nsIContentPolicy implementation, nsMixedContentBlocker, which, if
   the preceding prefs are enabled, prevents mixed content loads of mixed
   script or mixed display
5. If a mixed content load is prevented, it updates the site identity block to
   STATE_IS_BROKEN, which is the "caution sign" icon.

Note that for #5, this is the same outcome for mixed-script-blocked as mixed-display-blocked.  One of Tanvi's comments indicated we'd want a less harsh outcome for blocked-mixed-display, but I wasn't able to confirm this for myself because without my patch, we seem to be breaking the lock icon already for a mixed script or a mixed image, for example.  I didn't want my patch to deharshify the latter case unless someone explicitly tells me that's what we want.
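The script/display split in this summary can be sketched in a standalone way (type names mirror nsIContentPolicy constants, but the enum values and function are illustrative assumptions, not Gecko's actual definitions):

```cpp
#include <cassert>

// Illustrative stand-ins for the nsIContentPolicy type constants.
enum ContentType {
    TYPE_SCRIPT, TYPE_IMAGE, TYPE_STYLESHEET, TYPE_OBJECT, TYPE_SUBDOCUMENT,
    TYPE_PING, TYPE_XMLHTTPREQUEST, TYPE_FONT, TYPE_MEDIA, TYPE_WEBSOCKET
};

enum MixedClass { MIXED_SCRIPT, MIXED_DISPLAY };

// Sketch of the classification described in this patch summary.
MixedClass Classify(ContentType t) {
    switch (t) {
        case TYPE_SCRIPT:
        case TYPE_STYLESHEET:
        case TYPE_OBJECT:
        case TYPE_SUBDOCUMENT:
        case TYPE_XMLHTTPREQUEST:
        case TYPE_FONT:
        case TYPE_WEBSOCKET:
            return MIXED_SCRIPT;   // active content: can run in or affect the page
        default:
            return MIXED_DISPLAY;  // passive content: images, audio, video, <a ping>
    }
}
```

Each class is then gated by its own pref (block_active_content / block_display_content), so the two categories can be blocked independently.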

This is a much simpler patch than previous versions (no browser.js changes).  All of Olli's review comments from comment 144 have been fixed, other than:

(In reply to Olli Pettay [:smaug] from comment #144)
> >+    // Get the scheme of the sub-document resource to be requested. If it is
> >+    // not an HTTP load then mixed content doesn't apply.
> What about FTP?

We decided to punt on this case (see comment 165).  I think we're close... I hope we can land this soon!
Attachment #579878 - Attachment is obsolete: true
(In reply to Brandon Sterne (:bsterne) from comment #185)
> 3. Defines mixed display as: images, audio, video, <a ping>

We can do it in a separate bug, but why not simply skip doing the ping for "<a ping>" if it is an HTTPS context and the ping URL isn't HTTPS?

(Sorry I didn't notice this earlier; I didn't realize what TYPE_PING meant.)
Attachment #634588 - Flags: review?(bugs)
Comment on attachment 634588 [details] [diff] [review]
content patch v2

>--- /dev/null
>+++ b/content/base/src/nsMixedContentBlocker.cpp
>@@ -0,0 +1,219 @@
>+/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
>+/* ***** BEGIN LICENSE BLOCK *****
>+ * Version: MPL 1.1/GPL 2.0/LGPL 2.1
Please use the new MPL license block.



>+
>+#include "nsMixedContentBlocker.h"
>+#include "nsContentPolicyUtils.h"
>+
>+#include "nsINode.h"
>+#include "nsCOMPtr.h"
>+#include "nsIDocShell.h"
>+#include "nsISecurityEventSink.h"
>+#include "nsIWebProgressListener.h"
>+#include "nsContentUtils.h"
>+#include "mozilla/Preferences.h"
Nit, I think nsContentUtils.h includes nsINode.h


>+
>+using namespace mozilla;
>+
>+// Is mixed script blocking (fonts, plugin content, scripts, stylesheets,
>+// iframes, websockets, XHR) enabled?
>+bool nsMixedContentBlocker::sBlockMixedScript = true;
>+
>+// Is mixed display content blocking (images, audio, video, <a ping>) enabled?
>+bool nsMixedContentBlocker::sBlockMixedDisplay = true;
So, by default each XUL app gets true for these two prefs, but in Firefox they are set to false.
That feels odd. I think you should use false here, not true.



>+
>+// Fired at the document that attempted to load mixed content.  The UI could
>+// handle this event, for example, by displaying an info bar that offers the
>+// choice to reload the page with mixed content permitted.
>+class nsMixedContentBlockedEvent : public nsRunnable
>+{
>+public:
>+  nsMixedContentBlockedEvent(nsISupports *aContext)
nsISupports*

>+    : mContext(aContext)
>+  {}
>+
>+  NS_IMETHOD Run() {
Nit, { should be on its own line.


>+nsMixedContentBlocker::ShouldLoad(PRUint32 aContentType,
>+                                  nsIURI *aContentLocation,
>+                                  nsIURI *aRequestingLocation,
>+                                  nsISupports *aRequestingContext,
>+                                  const nsACString &aMimeGuess,
>+                                  nsISupports *aExtra,
>+                                  PRInt16 *aDecision)
Nit, * and & go with the type, not the parameter name. nsIURI* etc.




>+  // We need aRequestingLocation to pull out the scheme. If it isn't passed
>+  // in, get it from the DOM node.
>+  if (!aRequestingLocation) {
>+    nsCOMPtr<nsIDocument> doc;
>+    nsCOMPtr<nsINode> node = do_QueryInterface(aRequestingContext);
>+    if (node) {
>+      doc = node->OwnerDoc();
>+    }
>+    if (doc) {
>+      aRequestingLocation = doc->GetDocumentURI();
>+    }
No need for nsCOMPtr<nsIDocument> doc;
  nsCOMPtr<nsINode> node = do_QueryInterface(aRequestingContext);
  if (node) {
    aRequestingLocation = node->OwnerDoc()->GetDocumentURI();
  }




>+    // If we still don't have a requesting location then we can't tell if
>+    // this is a mixed content load.  Allow it.
>+    if (!aRequestingLocation) {
>+      return NS_OK;
>+    }
In which case can this happen? Shouldn't we deny, not allow?


>+  }
>+
>+  // Check the parent scheme. If it is not an HTTPS page then mixed content
>+  // restrictions do not apply.
>+  bool parentIsHttps;
>+  if (NS_FAILED(aRequestingLocation->SchemeIs("https", &parentIsHttps)) ||
>+      !parentIsHttps) {
>+    return NS_OK;
>+  }
>+
>+  // Get the scheme of the sub-document resource to be requested. If it is
>+  // not an HTTP load then mixed content doesn't apply.
>+  bool isHttp;
>+  if (NS_FAILED(aContentLocation->SchemeIs("http", &isHttp)) || !isHttp) {
>+    return NS_OK;
>+  }
Shouldn't you check for https here and return NS_OK if the scheme is https?
Or am I missing something here.

>--- /dev/null
>+++ b/content/base/src/nsMixedContentBlocker.h
>@@ -0,0 +1,60 @@
>+/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */
>+/* ***** BEGIN LICENSE BLOCK *****
>+ * Version: MPL 1.1/GPL 2.0/LGPL 2.1
New license, please
Attachment #634588 - Flags: review?(bugs) → review-
Attached patch content patch v3 (obsolete) — — Splinter Review
(In reply to Olli Pettay [:smaug] from comment #188)
> Please use the new MPL license block.

Fixed.

> Nit, I think nsContentUtils.h includes nsINode.h

Fixed.

> So, by default each XUL app gets true for these two prefs, but in Firefox
> they are set to false.
> That feels odd. I think you should use false here, not true.

Yep, fixed.

> >+  NS_IMETHOD Run() {
> Nit, { should be on its own line.

Fixed.

> >+nsMixedContentBlocker::ShouldLoad(PRUint32 aContentType,
> >+                                  nsIURI *aContentLocation,
> >+                                  nsIURI *aRequestingLocation,
> >+                                  nsISupports *aRequestingContext,
> >+                                  const nsACString &aMimeGuess,
> >+                                  nsISupports *aExtra,
> >+                                  PRInt16 *aDecision)
> Nit, * and & go with the type, not the parameter name. nsIURI* etc.

Are you sure? Every other contentpolicy impl uses this style:
http://mxr.mozilla.org/mozilla-central/search?string=::shouldload%28

I'm happy to make the change, though, if you prefer.

> No need for nsCOMPtr<nsIDocument> doc;
>   nsCOMPtr<nsINode> node = do_QueryInterface(aRequestingContext);
>   if (node) {
>     aRequestingLocation = node->OwnerDoc()->GetDocumentURI();
>   }

Fixed.

> >+    // If we still don't have a requesting location then we can't tell if
> >+    // this is a mixed content load.  Allow it.
> >+    if (!aRequestingLocation) {
> >+      return NS_OK;
> >+    }
> In which case can this happen? Shouldn't we deny, not allow?

I'm not sure what case would cause this, but the very next thing we do is to check aRequestingLocation's scheme, so I figured we need it to proceed.  I do agree that not having it should deny the load, so I did change that part.

> >+  // Get the scheme of the sub-document resource to be requested. If it is
> >+  // not an HTTP load then mixed content doesn't apply.
> >+  bool isHttp;
> >+  if (NS_FAILED(aContentLocation->SchemeIs("http", &isHttp)) || !isHttp) {
> >+    return NS_OK;
> >+  }
> Shouldn't you check for https here and return NS_OK if the scheme is https?
> Or am I missing something here.

Yes, you're right. I was too narrowly defining mixed content as HTTP load underneath HTTPS page, but any non-HTTPS load is mixed content if we've gotten to this check.  Fixed.
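The difference between the two checks can be shown with a pair of hypothetical helpers (assuming the parent page is already known to be HTTPS; these are illustrative, not the SchemeIs-based code in the patch):

```cpp
#include <string>

// Original (too narrow): only flags http:// subloads, so other
// insecure schemes like ftp:// under an HTTPS page slip through.
bool IsMixedNarrow(const std::string& contentScheme)
{
  return contentScheme == "http";
}

// Fixed (what the review asked for): once the parent is https,
// anything that is not https is mixed content.
bool IsMixedFixed(const std::string& contentScheme)
{
  return contentScheme != "https";
}
```

The FTP case from comment 144 is exactly the gap: the narrow check misses it, the fixed check catches it.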

> >+++ b/content/base/src/nsMixedContentBlocker.h
> New license, please

Fixed.
Attachment #634588 - Attachment is obsolete: true
Attachment #638263 - Flags: review?(bugs)
Olli - can you review this when you have a chance.  We'd like to try and get it in FF16.

Brandon - do we need any mochitests to go along with this patch?
Yes, we definitely want mochitests for this change.  I've had it on my todo list for a while and haven't had the time to write the tests as of yet.  I will try very hard to make this happen soon.
Comment on attachment 638263 [details] [diff] [review]
content patch v3


>+  if (!aRequestingLocation) {
>+    nsCOMPtr<nsINode> node = do_QueryInterface(aRequestingContext);
>+    if (node) {
>+      aRequestingLocation = node->OwnerDoc()->GetDocumentURI();
>+    }
Actually, which URI should we use here? DocumentURI or Principal URI?
I think Principal URI. node->NodePrincipal()->GetURI(...);
Attachment #638263 - Flags: review?(bugs) → review-
Keywords: sec-want
Attached patch content patch v4 (obsolete) — — Splinter Review
(In reply to Olli Pettay [:smaug] from comment #192)

> >+    if (node) {
> >+      aRequestingLocation = node->OwnerDoc()->GetDocumentURI();
> >+    }
> Actually, which URI should we use here? DocumentURI or Principal URI?
> I think Principal URI. node->NodePrincipal()->GetURI(...);

Done.
Attachment #638263 - Attachment is obsolete: true
Attachment #642398 - Flags: review?(bugs)
Attached patch content patch v5 (obsolete) — — Splinter Review
Minor update to account for the new nsIContentPolicy signatures (methods take an nsIPrincipal now).
Attachment #642398 - Attachment is obsolete: true
Attachment #642398 - Flags: review?(bugs)
Attachment #642435 - Flags: review?(bugs)
Attached patch content patch v6 (obsolete) — — Splinter Review
Missed one of the new nsIContentPolicy method signatures.  This patch should be good to go.  Pushed to try:
https://tbpl.mozilla.org/?tree=Try&rev=c1c69aff21fe
Attachment #642435 - Attachment is obsolete: true
Attachment #642435 - Flags: review?(bugs)
Attachment #642458 - Flags: review?(bugs)
Comment on attachment 642458 [details] [diff] [review]
content patch v6


>+nsMixedContentBlocker::ShouldLoad(PRUint32 aContentType,
>+                                  nsIURI *aContentLocation,
>+                                  nsIURI *aRequestingLocation,
>+                                  nsISupports *aRequestingContext,
>+                                  const nsACString &aMimeGuess,
>+                                  nsISupports *aExtra,
>+                                  nsIPrincipal *aRequestPrincipal,
>+                                  PRInt16 *aDecision)
Nit, * should go with the type, not parameter name.
nsIURI* aContentLocation etc.



>+  // Get the scheme of the sub-document resource to be requested. If it is
>+  // an HTTPS load then mixed content doesn't apply.
>+  bool isHttps;
>+  if (NS_FAILED(aContentLocation->SchemeIs("https", &isHttps)) || isHttps) {
>+    return NS_OK;
>+  }
>+
>+  // If we are here we have mixed content.
>+
>+  // Decide whether or not to allow the mixed content based on what type of
>+  // content it is and if the user permitted it.
>+  switch (aContentType) {
>+  case nsIContentPolicy::TYPE_FONT:
case should be indented.
Attachment #642458 - Flags: review?(bugs) → review+
Tests are still missing.
(In reply to Olli Pettay [:smaug] from comment #196)
> >+nsMixedContentBlocker::ShouldLoad(PRUint32 aContentType,
> >+                                  nsIURI *aContentLocation,
> >+                                  nsIURI *aRequestingLocation,
> >+                                  nsISupports *aRequestingContext,
> >+                                  const nsACString &aMimeGuess,
> >+                                  nsISupports *aExtra,
> >+                                  nsIPrincipal *aRequestPrincipal,
> >+                                  PRInt16 *aDecision)
> Nit, * should go with the type, not parameter name.
> nsIURI* aContentLocation etc.

I questioned this before.  In all of the other nsIContentPolicy impls, the * is with the parameter name.  Are you sure you prefer it this way?

> case should be indented.

Will do.

(In reply to Olli Pettay [:smaug] from comment #197)
> Tests are still missing.

Yes, I'll write the tests this week and will get review on those as well before I land anything.  Thanks for all your time, Olli!
(In reply to Brandon Sterne (:bsterne) from comment #198)
> I questioned this before.  In all of the other nsIContentPolicy impls, the *
> is with the parameter name.  Are you sure you prefer it this way?
Well, that is the coding style. New code should follow that.
(In reply to Olli Pettay [:smaug] from comment #199)
> (In reply to Brandon Sterne (:bsterne) from comment #198)
> > I questioned this before.  In all of the other nsIContentPolicy impls, the *
> > is with the parameter name.  Are you sure you prefer it this way?
> Well, that is the coding style. New code should follow that.

Will do.  I'll also switch the patch to indent the switch case lines, which I forgot to update in the last patch.
Blocks: 776278
Attached patch WIP tests (obsolete) — — Splinter Review
Tests for mixed active content and mixed display content blocking.
Aiming to land this patch and tests in time for Firefox 17.  Note this patch adds 2 prefs to about:config where a user can decide to block mixed script content or mixed display content.  The pref is off by default.

We will work on the UI for automatically blocking mixed script content in a separate new bug after this bug lands.  I hope to get that work landed in time for Firefox 18.
Target Milestone: --- → mozilla17
I found a bug in the patch.  I built the patch and set security.mixed_content.block_active_content  to true.  I went to https://people.mozilla.com/~bsterne/tests/62178/test.html.  The mixed content was blocked, but the lock icon still changes into the globe and the site identity box says the connection is partially encrypted.  I'm going to try and figure out where this change happens and how to block the change, since we've blocked the mixed content.
(In reply to Tanvi Vyas from comment #204)
> I found a bug in the patch.  I built the patch and set
> security.mixed_content.block_active_content  to true.  I went to
> https://people.mozilla.com/~bsterne/tests/62178/test.html.  The mixed
> content was blocked, but the lock icon still changes into the globe and the
> site identity box says the connection is partially encrypted.  I'm going to
> try and figure out where this change happens and how to block the change,
> since we've blocked the mixed content.

The page's security state is changed here: https://mxr.mozilla.org/mozilla-central/source/security/manager/boot/src/nsSecureBrowserUIImpl.cpp in a call to UpdateSecurityState().  Will take a closer look and figure out what we should change.
(In reply to Tanvi Vyas from comment #204)
> I found a bug in the patch.  I built the patch and set
> security.mixed_content.block_active_content  to true.  I went to
> https://people.mozilla.com/~bsterne/tests/62178/test.html.  The mixed
> content was blocked, but the lock icon still changes into the globe and the
> site identity box says the connection is partially encrypted.  I'm going to
> try and figure out where this change happens and how to block the change,
> since we've blocked the mixed content.

I explicitly made this change because that's what I thought you were asking for based on this exchange:

(In reply to Tanvi Vyas from comment #169)
> (In reply to Brandon Sterne (:bsterne) from comment #167)
> > So, to be clear, you don't want any blocking, but just a notification that
> > is sent to some component that will update the site identity icons? 
> 
> Correct.  For now, lets just distinguish between script and display content.
> If we could get the API into Firefox 15 (by 6/5/12), that would be great!

If that's not what we want, it's very easy to remove that part of the patch.  But I'm still confused as to what you meant, then, by:

(In reply to Tanvi Vyas from comment #166)
> In the meantime, is there a way you can land the part of your code that
> distinguishes between mixed script content and mixed display content?  That
> way, we can at least have site identity icons that distinguish the
> difference (see https://bugzilla.mozilla.org/show_bug.cgi?id=747090#c11).
Blocks: 781018
(In reply to Brandon Sterne (:bsterne) from comment #206)
> I explicitly made this change because that's what I thought you were asking
> for based on this exchange:
> 
> (In reply to Tanvi Vyas from comment #169)
> > (In reply to Brandon Sterne (:bsterne) from comment #167)
> > > So, to be clear, you don't want any blocking, but just a notification that
> > > is sent to some component that will update the site identity icons? 
> > 
> > Correct.  For now, lets just distinguish between script and display content.
> > If we could get the API into Firefox 15 (by 6/5/12), that would be great!
> 
> If that's not what we want, it's very easy to remove that part of the patch.
> But I'm still confused as to what you meant, then, by:
> 
> (In reply to Tanvi Vyas from comment #166)
> > In the meantime, is there a way you can land the part of your code that
> > distinguishes between mixed script content and mixed display content?  That
> > way, we can at least have site identity icons that distinguish the
> > difference (see https://bugzilla.mozilla.org/show_bug.cgi?id=747090#c11).

Hi Brandon, you are right.  We just wanted to get the plumbing in for this patch without any associated blocking UI.  That way, a notification could be sent to update the site identity icons for the mixed script and mixed display cases.  Since you had set about:config settings to turn mixed display and script content off, I wanted to try that out.  When the content is blocked (for the few users that find the about:config option and set it), the lock icon shouldn't change to the globe since there is no mixed content on the page anymore.  I can try and make this change in a separate bug.

For the site identity icons and UI for blocking, that is still under discussion (I'm working on this with Larissa Co from UX) but we are getting close.

Perhaps we should have a quick chat to sync up.  I'll send you an email and we can set up a time.

Sorry for the confusion Brandon!  And thank you for your help!!
I think it is better to keep the code. One thing I would suggest is changing the value you are sending from nsIWebProgressListener::STATE_IS_BROKEN to a new value for mixed content blocking. I think this would make bug 781018 really easy. Further, if I am not wrong, this would also mean that (right now) it gets ignored without changing the icon. Also, this fits the current UI plan of having a different UI for mixed content blocking (separate from the STATE_IS_BROKEN UI).
(In reply to Tanvi Vyas from comment #207)

Ah, okay that makes total sense.  I'm happy to remove the OnSecurityChange call, i.e. turning Run() into a no-op for now.  Or, if you'd like me to follow Dev's suggestion in comment 208, I can do that as well... I'm at your disposal.
Comment on attachment 642458 [details] [diff] [review]
content patch v6

Both are plausible options, but I think removing the OnSecurityChange call is the simplest change.

If we kept the OnSecurityChange call, there would be more changes to the code.  Even with these code changes, telemetry data wouldn't be great.  We would only get data from users who have turned off mixed display and mixed script in FF17 (small amount).  In FF18 (where we hope that mixed script is turned off by default), we would get data about the percentage of SSL pages that encounter blocked mixed script, but for mixed display we would only have data from users who go to about:config and turn that off.  On the other hand, we are going to have to make some of these changes anyway once we figure out the UI.  Here are some of the things that would be changed...

>diff --git a/content/base/src/nsMixedContentBlocker.cpp b/content/base/src/nsMixedContentBlocker.cpp
>new file mode 100644
>--- /dev/null
>+++ b/content/base/src/nsMixedContentBlocker.cpp

>+class nsMixedContentBlockedEvent : public nsRunnable
>+{
>+public:
>+  nsMixedContentBlockedEvent(nsISupports *aContext)
>+    : mContext(aContext)
>+  {}
In the Blocked Event, we should add a parameter specifying whether we are blocking mixed display or mixed script.  The state in OnSecurityChange() would depend on this to determine whether we should request a state of STATE_IS_MIXED_DISPLAY or STATE_IS_MIXED_SCRIPT.  Another option is to make these global variables.
bool mixedDisplay = false;
bool mixedScript = false;

>+
>+  NS_IMETHOD Run()
>+  {
>+    NS_ASSERTION(mContext,
>+                 "You can't call this runnable without a requesting context");
>+
>+    // Update the security UI in the tab with the blocked mixed content
>+    nsCOMPtr<nsIDocShell> docShell = NS_CP_GetDocShellFromContext(mContext);
>+    nsCOMPtr<nsISecurityEventSink> eventSink = do_QueryInterface(docShell);
>+    if (eventSink) {
>+      eventSink->OnSecurityChange(mContext,
>+                                  nsIWebProgressListener::STATE_IS_BROKEN);
>+    }
>+    return NS_OK;
>+  }

Return different states for mixed display and script:

    if (eventSink && mixedDisplay) {
      eventSink->OnSecurityChange(mContext,
                                  nsIWebProgressListener::STATE_IS_MIXED_DISPLAY);
    }


    if (eventSink && mixedScript) {
      eventSink->OnSecurityChange(mContext,
                                  nsIWebProgressListener::STATE_IS_MIXED_SCRIPT);
    }




>+private:
>+  // The requesting context for the content load. Generally, a DOM node from
>+  // the document that caused the load.
>+  nsCOMPtr<nsISupports> mContext;
>+};
>+
>+nsMixedContentBlocker::nsMixedContentBlocker()
>+{
>+  // Cache the pref for mixed script blocking
>+  Preferences::AddBoolVarCache(&sBlockMixedScript,
>+                               "security.mixed_content.block_active_content");
>+
>+  // Cache the pref for mixed display blocking
>+  Preferences::AddBoolVarCache(&sBlockMixedDisplay,
>+                               "security.mixed_content.block_display_content");
>+}
>+
>+nsMixedContentBlocker::~nsMixedContentBlocker()
>+{
>+}
>+
>+NS_IMPL_ISUPPORTS1(nsMixedContentBlocker, nsIContentPolicy)
>+
>+NS_IMETHODIMP
>+nsMixedContentBlocker::ShouldLoad(PRUint32 aContentType,
>+                                  nsIURI *aContentLocation,
>+                                  nsIURI *aRequestingLocation,
>+                                  nsISupports *aRequestingContext,
>+                                  const nsACString &aMimeGuess,
>+                                  nsISupports *aExtra,
>+                                  nsIPrincipal *aRequestPrincipal,
>+                                  PRInt16 *aDecision)
>+{
>+  // Default policy: allow the load if we find no reason to block it.
>+  *aDecision = nsIContentPolicy::ACCEPT;
>+
>+  // If mixed script blocking and mixed display blocking are turned off
>+  // we can return early
>+  if (!sBlockMixedScript && !sBlockMixedDisplay) {
>+    return NS_OK;
>+  }
>+
>+  // Top-level load cannot be mixed content so allow it
>+  if (aContentType == nsIContentPolicy::TYPE_DOCUMENT) {
>+    return NS_OK;
>+  }
>+
>+  // We need aRequestingLocation to pull out the scheme. If it isn't passed
>+  // in, get it from the DOM node.
>+  if (!aRequestingLocation) {
>+    nsCOMPtr<nsINode> node = do_QueryInterface(aRequestingContext);
>+    if (node) {
>+      nsCOMPtr<nsIURI> principalUri;
>+      node->NodePrincipal()->GetURI(getter_AddRefs(principalUri));
>+      aRequestingLocation = principalUri;
>+    }
>+    // If we still don't have a requesting location then we can't tell if
>+    // this is a mixed content load.  Deny to be safe.
>+    if (!aRequestingLocation) {
>+      *aDecision = nsIContentPolicy::REJECT_REQUEST;
>+      return NS_OK;
>+    }
>+  }
>+
>+  // Check the parent scheme. If it is not an HTTPS page then mixed content
>+  // restrictions do not apply.
>+  bool parentIsHttps;
>+  if (NS_FAILED(aRequestingLocation->SchemeIs("https", &parentIsHttps)) ||
>+      !parentIsHttps) {
>+    return NS_OK;
>+  }
>+
>+  // Get the scheme of the sub-document resource to be requested. If it is
>+  // an HTTPS load then mixed content doesn't apply.
>+  bool isHttps;
>+  if (NS_FAILED(aContentLocation->SchemeIs("https", &isHttps)) || isHttps) {
>+    return NS_OK;
>+  }
>+
>+  // If we are here we have mixed content.
>+
>+  // Decide whether or not to allow the mixed content based on what type of
>+  // content it is and if the user permitted it.
>+  switch (aContentType) {
>+  case nsIContentPolicy::TYPE_FONT:
>+  case nsIContentPolicy::TYPE_OBJECT:
>+  case nsIContentPolicy::TYPE_SCRIPT:
>+  case nsIContentPolicy::TYPE_STYLESHEET:
>+  case nsIContentPolicy::TYPE_SUBDOCUMENT:
>+  case nsIContentPolicy::TYPE_WEBSOCKET:
>+  case nsIContentPolicy::TYPE_XMLHTTPREQUEST:
>+    // fonts, plugin content, scripts, stylesheets, iframes, websockets and
>+    // XHRs are considered high risk for mixed content so these are blocked
>+    // per the mixed script preference

      mixedScript = true;

>+    if (sBlockMixedScript) {
>+      *aDecision = nsIContentPolicy::REJECT_REQUEST;
>+
>+      // Fire the event from a script runner as it is unsafe to run script
>+      // from within ShouldLoad
>+      nsContentUtils::AddScriptRunner(
>+        new nsMixedContentBlockedEvent(aRequestingContext));
>+    }
>+    break;
>+
>+  case nsIContentPolicy::TYPE_IMAGE:
>+  case nsIContentPolicy::TYPE_MEDIA:
>+  case nsIContentPolicy::TYPE_PING:
>+    // display (static) content are considered moderate risk for mixed content
>+    // so these will be blocked according to the mixed display preference

      mixedDisplay = true;

>+    if (sBlockMixedDisplay) {
>+      *aDecision = nsIContentPolicy::REJECT_REQUEST;
>+
>+      // Fire the event from a script runner as it is unsafe to run script
>+      // from within ShouldLoad
>+      nsContentUtils::AddScriptRunner(
>+        new nsMixedContentBlockedEvent(aRequestingContext));
>+    }
>+
>+    break;
>+
>+  default:
>+    // other types of mixed content are allowed
>+    break;
>+  }
>+
>+  return NS_OK;
>+}
>+
Just chatted with Brandon and we decided to make the following changes:

* Comment out the OnSecurityChange() call in Run().  Brandon will include some comments describing how we can add code to this section when we implement the blocking UI.

* Add a parameter to nsMixedContentBlockedEvent() that indicates whether the content was display content or active content.  In the future, we can use this information to determine what site identity icons to display, among other things.

This is going to make the telemetry patch a bit more difficult, but I'd rather land this patch sooner than later, and extend it in future bugs.
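The agreed shape of the event, with its new type parameter and a no-op Run(), might look roughly like this. This is a simplified standalone sketch: the real class inherits nsRunnable, and the STATE_IS_MIXED_* constants referenced in the comment do not exist yet (they are the bug 781018 / UI-bug follow-up):

```cpp
// Hypothetical names mirroring the planned change, for illustration only.
enum MixedContentTypes { eMixedScript, eMixedDisplay };

class MixedContentBlockedEventSketch
{
public:
  explicit MixedContentBlockedEventSketch(MixedContentTypes aType)
    : mType(aType) {}

  // For now a no-op, mirroring the patch: the OnSecurityChange call is
  // deferred until the blocking UI exists. Future code could do:
  //   eventSink->OnSecurityChange(context,
  //     mType == eMixedScript ? STATE_IS_MIXED_SCRIPT
  //                           : STATE_IS_MIXED_DISPLAY);
  int Run()
  {
    return 0; // stands in for NS_OK
  }

  MixedContentTypes Type() const { return mType; }

private:
  // Whether the blocked load was active (script) or display content.
  MixedContentTypes mType;
};
```

Carrying the type on the event keeps the telemetry and UI follow-ups additive: they only need to fill in Run(), not change the ShouldLoad call sites.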
(In reply to Tanvi Vyas from comment #211)
>
> This is going to make the telemetry patch a bit more difficult, but I'd
> rather land this patch sooner than later, and extend it in future bugs.

couldn't agree more.

and thanks for continuing to push on this, Brandon ! :D
Attached patch content patch v7 (obsolete) — — Splinter Review
Final patch attached.  This patch changes the signature of nsMixedContentBlockedEvent to take a type parameter to distinguish between active and display content blocked.  The type codes are defined in nsMixedContentBlocker.h.  It also removes the OnSecurityChange call, turning the event into a no-op for now, while we wait for the UI indicators to be created for the different states.

The patch also addresses the last two nits: the indentation of the switch cases and moving *'s next to the type.
Attachment #642458 - Attachment is obsolete: true
Attachment #651144 - Flags: review?(bugs)
Comment on attachment 651144 [details] [diff] [review]
content patch v7


>+  NS_IMETHOD Run()
>+  {
>+    NS_ASSERTION(mContext,
>+                 "You can't call this runnable without a requesting context");
>+
>+    // For now, we are leaving MixedContentBlockedEvent as a no-op.
So, we shouldn't even create nsMixedContentBlockedEvent objects if they are no-op.
The code is simple enough that leaving nsMixedContentBlockedEvent commented out in the source
code is ok. File a followup bug to get the UI, and add the bug number near
class nsMixedContentBlockedEvent.



>+// Flags passed to nsMixedContentBlockedEvent describing the content type
>+#define BLOCKED_MIXED_SCRIPT  0
>+#define BLOCKED_MIXED_DISPLAY 1
I'd use enum
Attachment #651144 - Flags: review?(bugs) → review-
(In reply to Olli Pettay [:smaug] from comment #214)
> Comment on attachment 651144 [details] [diff] [review]
> content patch v7
> So, we shouldn't even create nsMixedContentBlockedEvent objects if they are
> no-op.
> The code is simple enough that leaving nsMixedContentBlockedEvent commented
> out in the source
> code is ok. File a followup bug to get the UI, and add the bug number near
> class nsMixedContentBlockedEvent.

Part of my thinking was that addons could handle this event in the interim.

> >+// Flags passed to nsMixedContentBlockedEvent describing the content type
> >+#define BLOCKED_MIXED_SCRIPT  0
> >+#define BLOCKED_MIXED_DISPLAY 1
> I'd use enum

I'm happy to make that change.
(In reply to Brandon Sterne (:bsterne) from comment #215)
> Part of my thinking was that addons could handle this event in the interim.
> 
Sure, but in the patch nsMixedContentBlockedEvent doesn't do anything.
Its Run() is called, but it is no-op
(In reply to Olli Pettay [:smaug] from comment #214)
> >+  NS_IMETHOD Run()
> >+  {
> >+    NS_ASSERTION(mContext,
> >+                 "You can't call this runnable without a requesting context");
> >+
> >+    // For now, we are leaving MixedContentBlockedEvent as a no-op.
> So, we shouldn't even create nsMixedContentBlockedEvent objects if they are
> no-op.
> The code is simple enough that leaving nsMixedContentBlockedEvent commented
> out in the source
> code is ok. File a followup bug to get the UI, and add the bug number near
> class nsMixedContentBlockedEvent.
> 
> 
> 

I filed the followup bug for the Mixed Content Blocking UI - bug 782654.
Attached patch WIP Mochitests (obsolete) — — Splinter Review
Push to try: https://tbpl.mozilla.org/?tree=Try&rev=e40c3f9169a5

Tests for content types iframe, image, stylesheet, object, and script included.  Still need to complete tests for xhr, websockets, fonts, media, and a ping.

Thanks to Sid and Ian for the debugging help!
Attachment #649176 - Attachment is obsolete: true
Attached patch Mochitests for (obsolete) — — Splinter Review
Mochitests for mixed content media, image, iframe, object, stylesheet, and CSS are working.

Mochitest for ping attribute incomplete.  ping is turned off by default, so we would need to set the browser.send_pings preference to true in order to test.  I am not sure how to detect that the ping went through.  Do I simulate a click on the <a> tag and then look for a PING-TO request header sent to file_mixed_content_server.js?

For fonts, load and error events are not detectable in javascript right now, hence we can't test this.
 
XHR requires same origin, so a request from an https page to an http page is blocked regardless of the mixed content blocking setting.

The Websockets spec states - http://www.w3.org/TR/websockets/
If secure is false but the origin of the entry script has a scheme component that is itself a secure protocol, e.g. HTTPS, then throw a SecurityError exception.
Hence, when I try to create a connection to a ws:// from an https page, I get a Security_Error - The operation is insecure.

I have left the code I wrote for Fonts, XHR, and Websockets in the patch but commented out.  I will remove this code soon, but first wanted to check whether anyone believes there is a way to test these.

Requesting feedback, but not sure who to set the feedback? flag to.  Thanks!
Attachment #655151 - Attachment is obsolete: true
Attachment #655485 - Flags: feedback?
Attached patch content patch v8 (obsolete) — — Splinter Review
(In reply to Olli Pettay [:smaug] from comment #214)
> So, we shouldn't even create nsMixedContentBlockedEvent objects if they are
> no-op.
> The code is simple enough that leaving nsMixedContentBlockedEvent commented
> out in the source
> code is ok. File a followup bug to get the UI, and add the bug number near
> class nsMixedContentBlockedEvent.

Done. I reference bug 782654 in the new patch.

> >+// Flags passed to nsMixedContentBlockedEvent describing the content type
> >+#define BLOCKED_MIXED_SCRIPT  0
> >+#define BLOCKED_MIXED_DISPLAY 1
> I'd use enum

Done.
Attachment #651144 - Attachment is obsolete: true
Attachment #655509 - Flags: review?(bugs)
> XHR requires same origin, so a request from an https page to an http pages
> are blocked regardless of the mixed content blocking setting

What about cross-origin XHR (i.e., CORS)?
Attached patch Mochitests (obsolete) — — Splinter Review
Mochitests for mixed content media, image, iframe, object, stylesheet, css, and a ping.

For the ping attribute, looks like we don't follow the spec (will file a bug for that), so I checked the host header on the server side to see if the ping went through, and then had the client check for success after a short timeout.  Not necessarily the best option, but since it is off by default anyway, I thought it might be okay.
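The server-side detection amounts to checking for a Host header on the ping request, roughly like this (a hedged sketch; the helper names are mine, not the patch's, and the fake request object stands in for the .sjs request parameter):

```javascript
// Sketch (assumption): detect that the ping request arrived by looking
// at its Host header, since our <a ping> implementation predates the
// spec'd Ping-From/Ping-To headers. sawPing is a hypothetical name.
function sawPing(request) {
  // request is the .sjs request object, which exposes hasHeader().
  return request.hasHeader("host");
}

// Minimal stand-in for the .sjs request object, for illustration only.
function fakeRequest(headers) {
  return { hasHeader: (name) => name.toLowerCase() in headers };
}
```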

As Ian and Dev point out, I could use CORS for the XHR test.  I'll try that tomorrow.

Adding an r? for Olli.  Thanks!
Attachment #655485 - Attachment is obsolete: true
Attachment #655485 - Flags: feedback?
Attachment #655867 - Flags: review?(bugs)
Attached patch Mochitests (obsolete) — — Splinter Review
Small update to prevent a timeout if the ping_status check testping() is called on the client before setState() has executed on the server.
Attachment #655867 - Attachment is obsolete: true
Attachment #655867 - Flags: review?(bugs)
Attached patch Mochitests (obsolete) — — Splinter Review
Small update to prevent a timeout if the ping_status check testping() is called on the client before setState() has executed on the server.
Attachment #655874 - Attachment is obsolete: true
Attachment #655875 - Flags: review?(bugs)
Comment on attachment 655509 [details] [diff] [review]
content patch v8


>+// Disabled for now until bug 782654 is fixed
>+// class nsMixedContentBlockedEvent : public nsRunnable
For commenting out this class I'd prefer /* */



>+// This enum defines type of content that is blocked when an
>+// nsMixedContentBlockedEvent fires
>+enum MixedContentBlockedTypes {
>+  // "Active" content, such as fonts, plugin content, JavaScript, stylesheets,
>+  // iframes, WebSockets, and XHR
>+  BLOCKED_MIXED_SCRIPT,
>+  // "Display" content, such as images, audio, video, and <a ping>
>+  BLOCKED_MIXED_DISPLAY
>+};
Nit, enum values should be in form
eValueName
Attachment #655509 - Flags: review?(bugs) → review+
Comment on attachment 655875 [details] [diff] [review]
Mochitests

># HG changeset patch
># Parent 79ef5db536e6e1f5fcaeefb6e7bdeb1a8f44e15e
># User Tanvi Vyas <tvyas@mozilla.com>
>Mochitest - iframe, stylesheet, object, script, image, media.
>Ping incomplete.  XHR, Websockets not needed.  Fonts not possible.
>
>diff --git a/content/base/test/Makefile.in b/content/base/test/Makefile.in
>--- a/content/base/test/Makefile.in
>+++ b/content/base/test/Makefile.in
>@@ -550,16 +550,19 @@ MOCHITEST_FILES_B = \
> 		test_bug753278.html \
> 		test_bug761120.html \
> 		test_XHR_onuploadprogress.html \
> 		test_XHR_anon.html \
> 		file_XHR_anon.sjs \
> 		test_XHR_system.html \
> 		test_XHR_parameters.html \
> 		test_ipc_messagemanager_blob.html \
>+		test_mixed_content_blocker.html \
>+		file_mixed_content_main.html \
>+		file_mixed_content_server.sjs \
> 		$(NULL)
> 
> MOCHITEST_CHROME_FILES =	\
> 		test_bug357450.js \
> 		$(NULL)
> 
> MOCHITEST_FILES_PARTS = $(foreach s,A B,MOCHITEST_FILES_$(s))
> 
>diff --git a/content/base/test/file_mixed_content_main.html b/content/base/test/file_mixed_content_main.html
>new file mode 100644
>--- /dev/null
>+++ b/content/base/test/file_mixed_content_main.html
>@@ -0,0 +1,218 @@
>+<!DOCTYPE HTML>
>+<html>
>+<!--
>+Tests for Mixed Content Blocker
>+https://bugzilla.mozilla.org/show_bug.cgi?id=62178
>+-->
>+<head>
>+  <meta charset="utf-8">
>+  <title>Tests for Bug 62178</title>
>+  <script type="application/javascript" src="/tests/SimpleTest/EventUtils.js"></script>
>+</head>
>+<body>
>+<div id="testContent"></div>
>+
>+<!-- types the Mixed Content Blocker can block
>+     /*
>+  switch (aContentType) {
>+  case nsIContentPolicy::TYPE_OBJECT:
>+  case nsIContentPolicy::TYPE_SCRIPT:
>+  case nsIContentPolicy::TYPE_STYLESHEET:
>+  case nsIContentPolicy::TYPE_SUBDOCUMENT:
>+  case nsIContentPolicy::TYPE_FONT:
>+  case nsIContentPolicy::TYPE_WEBSOCKET:
>+  case nsIContentPolicy::TYPE_XMLHTTPREQUEST:
>+
>+  case nsIContentPolicy::TYPE_IMAGE:
>+  case nsIContentPolicy::TYPE_MEDIA:
>+  case nsIContentPolicy::TYPE_PING:
>+  }
>+     */
>+-->
>+
>+<script>
>+var baseUrl = "http://example.com/tests/content/base/test/file_mixed_content_server.sjs";
>+var httpsBaseUrl = "https://example.com/tests/content/base/test/file_mixed_content_server.sjs";
>+//var socketBaseUrl = "ws://example.com/tests/content/base/test/file_mixed_content_server.sjs";
>+
>+var testContent = document.getElementById("testContent");
>+
>+/* Part 1: Mixed Script tests */
>+
>+// Test 1a: insecure object
>+var objects = document.createElement("object");
Why objects and why not object?

>+objects.data = baseUrl + "?type=objects";
>+objects.type = "application/x-test";
>+objects.width = "200";
>+objects.height = "200";
>+
>+testContent.appendChild(objects);
>+
>+function objectStatus(objects) {
>+  objects instanceof Components.interfaces.nsIObjectLoadingContent;
>+
>+  if (objects.displayedType != Components.interfaces.nsIObjectLoadingContent.TYPE_NULL)
>+  {
>+    //object loaded
>+    parent.postMessage({"test": "object", "msg": "insecure object loaded"}, "http://mochi.test:8888");
>+  }
>+  else {
>+    //object didn't load
>+    parent.postMessage({"test": "object", "msg": "insecure object blocked"}, "http://mochi.test:8888");
>+  }
>+}
>+
>+window.setTimeout(objectStatus, 1000, objects);
This looks error prone. Do we not get load or error event? (Based on code inspection we don't)
And if not, please add a comment why we need to use timeout


>+
>+// Test 1b: insecure script
>+var scripts = document.createElement("script");
Why scripts and why not script


>+/* BELOW COMMENTED CODE WILL BE REMOVED AFTER FIRST REVIEW
Er, what? I don't know whether I should actually review something here because it is
commented out, and it isn't clear why it is commented out

>+
>+window.setTimeout(websockStatus, 1000, websock);
Websocket certainly has events. No need to use timers

>+// Test 2c: insecure <a ping>
We support ping? Since when? I thought they are disabled.

>+  if(!allowPing) {
>+    SpecialPowers.setBoolPref("browser.send_pings", true);
>+  }
Ah, you enable ping. But the code we have for ping is for an earlier version of HTML spec.
I'm not sure we should really test it (we should remove the old ping stuff)


r- because I'm not sure what all I should be reviewing, and also because I think there is at least the timer for
websocket which could be avoided. (timers cause often random failures)
Attachment #655875 - Flags: review?(bugs) → review-
Attached patch content patch v9 (obsolete) — — Splinter Review
Nits addressed.  Thanks, Olli!
Attachment #655509 - Attachment is obsolete: true
Attachment #656491 - Flags: review+
Attached patch Mochitests (obsolete) — — Splinter Review
Thanks Olli for your review!  I have fixed the issues you have mentioned and provided some details inline.

(In reply to Olli Pettay [:smaug] from comment #226)
> Comment on attachment 655875 [details] [diff] [review]
> Mochitests
>
> >+     */
> >+/* Part 1: Mixed Script tests */
> >+
> >+// Test 1a: insecure object
> >+var objects = document.createElement("object");
> Why objects and why not object?
Fixed to object.


>
> >+objects.data = baseUrl + "?type=objects";
> >+objects.type = "application/x-test";
> >+objects.width = "200";
> >+objects.height = "200";
> >+
> >+testContent.appendChild(objects);
> >+
> >+function objectStatus(objects) {
> >+  objects instanceof Components.interfaces.nsIObjectLoadingContent;
> >+
> >+  if (objects.displayedType != Components.interfaces.nsIObjectLoadingContent.TYPE_NULL)
> >+  {
> >+    //object loaded
> >+    parent.postMessage({"test": "object", "msg": "insecure object loaded"}, "http://mochi.test:8888");
> >+  }
> >+  else {
> >+    //object didn't load
> >+    parent.postMessage({"test": "object", "msg": "insecure object blocked"}, "http://mochi.test:8888");
> >+  }
> >+}
> >+
> >+window.setTimeout(objectStatus, 1000, objects);
> This looks error prone. Do we not get load or error event? (Based on code
> inspection we don't)
> And if not, please add a comment why we need to use timeout
We do not have load and error events for objects, so we have to use setTimeout (per johns' recommendation).  I've included a comment explaining this in the code.


> >+// Test 1b: insecure script
> >+var scripts = document.createElement("script");
> Why scripts and why not script
Fixed.


> >+/* BELOW COMMENTED CODE WILL BE REMOVED AFTER FIRST REVIEW
> Er, what? I don't know whether I should actually review something here
> because it is
> commented out, and it isn't clear why it is commented out

It didn't appear to be possible to have tests for fonts, web sockets, and xhr.  I included the code that I had written (commented out) in case anyone had a way to test these.  Dev and Ian suggested that I use CORS for the xhr test, and have done so in the new patch.  For fonts and web sockets, tests are not possible (Load events for external fonts are not detectable by javascript. Websocket connections over https require an encrypted web socket protocol, so mixed content is not possible).
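For reference, the CORS-based XHR test only works because the server opts in with an Access-Control-Allow-Origin *response* header (which is why setting it via setRequestHeader on the client was wrong).  A sketch of the relevant piece of the .sjs handler, with the helper name being mine:

```javascript
// Sketch (assumption): the part of the .sjs responder that makes a
// cross-origin XHR readable by the test page. Without this response
// header the browser discards the response before the mixed-content
// question even comes up.
function addCorsHeaders(response) {
  // response is the .sjs response object; the third argument disables
  // header merging.
  response.setHeader("Access-Control-Allow-Origin", "*", false);
}
```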

> >+// Test 2c: insecure <a ping>
> We support ping? Since when? I thought they are disabled.
>
> >+  if(!allowPing) {
> >+    SpecialPowers.setBoolPref("browser.send_pings", true);
> >+  }
> Ah, you enable ping. But the code we have for ping is for an earlier version
> of HTML spec.
> I'm not sure we should really test it (we should remove the old ping stuff)
The code for a ping doesn't follow the html spec.  And hence, I detect a ping by checking for a host header (which is a bit odd):
 case "ping":
      if(request.hasHeader("host")) {
I've filed a bug about how we are out of spec - https://bugzilla.mozilla.org/show_bug.cgi?id=786347.  The test works and passes, so we could leave it or we could take it out.  I leave that up to you :)
Attachment #655875 - Attachment is obsolete: true
Attachment #656531 - Flags: review?(bugs)
Comment on attachment 656531 [details] [diff] [review]
Mochitests


>+// object does not have onload and onerror events. Hence the need for a setTimeout to check the object's status
>+window.setTimeout(objectStatus, 1000, object);
This is still too error prone. On debug builds GC/CC for example could cause quite major pauses.
Could you poll the status until it is expected. Basically setTimeout(..., 100) for some time.

>+xhr.open("GET", baseUrl + "?type=xhr", true);
>+//xhr.setRequestHeader("Access-Control-Allow-Origin", "http://example.com");
>+//xhr.setRequestHeader("Access-Control-Allow-Origin", "*");
I don't understand these commented out lines. There needs to be some explanation why they are there, 
or just remove them.
Attachment #656531 - Flags: review?(bugs) → review-
Hi Olli,

(In reply to Olli Pettay [:smaug] from comment #229)
> >+// object does not have onload and onerror events. Hence the need for a setTimeout to check the object's status
> >+window.setTimeout(objectStatus, 1000, object);
> This is still too error prone. On debug builds GC/CC for example could cause
> quite major pauses.
> Could you poll the status until it is expected. Basically setTimeout(...,
> 100) for some time.

So I can't poll the status indefinitely, because I actually want to know if the object loaded or errored out.  If mixed active content is blocked, then the object should not load and should error.  If I put the setTimeout in the code for the error case, and poll indefinitely until the global mochitest timer times out the whole test, the test will fail when it should have succeeded (i.e. we expected that the object wouldn't load).  Hence, we need a cap on the number of times we call setTimeout.  I wrote some pseudo-code for this (http://pastebin.mozilla.org/1786484), and for now have set a counter that maxes out at 20.  Let me know what you think the counter should max out at.  Thanks!
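Since the pastebin is ephemeral, here is the shape of that capped-retry poll as a sketch (names are mine, not the patch's; the scheduler is injectable purely so the logic can be exercised outside a browser, where the page would pass fn => setTimeout(fn, 100)):

```javascript
// Capped polling sketch (assumption: illustrative, not the literal
// patch code). check() reports whether the object finished loading;
// after maxTries failed polls we conclude the load was blocked.
function makePoller(check, onDone, maxTries, schedule) {
  let tries = 0;
  function poll() {
    if (check()) {
      onDone("loaded");
    } else if (++tries >= maxTries) {
      onDone("blocked");
    } else {
      schedule(poll); // in the mochitest: fn => setTimeout(fn, 100)
    }
  }
  return poll;
}
```

In the test page this would be started as something like `makePoller(objectLoaded, report, 20, fn => setTimeout(fn, 100))()`, giving the cap of 20 mentioned above.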
 


> >+xhr.open("GET", baseUrl + "?type=xhr", true);
> >+//xhr.setRequestHeader("Access-Control-Allow-Origin", "http://example.com");
> >+//xhr.setRequestHeader("Access-Control-Allow-Origin", "*");
> I don't understand these commented out lines. There needs to be some
> explanation why they are there, 
> or just remove them.
They should be removed.  I'll take them out.
Attached patch Mochitests (obsolete) — — Splinter Review
Revamped mochitests.

I am now polling for the script, stylesheet, and object tests for a maximum of 10 seconds.  If the script/stylesheet/object has not loaded within 10 seconds, I assume it has been blocked.  Note that I can't use onerror for script due to bug 789856.

I removed the ping test, per Olli's suggestion, since we are not following the ping spec - bug 786347.

The tests use the default setting in the browser for security.mixed_content.block_display_content and security.mixed_content.block_active_content.  Both are set to false in nightly.  I changed the settings to true and tested locally to make sure the tests still pass.  Everything looks good.

Note that when switching security.mixed_content.block_display_content from false to true without restarting the browser (i.e. a refresh to the mochitest page in the browser), the media test fails.  This is because the video has been cached in the browser and the http request to retrieve it is never sent; and hence the content isn't blocked.

Thanks!
Attachment #656531 - Attachment is obsolete: true
Attachment #659629 - Flags: review?(bugs)
Comment on attachment 659629 [details] [diff] [review]
Mochitests

Oh, hmm, we should actually have tests for the setup when the prefs are on.
Could we just clear the cache before the test or something to fix the
media issue?
Attachment #659629 - Flags: review?(bugs)
Attached patch Mochitests (obsolete) — — Splinter Review
I have updated the code to cycle through the 4 different possible combinations for the security.mixed_content.block_display_content and the security.mixed_content.block_active_content settings.  Now there are a total of 28 tests run.
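The four combinations being cycled are just the 2x2 matrix of the two prefs (sketch; the pref names are the real ones from this bug, the loop structure is illustrative):

```javascript
// The 2x2 pref matrix described above. Each entry would be applied
// (e.g. via SpecialPowers.setBoolPref) before re-running the checks.
const prefCombos = [];
for (const active of [false, true]) {
  for (const display of [false, true]) {
    prefCombos.push({
      "security.mixed_content.block_active_content": active,
      "security.mixed_content.block_display_content": display,
    });
  }
}
```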

For the issue with media caching.  I had a few options:
1) Clear the cache with a pref.  
No such pref seems to exist.  I asked on #media, and roc suggested entering and exiting private browsing mode.  I tried that and the content was still cached.

2) Use the sjs file to handle the media request instead and set a Cache-Control no-cache header.
I tried this, leveraging some code from https://mxr.mozilla.org/mozilla-central/source/content/media/test/contentDuration1.sjs.  Even when I set the cache-control header to no-cache, the media is still cached and on subsequent calls the webconsole shows no response headers (since the network request never went out, and it's using cached content).

3) Write a clear cache function using nsICacheServices.
I decided to skip this option and move on to number 4...

4) Append a random number to the media request so that the browser doesn't use the cache.
I went with this approach.  Simple and it works.
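Option 4 is the classic cache-busting trick: make every media URL unique so the HTTP cache never has a matching entry. A minimal sketch (the helper name is hypothetical):

```javascript
// Cache-busting sketch (assumption: helper name is mine). Appending a
// random query parameter makes each request URL unique, so the browser
// cannot serve it from cache and must hit the network, where the
// mixed-content blocker can see the request.
function bustCache(url) {
  const sep = url.includes("?") ? "&" : "?";
  return url + sep + "rand=" + Math.random();
}
```

In the test this would be used as e.g. `video.src = bustCache(baseUrl + "?type=media");`.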
Attachment #659629 - Attachment is obsolete: true
Attachment #659890 - Flags: review?(bugs)
Attached patch Mochitests (obsolete) — — Splinter Review
Small update to my last patch to fix a xml parsing error that was showing up in error console.
Attachment #659890 - Attachment is obsolete: true
Attachment #659890 - Flags: review?(bugs)
Attachment #659900 - Flags: review?(bugs)
Comment on attachment 659900 [details] [diff] [review]
Mochitests

Nit, you're not using consistent coding style;
space before and after =, consistent { usage too, please.
Attachment #659900 - Flags: review?(bugs) → review+
That tryserver run looks pretty clean.  Let me know when you want me to land this.  Great job, Tanvi.
Attached patch Mochitest (obsolete) — — Splinter Review
Fixed the nits with spaces before/after equals signs and { } usage.  Carrying over the r+.

Olli - thanks for your review and your help in finishing these tests!

Brandon - yes, please go ahead and land :)
Attachment #659900 - Attachment is obsolete: true
Attachment #660159 - Flags: review+
Hey Brandon,

Talked to mfinkle about the Android test that is failing on try.  It's failing because the pref doesn't exist on Android.  Hence, I could edit testing/mochitest/android.json and indicate that the test shouldn't be run for android.

However, mfinkle is questioning why we aren't setting the pref in all.js instead of firefox.js.  Gavin asked this question sometime ago...

(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #145)
> >diff --git a/browser/app/profile/firefox.js b/browser/app/profile/firefox.js
> 
> >+pref("security.mixed_content.block_active_content", true);
> 
> This looks unused in this patch - if it's supposed to control the backend
> code, it should be put in all.js instead.
> 

Thoughts?
I'll put the pref into all.js and push my patch plus your tests to try.  If all goes well, I'll land this tonight or tomorrow.
(In reply to Brandon Sterne (:bsterne) from comment #241)
> Pushed to try:
> https://tbpl.mozilla.org/?tree=Try&rev=a6745b92a4b6

Looks like the test timed out on Android.  I'll add some dumps to the patch and repush to try to see if I can figure out what's going on.
(In reply to Tanvi Vyas from comment #242)
> (In reply to Brandon Sterne (:bsterne) from comment #241)
> > Pushed to try:
> > https://tbpl.mozilla.org/?tree=Try&rev=a6745b92a4b6
> 
> Looks like the test timed out on Android.  I'll add some dumps to the patch
> and repush to try to see if I can figure out what's going on.

Pushed to try for android builds testing mochitest-1 with dump() statements.  Still getting the timeout.  Stuff is definitely written to logcat, but the logcat has a maximum size, so I can't see what tests pass and what test is causing the timeout:

https://tbpl.mozilla.org/?tree=Try&rev=e5dfd79d2261

I have two options - 

One is to try and construct a /testing/mochitest/android.json file that includes all mochitests in mochitest-1 except the one in this bug, so that only the mixed content tests will be run and the logcat won't get truncated.

Another is to set up a fennec build environment and run this test on an android device.  I'm working on this now, but if anyone already has this set up, and wouldn't mind running the mochitests and giving me the logcat file, that would be really helpful!

Thanks!
Attached patch Mochitest — — Splinter Review
After building fennec, setting up the testing environment and running my test, I was getting network errors when framing https://example.com.  Per jmaher, https isn't configured for mochitests on fennec.  I've updated my patch and have included the test in android.json (so that it doesn't run on fennec) with a note that SSL is required.

Also, I did a quick test of mixed content on the device.  Going to https://people.mozilla.com/~bsterne/tests/62178/test.html, observing the script loads (red outline).  Then setting the pref for mixed active content to true, and going to https://people.mozilla.com/~bsterne/tests/62178/test.html again to see that the script does not load (green outline).  It *seems* to work so I think we should leave the prefs in all.js, even though we can't test them right now.

Brandon, I think we are finally ready to land!  I have pushed our updated patches to try and am hoping for a very green page this afternoon: https://tbpl.mozilla.org/?tree=Try&rev=9569c8fb47fd
Attachment #660159 - Attachment is obsolete: true
Attachment #661331 - Flags: review+
Backed out in https://hg.mozilla.org/integration/mozilla-inbound/rev/0a1b001db714 - test_mixed_content_blocker.html times out on Android.
(In reply to Phil Ringnalda (:philor) from comment #246)
> Backed out in
> https://hg.mozilla.org/integration/mozilla-inbound/rev/0a1b001db714 -
> test_mixed_content_blocker.html times out on Android.

The test shouldn't be run on Android (see comment 244).  Brandon, from looking at your pushes to inbound, I think you landed an outdated patch.  Please land the tests attached in comment 244.  They add the test to a list of tests that are disabled for Android in testing/mochitest/android.json

Thanks!

~Tanvi
Brandon, can you also post your updated patch to the bug (the one where you set the prefs in all.js instead of firefox.js - https://hg.mozilla.org/integration/mozilla-inbound/rev/5065dd13b53d).  Thanks!
Attached patch final content patch — — Splinter Review
Moved the pref to all.js
Attachment #656491 - Attachment is obsolete: true
Attachment #661671 - Flags: review+
Target Milestone: mozilla17 → mozilla18
Er. Mah. Gerd.

Way to go. I thought this might never land. And we beat the 12 year anniversary!
Now that this bug has finally landed, I will be moving on to bug 782654.

Thank you Brandon for your continued contributions!
Keywords: helpwanted
Blocks: 782654
To the multitudes of people watching at home, please note that this is fixed for Firefox 18, which is due to ship some time around the new year. Due to the complex nature of the fix, it will not be backported to a release coming sooner than that. If you would like to test the new functionality out, it will be in nightly builds starting tomorrow. http://nightly.mozilla.org/
Depends on: 792101
No longer depends on: 803225
Blocks: 822366
Blocks: 822367
Blocks: 822371
Blocks: 822373
Not fixed.  Behavior same as previous versions.

1. I updated to 18.0.

2. Opened gmail (https).

3. Opened a new email that I knew would trigger the warning. See [http://i46.tinypic.com/n1wc91.png]. Note it has already loaded the nonsecure image; see [http://i50.tinypic.com/1zc054z.png]

4. Gives the same message as prior versions -- no opportunity to cancel loading of insecure images, damage already done (e.g. if on a corporate proxy).

Am I misunderstanding how mixed content blocking is supposed to work?

Thanks.
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/20100101 Firefox/18.0

Same as Andrew (comment 256). The URL in this bug (https://people.mozilla.com/~bsterne/tests/62178/test.html) shows "Insecure script: LOADED" and 
https://ie.microsoft.com/testdrive/browser/mixedcontent/assets/woodgrove.htm fails.

IE9 blocks both examples.

However, http://www.mozilla.org/en-US/firefox/18.0/releasenotes/ says that this is fixed ("FIXED Disable insecure content loading on HTTPS pages (62178)").
Flags: in-testsuite?
Flags: in-litmus?
CORRECTION to Comment 256:

Here are the Firefox defaults for mixed-content related config items:

-----
security.mixed_content.block_active_content FALSE
security.mixed_content.block_display_content FALSE
security.warn_viewing_mixed TRUE
security.warn_viewing_mixed.show_once TRUE
-----

Therefore, you need to manually switch on the first two to activate mixed content blocking.  I tried it and after activating both it is now ignoring http images within a https context, e.g. with Google Reader.
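For anyone following along, the two blocking prefs can be flipped in about:config, or persistently via a user.js file in the profile directory (a config sketch):

```js
// user.js sketch: turn mixed content blocking on (both active and display).
user_pref("security.mixed_content.block_active_content", true);
user_pref("security.mixed_content.block_display_content", true);
```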

However, I'm not getting any notification about the mixed content situation. My security.warn_viewing_mixed.show_once is set to TRUE, which means it should be displaying the mixed content warning every time.

Are there other mixed-content related settings I need to manually configure?  Thanks.
Ack, correction to Comment 258:

"My security.warn_viewing_mixed.show_once is set to TRUE, which means it should be displaying the mixed content warning every time."

should read:

"My security.warn_viewing_mixed.show_once is set to FALSE, which means it should be displaying the mixed content warning every time."
Daniel, with my mixed content settings configured as stated, when I visit the microsoft/testdrive link I get "Dangerous Mixed Content was blocked!"  When I visit the ~bsterne/tests link I get "Insecure script: did not load" -- so, all good for both tests!
When I switch these back to default:

security.mixed_content.block_active_content FALSE
security.mixed_content.block_display_content FALSE

I get the mixed-content warning notification back -- it pops up after loading the mixed content (old behavior).

So it seems something's broken with the expected notification when you manually switch on mixed content blocking.

With mixed content blocking active, isn't the expected behavior something like:

1. BLOCK the insecure content

2. Pop up a notification and permit the user to click OKAY to acknowledge the action and dismiss the notification; or LOAD INSECURE CONTENT to pull in the insecure content on the page this one time.

Right now there seems no case-by-case option, just all mixed content blocked all the time or allowed all the time.
> isn't the expected behavior something like:

Yes, and that's covered by bug 822367.  That not being done yet is one reason this isn't turned on yet.

As far as I can tell, the release notes that claim that this has been turned on are just wrong; I'll see if I can get them fixed.
Okay.  Awesome job so far.  Thanks, guys.
(In reply to Andrew from comment #263)
> Okay.  Awesome job so far.  Thanks, guys.

Hello there... pardon my intrusion:

* * * prevent sending insecure requests from a secure context * * *

Can you get it right already? It's been over 10 years now...

Can't see how this qualifies as "Awesome job".

Take a tip from Skewer:
"We can't be ignoring serious security issues like this and spending our time making pretty icons instead."

Cheers
If the mere fact that it took Mozilla developers 10 years to finally fix this bug doesn't show how much they have lost perspective, then I don't know what would.
PS: Mozilla get it together and stop trying to copy chrome.
Bring back the statusbar to firefox and help extension developers modernize their extensions. And yea... i know about status4ever. Don't go there.
If you don't like what you see, get involved and write code yourself. Stop just complaining.
Florian, I didn't even know this bug existed until I saw the article about this on slashdot (yes it made slashdot). Just even more reasons mozilla has completely lost perspective.
ps: Stop making excuses and bring back the statusbar!
(In reply to DJonesLocker2001 from comment #268)
> ps: Stop making excuses and bring back the statusbar!

Please read https://bugzilla.mozilla.org/page.cgi?id=etiquette.html
This is still incorrectly listed on http://www.mozilla.org/en-US/firefox/18.0.2/releasenotes/ as being fixed (even though it's disabled by default for now). Who owns the release notes?
(In reply to Ed Morley [:edmorley UTC+0] from comment #270)
> This is still incorrectly listed on
> http://www.mozilla.org/en-US/firefox/18.0.2/releasenotes/ as being fixed
> (even though it's disabled by default for now). Who owns the release notes?

akeybl and team.
(In reply to :Gavin Sharp (use gavin@gavinsharp.com for email) from comment #271)
> (In reply to Ed Morley [:edmorley UTC+0] from comment #270)
> > This is still incorrectly listed on
> > http://www.mozilla.org/en-US/firefox/18.0.2/releasenotes/ as being fixed
> > (even though it's disabled by default for now). Who owns the release notes?
> 
> akeybl and team.

The release notes were updated ~2 hours ago. I had this communicated on the email thread where the issue was being discussed.  My bad about the delay in replying to comment #270 on this bug directly.
Blocks: 842191
Product: Core → Core Graveyard
Flags: in-testsuite?
Flags: in-litmus?
Flags: needinfo?(atulsarang402)