+++ This bug was initially created as a clone of Bug #824871 +++

(In reply to Brian Smith (:bsmith) from Bug #824871 comment #2)
> Let's say we have attacker.com framing https://mozilla.org/. Whether or not
> we allow mixed content on https://mozilla.org affects the security of
> mozilla.org, so we cannot allow it simply because attacker.com chose to
> frame it. Otherwise, our defenses against attacks that inject mixed content
> would be trivial to work around: just get the user to navigate to a page
> where you can insert an iframe that in turn frames the victim site.
>
> Consequently, I think that STATE_IS_BROKEN is the correct state for the
> iframe. (If/how we present the STATE_IS_BROKEN state for an iframe in the
> security UI we have is another thing.)
>
> When the user chooses to un-block blocked mixed content in this case, we
> probably should un-block mixed content only in the top document (and
> possibly same-origin or same-eTLD iframes), but not framed content--unless
> the UI somehow makes it clear that we're decreasing the security of the
> victim (framed) site. This would be similar to how per-origin permission
> prompts enable features (such as AppCache) only in the top-level document.
> (How do click-to-play plugins deal with this case?)

(In reply to Tanvi Vyas [:tanvi] from Bug #824871 comment #4)
> There are two issues that were identified in the bug comments.
>
> 1) An http page has an https iframe. Mixed active content is blocked.
> Should mixed content on the https iframe load?
>
> 2) An https page has an https iframe. Mixed active content is blocked. The
> user overrides the blocking and wants the mixed active content to load. Do
> we load the mixed active content on the https top level page only? The
> https page and the https subdocuments? The https page and only same-origin
> https subdocuments?
>
> Situation 2 is separate from this regression bug and should be filed as a
> blocker to bug 815321 (the Mixed Content Blocker Master Bug). This bug was
> to address the odd UI that shows up on Nightly in situation 1.
>
> For Situation 1:
>
> Chrome and IE both block the mixed content on the iframe page. Chrome shows
> its shield in its url bar, and IE shows an info bar at the bottom of the
> page that tells the user that insecure content was blocked. If we do want
> to block this content now without working out how to explain to a user that
> content on a subdocument has been blocked, we can make sure that the icon
> and the site identity info don't change - they should match the connection
> to the top level page and not the iframed pages. The door hanger will still
> exist and the user can choose to load the mixed content. This wouldn't
> cause a change to the icons or the site identity text, since they currently
> represent the connection to the top level page and not the subdocuments.

(In reply to Honza Bambas (:mayhemer) from Bug 824871 comment #6)
> I would suggest we block mixed in any iframe, maybe inform, and don't allow
> override. The "maybe" and "don't" are here because the UI becomes
> overcomplicated, as it is hard to present the state, and most users
> wouldn't understand what's going on.

I agree with Honza. My understanding is that the patch for bug 824871 and/or bug 822367 will unblock mixed active content in iframes of domains unrelated to the root document's domain. We should change this so that mixed active content in iframes of unrelated subdomains is NOT unblocked, for the reasons given in bug 824871 comment 2 quoted above.
Calling it sec-low so it's on the radar as a security bug. I'd like to rate it higher, but it's a bypass of a security feature sitting on top of standard historical behavior; the situation can already be bad in old browsers.
I have read https://blog.mozilla.org/tanvi/2013/04/10/mixed-content-blocking-enabled-in-firefox-23/ . My initial assumption was simply that, in the Gecko layer, we should block the mixed active content in an iframe, while in the Firefox layer we could permit the mixed active content in an iframe via some UI control. That means we would need a separate mixed-content-blocking flag per domain, and we would need to implement/extend the doorhanger UI so the user can control those flags.
«The user sees that they are on https://unimportant-site.com and can decide to load the mixed content on https://unimportant-site.com by clicking “Disable Protection on This Page”. To the user, “This Page” is https://unimportant-site.com, but in actuality, the result is that protection is disabled on https://bank.com.»

The trivial solution is as simple as changing the text from “Disable Protection on This Page” to “Disable Protection on unimportant-site.com”. It's not perfect (users click without reading, and so on), but it would solve the big issue: the user, even an expert one, being tricked into adding an exception for a different site than the one they think they are adding it for.

If we want to hint that the exception covers a different site (i.e. an iframe), the “Firefox has blocked content that is not safe” text can be changed into “Firefox has blocked embedded content that is not safe”.

As an open question, what's the current behavior if https://unimportant-site.com includes a script from http://unimportant-site.com *and* an iframe from https://bank.com which itself contains mixed content?
(In reply to Nim Delineif from comment #3)
> As an open question, what's the current behavior if
> https://unimportant-site.com includes a script from
> http://unimportant-site.com *and* an iframe from https://bank.com which
> itself contains mixed content?

Currently, if a user "disables protection on this page", all mixed active content will load - on the top level https page and in embedded https frames. So the mixed script from bank.com will load.
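The two policies under discussion can be sketched as pure decision functions. This is an illustrative model only, not Gecko's actual implementation; the function names and origin-string comparison are assumptions made for clarity.

```python
def current_policy(top_origin: str, frame_origin: str,
                   user_disabled: bool) -> bool:
    """Current behavior: once the user disables protection, mixed active
    content loads everywhere in the tab, including cross-origin frames."""
    return user_disabled


def proposed_policy(top_origin: str, frame_origin: str,
                    user_disabled: bool) -> bool:
    """Policy proposed in this bug (per bug 824871 comment 2): only
    unblock in the top document and same-origin frames; cross-origin
    framed sites keep their protection."""
    return user_disabled and frame_origin == top_origin


# The scenario from comment 3: https://unimportant-site.com frames
# https://bank.com, and the user clicks "Disable Protection on This Page".
top = "https://unimportant-site.com"
frame = "https://bank.com"

print(current_policy(top, frame, True))   # bank.com's mixed script loads
print(proposed_policy(top, frame, True))  # bank.com stays protected
```

Under the proposed policy, the override would stop propagating into the bank.com frame, which is exactly the cart-saver.com usability trade-off debated later in this bug.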
Can someone give me a definition of "unsafe" that is completely not subjective? The blanket statement of...

"'unsafe' is defined as when https://unimportant-site.com attempts to connect to https://bank.com to get any information"

...is the current implementation I understand FF to be running right now. But shouldn't the user be allowed to decide what is safe and unsafe, with that decision persisting beyond the life of a page render? Being required to re-allow external data every time I press F5 to reload the page is absurd. A default setting that puts the browser into the above mode, blocking any data external to https://unimportant-site.com, including AJAX, CSS, and downloadable URLs, is fine, but there needs to be an option that allows a domain to PERMANENTLY be allowed to get data from external sources.

I have a web server in my basement at home that hosts my home page, which is available over the internet, not just the LAN. On it, I have four iframes whose targets point to http://free.timeanddate.com. The iframes show me the time and date in different time zones, which I use for work. I also call scripts from http://df.gasbuddy.com that provide me with gas prices in my local area; the HTML table presented to me is built entirely by their JS. I've also been tinkering with the idea of having the underlying PHP code provide a different URL for the iframe target depending on which IP address I'm calling my home server from.

I agree that the FF implementation introduced in v23 is SUPPOSED to stop exactly what I'm doing. However, a method needs to be implemented to permanently (either for the life of the OS, or the life of the browser installation, excluding updates) allow https://myhomeserver.com to get information from http://free.timeanddate.com and http://df.gasbuddy.com.
I FULLY support blocking everything external to a domain, but I NEED to have a method to allow my domain to get information externally from within the browser. I FULLY support keeping the list of domains EXTERNAL to Firefox and having Firefox read an "allowed list", but temporarily blocking information for one page session is beyond insane.

If http://myhomeserver.com is allowed to get information from http://free.timeanddate.com, http://free.timeanddate.com SHOULD NOT be allowed to get information from http://paid.timeanddate.com unless http://free.timeanddate.com is itself on the "allowed list".
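The non-transitive "allowed list" described above can be sketched as follows. This is a hypothetical model of the commenter's request (no such whitelist existed in Firefox at the time; the names and data are illustrative only):

```python
# Hypothetical per-document whitelist: each entry grants permission only
# to the document origin that holds it, never to the embedded resource.
ALLOWED = {
    "https://myhomeserver.com": {
        "http://free.timeanddate.com",
        "http://df.gasbuddy.com",
    },
}


def may_load(document_origin: str, resource_origin: str) -> bool:
    """Allow the load only if the document doing the loading has
    explicitly whitelisted the resource origin. The permission does not
    chain: free.timeanddate.com gains no rights of its own."""
    return resource_origin in ALLOWED.get(document_origin, set())


print(may_load("https://myhomeserver.com", "http://free.timeanddate.com"))
# allowed: myhomeserver.com whitelisted that origin

print(may_load("http://free.timeanddate.com", "http://paid.timeanddate.com"))
# blocked: the grant to myhomeserver.com does not extend to its frames
```

The key design point is that the lookup is keyed on the loading document's origin, which is what makes the permission non-transitive.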
May I ask why you want to load it as https://myhomeserver.com instead of http://myhomeserver.com? FF 23 places no restrictions on embedding http content in an http page.
The machine hosting the HTTPS site contains my wiki, which requires credentials to log in. Since I'd rather not send my creds in plain text, I opted for the extra security of a self-signed cert. I have http://myhomeserver.com available as well on another machine, but that machine was going to be decommissioned as an external web service; I've just not gotten around to doing it. Come to think of it, my wiki can also pull images from an external source for files I've dropped into Dropbox, which includes an HTTP resource.
Just for the record, there *IS* an option I discovered today in about:config that entirely disables the blocking scheme:

security.mixed_content.block_active_content = false

This lets my home page work when calling other HTTP sites. However, I wouldn't consider this solved, since there isn't a 'whitelist' and I'm now open to vulnerabilities.
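For reference, that pref can be made persistent across sessions by putting it in a user.js file in the Firefox profile directory (shown for illustration; as noted above, this is a global switch, not a whitelist):

```javascript
// user.js in the Firefox profile directory.
// WARNING: this disables mixed active content blocking for ALL sites,
// not just selected ones -- every https page may then load http scripts.
user_pref("security.mixed_content.block_active_content", false);
```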
(In reply to bugmenot from comment #8)
> This enables my home page to work with calling other HTTP sites. However, I
> wouldn't consider this solved since there isn't a 'whitelist' and I'm now
> open to vulnerabilities.

Bug https://bugzilla.mozilla.org/show_bug.cgi?id=873349 discusses adding a whitelist for the Mixed Content Blocker.
TY. I'll add comments there.
I'm going to close this as WONTFIX.

If https://attacker.com embeds https://bank.com, mixed content on both https://attacker.com and https://bank.com will be blocked. If the user clicks on the lock icon and "disables protection", then mixed content will load on both attacker.com and bank.com. Yes, this means that a user may get "tricked" into unblocking mixed content on bank.com.

But on the other hand, consider this scenario. https://shopping.com embeds a frame to https://cart-saver.com, and mixed content is blocked on both domains. The page is not working correctly, so the user disables protection and expects to be able to shop and add items to their shopping cart. But https://cart-saver.com needs an http script in order to work. So if we implemented the policy defined in this bug, the script in cart-saver.com would continue to be blocked even though the user disabled protection.

To then go and ask the user whether they also want to unblock the mixed content in cart-saver.com is complicated. They don't know what cart-saver.com is. What would the UX look like? When we ask the first time, should we ask per domain and provide a checkbox next to each framed domain the user might want to allow mixed active content from? I think this is overkill.

From telemetry, we see that very few users disable mixed content protection anyway. Moreover, we have moved away from the "shield icon" and use the site identity icon itself as an indicator of mixed content issues, making it a little less discoverable.

- https://blog.mozilla.org/security/2015/11/03/updated-firefox-security-indicators-2/
- https://telemetry.mozilla.org/new-pipeline/dist.html#!cumulative=0&end_date=2015-12-08&keys=__none__!__none__!__none__&max_channel_version=beta%252F43&measure=MIXED_CONTENT_UNBLOCK_COUNTER&min_channel_version=null&product=Firefox&sanitize=1&sort_keys=submissions&start_date=2015-11-03&table=0&trim=1&use_submission_date=0
Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → WONTFIX