Closed Bug 947079 Opened 11 years ago Closed 9 years ago

Passive mixed content indicator (grey triangle) shows up on full HTTPS sites that have no mixed content

Categories

(Core Graveyard :: Security: UI, defect)

Hardware: x86_64
OS: All
Type: defect
Priority: Not set
Severity: major

Tracking

(firefox26 ?, firefox27+ wontfix, firefox28- wontfix, firefox29- wontfix, firefox30- wontfix, firefox31- wontfix, firefox32+ wontfix, firefox33+ wontfix, firefox34 wontfix, firefox35 wontfix, firefox36 wontfix, firefox37 affected, firefox38 ?, relnote-firefox 39+)

RESOLVED FIXED
mozilla39

People

(Reporter: briansmith, Assigned: tanvi)

References

(Blocks 1 open bug)

Details

(Keywords: regression, reproducible)

Attachments

(3 files, 6 obsolete files)

I have noticed, on a regular basis, that the passive mixed content triangle shows up on twitter.com. It is somewhat of a heisenbug, in that if I enable the web console and then reload the page, the mixed content triangle goes away. Recently I tried to reproduce this by opening the web console and navigating around twitter.com, especially with back and forward. I eventually got the mixed content triangle to show up, but at no point did the web console ever show any mixed content messages.

One possibility is that this is due to the HTTP cache. When we save a document to the cache, one of the things we save is a set of counts of how many mixed-content sub-resources there were on the page. It is possible that, when we load something from the cache, we interpret these counts incorrectly, and/or our loads from the cache are being counted as mixed content even though they are not. This would explain why going back and forth would cause the trouble.

Bugs like this are bad because we're confusing users about the lock icon and the triangle. It is very important that these indicators are accurate, if we're going to have them in the UI at all.
Summary: Passive mixed content indicator (grey triangle) shows up on twitter.com (and other sites) but no mixed content loads appear in console → Passive mixed content indicator (grey triangle) shows up on twitter.com, news.ycombinator.com (and other sites) but no mixed content loads appear in console
This issue has also been encountered on Firefox 27 Beta 2 (20131216183647), under Windows 8 x64, Windows 7 x64, Mac OS X 10.9 and Ubuntu 12.04 x86, while browsing the following websites: Yahoo Mail, LinkedIn, Myspace, Facebook, YouTube, Google Search, Twitter, eBay and QMO. In some cases, the console displayed warnings similar to those depicted in Brian's comment 1.
OS: Windows 7 → All
Is this something that should be fixed for the long 27beta2 bake? (sign-off for release of 27.0b2 build 1 is pending this decision)
Flags: needinfo?(lsblakk)
Assuming this is not a new regression in 27, there is no need to hold the beta for it (the beta is already built, and we don't have an assignee here yet, let alone a fix). We'll track it, but this has likely been around since FF23, when MCB shipped.
David, can you look into this, or is there someone else taking point on MCB issues while Tanvi is on leave?
Flags: needinfo?(dkeeler)
(In reply to Lukas Blakk [:lsblakk] from comment #5)
> David, can you look into this, or is there someone else taking point on MCB
> issues while Tanvi is on leave?

My understanding is Christoph is dealing with any MCB issues.
Flags: needinfo?(dkeeler) → needinfo?(mozilla)
Thanks, David, for bringing this to my attention; I will take a look at it.
Flags: needinfo?(mozilla)
Can QA help confirm whether status-firefox26 is affected? Also NI on :sid to help find an assignee here, given :ckerschb is on leave and we only have a couple more betas left in Fx27.
Flags: needinfo?(sstamm)
Keywords: qawanted
(In reply to bhavana bajaj [:bajaj] from comment #8)
> Can QA help confirm whether status-firefox26 is affected?
>
> Also NI on :sid to help find an assignee here, given :ckerschb is on leave
> and we only have a couple more betas left in Fx27.

Just to reiterate my initial conclusions from the information I've seen here: the grey passive mixed content triangle should only be displayed if mixed content is shown while navigating through a website that uses a secured connection.

I assume that the new grey triangle icon was added starting with Fx 26 - please correct me if I'm wrong. For clarity: the grey *globe* icon has been replaced, in this specific case, with a grey *triangle* icon that indicates a connection which is not fully secured. I was only able to spot the grey triangle icon on specific websites starting with the first beta of Firefox 26 (Build ID: 20131028225529), while previous Fx versions displayed a grey globe icon instead.

This issue is intermittently reproducible on all platforms, as the Browser Console does not always display mixed content warnings along with the grey triangle icon. Sometimes, simply refreshing the page in question makes the triangle disappear and the lock icon is displayed again.

At this point, and due to the lack of exact STR, the only successful attempt at replicating the bug was on Ubuntu 12.04 32-bit using the QMO website via the latest Aurora (Build ID: 20140108004006): Mozilla/5.0 (X11; Linux i686; rv:28.0) Gecko/20100101 Firefox/28.0. Any other attempts made with recent Fx versions (including Fx 26) under Windows 7 64-bit, Mac OS X 10.8 and Ubuntu 12.04 32-bit were unsuccessful.
so this appears randomly and is hard to reproduce... are we getting complaints via input about this?
Flags: needinfo?(sstamm)
(In reply to Andrei Vaida, QA [:AndreiVaida] from comment #9)
> Just to reiterate my initial conclusions from the information I've seen
> here: the grey passive mixed content triangle should only be displayed if
> mixed content is shown while navigating through a website that uses a
> secured connection.

Yes.

> I assume that the new grey triangle icon has been added starting with Fx 26
> - please correct me if I'm wrong.

The passive mixed content indicator (the grey triangle) has been in the product for a very long time. There were major changes to how mixed content detection and blocking work around Firefox 23. There have been incremental improvements since then.

> For clarity: the grey *globe* icon has been replaced in this specific case,
> with a grey *triangle* icon that depicts a connection which is not fully
> secured.

The grey globe icon indicates an http:// site. The lock icon indicates an https:// site without mixed content. The grey triangle indicates an https:// site with mixed content.

> I was only able to spot the grey triangle icon on specific websites starting
> with the first beta of Firefox 26 (Build ID: 20131028225529), while the
> previous Fx versions displayed a grey globe icon instead.

This page should consistently show the grey triangle in pretty much every version of Firefox: https://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/public/nsIX509Cert.idl

> At this point and due to lack of exact STR, the only successful attempt of
> replicating the bug was on Ubuntu 12.04 32-bit using the QMO website via the
> latest Aurora (Build ID: 20140108004006), while any other attempts made with
> recent Fx versions (including Fx 26) under Windows 7 64-bit, Mac OS X 10.8
> and Ubuntu 12.04 32-bit were unsuccessful.

STR:
0. Visit https://twitter.com.
1. Start a new profile and/or clear everything in your cache.
2. Visit http://engadget.com/.
3. After you start to see engadget.com load, AND WHILE THAT PAGE IS STILL LOADING: navigate back to https://twitter.com/ by pasting https://twitter.com/ into the address bar.

Expected result: lock icon with green EV indicator.
Actual result: mixed content indicator.

If you do this when the console is open, you will see stuff like this:

GET http://b.aol.com/ping [Mixed Content][HTTP/1.1 200 OK 807ms]
GET http://ums.adtechus.com/mapuser [Mixed Content][HTTP/1.0 200 OK 180ms]
GET http://cmap.adt.ace.advertising.com/cfcm.ashx [Mixed Content][HTTP/1.1 200 OK 193ms]

Note that these requests were made by http://engadget.com, NOT https://twitter.com, but we are wrongly attributing them to https://twitter.com.

Similarly, if you try the experiment using http://nytimes.com instead of http://engadget.com, you will get the passive mixed content indicator (grey triangle) and you will see one or more loads like this in the console:

GET http://b.scorecardresearch.com/b2 [Mixed Content][HTTP/1.1 204 No Content 156ms]

This is a request that was made by http://nytimes.com, not https://twitter.com, but we are wrongly attributing it to https://twitter.com.

This is a race between the start and/or completion of the subresource loads for the previous, non-secure page and the load of the current (secure) page. I suspect that this affects Twitter the most because we'll often have a persistent connection to Twitter open, and Twitter uses HTTP caching pretty aggressively.
Thus, the initial page load for https://twitter.com will usually be very fast. Refreshing https://twitter.com causes the triangle to switch back to the lock because there aren't any http:// requests pending from the previous page load when you do the refresh. I think this is most easily reproducible on a slower internet connection, using an http:// site that is complex, like http://engadget.com or http://nytimes.com, where https://twitter.com is in the cache but the HTTP site you choose isn't.

Sid, I believe the above information should be sufficient for somebody to find the bug and fix the problem. I think it is a problem that should be fixed really soon, because it is a high-visibility mistake in a part of the UI that users consider to be important and that they expect to be perfect.
Keywords: reproducible
Sid, please see previous comment.
Flags: needinfo?(sstamm)
This is likely a dupe of bug 552789, since the STR are very similar. Bug 961457 and/or bug 492358 may also be dupes.
See Also: → 961457, 492358, 552789
Thanks, Brian. That's very helpful.
Flags: needinfo?(sstamm)
Christoph: can you take this when you get back?
Flags: needinfo?(mozilla)
Given that we are close to wrapping up Firefox 27 and will be going with our RC candidate tomorrow, wontfixing this. Hoping Christoph can find a low-risk solution in time for Firefox 28.
(In reply to Brian Smith (:briansmith, :bsmith; NEEDINFO? for response) from comment #11)
> STR:
> 0. Visit https://twitter.com.
> 1. Start a new profile and/or clear everything in your cache.
> 2. Visit http://engadget.com/.
> 3. After you start to see engadget.com load, AND WHILE THAT PAGE IS STILL
> LOADING: navigate back to https://twitter.com/ by pasting
> https://twitter.com/ into the address bar.
>
> Expected result: lock icon with green EV indicator.
> Actual result: mixed content indicator.

I can reproduce that part (thanks for the detailed description), and I do see the grey triangle, but I don't see any mixed content warnings in the console as you describe them.

> If you do this when the console is open, you will see stuff like this:
>
> GET http://b.aol.com/ping [Mixed Content][HTTP/1.1 200 OK 807ms]
> GET http://ums.adtechus.com/mapuser [Mixed Content][HTTP/1.0 200 OK 180ms]
> GET http://cmap.adt.ace.advertising.com/cfcm.ashx [Mixed Content][HTTP/1.1 200 OK 193ms]

When putting printfs in nsMixedContentBlocker.cpp for aContentLocation and for aRequestingLocation (whichever requesting location we end up using), no https request ever shows up for aRequestingLocation. If we had a race condition, then at some point we would see an http request as the contentLocation with a requestingLocation from https://twitter.com, wouldn't we?

I keep investigating, but I am not completely sure that MixedContentBlocker is actually causing this problem.
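For concreteness, the instrumentation Christoph describes amounts to something like the following near the top of nsMixedContentBlocker::ShouldLoad (a debugging sketch only, not part of any patch; printf_stderr, nsAutoCString and nsIURI::GetSpec are existing Gecko APIs, but the exact placement is an assumption):

  // Debugging sketch: log both URIs for every ShouldLoad call, so a blocked
  // or accepted subresource can be matched to the page it is attributed to.
  nsAutoCString contentSpec, requestingSpec;
  if (aContentLocation) {
    aContentLocation->GetSpec(contentSpec);
  }
  if (aRequestingLocation) {
    aRequestingLocation->GetSpec(requestingSpec);
  }
  printf_stderr("MCB ShouldLoad: content=%s requesting=%s\n",
                contentSpec.get(), requestingSpec.get());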
Flags: needinfo?(mozilla)
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #18):
> When putting printfs in nsMixedContentBlocker.cpp for aContentLocation and
> for aRequestingLocation (whichever requesting location we end up using),
> no https request ever shows up for aRequestingLocation. If we had a race
> condition, then at some point we would see an http request as the
> contentLocation with a requestingLocation from https://twitter.com,
> wouldn't we?
>
> I keep investigating, but I am not completely sure that MixedContentBlocker
> is actually causing this problem.

Keep in mind that mixed content state tracking is split between nsMixedContentBlocker and nsSecureBrowserUIImpl. I wouldn't be surprised if this were an issue with nsSecureBrowserUIImpl. (We should remove the mixed-content-related parts of nsSecureBrowserUIImpl.)

This bug is assigned to nobody. It would be great to have it assigned to somebody.
(In reply to Brian Smith (:briansmith, :bsmith; NEEDINFO? for response) from comment #19)
> Keep in mind that mixed content state tracking is split between
> nsMixedContentBlocker and nsSecureBrowserUIImpl. I wouldn't be surprised if
> this were an issue with nsSecureBrowserUIImpl. (We should remove the
> mixed-content-related parts of nsSecureBrowserUIImpl.)

I share your thought, because when I put a return statement right after shouldLoad is called [1] and visit https://people.mozilla.org/~tvyas/mixeddisplay.html, I still get the grey triangle. Surprising, isn't it?

[1] http://mxr.mozilla.org/mozilla-central/source/content/base/src/nsMixedContentBlocker.cpp#194

> This bug is assigned to nobody. It would be great to have it assigned to
> somebody.

I will take it.
Assignee: nobody → mozilla
Just a general reminder that this code (based on notifications after loading has already happened) was very difficult to work with in the past. Now that you have fixed bug 62178, and because you are able to track and prevent insecure loads from happening at all, you should probably use the same place in the code for tracking the security state of pages. I had proposed that rewrite in bug 370886.
Blocks: lockicon
It seems the overall problem is much more severe than I initially thought - please compare Honza's post [1] on 'implement new tracking mechanism for the security state of web pages', which was opened in 2007. I believe it's not that easy to provide a working patch in such a short amount of time. I guess we need to do some whiteboard discussions first.

The attached patch fixes the problem described by Brian in comment 11 [2]. Brian was right: there is a kind of race condition happening. In more detail, nsSecureBrowserUIImpl increments a counter for http sub-requests (|mSubRequestsNoSecurity|). When the top-level security state changes, e.g. from http to https (exactly what happens when we paste https://www.twitter.com into the address bar), |mSubRequestsNoSecurity| still holds the count of http sub-requests and wrongly causes the new security state to be updated. In turn, this misclassification causes the grey triangle to be displayed, even though no http requests were made by https://www.twitter.com. Long story short, nsMixedContentBlocker itself classifies mixed content correctly, but nsSecureBrowserUIImpl causes this misclassification.

The scenario described above also explains why I got the misclassification with Tanvi's testcase [3], which is basically just an https page that loads an image over http. The current implementation correctly displays a grey triangle even when I put a return statement right after shouldLoad is called in nsMixedContentBlocker [4]. I guess no one realized the problem because it displayed the grey triangle correctly, even if for the wrong reason :-).

Brian, what do you think?

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=370886#c30
[2] https://bugzilla.mozilla.org/show_bug.cgi?id=947079#c11
[3] https://people.mozilla.org/~tvyas/mixeddisplay.html
[4] http://mxr.mozilla.org/mozilla-central/source/content/base/src/nsMixedContentBlocker.cpp#203
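To make the failure mode concrete, the counters behave roughly like this (a sketch using the member names discussed in this comment; ResetSubRequestCounters is a hypothetical helper for illustration, not code from the attached patch):

  // Sketch only: nsSecureBrowserUIImpl keeps per-page tallies of insecure
  // and broken-security subrequests. If nothing resets them when the top
  // level load changes, counts from the old http page leak into the new
  // https page and the UI shows the mixed-content triangle.
  void nsSecureBrowserUIImpl::ResetSubRequestCounters()  // hypothetical helper
  {
    mSubRequestsBrokenSecurity = 0;  // https subrequests with cert overrides
    mSubRequestsNoSecurity = 0;      // plain http subrequests
  }

Note that simply calling such a reset from OnLocationChange would not be enough on its own: as the race described later in this bug shows, OnStateChange notifications for the old page's subrequests can still arrive after the reset.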
Attachment #8367665 - Flags: feedback?(brian)
Comment on attachment 8367665 [details] [diff] [review]
bug_947079_passive_mixed_content_indicator.patch

Review of attachment 8367665 [details] [diff] [review]:
-----------------------------------------------------------------

(In reply to Christoph Kerschbaumer [:ckerschb] from comment #22)
> Brian, what do you think?

Christoph, what happens if you just remove mSubRequestsBrokenSecurity and mSubRequestsNoSecurity and all the code that deals with them? I think that is the best solution. nsSecureBrowserUIImpl's mixed content handling is broken in numerous ways, as you've learned, and it isn't clear from your patch why we need to care about mSubRequestsBrokenSecurity and not mSubRequestsNoSecurity. It seems like if the preceding page had HTTPS subresources that used cert overrides, then an analogous situation would occur.
Attachment #8367665 - Flags: feedback?(brian) → feedback-
First, nsSecureBrowserUIImpl makes not only kittens and babies cry [1], but also me. The whole class seems to rest on a shaky foundation, and we definitely need to do something about that, even though I don't know how we can fix that mess easily.

(In reply to Brian Smith (:briansmith, :bsmith; NEEDINFO? for response) from comment #23)
> Christoph, what happens if you just remove mSubRequestsBrokenSecurity and
> mSubRequestsNoSecurity and all the code that deals with them?

Brian, I followed your suggestion and tried to remove |mSubRequestsBrokenSecurity| and |mSubRequestsNoSecurity|. Unfortunately, those two members are needed to keep track of the UI state in case of redirects (see try run [2]). It seems that a server-side redirect and pasting a new URL into the address bar are (almost) indistinguishable in nsSecureBrowserUIImpl.

My main question is: why are sub-requests belonging to the previous request not cancelled when we paste a new URL into the address bar? Is there a reason behind that? If so, we could probably tackle this problem there, by just killing outdated sub-requests when a new URL is pasted.

Anyway, I attached a patch which only updates |mSubRequestsBrokenSecurity| and |mSubRequestsNoSecurity| in case we have a redirect from either https to http or vice versa. I agree this is not the best solution, but it seems to work just fine. Given that nsSecureBrowserUIImpl is pretty messy anyway, I guess we could accept this patch as an interim solution until we fix the whole setup of this class.

> Refreshing https://twitter.com causes the triangle to switch back
> to the lock because there aren't any http:// requests pending
> from the previous page load when you do the refresh.
> I think this is most easily reproducible on a slower internet
> connection, using an http:// site that is complex like
> http://engadget.com or http://nytimes.com

I verified that this patch fixes the issues reported by Brian (see above), and I am also going to write some mochitests to verify correct behavior.

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=832834
[2] https://tbpl.mozilla.org/?tree=Try&rev=63be49b0ace9
Attachment #8367665 - Attachment is obsolete: true
Attachment #8371121 - Flags: review?(kaie)
Attachment #8371121 - Flags: review?(brian)
I tried to write a testcase to verify correct behavior of the patch. Unfortunately, that's not easy to accomplish. The problem we are facing is pasting a new URL into the address bar while the old page is still loading. We cannot use |window.location|, because that stops (cancels) requests belonging to the previous page. Setting |gURLBar.value| and then synthesizing ENTER does not cause the new page in the URL bar to be loaded; it's not interruptive. I am providing the WIP patch anyway, in case someone can provide some guidance or knows how we can simulate such pasting behavior in the testcase.
Comment on attachment 8371121 [details] [diff] [review]
bug_947079_passive_mixed_content_indicator.patch

Review of attachment 8371121 [details] [diff] [review]:
-----------------------------------------------------------------

Christoph, it seems to me like these remaining mixed-content-blocker issues are all inter-related. You need this code to do special stuff for redirects because we don't have the right redirect logic due to the redirect bug, right? If we fixed the redirect bug first, then would we still need to be doing this? If not, then I suggest we just fix the redirect bug. (We can find a solution to the redirect bug without waiting for those other bugs that people insisted must block it.) Please re-request review if you still think this will be necessary with the redirect bug fixed.
Attachment #8371121 - Flags: review?(brian)
It would be better if someone reviewed this who had more recently touched this code. How about Honza?
Attachment #8371121 - Flags: review?(kaie)
(In reply to Brian Smith (:briansmith, was :bsmith; NEEDINFO? for response) from comment #26)
> Christoph, it seems to me like these remaining mixed-content-blocker issues
> are all inter-related. You need this code to do special stuff for redirects
> because we don't have the right redirect logic due to the redirect bug,
> right? If we fixed the redirect bug first, then would we still need to be
> doing this?

Brian, I spent quite some time figuring out exactly what causes the problem in this bug. Unfortunately, fixing the redirect bug [1] does not fix the issue here: the redirect bug can only block redirected requests before they hit the network. I still think we should work on simplifying nsSecureBrowserUIImpl, but not at this time.

Pasting a new URL into the address bar causes |nsSecureBrowserUIImpl::OnLocationChange| to be triggered, which updates the security state in our running example (initial URL: http://engadget.com/, then pasting https://twitter.com/BrendanEich into the URL bar while http://engadget.com/ is still loading). Creating a new |contentViewer| in nsDocShell then causes all the requests from the previous page (http://engadget.com/) to be cancelled. But now we are facing a race condition: |nsSecureBrowserUIImpl::OnStateChange| is called before all requests from the previous page are cancelled. Initially I thought requests from the previous page were not cancelled at all, but that's not the case; it is indeed a race condition.

Even though I am totally willing to update my patch, I don't see how we can easily find a different solution to this problem. Honza, any suggestions?

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=418354
Flags: needinfo?(honzab.moz)
Attachment #8371121 - Flags: review?(brian)
(In reply to Christoph Kerschbaumer [:ckerschb] from comment #28)
> (In reply to Brian Smith (:briansmith, was :bsmith; NEEDINFO? for response)
> from comment #26)
> > Review of attachment 8371121 [details] [diff] [review]:
> Anyway, I totally think we should work on
> simplifying nsSecureBrowserUIImpl, but not at this time.

I have heard this sentence from time to time for the last few years. And here it is again!

> Honza, any suggestions?

Yes: rewrite nsSecureBrowserUIImpl, or wipe it off the face of the earth! OK, seriously: the way this class is written is horrible. I really don't know how to fix this easily without breaking the whole 'house of cards'. I fixed many bugs here, usually creating another one that was discovered much later. I'm glad I forgot how this all works, since I was hoping long ago that this code would be rewritten from scratch.

Maybe remember which docshell OnLocationChange was last called for, then reset and ignore everything that is not for this docshell. But I believe there is at least one corner case where you will ignore something that should not be ignored. Perhaps give it a try, and *please try to write an automated test* for this bug.

Correctly, the state should be tracked in the loadgroup, or in something you can safely associate (and find) with the top-level loading channel (we have the associatedcontent-something interface, which should be enhanced IMO). That was my draft plan in the past; then we wrote the new mixed content detector, which worked for a time...
Flags: needinfo?(honzab.moz)
(In reply to Honza Bambas (:mayhemer) from comment #30)
> Maybe remember which docshell OnLocationChange was last called for, then
> reset and ignore everything that is not for this docshell. But I believe
> there is at least one corner case where you will ignore something that
> should not be ignored.

Well, the corner case is that I cannot ignore everything in case of a redirect :-)

> Perhaps give it a try, and *please try to write an automated test* for this bug.

Please also see comment 23. Not that trivial :-)
> Please also see comment 23. Not that trivial :-)

^^ Comment 25, of course.
(In reply to Honza Bambas (:mayhemer) from comment #30)
> Maybe remember which docshell OnLocationChange was last called for, then
> reset and ignore everything that is not for this docshell. But I believe
> there is at least one corner case where you will ignore something that
> should not be ignored.

The docshell does not change when we paste a new URI into the address bar. Also, storing the top-level channel, nsIRequest, or any other information available in OnLocationChange does not allow us to tie the nsIRequest occurring in OnStateChange to the former top-level channel.

> Correctly, the state should be tracked in the loadgroup, or in something you
> can safely associate (and find) with the top-level loading channel (we have
> the associatedcontent-something interface, which should be enhanced IMO).
> That was my draft plan in the past; then we wrote the new mixed content
> detector, which worked for a time...

The problem is that channels do not know about their loading document. After discussing the problem with bz over email, we decided to extend the channel to also hold its loading document, which would allow us to disregard loads originating from the previous http page.
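A rough sketch of what that plan would let a listener like nsSecureBrowserUIImpl do (GetLoadingDocument is a hypothetical accessor standing in for whatever bug 976107 adds, and mTopLevelDocument a hypothetical member; only do_QueryInterface and the XPCOM smart pointers are existing APIs):

  // Hypothetical sketch of the bug 976107 plan: drop state updates from
  // subrequests whose loading document is no longer the one being shown.
  nsCOMPtr<nsIChannel> channel = do_QueryInterface(aRequest);
  if (channel) {
    nsCOMPtr<nsIDocument> loadingDoc;
    channel->GetLoadingDocument(getter_AddRefs(loadingDoc));  // hypothetical API
    if (loadingDoc && loadingDoc != mTopLevelDocument) {      // hypothetical member
      return NS_OK;  // subresource of the previous page; ignore it
    }
  }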
We're now far enough along in Beta that we can't really wait for a speculative fix here. The issue has turned out to be more involved than initially thought, and without any reason to suspect it's a recent regression (it looks to have existed since FF23, when MCB shipped), we don't need to keep tracking this; instead, we can consider an uplift to branches whenever a fix is ready, based on a risk/reward evaluation.
Comment on attachment 8371121 [details] [diff] [review]
bug_947079_passive_mixed_content_indicator.patch

I spent very little time looking at this patch, but already it doesn't seem right to me. As the other people who have worked in PSM have already said in this bug, trying to deal with this code is likely to get us nowhere fast. Even though this code is in "Core :: Security: PSM", it's really part of the mixed content blocker, which is in "Core :: DOM Security". Let's delegate the review to one of the peers of that component. If they decide this is still the right approach, then that's fine with me.
Attachment #8371121 - Flags: review?(brian)
Comment on attachment 8371121 [details] [diff] [review]
bug_947079_passive_mixed_content_indicator.patch

I know there is some debate about whether to use this code or wipe out nsSecureBrowserUIImpl, but I noticed something about the current patch...

>diff --git a/security/manager/boot/src/nsSecureBrowserUIImpl.cpp b/security/manager/boot/src/nsSecureBrowserUIImpl.cpp
>+ if (channelOrigURI && channelURI) {
>+   nsAutoCString scheme1, scheme2;
>+   channelOrigURI->GetScheme(scheme1);
>+   channelURI->GetScheme(scheme2);
>+
>+   if (!scheme1.Equals(scheme2)) {

Is checking that the schemes are the same sufficient? Or do we need to specifically check for http vs https? What if the scheme is not https but a safe scheme? Perhaps we should use the nsIProtocolHandler flags here. Do we only go through this code path for http and https requests?
(In reply to Tanvi Vyas [:tanvi] from comment #36)
> Is checking that the schemes are the same sufficient? Or do we need to
> specifically check for http vs https? What if the scheme is not https but a
> safe scheme? Perhaps we should use the nsIProtocolHandler flags here. Do
> we only go through this code path for http and https requests?

The problem in general is more severe. Simply checking for http vs https is not enough. And you are right, we go through this code not only for http and https.

I have been discussing the problem with bz. The channel needs to know its loading document to fix this bug (see bug 976107); otherwise we cannot ignore requests from a previous page. Marking the provided patch as obsolete, because it would only fix that particular webpage, not the "general" problem here.
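To make the nsIProtocolHandler suggestion quoted above concrete, the scheme-equality test could be replaced by a check like the following (a sketch under the assumption that only network-reachable, non-https schemes should count against the security state; NS_URIChainHasFlags and URI_IS_LOCAL_RESOURCE are existing Gecko APIs, but whether these are the right flags here is exactly the open question):

  // Sketch: can this URI actually weaken the page's security state?
  static bool CanWeakenSecurityState(nsIURI* aURI)
  {
    bool isHttps = false;
    if (NS_SUCCEEDED(aURI->SchemeIs("https", &isHttps)) && isHttps) {
      return false;  // still https; nothing is lost
    }
    bool isLocal = false;
    if (NS_SUCCEEDED(NS_URIChainHasFlags(
            aURI, nsIProtocolHandler::URI_IS_LOCAL_RESOURCE, &isLocal)) &&
        isLocal) {
      return false;  // chrome:, resource:, etc. never touch the network
    }
    return true;  // http: and other network-reachable schemes
  }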
Attachment #8371121 - Attachment is obsolete: true
See Also: → 654592
There is no fix for this bug - we should tackle the problem and revamp nsSecureBrowserUIImpl; probably in this bug: https://bugzilla.mozilla.org/show_bug.cgi?id=832834
Status: NEW → RESOLVED
Closed: 10 years ago
Resolution: --- → WONTFIX
Looking at this bug again and wanted to clarify one point...

(In reply to Brian Smith (:briansmith, :bsmith, use NEEDINFO?) from comment #11)
> > I assume that the new grey triangle icon has been added starting with Fx 26
> > - please correct me if I'm wrong.
>
> The passive mixed content indicator (the grey triangle) has been in the
> product for a very long time. There were major changes to how mixed content
> detection and blocking works, around Firefox 23. There have been incremental
> improvements in it since then.

The grey triangle was introduced in Firefox 26 in bug https://bugzilla.mozilla.org/show_bug.cgi?id=865352. Before that, it was a grey globe. Also, the problem in this bug has probably been around for a long time (before Mixed Content Blocker and Firefox 23), but you would see a grey globe for a fully SSL page instead of the grey triangle, so perhaps it was less noticeable.
Attached patch Bug947079-08-27-14.patch (obsolete) — Splinter Review
If the Mixed Content Blocker detects any sign of mixed content (active/display, loaded/blocked), we override the security state set by nsSecureBrowserUIImpl - https://mxr.mozilla.org/mozilla-central/source/security/manager/boot/src/nsSecureBrowserUIImpl.cpp#282. But if the MCB detects no mixed content, we leave the state that nsSecureBrowserUIImpl sets. One way around this particular bug, without rewriting the whole file, would be to do something like this (see the sketch below). Potential problems with this:
* Can we trust the docshell flags we are reading, or does this bug occur when the docshell is in transition?
* Does nsSecureBrowserUIImpl catch mixed content redirects (which MCB currently doesn't detect)?
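A sketch of that idea, using the docshell flag getters quoted in the review below (approximate - the attached patch is the authoritative version, and the choice of lis_high_security as the restored state is an assumption):

  // Sketch of this approach: if nsMixedContentBlocker never flagged any
  // mixed content on this docshell, do not let inherited counters downgrade
  // the lock to the mixed state.
  if (lock == lis_mixed_security &&
      !docShell->GetHasMixedActiveContentLoaded() &&
      !docShell->GetHasMixedDisplayContentLoaded() &&
      !docShell->GetHasMixedActiveContentBlocked() &&
      !docShell->GetHasMixedDisplayContentBlocked()) {
    lock = lis_high_security;  // assumed: restore the fully secure state
  }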
Blocks: 1046645
Active work on a wontfix bug makes no sense, so reopening also because it blocks bug 1046645 now.
Status: RESOLVED → REOPENED
Resolution: WONTFIX → ---
(In reply to Tanvi Vyas [:tanvi] from comment #42)
> Created attachment 8480270 [details] [diff] [review]
> Bug947079-08-27-14.patch

This patch fixes the directory tiles issue in bug 1046645, but as Honza says in comment 30, I don't know what other bugs it creates. Not sure who to ask for feedback or review on this.
Attachment #8480270 - Flags: review?(dkeeler)
Comment on attachment 8480270 [details] [diff] [review]
Bug947079-08-27-14.patch

Review of attachment 8480270 [details] [diff] [review]:
-----------------------------------------------------------------

Instead of adding more hacks, I think it would be worth it to find the root cause in this case. I know this code is painful to work with, but we need to start repaying technical debt here, rather than kicking it down the line.

::: security/manager/boot/src/nsSecureBrowserUIImpl.cpp
@@ +278,5 @@
>   if (!docShell)
>     return NS_OK;
>   }
>
>   // Has a Mixed Content Load initiated in nsMixedContentBlocker?

Does it make sense to reference nsMixedContentBlocker here? Aren't we asking the docshell?

@@ +280,5 @@
>   }
>
>   // Has a Mixed Content Load initiated in nsMixedContentBlocker?
> + // * If not, the state should not be broken because of mixed content;
> + //   overriding the previous state if it is inaccurately flagged as mixed.

But why was it inaccurately flagged as mixed content? Shouldn't we fix that? My guess is that one (or all) of mSubRequestsBrokenSecurity, mSubRequestsNoSecurity, or mNotifiedSecurityState are not getting cleared appropriately (or maybe a load associated with the previous document is incorrectly affecting the state of the new document?).

@@ +281,5 @@
>
>   // Has a Mixed Content Load initiated in nsMixedContentBlocker?
> + // * If not, the state should not be broken because of mixed content;
> + //   overriding the previous state if it is inaccurately flagged as mixed.
> + if ((lock == lis_mixed_security) &&

nit: the extra parentheses are unnecessary

@@ +285,5 @@
> + if ((lock == lis_mixed_security) &&
> +     !docShell->GetHasMixedActiveContentLoaded() &&
> +     !docShell->GetHasMixedDisplayContentLoaded() &&
> +     !docShell->GetHasMixedActiveContentBlocked() &&
> +     !docShell->GetHasMixedDisplayContentBlocked() ) {

nit: don't need the extra space before the final ")"
Attachment #8480270 - Flags: review?(dkeeler) → review-
(In reply to David Keeler (:keeler) [use needinfo?] from comment #45)
> Comment on attachment 8480270 [details] [diff] [review]
> Bug947079-08-27-14.patch
> Instead of adding more hacks, I think it would be worth it to find the root
> cause in this case. I know this code is painful to work with, but we need to
> start repaying technical debt here, rather than kicking it down the line.

I've seen three reports of this bug in the last 2 weeks:
1) Bug 1046645 (which was fixed in a different way)
2) Bug 1062605 (which should probably be marked as a duplicate of this one)
3) An email to dev-security@ - https://groups.google.com/forum/#!topic/mozilla.dev.security/EAAscF6RVcw

Testcases: http://people.mozilla.org/~tvyas/wrongicon.html or http://dvwa.people.yandex.net/1.html

I can reproduce all three cases, and we see the grey mixed display content icon when there isn't any mixed display content loaded on the current page. If I apply the patch, all three cases are fixed and we see the lock icon.

I do agree that we need to revamp the entire file and fix the underlying issues in nsSecureBrowserUIImpl, but until someone has the time and strength to pick up https://bugzilla.mozilla.org/show_bug.cgi?id=832834, that's not going to happen. So we have to decide whether we want to accept this hacky fix or let this bug persist. Christoph has already tried once to fix the underlying issue without the overall rewrite (see comments 35, 37, and 40) and resolved to close this bug as won't fix. It's only reopened because I proposed this hack and because we've seen so many reports lately. If the hack isn't sufficient, I'm inclined to re-close this as won't fix.

> ::: security/manager/boot/src/nsSecureBrowserUIImpl.cpp
> @@ +278,5 @@
> >   if (!docShell)
> >     return NS_OK;
> >   }
> >
> >   // Has a Mixed Content Load initiated in nsMixedContentBlocker?
>
> Does it make sense to reference nsMixedContentBlocker here? Aren't we asking
> the docshell?

The flags are only set by nsMixedContentBlocker, and that's why we reference it here. We want nsMixedContentBlocker's decisions to override whatever decisions nsSecureBrowserUIImpl has made, so I think it is good to reference.

> @@ +280,5 @@
> But why was it inaccurately flagged as mixed content? Shouldn't we fix that?
> My guess is that one (or all) of mSubRequestsBrokenSecurity,
> mSubRequestsNoSecurity, or mNotifiedSecurityState are not getting cleared
> appropriately (or maybe a load associated with the previous document is
> incorrectly affecting the state of the new document?).

See above. Also, from this test case http://people.mozilla.org/~tvyas/wrongicon.html, we can see that a load associated with the previous document is in fact affecting the new document. Changing the onunload image in that testcase to an https image will show a lock icon for https://google.com - http://people.mozilla.org/~tvyas/righticon.html

I fixed the other 2 nits.
Attachment #8484382 - Flags: review?(dkeeler)
Comment on attachment 8484382 [details] [diff] [review]
Bug947079-09-04-14.patch

Review of attachment 8484382 [details] [diff] [review]:
-----------------------------------------------------------------

Ok, this doesn't seem like too much of a hack (indeed, maybe in the future we should just always ask the docshell what it thinks the security state is, since it seems to have a better idea than nsSecureBrowserUIImpl). I would like two test cases, however: one that's basically the example you linked above, and another where the destination page actually does have mixed content.
Attachment #8484382 - Flags: review?(dkeeler) → review+
Attachment #8480270 - Attachment is obsolete: true
Summary: Passive mixed content indicator (grey triangle) shows up on twitter.com, news.ycombinator.com (and other sites) but no mixed content loads appear in console → Passive mixed content indicator (grey triangle) shows up on full HTTPS sites that have no mixed content
Tanvi, since you are fixing this, I am going to assign the bug to you as well.
Assignee: mozilla → tanvi
Depends on: 1072421
Attached patch bug947079-tests-10-01-14C.patch (obsolete) — Splinter Review
Tests to check the security state of a page. An http page is loaded with an onunload event listener. When the browser navigates to another page and the onunload handler is triggered, the http image load it performs should not affect the security state of the new page. These are browser-chrome tests instead of mochitest-plain, because mochitest-plain doesn't work with the top-level navigation that's required to test this. Note that there is already an isSecurityState() function for mochitest-plain: http://mxr.mozilla.org/mozilla-central/source/security/manager/ssl/tests/mochitest/mixedcontent/mixedContentTest.js#166
Attachment #8498534 - Flags: review?(dkeeler)
Comment on attachment 8498534 [details] [diff] [review]
bug947079-tests-10-01-14C.patch

Review of attachment 8498534 [details] [diff] [review]:
-----------------------------------------------------------------

Generally, this looks good. I think there could be some cleanup. In particular, there should be some test harness utility functions that would get rid of a lot of the boilerplate code. I'm going to say r- for now, since I think you should probably get a review from a browser peer anyway.

::: browser/base/content/test/general/browser.ini
@@ +294,5 @@
> [browser_bug902156.js]
> skip-if = e10s
> [browser_bug906190.js]
> skip-if = buildapp == "mulet" || e10s # Bug ?????? - test directly manipulates content (strange - gets an element from a child which it tries to treat as a string, but that fails)
> +[browser_bug947079.js]

nit: I know there are tons of browser_bug??????.js tests, but I think it's useful to give them a more descriptive name. Maybe something like browser_mixedContentInOnunload.js?

::: browser/base/content/test/general/browser_bug947079.js
@@ +1,2 @@
> +/*
> + * Tests for Bug 947079

nit: a high-level description of what the bug was and how we expect things to work would be appreciated. Also, needs a modeline and license (I guess we're supposed to use CC0 now).

@@ +11,5 @@
> +const gHttpTestRoot2 = "http://test1.example.com/browser/browser/base/content/test/general/";
> +const gHttpsTestRoot = "https://example.com/browser/browser/base/content/test/general/";
> +const gHttpsTestRoot2 = "https://test1.example.com/browser/browser/base/content/test/general/";
> +
> +var origBlockDisplay;

nit: let, not var

@@ +17,5 @@
> +var gTestBrowser = null;
> +
> +registerCleanupFunction(function() {
> +  // Set preferences back to their original values
> +  Services.prefs.setBoolPref(PREF_DISPLAY, origBlockDisplay);

Isn't there a clearUserPref that sets a pref back to the default?

@@ +39,5 @@
> +
> +  var newTab = gBrowser.addTab();
> +  gBrowser.selectedTab = newTab;
> +  gTestBrowser = gBrowser.selectedBrowser;
> +  newTab.linkedBrowser.stop()

Why is this line necessary? Are we getting spurious events or something?

@@ +43,5 @@
> +  newTab.linkedBrowser.stop()
> +
> +  gTestBrowser.addEventListener("load", SecStateTest1A, true);
> +  var url = gHttpTestRoot + "file_bug947079.html";
> +  gTestBrowser.contentWindow.location = url;

I think there's a whenLoaded or similar function that takes a url and a callback, loads the url, and calls the callback upon completion (thus encapsulating a lot of the boilerplate things).

@@ +67,5 @@
> +  var url = gHttpTestRoot + "file_bug947079.html";
> +  gTestBrowser.contentWindow.location = url;
> +}

nit: "https page that has..."

@@ +89,5 @@
> +  var ui = gTestBrowser.securityUI;
> +
> +  //keeler: What should we do in the case of no ui?
> +  //For now, it will fall into the isInsecure state below.
> +  //Should we handle this another way instead?

Are there cases where we have no security UI? That seems like a bug. Maybe punt for now and file a follow-up to determine when/why this would happen?

@@ +95,5 @@
> +  }
> +
> +  const wpl = Components.interfaces.nsIWebProgressListener;
> +
> +  //determine the security state

nit: space after '//'

@@ +97,5 @@
> +  const wpl = Components.interfaces.nsIWebProgressListener;
> +
> +  //determine the security state
> +  var isSecure = ui &&
> +                 (ui.state & wpl.STATE_IS_SECURE);;

nit: I think each of these can fit on one line. Also, this line has two ';' at the end.

@@ +109,5 @@
> +  if (isSecure) {
> +    actualState = "secure";
> +  } else if (isBroken) {
> +    actualState = "broken";
> +  } else if (isInsecure) {

Isn't this backwards? Shouldn't we start with 'if (isInsecure) { actualState = "insecure" }'? Although, we only ever test for secure vs. broken, so why have "insecure" here at all?

@@ +113,5 @@
> +  } else if (isInsecure) {
> +    actualState = "insecure";
> +  }
> +
> +  ok(expectedState == actualState, "Expected state " + expectedState + " and the actual state is " + actualState + ".");

I think there's an 'is(expected, actual, message);' test function
Attachment #8498534 - Flags: review?(dkeeler) → review-
Attached patch bug947079-tests-10-08-14.patch (obsolete) — Splinter Review
Comments addressed.

(In reply to David Keeler (:keeler) [use needinfo?] from comment #51)
> Comment on attachment 8498534 [details] [diff] [review]
> bug947079-tests-10-01-14C.patch
>
> Review of attachment 8498534 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Generally, this looks good. I think there could be some cleanup. In
> particular, there should be some test harness utility functions that would
> get rid of a lot of the boilerplate code. I'm going to say r- for now, since
> I think you should probably get a review from a browser peer anyway.

Okay, I will also flag a browser peer for review.

> @@ +89,5 @@
> > +  var ui = gTestBrowser.securityUI;
> > +
> > +  //keeler: What should we do in the case of no ui?
> > +  //For now, it will fall into the isInsecure state below.
> > +  //Should we handle this another way instead?
>
> Are there cases where we have no security UI? That seems like a bug. Maybe
> punt for now and file a follow-up to determine when/why this would happen?

I'm not aware of any cases where there is no security UI, but this case is covered by setting the state to unknown.

> @@ +109,5 @@
> > +  if (isSecure) {
> > +    actualState = "secure";
> > +  } else if (isBroken) {
> > +    actualState = "broken";
> > +  } else if (isInsecure) {
>
> Isn't this backwards? Shouldn't we start with 'if (isInsecure) { actualState
> = "insecure" }'?
> Although, we only ever test for secure vs. broken, so why have "insecure"
> here at all?

It will give us more information if the test fails - we'll know if the actual state was insecure.
Attachment #8498534 - Attachment is obsolete: true
Attachment #8501952 - Flags: review?(dkeeler)
Push to try shows we are breaking 3 of our mixed content tests that involve redirects:

* security/manager/ssl/tests/mochitest/mixedcontent/test_dynUnsecureIframeRedirect.html
* security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureIframeRedirect.html

Note that these two tests pass when we apply both the patch for this bug and the patch for the mixed content redirect bug 418354.

* security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureRedirect.html

This test fails with and without the patch for bug 418354. Still investigating.
Comment on attachment 8501952 [details] [diff] [review] bug947079-tests-10-08-14.patch I'll wait until the tests are fixed before reviewing.
Attachment #8501952 - Flags: review?(jaws)
Comment on attachment 8501952 [details] [diff] [review] bug947079-tests-10-08-14.patch Tanvi told me through IRC that the tests here won't need to be changed to fix the other tests.
Attachment #8501952 - Flags: review?(jaws)
(In reply to Tanvi Vyas [:tanvi] from comment #56)
> Push to try shows we are breaking 3 of our mixed content tests that involve
> redirects.

This patch assumes that the Mixed Content Blocker does a better job of determining whether the security state of a page is broken than nsSecureBrowserUIImpl does. This is true *except* in the case of redirects. If an https subresource redirects to an http location, the Mixed Content Blocker today doesn't catch this. With bug 418354, we fix that so that we do catch redirects. Hence, this bug is blocked on bug 418354 (which is getting close to completion).

> * security/manager/ssl/tests/mochitest/mixedcontent/test_dynUnsecureIframeRedirect.html
> * security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureIframeRedirect.html
> Note that these two tests pass when we apply both the patch for this bug and
> the patch for the mixed content redirect bug 418354.

The above explains why these two tests break with just the patch for bug 947079, but pass when we apply both the patch for 947079 and the patch for 418354.

> * security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureRedirect.html
> This test fails with and without the patch for bug 418354.

What it doesn't explain is why this test fails even after we fixed Mixed Content Blocking redirects. I'll explain this shortly.
security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureRedirect.html passes in mozilla-central today because nsSecureBrowserUIImpl detects the redirected http load and marks the state broken. When we apply bug 418354, it *should* also detect the redirect to the http load. We do in fact call nsMixedContentBlocker::AsyncOnChannelRedirect(), which calls nsMixedContentBlocker::ShouldLoad. But ShouldLoad says we should ACCEPT the load, because it has the wrong loadingPrincipal/loadingNode.

In more detail, when we call ShouldLoad(), we have an insecure contentLocation of http://example.com/tests/security/manager/ssl/tests/mochitest/mixedcontent/moonsurface.jpg. We check the scheme of the loadingNode or loadingPrincipal and find that it is also insecure, so we accept the load: http://mochi.test:8888/tests/security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureRedirect.html. The loadingNode - the place where the insecure image is going to be loaded - should be https://example.com/tests/security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureRedirect.html?runtest, but instead we get a loadingNode from the mochitest framework. I'm not sure how to resolve this at the moment. There is no problem with the patch or test in this bug, but likely a problem with loadInfo or nsIContentPolicy in general.
Depends on: 1038756, 418354
After looking at security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureRedirect.html a bit more, it looks like the test succeeds if you change the <img> to a <script>. With a <script>, there are two calls to nsMixedContentBlocker::AsyncOnChannelRedirect() - one for http://mochi.test:8888 and one for the test page https://example.com. But with an <img>, there is only one. This implies that there is a caching issue with <img> redirects, where we don't go through AsyncOnChannelRedirect for a cached image redirect. I still have to do some digging to find the exact bug.
Here are steps to reproduce the failure in security/manager/ssl/tests/mochitest/mixedcontent/test_unsecureRedirect.html without using the actual mochitest:

* Apply the patches in this bug.
* Apply the patches in bug 418354.
* Go to https://people.mozilla.org/~tvyas/mixeddisplayredir.html. You will see the grey triangle icon because the page has mixed display content (via an image redirect).
* Open a new tab and again go to https://people.mozilla.org/~tvyas/mixeddisplayredir.html. This time, you will see the green lock.

Somehow the image redirect is cached and AsyncOnChannelRedirect is not called. Hence, Mixed Content Blocker is not triggered and we do not detect the mixed state.
Comment on attachment 8501952 [details] [diff] [review]
bug947079-tests-10-08-14.patch

Review of attachment 8501952 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good with comments addressed.

::: browser/base/content/test/general/browser_mixedContentFromOnunload.js
@@ +1,1 @@
> +/*

nit: modeline/license boilerplate

@@ +2,5 @@
> + * Tests for Bug 947079 - Fix bug in nsSecureBrowserUIImpl that sets the wrong
> + * security state on a page because of a subresource load that is not on the
> + * same page.
> + */
> +

nit: not sure the extra blank line is necessary

@@ +24,5 @@
> +
> +function test() {
> +  waitForExplicitFinish();
> +  SpecialPowers.pushPrefEnv({"set": [["security.mixed_content.block_active_content", true],
> +                                     ["security.mixed_content.block_display_content", false]]}, SecStateTests);

Use PREF_DISPLAY/PREF_ACTIVE here? (Or I guess just get rid of the consts, since it doesn't seem like they're used elsewhere.) Since this is a browser-chrome test, don't we already have chrome privileges? I would assume SpecialPowers wouldn't be necessary (or even available).

@@ +43,5 @@
> +function SecStateTest1A() {
> +  whenLoaded(gTestBrowser, SecStateTest1B);
> +  let url = gHttpsTestRoot1 + "file_mixedContentFromOnunload_test1.html";
> +  gTestBrowser.contentWindow.location = url;
> +}

nit: add a blank line after this one

@@ +93,5 @@
> +    actualState = "unknown";
> +  }
> +
> +  // Is more than one bit set?
> +  if (ui) {

It might be easier to just check for !ui and bail with an error early in this function.

@@ +95,5 @@
> +
> +  // Is more than one bit set?
> +  if (ui) {
> +    let state = ui.state & (wpl.STATE_IS_SECURE | wpl.STATE_IS_BROKEN | wpl.STATE_IS_INSECURE);
> +    if ( (state-1) & state ) {

nit: I think the spacing should be more like this:
if ((state - 1) & state) {

@@ +97,5 @@
> +  if (ui) {
> +    let state = ui.state & (wpl.STATE_IS_SECURE | wpl.STATE_IS_BROKEN | wpl.STATE_IS_INSECURE);
> +    if ( (state-1) & state ) {
> +      actualState = "unknown";
> +    }

nit: I think the indentation is off by one space here
Attachment #8501952 - Flags: review?(dkeeler) → review+
Comment on attachment 8501952 [details] [diff] [review]
bug947079-tests-10-08-14.patch

Review of attachment 8501952 [details] [diff] [review]:
-----------------------------------------------------------------

r=me. I agree with David's earlier comments. I've also added some of my own below too.

::: browser/base/content/test/general/browser_mixedContentFromOnunload.js
@@ +29,5 @@
> +}
> +
> +function SecStateTests() {
> +  let newTab = gBrowser.addTab();
> +  gBrowser.selectedTab = newTab;

gBrowser.selectedTab = gBrowser.addTab();
You can remove `newTab` since it is unused outside of these two lines.

@@ +49,5 @@
> +  whenLoaded(gTestBrowser, SecStateTest2A);
> +
> +  // check security state. Since current url is https and doesn't have any
> +  // mixed content resources, we expect it to be secure.
> +  isSecurityState("secure");

Move this up to the start of the function, and move the `whenLoaded` call below the `isSecurityState` call, since `whenLoaded` is used to prepare for the next test.

@@ +95,5 @@
> +
> +  // Is more than one bit set?
> +  if (ui) {
> +    let state = ui.state & (wpl.STATE_IS_SECURE | wpl.STATE_IS_BROKEN | wpl.STATE_IS_INSECURE);
> +    if ( (state-1) & state ) {

You can remove this, and just make the if/else if/else above also check that the other flags are false. For example:

if (isSecure && !(isBroken || isInsecure)) {
  ...
} else if (isBroken && !(isSecure || isInsecure)) {
  ...
} else if (isInsecure && !(isSecure || isBroken)) {
  ...
} else {
  ...
}

On another note, in what case can we have both the isSecure and isInsecure flag set at the same time?
Attachment #8501952 - Flags: review?(jaws) → review+
Depends on: 1082837
Depends on: 1082947
Attached patch bug947079-tests-10-22-14.patch (obsolete) — Splinter Review
Comments addressed.

(In reply to David Keeler (:keeler) [use needinfo?] from comment #63)
> Comment on attachment 8501952 [details] [diff] [review]
> bug947079-tests-10-08-14.patch
>
> Use PREF_DISPLAY/PREF_ACTIVE here? (Or I guess just get rid of the consts,
> since it doesn't seem like they're used elsewhere.)
> Since this is a browser-chrome test, don't we already have chrome
> privileges? I would assume SpecialPowers wouldn't be necessary (or even
> available).

It looks like this is how pushPrefEnv is used, even in browser-chrome tests: http://mxr.mozilla.org/mozilla-central/search?string=pushprefenv&find=\.js&findi=\.js&filter=^%5B^\0%5D*%24&hitlimit=&tree=mozilla-central

pushPrefEnv is defined in the SpecialPowers API: http://mxr.mozilla.org/mozilla-central/source/testing/specialpowers/content/specialpowersAPI.js#931. Perhaps there is another preferred way to change prefs in a browser-chrome test?

> @@ +93,5 @@
> > +
> > +  // Is more than one bit set?
> > +  if (ui) {
>
> It might be easier to just check for !ui and bail with an error early in
> this function.

Done with:

if (!ui) {
  ok(false, "No security UI to get the security state");
  return;
}

(In reply to Jared Wein [:jaws] (please needinfo? me) from comment #64)
> On another note, in what case can we have both the isSecure and isInsecure
> flag set at the same time?

None that I am aware of.
Attachment #8501952 - Attachment is obsolete: true
Attachment #8509849 - Flags: review+
(In reply to Tanvi Vyas [:tanvi] from comment #62)
> Somehow the image redirect is cached and AsyncOnChannelRedirect is not
> called. Hence, Mixed Content Blocker is not triggered and we do not detect
> the mixed state.

This is because of bug https://bugzilla.mozilla.org/show_bug.cgi?id=1082837. This bug is dependent on it.
This came up in manual testing for Firefox 34.0b10. Cornel, can you give any details of what you saw, what url you were visiting, etc?
Flags: needinfo?(cornel.ionce)
We've encountered this across platforms on Fx 34.0b10 (build ID: 20141117202603); the grey triangle appears on the websites mentioned by Andrei in comment 2. For some of them it's intermittent, but it's always displayed on myspace.com. Here are the errors from the console: http://pastebin.mozilla.org/7380530. Please let me know if I can provide further assistance on this matter.
Flags: needinfo?(cornel.ionce)
The tests and the patch in this bug are ready to go. But we can't land until these two bugs are completed and landed:

https://bugzilla.mozilla.org/show_bug.cgi?id=1082947 - done but needs tests
https://bugzilla.mozilla.org/show_bug.cgi?id=1082837 - not done
I looked into this a bit more because I wanted to see how nsSecureBrowserUIImpl detects this cached image redirect issue. The reason nsSecureBrowserUIImpl knows that the cached image request resulted in an HTTP URI instead of an HTTPS URI is that it uses the imgRequest URI to test whether the URL is https; for cached image requests, the imgRequest URI must be the URI of the final location. This also means that nsSecureBrowserUIImpl wouldn't catch https->http->https redirects, which is a known issue with a bug on file - https://bugzilla.mozilla.org/show_bug.cgi?id=609433.

Anyway, the way to get this bug landed is still to get bug 1082837 working. Seth has provided a patch in that bug that gives us more security info on cached image requests, and I need to update that to add a call to nsMixedContentBlocker.
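In code terms, the detection described above looks roughly like this inside nsSecureBrowserUIImpl's per-request state handling (a sketch only; imgIRequest and its URI attribute are existing APIs, and the claim that the URI reflects the final location for cached requests comes from this comment):

  // Sketch: why the cached https->http image redirect is still caught here.
  // For image loads, the imgIRequest's URI is already the final (http)
  // destination, so a plain scheme check flags the subrequest as insecure.
  bool endedUpInsecure = false;
  nsCOMPtr<imgIRequest> imgReq = do_QueryInterface(aRequest);
  if (imgReq) {
    nsCOMPtr<nsIURI> uri;
    if (NS_SUCCEEDED(imgReq->GetURI(getter_AddRefs(uri))) && uri) {
      bool isHttps = false;
      uri->SchemeIs("https", &isHttps);
      endedUpInsecure = !isHttps;
    }
  }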
Hi Tanvi, are you still working on this and its related bugs? I think it may be too late for this to go out with Firefox 37 but it could still make it to 39 and 38 if they are affected. When you do land fixes, it also might be good to get QA help in testing the fix.
Flags: needinfo?(tanvi)
Problem occurring on FF36. Warning triangle - on some pages it indicates SHA-1 certificate on the console, on other pages no SHA-1 warning, but still grey triangle. Resulting in lost business on "insecure" order pages. Advising clients to use other browsers :-(
This bug is ready to go but is blocked on bug 1082837 and 1082947, which still need tests.
Flags: needinfo?(tanvi)
(In reply to Paul from comment #75)
> Problem occurring on FF36.

Probably because the site uses RC4. Please file a bug with the URL.

> Warning triangle - on some pages it indicates
> SHA-1 certificate on the console, on other pages no SHA-1 warning, but still
> grey triangle. Resulting in lost business on "insecure" order pages.
> Advising clients to use other browsers :-(

Using other browsers doesn't make your customers safe; it's just that other browsers don't display the necessary warnings.
I think what Paul is losing business over is the fact that the grey mixed content triangle is showing up when there is no mixed content on the page. That does in fact happen sometimes because of this bug.
Firefox 36 and later will display the grey triangle for RC4 sites. If the problem only started with Firefox 36, it may well be caused by RC4.
Thanks for the response. A few comments:

"I think what Paul is losing business over is the fact that the grey mixed content triangle is showing up when there is no mixed content on the page. That does in fact happen sometimes because of this bug." - exactly.

1) There is no evidence of mixed content. The console shows SHA-1 as the only security problem. The problem still shows on a test page with no images, scripts, etc.
2) RC4 is a possibility - we do not require it, as we have TLS 1.2. We were considering disabling RC4, but were investigating the potential client impact (especially older browsers).
3) The comment about keeping customers safe: clearly we want to keep the customers safe, but when Chrome, Opera and Explorer(!) all indicate the site is safe, and FF traffic is falling off because of histrionic warnings with no explanation (except that the connection is insecure), I am not sure if this is good for FF (my preferred browser) or us!
4) The primary site is www.bookbrowse.com. All URLs are directed to use https. We're also investigating our cert chain to see why our newly-installed SHA-2 certificate still indicates SHA-1 in all browsers.

Let me know if additional info is required. Just received another call from an end-user to say FF flashed up a warning (?) that the site was insecure...
Hi Paul,

This does in fact look like an RC4 warning, and not related to this bug (which is about mixed content). Masatoshi, can you test https://www.bookbrowse.com and confirm this?

Oddly, I can't access this page at all using Nightly. I get a "Secure Connection Failed" message:

  The connection to the server was reset while the page was loading.
  The page you are trying to view cannot be shown because the authenticity
  of the received data could not be verified. Please contact the website
  owners to inform them of this problem.

On Chrome, I see it does use RC4, "obsolete cryptography", but it still shows the lock:

  Your connection to www.bookbrowse.com is encrypted with obsolete
  cryptography. The connection uses TLS 1.2. The connection is encrypted
  using RC4_128, with SHA1 for message authentication and RSA as the key
  exchange mechanism.
(In reply to Tanvi Vyas [:tanvi] from comment #81)
> Hi Paul,
> This does in fact look like an RC4 warning, and not related to this bug
> (which is about mixed content). Masatoshi, can you test
> https://www.bookbrowse.com and confirm this?

Indeed. Filed bug 1143072 to track this.

> Oddly, I can't access this page at all using Nightly. I get a "Secure
> Connection Failed" message:
> The connection to the server was reset while the page was loading.

Nightly will not allow access to an RC4-only server unless it is whitelisted.
Thanks guys for your awesome-ness looking at this and responding so fast. A bit stressful over here!

I agree it could be an RC4 warning, but the only warning in the console is SHA-1 - if it is a weak cipher warning, it would be good to see it on the console. Also, as the connection is untrusted, FF does not give me any cert info (?), which makes diagnosis more difficult.

FYI, I am also working with TrustWave - apparently they are seeing a number of sites with this issue. They sent me a new SHA-2 cert, but once installed, browsers still see it as SHA-1. Weird. Wonder what advice they will give.
Okay, so I have disabled RC4; we now get an "A" grade at SSL Labs, and still the grey triangle with FF. I'm back to thinking this issue is caused by SHA-1. I am working on that, but I am sure others face the same issue...
(In reply to Paul from comment #84)
> Okay, so I have disabled RC4; we now get an "A" grade at SSL Labs, and
> still the grey triangle with FF. I'm back to thinking this issue is caused
> by SHA-1. I am working on that, but I am sure others face the same issue...

Hi Paul,

Thank you for fixing the RC4 issue! When I go to https://www.bookbrowse.com/ on Nightly, I see the lock icon. I see the SHA-1 warnings in the web console, but they don't affect the lock icon as far as I can tell. Can you try reloading the page and see if you still see the grey triangle?
Yeah, sorry, still triangle time. At least now FF allows me to see the certificate information :-) Got 5 PCs in front of me. All show the warning except for the one running FF 34.0.5. Cleared cache, cookies, rebooted the local PC - no joy. Puzzling.
Well, I continue to investigate this, and the strangeness continues. The homepage now gets a grey padlock. I have made no changes. Some of the internal pages now have the padlock, some do not. On one page it showed the warning triangle; I commented out the entire code - padlock. Uncommented the code - still a padlock! So it is not SHA-1 that is the issue; there is something else, which does not appear on the console and is a trigger unique to Firefox 36.0.1. FWIW, all pages appear fine on Nightly.
Probably final update - hopefully it will help somebody. By turning on and off parts of our low-traffic pages, I have found a possible smoking gun. The following sample code snippet causes a grey warning triangle:

<style>
.testicon, .testicon:visited {
  height: 30px;
  width: 100px;
  border: 2px solid #424749;
  display: block;
  background-image: url(https://www.bookbrowse.com/site/images/spam-free2.gif);
  background-repeat: no-repeat;
  background-position: center;
}
a.testicon:hover {
  background-image: url(https://www.bookbrowse.com/site/images/spam-free1.gif);
  background-repeat: no-repeat;
  background-position: center;
}
</style>
<a class="testicon" href="javascript:myfunction()"></a>

I tried relative as well as absolute links in the CSS. I validated this on other pages also. In most cases the background-image rule in the CSS causes the issue. But not always - on some pages (such as the homepage) other background-images seem fine. This puzzled me for a while; then with more testing I established that it affects *some* .gif or .jpg images!! Replacing them with .png images gives the padlock. With hundreds of images, this does not help me much, but may help somebody else.
Looking at the code again, there is one corner case which this patch may not cover.

The code in this bug checks against only lis_mixed_security. It doesn't check against lis_broken_security, because security could be broken for another reason (e.g. use of RC4). So if the security state of the page is broken because of code in nsMixedContentBlocker.cpp, we may end up setting lis_broken_security instead of lis_mixed_security here:
https://mxr.mozilla.org/mozilla-central/source/security/manager/boot/src/nsSecureBrowserUIImpl.cpp#1193

Note this is just theoretical right now. I'm not sure if nsMixedContentBlocker runs before or after nsSecureBrowserUIImpl::UpdateSecurityState (or both before and after). What we could do is check against lis_broken_security and then check the web progress listener flags for other causes of a broken state (i.e. STATE_USES_WEAK_CRYPTO, STATE_USES_SSL_3). I'm not sure there is an nsIWebProgressListener flag for every reason a state could be broken.

Anyway, I still plan to land the code in this bug once inbound opens (since the blockers have landed). I've written this up for future reference in case we continue to get wrong-icon complaints; if we do, we can investigate further to see whether this theoretical problem is a real one.
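A sketch of the disambiguation suggested above, using the two nsIWebProgressListener flags named in this comment (illustrative JS only; whether a flag exists for every possible broken-state cause is exactly the open question):

  const Ci = Components.interfaces;

  // Illustrative only: before attributing a broken state to mixed content,
  // rule out the other known causes that have dedicated state flags.
  function brokenForKnownOtherReason(state) {
    return (state & Ci.nsIWebProgressListener.STATE_USES_WEAK_CRYPTO) !== 0 ||
           (state & Ci.nsIWebProgressListener.STATE_USES_SSL_3) !== 0;
  }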
This actually needs another push to try since it's been so long: https://treeherder.mozilla.org/#/jobs?repo=try&revision=63167f600dbc
Rebased tests.
Attachment #8509849 - Attachment is obsolete: true
Attachment #8583320 - Flags: review+
Status: REOPENED → RESOLVED
Closed: 10 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
Target Milestone: --- → mozilla39
Release Note Request (optional, but appreciated)
[Why is this notable]: Users used to get a wrong mixed content indication.
[Suggested wording]:
[Links (documentation, blog post, etc)]:
relnote-firefox: --- → ?
See Also: → 997873
See Also: → 620224
See Also: → 527318
Tanvi, can you help me rephrase this release note so it is still short, but has more detail? Maybe something like:

- Security state indicator now ignores the previous page's resource loads
- Improved accuracy for mixed content indicator
- Some mixed content indicator false positives eliminated
Flags: needinfo?(tanvi)
(In reply to Liz Henry (:lizzard) from comment #100)
> Tanvi, can you help me rephrase this release note so it is still short, but
> has more detail? Maybe something like:
>
> - Security state indicator now ignores the previous page's resource loads
> - Improved accuracy for mixed content indicator
> - Some mixed content indicator false positives eliminated

Hi Liz, how about this:

- The Security state indicator on a page now correctly ignores loads caused by previous pages
- Improved accuracy for the mixed content indicator, since many false positives were eliminated

Not sure how much detail we want. This patch makes it so that the grey triangle with the exclamation sign won't show up when it's not supposed to. Pages that are fully encrypted will display the lock icon, where before they sometimes inaccurately showed the grey triangle.
Flags: needinfo?(tanvi)
Thanks Tanvi, I went with "The Security state indicator on a page now correctly ignores loads caused by previous pages".
We're actually still seeing this bug on 39.0b1-build2 (20150523155636), using Windows 10 (x64), Ubuntu 14.04 (x64), Mac OS X 10.8.5 and Windows 8 (x86). *.yahoo.com pages seem to replicate this pretty well after a while.
Given the previous comment, and the fact that people still reproduced this today with Firefox 39 Beta 6, I think the bug needs to be reopened and reviewed.
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Flags: needinfo?(tanvi)
(In reply to Andrei Vaida, QA [:avaida] from comment #103)
> We're actually still seeing this bug on 39.0b1-build2 (20150523155636),
> using Windows 10 (x64), Ubuntu 14.04 (x64), Mac OS X 10.8.5 and Windows 8
> (x86). *.yahoo.com pages seem to replicate this pretty well after a while.

There are valid reasons to see the grey triangle: the page could have mixed display content, or the server could be using RC4. If you refresh the page or revisit it in a new tab, do you see the lock icon?

Andrei, can you provide an example URL and a screenshot? When taking the screenshot, if you could open the security pane of the web console, that would be great.

Florin, are users complaining about this issue? Thanks!
Flags: needinfo?(tanvi) → needinfo?(andrei.vaida)
(In reply to Tanvi Vyas [:tanvi] from comment #105)
> Florin, are users complaining about this issue?

I am not aware of any user complaints about this; it's just the reports from the team here. I'll let Andrei provide additional information as requested.
(In reply to Tanvi Vyas [:tanvi] from comment #105)
> There are valid reasons to see the grey triangle: the page could have mixed
> display content, or the server could be using RC4.

This does indeed seem to be the case here, as the pages that were reported showing the grey triangle all contain mixed passive/display content. An example of such a page is https://uk.news.yahoo.com/ - here's a screenshot showing the security warning for it: http://i.imgur.com/pRAzLKe.png

Setting the status of this bug back to Resolved Fixed, since the behavior reported here is expected.
Status: REOPENED → RESOLVED
Closed: 9 years ago
Flags: needinfo?(andrei.vaida)
Resolution: --- → FIXED
Product: Core → Core Graveyard
Duplicate of this bug: 542230