Closed Bug 455367 Opened 16 years ago Closed 15 years ago

bogus mixed content warnings if any image fails to load

Categories

(Core Graveyard :: Security: UI, defect, P2)

Tracking

(Not tracked)

VERIFIED FIXED
mozilla1.9.1b3

People

(Reporter: nelson, Assigned: bzbarsky)

References

(Blocks 1 open bug)

Details

(Keywords: regression, testcase, verified1.9.1)

Attachments

(5 files, 2 obsolete files)

SM trunk Gecko/20080909002343 

Many (most?) https sites now report mixed content warnings. 
Examples:
  https://sitekey.bankofamerica.com/sas/signonScreen.do?state=CA
  https://www.saferpay.com/

These warnings appear to be bogus.  A packet monitor trace shows that all
the connections were done using https (port 443).  The cache was flushed,
so that no insecure (http) content should have come from the cache.  

The sites that display the bogus warnings appear to use a lot of JavaScript.
The sites that do not display the warnings use little or no JavaScript.
That may be a clue.  I know that we've had problems in the past with 
JavaScript in https pages causing false reports of insecure content. 

This seriously erodes confidence in https.
On both counts I can't confirm this. Did you start with a new session before accessing those sites?
Hmm... I likewise can't reproduce this on Firefox trunk with a fresh profile.

Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b1pre) Gecko/20080915100012 Minefield/3.1b1pre
IIRC, the mixed content warning is disabled by default in FF3.
The only indicator of mixed content in FF3 is the lock icon, 
which is also missing in the default FF3 configuration, IINM. :(
Nelson, are we using the same product? Are you reporting for SeaMonkey? The lock icon is not missing in FF3 with a fresh profile, and the (almost invisible bluish) indicator around the favicon doesn't appear with mixed content. Larry says it's not secured.
Eddy, please note that comment 0 says:
> SM trunk Gecko/20080909002343 

Perhaps I should spell out SeaMonkey.  
But this bug is filed against PSM, which is common to both.
Eddy, look in about:config at the pref browser.identity.ssl_domain_display 
and tell me what value you see.  I'll bet it's not 0, which is the FF3 
default, IINM.
You are right in your assumption that I browse with browser.identity.ssl_domain_display set to 1, but with a fresh profile it is set to 0; that's why I'm saying that I can't reproduce it.
This is now being reported by users in mozilla.support.firefox.  See
news://news.mozilla.org:119/537829c7-1b54-4bb8-927c-0e9dc8d86a0c@p31g2000prf.googlegroups.com
To those who cannot reproduce this, please look at your setting for the
preference 
   security.warn_viewing_mixed
and see if it is true or false.   If it is false, then you have disabled
this warning, which is why you don't see it.  Change it to true, and 
then try visiting the web sites cited in comment 0.

Due to the "fix" for Bug 341472, the FF3 default for this pref was changed 
from true to false.  Consequently, the feature is now broken, and no-one 
notices.  But some of us rely on this feature.  It makes https nearly unusable.
Flags: blocking1.9.1?
Keywords: regression
Severity: normal → major
Flags: blocking1.9.0.5?
With the pref set to true (which is the default, I believe), I do not see mixed content warnings on either of the sites mentioned in comment 0.

Additionally, bug 341472 didn't change the default pref to false. It left it as true. Unless the patch is not what was checked in...

We'll look at a patch for 1.9.0.5 if you can determine what the problem is.
Flags: blocking1.9.0.5?
I see the warning with current SM trunk on Linux and with SM trunk on OS/2, as well as with FF 3.1b2pre on OS/2. For Firefox I had to reset security.warn_viewing_mixed to true (I guess it got reset because the default of security.warn_viewing_mixed.show_once is true). But even if FF doesn't show the warning, it still displays the warning-lock icon in the status bar (the one with the closed lock with the red i next to it), and doesn't show such a site as encrypted.

I just re-tested in fresh profiles with SM 2.0a2pre 20081114 and FF 3.1b2pre 20081115 on Linux (32bit), and both show the problem.

With FF 3.0.5pre 2008111504 on Linux I never got it to show, not with my normal profile and not with a fresh one. I don't understand from the previous comments why this bug is set to "1.9.0 Branch"...

I got this on 
   https://ibank.barclays.co.uk/olb/b/LoginMember.do
   https://bcol.barclaycard.co.uk/ecom/as/doLogon.do
That the sites of, among others, the biggest UK and the biggest US bank show this problem should be a strong incentive to get this fixed before Gecko 1.9.1 is released. I fully second Nelson's blocking request.
Jeff can reproduce this reliably, but only the first time he visits the particular page.  Any debugging requests for him?
Some debugging tests were done in the duplicate bug #472986. If I have any say, I'd concur with the blocking request too - as the OP says, "This seriously erodes confidence in https".
Not blocking without solid STR. Please renom, especially with a regression range.
Flags: blocking1.9.1? → blocking1.9.1-
(In reply to comment #15)
> Not blocking without solid STR. Please renom, especially with a regression
> range.

Sorry, I am not up on the "lingo" (STR, renom, regression range...). Apologies for my ignorance, but a key feature here has been demonstrated (multiple times) not to be doing what it is supposed to, and therefore it needs to be fixed. What more do I need to offer to demonstrate the failure mode here?

In other words, if the insecure indicator is in error, regardless as to the actual security level of the page (...HTTPS is a fundamental hallmark of online commerce), then this is a fault, and what will it take to get this issue corrected for the 3.1 release.
(In reply to comment #16)
> (In reply to comment #15)
> > Not blocking without solid STR. Please renom, especially with a regression
> > range.
> 
> Sorry, I am not up on the "lingo" (STR, renom, regression range...). Apologies
> for my ignorance, but a key feature here has been demonstrated (multiple times)
> not to be doing what it is supposed to, and therefore it needs to be fixed.
> What more do I need to offer to demonstrate the failure mode here?
> 
> In other words, if the insecure indicator is in error, regardless as to the
> actual security level of the page (...HTTPS is a fundamental hallmark of online
> commerce), then this is a fault, and what will it take to get this issue
> corrected for the 3.1 release.

The lingo can be a bit much, I apologize.

No one disputes that we should fix bugs in our HTTPS display and mixed content detection. The question is whether this bug should block the release of Firefox 3.1. Mike's answer is that we can't block the release without having:

 - STR - Steps To Reproduce the bug reliably. Obviously some people here are seeing the problem, but others aren't, and it's difficult for developers to fix a bug they can't recreate.
 - Regression Range - a time window when this behaviour began.  Even without STR, if we can narrow it down to say "the builds from Dec 12 didn't have this problem, the builds from Dec 13 do" then we can look at the code that changed in that time period and see if any are obvious starting points for this behaviour. Doing this is sort of a pain, but is incredibly helpful for finding a bug.  Every night's build is put in this directory ftp://ftp.mozilla.org/pub/firefox/nightly/ for people doing this kind of work.

Finally, Mike suggested that if we can get those two things (regression range, and STR), we should re-nominate it for blocking consideration.
I understand what Mike suggested but I can't understand how you could even think to ship a browser with such a glaring fault. This would give the project _tons_ of bad press. If you are getting this error every day (as millions of people will because some of the biggest banking sites are affected) you would know what I'm talking about.

STR are simple:
- Create a fresh profile.
- Go to one of the problematic sites, https://www.saferpay.com/ works fine
  for that purpose.
- Be annoyed by the warning.

Isn't finding the regression range what you pay QA people to do? I don't understand why somebody at Mozilla HQ doesn't just tell them to do that and find it. You had _months_ to do so. (As a spare-time contributor it really sucks to do that, especially if you have to put up with glacial download speeds where it takes half an hour to get a package across to Europe.)

Oh well, using Linux x86_64 nightlies I get this:
   2008-09-01-05-mozilla-central   no warning, secure lock icon shown
   2008-09-02-05-mozilla-central   warning, mixed lock icon shown
   2008-09-04-05-mozilla-central   warning, mixed lock icon shown
   2008-09-07-02-mozilla-central   warning, mixed lock icon shown
   2008-09-08-01-mozilla-central   warning, mixed lock icon shown
   2008-09-09-01-mozilla-central   warning, mixed lock icon shown
(Took me two _hours_ to arrive there!)

Regression range is therefore
http://hg.mozilla.org/mozilla-central/pushloghtml?startdate=2008-09-01+05%3A09%3A39&enddate=2008-09-02+05%3A05%3A19

The check-ins for bug 450912, bug 135007, bug 451420, and maybe bug 448941 look suspicious to me.
Flags: blocking1.9.1- → blocking1.9.1?
OS: Windows XP → All
Version: 1.9.0 Branch → Trunk
While this is on a 64bit machine (both on Linux and Windows), I see the same problem on OS/2 on a 32bit machine (and OS), and it was happening on my old laptop, too (32bit Linux). So this is really cross-platform. Don't know why Johnathan does not see it on Mac.

In the meantime I also checked Windows nightlies (in Vista), there I see
   good     20080901033305
   broken   20080902033133
   broken   20080903034741
which only marginally changes the regression range and still includes the "suspicious" bugs from comment 18. But bug 448941 is Firefox only and this problem also happens in SeaMonkey, so that can't be it.

kaie: you pushed the other changes. Any ideas?
Hardware: x86 → All
Flags: blocking1.9.1? → blocking1.9.1-
Sigh, what was missing this time to deny blocking status? Working for this project is getting really frustrating.
(In reply to comment #20)
> Sigh, what was missing this time to deny blocking status?

You should ask Benjamin that since he didn't comment when he denied blocking1.9.1.
Hey folks, what am I missing here - is this going to be fixed in the next Beta or what???
Flags: blocking1.9.1- → blocking1.9.1?
Depends on: 477118
Blocks: lockicon
Kai/Robert: can you guys please take a look at this and bug 451420 - we suspect it may have regressed.

(In reply to comment #18)
> I understand what Mike suggested but I can't understand how you could even
> think to ship a browser with such a glaring fault. This would give the project

Simple. Because without those things, we don't have evidence that the browser has a fault. Several people tried to reproduce the issue and were unable to do so. The fact that we haven't heard a lot of uproar about this issue indicates that it's not affecting a significant portion of our 350,000 nightly beta users. It was clearly stated that with better evidence of this problem, we'd want to fix it.

> _tons_ of bad press. If you are getting this error every day (as millions of
> people will because some of the biggest banking sites are affected) you would

Were that true, there would be hundreds of duplicate bugs and reports of this problem. Since that is not true, it feels to me like this is an edge case (possibly as it requires security.warn_viewing_mixed to be set different-from-default) that might only be affecting some systems.

Please don't misinterpret me, Peter: I want to ship a quality product. However, many bugs (like this one) are not black and white; this one appears to be severe but not universal, so my initial blocking judgement was based on the fact that I can't hold the product from 200MM+ users until I know the issue is real.

> STR are simple:
> - Create a fresh profile.
> - Go to one of the problematic sites, https://www.saferpay.com/ works fine
>   for that purpose.
> - Be annoyed by the warning.

Many people have commented in this bug saying that following those steps yields no error. I'm one of them. Put another way: those STR don't work for all systems, all users, so they're not reliable STR.

> Isn't finding the regression range what you pay QA people to do? I don't

No, it's why we have a regressionwindow-wanted keyword. QA does a lot of regression testing, but so do community members.

> (Took me two _hours_ to arrive there!)

Thank you for your time and effort.

> The check-ins for bug 450912, bug 135007, bug 451420, and maybe bug 448941 
> look suspicious to me.

In the future, please cc: the owners, patch authors and reviewers of those bugs so they know you suspect them of causing a regression. I really suspect bug 451420 based on the patch, and have cc'd rrylea.

(In reply to comment #20)
> Sigh, what was missing this time to deny blocking status? Working for this
> project is getting really frustrating.

There's a large chance that Benjamin accidentally set the flag back, as well. It happens sometimes with Bugzilla.

FWIW, I am still unable to reproduce this with a latest branch Firefox nightly on OSX or w32/Vista, with a fresh or old profile.
OK, so I just tried this in both SeaMonkey and Firefox.  I made sure that "security.warn_viewing_mixed" is set to "true" and tried loading the two URLs in comment 0 and the two URLs in comment 11.  I don't get the warning dialog.  This is on Mac; if you think it'd be worthwhile to test Linux and Windows I can try Firefox on those too.

In both cases I'm using a reasonably recent build (2009-01-21 comm-central for SeaMonkey, and 2009-02-04 mozilla-central for Firefox).

I get the same results for Firefox using my normal browsing profile and Firefox using a completely blank profile.

Peter, Nelson, would you be willing to produce a log of a pageload that shows the problem for you?  Ideally that would be the only page you load in the browsing session (e.g. pass the url as a command-line argument).  You can do this with any nightly build that shows the problem.  There are instructions for creating an NSPR log on various platforms at <http://www.mozilla.org/projects/netlib/http/http-debugging.html>; you'd just want to set NSPR_LOG_MODULES to "nsSecureBrowserUI:5".
Thanks, Mike and Boris, for your time.

Before you, there were actually not many people who tried to reproduce. At least in this bug it was only Johnathan who clearly said that he had tried with a clean profile (but that was on Mac, which I cannot test). I guess both of you also checked that the lock icon in the status bar looked OK?

Note that security.warn_viewing_mixed does _not_ need to be different from the default. Even if it is set to false (as done when dismissing the warning the first time without checking the box), the lock icon in the FF status bar will still show the red exclamation mark and the site icon in the URL bar will have the background of an unencrypted site.

Sure, I will take the NSPR log as Boris suggested.
Because I am sitting at an OS/2 machine right now, I used two Firefox builds that I had around: the nightlies of 2009020312 and, for comparison, 2008082201.

I started FF with a fresh profile with the URL on the command line:

https://ibank.barclays.co.uk/olb/i/LoginMember.do
  secure_barclays_2008082201.log   the old build without the problem
  secure_barclays_2009020312.log   the mostly current build w/the problem

https://www.saferpay.com/
  secure_saferpay_2009020312.log   the mostly current build w/the problem

Please let me know if you need more info.
Attached file Log for BofA site
Here is the requested log for the BofA site cited in comment 0.
Here's the relevant part of Peter's saferpay log, I think:

0[2b65a0]: SecureUI:200b66a0: OnStateChange: progress: for toplevel
0[2b65a0]: SecureUI:200b66a0: OnStateChange
0[2b65a0]: SecureUI:200b66a0: 20837bf4 20eecb00 OnStateChange 10010 https://www.etracker.de/cnt.php?v=2.4&java=y&tc=1234006148416&et=kbg1PK&et_ilevel=0&swidth=1400&sheight=1050&siwidth=988&siheight=873&scookie=1&scolor=16&p=undefined&et_areas=Index&et_target=,0,,1,0&et_lpage=0&et_se=0&et_basket=&et_url=https://www.saferpay.com/&slang=en-US
0[2b65a0]: SecureUI: GetSecurityState: - info is 20c61600
0[2b65a0]: SecureUI: GetSecurityState: - Returning 4
0[2b65a0]: SecureUI:200b66a0: OnStateChange: subreq INSECURE

Here's the relevant part of his "mostly current build" log on the barclays site:

0[2b65a0]: SecureUI:200b66a0: OnStateChange: progress: for toplevel
0[2b65a0]: SecureUI:200b66a0: OnStateChange
0[2b65a0]: SecureUI:200b66a0: 208383f4 23706ce0 OnStateChange 10010 https://statse.webtrendslive.com/dcssxcr8i00000stlemt7jpvp_8c9t/dcs.gif?&dcsdat=1234006718715&dcssip=ibank.barclays.co.uk&dcsuri=/olb/i/LoginMember.do&WT.tz=1&WT.bh=12&WT.ul=en-US&WT.cd=16&WT.sr=1400x1050&WT.jo=No&WT.ti=Barclays%20Online%20Banking%20-%20Log-in%20Step%201%20of%202&WT.js=Yes&WT.jv=1.5&WT.ct=unknown&WT.bs=988x873&WT.fi=No&WT.tv=8.0.2&WT.cg_s=logon%20-%20Log-in&WT.si_p=Step1;Step1;Step1&WT.sp=ibank&WT.si_n=LogonIA;LogonTrad;LogonTFA&WT.cg_n=logon&WT.vt_f_tlh=1234006704&WT.vt_sid=298034e2b710e09b33b1234003104183.1234006704183&WT.co_f=298034e2b710e09b33b1234003104183&Login=LogMembership
0[2b65a0]: SecureUI: GetSecurityState: - info is 20cc1c00
0[2b65a0]: SecureUI: GetSecurityState: - Returning 4
0[2b65a0]: SecureUI:200b66a0: OnStateChange: subreq INSECURE

Here's the relevant part of Nelson's log:

[b2d140]: SecureUI:35f70c0: OnStateChange: progress: for toplevel
0[b2d140]: SecureUI:35f70c0: OnStateChange
0[b2d140]: SecureUI:35f70c0: db1ab4 4a641c0 OnStateChange 10010 https://data.coremetrics.com/eluminate?tid=1&ci=90010394&vn2=e3.1&st=1234046987011&vn1=4.0.42b&ec=ISO-8859-1&pi=OLB%3APRODUCT%3AONLINE_BANKING%3BSIGNIN&cg=OLB%3APRODUCT%3AONLINE_BANKING&ul=https%3A//sitekey.bankofamerica.com/sas/signonScreen.do%3Fstate%3DCA
0[b2d140]: SecureUI: GetSecurityState: - info is 33148e0
0[b2d140]: SecureUI: GetSecurityState: - Returning 4
0[b2d140]: SecureUI:35f70c0: OnStateChange: subreq INSECURE

All three are image URIs, so this is presumably a result of us paying attention to them at all.  That's bug 135007.

Peter, Nelson, what do things look like to you if you load those URIs directly in your browser?  Do they come up as secure?
Blocks: 135007
In reply to the question:
> what do things look like to you if you load those URIs directly
> in your browser?  Do they come up as secure?

For all 3 URLs cited in the previous comment, they come up with 
  Page Load Error
  Failed to Connect
  The connection was refused when attempting to contact ...
Very odd.  All three work for me...

Given the error, the behavior almost makes sense to me, given how the code is structured right now: there's no useful encryption status if we never completed the SSL handshake with the server, so we treat it as unencrypted.  For images, we get no notification on whether we got data, so we assume that we have, as I recall.  Maybe we should fix that.

That said, it might be nice to figure out why you can't get to those URIs?  Any extensions (e.g. adblock) in use?
> it might be nice to figure out why you can't get to those URIs?  
> Any extensions (e.g. adblock) in use?

No mystery there.  Big anti-spyware /etc/hosts file.  

127.0.0.1  data.coremetrics.com
127.0.0.1  test.coremetrics.com #[SpySweeper.Spy.Cookie]
127.0.0.1  twci.coremetrics.com #[Ad-Aware.Tracking.Cookie]
127.0.0.1  webtrendslive.com #[SmartSource Data Collector]
127.0.0.1  statse.webtrendslive.com #[SDC Advanced Tracking Code]
127.0.0.1  sdc.mcafee.com #[statse.webtrendslive.com]
127.0.0.1  www.etracker.de

None of these trackers are required for the sites to function properly.
The browser just needs to not get its shorts in a knot over ECONNREF.
Yep, same here, I've been using hosts entries from <http://www.mvps.org/winhelp2002/hosts.htm> for years.
I can reproduce this at https://sitekey.bankofamerica.com/sas/signonScreen.do?state=CA if I add to my /etc/hosts:
127.0.0.1  data.coremetrics.com

So presumably this simple testcase will reproduce the problem when served via https at bugzilla.
Attachment #361246 - Attachment description: img src=https://localhost/ → testcase: img src=https://localhost/
Ah, but remember this is happening on Linux too, so all the malware-related issues are irrelevant, correct?  IMHO Windows users at this level either know their hosts file or use "Spybot S&D", so their hosts file is probably already "immunized", correct?

So where are we here? What more can I do to test/validate/investigate?
OK.  So just not handing back a securityInfo from imagelib in this case won't help: the secure browser UI treats that as insecure.

Kai, would it make sense to ask the imgRequest whether it's transferred data instead of just assuming that STATE_STOP for an image means requestHasTransferedData should be true?  That seems like it would fix this issue...
I have no idea what comment 34 is talking about, for what it's worth.
(In reply to comment #36)
> I have no idea what comment 34 is talking about, for what it's worth.


I forgot to add that all three URLs from comment #28 and the two from #26 and the one from #33 show the secure lock icon.

This is my original set of URLs that I use that have the problem - the first goes to the second.  The problem is that you have to actually sign in and let the third page load, and then go to the 4th then 5th page (Online Billpay) to get the insecure icon.  Would some sort of test page from them help? (with a non-valid bank account, maybe?)

http://www.freedombankonline.com/
https://www.freedombankonline.com/onlineserv/HB/Signon.cgi
3, 4, and 5th pages are under:
https://www.freedombankonline.com/onlineserv/HB/HomeBanking.cgi
Michael, if you think your problem is different from what Peter and Nelson are seeing, please create the sort of log I asked them for and attach it to this bug (or better yet to bug 455367, if you really think it's a different problem).
Attached patch Like so, say (obsolete) — Splinter Review
This does fix the testcase Ted attached.

It's weird that we only update the security state on request stop.  In particular, will a multipart insecure image ever update it?  In any case, this does feel like the right thing to do independent of the stop business.

Reviewers, please feel free to suggest a reasonable sr here.  I can't think of one.  :(
Attachment #361274 - Flags: superreview?
Attachment #361274 - Flags: review?(kaie)
Attachment #361274 - Flags: review?(joe)
Er, that patch actually probably breaks non-image loads that get no data.  I should really be setting transferred to false if not in the hashtable and not a security info provider.
Attached patch With that issue fixed (obsolete) — Splinter Review
Oh, and is there any way to test this stuff?  Or any existing tests?
Attachment #361274 - Attachment is obsolete: true
Attachment #361276 - Flags: superreview?
Attachment #361276 - Flags: review?(kaie)
Attachment #361276 - Flags: review?(joe)
Attachment #361274 - Flags: superreview?
Attachment #361274 - Flags: review?(kaie)
Attachment #361274 - Flags: review?(joe)
You can probably write a browser chrome test, at the very least.
(In reply to comment #38)
> Michael, if you think your problem is different from what Peter and Nelson are
> seeing, please create the sort of log I asked them for and attach it to this
> bug (or better yet to bug 455367, if you really think it's a different
> problem).

Bug 455367 is *this* bug. What bug number did you really intend, Boris?
Er, bug 472986.
Couldn't get it to work in bash on Ubuntu, but got a log file on Windows.  FWIW, the instruction page needs review; this is what I used to get the data:
set NSPR_LOG_MODULES=nsSecureBrowserUI:5
set NSPR_LOG_FILE=C:\log.txt
cd \Program Files\Mozilla Firefox 3.1 Beta 2 ***(or as appropriate)***
firefox.exe
(In reply to comment #44)
> Er, bug 472986.

I thought marking a bug as a duplicate makes it invalid, as the data is then transferred to the original one [shrugs shoulders...]

Regardless, I attached a logfile here: 
https://bug455367.bugzilla.mozilla.org/attachment.cgi?id=361451
I hope this helps.
0[825140]: SecureUI:896ec0: a5f4d4 571dca0 OnStateChange 10010 https://cfbillpay41.digitalinsight.com/images/FInal-mid.gif
0[825140]: SecureUI: GetSecurityState: - no nsITransportSecurityInfo for 0
0[825140]: SecureUI:896ec0: OnStateChange: subreq INSECURE

Michael, what happens if you try to load that image directly?
(In reply to comment #47)
> 0[825140]: SecureUI:896ec0: a5f4d4 571dca0 OnStateChange 10010
> https://cfbillpay41.digitalinsight.com/images/FInal-mid.gif
> 0[825140]: SecureUI: GetSecurityState: - no nsITransportSecurityInfo for 0
> 0[825140]: SecureUI:896ec0: OnStateChange: subreq INSECURE
> 
> Michael, what happens if you try to load that image directly?

Right click on above URL = (https://cfbillpay41.digitalinsight.com/images/FInal-mid.gif)

This gets a 404:
"Not Found
The requested URL /images/FInal-mid.gif was not found on this server.
Apache Server at cfbillpay41.digitalinsight.com Port 4761"

Remember, I already found what I thought was a URL issue at their site:
https://bug472986.bugzilla.mozilla.org/attachment.cgi?id=356542
Evidently it's not the problem, but I'm curious - what made the above URL "suspicious"?
> This gets a 404:

Right, but is it shown as secure for you?

> but curious - what made the above URL "suspicious"?

Other than the fact that there was no security info on the image request?  No idea.  Also no idea why there was no security info on the image request for you in this case.  There should have been, really.

I think the ball's in Kai's court here.
(In reply to comment #49)
> > This gets a 404:
> 
> Right, but is it shown as secure for you?
> 

Yes it is, with "High Grade Encryption" (AES-256)
I have the same problem at usaa.com with these STR:

1) Go to https://www.usaa.com/inet/ent_logon/Logon
2) Watch it load and see the green certificate thing show up (sometimes it
disappears when the page is done loading)
3) If certificate still shows when page is done loading, hit refresh,
certificate still shows
4) Hit ctrl+f5 for a hard refresh and the certificate doesn't show up and says
the site does not supply identity information.

And I think this bug should block, since it will make people not want to log in to their bank's site or whatever site that usually had an EV certificate and now doesn't.  They might think their bank's site has been compromised, as I thought too.
Kurt, you're seeing a different problem (EV cert vs non-EV cert, as opposed to the problem this bug is about).

I suggest filing a separate bug, and attaching a log as described in this bug to it.
(In reply to comment #52)
> Kurt, you're seeing a different problem (EV cert vs non-EV cert, as opposed to
> the problem this bug is about).
> 
> I suggest filing a separate bug, and attaching a log as described in this bug
> to it.

Filed bug 477826
(In reply to comment #52)
> Kurt, you're seeing a different problem (EV cert vs non-EV cert, as opposed to
> the problem this bug is about).
> 
> I suggest filing a separate bug, and attaching a log as described in this bug
> to it.

Is he?  From his description it sounds like the page starts out loading EV, sure, but then goes mixed-content on him (you'll note it downgrades to "No identity information", not just the DV display).  Of course, in that case the site could just have been built dumbly to source in http content; maybe it's *actually* mixed content in his case.  :)
Comment on attachment 361276 [details] [diff] [review]
With that issue fixed

r+ on imglib parts.

Boris, this is mostly a workaround for bug 432685, right?
Attachment #361276 - Flags: review?(joe) → review+
Ah, I see.  Yeah, could be.  The fact that refresh is involved (as opposed to the 100% reproducible issue in this bug) still means that a separate bug is desirable.

Similar for Michael K's issue, actually.  I've reopened his bug, since he's not seeing the issue this bug was filed on but something rather different.  This is why I said the log should have been attached there: if it confirmed the bug as a duplicate all well and good, but since it hasn't it's now on the wrong bug.

> I thought marking a bug as a duplicate makes it invalid,  as the data is then 
> transferred to the original one

Nothing is transferred other than the bug reporter of the dup getting cced on the original.  And bugs can be reopened.

joe, exactly.
Blocks: 472986
Blocks: 477826
Removing blocking.

Right now we don't have STR without a modified hosts file, so this isn't going to affect many users. We acknowledge that this is a bug, and would take a fix, but can't hold the release.

Jeff says he can see this with a fresh profile, no modified hosts file; he'll attach a log. Please renominate if new evidence comes to hand!
Flags: blocking1.9.1? → blocking1.9.1-
Mike, it doesn't require a modified hosts file.  It only requires that 
connections not complete 100% of the time.  On an insecure page, a failure to 
connect for an image just causes a funny little image placeholder icon to 
appear.  On a secure page, it causes the page to be displayed with a bogus 
mixed-mode indicator.
Summary: bogus mixed content warnings for many https sites → bogus mixed content warnings if any image fails to load
Isn't this a duplicate of bug 472986? Anyway, the cause is in the libpr0n code, IMO.
> Isn't this duplicate of bug 472986? 

No, because the log is quite different, as far as I can see.  In this case there IS securityInfo, but it claims that the connection is insecure.
Attachment #361276 - Flags: superreview?(honzab.moz)
Attachment #361276 - Flags: superreview?
Attachment #361276 - Flags: review?(kaie)
Attachment #361276 - Flags: review-
Comment on attachment 361276 [details] [diff] [review]
With that issue fixed

This patch looks reasonable to me, r=kaie with the issue below addressed:

This patch holds on to the lock
  nsAutoMonitor lock(mMonitor);

while we QI and call into the securityProvider. I think a {} scope should be used to limit the lock to the hashtable access.

If Honza has proposals on this patch, we should listen to him, as he had worked on the insecure-image detection.
Attachment #361276 - Flags: superreview?(honzab.moz) → superreview+
Comment on attachment 361276 [details] [diff] [review]
With that issue fixed

sr=honzab
To me it looks OK; however, it's a workaround. We should analyze bug 472986 prior to landing this, because this patch might hide the problem from that bug.

I agree with Kai that the QI and the call to the provider must not be under the lock; probably just lock.unlock()?

Then it would be better to do:
if (the large condition)
  requestHasTransferedData = PR_TRUE;

but it depends on you.
Good catch on the lock.  jst, would you mind doing the sr here?
Attachment #361276 - Attachment is obsolete: true
Attachment #362263 - Flags: superreview?(jst)
Is it possible to get a log with nsHttp:5 enabled? Just out of curiosity.

Confirming this patch fixes the bug by automated test (bug 452401).
Attachment #362263 - Flags: superreview?(jst) → superreview+
Comment on attachment 362263 [details] [diff] [review]
Updated to comments.

Looks good, but found one thing:

- In nsSecureBrowserUIImpl::OnStateChange():

+  nsCOMPtr<nsISecurityInfoProvider> securityInfoProvider(do_QueryInterface(aRequest));

This variable is unused, and shadowed later on by another declaration, so remove this one.

sr=jst
Pushed http://hg.mozilla.org/mozilla-central/rev/f39d15c0b15e with the change from comment 65.

Given the analysis in bug 472986, I think this should be blocking after all: it'll happen on any page where the src of an image is changed by script before the original image has loaded "sufficiently".  In fact, we should take this for b3 if at all possible, both because of the interface change and because of the web compat testing we want.

No tests yet, but Honza is working on putting those together in bug 452401.
Assignee: kaie → bzbarsky
Status: NEW → RESOLVED
Closed: 15 years ago
Flags: blocking1.9.1- → blocking1.9.1?
Resolution: --- → FIXED
Attachment #362263 - Flags: approval1.9.1?
Priority: -- → P1
Target Milestone: --- → mozilla1.9.1b3
Flags: blocking1.9.1? → blocking1.9.1+
Priority: P1 → P2
Attachment #362263 - Flags: approval1.9.1?
(In reply to comment #68)
> Pushed http://hg.mozilla.org/releases/mozilla-1.9.1/rev/4c0ebbf03f46

Again, excuse me for my "noob-ness", but what's this mean?  :)
It means the fix has landed; it will be part of nightly builds starting tomorrow.

Mozilla-1.9.1 corresponds to Firefox 3.1 nightly builds.
Verified fixed based on bug 472986 comment 67.  I can also verify this is fixed using various nightlies over the past few weeks.
Status: RESOLVED → VERIFIED
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1pre) Gecko/20090604 Shiretoko/3.5pre
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1pre) Gecko/20090604 Shiretoko/3.5pre
Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1pre) Gecko/20090604 Shiretoko/3.5pre
Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1pre) Gecko/20090604 SeaMonkey/2.0b1pre
Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1pre) Gecko/20090604 SeaMonkey/2.0b1pre
Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1pre) Gecko/20090604 SeaMonkey/2.0b1pre

Using all of the above builds and both test sites in comment 0, I can not reproduce this. See results below:

security.warn_viewing_mixed = true -> no warning
security.warn_viewing_mixed = false -> no warning

Marking verified1.9.1
Added litmus test case https://litmus.mozilla.org/show_test.cgi?id=7778.
Flags: in-litmus+
No longer depends on: 477118
(In reply to Boris Zbarsky [:bz] from comment #66)
> Pushed http://hg.mozilla.org/mozilla-central/rev/f39d15c0b15e with the
> change from comment 65.
> 
> Given the analysis in bug 472986, I think this should be blocking after all:
> it'll happen on any page where the src of an image is changed by script
> before the original image has loaded "sufficiently".  In fact, we should
> take this for b3 if at all possible, both because of the interface change
> and because of the web compat testing we want.
> 
Came across this bug while making changes for bug 1006881.  Even if the image doesn't completely load, an insecure request still goes out from a secure page.  Cookies and referrers are still available in plaintext, but the page itself isn't compromised with an MITM'ed image.  Should the site still be considered secure?


> No tests yet, but Honza is working on putting those together in bug 452401.
> Even if the image doesn't completely load, an insecure request still goes out from a
> secure page.

_If_ the request was insecure.  See comment 30 for the issue this bug was trying to fix.

I'd be all in favor of a better fix if we can get the info we need...

Note also that we're checking for "any bytes at all", not "image completely loaded".
Product: Core → Core Graveyard