Closed Bug 786417 Opened 12 years ago Closed 12 years ago

Broken display of downloads stats on SourceForge with Firefox 17 (recent regression)

Categories

Product/Component: Core :: General
Type: defect
Version: 17 Branch
Platform: x86_64
OS: Windows 7
Priority: Not set
Severity: normal

Tracking

Status: VERIFIED FIXED
Target Milestone: mozilla19
Tracking Status:
firefox17 + verified
firefox18 + verified
firefox19 + verified

People

(Reporter: epinal99-bugzilla2, Assigned: keeler)


Details

(Keywords: regression, testcase, verifyme)

Attachments

(4 files, 6 obsolete files)

Attached image Screenshot
Open http://sourceforge.net/projects/ffdshow-tryout/files/

The display of download stats in the column on the right is broken.

Mozregression range:

m-c
good=2012-08-25
bad=2012-08-26
http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=f077de66e52d&tochange=b3cce81fef1a

Reporter: http://forums.mozillazine.org/viewtopic.php?p=12233543#p12233543
Keywords: regression
Regression window(m-c)
Good:
http://hg.mozilla.org/mozilla-central/rev/116a2c731931
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Firefox/17.0 ID:20120825182458
Bad:
http://hg.mozilla.org/mozilla-central/rev/b3cce81fef1a
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Firefox/17.0 ID:20120825190957
Pushlog:
http://hg.mozilla.org/mozilla-central/pushloghtml?fromchange=116a2c731931&tochange=b3cce81fef1a


Regression window(m-i)
Good:
http://hg.mozilla.org/integration/mozilla-inbound/rev/8ffabf9f1fa3
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Firefox/17.0 ID:20120825103358
Bad:
http://hg.mozilla.org/integration/mozilla-inbound/rev/779bdf71cde5
Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Firefox/17.0 ID:20120825124859
Pushlog:
http://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=8ffabf9f1fa3&tochange=779bdf71cde5

Suspected: Bug 760307
Blocks: preload-hsts
Looks like the charts rely on requests to chart.apis.google.com, which is now considered HSTS (apis.google.com is on the list with includeSubdomains: true). What I'm seeing is that, for some reason, the requests to https://chart.apis.google.com get no response.
Component: General → Networking
From Adam Langley:

> chart.apis.google.com has a hole cut for it. From the JSON:
>
>     // chart.apis.google.com is *not* HSTS because the certificate doesn't match
>     // and there are lots of links out there that still use the name. The correct
>     // hostname for this is chart.googleapis.com.
>     { "name": "chart.apis.google.com", "include_subdomains": true,
>       "pins": "google" },
>
> (Note that there's no "mode", so it doesn't force to HTTPS.)

So, basically we have to implement hole-punching.
Component: Networking → General
Attached patch patch (obsolete) — Splinter Review
So, basically I implemented hole-punching.
Assignee: nobody → dkeeler
Status: NEW → ASSIGNED
Attachment #656249 - Flags: review?(bsmith)
Attachment #656249 - Flags: feedback?(sstamm)
In my eagerness to fix this bug, I jumped at the obvious solution without taking a moment to consider what the best course of action is.
Consider the goal of the HSTS preload list. My understanding is that it addresses a shortcoming of HSTS. That is, when first connecting to a host, the browser does not know its HSTS status until after connecting over a secure protocol. Consequently, an attacker can take advantage of this and prevent the browser from ever connecting securely. So, one way to interpret a host's presence on the preload list is as if the browser had connected securely to the host in the past and received an HSTS header with a high max-age value. Therefore, under this interpretation, I don't think any host that does not actually send an HSTS header should be on the list. apis.google.com does not send an HSTS header (that I see), and so it should not be on the preload list.
An alternate interpretation would be that we want our users to be as secure as possible, and so we should put any site on that list that we want them to automatically connect securely to, regardless of the headers that site sends. I don't think this is the right direction to go in at the moment. First, if site operators wanted this, they would use HSTS, and we would honor that header. Conversely, we should honor their decision to not use HSTS (there may come a time when we can use a secure protocol for everything and ditch insecure HTTP, but we are not there yet). Second, even if we did want to make those kinds of decisions, we should be the ones making them, rather than relying on Chrome's list.
So, I propose we modify our use of Chrome's list slightly. Instead of using any entry they indicate as using HSTS, we actually connect to that host and see if it sends an HSTS header. If so, and if the max-age is sufficiently large (say 18 weeks?), we put it on the list. If not, we ignore it.
We might also explore doing our own scanning of hosts and build up a list of sites that send an HSTS header and use that list instead of Chrome's.
Attachment #656249 - Flags: review?(bsmith)
Attachment #656249 - Flags: feedback?(sstamm)
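A rough sketch of the acceptance rule proposed above (hypothetical helper name; the real script is reviewed later in this bug): a host is kept only if it actually serves a Strict-Transport-Security header whose max-age covers at least 18 weeks.

```js
// 18 weeks expressed in seconds (10886400), the threshold discussed below.
const MINIMUM_REQUIRED_MAX_AGE = 18 * 7 * 24 * 60 * 60;

// `maxAge` is the parsed max-age (in seconds) from the host's
// Strict-Transport-Security response header, or 0 if no header was sent.
function shouldPreload(maxAge) {
  return maxAge >= MINIMUM_REQUIRED_MAX_AGE;
}
```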
Attached patch patch (obsolete) — Splinter Review
This makes the changes we discussed regarding only using hosts from Chrome's preload list that actually send the HSTS header.
Attachment #656249 - Attachment is obsolete: true
Attachment #657485 - Flags: review?(bsmith)
Comment on attachment 657485 [details] [diff] [review]
patch

Review of attachment 657485 [details] [diff] [review]:
-----------------------------------------------------------------

David, I started reviewing this patch and then I got stuck on something: Is there a reason we couldn't/shouldn't do this in xpcshell? If we do this in xpcshell, then we'll be using exactly the same HTTP parser, the exact same TLS stack, the exact same HSTS parser and associated logic, the same (root) certificate database as Firefox, etc., most/all of which have automated tests that this script lacks. I know it would be annoying to rewrite the whole script in JS but it seems like there's value in it for these reasons. Perhaps we could keep the already-reviewed python code that parses Chrome's list and change it to just output a list of domains, and then pipe that list (or whatever) into an xpcshell script?

Also, I am curious about the workflow. How does the person who runs this tool use it, from the point they pull Chromium's list to the point they check in the code? It seems like there are many tricky manual steps involved, such as double-checking why an entry that was previously on the list got removed. Please write up a "how to use this."

My review comments below are my initial review of the python-based approach. If we decide to continue down that path, I will need to finish that review.

::: security/manager/tools/getHSTSPreloadList.py
@@ +29,5 @@
>  static const nsSTSPreload kSTSPreloadList[] = {
>  """
>  POSTFIX = """};
>  """
> +HEADERS = { 'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120828 Firefox/15.0' }

Should the UA be given as a required command-line parameter, so we can make sure we always use the UA string that will be used for the release (release, not nightly, aurora, beta)? That way we will work with any implementation that does UA sniffing.

@@ +30,5 @@
>  """
>  POSTFIX = """};
>  """
> +HEADERS = { 'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:15.0) Gecko/20120828 Firefox/15.0' }
> +EIGHTEEN_WEEKS = 10886400

Rename EIGHTEEN_WEEKS based on its intended meaning (MINIMUM_REQUIRED_MAX_AGE?). Also, it would be clearer to use a timedelta here:

MINIMUM_REQUIRED_MAX_AGE = timedelta(weeks = 18)

@@ +60,5 @@
> +
> +def parseHSTSHeader(header):
> +  maxAge = 0
> +  includeSubdomains = False
> +  directives = map(parseDirective, header.split(";"))

IMO, it is better to use cgi.parse_header, because it should simplify the code, because that's the normal way of parsing HTTP headers in Python, and because it already handles many edge cases in parsing. (See http://bugs.python.org/issue12529 for example).

@@ +63,5 @@
> +  includeSubdomains = False
> +  directives = map(parseDirective, header.split(";"))
> +  for directive in directives:
> +    if directive["type"] == "includeSubdomains":
> +      includeSubdomains = directive["value"]

why not includeSubdomains = True? can directive["value"] ever be false?
Attached patch patch v2 (obsolete) — Splinter Review
Ported the python file to xpcshell.
Attachment #657485 - Attachment is obsolete: true
Attachment #657485 - Flags: review?(bsmith)
Attachment #659335 - Flags: review?(bsmith)
(In reply to Brian Smith (:bsmith) from comment #7)
> If we do this in
> xpcshell, then we'll be using exactly the same HTTP parser, the exact same
> TLS stack, the exact same HSTS parser and associated logic

Actually, we can't use the same HSTS parser at the moment, because nsIStrictTransportSecurityService doesn't expose the max-age or includeSubdomains of hosts, both of which are needed when building the preload list.
Comment on attachment 659335 [details] [diff] [review]
patch v2

Review of attachment 659335 [details] [diff] [review]:
-----------------------------------------------------------------

::: security/manager/tools/getHSTSPreloadList.js
@@ +106,5 @@
> +
> +  return directive;
> +}
> +
> +function parseHSTSHeader(header) {

Is it possible to reuse our existing HSTS parsing logic in Gecko for this, instead of writing a new one? That would be closer to the goal of having this script be as similar to Gecko's normal processing as possible.

In fact, it seems to me like you should be able to make the requests, and then just ask the nsIStrictTransportSecurityService about the max-age and includeSubdomains status of all the domains, if we parameterized the methods of nsIStrictTransportSecurityService with an "ignoreBuiltins" option.
Attachment #659335 - Flags: review?(bsmith)
(In reply to David Keeler from comment #9)
> Actually, we can't use the same HSTS parser at the moment, because
> nsIStrictTransportSecurityService doesn't expose the max-age or
> includeSubdomains of hosts, both of which are needed when building the
> preload list.

Sorry, I wrote my comment before I saw yours. Why not just change nsIStrictTransportSecurityService to expose max-age and includeSubdomains? It is OK to add a new method which is used only by this script as long as it isn't a huge amount of code.
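For illustration, a sketch of how the xpcshell script could consume such an extended method (XPIDL out parameters arrive in JS as objects with a .value property; the parameter names and the MINIMUM_REQUIRED_MAX_AGE threshold here are assumptions):

```js
// Sketch: let the HSTS service parse the header, then read back the values
// the preload-list generator needs.
var maxAge = { value: 0 };
var includeSubdomains = { value: false };
gSTSService.processStsHeader(uri, header, maxAge, includeSubdomains);
if (maxAge.value >= MINIMUM_REQUIRED_MAX_AGE) {
  preloadEntries.push({ name: uri.host,
                        includeSubdomains: includeSubdomains.value });
}
```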
Attached patch patch v3 (obsolete) — Splinter Review
Attachment #659335 - Attachment is obsolete: true
Attachment #659399 - Flags: review?(bsmith)
Attached patch patch v4 (obsolete) — Splinter Review
This version prevents redirects as discussed.
Attachment #659399 - Attachment is obsolete: true
Attachment #659399 - Flags: review?(bsmith)
Attachment #660624 - Flags: review?(bsmith)
Brian - can you review this fix?  We're less than a week away from merging 17 to the Beta channel, having this only need uplift to Aurora instead of Aurora/Beta would be great.
keeler, can you run your script one more time and update the patch to use the latest Google list? I just had agl add some entries for me, and I'd like to get them into this patch update, if possible. Thanks! :)
Reed - I'll run the script again before checking it in, so your sites will be picked up. They're sending the header with a long max-age, right?
(In reply to David Keeler from comment #17)
> Reed - I'll run the script again before checking it in, so your sites will
> be picked up. They're sending the header with a long max-age, right?

Yep, max-age=10886400 to meet Firefox's minimum. :)
(In reply to David Keeler from comment #17)
> Reed - I'll run the script again before checking it in, so your sites will
> be picked up. They're sending the header with a long max-age, right?

Let's not add any new sites for 17. We should only be removing sites (the ones that don't have the 18+ week timeframe). This way, we are guaranteed not to regress functionality when the patch lands in 17.

For 18+, we can run the script again to pick up additional sites, including Reed's.
Comment on attachment 660624 [details] [diff] [review]
patch v4

Camilo, can you please review this patch for me? I will super-review it after you r+ it.
Attachment #660624 - Flags: review?(cviecco)
Comment on attachment 660624 [details] [diff] [review]
patch v4

Camilo, never mind; I mistook this bug for a different one. Better to have Necko peers review this.
Attachment #660624 - Flags: review?(cviecco)
Comment on attachment 660624 [details] [diff] [review]
patch v4

Review of attachment 660624 [details] [diff] [review]:
-----------------------------------------------------------------

r+ conditional on review comments being addressed.

We cannot check the whole patch into Aurora and Beta because it changes the XPCOM interfaces that we are supposed to keep stable as we get closer to release.

In order to minimize risk for Firefox 17 and 18, please create a separate patch that contains just the updated nsSTSPreloadList.inc, with all the new entries removed, so that the file is a subset of what was originally in Firefox 17.

For mozilla-central, we can check in the whole patch.

::: netwerk/base/public/nsIStrictTransportSecurityService.idl
@@ +28,5 @@
>       *                          if there are unrecognized tokens in the header.
>       */
>      void processStsHeader(in nsIURI aSourceURI,
> +                          in string aHeader,
> +                          [optional] out unsigned long long aMaxAge,

Better to use uint64_t here.

::: netwerk/test/TestSTSParser.cpp
@@ +48,5 @@
>  
> +  // These aren't used here, but since [optional] xpidl parameters
> +  // aren't actually optional in C++, they must be included.
> +  uint64_t maxAge = 0;
> +  bool includeSubdomains = false;

NIT: we could be actually, you know, testing that the parser returned the correct values.

::: security/manager/boot/src/nsSTSPreloadList.inc
@@ +23,2 @@
>    { "arivo.com.br", true },
> +  { "blog.torproject.org", false },

I'm surprised so many sites are not sending includeSubdomains in their headers but are marked as includeSubdomains on Google's list.

@@ +28,4 @@
>    { "crate.io", true },
>    { "crypto.cat", true },
>    { "crypto.is", true },
> +  { "csawctf.poly.edu", true },

Please remove new entries for Firefox 17, to limit risk.

::: security/manager/ssl/tests/unit/test_sts_preloadlist.js
@@ +72,5 @@
>    do_check_false(gSTSService.isStsHost("com"));
>  
>    // Note: the following were taken from the STS preload list
> +  // as of Sept. 2012. If the list changes, this test will need to be modified.
> +  // check that the pref to toggle using the preload list works

I did not notice this the first time. Is it possible to write this test in a way where we don't have to modify it every time we update the list?

::: security/manager/tools/getHSTSPreloadList.js
@@ +34,5 @@
> +"/* This is an automatically generated file. If you're not                    */\n" +
> +"/* nsStrictTransportSecurityService.cpp, you shouldn't be #including it.     */\n" +
> +"/*****************************************************************************/\n" +
> +"\n" +
> +"#include <prtypes.h>\n" +

NIT: we're not supposed to use prtypes now if we can avoid it. #include "StandardInteger.h" instead, if possible.

@@ +69,5 @@
> +  try {
> +    data = JSON.parse(result);
> +  }
> +  catch (e) {
> +    dump("ERROR: could not parse data from '" + SOURCE + "': " + e + "\n");

Shouldn't we re-raise the exception here?

@@ +78,5 @@
> +function getHosts(rawdata) {
> +  var hosts = [];
> +
> +  if (!rawdata || !rawdata.entries) {
> +    dump("ERROR: source data not formatted correctly: 'entries' not found\n");

Again, it seems better to raise an exception here.

@@ +83,5 @@
> +    return hosts;
> +  }
> +
> +  for (entry of rawdata.entries) {
> +    if (entry.name && entry.mode && entry.mode == "force-https") {

I think that this function is basically about sanity-checking the data, and then filtering it for "force-https" entries. Shouldn't we raise an exception if entry.mode == "force-https" && !(entry.name && entry.mode)?

@@ +104,5 @@
> +      var uri = Services.io.newURI("https://" + host.name, null, null);
> +      gSTSService.processStsHeader(uri, header, maxAge, includeSubdomains);
> +    }
> +    catch (e) {
> +      dump("ERROR: could not process header '" + header + "' from " + hostname +

Here, I understand why we're catching the exception and not throwing.

@@ +127,5 @@
> +function RedirectStopper () {};
> +
> +RedirectStopper.prototype = {
> +  // nsIChannelEventSink
> +  asyncOnChannelRedirect: function(oldChannel, newChannel, flags, callback) {

What happens when we get a redirect with the strict-transport-security header? We should make sure we're counting that.

@@ +136,5 @@
> +    return this.QueryInterface(iid);
> +  },
> +
> +  // nsISupports
> +  QueryInterface: function(iid) {

Nit: XPCOMUtils.generateQI? (a sketch follows after this review)

@@ +229,5 @@
> +// ****************************************************************************
> +// This is where the action happens:
> +// disable the current preload list so it won't interfere with requests we make
> +Services.prefs.setBoolPref("network.stricttransportsecurity.preloadlist", false);
> +// download and parse the raw json file from the Chrome source

s/Chrome/Chromium/
Attachment #660624 - Flags: superreview?(honzab.moz)
Attachment #660624 - Flags: review?(bsmith)
Attachment #660624 - Flags: review+
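Regarding the XPCOMUtils.generateQI nit above, a minimal sketch of what it suggests for RedirectStopper (this mirrors the quoted prototype; the thrown error code is an assumption, since the body of asyncOnChannelRedirect isn't quoted here):

```js
Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

function RedirectStopper() {}

RedirectStopper.prototype = {
  // nsIChannelEventSink: refuse every redirect so only the first response's
  // Strict-Transport-Security header is ever considered.
  asyncOnChannelRedirect: function(oldChannel, newChannel, flags, callback) {
    throw Components.results.NS_ERROR_ENTITY_CHANGED;
  },

  // nsIInterfaceRequestor
  getInterface: function(iid) {
    return this.QueryInterface(iid);
  },

  // nsISupports: generateQI replaces the hand-written QueryInterface
  QueryInterface: XPCOMUtils.generateQI([Components.interfaces.nsIChannelEventSink,
                                         Components.interfaces.nsIInterfaceRequestor])
};
```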
Thanks for the review - I have a couple of questions, though (below).

(In reply to Brian Smith (:bsmith) from comment #22)
> Comment on attachment 660624 [details] [diff] [review]
> patch v4
> 
> Review of attachment 660624 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> r+ conditional on review comments being addressed.
> 
> We cannot check the whole patch into Aurora and Beta because it changes the
> XPCOM interfaces that we are supposed to keep stable as we get closer to
> release.
> 
> In order to minimize risk for Firefox 17 and 18, please create a separate
> patch that contains just the updated nsSTSPreloadList.inc, with all the new
> entries removed, so that the file is a subset of what was originally in
> Firefox 17.
> 
> For mozilla-central, we can check in the whole patch.

Sounds good.

> ::: netwerk/base/public/nsIStrictTransportSecurityService.idl
> @@ +28,5 @@
> >       *                          if there are unrecognized tokens in the header.
> >       */
> >      void processStsHeader(in nsIURI aSourceURI,
> > +                          in string aHeader,
> > +                          [optional] out unsigned long long aMaxAge,
> 
> Better to use uint64_t here.

Is that a valid IDL type? According to https://developer.mozilla.org/en-US/docs/XPIDL, unsigned long long is an unsigned 64-bit type, but I'll use uint64_t if that works.

> ::: security/manager/ssl/tests/unit/test_sts_preloadlist.js
> @@ +72,5 @@
> >    do_check_false(gSTSService.isStsHost("com"));
> >  
> >    // Note: the following were taken from the STS preload list
> > +  // as of Sept. 2012. If the list changes, this test will need to be modified.
> > +  // check that the pref to toggle using the preload list works
> 
> I did not notice this the first time. Is it possible to write this test in a
> way where we don't have to modify it every time we update the list?

Unfortunately, I don't think so, unless we put some bogus entries on the preload list.
Overall, I doubt it will change too much - it only matters if we have to remove things, and in some cases if includeSubdomains changes.


> What happens when we get a redirect with the strict-transport-security
> header? We should make sure we're counting that.

I don't have a good idea of how to do this. I'll look into it, but if you know offhand, any hints would be appreciated :)
(In reply to David Keeler from comment #23)
> > >       */
> > >      void processStsHeader(in nsIURI aSourceURI,
> > > +                          in string aHeader,
> > > +                          [optional] out unsigned long long aMaxAge,
> > 
> > Better to use uint64_t here.
> 
> Is that a valid idl type? Using
> https://developer.mozilla.org/en-US/docs/XPIDL, unsigned long long is an
> unsigned 64-bit type, but I'll use uint64_t if that works.

cc: Ehsan. I believe we're supposed to prefer uint64_t over unsigned long long in IDL now. Regardless, we should update the documentation on the wiki, at least replacing the NSPR type names with standard C/C++ type names. I would do it, but I have a feeling Ehsan has a better sense of exactly what changes to the documentation would be best.

> > I did not notice this the first time. Is it possible to write this test in a
> > way where we don't have to modify it every time we update the list?
> 
> Unfortunately, I don't think so, unless we put some bogus entries on the
> preload list.
> Overall, I doubt it will change too much - it only matters if we have to
> remove things, and in some cases if includeSubdomains changes.

OK.

> > What happens when we get a redirect with the strict-transport-security
> > header? We should make sure we're counting that.
> 
> I don't have a good idea of how to do this. I'll look into it, but if you
> know offhand, any hints would be appreciated :)

One guess: in RedirectStopper, look for the strict-transport-security header in oldChannel, and if found, process it the same as you would process a 200 OK.

Second guess: If RedirectStopper blocks the redirect, does the XHR end up returning the 3xx response?

This testing situation is making me nervous. Perhaps we should be adding some entries for domains like "strict-transport-security-200.example.org", "strict-transport-security-301.example.org", etc. That would require the changes in bug 466524 for us to test properly.
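A minimal sketch of the first guess (an assumption, not what eventually landed; a later comment notes that simply stopping redirects already does the right thing):

```js
// Sketch: peek at the redirect response's STS header before refusing the redirect.
RedirectStopper.prototype.asyncOnChannelRedirect =
  function(oldChannel, newChannel, flags, callback) {
    try {
      var header = oldChannel.QueryInterface(Components.interfaces.nsIHttpChannel)
                             .getResponseHeader("Strict-Transport-Security");
      // ...process `header` exactly as a 200 OK response would be processed...
    } catch (e) {
      // getResponseHeader throws when the redirect response carries no STS header
    }
    throw Components.results.NS_ERROR_ENTITY_CHANGED; // still block the redirect
  };
```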
(In reply to Brian Smith (:bsmith) from comment #24)
> (In reply to David Keeler from comment #23)
> > > >       */
> > > >      void processStsHeader(in nsIURI aSourceURI,
> > > > +                          in string aHeader,
> > > > +                          [optional] out unsigned long long aMaxAge,
> > > 
> > > Better to use uint64_t here.
> > 
> > Is that a valid idl type? Using
> > https://developer.mozilla.org/en-US/docs/XPIDL, unsigned long long is an
> > unsigned 64-bit type, but I'll use uint64_t if that works.
> 
> cc: Ehsan. I believe we're supposed to prefer uint64_t over unsigned long
> long in IDL now. Regardless, we should update the documentation on the wiki,
> at least replacing the NSPR type names with standard C/C++ type names. I
> would do it, but I have a feeling Ehsan has a better sense of exactly what
> changes to the documentation would be best.

No, using unsigned long long in IDL is fine these days.  It will get converted to uint64_t in the header.  I modified the docs to remove mentions of the NSPR types.
Attached patch patch v5 (obsolete) — Splinter Review
Turns out, we were already doing the right thing with the redirects - by stopping any redirects, we use the header from the first response. If it has a good maxAge value, we use it.

Carrying over r+ from bsmith unless he has any other changes to make.

Minimized patch for beta and aurora coming soon.
Attachment #660624 - Attachment is obsolete: true
Attachment #660624 - Flags: superreview?(honzab.moz)
Attachment #670168 - Flags: review+
Attachment #670168 - Flags: review?(honzab.moz)
Attached patch patch for upliftSplinter Review
This patch limits the entries on the preload list to the intersection of what was already on the list and those that send an appropriate header. Some of the includeSubdomains values were changed to reflect what's actually in the headers.
This is intended for uplift to aurora and beta.
I also updated the test.
Attachment #670182 - Flags: review?(bsmith)
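Roughly, the restriction described above amounts to an intersection (hypothetical names; sketch only): keep a host for the uplift patch only if it was already preloaded in Firefox 17 and still sends a qualifying header.

```js
// Sketch: `previousList` is the old nsSTSPreloadList.inc contents and
// `scannedHosts` is the output of the header scan; both shapes are assumed.
var upliftList = previousList.filter(function(oldEntry) {
  return scannedHosts.some(function(host) {
    return host.name == oldEntry.name &&
           host.maxAge >= MINIMUM_REQUIRED_MAX_AGE;
  });
});
```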
Comment on attachment 670168 [details] [diff] [review]
patch v5

Review of attachment 670168 [details] [diff] [review]:
-----------------------------------------------------------------

sr=honzab

::: netwerk/protocol/http/nsHttpChannel.cpp
@@ +1152,5 @@
> +    // These aren't used here, but since [optional] xpidl parameters
> +    // aren't actually optional in C++, they must be included.
> +    uint64_t maxAge = 0;
> +    bool includeSubdomains = false;
> +    rv = stss->ProcessStsHeader(mURI, stsHeader.get(), &maxAge, &includeSubdomains);

What about allowing nullptr to be passed for the two arguments? Then you don't need these temp locals.

::: security/manager/boot/src/nsStrictTransportSecurityService.h
@@ +22,3 @@
>  #define NS_STRICT_TRANSPORT_SECURITY_CID \
> +  {0x193608a3, 0x5369, 0x43fb, \
> +    {0x8d, 0x37, 0x17, 0xb7, 0x00, 0xad, 0x4f, 0x9f} }

Why has this been changed?

::: security/manager/tools/getHSTSPreloadList.js
@@ +148,5 @@
> +  req.open("GET", uri, true);
> +  req.channel.notificationCallbacks = new RedirectStopper();
> +  req.onreadystatechange = function(event) {
> +    if (!inResultList && req.readyState == 4) {
> +      inResultList = true;

Why do you need the inResultList logic?
Attachment #670168 - Flags: review?(honzab.moz) → review+
Thanks for the review - I have a couple of comments/questions:

(In reply to Honza Bambas (:mayhemer) from comment #28)
> ::: netwerk/protocol/http/nsHttpChannel.cpp
> @@ +1152,5 @@
> > +    // These aren't used here, but since [optional] xpidl parameters
> > +    // aren't actually optional in C++, they must be included.
> > +    uint64_t maxAge = 0;
> > +    bool includeSubdomains = false;
> > +    rv = stss->ProcessStsHeader(mURI, stsHeader.get(), &maxAge, &includeSubdomains);
> 
> What about allowing nullptr to be passed for the two arguments? Then you
> don't need these temp locals.

Good idea.

> ::: security/manager/boot/src/nsStrictTransportSecurityService.h
> @@ +22,3 @@
> >  #define NS_STRICT_TRANSPORT_SECURITY_CID \
> > +  {0x193608a3, 0x5369, 0x43fb, \
> > +    {0x8d, 0x37, 0x17, 0xb7, 0x00, 0xad, 0x4f, 0x9f} }
> 
> Why has this been changed?

I modified nsIStrictTransportSecurityService.idl, so I had to rev the uuid - I was under the impression that this CID should change too - is this the case?

> ::: security/manager/tools/getHSTSPreloadList.js
> @@ +148,5 @@
> > +  req.open("GET", uri, true);
> > +  req.channel.notificationCallbacks = new RedirectStopper();
> > +  req.onreadystatechange = function(event) {
> > +    if (!inResultList && req.readyState == 4) {
> > +      inResultList = true;
> 
> Why do you need the inResultList logic?

I guess I was concerned that onreadystatechange could be fired with readyState == 4 multiple times per request, but there's some question as to whether that's possible. I'll look into it.
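For what it's worth, a sketch of one way to sidestep that concern (an assumption, not part of any patch here): use one-shot load/error listeners instead of onreadystatechange, since each fires at most once per request. `handleResult` is a hypothetical callback.

```js
// Sketch: load/error fire once each, so no inResultList-style guard is needed.
var req = Components.classes["@mozilla.org/xmlextras/xmlhttprequest;1"]
                    .createInstance(Components.interfaces.nsIXMLHttpRequest);
req.open("GET", uri, true);
req.channel.notificationCallbacks = new RedirectStopper();
req.addEventListener("load", function() { handleResult(req); }, false);
req.addEventListener("error", function() { handleResult(req); }, false);
req.send(null);
```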
Firefox 17 beta was released with this bug and broke Google Charts everywhere. Is there an ETA on a fix by chance? Google Charts are used in more places than you might think once you stop seeing them.
Comment on attachment 670182 [details] [diff] [review]
patch for uplift

[Approval Request Comment]
Bug caused by (feature/regressing bug #): Bug 760307
User impact if declined: Some sites that use the Google Charts API won't work. Some other sites may be forced to be considered Strict Transport Security (i.e. HTTPS-only) for longer than they would like.

Testing completed (on m-c, etc.): The feature has been tested on -aurora for 6 weeks.

Risk to taking this patch (and alternatives if risky): None.

String or UUID changes made by this patch: None

This patch is designed to rectify issues found during testing, by being more conservative in which sites are affected and how they are affected.
Attachment #670182 - Flags: review?(bsmith)
Attachment #670182 - Flags: review+
Attachment #670182 - Flags: approval-mozilla-beta?
Attachment #670182 - Flags: approval-mozilla-aurora?
Looks like this still needs to land on mozilla-central.  We want this for tomorrow's Beta 2 builds so please land on m-c asap so we can approve the uplift once QA has verified this on trunk.
Keywords: qawanted, verifyme
I am not able to reproduce this in Firefox 17.0b1. Without a locally reproducible testcase this will be almost impossible to verify. Can someone please provide said testcase?
I put the testcase in the URL field (you'll get a cert error until this gets fixed or the site gets fixed, which I think won't happen for a while, if at all).
I don't see any bustage with Firefox 17.0b1 on chart.apis.google.com. The page loads and I can create charts using the tool.
(In reply to Anthony Hughes, Mozilla QA (:ashughes) from comment #33)
> I am not able to reproduce this in Firefox 17.0b1. Without a locally
> reproducible testcase this will be almost impossible to verify. Can someone
> please provide said testcase?

Just try to display a Google chart with an <img> tag, like the testcase I attached.
Attached patch patch v6Splinter Review
- Allowed passing nullptrs to ProcessStsHeader for the two additional arguments
- bz says CIDs aren't generally changed, so I undid that
- I stand by my usage of inResultList in the onreadystatechange callback (it may not be necessary, but at least it's safe)

Carrying over r+.
Attachment #670168 - Attachment is obsolete: true
Attachment #671614 - Flags: review+
Here's a try run: https://tbpl.mozilla.org/?tree=Try&rev=55777bb53849
Brian, if you could have one last look before I land this, that'd be great.
(In reply to Loic from comment #36)
> Created attachment 671582 [details]
> Testcase (google chart)

Testcase displays without any problems for me in Firefox 17.0b1.
(In reply to Anthony Hughes, Mozilla QA (:ashughes) from comment #39)
> (In reply to Loic from comment #36)
> > Created attachment 671582 [details]
> > Testcase (google chart)
> 
> Testcase displays without any problems for me in Firefox 17.0b1.
Could you test with a new profile, please?
On a new profile this now reproduces with Firefox 17.0b1. Thanks.

Flagging for verification once this is fixed.
But, why wouldn't it reproduce on an old profile? Could you try reproducing on your old profile, but shift-reload the page?
In fact, in FF17+, if you open the Google chart directly in the location bar, you get the error message "This Connection is Untrusted".

FF16 doesn't display this error message and is able to display the chart.

Maybe your old profile contains a security exception for chart.apis.google.com.
(In reply to Loic from comment #44)
> In fact, in FF17+, if you open the Google chart directly in the location bar,
> you get the error message "This Connection is Untrusted".
> 
> FF16 doesn't display this error message and is able to display the chart.
> 
> Maybe your old profile contains a security exception for
> chart.apis.google.com.

But, this feature should prevent the security exception from working. That is why I am puzzled/concerned about the behavior Anthony is seeing. It makes me think things may be severely broken.
Testcase fails to load on the old profile now too.
https://hg.mozilla.org/mozilla-central/rev/1cbaabb841d9
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla19
Comment on attachment 670182 [details] [diff] [review]
patch for uplift

Thanks for the verifications Anthony, let's get this uplifted asap so it's in today's Beta builds.
Attachment #670182 - Flags: approval-mozilla-beta?
Attachment #670182 - Flags: approval-mozilla-beta+
Attachment #670182 - Flags: approval-mozilla-aurora?
Attachment #670182 - Flags: approval-mozilla-aurora+
Verified on the latest beta on Windows 7 64-bit, Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/17.0 Firefox/17.0, Build ID: 20121023124120
Verified on Firefox 18 beta 1.

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/18.0 Firefox/18.0
Build ID: 20121121075611
QA Contact: manuela.muntean
Verified on Firefox 19 beta 2.

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0
Build ID: 20130116072953
Status: RESOLVED → VERIFIED