Closed Bug 402210 Opened 17 years ago Closed 16 years ago

Should SSL Error page for domain mismatch hyperlink to correct site?

Categories

(Core Graveyard :: Security: UI, defect, P3)

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: johnath, Assigned: johnath)

References

Details

Attachments

(3 files, 6 obsolete files)

When an SSL error page is presented for a domain mismatch, we tell the user the site(s) for which the cert is actually issued.  Should that text be hyperlinked to take you to the actual site?

In the common case, a legitimate site is breaking because they don't use a wildcard cert and so, for example, https://amazon.com currently errors out, since the cert is only for https://www.amazon.com.  In this case, it is extremely desirable to give users a way to get to the site they actually want, and hyperlinking the text is an obvious approach for that.

The catches I can see are:

 - SubjectAltName: Certs can have multiple (many, even) names for which they are issued, thanks to the SubjectAltName certificate extension.  Those could be unrelated sites, and so the interaction there could be "weird."  One mitigation possibility is to only hyperlink for certs with a single verified name, and punt on long lists.  Another approach is to link them all and let the user sort it out.  Confusing, but maybe not more so than the current behaviour of listing them all and NOT hyperlinking.

 - MitM - In a man-in-the-middle attack, the attacker presents their own cert in place of the site's legitimate certificate.  If I attempt to connect to https://www.amazon.com and the attacker redirects me to a site presenting a (legitimately-obtained) cert for "www.amazonsecureshopping.com" - a phish site they control - then hyperlinking to it from an authoritative "browser warning page" might enable the attack more than we'd like.  A potential mitigation here would be to only hyperlink for obvious substring matches - so https://amazon.com would only get hyperlinks for site names under that user-specified domain - yes to www.amazon.com, no to phishingamazonians.com.  This would limit the helpfulness of the fix, though, since not all mismatches are subdomains.
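A minimal sketch of that substring mitigation, assuming we already know the host the user asked for (thisHost) and the host the cert was issued for (okHost) - the names are illustrative only, not from any patch:

function certHostIsUnderRequestedHost(okHost, thisHost) {
  // www.amazon.com is under amazon.com, but phishingamazonians.com is not;
  // the "." boundary keeps notamazon.com from matching amazon.com.
  var suffix = "." + thisHost;
  return okHost.length > suffix.length &&
         okHost.lastIndexOf(suffix) == okHost.length - suffix.length;
}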
Depends on: https-error-pages
I don't know if we can find a resolution that works here, but I think we should
answer the question before Firefox 3; nominating as blocking.
Flags: blocking1.9?
I propose we try to be very specific. If we believe that the omission of the "www." part of the URL is the most common case, let's handle that one specifically.

I'd like to see some real life cases.  The amazon.com case doesn't do it for me because you can't actually start at http://amazon.com and encounter a link to https://amazon.com.  Or at least I could not find one.

Does anyone have examples of public sites that need this treatment?
bob,
https://verisign.com is maybe such a site. 
https://www.verisign.com works; https://verisign.com results in the error page
(In reply to comment #3)
> bob,
> https://verisign.com is maybe such a site. 
> https://www.verisign.com works; https://verisign.com results in the error page

Hi Carsten,

Is there a way to start from http://www.verisign.com/ and get to https://verisign.com via normal clicks?  My guess is that many (most?) sites that use SSL will generate the error when you remove the "www." component. I'm most curious about sites where average users will likely run into trouble just by doing what they do, starting at the non-SSL port.

Perhaps, for example, there are sites where the transition from http to https is handled in such a way (generated by JS??) that the current URL is modified to include the "s" in "https".  In such a case, users would hit the invalid "https://example.com" URL.
Such a site would presumably work equally well on foo.com as on www.foo.com, rather than redirecting to the www. version as most do, but then construct its https: links dynamically based on the current user's location.

When we first turned on invalid-cert-blocking we had a couple of those reported; I can't remember which bug #s, but it wasn't totally consistent (was it FedEx?).

I'm pretty dubious of this. It _sounds_ like a good thing, but I really think it'll end up being more abuseful than useful.

On the somewhat similar IE7 ssl error pages Microsoft apparently found the (www.)foo.com case common enough to mention to users in the "More information" text, but that's hidden behind a link click and it seems like users are just as likely to click the 'continue to this website (not recommended)' link right above it.
(In reply to comment #5)
> I'm pretty dubious of this. It _sounds_ like a good thing, but I really think
> it'll end up being more abuseful than useful.

How so? We're (as I understand things) talking solely about special-casing the "www" here, as well as adding a hyperlink to the domain name offered by the certificate. Johnath's described how the latter might be abused (MITM with a homograph-URL-style phish), but I can't really see how the former might be abused.

> On the somewhat similar IE7 ssl error pages Microsoft apparently found the
> (www.)foo.com case common enough to mention to users in the "More information"
> text, but that's hidden behind a link click and it seems like users are just as
> likely to click the 'continue to this website (not recommended)' link right
> above it.

Right. Our goal here is to try and provide the safest possible alternative in the case where a user has hit a misconfigured cert. I want to divert attention from the "override" button, basically :)
note that the sites I'm dealing with are things like:
https://browser.garage.maemo.org which has a certificate for https://garage.maemo.org

So if you simply special-case "www", you've absolutely defeated my request.

At worst we might need to check the eTLD, but I doubt it; if someone can get a certificate for 'us.', that's fine.

It should be possible to offer links both up or down the DNS chain, but not across.

Perhaps for cases where the server is across, we could offer a link for the common stem; however, I believe that's beyond the scope of my original request.
[visit: https://browser.garage.maemo.org, cert: https://www.garage.maemo.org, offer: https://garage.maemo.org]

bob, please see https://bugs.maemo.org/show_bug.cgi?id=1403#c20 - i.e., yes people really do this.

JavaScript-based redirects are certainly fairly common, as are HTTP header-level redirects, both of which are blocked by the current code. However, neither of them should really influence this bug.
Another thing we could do is always offer an HTTP-based link, which in my experience works in 99% of the cases (including the one timeless refers to).

So:

request: https://amazon.com
cert: https://www.amazon.com
auto-redirect: https://www.amazon.com

request: https://browser.garage.maemo.org
cert: https://garage.maemo.org
offer: You can go to the _site for which the certificate is valid_ or try to visit _browser.garage.maemo.org_ without encryption.

The wording would need to be de-sucked.
Johnath points out that I just suggested a mechanism to encourage security downgrade attacks - though, at least we'd be making it clear that's what we were doing.
Minusing.  Feel free to re-nom if you disagree.
Flags: blocking1.9? → blocking1.9-
Priority: -- → P3
Don't disagree; this is wanted, not blocking.
Flags: wanted1.9+
(In reply to comment #2)
> I propose we try to be very specific. If we believe that the omission of the
> "www." part of the URL is the most common case, let's handle that one
> specifically.
> 
> I'd like to see some real life cases.  The amazon.com case doesn't do it for me
> because you can't actually start at http://amazon.com and encounter a link to
> https://amazon.com.  Or at least I could not find one.
> 
> Does anyone have examples of public sites that need this treatment?

Bumped into this one this morning:

http://shoppersdrugmart.ca/english/index.html  (Shopper's Drug Mart is a large Canadian pharmacy chain).

The "Healthwatch Med Ready" link along the top navbar redirects to https://shoppersdrugmart.ca/refill/en_CA/welcome.html which fails, since the cert is issued for www.shoppersdrugmart.ca.


Just a thought: the hyperlink won't help people who arrived at a page as the result of a submit action. The original POST/GET data is lost when people click such a link on the error page.
I so don't care about that edge case. Let's fix the common case.
Let's not let perfect be the enemy of good here.  I propose we do the simple thing first:

 - If the intended domain is a sub-domain of the actual (which gets the "missing www." case as well as a bunch of similar ones), we hyperlink, otherwise we file a new bug.

The error page used is not privileged and shouldn't be, so things like the eTLD service are not available without significant complication, but basic JS string manipulation is still available.

Figuring out how to do this in a way that isn't a terrible hack is also kind of tricky.  The string in question is passed in from nsDocShell, so either it's linkified there (rather ick, but possible - and chrome-privileged, for that matter) or we linkify it in netError.xhtml JavaScript (very ick, and prone to l10n bustage).  Or there's some third way that I'm not thinking of right now...
Attached patch demo (obsolete) — Splinter Review
This doesn't implement auto-redirect. It's merely a proof of concept. Sadly it isn't possible to pass additional arguments off to the JS side, so they have to be encoded into the message. How exactly we choose to encode it is up to us.

I believe that we could in theory push this through such that any localizations which don't update this string merely don't get the feature and could choose to take it when they feel comfortable with it (if the tag isn't present, the code does nothing).

If someone wants to implement auto-redirect, they're free to do so.
Attachment #315105 - Flags: ui-review?(johnath)
Attachment #315105 - Flags: review?(kengert)
Attachment #315105 - Flags: review?(cbiesinger)
Comment on attachment 315105 [details] [diff] [review]
demo

Alas, late-l10n, but the idea is right. We don't really have a solid policy on accepting en-US-only changes; if we did, a follow-up would have to be levied to get the string translated, I guess?
Comment on attachment 315105 [details] [diff] [review]
demo

So, yes, this is something we'd like to do.

To avoid late-l10n, I think you could easily write a little script that looks for %S in all of the locales (http://mxr.mozilla.org/l10n/search?string=certErrorMismatchSingle2) and adds the anchor around it. There's nothing grammatically different about that change, I don't think; it's just making it active.
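For illustration, such a bulk edit could look roughly like the following Node-style sketch - an assumption about tooling, not the script that was actually used; the directory layout follows the l10n path mentioned later in this bug, and the replacement markup is the one the eventual locale patch applies:

var fs = require("fs");
var path = require("path");

var L10N_ROOT = "l10n";  // assumed location of the l10n checkout
var NEW_MARKUP = '<a id="cert_domain_link" title="%1$S">%1$S</a>';

fs.readdirSync(L10N_ROOT).forEach(function (locale) {
  var file = path.join(L10N_ROOT, locale, "security", "manager",
                       "chrome", "pipnss", "pipnss.properties");
  if (!fs.existsSync(file))
    return;
  var lines = fs.readFileSync(file, "utf8").split("\n").map(function (line) {
    // Only touch the one key this bug cares about; the function form of
    // replace() sidesteps "$"-pattern surprises in the replacement string.
    return /^certErrorMismatchSingle2=/.test(line)
      ? line.replace("%S", function () { return NEW_MARKUP; })
      : line;
  });
  fs.writeFileSync(file, lines.join("\n"));
});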
Attachment #315105 - Flags: review?(cbiesinger) → review+
Attachment #315105 - Flags: ui-review?(johnath)
Attachment #315105 - Flags: ui-review+
Attachment #315105 - Flags: review?(cbiesinger)
Attachment #315105 - Flags: review+
Axel, see comment 19 - are you OK with that?
Comment on attachment 315105 [details] [diff] [review]
demo

I agree with the ui-r here - it's definitely the right UI experience.  A couple of drive-by comments on the code change:

>Index: mozilla/docshell/resources/content/netError.xhtml
>+        if (err == "nssBadCert" && sd && sd.textContent) {
>+          /* Bug 402210, sometimes servers are misconfigured
>+           * in certain cases we're willing to help out their
>+           * users.
>+           */

This makes the function pretty long, and mostly for the sake of this special case - maybe add a separate addHelperLink() function or something of the like?

>+          var textContent = sd.textContent;
>+          var linkTag = /<a title="([^"]+)">([^<>]+)<\/a>/;
>+          if (/ssl_error_bad_cert_domain/.test(textContent) &&

This ssl_error_bad_cert_domain error code will also be presented in the case where there are multiple mismatched domains, or where there are multiple errors.  I *think* this is still okay, since in the multiple-mismatch case you won't find the <a> tag (since you only add that to the single-mismatch string), and in the multiple-errors case, linking to the right site might still be helpful (because that site might present an entirely valid cert).

>+              linkTag.test(textContent)) {
>+            var okHost = RegExp.$1;
>+            var url = document.documentURI;
>+            var ctype = url.search(/\&c\=/);
>+            var duffUrl = url.search(/\&u\=/);
>+            url = decodeURIComponent(url.slice(duffUrl + 3, ctype));
>+            var thisHost;
>+            var proto;
>+            if (/^((?:[^:]+):\/\/)([^/]+)/.test(url)) {
>+              proto = RegExp.$1;
>+              thisHost = RegExp.$2;
>+            }

I think you could skip the document.documentURI parsing and just use document.location (which should be the "real" URL - the one that isn't about:neterror), which is already parsed for you.  :)  This might make the specific cases below easier to check too, since you'll have a full-fledged DOM Location at that point.

>+            var newText = okHost;
>+            /* case #1: 
>+             * amazon.com uses an invalid security certificate.
>+             *
>+             * The certificate is only valid for www.amazon.com
>+             */
>+            if (okHost.indexOf("."+thisHost) + thisHost.length + 1 == okHost.length)
>+              newText = '<a href="'+proto + okHost+'">'+okHost+'</a>';
>+            /* case #2:
>+             * browser.garage.maemo.org uses an invalid security certificate.
>+             *
>+             * The certificate is only valid for garage.maemo.org
>+             */
>+            else if (thisHost.indexOf("."+okHost) + okHost.length + 1 == thisHost.length)
>+              newText = '<a href="'+proto + okHost+'">'+okHost+'</a>';
>+            sd.innerHTML = textContent.replace(linkTag, newText);
>+          }
>+          document.getElementById("errorTryAgain").style.display = "none";
>+        }
>+

If you gave the <a> tag an id, you could getElementById() it, and then set href directly, instead of doing text manipulation.  That would also make the detection up top easier.
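For what it's worth, here's a rough sketch of how that id-based variant hangs together (the anchor markup is the replacement eventually applied across the locales below; the JS is illustrative, not the patch itself). The localized string carries the anchor, with %S becoming <a id="cert_domain_link" title="%1$S">%1$S</a>, and the error-page side shrinks to something like:

var link = document.getElementById("cert_domain_link");
if (link) {
  // The title attribute carries the host the cert is actually valid for,
  // smuggled through the localized string.
  var okHost = link.getAttribute("title");
  // ... decide whether okHost is safe relative to document.location.hostname,
  // then set link.href ...
}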
Here's my take:

Writing a fix-up patch should be possible, and easy.

I wouldn't totally rule out that there is an odd one-off grammar issue in here for some language, but I'd be highly surprised to see that have a worse impact than "yuck".

We should do the full monty in terms of doing the post to .l10n, with a testcase that shows what's expected to happen. https://amazon.com sounds suitable. I'd prefer that post to be done by someone that knows what he's talking about.
Attached patch putting money where my mouth is (obsolete) — Splinter Review
Thanks to timeless for making this happen - I just took my own review advice and put this together.
Assignee: kengert → johnath
Attachment #315105 - Attachment is obsolete: true
Status: NEW → ASSIGNED
Attachment #317342 - Flags: review?(cbiesinger)
Attachment #315105 - Flags: review?(kengert)
Attachment #315105 - Flags: review?(cbiesinger)
Attachment #317342 - Flags: review?(kengert)
Shows how the patch deals with the four obvious cases
Attached patch Right patch this time! (obsolete) — Splinter Review
Sorry, last patch was missing a moved block of code (the new function in netError needs to execute at the end of initPage(), not the beginning).  This is the patch that produced the screenshots.
Attachment #317342 - Attachment is obsolete: true
Attachment #317348 - Flags: review?(kengert)
Attachment #317348 - Flags: review?(cbiesinger)
Attachment #317342 - Flags: review?(kengert)
Attachment #317342 - Flags: review?(cbiesinger)
Comment on attachment 317348 [details] [diff] [review]
Right patch this time!

r=kengert for the change in security/manager
Attachment #317348 - Flags: review?(kengert) → review+
Comment on attachment 317348 [details] [diff] [review]
Right patch this time!

>Index: docshell/resources/content/netError.xhtml
>+        /* case #1: 
>+         * example.com uses an invalid security certificate.
>+         *
>+         * The certificate is only valid for www.example.com
>+         */
>+        if(new RegExp(thisHost + "$").test(okHost))
>+          // okHost ends with thisHost
>+          link.href = proto + okHost;
>+
>+        /* case #2:
>+         * browser.garage.maemo.org uses an invalid security certificate.
>+         *
>+         * The certificate is only valid for garage.maemo.org
>+         */
>+        if(new RegExp(okHost + "$").test(thisHost))
>+          // thisHost ends with okHost
>+          link.href = proto + okHost;
>+      }

These regexps are obviously a problem, since okHost is likely to include '.' characters, which have a different meaning in regular expressions.  I'd like a library function to escape them, rather than doing it manually in this file, but that doesn't seem to exist at the moment (bug 248062).  So I'll either have to attach a patch which does include the manual escaping, or go back to timeless' indexOf() approach.

The most readable approach is probably just to poof up an endsWith function, and use that (since endsWith is what we actually want).  Something like this oughta do:

function endsWith(string, target) {
  var expectedIndex = string.length - target.length;
  return expectedIndex >= 0 && 
         string.lastIndexOf(target) == expectedIndex;
}

The other thing to keep in mind is that notpaypal.com shouldn't be able to get a hyperlink while MitM'ing paypal.com just because it endsWith "paypal.com" - so the tests should include the "." separator.
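With the "." included, both cases collapse to something like this (a sketch, using the same names as the patch under review):

if (endsWith(okHost, "." + thisHost) || endsWith(thisHost, "." + okHost))
  link.href = proto + okHost;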

I'll attach an updated patch later today.
Attachment #317348 - Flags: review?(cbiesinger)
Same behaviour as depicted in attachment 317345 [details], but without regexes to get wrong.
Attachment #317348 - Attachment is obsolete: true
Attachment #318185 - Flags: review?(cbiesinger)
From IRC:

13:43 <@Waldo> that one doesn't seem biesi-worthy from what I can tell, it being basically completely UI- and JS-related
13:43 <@biesi> johnath, probably better to find someone else for that one
13:43 <@biesi> yeah what Waldo said
13:45 <@biesi> johnath, moa/rs=me
13:45 <@biesi> now just find a reviewer :)
13:46 <@johnath> gavin|: feel like playing docshell peer, or do you have blockers to review?
13:46 < proxyvictim> i'd go w/ waldo as the reviewer
13:46 < proxyvictim> he's alive and fairly url conscious
13:46 <@Waldo> no, not me
13:46 <@Waldo> no time at the moment for it
13:46 <@johnath> oh yes, I'd take Waldo too, except... what Waldo said!
13:46 <@Waldo> I have to be choosy right now
13:47 <@gavin|> johnath: hit me

Consider yourself hit.  The only thing missing here is the patch to replace the %S string in all locales.
Attachment #318185 - Attachment is obsolete: true
Attachment #318189 - Flags: review?(gavin.sharp)
Attachment #318185 - Flags: review?(cbiesinger)
Attached patch Change the string in 51 locales (obsolete) — Splinter Review
Axel, please take a quick look and make sure I didn't do anything stupid, but this is a result of replacing |%S| with |<a id="cert_domain_link" title="%1$S">%1$S</a>| in the 51 locales that contain a "certErrorMismatchSingle2" key in their pipnss.properties file.
Attachment #318207 - Flags: review?(l10n)
Comment on attachment 318207 [details] [diff] [review]
Change the string in 51 locales

Thanks, that looks good, except for the inclusion of Tamil: l10n/ta/security/manager/chrome/pipnss/pipnss.properties isn't branched yet, it's not certain whether we will branch it, and I don't think we want this on the branch. Apart from that, this is cool.
Attachment #318207 - Flags: review?(l10n) → review+
(In reply to comment #31)
> (From update of attachment 318207 [details] [diff] [review])
> Thanks, that looks good, except for the inclusion of Tamil:
> l10n/ta/security/manager/chrome/pipnss/pipnss.properties isn't branched yet,
> it's not certain whether we will branch it, and I don't think we want this on the branch.
> Apart from that, this is cool.

New patch, no Tamil.  Thanks, Axel.
Attachment #318207 - Attachment is obsolete: true
Attachment #318225 - Flags: review+
Comment on attachment 318225 [details] [diff] [review]
Change string in 50 locales (no tamil)

Requesting approval for the l10n changes, now that they have Axel's okay.  Technically, this could land at a different time than the code change without any ill-effects (the code wouldn't do anything without this change, and this change wouldn't do anything visible without the code, but neither would break without the other). Nevertheless, my plan is to land them together once they both pass review.
Attachment #318225 - Flags: approval1.9?
Attachment #318189 - Flags: review?(gavin.sharp)
Comment on attachment 318225 [details] [diff] [review]
Change string in 50 locales (no tamil)

a1.9=beltzner
Attachment #318225 - Flags: approval1.9? → approval1.9+
Comment on attachment 318189 [details] [diff] [review]
Better comments in a couple places

I appreciate the approval, but gavin still needs to review this.  :)
Attachment #318189 - Flags: review?(gavin.sharp)
Comment on attachment 318189 [details] [diff] [review]
Better comments in a couple places

>Index: docshell/resources/content/netError.xhtml

>+      function addDomainErrorLink() {

>+        if(!link)

Gotcha!

>+        var okHost = link.title;

.getAttribute("title") might be a bit safer, to be sure to avoid a potentially value-munging getter, but .title is just a string attr (as opposed to a URI attr) so this worry is probably unwarranted (nsGenericHTMLElement::GetTitle just calls GetAttr).

>+        var thisHost = document.location.hostname;
>+        var proto = document.location.protocol;

This returns "https:", without the slashes, so when you append the hostname to it below you end up with, e.g. "https:host.com". This correctly linkifies to "https://host.com", but it might be a good idea to add a test for that somewhere - in the unlikely case that this breaks in the future we could potentially link to the wrong site.

(I'm not sure whether the same do-what-I-mean logic will fixup links for non-HTTP[S] protocols, but explicitly adding "//" might not be any better, and that might be a corner case not worth worrying about.)

>+        /* case #2:
>+         * browser.garage.maemo.org uses an invalid security certificate.
>+         *
>+         * The certificate is only valid for garage.maemo.org

I'm not sure how useful the feature is in this case, since I think it's most likely that in these cases we'll end up linking to a different site/page than the one the user was trying to access (as in the example, even - browser.garage.maemo.org is not the same site as garage.maemo.org). I guess it does get you back to a working site, at least.

>+      function endsWith(string, target) {
>+        var expectedIndex = string.length - target.length;
>+        return expectedIndex >= 0 && 
>+               string.lastIndexOf(target) == expectedIndex;
>+      }

I assume you've changed this to use Neil's suggested version from IRC?
Attachment #318189 - Flags: review?(gavin.sharp) → review+
Attached patch With testsSplinter Review
(In reply to comment #36)
> >+      function addDomainErrorLink() {
> >+        if(!link)
> 
> Gotcha!

Grr!  Fixed in 3 places.

> >+        var okHost = link.title;
> 
> .getAttribute("title") might be a bit safer, to be sure to avoid a potentially
> value-munging getter, but .title is just a string attr (as opposed to a URI
> attr) so this worry is probably unwarranted (nsGenericHTMLElement::GetTitle
> just calls GetAttr).

It also doesn't hurt, though.  Fixed.

> >+        var thisHost = document.location.hostname;
> >+        var proto = document.location.protocol;
> 
> This returns "https:", without the slashes, so when you append the hostname to
> it below you end up with, e.g. "https:host.com". This correctly linkifies to
> "https://host.com", but it might be a good idea to add a test for that
> somewhere - in the unlikely case that this breaks in the future we could
> potentially link to the wrong site.
>
> (I'm not sure whether the same do-what-I-mean logic will fixup links for
> non-HTTP[S] protocols, but explicitly adding "//" might not be any better, and
> that might be a corner case not worth worrying about.)

Added a test for https, and for non-http protocols (ably represented by ftp). 

> >+        /* case #2:
> >+         * browser.garage.maemo.org uses an invalid security certificate.
> >+         *
> >+         * The certificate is only valid for garage.maemo.org
> 
> I'm not sure how useful the feature is in this case, since I think it's most
> likely that in these cases we'll end up linking to a different site/page than
> the one the user was trying to access (as in the example, even -
> browser.garage.maemo.org is not the same site as garage.maemo.org). I guess it
> does get you back to a working site, at least.

Right, and basically my rationale is "it might help, and it's safe to do."

> >+      function endsWith(string, target) {
> >+        var expectedIndex = string.length - target.length;
> >+        return expectedIndex >= 0 && 
> >+               string.lastIndexOf(target) == expectedIndex;
> >+      }
> 
> I assume you've changed this to use Neil's suggested version from IRC?

Yes indeed:  return haystack.slice(-needle.length) == needle;
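Spelled out with the same helper shape as before, that one-liner is just:

function endsWith(haystack, needle) {
  return haystack.slice(-needle.length) == needle;
}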

Carrying forward your r+, but feel free and invited to take a look at the tests, if you like.

Requesting approval for the second half of this patch - this is the half that actually does the work.  

RISK: Very low, since we're just special-casing some error page code, all of which is guarded by a check to make sure the newly added anchor tag actually exists.  If something goes wrong and it doesn't exist, we just continue to behave as we currently do.  Even though this patch changes strings, it doesn't change semantics, and the accompanying l10n patch catches all other locales up on the change.

REWARD: In a common SSL error case, we become a lot more helpful by pointing visitors to the site they likely want to visit, without opening them up to the MitM attacks these pages are designed to prevent.
Attachment #318189 - Attachment is obsolete: true
Attachment #318670 - Flags: review+
Attachment #318670 - Flags: approval1.9?
Comment on attachment 318670 [details] [diff] [review]
With tests

a1.9=beltzner
Attachment #318670 - Flags: approval1.9? → approval1.9+
I've landed the code half - I need someone with l10n access to land that half.  Gavin is updating his tree as we speak, so that's probably him.

Checking in docshell/resources/content/netError.xhtml;
/cvsroot/mozilla/docshell/resources/content/netError.xhtml,v  <--  netError.xhtml
new revision: 1.29; previous revision: 1.28
done
Checking in docshell/test/Makefile.in;
/cvsroot/mozilla/docshell/test/Makefile.in,v  <--  Makefile.in
new revision: 1.13; previous revision: 1.12
done
RCS file: /cvsroot/mozilla/docshell/test/test_bug402210.html,v
done
Checking in docshell/test/test_bug402210.html;
/cvsroot/mozilla/docshell/test/test_bug402210.html,v  <--  test_bug402210.html
initial revision: 1.1
done
Checking in security/manager/locales/en-US/chrome/pipnss/pipnss.properties;
/cvsroot/mozilla/security/manager/locales/en-US/chrome/pipnss/pipnss.properties,v  <--  pipnss.properties
new revision: 1.34; previous revision: 1.33
done
(In reply to comment #40)
> Checked in the l10n patch:
> http://bonsai-l10n.mozilla.org/cvsquery.cgi?branch=HEAD&branchtype=match&date=explicit&mindate=2008-04-30+13%3A44&maxdate=2008-04-30+13%3A46

That's both halves!

-> FIXED

(Can I get an Amen?)
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Oh so much more, you got a mass in the church of l10n, http://groups.google.com/group/mozilla.dev.l10n/browse_frm/thread/43637b70d0f9e0d5
Blocks: 431647
Depends on: 431652
Depends on: 432491
Depends on: 432494
When you worked on this patch, you only cared about the most obvious scenario, namely the error pages.

But there is more. This is core code. The same string is used in a classic error dialog whenever the SSL error happens in a context outside of a web page - for example, when you're using email protocols, maybe running the ChatZilla extension, or in some scenarios where we don't detect that we're running in a browser window.

One such scenario has been reported in bug 433420.

As a result of this bug, the error dialog contains the HTML tags, too.
I regret that I failed to warn about the dialog scenario.

The original proposal to use a search-and-replace method for this very browser-specific feature should have been preferred.

Can we still do it?
Can we revert the string change and add some logic in browser code?
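A purely hypothetical sketch of what "logic in browser code" could mean - keep the shared pipnss string free of markup and have netError.xhtml wrap the bare host name itself; getCertValidHost() and the element id are assumptions for illustration, not existing APIs:

var sd = document.getElementById("errorShortDescText");  // assumed id of the error-text container
var okHost = getCertValidHost();                          // hypothetical helper returning the cert's host
if (sd && okHost) {
  var text = sd.firstChild;                               // assuming a single text node child
  var idx = text.data.indexOf(okHost);
  if (idx != -1) {
    var rest = text.splitText(idx);                       // rest now starts with okHost
    rest.deleteData(0, okHost.length);                    // drop the plain host text
    var link = document.createElementNS("http://www.w3.org/1999/xhtml", "a");
    link.textContent = okHost;
    link.setAttribute("href", "https://" + okHost + "/");
    sd.insertBefore(link, rest);                          // put the link where the host text was
  }
}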
Regarding the issue described in comments 43 and 44, I have filed bug 439062.
Blocks: 439062
No longer blocks: 431647, 439062
Depends on: 439062
Product: Core → Core Graveyard
Blocks: 1324204