Closed Bug 700837 Opened 13 years ago Closed 8 years ago

Server certificate is only trusted for a specific port after exception. Alternating certificates on one port creates constant exception prompts.

Categories

(Core :: Security: PSM, enhancement)

8 Branch
enhancement
Not set
normal

Tracking


RESOLVED WONTFIX

People

(Reporter: julien.pierre, Unassigned, NeedInfo)

References

Details

I am developing web server software and testing it with firefox.
The server supports SSL with virtual servers. It supports dynamic reconfiguration including change of the server certificate. For testing purposes, I am using self-signed certificates.

There are several scenarios where the exception handling does not work as I would expect.

Here are the test cases:

1) I connect to https://myserver.mydomain.com:port1 with Firefox
2) I receive cert1 which has the proper hostname in the subject
3) I accept the security exception.
4) I connect to https://myserver.mydomain.com:port2
5) I receive cert1, which has the proper hostname in the subject, yet I get another exception dialog even though the certificate was previously trusted. This is the first part of the problem: it seems to me that there should be a way to trust the certificate altogether, for all ports, and not just for one specific port.
The trust is specific to a particular URL, including protocol, port and hostname. That doesn't make sense to me. I should be able to trust the certificate according to what it says it's good for, i.e. the content of its subject. Either I trust the certificate, or I don't. I believe this business of trusting a given certificate only for a particular URL violates PKIX specifications, such as RFC 5280.

I believe that when it comes to re-using the same certificate for the same hostname on the same port, this restriction is seriously misguided. One should be able to do that without getting prompted again.

If the concern is that the certificate may contain multiple subjectAltNames for other hosts, or wildcards, perhaps Firefox could check for those during the exception confirmation process and present the full list to the user, so that they can decide whether to trust the cert. In any case, IMO, those names should always be displayed to the user.

6) I reconfigure the server at https://myserver.mydomain.com:port1 to use a new cert, cert2
7) I connect to https://myserver.mydomain.com:port1 with Firefox again
8) I receive cert2 which has the proper hostname in the subject, but was not previously seen
9) I accept the security exception. So far, so good.
10) I reconfigure the server at https://myserver.mydomain.com:port1 to use the old cert, cert1
11) I connect to https://myserver.mydomain.com:port1 with Firefox again
12) I receive cert1 which has the proper hostname in the subject
13) I receive another security exception dialog. This should not happen, because I already made an exception for this same URL and the same certificate, cert1, in step 3. But Firefox has now forgotten that decision; presumably, the exception was overwritten in step 9.
This is problem number 2.
This is not a theoretical problem: server farms with load balancers do exist, and they may present a different certificate on each connection to the same URL. Real deployments would hopefully use proper certificates that chain to a trust anchor, so one would not expect exceptions to show up there, but it is impossible to test such scenarios with Firefox without a real CA.
OS: Windows 7 → All
Hardware: x86_64 → All
Julien, could you generate your own CA certificate, import it into the user trust store, issue your test server certs from that trust store, and then have your test configuration use those certs?
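For reference, a minimal sketch of that setup with openssl (all file names, subjects, and validity periods here are placeholders; a real test CA would also want subjectAltName entries on the server cert):

```shell
# Create a self-signed test CA (placeholder names throughout)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test CA" \
    -keyout ca.key -out ca.pem -days 365

# Issue a server certificate from that CA for the test hostname
openssl req -newkey rsa:2048 -nodes -subj "/CN=myserver.mydomain.com" \
    -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -out server.pem -days 365

# The issued certificate should now verify against the test CA
openssl verify -CAfile ca.pem server.pem
```

Once ca.pem is imported as a trusted authority in the browser, every certificate it issues validates on any host:port it names, which sidesteps the per-port override mechanism entirely.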

Is this a regression?
Component: Security → Security: PSM
Product: Firefox → Core
QA Contact: firefox → psm
Brian,

The server product in question does not act as a CA. It only has the ability to generate self-signed certs for testing purposes. So, it would be a lot more work on my part.

I believe the first problem is a regression. I am fairly confident that it used to be possible to trust a server certificate and have it work on any port. I don't know how long ago that was, or in which versions of the Mozilla suite or Firefox.

I believe the second problem is a regression as well. It used to be possible to trust multiple server certs with the same subject. That doesn't seem to be possible anymore.
Yes, we had deliberately changed the implementation of cert overrides.

We no longer set a trusted flag on a certificate.
We remember the override at the application level and bind it to a specific port; this was deliberate.
Kai,

Well, IMO this should be revisited.

I don't see a good reason for accepting the same certificate for hostname:port1 but not hostname:port2. For a different hostname, that can be argued.

I also don't see a good reason for forgetting the previous overrides.
This bug seems to be affecting me differently. When including Javascript libraries on 2 different ports with the same host and SSL certificate the second include silently fails and doesn't present the dialog for the second port.

Browsing directly to the library on the second port presents the dialog. Creating an iframe with a src on the alternate port will generate the dialog inside the iframe. Loading the page normally silently aborts the connection on the alternate port.

Is this a bug in the way cert overrides now work?
(In reply to dwight.hobbs from comment #6)
> 
> Is this a bug in the way cert overrides now work?

No, it works as expected (don't shoot the messenger).
INVALID as per comment 3 (this is the intended behavior).
Status: NEW → RESOLVED
Closed: 9 years ago
Resolution: --- → INVALID
It may work as intended, but what's intended here doesn't make sense, IMO.

Since the current behavior isn't considered a bug, I reopened this and changed it to an enhancement request.

There are really 2 issues.

a) Same certificate being used for multiple protocol/hostname/port combinations.
IMO, there should be a way to trust the certificate so it can subsequently be processed per the rules of RFC 5280, and not some arbitrary per-URL rule. Even if Firefox doesn't do this automatically, it should offer it as an option. Say, give the choice between "Trust this certificate for all its listed usages", which would add it to the cert DB, and "Trust this certificate for this protocol/hostname/port only".
Having the first option would allow the sequence 1) through 5) to work.

b) Back-end server alternating certificates:
Also, if you follow steps 1-3, then 6-13, the sequence is currently broken too, in step 13.
Here, the issue is that there can only be one override per protocol/hostname/port.
There are 2 ways to solve this: either have the option to trust the certificate for all its usages (see a), or allow multiple cert overrides per protocol/hostname/port. The former probably makes more sense than the latter.

Of course, adding a trust flag to a cert has a larger impact in terms of security than adding the cert to an override list. One possibility would be a UI that doesn't automatically add the trust flag, but instead opens a PSM cert display screen with the content of the certificate, where the user could then manually edit the trust flags.

The problem is that, as is, you can't do this through PSM at all. You would have to use certutil to add the cert manually and trust it.
Severity: normal → enhancement
Status: RESOLVED → REOPENED
Resolution: INVALID → ---
The security engineering team discussed this and came to the conclusion that the use-case of having multiple certificates at one endpoint isn't going to be supported. The way to get the desired behavior of point a) (above) is to create your own CA, import the root certificate, and use certificates issued by it.
Status: REOPENED → RESOLVED
Closed: 9 years ago → 8 years ago
Resolution: --- → WONTFIX
This is very problematic behavior for our product, which uses port 443 to supply client (JS) web software to run our web interface, but uses a different port for our REST API. CORS is already available to provide security when the client Javascript switches to use the REST API port. This fails completely because the REST port is not covered by the certificate acceptance. The only way I could make it work was to specifically access the REST port manually, and accept the certificate. That won't fly.

The suggestion to create an ad hoc CA and accept it in the browser (always a very bad idea, since it can just as easily generate a certificate for your bank) doesn't really work in a customer's production environment. We do have the ability to generate and install customer CA-signed certificates on our product, but we can't rely on our customers to always have that capability.

As such, we may have to simply deprecate the use of Firefox to manage our products.
(In reply to Kent Peacock from comment #11)
> This is very problematic behavior for our product, which uses port 443 to
> supply client (JS) web software to run our web interface, but uses a
> different port for our REST API. CORS is already available to provide
> security when the client Javascript switches to use the REST API port. This
> fails completely because the REST port is not covered by the certificate
> acceptance. The only way I could make it work was to specifically access the
> REST port manually, and accept the certificate. That won't fly.

If you use the add cert exception dialog directly (Preferences -> Advanced -> Certificates -> View Certificates -> Servers -> Add Exception) and type in the host:port directly, does it work?

> The suggestion to create an ad hoc CA and accept it in the browser (always a
> very bad idea, since it can just as easily generate a certificate for your
> bank) doesn't really work in a customer's production environment. We do have
> the ability to generate and install customer CA-signed certificates on our
> product, but we can't rely on our customers to always have that capability.

You should be able to restrict to a particular DNS hierarchy that doesn't include your bank using the name constraints extension.
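For illustration, one way to express such a restriction when generating the CA with openssl (the config section names and the DNS subtree are assumptions, not anything from this bug):

```shell
# Hypothetical sketch: a device CA whose issuing power is limited to one
# DNS subtree via the RFC 5280 name constraints extension.
cat > nc-ca.cnf <<'EOF'
[req]
prompt = no
distinguished_name = dn
x509_extensions = v3_ca
[dn]
CN = Device CA
[v3_ca]
basicConstraints = critical,CA:true
# Certs issued by this CA are only valid for names under .mydomain.example
nameConstraints = critical,permitted;DNS:.mydomain.example
EOF

openssl req -x509 -newkey rsa:2048 -nodes -keyout nc-ca.key \
    -out nc-ca.pem -days 365 -config nc-ca.cnf

# The constraint is visible in the printed extension block
openssl x509 -in nc-ca.pem -noout -text | grep -A2 "Name Constraints"
```

A browser that honors the extension would then reject any cert this CA issued for a name outside the permitted subtree, so importing it cannot be leveraged to impersonate an unrelated site.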
Flags: needinfo?(kent)
(In reply to David Keeler [:keeler] (use needinfo?) from comment #12)
> (In reply to Kent Peacock from comment #11)

> If you use the add cert exception dialog directly (Preferences -> Advanced
> -> Certificates -> View Certificates -> Servers -> Add Exception) and type
> in the host:port directly, does it work?

I'm sure it would. I accomplished the same thing by manually accessing the host:port, and went through the cert acceptance dialog. Neither of these is really acceptable for us in our commercial product. We don't want our customers to have to do this. We want them to just have to accept the certificate once, and be done with it (in the self-signed case). It's actually somewhat worse than that. The current connection to the REST port just silently refuses the connection with unknown_ca and hangs, because the connection is being done from JavaScript. The code doesn't know that it needs to trust the certificate (again).

> You should be able to restrict to a particular DNS hierarchy that doesn't
> include your bank using the name constraints extension.

I'm not quite sure how a standalone device can do this intelligently when deployed in a customer's network where the customer has no local PKI infrastructure.
(In reply to Kent Peacock from comment #13)
> (In reply to David Keeler [:keeler] (use needinfo?) from comment #12)
> > (In reply to Kent Peacock from comment #11)
> 
> > If you use the add cert exception dialog directly (Preferences -> Advanced
> > -> Certificates -> View Certificates -> Servers -> Add Exception) and type
> > in the host:port directly, does it work?
> 
> I'm sure it would. I accomplished the same thing by manually accessing the
> host:port, and went through the cert acceptance dialog. Neither of these is
> really acceptable for us in our commercial product. We don't want our
> customers to have to do this. We want them to just have to accept the
> certificate once, and be done with it (in the self-signed case). It's
> actually somewhat worse than that. The current connection to the REST port
> just silently refuses the connection with unknown_ca and hangs, because the
> connection is being done from JavaScript. The code doesn't know that it
> needs to trust the certificate (again).

One thing you could do is write an add-on that sets everything up for your users. You would just need to gather the appropriate information and call https://dxr.mozilla.org/mozilla-central/rev/8a494adbc5cced90a4edf0c98cffde906bf7f3ae/security/manager/ssl/nsICertOverrideService.idl#56 for each host:port.

> > You should be able to restrict to a particular DNS hierarchy that doesn't
> > include your bank using the name constraints extension.
> 
> I'm not quite sure how a standalone device can do this intelligently when
> deployed in a customer's network where the customer has no local PKI
> infrastructure.

Well, your standalone device generates a self-signed certificate, right? Presumably you could have it generate a CA certificate with the right properties and then have that sign an end-entity certificate. Then you have your users download and import the root certificate.
(In reply to David Keeler [:keeler] (use needinfo?) from comment #14)
> One thing you could do is write an add-on that sets everything up for your
> users. You would just need to gather the appropriate information and call
> https://dxr.mozilla.org/mozilla-central/rev/
> 8a494adbc5cced90a4edf0c98cffde906bf7f3ae/security/manager/ssl/
> nsICertOverrideService.idl#56 for each host:port.

Really? That's not ever going to happen. In the real world, this stuff just has to work, and the simpler the better. We can't force your customers to download an add-on to manage our devices.

> Well, your standalone device generates a self-signed certificate, right?
> Presumably you could have it generate a CA certificate with the right
> properties and then have that sign an end-entity certificate. Then you have
> your users download and import the root certificate.

It's funny you suggest that. That was how the guy before me implemented our certificates (but without any domain restrictions). As far as I know, no customer ever went through the manual process of importing the root certificate (which was a good thing! They would have ended up with an unrestricted CA certificate from every device they bought). How is that better than relying on CORS for cross-port connections? Customers did the simple thing, and accepted the device certificate directly (and didn't even know there was a CA certificate backing it up). I just wasn't comfortable with our customers having to import a CA certificate for every device they owned, which you can't make them do anyway, so I changed it to a self-signed certificate.

The bottom line for us is this: any burden on our customers aside from their simple initial acceptance of the self-signed certificate is unacceptable. (Security-minded customers will install their own trusted certificates, and everything will be cool.) As I said, we will just have to release note that managing our arrays with Firefox is unsupported (perhaps with some suggested workarounds).

I'm a little surprised that this hasn't come up before. With the rising importance of REST API interfaces, I'd be amazed if every company proxies those along with their web interfaces on 443. (We do now, but we want to change that. That's how we discovered this.)
I thought I'd post the solution that I ended up having to implement to force the user to accept the server certificate on the REST port (only in Firefox by the way, all other browsers work without doing this). The solution is somewhat complicated by the fact that some configuration changes from the GUI cause a certificate regeneration (e.g., changing the DNS name). The solution involves opening a new page directed to the REST port, with a unique URL, and then putting up an alert to prevent the code from continuing and doing an access to the REST port before certificate acceptance:

      var useragent = navigator.userAgent;
      var patt = new RegExp("Firefox");
      var res = patt.test(useragent);
      if (res) {
        // Cache-busting timestamp so cert-accept.html is fetched fresh each time
        var seconds = new Date().getTime() / 1000;
        var newURL = "https://" + window.location.hostname + ":9999/cert-accept.html?t=" + seconds;
        var opened = window.open(newURL, "");
        alert("Please allow the pop-up window.\nIt is used to accept a certificate, if required.\nThis is due to a Firefox bug.");
      }

      ... continue to bring up the GUI ...

The handler for cert-accept.html (the t= is ignored, it just forces the page not to be cached) returns:

<!DOCTYPE html>
<html>
  <head>
  <META HTTP-EQUIV="Pragma" CONTENT="no-cache">
  </head>
  <body>
    <script>
      function doClose() {
        window.close();
      }
      setTimeout(doClose, 1000);
    </script>
  </body>
</html>

One subtlety is that if the server supports RFC 5077 ticket session resumption, any certificate regeneration has to make those sessions invalid (in openssl, by replacing the ticket keys in the SSL_CTX).

It works, and puts up the necessary acceptance dialogs for a new certificate on page reload, but also requires that the alert be cleared every time.
We are having the same problem, where one host runs multiple microservices on different ports and they talk to each other via REST APIs.
We are also finding this is a nuisance. It forces one into using a reverse proxy, or telling clients that Firefox is not supported.
This affected the product I'm working on too: in our lab we use self-signed certs on multiple ports, one of which is a REST port that didn't display the cert warning, so it was difficult to figure out why only Firefox failed.

This workaround is ugly, but it gets past this obvious Firefox deficiency, which we do not see in any other browser. In the odd chance you have a user that's dead set on using Firefox for whatever reason, they can get by with it, but every time they clear their history they have to do it again. It's more trouble than it's worth; it's easier to just use Chrome or Safari. I strongly believe that Mozilla should totally rethink this, and either add an override in about:config or stop scoping accepted certs by port the way they do now.

Workaround:

Click on the Firefox Settings Icon (which is the 3 vertical bars in top right corner of Firefox next to the url bar)
Select “Preferences”
From the left-hand navigation pane, select “Privacy and Security”
Scroll to “History” and ensure that “Remember History” is on. If it is not set to “Remember History”, change it
Scroll further down to “Certificates”

Under “Certificates”, click the “View Certificates” box (to the right of “Query OCSP responder servers to confirm the current validity of certificates”)

From the “Certificate Manager”, select the “Servers” tab at the top of the page
Scroll through the certificates list until you see the instances of the server certificate for the fqdn/port.
  For each certificate that is stored for the host:
    highlight the certificate
    select “Delete” (at the bottom of the page), then “OK”
    repeat for all remaining instances of the fqdn certificate that are stored

Once they are all cleared:

Click on the “Add Exception” box in the lower right actions pane, and a dialog box will appear
In the “Location” bar under “Server”, enter https://<fqdn>:<port>
    Click "Get Certificate"
    Ensure “Permanently store this exception” is checked
    Click “Confirm Security Exception”

Repeat the previous block of steps for any port the cert uses

Click “Ok”