Closed Bug 405514 Opened 17 years ago Closed 16 years ago

Exception dialog needs to support cert with domain name or matching IP address

Categories

(Core Graveyard :: Security: UI, defect)

Type: defect
Priority: Not set
Severity: normal

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: mimecuvalo, Assigned: KaiE)

References

Details

User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b1) Gecko/2007110904 Firefox/3.0b1
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9b1) Gecko/2007110904 Firefox/3.0b1

Separating this into a separate bug as discussed in bug 405289 (which deals with support across different ports).

A site that throws the invalid-certificate error, such as https://www.kuix.de, and its matching IP address, https://212.227.62.41, have to be approved separately. So in an extension that manually opens a socket, you have to resolve the DNS name yourself so that the user doesn't get two Add Exception dialogs to click through. E.g. I do:

this.transportService.createTransport(null, 0, this.host, parseInt(this.port), proxyInfo);

where this.host is "google.com", for example. I have to manually figure out that the IP address is "64.233.167.99" and connect to that instead.

Reproducible: Always
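To make the scenario concrete, here is a rough sketch (an editor's illustration, not FireFTP's actual code; the resolveToIp helper and variable names are hypothetical) of why the extension ends up resolving the name itself:

  const Cc = Components.classes, Ci = Components.interfaces;
  var sts = Cc["@mozilla.org/network/socket-transport-service;1"]
              .getService(Ci.nsISocketTransportService);

  // Connecting by name and by the matching IP address are treated as two
  // different sites by the exception mechanism, so each would need its own
  // Add Exception dialog:
  var byName = sts.createTransport(null, 0, "www.kuix.de", 443, null);
  var byIp   = sts.createTransport(null, 0, "212.227.62.41", 443, null);

  // To show the user only one dialog, the extension resolves the name up
  // front (resolveToIp is a hypothetical helper, e.g. built on nsIDNSService)
  // and connects to the IP address for every connection.
  var ip = resolveToIp("www.kuix.de");
  var transport = sts.createTransport(null, 0, ip, 443, null);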
Depends on: 401575
Why are you using the ip address at all? Why can't you always use the hostname?
(In reply to comment #1)
> Why are you using the ip address at all?
> Why can't you always use the hostname?

Well, in theory, the data connection should have the same host as the control connection. I think I can rely on that fact 99% of the time. But I believe there are some edge cases (e.g. http://en.wikipedia.org/wiki/File_Transfer_Protocol#FTP_and_NAT_devices ) that could cause issues. In that case, the data connection would have a different IP, so I need to be sure to use the IP address supplied to me in the PASV response.

That being said, I don't know - the more I think about it, I think you've developed a pretty good security solution for HTTP and general web browsing. But it obviously imposes limitations on other protocols. Maybe instead of trying to rework the flow of adding an exception, it would be wise to just offer a way for the rest of the protocols (those with special considerations, like FTP) to import a certificate that would apply across different ports/server aliases. But that's just me throwing thoughts out there - I've always been more of a UI guy than a security guy - I'm not sure what the best approach would be.
(In reply to comment #2)
> That being said, I don't know - the more I think about it, I think you've
> developed a pretty good security solution for HTTP and general web browsing.
> But it obviously imposes limitations for other protocols. Maybe instead of
> trying to rework the flow of adding an exception, it would be wise to just
> offer a way for the rest of the protocols (that have special considerations
> like FTP) to be able to import a certificate that would apply across different
> ports/server aliases.

If you are willing to go through the slightly more onerous process of saving the cert to a local file (not realistic for generic web browsing, but arguably a possibility for your use case), this support already exists in PSM. From the Certificate Manager, you can choose "Import" instead of "Add Exception" to import a locally stored cert as fully trusted. That cert won't be bound by the same host-port restrictions as the ease-of-use optimized "Add Exception" code path. Does that solve your problem?
(In reply to comment #3)
> If you are willing to go through the slightly more onerous process of saving
> the cert to a local file (not realistic for generic web browsing, but arguably
> a possibility for your use case) this support already exists in PSM. From the
> Certificate Manager, you can choose "Import" instead of "Add Exception" to
> import a locally stored cert as fully trusted. That cert won't be bound by the
> same host-port restrictions as the ease-of-use optimized "Add Exception" code
> path. Does that solve your problem?

I did try this (see https://bugzilla.mozilla.org/show_bug.cgi?id=405289#c3 ). Quote:

"FWIW, I tried working around this issue by programmatically doing importServerCertificate but it interestingly produces the same problem. In the UI it lists '*' for the server which would in theory make it valid for any server/port combo."
If their cert says it is for all hostnames that match "*", then it is broken in another way, and when bug 159483 is fixed, that cert will require an exception.
(In reply to comment #5)
> If their cert says it is for all hostnames that match "*", then it is
> broken in another way, and when bug 159483 is fixed, that cert will
> require an exception.

Nelson, I suspect you may have misunderstood. The '*' does NOT refer to a CN or a subject alt name. The '*' that he mentions refers to Cert Manager's display in the Server column. The Server column in Cert Manager usually displays the host:port combination a server exception is bound to. For old-style exceptions, meaning certificates imported into the NSS database plus trust assigned, Cert Manager displays a '*', meaning this "exception" is not bound to a specific host:port. The '*' display was originally proposed by Bob Relyea. There has been discussion in some other bug to use a better display like <any host>, but that change is not yet agreed on.
Mime, in comment 4 you said you already tried Johnathan's proposal from comment 3. Maybe you think you tried, but possibly you missed a detail.

Importing the server cert is not sufficient. Even after you imported the cert, even when Cert Manager displays the cert with a '*' in the server column, that still doesn't mean the cert is "trusted" :-/

Select the line in Cert Manager and click the edit button, which is for editing the trust assigned to that cert. What does the radio button say? Does it say "do not trust" or "trust"? You would have to select "trust" and confirm.

I tried this approach with the testcase from comment 0. I exported the cert from cert viewer, imported the cert into the Servers tab, edited the cert and granted explicit trust to it. But still, that is not sufficient for visiting https://212.227.62.41/ - while the cert now passes the "trusted" test, there is still a "domain mismatch". (Old Firefox never allowed you to permanently override a domain mismatch.)
I think this bug should be WONTFIX. I don't think PSM or NSS should automatically look up a list of IP addresses for the names given in a cert and iterate the list for matching ones. Even if we did, I don't think it could work reliably - think of round-robin DNS.
(In reply to comment #7)
> I tried this approach with the testcase from comment 0.
> I exported the cert from cert viewer, imported the cert into servers tab,
> edited the cert and granted explicit trust to it.
>
> But still, that is not sufficient for visiting https://212.227.62.41/
>
> While the cert now passes the "trusted" test, there is still a "domain
> mismatch".

Right, those are the results I got as well.

> (Old Firefox never allowed you to permanently override a domain mismatch.)

Yes, but at least you were temporarily allowed to override it. The bottom line here is that it "just worked" in the old Firefox. I haven't come across an FTP server yet that is configured correctly as far as certs go - for most of them I have had to add exceptions. So, basically, with this bug and bug 405289 I'd like to be able to work with (or work around) the new system so that I can continue to have it "just work" in Firefox 3.
Mime, let's go back to my question about "always using hostnames". I still don't understand why you can't.

Is my following assumption correct?
- the second data port defined by the ftp server always lives on the same host as the first data connection

If yes, you could always use the initial hostname, even if your ftp protocol communication happens to send you an ip address?
Hmm. But I agree my previous proposal won't help, if the first connection used a hostname that uses round robin. You may have gotten a random host from a pool, and you might get a different host when you try to open the second connection...
Ok, other proposal: you are running an extension, which means you probably have access to all services, including nsICertOverrideService.

Could you use nsICertOverrideService::rememberValidityOverride to activate a temporary exception for the port and IP address you desire? If you do, please call clearValidityOverride after the connection is done.
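A minimal sketch of the call shape this proposes (an editor's illustration; the host, port and cert values are placeholders, and it assumes you already hold an nsIX509Cert for the server - obtaining that cert is exactly the issue raised in the next comment):

  const Ci = Components.interfaces;
  var overrideService = Components.classes["@mozilla.org/security/certoverride;1"]
                                  .getService(Ci.nsICertOverrideService);

  // Accept this particular cert for this particular host:port only,
  // and only temporarily (not written to permanent storage).
  overrideService.rememberValidityOverride(
      "212.227.62.41", 443, serverCert,
      Ci.nsICertOverrideService.ERROR_UNTRUSTED |
      Ci.nsICertOverrideService.ERROR_MISMATCH,
      true /* temporary */);

  // ... open the connection and transfer data ...

  // Drop the exception again once the connection is done.
  overrideService.clearValidityOverride("212.227.62.41", 443);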
(In reply to comment #12)
> Could you use nsICertOverrideService::rememberValidityOverride
> to activate a temporary exception for the port and ip address you desire?

I think this could be a valid workaround. I played around with trying to get it working today and came across a problem, however: rememberValidityOverride requires the cert, and there doesn't seem to be a way to get an override certificate from the database.

I tried:

var certdb = Components.classes["@mozilla.org/security/x509certdb;1"].getService(Components.interfaces.nsIX509CertDB2);
var certs = certdb.getCerts().getEnumerator();

to get all the certificates available and match the fingerprint with the fingerprint returned from:

certOverride.getValidityOverride(controlHostPort, hashAlg, fingerprint, overrideBits, isTemporary);

but the override fingerprint doesn't seem to be part of the getCerts() list. In theory, what I could do is store the cert myself on the control connection, but this seems less than ideal.

So, basically, I think your idea could work, but there doesn't seem to be a way for me to get the cert needed to pass to the rememberValidityOverride function.
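For illustration, a rough sketch of the fingerprint-matching attempt described above (an editor's illustration, not FireFTP's code; the host and port values are placeholders, and getValidityOverride's out-parameters are passed as holder objects):

  const Ci = Components.interfaces;
  var certdb = Components.classes["@mozilla.org/security/x509certdb;1"]
                         .getService(Ci.nsIX509CertDB2);
  var overrideService = Components.classes["@mozilla.org/security/certoverride;1"]
                                  .getService(Ci.nsICertOverrideService);

  // Ask the override service for the fingerprint it stored for the control host.
  var hashAlg = {}, fingerprint = {}, overrideBits = {}, isTemporary = {};
  var haveOverride = overrideService.getValidityOverride(
      "ftp.example.com", 21, hashAlg, fingerprint, overrideBits, isTemporary);

  // Walk the certificate database looking for a cert with that fingerprint.
  var matchedCert = null;
  if (haveOverride) {
    var certs = certdb.getCerts().getEnumerator();
    while (certs.hasMoreElements()) {
      var cert = certs.getNext().QueryInterface(Ci.nsIX509Cert);
      if (cert.sha1Fingerprint == fingerprint.value ||
          cert.md5Fingerprint == fingerprint.value) {
        matchedCert = cert;
        break;
      }
    }
  }
  // As the comment notes, matchedCert stays null in practice: exception
  // certs are not stored in the database that getCerts() enumerates.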
Maybe you could somehow query the certificate from the good connection, using nsISSLStatus, nsISSLSocketControl, nsISSLStatusProvider.idl
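A sketch of that suggestion, assuming an explicit-TLS control connection whose handshake already succeeded (editor's illustration; controlTransport, dataHost and dataPort are hypothetical names):

  const Ci = Components.interfaces;

  // The control connection's security info carries the server certificate.
  var statusProvider = controlTransport.securityInfo
                                       .QueryInterface(Ci.nsISSLStatusProvider);
  var sslStatus = statusProvider.SSLStatus.QueryInterface(Ci.nsISSLStatus);
  var serverCert = sslStatus.serverCert;

  // Re-use that certificate to register a temporary override for the data
  // connection's IP address and port before opening it.
  var overrideService = Components.classes["@mozilla.org/security/certoverride;1"]
                                  .getService(Ci.nsICertOverrideService);
  overrideService.rememberValidityOverride(
      dataHost, dataPort, serverCert,
      Ci.nsICertOverrideService.ERROR_UNTRUSTED |
      Ci.nsICertOverrideService.ERROR_MISMATCH,
      true);

  // ... open the data connection ...

  // Per comment 12, clear the override once the transfer is done.
  overrideService.clearValidityOverride(dataHost, dataPort);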
(In reply to comment #14)
> Maybe you could somehow query the certificate from the good connection, using
> nsISSLStatus, nsISSLSocketControl, nsISSLStatusProvider.idl

Ah, I knew about nsISSLSocketControl and nsITransportSecurityInfo, but I didn't know about nsISSLStatusProvider (the documentation for this stuff is always a bit hidden :).

Well, I've been able to successfully work around this now. Feel free to close this bug and bug 405289 if you wish. For those who also might need to work around this, my code is here:
http://www.mozdev.org/source/browse/fireftp/src/content/js/connection/dataSocket.js.diff?r1=1.47;r2=1.48;f=h

Thanks, Kai! You've been very helpful!
Resolving as WONTFIX. Please reopen if you disagree.
Status: UNCONFIRMED → RESOLVED
Closed: 17 years ago
Resolution: --- → WONTFIX
Reopening, as this might still be an issue. I'm getting reports from users that some servers are unable to use SSL anymore: https://www.mozdev.org/bugs/show_bug.cgi?id=19322

Example 1: 207.44.146.2:4929 uses an invalid security certificate. The certificate is only valid for ftp.avxtrack.com. (Error code: ssl_error_bad_cert_domain)

Example 2: 129.130.0.161:1034 uses an invalid security certificate. The certificate is not valid for any server names. (Error code: ssl_error_bad_cert_domain)

Kai, I will send you an email shortly with FTP usernames/passwords for one site that does work and some sites that do not. Hopefully you will be able to see what the issue is.
Status: RESOLVED → UNCONFIRMED
Resolution: WONTFIX → ---
There's no Firefox or PSM bug here. It's the job of SSL to ensure that the name it was told to connect to is found in the cert of the server to which it connects, WITHOUT doing any forward or reverse name-address translations. (Using the result of any such translation would render it vulnerable to a DNS attack.)

If you connect to a server by IP address, then that server's certificate must have the IP address in it, or else you have NO basis on which to conclude that you've received the cert of the server in question and not an attacker's cert. For an example of a server set up to do that, see https://65.125.174.101/public/

Now, AFAIK, this ftp-over-ssl is being done by some extension, right? That extension needs to do the right thing, and ask SSL to connect to the server by a name that is found in the server's cert. That means either (a) the cert must bear the IP address, or (b) the client must ask to connect to the server by NAME rather than by IP address. And, again, resist the urge to do a reverse lookup, as that will utterly defeat your MITM protections.
(In reply to comment #18)

I see. Ok, well, I just wanted to verify that it was up to the servers to update their certificates. Like I said, it's only some servers that have become broken because of this; other servers are working fine. Nelson, I'll send you the credentials as well so that you can maybe verify this theory.
I'm not at all sure that the server certs need IP addresses for ftp. In fact, I don't believe that's the best solution. I think the client needs to be smarter about the names it asks SSL to verify in the servers' certificates. I don't think that's terribly difficult. But this bug doesn't seem like the right place to discuss the behavior of an ftps extension. Where is that right place? Is it https://www.mozdev.org/bugs/show_bug.cgi?id=19322 ?
(In reply to comment #20) > Is it https://www.mozdev.org/bugs/show_bug.cgi?id=19322 ? Sure, sounds good.
Any further insight into this?
Hey guys, so was there any insight into this? I'm blocking version 1.0 here on your word...

This extension has been around since Firefox 1.0 and has garnered over 6 million downloads in that time, so there are a lot of users out there wondering what went wrong. The bottom line is that it worked in Firefox 2.0 and now it's broken in 3.0 - we don't do that to users unless there's a reasonable explanation or a way to work around it or update their stuff to be compatible. Is there something they have to do to their certificates to make them work better? Is this the fault of the servers, Firefox, FireFTP?
It would help if I could install this extension on a nightly FF build, which identifies itself as minefield. The extension installer page says "you need FireFox to use FireFTP".
So, I installed FF3 (official) and then installed FireFTP. When I attempt to run FireFTP, I get a dialog that says:

FireFTP
You do not have the appropriate permissions or directory does not exist.
Despite the appearance of several of these "appropriate permissions" dialogs, FireFTP did let me create the 3 test accounts whose info you emailed to me, and I tested with them.

Is there something that I need to do to tell FireFTP to try to use SSL? My test results showed that FireFTP never used SSL at all. It just sent everything over the wire in the clear. I was able to log in to two of the 3 systems whose account details you sent me, and was able to see directory listings, but no SSL was used. I was not able to log in to system "torg", apparently because the username/password combination was not accepted.

I can't help with this any more until I know how to get FireFTP to attempt to use SSL.
OK, I think I found how to configure fireftp connections for SSL/TLS. Will try.
Yeah, you configure it to use SSL under the Connection tab in the account's settings. Thanks for your help, Nelson. I appreciate it. The "You do not have the appropriate permissions or directory does not exist." is a separate issue I'm still trying to work on. Seems to happen on Linux for some directories.
I'm testing on Windows. :)

I tried again. Worked OK with brackeen and avxrack, failed with torg. I observe that in the cases that worked, the SSL library was given the proper host name to compare against the cert in the data connections. But in the case that failed, the SSL library was NOT given the host name for the data connection. This is probably the reason that it's failing.

There's something odd about torg's cert. PSM's cert viewer is unable to view any details in that cert. Don't know why. It can export the cert OK.
Hmm, the brackeen account seems to work ok. The avxrack one is still giving:

207.44.146.2:4141 uses an invalid security certificate. The certificate is only valid for ftp.avxtrack.com. (Error code: ssl_error_bad_cert_domain)

The torg one seems out of date (as far as the password goes) - these are test accounts sent in to me by users, so they might have disabled that one in the meantime.
(In reply to comment #29)
> There's something odd about torg's cert. PSM's cert viewer is unable to
> view any details in that cert. Don't know why. It can export the cert OK.

That's because it's using a *very recent* version of X.509 (with a relatively short RSA key length, though):

Certificate:
    Data:
        Version: 4 (0x3)
        Serial Number: 0 (0x0)
        Signature Algorithm: PKCS #1 MD5 With RSA Encryption
        Issuer: "L=Manhattan,ST=Kasnas,OU=CNS,O=Cerberus FTP Server,CN=torg.cns.ksu.edu,C=US"
        Validity:
            Not Before: Mon May 19 15:02:53 2008
            Not After : Tue May 19 15:02:53 2009
        Subject: "L=Manhattan,ST=Kasnas,OU=CNS,O=Cerberus FTP Server,CN=torg.cns.ksu.edu,C=US"
        Subject Public Key Info:
            Public Key Algorithm: PKCS #1 RSA Encryption
            RSA Public Key:
                Modulus:
                    df:28:a2:95:50:b6:8f:61:f4:34:d0:22:88:37:2a:41:
                    65:81:63:db:44:4a:32:6e:b2:ce:3e:c4:41:60:69:d4:
                    27:e8:1d:d8:8d:c8:ea:2a:6c:1e:b3:55:bf:2c:83:59:
                    9a:cc:ba:da:13:03:7e:f6:5b:19:18:a2:69:ed:74:13
                Exponent: 65537 (0x10001)

(Apparently generated by some smart software which doesn't know that X.509 version numbers start at 0. PSM could be smarter when stumbling over a cert like this, of course.)
I wonder if the admins of these servers are changing things during the testing. brackeen is no longer working for me. It is now failing like the others.

I spent a bunch of time with FF3 in the debugger and also sniffing the wire. I also watched the list of exceptions. I discovered a lot of things.

1. None of these sites has certs issued by valid CAs, so exceptions are needed for all these certs. I get a dialog to create a security exception, and must create that exception to even make the first connection. One of them doesn't even have the same host name as the DNS name used to connect to the server: brackeen.com's cert, for example, claims to be for ftp.Serv-U.com. So they have a host name mismatch AND an invalid issuer, right off the bat, on the first connection. Oh, and all those certs have a serial number of zero. torg has a 512-bit key. I don't know what software created them, but my advice is: lose that software, FAST. It's going to cause you no end of grief, with reused serial numbers.

2. The added security exception is good for one host name and one port ONLY. So, for example, I get an exception for brackeen.com:21 but not for brackeen.com:2049, nor for 12.34.56.78:2049.

3. When the SSL library is called to do the handshake for the data connection, the host name it is given is actually a dotted-decimal IP address string. And, of course, it is not trying to connect to port 21 for the data connection.

4. Some software tries to automatically create a temporary exception for the cert received on the data connection. It more-or-less succeeds in creating an exception for the IP address and port to which the data connection was being sent. But that connection still apparently fails, and a message appears in the log saying something about too few parameters. I'm guessing that's why the connection fails, even though the exception is created. I'm REALLY not very approving of silently creating security exceptions, especially when they don't work. :)

5. Since the remote server chooses a different TCP port for the incoming data connection EACH TIME, none of those temporary security exceptions can ever be used again.

6. I also discovered that the UI for deleting security exceptions doesn't work. You select one and click delete, and after a "Mother May I?" dialog, the selected exception disappears from the list. You can do that for all the exceptions in the list. But if you then close the cert manager dialog and reopen it, all the exceptions are right back. They may have been deleted from the UI the first time, but not from the real list of exceptions.

One thing to know, and a difference between FF2's method of overriding cert errors (by "trusting" certs) and FF3's method of overriding by adding exceptions: each exception is good for only one port on one host name or IP address, whereas a trusted cert is good for all ports, but only on the host names (and IP addresses) contained in the cert itself.

I tried the technique that Johnathan Nightingale suggested in comment 3 above. I went to ftp.avxtrack.com (whose cert has the right host name in it) and, instead of creating an exception, I saved the cert to a local file. Then I went into cert manager and "imported" the cert from that file. Then I went back and tried to visit it with FireFTP again, and ... got the "add exception" dialog again, just as before. Importing that cert made absolutely no difference. :(

I recommend two changes that I believe together will solve all of this.

1. Get real SSL server certificates from a known and recognized CA. You can get FREE ones ("free as in beer", as people around here like to say) from startssl.com, and possibly others. THAT will eliminate the first dialog about the invalid cert and the security exception that goes with it.

2. Change your client so that when it connects for the data connections after the initial control connection, it supplies the very same host name for those data connections that was supplied for the control connection, NOT the IP address that you get back from the ftp server on the control connection. The server will generally always supply the same cert on the data connection that it previously supplied on the control connection. If the host name was right (matched the cert) for the control connection, and you supply that same host name for the data connection, you won't need an exception for the data connection.

What you want is to get to the state of NO security exceptions. You'll be glad you did, and so will your users. Plus, you'll actually be doing them a favor by getting them to use real certs, good for all sorts of things.
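To illustrate recommendation 2 with code, here is a rough sketch (an editor's illustration, not the FireFTP patch itself; controlHost, proxyInfo, pasvResponseText and the parsePasvReply helper are hypothetical):

  // Parse the PASV reply only for the port; keep using the control host name.
  var pasv = parsePasvReply(pasvResponseText);   // hypothetical: returns { ip, port }

  // Before (needs its own exception for every IP:port the server hands out):
  //   var dataTransport = transportService.createTransport(
  //       ["ssl"], 1, pasv.ip, pasv.port, proxyInfo);

  // After: reuse the host name the control connection was opened with, so the
  // certificate presented on the data connection matches the name SSL checks.
  var dataTransport = transportService.createTransport(
      ["ssl"], 1, controlHost, pasv.port, proxyInfo);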
Actually, there is such a thing as X.509 v4. It's the very latest version of X.509. It's backwards compatible with v3. It would be truly ironic, IMO, if the software generating these certs is actually trying to generate certs with the very latest X.509 version number, but is not following the standards in so many other areas.

If the author of that software cared what I think, I'd recommend:
- Generate serial numbers that are RANDOM 16-byte (128-bit) numbers, but make sure the high-order bit of the first byte is zero, so it's a positive number. A new random number EVERY time.
- Raise your minimum key size to at least 1024 bits.
- Drop all the old Netscape proprietary cert extensions and use the real standard extensions:
  - drop Netscape's cert type extension and use the Extended Key Usage and Basic Constraints extensions instead. See RFC 5280.
  - drop Netscape's SSL server host name extension, and put the DNS names (all of them) into the Subject Alternative Names.

Kai, is Cert Manager really unable to display an X.509 v4 cert?
(In reply to comment #33)
> Actually, there is such a thing as X.509 v4. It's the very latest version
> of X.509.

Sorry Nelson, that's not correct. What you're referring to is the fourth *edition* of X.509 (there's even been a fifth in the meantime), and the version number always remained at v3(2). See page 11f. or page 100 of ITU-T Rec. X.509 (08/2005), inter alia. Given the fact that X.509v3 extensions are basically a bag of bags for lots of arbitrary stuff, it seems quite unlikely that the X.509 version number ever needs to be increased in the future.

> is Cert Manager really unable to display an X.509 v4 cert?

It is. When it's preparing the tree for the details tab, it chokes on version numbers other than 0, 1 or 2:
http://lxr.mozilla.org/mozilla/source/security/manager/ssl/src/nsNSSCertHelper.cpp#146
http://lxr.mozilla.org/mozilla/source/security/manager/ssl/src/nsNSSCertHelper.cpp#1938
(In reply to comment #32)
> 4. Some software tries to automatically create a temporary exception for the
> cert received on the data connection. It more-or-less succeeds in creating
> an exception for the IP address and port to which the data connection was
> being sent. But that connection still apparently fails, and a message
> appears in the log saying something about too few parameters. I'm guessing
> that's why the connection fails, even though the exception is created.
>
> I'm REALLY not very approving of silently creating security exceptions,
> especially when they don't work. :)

See comment 12. That was the workaround decided upon by this original bug. As for the error with clearValidityOverride - you can ignore it. That's already fixed in nightly builds; the difference is that it now gets rid of the exception right then and there instead of when the browser restarts.

> 1. Get real ssl server certificates from a known and recognized CA.
> You can get FREE ones ("free as in beer", as people around here like to say)
> from startssl.com, and possibly others. THAT will eliminate the first dialog
> about invalid cert and the security exception that goes with it.

Ok, I will pass that information on to the users.

> 2. Change your client so that when it connects for the data connections after
> the initial control connection, it supplies the very same host name for those
> data connections that is supplied for the control connection, NOT the IP
> address that you get back from the ftp server on the control connection.
> The server will generally always supply the same cert on the data connection
> that it previously supplied on the control connection. If the host name was
> right (matched the cert) for the control connection, and you supply that same
> host name for the data connection, you won't need an exception for the data
> connection.
>
> What you want is to get to the state of NO security exceptions. You'll be
> glad you did, and so will your users. Plus, you'll actually be doing them
> a favor by getting them to use real certs, good for all sorts of things.

I'm wary of changing this behavior, but I'll do as you suggest, at least for SSL connections. Thanks for all your help and patience - much appreciated!
Alright, I've released a new version that uses the host name instead of the IP address only when SSL is enabled. Hopefully this should resolve the issue without creating a side effect of some sort. Thanks again.
Status: UNCONFIRMED → RESOLVED
Closed: 17 years ago → 16 years ago
Resolution: --- → FIXED
No change to Mozilla was made; changing resolution to WONTFIX.
Resolution: FIXED → WONTFIX
Product: Core → Core Graveyard