An error occurred during a connection to licensing.microsoft.com:443 because it uses an invalid security certificate. The certificate is not trusted or its issuer certificate is invalid. (sec_error_unknown_issuer)
I think this is not a duplicate of 398915. Bug 398915 is about a "small mismatch" in hostname, where the site otherwise uses a trusted cert. This bug 399324 is about a site which uses an untrusted cert. This could be a misconfiguration on the server, possibly the admin did not install the intermediate certificate required to construct a chain to a trusted builtin root.
Resolving as INVALID; the server admin should fix their site to either use a valid cert or install the missing intermediates.
Reopening for more triage. mconnor asks: Could this site be related to the descriptions on http://blogs.msdn.com/larryosterman/archive/2004/06/04/148612.aspx ? Should we be able to trust that site, because it uses extensions to find the missing intermediates?
This is a fairly serious web-compat issue. I realize we've always warned in the past, but now we're actively blocking connections with this problem, and IE7 "just works". Either IE7's behaviour is insecure/bad in some way we need to evangelize, or we need to stay compatible with the web, IMO.
See bug 245609 for why we don't support this.
(In reply to comment #2)
> I think this is not a duplicate of 398915.
>
> Bug 398915 is about a "small mismatch" in hostname, where the site otherwise
> uses a trusted cert.

I don't really see why it's important to differentiate "INVALID because of a domain name mismatch" and "INVALID because the server doesn't send the intermediate cert" when both bugs were filed as "can no longer connect because SSL errors are now fatal". The part that's invalid is that breaking these sites was a conscious decision, and that applies to both of these bugs, so I duped this one to that one. If this bug is now going to be "don't treat lack of a complete cert chain as a fatal error", it needs a new summary.
Note that we already have several bugs on this issue with various sites. Some are probably resolved invalid, some I at least got moved to evang. I do think that this not working is a serious compat issue that we should try to resolve. The thing is that the resolution would need to be in NSS... Is there an NSS bug covering this?
(In reply to comment #8)
> The thing is that the resolution would need to be in NSS... Is there an NSS
> bug covering this?

Bug 245609?
I have good results with reporting misconfigured Microsoft servers to Microsoft sysadmins. This bug appears to me to be just another "Evangelism" bug. The fact that it's MS doesn't make it any different. Also, the problem isn't "Fatal", in the sense that the user has no workaround. There is a UI by which such sites can be added as "exceptions". It's in the certificate manager. Try it! You'll like it!
In reply to comment 8: Boris, nothing in the way that NSS processes certificates has changed. Nothing at all. All the changes in UI are outside of NSS, in "core" mozilla code.
(In reply to comment #11)
> Boris, nothing in the way that NSS processes certificates has changed.
> Nothing at all. All the changes in UI are outside of NSS, in "core"
> mozilla code.

I think the bug summary is immensely unproductive. Either we're web compatible or we're not. We do not have customers telling us we need to take a stand on "incomplete cert chains" that actually are complete if you use IE.
Nelson, the point is that in this case we don't want to either block access to the site OR put up a dialog saying the cert is invalid. Ideally, we would validate the cert, if possible. And THAT needs changes to NSS.
Robert and Mike, what's your opinion of the idea that Firefox should render all web pages the same as IE does? Do you agree that this is necessary for web compatibility?

Such discussions have, in the past, brought out lots of words about standards compliance. FF claims to conform to the relevant W3C standards. Now the issue is again raised, in the context of a different standard, one from the IETF. And in this context too, we claim conformance to the relevant IETF RFCs. But in this context, several people (including at least one W3C standards advocate) appear to be saying that we must be IE compatible.

It's not surprising that MS web sites work differently with IE than with FF in many respects, including rendering of HTML AND handling of SSL cert chains. MS sysadmins test their servers only with IE, thereby failing to detect deviation from standards in multiple areas, both in HTML and in SSL. If you call for Firefox to be completely 100% compatible with IE in SSL, will you accede to others' call for 100% IE compatibility in HTML?
(In reply to comment #14)
> Now the issue is again raised, in the context of a different standard,
> one from the IETF. And in this context too, we claim conformance to the
> relevant IETF RFCs. But in this context, several people (including at
> least one W3C standards advocate) appear to be saying that we must be IE
> compatible.

My two cents: we can pop up a warning box that people automatically click through and not be IE-compatible here. If we want to get confrontational about certs (and yes, I understand there are very real security reasons for doing that), that makes the incompatibility much more costly for customers. So, we'll have to give in one way or another, and I am honestly not sure which we should choose.
In reply to comment 13: Boris, if this is an RFE to make FF be "bug compatible" with IE7 in SSL cert handling, there are several ways that can be done. One requires changes in NSS, one does not. Years ago, Netscape Communicator did behave similarly to the way that IE behaves now, storing in the cert DB a copy of every CA cert seen in every valid cert chain. We deliberately stopped doing that (don't recall exactly why at the moment). To this day, PSM keeps in memory for the process lifetime a copy of every CA cert seen in every valid cert chain, and there is a bug filed to get it to stop doing that, too.
Nelson, you seem to have missed a number of years' of real-world standards work in Gecko. See, e.g., http://weblogs.mozillazine.org/roadmap/archives/2004/06/the_nonworld_no_1.html and please read http://wiki.whatwg.org/wiki/FAQ

The 2004 w3c workshop about which I blogged was the proximate cause of Apple, Mozilla, and Opera principals including myself founding the WHAT-WG. The fact is that IE established a great many de-facto standards, only some of which the de-jure standards codified (mostly before Netscape collapsed).

Quoting from http://weblogs.mozillazine.org/hyatt/archives/2004_07.html#005928 "... where possible we attempt to be fully compatible with the W3C standards, but we also want to support the real-world standards, i.e., extensions that for better or worse have become de facto standards. If you really do believe we should not have implemented contenteditable, then you are simply out of touch with reality."

For "contenteditable", substitute a great many other things, possibly including the bone of contention here: emulating IE's handling of incomplete cert chains. Now each concession to reality has a cost, and you can't generalize and say "ok, let's clone IE" -- that is a never-ending bug-for-bug compatibility chase. But it is not required for interoperation, fortunately.

In the case of this bug, I don't know what's best, but we have yet another variation on the Prisoner's Dilemma, where cooperating with purity police while IE defects means we lose. I hope Frank Hecker will weigh in here.

/be
Nelson, the general position of the Mozilla Organization on layout standards has been that we follow the standard if there is a conflict between what IE does and the standard. If we need to add an extension to our implementation to achieve a significant improvement in web compat, and the standard does not prohibit such an extension, then we add it. Examples abound.

Now unless I seriously misunderstand the situation with the certificate chain here, the issue is that while what the site is sending does not conform to the letter of the RFC (in that the site is indeed broken), there is nothing in the RFC that forbids us to do what IE does. If I'm wrong in my understanding, please correct me.

Given that situation, the question becomes one of weighing the benefits of doing what IE does against the drawbacks. The obvious drawbacks are implementation cost and the slippery slope argument (arguing that we're weakening the standard by adding such an extension). The latter is somewhat negated in my mind by the usual IETF RFC policy of "be strict in what you send and liberal in what you accept" (a policy to which the W3C does not necessarily always subscribe, for what it's worth; see XML parsing). But you'd know more about the state of the standard than I, and about whether adding such an extension would cause significant harm to the web as a whole. You would also know more than I about the costs of implementation.

Perhaps we should have a discussion in m.d.platform where you fill the rest of us in on the drawbacks, since you're most familiar with them, and someone who has some hard data on how common this problem is in the wild posts that data. That way we can make an informed decision.

> If you call for FireFox to be completely 100% compatible with IE in SSL,
> will you acceed to others' call for 100% IE compatbility in html?

Sorry, that's a straw man. No one is demanding 100% compatibility in SSL.
A request is being made to accommodate what appears to be a common type of server-side error, if it is possible to do so without violating the relevant SSL standards. That's not the same thing at all, and you and I both know it.
Nelson, I think that Robert has more or less outlined the three options we have here:

1) Take the web compat hit
2) Restore the dialog for this case
3) Make changes to NSS to handle this case like IE does

Again, it's not obvious which is the right decision, because we seem to be missing vital data about costs vs benefits, but I think we would all like to avoid #2 if possible. There are very good reasons to remove the dialog completely, as we have done. #3 is looking like a better alternative from what I know, but if there is something missing from my understanding of the situation, I would love to be enlightened.
First, guys, THANKS for your thoughtful and edifying responses! The bug to which I referred in comment 16 is bug 399045. Do you think we should resolve it "WONTFIX"?

Boris, what is "this case" you identify in comment 19? Do you mean all incomplete cert chains? We're taking away the one-click bypass because we know it has been exploited by people who are taking advantage of users' habits of clicking through everything. (We're only now finding out just HOW MUCH it has been exploited in that regard, as people bemoan it -- until they learn about the new exception dialog.) If we restore an easy bypass for all sites with incomplete cert chains (cert chains that lead to no known issuer), then it seems we immediately give those intentional exploiters an easy way to resume their exploitation, and the net result is no increased security for users.

Finally, there is an option 4: Make changes to PSM (that is, not to NSS) to handle CA cert chains like IE does. PSM would use existing features of NSS, the same features that it used long ago in Netscape days before that was deliberately removed. I hope Bob Relyea or Bob Lord can remember why we decided to stop doing that. I just don't recall tonight.
> Do you think we should resolve it "WONTFIX"?

No, we should fix it. That bug leads to unpredictable behavior and a possible memory leak. I don't think we need caching of certificates in memory to fix the incomplete chain issue; we just need to fetch them as needed.

> Do you mean all incomplete cert chains?

Absolutely not. As you point out, that has no security benefit over what we used to do with the dialog. The case under discussion is the case when the cert chain is incomplete but the last cert in the chain has this magic field that IE utilizes to get a URI at which to look for the next cert in the chain.

Given that we apparently don't really care where a cert came from when it comes time to determine cert chain validity (e.g. we're fine if the server sent it, and we're also fine if it happened to already be cached), it seems that fetching the cert at that URI and seeing whether it happens to be a valid next cert in the chain does not introduce any security issues. Either the chain, extended through use of this magic field as needed, ends up at a trusted root or it doesn't. In the former case, the cert is good. In the latter, it's not. That's all assuming that doing such a lookup is not explicitly an RFC violation. Is it?

I don't know enough about the functionality separation between PSM and NSS to say whether the above can be done entirely in PSM. If it can, that's great.
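The chain-extension idea described above can be illustrated with a toy sketch. Everything here is hypothetical: certificates are plain dicts rather than parsed X.509, the trust anchors, hostnames, and URIs are invented, and the signature verification a real client must perform at every link is elided.

```python
# Hedged sketch: if the last cert we hold names an issuer we don't have,
# follow its AIA "CA Issuers" URI to fetch the next cert, and repeat until
# we reach a trusted root or run out of links.

TRUSTED_ROOTS = {"Example Root CA"}  # hypothetical built-in trust anchors

def complete_chain(chain, fetch):
    """Extend `chain` via AIA until it ends at a trusted root, or fail.

    `fetch` maps an AIA URI to the issuer certificate (or None); in a real
    client this would be an HTTP GET returning a DER-encoded cert.
    """
    chain = list(chain)
    while True:
        last = chain[-1]
        if last["issuer"] in TRUSTED_ROOTS:
            return chain  # chain now terminates at a known root
        uri = last.get("aia_ca_issuers")
        if uri is None:
            return None  # no AIA link: cannot complete the chain
        issuer_cert = fetch(uri)
        # The fetched cert only helps if it really is the named issuer;
        # signature verification is elided in this sketch.
        if issuer_cert is None or issuer_cert["subject"] != last["issuer"]:
            return None
        chain.append(issuer_cert)

# Example: a server that sends only its end-entity cert.
INTERMEDIATE = {"subject": "Example Intermediate CA",
                "issuer": "Example Root CA"}
SERVER_CERT = {"subject": "licensing.example.com",
               "issuer": "Example Intermediate CA",
               "aia_ca_issuers": "http://ca.example.com/intermediate.cer"}

def fake_fetch(uri):
    # Stand-in for the network fetch of the AIA URI.
    return INTERMEDIATE if uri == "http://ca.example.com/intermediate.cer" else None

chain = complete_chain([SERVER_CERT], fake_fetch)
```

Note the key property argued in the comment: the fetched cert is treated no differently from one the server sent, so the chain either reaches a trusted root or validation fails exactly as before.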
The "magic field" is presently only utilized by IE7 (& only in Vista, IINM). It's not utilized by IE6, nor by Opera (AFAIK) nor by any other browser yet AFAIK. The fact that these sites appear to work in IE6, or on XP, is because IE5 and IE6 store copies of all validated intermediate CA certs, not because of the "magic field".

The SSL 3.0 Internet draft (which never became an RFC) and the TLS (SSL 3.1) RFC require SSL/TLS servers to send out a complete cert chain, up to a trusted root CA cert. (Sending the root cert itself is optional, because the client is presumed to have a copy.) Even the very latest drafts of the next version of TLS continue to require that servers send out the full chain. This is because there is no requirement that conformant clients implement the "magic field" (formally the "Authority Information Access" or "AIA" extension).

[Much about the use of AIA is not fully specified, much less standardized. For example, AFAIK, the MIME content type of the response is not specified in an RFC, and neither is the content. (One cert? All certs bearing the same subject name?) Despite this, the NSS team is already working on an implementation of this feature.]

I imagine that, someday, implementations of automated cert fetching using the AIA extension will be common enough that a new version of the TLS standard will emerge that drops the complete cert chain requirement. But that day hasn't come yet, and servers that send incomplete chains are simply not conformant to today's relevant SSL/TLS standards.

AFAIK, the MS admins aren't trying to force the world's SSL clients to use AIA. They just don't test their servers with anything but IE, which seems to work for reasons explained above, which may or may not include AIA. I'll write to someone about the server cited above.
> The fact that these sites appear to work in IE6, or on XP Wait. How are they getting this validated intermediate cert to store a copy of to start with? Do they have the same issue where one has to go to some unrelated site first to make the site in question work?
Blocking for a resolution, though I'm not ready to make a direct call on what that resolution might be at this point. Getting it on the radar, though.
Providing some test results:

(In reply to comment #22)
> The "magic field" is presently only utilized by IE7 (& only in Vista, IINM).

IE 7 on XP seems to do it, too. I deleted all intermediate certs from the cert store. Then I visited https://licensing.microsoft.com. The page loaded ok, and IE had imported both intermediate certs into the "intermediate certs" store.

> It's not utilized by IE6, nor by Opera (AFAIK)

Correct. Opera complains about an incomplete cert chain.

> nor by any other browser yet

Konqueror fails, too. The latest Safari on Mac OS X 10.4.10 fails, too.
(In reply to comment #25)
> > The "magic field" is presently only utilized by IE7 (& only in Vista, IINM).
>
> IE 7 on XP seems to do it, too.

This is handled by CryptoAPI (crypt32.dll/cryptnet.dll); it's not an IE6 vs. IE7 issue. http://technet.microsoft.com/en-us/library/bb457027.aspx has all the gory details on path validation and revocation checking for XP. AIA cAIssuers fetching has been available in CryptoAPI since Windows 2000 (cf. http://www.microsoft.com/technet/security/guidance/cryptographyetc/tshtcrl.mspx). With XP Service Pack 2, some precautions against denial of service attacks were added (http://support.microsoft.com/kb/887196/en-us).
So my question remains unanswered. What is the behavior of an up-to-date IE6 on the site in question?
That site loads fine in IE6 without having loaded any other sites previously.
Great. Now _why_ is that happening? Is IE6 also following the AIA field per comment 26? (Addressed to whoever might know, not Ted specifically.)
(In reply to comment #29)
> Great. Now _why_ is that happening? Is IE6 also following the AIA field per
> comment 26?

Yes, IE6 is following the AIA extension, because it uses the CryptoAPI. Note that it's possible to turn off AIA cert fetching with this (undocumented) registry setting:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Cryptography\OID\EncodingType 0\CertDllCreateCertificateChainEngine\Config]
"DisableAIAUrlRetrieval"=dword:00000001

(The key path is a single line, wrapped here for display: "...\EncodingType 0\...".) Deleting both intermediates from the Windows cert store, enabling the above setting and trying to open licensing.microsoft.com with IE under XP will then also fail. (I've tried with IE7, maybe Ted can confirm this for IE6 - but again, it's not an IE6 vs. IE7 issue, really.)
Has anyone actually pinged Microsoft about this? They've normally been fine about fixing these in the past. (There are previous bugs in Bugzilla about it - e.g. bug 398210) It doesn't seem to come up all that often and, when it does, it's usually MS's own servers which are misconfigured. Gerv
Boris, in reply to comment 27: One cannot completely answer the question "what does IE6 do" without knowing what OS it is running on. MS now considers the SSL library to be part of the OS, not the browser. The SSL library has different capabilities in different versions of Windows, and different versions of the browser attempt to use different subsets of the OS's underlying capabilities. Kai and Kaspar now say that IE6 on XP does do AIA-based fetching. Maybe it does, now. I did testing of this a couple years ago, and it did not do so in my tests. That could have changed since then.

Gerv, in reply to comment 31: I wrote to one of my contacts at MS about the licensing site. He's trying to find the admin for that site.
One of my MS contacts from the CAB Forum has stated that they consider the ability to locally store (to "cache") validated intermediate CA certs to be a prerequisite to AIA fetching, because without such caching, the volume of AIA cert requests would "knock over" the cert issuers' AIA request servers, resulting in the appearance that the "Internet has stopped working". Kaspar, are you aware of any RFCs (or even internet drafts) that define the content and MIME content type of the response to AIA cert URIs?
If we do such a local cert cache, it needs to either expire things expeditiously or not be in memory. Nelson, it sounds like the AIA field is de jure underdefined and there is a de facto use that increases our web compat. Barring strong arguments to the contrary, it sounds like we should seriously think about doing it.
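The constraint raised here, that a local cert cache must expire entries rather than hold them for the life of the process, can be sketched as follows. The class name, TTL value, and injectable clock are illustrative choices, not an actual PSM/NSS design.

```python
# Hedged sketch of a bounded intermediate-CA-cert cache: every entry
# carries an expiry time and is evicted lazily on lookup.
import time

class IntermediateCache:
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._entries = {}  # subject name -> (cert, expiry time)

    def put(self, subject, cert):
        self._entries[subject] = (cert, self.clock() + self.ttl)

    def get(self, subject):
        entry = self._entries.get(subject)
        if entry is None:
            return None
        cert, expiry = entry
        if self.clock() >= expiry:
            del self._entries[subject]  # expire lazily on lookup
            return None
        return cert

# With a controllable clock we can demonstrate expiry without sleeping:
now = [0.0]
cache = IntermediateCache(ttl_seconds=3600, clock=lambda: now[0])
cache.put("Example Intermediate CA", "<der bytes>")
hit = cache.get("Example Intermediate CA")   # still fresh
now[0] += 7200
miss = cache.get("Example Intermediate CA")  # past the TTL
```

A disk-backed store with the same interface would satisfy the "not in memory" alternative just as well; the point is only that entries must not accumulate unboundedly.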
In comment 17, Brendan talked about uniting with Apple and Opera principals to set real web standards. Maybe we want to do something like that again. The IETF TLS working group has deliberately chosen to define TLS so as NOT to require clients to implement AIA-based cert fetching. I think there are several reasons for this, including (a) lack of definition, and (b) the desire to not unnecessarily increase the complexity and size of all conformant clients. The proposed change says that the RFC doesn't matter, and the real standard is not the one defined in the RFC. Do we really want to support that? MS has lots of ideas for the direction in which they'd like to steer TLS. There's something called OCSP Stapling that mozilla has not discussed yet. Do we want to just follow their direction in all things TLS?
> The proposed change says that the RFC doesn't matter

That would be the case if the change contradicted the RFC. As far as I can tell it does not. If it does, can you say why? And if you think the change is harmful for the web at large, please just come out and say so, preferably with justification.

> Do we want to just follow their direction in all things TLS?

That's a straw man again. Again, extensions to standards for web compat need to be weighed carefully. We should not knee-jerk into either taking them or not taking them.
Remember, good standards allow extensions, for distributed extensibility. Just cuz MS takes advantage does not make a particular extension evil or undesirable. /be
> There's something called OCSP Stapling that mozilla has not discussed yet.

Not discussed? I thought we were implementing it! Isn't that a de facto prerequisite to requiring OCSP validation for EV certs?
(In reply to comment #38)
> > There's something called OCSP Stapling that mozilla has not discussed yet.
>
> Not discussed? I thought we were implementing it!

Yeah, I did a double-take on that one too. To quote from my Foundation status report of 8/31: "The Mozilla Foundation is funding a project to implement server-side features in Apache and OpenSSL to complement future Firefox enhancements to check the validity of SSL certificates using OCSP. (Technically, what's being done is implementing support for the Certificate Status Request extension to TLS as defined in section 3.6 of RFC 3546, so that OCSP information can be returned to Firefox as part of the TLS handshake and Firefox doesn't have to generate a separate OCSP request itself. This technique is sometimes referred to as 'OCSP stapling'.)"
Frank, you're funding server-side development of OCSP stapling, and funding development of code not used in Firefox, but not funding the development of the code used in Firefox? (You do realize, don't you, that MoFo doesn't fund NSS development, and that NSS development staff has lost 50% in the last two years?)

Daniel, which "we" do you think is implementing OCSP stapling? No, OCSP stapling is NOT a prerequisite to deploying OCSP or EV. OCSP caching is. Stapling != caching.

Boris, how can you say that it doesn't contradict the RFC? RFC 2246 says a server sends out a complete chain (minus the root). Windows says "no, it doesn't". I think y'all may be unwittingly taking sides in the battle over who sets the internet standards. Please consider that carefully. I have a lot more to say about this, stuff that doesn't belong in this bug. Is there a newsgroup in which you all will participate?
> Boris, how can you say that it doesn't contradict the RFC?

Nelson, maybe I'm missing something. Looking at RFC 2246, it has the unfortunate feature that it doesn't clearly specify what's a requirement on the server, what's a requirement on the client, and what's a requirement on both. In particular, I presume you're referring to the language in section 7.4.2, right? If I read this literally, there are three possible interpretations:

1) What this server is sending is not a certificate_list. Client response is determined by what is supposed to happen if the server doesn't respond with a certificate_list.
2) This server is sending a certificate_list, but one not allowed by this RFC. Client response is determined by what should be done with invalid certificate_lists.
3) This server is sending a valid certificate_list certified by an unknown root.

In cases 1 and 2, the RFC doesn't seem to define the permissible client response to these violations of the RFC, that I can find. If I'm missing it, please point me to the relevant text. Typically in such a situation one enters error-handling territory, and the RFC no longer applies, since we've stepped outside the range of situations it considers as possible. This is not to say that the spirit of the RFC shouldn't inform the decisions we make, but as far as I can tell any client action at this point would be compliant with this RFC, strictly speaking. I take it NSS is taking interpretation 3, right?

> I think y'all may be unwittingly taking sides

Please spare us the "unwittingly"? ;)

> Is there a newsgroup

In comment 18 I suggested having a detailed discussion of the pros and cons in m.d.platform.
(In reply to comment #40)
> Frank, you're funding server-side development of OCSP stapling, and funding
> development of code not used in Firefox, but not funding the development of
> the code used in Firefox?

Gerv was the person driving this, so I'll leave it to him to comment, but my understanding is that the client side of OCSP stapling was being taken care of. If not then I'm perfectly happy to propose to the Mozilla Foundation board that we consider kicking in some funding to support the relevant NSS work.
(In reply to comment #17)
> I hope Frank Hecker will weigh in here.

Gee thanks, Brendan :-) Anyway, here are my thoughts, based on a quick scan of the comments here and some correspondence I had with Window on this general topic, and subject to my usual disclosure that I'm not a PKI expert and thus may have absolutely no idea what I'm talking about:

As I understand it, it is indeed the case that the Microsoft servers that don't send complete cert chains are not compliant with the TLS standard (RFC 2246 IIRC), as is any other server doing the same. As I understand it, it's also the case that in the absence of additional information it's impossible to tell the difference between a misconfigured server whose cert chains up to a valid root and a server whose cert chains up to an invalid root. My understanding is that the only ways at present to distinguish between these two cases are by caching intermediate CA certs previously encountered, or by relying on the AIA extension if present.

From my perspective caching intermediate CA certs is analogous to the client automatically supplying a closing tag when one is not present in the HTML sent by the server: the client isn't violating any standard itself, it's just compensating for a standards violation on the part of the server. Doing so of course encourages sloppy practices on the part of server operators, but as Boris Zbarsky notes it's also squarely in the classic tradition of "be liberal in what you accept".

As for recognizing the AIA extension, I also don't see doing this as doing violence to the base TLS standard: the RFC 2246 requirement relating to complete cert chains is a requirement on servers, not on clients. AFAIK nothing in the standard prohibits a client from taking whatever information it can find and reconstructing the cert chain based on the incomplete information the server provides.
And it's not like recognizing the AIA extension is like recognizing the <MARQUEE> tag or other non-standard HTML tags; as I understand it the Authority Information Access extension is a perfectly standard extension, being defined in RFC 2459. Its real world use may be underspecified by the relevant standards and have to be supplemented by de facto standards based on vendor implementations, but goodness knows that's true of a lot of stuff in the PKI world. So in my opinion it would be a perfectly reasonable thing for us to implement intermediate CA cert caching (whether in PSM or NSS), and also perfectly reasonable for us to implement recognition of the AIA extension. But I do agree with the Microsoft person (quoted by Nelson Bolyard) that intermediate CA caching should be implemented before recognition of the AIA extension. As I understand it, it's very similar to the case for implementing client and server support for the TLS Certificate Status Request extension prior to turning on full-time OCSP checking in the client -- the PKI world has such a fragile infrastructure for handling real-time requests for information that a responsible client should avoid burdening that infrastructure with requests if it possibly can.
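The cache-based reconstruction described above differs from AIA fetching in that it needs no network access at validation time. A toy sketch, under the same caveats as before (plain dicts instead of parsed X.509, invented names, signature checks elided):

```python
# Hedged sketch: when the server omits an intermediate, look the missing
# issuer up among intermediates previously seen in other valid chains.

def reconstruct_with_cache(chain, cache, trusted_roots):
    """Extend `chain` from `cache` (subject -> cert) until it reaches a
    trusted root, or return None if an issuer was never seen before."""
    chain = list(chain)
    while chain[-1]["issuer"] not in trusted_roots:
        cached = cache.get(chain[-1]["issuer"])
        if cached is None:
            return None  # neither sent by the server nor previously cached
        chain.append(cached)
    return chain

# Intermediate remembered from an earlier, correctly configured site:
cache = {"Example Intermediate CA": {"subject": "Example Intermediate CA",
                                     "issuer": "Example Root CA"}}
chain = reconstruct_with_cache(
    [{"subject": "www.example.com", "issuer": "Example Intermediate CA"}],
    cache, {"Example Root CA"})
```

This also makes the limitation discussed later in the thread concrete: if the user has never visited a site that sent this intermediate, the cache lookup fails and only an AIA fetch could still complete the chain.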
Let's take this to m.d.platform (is that really the right group? Ok, I don't care that much, but I have to ask). Nelson, there's nothing "unwitting" as bz said, but it seems to me you are once again missing the point that the RFC does *not* constrain clients from performing "error recovery", and Postel's Law encourages it. Do you disagree, or are bz and I wrong about how the RFC constrains clients? Of course, there's a well-known corollary (does it have a name?) about how version 2 of a Postel-friendly protocol can't extend v1 as it would like, because liberality in receivers has preempted some tokens and states. But in this particular case we already have a de-facto standard. /be
(In reply to comment #33)
> Kaspar, are you aware of any RFCs (or even internet drafts) that define the
> content and MIME content type of the response to AIA cert URIs?

That would be 3280bis (section 4.2.2.1):

   Where the information is available via HTTP or FTP, accessLocation MUST be a uniformResourceIdentifier and the URI MUST point to either a single DER encoded certificate as specified in [RFC 2585] or a collection of certificates in a BER or DER encoded "certs-only" CMS message as specified in [RFC 2797]. [...] HTTP server implementations accessed via the URI SHOULD specify the media type application/pkix-cert [RFC 2585] in the content-type header field of the response for a single DER encoded certificate and SHOULD specify the media type application/pkcs7-mime [RFC 2797] in the content-type header field of the response for "certs-only" CMS messages. For FTP, the name of a file that contains a single DER encoded certificate SHOULD have a suffix of ".cer" [RFC 2585] and the name of a file that contains a "certs-only" CMS message SHOULD have a suffix of ".p7c" [RFC 2797].

(http://tools.ietf.org/id/draft-ietf-pkix-rfc3280bis-08.txt - note that I left out the paragraphs about LDAP URIs)

> they consider the ability to locally store (to "cache") validated
> intermediate CA certs to be a prerequisite to AIA fetching

Agreed.
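The 3280bis text quoted above allows two response forms for an HTTP AIA fetch. A small sketch of the dispatch a client might do on the Content-Type header; the function name and return labels are illustrative, and actual DER/CMS parsing is elided:

```python
# Hedged sketch: map the Content-Type of an AIA "CA Issuers" HTTP response
# to one of the two forms 3280bis describes: a single DER certificate
# (application/pkix-cert) or a "certs-only" CMS message
# (application/pkcs7-mime).

def classify_aia_response(content_type):
    """Return a parse strategy for the given Content-Type header value."""
    # Strip any parameters (e.g. "; smime-type=certs-only") before comparing.
    media_type = content_type.split(";")[0].strip().lower()
    if media_type == "application/pkix-cert":
        return "single-der-cert"
    if media_type == "application/pkcs7-mime":
        return "certs-only-cms"
    # The draft only SHOULDs these media types, so a lenient client might
    # still try to sniff the payload; this sketch just reports the mismatch.
    return "unknown"
```

Because the media types are SHOULDs rather than MUSTs, a deployed client would likely also fall back to sniffing the DER payload when the header is wrong, which is exactly the kind of underspecification the thread complains about.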
(In reply to comment #41)
> Looking at RFC 2246, it has the unfortunate feature that it doesn't clearly
> specify what's a requirement on the server, what's a requirement on the
> client, and what's a requirement on both.

I think it would be better to refer to RFC 4346 (TLS 1.1) in general, though for this particular question, there's probably not much difference between the two.
My MS contact said that licensing.microsoft.com will be fixed soon, and tested with Firefox.
https://licensing.microsoft.com/ now works for me.
Popping in late, but I would nevertheless like to add my two cents here. We've discussed this issue on the dev-tech-crypto mailing list in the past. Even though fetching of missing intermediate CA certificates via the AIA CA Issuers extension isn't required of client software, the standards certainly don't forbid it either, and it is one of the options Mozilla can pursue.

Speaking as an operator of a CA, about 25% of all servers aren't configured correctly, i.e. the chain the server presents isn't complete. I have very clear data concerning that. My opinion is that Mozilla should solve this very problem by fetching the missing intermediate CA certificates with the AIA extension, since only caching intermediate CA certificates already seen (as suggested by Frank) doesn't solve the problem: the user might have visited a site signed by such an intermediate CA certificate, or not. The correct solution in my opinion is to fetch the missing certificate _if_ the AIA CA Issuers extension is present, following certificates up to the trusted root CA, _and_ cache the certificates.

Not sure if the discussion has moved somewhere else, but I hope this comment is useful nevertheless.
Apparently bug 399045 solves the caching issue. Is the use of the AIA extension planned?
(In reply to comment #51) > by comment 10, i openned a new bug #400608 to help users regain access to the > blocked sites > The bug you opened would revert the current efforts undertaken by Mozilla, and I expect it will be resolved WONTFIX. I asked, however, about implementing the AIA CA Issuers extension and fetching intermediate CA certificates, in addition to caching previously seen intermediate CA certificates.
Now that bug 401575 and bug 399045 are fixed, this should be much less of a problem. Does it still need to block Firefox 3?
I think this should not block.
Yeah, it's not a severe issue with the new errors. Not a blocker, and we probably shouldn't work on it for 1.9, since it is probably risky, too.
I think the PSM part of this work may already be all done. The NSS part of this work is on the list of things to be delivered for NSS 3.12.
(In reply to comment #56) > I think the PSM part of this work may already be all done. The NSS part of > this work is on the list of things to be delivered for NSS 3.12. Is there an NSS bug that we can make this one depend on, in order to track when it's ready?
Kai, I *think* the NSS part is already delivered.
wtc asked about this bug yesterday. The latest I remember from our discussions a while ago is that this bug might be WONTFIX, because we are worried about privacy implications. The idea was: if we don't trust a cert, then we should not visit URLs listed in it.
Kai, given that a user has already accessed the site, the user may have revealed most of the information already, even when using a self-signed or otherwise invalid certificate. Self-signed certificates usually don't feature the AIA extension, but CA-issued ones do. The browser doesn't send any private information beyond the regular headers, but in order for this to work, AND if there is a privacy concern, simply don't send the Referer and Host headers. I believe AIA support is important, because CAs not issuing directly from the CA root are punished with the extra burden of complaints from subscribers and site visitors alike, while issuing directly from the root is discouraged by the Mozilla CA policy and listed as a problematic practice: https://wiki.mozilla.org/CA:Problematic_Practices#Issuing_end_entity_certificates_directly_from_roots Support for AIA should really be implemented because of Mozilla's own requirements for CAs.
Eddy, this is not limited to browser scenarios; this is core code, and the application might have obtained a cert from elsewhere. The mere fact that you visit a URL could reveal information about the person who made the request (yes, probably not in the browser scenario). I would prefer the same cert verification logic for all protocols, and I would be reluctant to accept solutions that allow AIA fetching for some protocols (like the browser) but not for others. The fact that self-signed certs "usually" don't contain AIA is not a sufficient argument for enabling it, because nothing stops attackers from producing certs that include it. (I think nobody is punishing CAs. In my opinion a CA should clearly and automatically include all required intermediates when they deliver the entity cert to the customer. By not doing so, the CAs punish the developers who must again and again triage bugs that are caused by CAs not educating their users ;-) )
This educational exercise costs us a lot of grief, especially since IE supports AIA completely. I'm not suggesting removing the responsibility from the CAs, but neither should the browser make things more difficult. Server certificates aren't delivered like client certificates, and usually not in PKCS#12 format, hence intermediate CA and root certificates must be installed manually. This isn't always done correctly... Now, AFAIK support for AIA exists in NSS (PKIX), so enabling it in Firefox (and TB) shouldn't be such a big step anymore? Isn't this the job of PSM? Or can this be done only at the core level, as you suggested?
Is it agreed that it's OK, from a privacy standpoint, for the browser to fetch the other certs? Regarding email, there is an existing mechanism for determining whether the mail client should fetch images from a remote site. Can we attach fetching of the other certs to that mechanism? If you check for images, fetch the certs. If not, then don't.
This bug cannot be resolved until PSM starts to use libPKIX for all cert path building and cert path validation. So, it is blocked by bug 479393.
I think it would not be a problem to add the newer GoDaddy CA in the next update. Or do you see a problem? If the certificate chain sent is incomplete, why is it verified as valid in IE?
Comment #2 and comment #3 are correct. This problem almost always indicates a server that fails to include the required intermediate certificate. This failure will potentially be addressed in Phase 2 of revising the Mozilla CA Certificate Policy.

Comment #49 states that 25% of servers are misconfigured by not having all required intermediate certificates. This is a problem that belongs to the certificate authorities (CAs), not to Mozilla. The CAs have not sufficiently emphasized how their subscribers are supposed to configure their servers.

The concept inherent in this bug report disturbs me. A Web server administrator who does not understand the need to install intermediate certificates might also not understand other aspects of Internet security. After all, the mere presence of an SSL certificate does not prevent hacking, software bugs that display sensitive information on unsecured Web pages, or the misplacement (loss) of backup media. When I go to a secure Web site, I want some assurance that the administrator is indeed paying attention to what is needed for security, at least that the server has indeed been configured correctly.

In any case, what Microsoft does in IE should not direct what Mozilla does. The frequency of Microsoft patches to address security vulnerabilities in Windows and IE indicates to me that Microsoft really does not devote much up-front attention to security.

If this RFE is implemented, it means that convenience for naive users -- and ignorant Web server administrators -- outweighs a concern for security. In that case, I request a user-oriented option to turn off this capability.
Andrew Prout wrote in Bug 542674 comment 3: > [T]his change will also allow FF/NSS to better > support bridged PKI environments where there > are multiple chains to different trust anchors > possible. The web server cannot be configured > correctly for this because the server does > not know which trust anchors the client will > accept and therefore does not know which > chain to send.
We are a group of CS researchers currently evaluating the state of SSL and certificates on the Internet. We are passively monitoring the Internet uplinks of several networks (mostly university and research) and recording information about SSL/TLS sessions and the exchanged certificates. http://notary.icsi.berkeley.edu gives a short overview of our work.

One thing we are currently trying to measure is the impact of AIA fetching on certificate verification, and we thought a few preliminary numbers might be interesting to this bug.

Currently, our data collection contains 329,531 unique chains. When validating these chains using all intermediate CAs we ever saw, we can validate 316,797 (>96%) of those chains at least once.

When looking at just those 316,797 chains that we can validate, and disabling intermediate caching completely, 25,500 of them no longer validate (bringing the number of validatable chains down to about 88%). This is equivalent to a user visiting a page with a completely new installation of Firefox.

For those 25,500 chains that did not validate without intermediate caching, we tried what would happen if we switched the verify function to PKIX and enabled AIA fetching. The result is that over 98% of these chains would have validated when using the AIA information to retrieve missing intermediate certificates.
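As a sanity check, the percentages reported above follow directly from the raw counts (variable names are mine, not from the measurement paper):

```python
# Raw counts reported above, from the ICSI notary dataset.
total_chains = 329_531          # unique chains observed
valid_with_all_seen = 316_797   # validate using every intermediate ever seen
lost_without_cache = 25_500     # of those, fail with an empty intermediate cache

share_valid = valid_with_all_seen / total_chains                        # ~0.96
cold_start = (valid_with_all_seen - lost_without_cache) / total_chains  # ~0.88
```

So a fresh Firefox profile, with no cached intermediates, drops from >96% to roughly 88% of chains validating, which is the gap AIA fetching would close.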
Hello, as I see it, we are concentrating here on SSL server certificates, but intermediates are not only used there. If you don't have the chain elements, there are two scenarios:

1. SSL
a) In Firefox: the server should send the chain to the client; if the admin has not configured it, the chain will not be complete in Firefox.
b) In IE: if the chain is missing from the server's reply, IE and CryptoAPI together fetch the missing chain elements from the AIA URLs (this second behaviour is a workaround for lazy administrators :) ).
c) Scenario a) follows the TLS standards; b) adds something more after the TLS-standard server chain fails.

2. Client certificates
a) In Firefox: when you install the client certificate, you also need to install its intermediate.
b) In IE/Windows: when you just open a certificate to view it, crypt32 gets to work: it fetches the intermediate, and if that is not there, it also fetches the roots from the MS trust list.

I think we need a quick fetch at install time...
Re comment #73: As I pointed out in comment #68, scenario 1.a in comment #73 is symptomatic of a Web server administrator who is not fully cognizant of how to configure a Web server for proper security. This indicates there might be additional security problems. Fixing this bug as proposed would weaken user safety by enabling poor security practices by Web server administrators. Even comment #73 admits that doing what Internet Explorer does is "a workaround for the lazy administrators". When it comes to secure browsing, lazy administrators are a security vulnerability that should not be encouraged.
As with (unstapled) OCSP, it's also a privacy issue when the client fetches the intermediate cert from the AIA URLs. I strongly suggest NOT implementing this extension and closing this bug as WONTFIX.
This issue is nowhere near as severe a privacy issue as non-stapled OCSP. With OCSP, the client would request the revocation status of the end-entity (probably web-server) certificate, which would enable a malicious OCSP server to track the user's browsing habits. That is, it would know that my IP address has visited www.acme.com because it asked for www.acme.com's certificate revocation status. With AIA, the client would only be asking for the intermediate certificate. Each intermediate certificate will have signed many end-entity certificates, therefore leaking far less information to a malicious AIA server. By noticing that my browser has requested an intermediate StartCom certificate, Eddy (if he were that way inclined, of course!) would only know that I've visited one of the multitude of sites that have been signed by that intermediate - nothing more, and much less than OCSP gives away. This bug should be reopened.
I most definitely disagree with comment #77. The lack of an intermediate certificate for a supposedly secure Web site clearly indicates that the site administrator does not know a basic requirement for securing the Web site. To me, this means the site is not secure at all. This RFE should indeed remain WontFix. See my comment #68 and comment #74.
While a mission to clean up the web such as the one in comment #78 is commendable, in reality such schemes only work if all browser vendors are behind them; see, for example, the current SHA-1 deprecation policy. Going it alone will lead to alienation of Firefox's ever-decreasing user base.

Currently, according to some of the web stats (I accept that they vary considerably and are to be taken with a large pinch of salt, but they give a basic idea), FF has between 20% and 6.5% of web users. That is, between 80% and 93.5% of web users do not use Firefox. If the stats were reversed and FF had 80% or so of the users, then maybe it would be in a position to take the moral high ground and attempt to force sys-admins to learn their job. But as the stats currently stand, taking this high ground will only lead to users leaving FF for browsers that are happy to fetch intermediate certs using AIA information and give them the website they are after.

Your average user isn't going to email the webmaster insisting that they learn their job and add intermediate certs to their servers, quoting chapter and verse from RFCs. Your average user will just dump FF and start using IE/Chrome/Opera etc., all of which support AIA and give users what they want - a working website. Now, I'm certain that the stats aren't low simply because FF doesn't process AIA, but it doesn't help. Anything that scares users away, such as being shown a security message when the same website works on other browsers, will simply be another nail in poor old Firefox's coffin.

If bad sys-admins really are a concern for Mozilla, what about enabling AIA and designing a system that anonymously sends URLs of sites that do not send intermediates to a central repository accessible to conscientious Mozilla volunteers, who could contact site admins to educate them? Mozilla might be able to get other vendors to join the scheme and get some good press out of it. A win-win situation ;-)
After the recent news that Chrome enabled AIA fetching on Android, :Keeler and I analyzed the TLS error reports submitted by users who opt in to such reporting. Our analysis goal was to set an upper bound on the decrease in total SEC_ERROR_UNKNOWN_ISSUER error pages seen by users if Firefox performed AIA fetching. We found that upper bound to be a 5.88% reduction in total unknown-issuer errors, if AIA fetching were implemented and always successful.

To do this analysis, we began with one month of TLS error reports (February 2017), sampled 1% of those reports, and filtered that sample to only reports for the SEC_ERROR_UNKNOWN_ISSUER error. Then we produced a list of all the end-entity certificates from those reports, along with the number of times each certificate was observed in the sample. This produced 2.2 GB of certificates, PEM-encoded. We then filtered to certificates that had been observed 11 or more times, reducing to a more manageable 122 MB of PEM-encoded certificates (cert count = 83,583).

We then processed each of the certificates to determine whether it had an AIA extension field. As we were looking for an upper bound, we assumed that if it had such a field, AIA chasing would have allowed the connection to succeed without errors. For each certificate with an AIA field, we assign it a weight equal to the number of times it was observed. The weighted summation is divided by the total number of observations in the set to get an upper bound on the fraction of connections that could be salvaged through AIA fetching.

Notes on the analysis: this did not filter out certificates that are actually trusted in our root program. Doing that would increase the apparent effect, but the purpose of this analysis was to look for global effects on Firefox connection errors, similar to the Chrome team's analysis. While I don't believe we can release the input data externally, if anyone wants to modify the analysis script to control for whether a certificate is trusted, we'd be happy to re-run it.
However, given the low initial result, I do not expect to see numbers for Firefox as high as those the Chrome team reported. These low numbers aren't compelling to me, personally, to trade away the privacy.

Links:
- https://groups.google.com/a/chromium.org/forum/#!topic/net-dev/H-ysp5UM_rk
- https://support.mozilla.org/t5/Fix-slowness-crashing-error/What-does-quot-Your-connection-is-not-secure-quot-mean/ta-p/30354#w_reporting-the-error
- https://gist.github.com/jcjones/535b5672d075910fdce4f55b9ce57ef7
- https://gist.github.com/mozkeeler/29754494dcdb3b169483595283f29923
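The weighted upper-bound computation described in the analysis above amounts to the following sketch (the function name and the toy data are mine; the real analysis operates on the PEM corpus described in the comment):

```python
def aia_salvage_upper_bound(certs):
    """Upper bound on the fraction of unknown-issuer errors AIA could fix.

    certs: iterable of (times_observed, has_aia_extension) pairs, one per
    unique end-entity certificate. Each cert with an AIA field is weighted
    by its observation count; the weighted sum is divided by the total
    number of observations. This assumes every AIA fetch would succeed,
    which is what makes the result an upper bound.
    """
    total = sum(n for n, _ in certs)
    salvageable = sum(n for n, has_aia in certs if has_aia)
    return salvageable / total
```

With this weighting, a handful of frequently reported AIA-bearing certificates dominates the bound, which matches the decision to keep only certs observed 11 or more times.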