Closed Bug 724929 Opened 12 years ago Closed 11 years ago

Remove Trustwave Certificate(s) from trusted root certificates

Categories

(CA Program :: CA Certificate Root Program, task)

Type: task
Priority: Not set
Severity: blocker

Tracking

(Not tracked)

RESOLVED WONTFIX

People

(Reporter: mozilla_bugzilla, Assigned: kwilson)

References

Details

Attachments

(2 files, 4 obsolete files)

User Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:10.0) Gecko/20100101 Firefox/10.0
Build ID: 20120129141551

Steps to reproduce:

Trustwave issued a subordinate root certificate to a company, thereby enabling the company to issue unlimited SSL certificates for any domain/hostname:

http://blog.spiderlabs.com/2012/02/clarifying-the-trustwave-ca-policy-update.html

This is a violation of the Mozilla CA Certificate Policy, specifically:

We reserve the right to not include a particular CA certificate in our software products. This includes (but is not limited to) cases where we believe that including a CA certificate (or setting its "trust bits" in a particular way) would cause undue risks to users' security, for example, with CAs that

    knowingly issue certificates without the knowledge of the entities whose information is referenced in the certificates; or

I therefore request that the root certificate(s) of Trustwave be removed from the CA store of all Mozilla products.

I know that they made this public and stated that they won't do it again, but I can no longer place any trust in their certificates. I think this should serve as an example: CAs who have these business practices, or had them in the past, should not be included in products used (and trusted) by so many people.
OS: Linux → All
Hardware: x86 → All
The urgency and importance of this "bug" should be maximum, I guess.
Please bear in mind that since Mozilla's list of trusted certificates is nowadays used not only by Mozilla products but also by, for example, the Net::SSL Perl module, this may affect even server security. Some unknown and untrusted company can now eavesdrop on our server-to-server connections; that situation cannot be allowed to continue.
The DigiNotar bug was classified as blocker so I'll do the same here.
Severity: normal → blocker
Adding a representative from Trustwave, so we can get their perspective on this.
Status: UNCONFIRMED → NEW
Ever confirmed: true
I am actively looking into this.
Status: NEW → ASSIGNED
Assignee: nobody → kwilson
It has been known for quite a while that such automatic MITM devices exist; for example, see http://files.cloudprivacy.net/ssl-mitm.pdf

Given the vast number of intermediate certificates out in the wild (as documented by the EFF SSL Observatory), issued to all sorts of organizations, it shouldn't surprise us that such a combination of intermediate and device was deployed.

It's good that we now have a public example of such a deployment; it will help to raise awareness that we urgently need improvements to today's SSL trust model.

I'm not yet convinced that TrustWave should be blamed. In my opinion, if you decide to blame TrustWave, you could equally blame any CA that has issued at least one intermediate CA certificate and gave it to a different entity.

The CA that issues an intermediate "empowerment CA" certificate (TrustWave) cannot control what will be done with the certificate by the recipient.

We don't know how many other environments already use such a device and an intermediate empowerment certificate from a different CA. I would assume the quoted example isn't the only one in the world.

If the intermediate CA certificate was indeed bound to an HSM, then its use was restricted to the corporate environment where it was installed, which is an argument on the positive side for TrustWave.

If anyone is to blame, it's the company who deployed the MITM device. I hope they had informed their employees. If not, shame on them.

If we decided that intermediate CAs are indeed a problem in general, then we would have to ask that all CAs immediately revoke all intermediates given to anyone else, and if we ever find another CA who issued a non-revoked intermediate, used by another company, then we could punish that CA by removing it. But at this point, I don't see a benefit in punishing just TrustWave.

This also tells us that we browser and software vendors must urgently get our homework done and make revocation checking strict and actually work. If we don't, any CA could just issue an intermediate CA certificate, claim it was a mistake and immediately revoke it, give the intermediate to the customer, and have the customer ensure that the channel to revocation checking is disabled in their environment.

This is why we also need a better solution where the powers of a single CA will be limited. We need a system where a second entity must confirm the not-yet-revoked state of a certificate. For example, in my MECAI proposal the client would require a notary statement about the correctness of the server's certificate from an additional CA. The MITM device being used here wouldn't have been able to present such a voucher.
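For illustration only, here is a minimal Python sketch of that second-party check, assuming a hypothetical notary that signs a voucher over the server certificate's hash. This is not the actual MECAI protocol; the voucher format and all names are invented, using the pyca/cryptography package:

    # Sketch: a client that refuses a server certificate unless an
    # independent notary has signed ("vouched for") its fingerprint.
    # Hypothetical voucher format; not the real MECAI design.
    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    notary_key = Ed25519PrivateKey.generate()   # stands in for the notary CA
    NOTARY_PUB = notary_key.public_key()        # shipped with the client

    def notary_vouch(server_cert_der: bytes) -> bytes:
        # The notary signs the hash of the certificate it has verified.
        return notary_key.sign(hashlib.sha256(server_cert_der).digest())

    def client_accepts(server_cert_der: bytes, voucher: bytes) -> bool:
        # Required in addition to normal chain validation (omitted here).
        try:
            NOTARY_PUB.verify(voucher, hashlib.sha256(server_cert_der).digest())
            return True
        except InvalidSignature:
            return False

    # A MITM box can mint its own chain, but it cannot mint the voucher:
    cert = b"server certificate in DER form"    # placeholder bytes
    assert client_accepts(cert, notary_vouch(cert))
    assert not client_accepts(b"MITM certificate", notary_vouch(cert))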

We must improve all parts of the trust system. Based on my current information, punishing TrustWave would simply be a pawn sacrifice.
(In reply to Kai Engert (:kaie) from comment #7)
> I'm not yet convinced that TrustWave should be blamed. In my opinion, if you
> decide to blame TrustWave, you could equally blame any CA that has issued at
> least one intermediate CA certificate and gave it to a different entity.

I don't have an opinion about this specific case until facts are presented. However, the above statement is absolutely not true: every CA that issues certificates to another party and is included with Mozilla NSS is bound to the Mozilla CA Policy, which clearly defines the requirements for issuing end-entity certificates and which minimum steps are required. Period.

> The CA that issues an intermediate "empowerment CA" certificate (TrustWave)
> cannot control what will be done with the certificate by the receipient.

This is their problem and not yours. Mozilla has a policy (and now there are also the Baseline Requirements) that states the minimum requirements. A CA that cannot control how a CA certificate is used by a third party had better not issue such a certificate.

And why should some CAs that tightly control their PKI, don't agree to certain customer demands, and burden themselves with sufficient (expensive) controls be blamed for another CA's carelessness? Again, I'm not voicing an opinion on whether TrustWave has done anything, but the requirements must be applied equally.
Such a deployed device doesn't have to be limited to the company intranet. They could route other traffic over this device. 

Yes, the company using this device is to blame, but you also have to blame the CA if they didn't forbid such usage of their intermediate certificate.
The CA accepted the Mozilla CA Certificate Policy and violated it!

The only reason not to remove the root CA is that the third-party company violated the policy of the CA and the CA had no knowledge of the usage, but that leaves the question of why they cannot control their customers.
(In reply to Eddy Nigg (StartCom) from comment #8)
> 
> Mozilla has a policy (and new there are
> also the baseline requirements) that state the minimum requirements. A CA
> that can not control how a CA certificate is used by a third party should
> better not issue such a certificate.

(a) TrustWave (A) CA certificate, trusted by Mozilla software

(b) Intermediate CA certificate, signed by (a), given to company (B).

Even if A has a policy that restricts the use of (b), how can A enforce that B won't abuse (b) in any way they desire, if the abuse happens in a closed environment?

I would assume it to be the norm that A cannot control what (b) is being used for.
(In reply to Matthias Versen (Matti) from comment #9)
> Such a deployed device doesn't have to be limited to the company intranet.
> They could route other traffic over this device.

That's true, although with a higher likelihood of detection (e.g. by people who run the Certificate Patrol add-on).
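For illustration, the core of what Certificate Patrol does can be sketched in a few lines of standard-library Python (the host names and the fingerprint store are placeholders):

    # Sketch: remember the fingerprint a host presented last time and
    # warn when it changes - the Certificate-Patrol style of detection.
    import hashlib, ssl

    def cert_fingerprint(host: str, port: int = 443) -> str:
        pem = ssl.get_server_certificate((host, port))
        der = ssl.PEM_cert_to_DER_cert(pem)
        return hashlib.sha256(der).hexdigest()

    known: dict[str, str] = {}   # placeholder persistent store

    def check(host: str) -> None:
        fp = cert_fingerprint(host)
        if host in known and known[host] != fp:
            print(f"WARNING: certificate for {host} changed - possible MITM")
        known[host] = fp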


> Yes, the company using this device is to blame but you also have to blame
> the CA if they didn't forbid the usage of their intermediate certificate in
> such a way.
> The CA accepted the Mozilla CA Certificate Policy and violated it !

Yes, I agree that CAs should forbid that use.
The question is: even if they forbid it, can they really enforce that it won't be done? Or rather, would they even notice?


> The only reason to not remove the root CA is that the third party company
> violated the policy of the CA and the CA had no knowledge about the usage
> but that leaves the question why they can not control their customers.

How is a CA supposed to control what happens behind the firewall of their customer?
I would like to take this opportunity to provide more details about the change in the Trustwave CA policy found here:

http://blog.spiderlabs.com/2012/02/clarifying-the-trustwave-ca-policy-update.html

This single subordinate root system was one that:
- was issued to an enterprise customer for use on their internal network, with network usage policies presented to users;
- had the CA private key stored in a non-exportable, non-recoverable mode;
- generated the private keys for the cloned end-entity certificates within the HSM, so they were never available to the system administrators (or any other user) of the dedicated hardware device.

We did not take the decision to enable this system lightly; thus we crafted CA policies (CPS) with the customer, in addition to providing on-site audits to ensure the above statements were accurate. We did not create a system where the customer could generate ad-hoc SSL certificates AND extract the private keys to be used outside this device. Nor could the subordinate root key ever be exported from the device. The system was used only for routing internal corporate traffic and not in any other way. In addition, our on-site audit focused on physical security and controls around the appliances to ensure that the boxes could not be physically taken from the facility to be placed on other networks to route traffic there.

The system is not being revoked because of any type of compromise or issue with the trust of the system. The system is being revoked in light of the major SSL events that occurred last year, as we have decided to no longer enable this system or any systems of this type in the future.

I will be happy to clarify as much of this information as legal will allow me to get away with.
(In reply to Kai Engert (:kaie) from comment #10)
> Even if A has a policy that restricts the use of (b), how can A enforce that
> B won't abuse (b) in any way they desire, if the abuse happens in a closed
> environment?

Basically this is not your problem if you don't dictate how this should be done. The Mozilla Policy at http://www.mozilla.org/projects/security/certs/policy/InclusionPolicy.html requires:

for a certificate to be used for SSL-enabled servers, the CA takes reasonable measures to verify that the entity submitting the certificate signing request has registered the domain(s) referenced in the certificate or has been authorized by the domain registrant to act on the registrant's behalf;

Failing to ensure the above would be a violation by that CA.

> I would assume it to be the norm that A cannot control what (b) is being
> used for.

It's upon the CA to enforce the Mozilla CA Policy and implement controls that do exactly that. If a CA cannot control it, it shall not issue such a certificate; otherwise it has failed right from the outset to comply with the policy.
(In reply to Brian Trzupek from comment #12)
> This single subordinate root system was one:
> - that was issued to a enterprise customer for use on their internal network

Hi Brian,

thank you for your reply. A few questions:

How did you ensure that the system was only used on their internal network? (See note regarding physical access below)

What did the system do after performing the MITM? Can you give a more detailed explanation of its method of operation?

> routing internal corporate traffic and not in any other way.  In addition,
> our on site audit focused on physical security and controls around the
> appliances to ensure that the boxes could not be physically taken from the
> facility to be placed on other networks to route traffic there.

What would have prevented them from just routing other traffic through the box? No physical access/modification is required to do that. I work in the networking field and I know how easy it is to redirect traffic streams to where you need them. For MITM eavesdropping it would be enough to mirror the traffic through the box.


> The system is not being revoked because of any type of compromise or issue
> with the the trust of the system. The system is being revoked in light of
> the major SSL events that occurred last year, as we have decided to no
> longer enable this system or any systems of this type in the future.

I assume you can't tell us who the customer for the system was?
The most important detail to focus on is (per comment 12 by Brian Trzupek above) that Trustwave knew, when it issued the certificate, that it would be used to sign certificates for websites not owned by Trustwave's corporate customer.

That is, Trustwave sold a certificate knowing that it would be used to perform active man-in-the-middle interception of HTTPS traffic. 

This is very, very different from the usual argument that is used to justify "legitimate" intermediate certificates: the corporate customer wants to generate lots of certs for internal servers that it owns.

Regardless of the fact that Trustwave has since realized that this is not a good business practice to be engaged in, the damage is done.

With root certificate power comes great responsibility. Trustwave has abused this power and trust, and so the appropriate punishment here is death (of its root certificate).
> For MITM eavesdropping it would be enough to mirror the
> traffic through the box.

Scratch that; that doesn't work in this case. My fault.
(In reply to Brian Trzupek from comment #12)
> This single subordinate root system was one:
> - that was issued to a enterprise customer for use on their internal network
> - with network usage policies presented to users.
> - had the CA private key stored in a non-exportable or recoverable mode
> - generated the private keys for the cloned end entity certificates within
> the HSM and were never available to system administrators (or anyone other
> user) of the dedicated hardware device.

Brian, just one question from me - was this ever disclosed in your publicly published CA policies and practice statements before or after applying for inclusion with Mozilla?
I just wanted to provide an update.

I have some answers to the questions here that are pending internal release. Once I get them released I will provide the update.
(In reply to Christopher Soghoian from comment #15)
> The most important detail to focus on, is (per comment 12 by Brian Trzupek
> above) that Trustwave knew when it issued the certificate that it would be
> used to sign certificates for websites not owned by Trustwave's corporate
> customer.

Let's say, next week we learn about another CA-Y who issued an intermediate CA certificate to corporation Z, because an employee of Z sends us a valid-looking MITM certificate for one of the major webmail sites, not revoked, chaining up to CA-Y. Z will prove that all of Z's employees have signed an agreement in which they consent to being watched while at the workplace, aware that they shouldn't do any private communication there. CA-Y will succeed in plausibly denying that they knew about Z's actions.

What should happen with CA-Y?

Christopher, you say we should focus on the fact that TrustWave knew about the MITM abuse of the intermediate. In your opinion, should TrustWave be punished harder than a potential CA-Y? Or should both be punished equally? Or should CA-Y be punished harder, because TrustWave was upfront with us, while CA-Y might be actually lying and trying to get away with it, and we shouldn't accept CA-Y's unawareness? (Eddy suggests that CA-Y is supposed to be aware of Z's actions.)

Finding out what a CA knew and what it didn't know might be very difficult to judge.

In my opinion, if we decided to punish TrustWave, we would have to equally punish any CA-Y that we might learn about in the future, because we cannot easily judge whether CA-Y knew or didn't know about the abuse.

If we decided to punish any CA-Y that we will learn about in the future, then any current employee of a corporation like Z will have the power to get the involved CA-Y revoked.

In my opinion, the TrustWave incident should motivate us to make a general decision for the future on how CAs like CA-Y should be treated if their intermediates get abused for MITM, whether they knew about the abuse or not. If we decided the punishment should always be permanent revocation of their root, then we could potentially give all current CAs a one-month grace period and allow them to come clean by revoking all current intermediates they might be unable to control.

In my opinion, there should be only one scenario where TrustWave deserves the maximum punishment: if we agreed that any future CA-Y is also immediately guilty and will get the maximum punishment, and if we agree that all of today's CAs must have already been aware of this punishment risk for MITM (ab)use of intermediates.

I appreciate that we are now required to discuss this scenario, because it hasn't been sufficiently discussed in the past. If I had to make the decision, I would be willing to grant probation to TrustWave, provided the intermediate was restricted to the closed environment, and as long as we don't learn of any harm having been caused by this revoked intermediate.

In addition, we should decide whether there should be any distinction between a CA-Y, where the "victims" are fully aware of being watched, and a CA-X, where the issued intermediates are being used to spy on people without their knowledge.

The above was my attempt to contribute to the solution of this bug.

For the record, in my personal opinion MITM devices are a bad thing. If any closed environment decides to use them, it should only be done after fully educating all affected users about the "big brother" effect, and the devices should only ever be used with a private CA (one that isn't trusted by browsers by default), thereby giving educated visitors of the environment a chance to discover the MITM attack. MITM devices should never be used against people who assume they are connected to the public Internet and assume they are not being watched. Because of my personal opinion I'd have to join the request for punishing TrustWave; however, because I feel such punishment should be announced in advance, and that might not have happened until after this discussion, I'd be willing to accept probation.
I fail to see how this cannot result in the removal of the trust-bits for Trustwave's roots.

This is a blatant and self-admitted violation of Mozilla policy, as well as a 'bad thing' for internet users in general.

I am certain other CAs have performed the same actions and have not yet disclosed this. I cannot believe that any publicly-trusted CA would provide key pairs capable of signing publicly-trusted certificates to any organization without considering that this could happen.
There are many reasons to sign subordinate certificates, but unless the signing keys are held by an entity that is disclosed and audited to the same standards as a root CA (i.e. WebTrust or the equivalents), the only reason in this case could be a purely financially-driven one, with no regard for the danger this poses to internet users at large.

Setting a precedent now by removing the roots will underline that the actions are unacceptable and violations of Mozilla CA policy are not tolerated.

Kai: While the question was not directed at me, I do feel that the punishment should be handed out equally. Should another CA be found doing the same thing, yes, it absolutely should suffer the same fate Trustwave should.

I'm sure Mozilla can't be seen to only take action in cases where the CA was unwillingly compromised (like DigiNotar) and not in cases of blatant disregard for the policies set.
In my opinion, the TrustWave root certificate should be removed, and so should any other if there are certificates signed by that root that the owners of the domain did not approve!
This is a clear violation of the Mozilla CA Certificate Policy and has to be punished!
What message are we sending to other CAs by not removing it? That you can do anything you want, as long as you revoke it after you get caught?
Why should any CA refuse a paying customer any certificate, if there is no punishment?

It doesn't matter if they knew that the certificate would be used for malicious purposes (or even if it was used for such), if they didn't secure their servers, or if someone gave it out while drunk! If it is clear that a CA doesn't follow Mozilla's CA Certificate Policy, it shouldn't be included with Mozilla products!

I don't say that we cannot include them again after a proper audit! But
The whole CA model is broken. I believe TrustWave when they say that there are many competitors doing the same thing they did.

In this case TrustWave is the one who went public with it. So what now? I think you will have to set a precedent. If you let it go now, there is probably no way you can refuse other CAs who do the same.

What if "HonestAchmed CA" sells a Sub-CA to a company in Iran who uses it to spy on its users and report them to the Iranian government? Where will you draw the line?
Personally I think that taking their root cert out will not solve anything at all. But I agree that if you let this one slip others can refer to this as a precedent.

In my opinion, all browser vendors should consider empowering users with tools that make it easy for the individual user to choose which CA root (or intermediate) certificates to accept. In addition, a tool such as "Certificate Patrol" (FF add-on) should be active by default as well. This would not only raise awareness but actually provide means by which users can mistrust CAs on their own without jumping through hoops. No doubt there are different requirements depending on the target audience: some will not want to be bothered with those details while others will want full control. And no, I don't consider the Tools->Options->Advanced->Encryption->[View Certificates] dialog very user-friendly or even suitable for the problem I was trying to describe in this paragraph.

So while I agree with the person who filed the bug, I think that there are different facets to the problem and multiple ways to approach those aspects, not just dropping that particular CA's root cert ...
So in the long term we would need tools to verify that no other CA is supporting MITM attacks, maybe some sort of rating system for CAs, and better, more frequent auditing of CA processes, security standards and portfolios... but in the short term we definitely have to remove trust in CAs that obviously do not comply with the requirements!
If these companies get away with it, that makes the whole CA trust system nonsense... why trust a CA if we have to assume that someone could look into the packets anyway? Then I don't need SSL encryption at all.

So definitely kick out TrustWave and any other CA we have proof is issuing sub-root certificates for MITM attacks (no matter whether for curious companies or law enforcement agencies).
I'm bringing this trust issue up for comments at the American Bar Association task group on identity management where we are looking at trust frameworks and NIST-800-63 levels of assurance for users in relationship to the White House Cybersecurity Policy, National Strategy on Trusted Identity in Cyberspace (NSTIC), NIST, and Commerce. 

I'm also a Firefox user, aware of current MITM attacks and mitigation, following the current conversation regarding Merkle/Lamport one time hash trees, sovereign keys, side channel trust validation, etc. 

Overall, my preference is to increase trust where possible by resolving issues like this, and this could be a great opportunity to do so.

In particular Certificate Authorities generate a great deal of policy statements which relying parties can examine, and may be required to examine before trusting a specific certificate. In practice these are contracts of adhesion in which most users click through.

In practice, for most Firefox users, a certificate failure or CA failure is a troublesome experience, and there's no path forward. It causes a great deal of frustration for users.

Also, if you are already MITMed, it's difficult, (but not impossible) to do a workaround. 

All the updates you download can be infected if code signing is involved, and so on. 

It is serious, and does not stop in the case of an APT. 

The losses related to this in terms of intellectual property are substantial. 

The solutions to the losses are often hamhanded, impractical, and interfere with the functioning of the Internet. 

And so here we have an example of a ham-handed, impractical, net-interfering DLP approach to preventing intellectual property loss, which Trustwave has acknowledged was a bad idea. That's the good part; they might be a leader in doing so.

The question is how to move forward and get the other CAs to come clean, and that's going to take a process: the CAs took a risk in relying on crowd-sourced intelligence, and some, like Trustwave, may have miscalculated the loss of reputation; they then reassessed their position to manage that loss.

If a MITM is part of a lawful intercept in some country, it is useless, since the "false trust" that vendors of MITM'ed certificates promise to their clients through malware is just that.

Yet the effect is to lessen baseline "internet trust" into closed user groups (which can issue their own certs in any case), and thus the true effect is to degrade the value of the entire Internet, which contributes trillions to global economies, and to return to walled gardens, which you reach via the Net.

This is a significant step backwards, since many of those walled gardens also datamine your most personal information.

Since MITM certificates were found to be used against Syrian and Egyptian activists, and some of the technology was sourced from U.S. and U.K. companies to disrupt Facebook and Twitter planning for protests, the value of these companies is diminished.

The net effect is a form of censorship, and the net routes around censorship.

Typically that involves checking for revoked certificates, which the browser does, and then the question is: should it? Is the decision to remove or keep a root certificate a fairly blunt instrument, when we need better trust frameworks?

Adam Langley recently brought up issues regarding time delay in browser certificate checking using OCSP, privacy issues of web sites visited being sent to the CA, and whether the browser will make a partial connection in any event. 

In short the entire issue is complicated and the users do not understand the complexity of SSL issues, but expect that someone (meaning you) has taken the effort to move the process forward. 

This may be one of these opportunities to reform, rather than replace, by demonstrating that the right thing has been done, but the community is the ultimate decision maker under RFC-5280.

At the same time, many users rely on simple warnings to indicate that a MITM could be taking place; some will take the additional step in Firefox of going into about:config and enabling hard failure on OCSP checking errors; and some agencies entirely reject any browser-supplied list of trusted CAs, which is revealing.
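(For reference, the about:config preferences alluded to here are, to the best of my knowledge:

    security.OCSP.enabled    1 = fetch OCSP where the certificate names a responder
    security.OCSP.require    true = treat an OCSP failure as a hard failure

Defaults vary by release.)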

This brings up an alternative under X.509v3 of listing root certificates under the pkiCA attribute in X.500 outside the trusted local store and checking certificates by doing path validation to the listed root certificates there. 

This however would not have worked if the local store already trusted the root.

Yet in issuing a subordinate root certificate the stakes were considerably higher: if the Trustwave CPS ver. 2.9, which lists subordinate CAs, omitted that fact deliberately, then the question turns to intent. Did they just bankrupt their own company with this one move, considering their liability?

Did Trustwave intend to "fool" mobile users of Firefox into thinking they had secure connections to do online banking from work, or was there a policy in place where these people worked that said, as a BYOD policy statement, "Go ahead and use your mobile phone, but if you connect through our network then we are going to monitor it for DLP to meet compliance requirements"?

If the users (and guests) were not being fooled and this was documented in all the relevant policy statements on which the users were relying, then the remaining issue is really whether that certificate could be used to sign other certificates, and whether that was prevented by the HSM.


I have a few other questions about the nature of the subordinate root certificate that was issued and how it was used, given the Trustwave Holdings Inc. certificate policy and certification practices statement ver. 2.9, dated July 13, 2010.

The purpose of this certificate was to prevent data loss by personal devices using SSL at the company which installed it. 

Had (or have) the users been made aware of this fact, that their personal devices, such as smartphones, would be subject to this DLP module in the network path? What is the OID of the policy used in the certificate?

It states on the web site that relying parties have recourse of up to $500,000, according to the following, which also notes mobile browser support:


Trustwave SSL Certificates Offer:

    99% Browser Ubiquity
    Trustwave’s root certificates are trusted by all major Web Browsers
    Highest industry standard 128/256 bit encryption
    High assurance validation (both domain and organization vetting)
    Extended Validation (EV) SSL Capable CA
    We refuse to propagate the SGC (server gated cryptography) myths, and as such we do not sell them.
    High Warranty, up to $500,000 USD.
    30 day no questions money back guarantee
    Unlimited re-issuance policy for the life of a certificate
    Free Trusted Commerce Site Seal, demonstrating both compliance and trust to online customers.
    Mobile browser support.


Are there current plans to compensate those users who were affected by the issuance of that certificate? If so, under which CPS does this fall: version 2.9, or a subsequent CPS that would have been required for the subordinate root certificate according to ver. 2.9?

Since the DLP module kept logs, what proactive steps have been made to contact the users of Firefox at company X and inform them of their rights regarding the use of mobile devices?

It was stated that this is common practice in the industry. 

To what extent, now that Trustwave has broken with the pack by deciding to come forward to manage this trust issue, can the problem be dealt with in a structural process model within the various groups that support certificate roots, and related documents such as Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.0 by the CA Browser Forum?


thanks,

Peter Bachman
c=US Manager,  Cequs Inc.
peterb@cequs.com
All, your thoughtful and constructive input into this bug is appreciated.

Just as a reminder... Please use the mozilla.dev.security.policy forum to discuss policy improvements. There is already a thread created for this called: "Subordinate-CAs from trusted roots for managing encrypted traffic"

If there are other CAs with roots included in NSS that have signed subCAs that are used in this manner, then we urge those CAs to revoke those subCAs asap.
(In reply to Kathleen Wilson from comment #26)
> If there are other CAs with roots included in NSS that have signed subCAs
> that are used in this manner, then we urge those CAs to revoke those subCAs
> asap.
Kathleen, just wondering: does this already imply a decision on part of the Mozilla developers to remove the Trustwave cert(s)?

Thanks.
(In reply to Oliver from comment #27)
> Kathleen, just wondering: does this already imply a decision on part of the
> Mozilla developers to remove the Trustwave cert(s)?

We're still evaluating the situation, and we are determining a course of action.

My personal opinion: I would like to impress upon the CAs the seriousness of this, and that if any of them do have this type of subCA it needs to be revoked. However, I think that the "death sentence" for this CA would be extreme.
All,

In the interest of Mozilla's users, there is probably now no urgent need to distrust Trustwave's CA root since the original problem apparently has been recognized and corrected. Most likely there is currently no real risk to its users that would warrant such a step - even though it obviously sucks big time that the policy of Mozilla and user expectations were violated.

Rather, I'm interested to know how such occurrences will be prevented from now on, and which steps and requirements are placed upon Trustwave at this time. And, for that matter, any other CA: how can Mozilla make sure that its policies and requirements are implemented by the CAs to the letter, and not according to their own considerations and judgement?
Eddy,

I welcome your comments in mozilla.dev.security.policy. 

In the meantime, we need to decide what to do about Trustwave's certs.
IMHO, this is a very hard dilemma. On one hand, there is a blatant policy violation that should not go without consequences. On the other hand, as far as I understood, the CA decided without outside pressure that what they did was wrong, revoked the cert, and promised not to do it again. Punishing them for coming forward would mean that any CA that did this in the past would try as hard as possible to hide that fact, instead of changing it. Therefore I would suggest implementing appropriate changes of policy (i.e. make clear that any CA that did something like that has 2 weeks to revoke the cert and 4 weeks to fully disclose it, and any CA that does not comply or issues such a cert in the future will be removed immediately), but NOT punishing Trustwave.

However, if Trustwave did NOT admit to this freely without outer pressure, then off with their hea^Wtrustbits.

A compromise between removing their trust bits completely and doing nothing could be to prevent the CA from issuing sub-CA certificates (and distrusting those). I assume this could be done by replacing the root certificate in the cert store with one that has a path length limitation set. (This of course assumes that the CA does not issue third-party intermediates at the same depth as the certificates it uses for its own signing.)
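For context, the length limitation referred to here is the pathLenConstraint field of the X.509 basicConstraints extension. A sketch of inspecting it with the Python cryptography package (the PEM bytes are a placeholder):

    # Sketch: read basicConstraints from a CA certificate.
    # path_length == 0 means this CA may issue end-entity certificates
    # but no further subordinate CAs below it.
    from cryptography import x509

    pem_bytes = b"-----BEGIN CERTIFICATE-----..."   # placeholder PEM
    cert = x509.load_pem_x509_certificate(pem_bytes)
    bc = cert.extensions.get_extension_for_class(x509.BasicConstraints).value
    print("CA:", bc.ca, "pathLenConstraint:", bc.path_length)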
(In reply to Jan from comment #31)
> 
> A compromise between removing their trustbits completely and doing nothing
> could be to prevent the CA from issuing Sub-CA certificates (and distrusting
> those). I assume this could be done by replacing the root certificate in the
> cert store with one that has a length limitation set.


I don't think a chain length restriction is a solution.

We need at least one intermediate, because the recommended practices say that issuing end-entity certificates directly from the root is undesirable. When using an intermediate, a case of abuse (e.g. a hacker abuses the intermediate's key) can be dealt with by revoking the intermediate.

But as soon as you allow one intermediate, you cannot technically enforce whether the intermediate is used inhouse or elsewhere.
(In reply to Sebastian Wiesinger from comment #14)

Sebastian,
 
As you acknowledged, mirroring would not work in this case because SSL inspection proxies require SSL termination to function. This means the device would need to be present on a network where route changes could cause traffic for downstream users to flow through the device. The customer in question is not an ISP and therefore controls only their internal network (where the device was used).
 
The system would send the data to a DLP system for policy enforcement.
 
Correct, we are not at liberty to disclose customer information at this time.

(In reply to Eddy Nigg (StartCom) from comment #17)

> Brian, just one question from me - was this ever disclosed in your publicly
> published CA policies and practice statements before or after applying for
> inclusion with Mozilla?

Eddy,

Our general CPS doesn't include information about this, as we only did this off a specific subordinate certificate. The information disclosing how the system operated was in the CPS that governed that subordinate. As a company we generally have the CPS for a subordinate include the operational details for which that subordinate was created.
(In reply to Brian Trzupek from comment #34)
> Our general CPS doesn't include information about this as we only did this
> off a specific subordinate certificate. The CPS that governed that
> subordinate is where the information was that disclosed how the system
> operated. As a company we generally have the CPS for a subordinate include
> the operational details for which that subordinate was created.

Where is/was this CPS at the time of the inclusion process with Mozilla? And to all: how can Mozilla ensure that all CP/CPSs of a CA have been disclosed?
(In reply to Kai Engert (:kaie) from comment #32)
>
> But as soon as you allow one intermediate, you cannot technically enforce
> whether the intermediate is used inhouse or elsewhere.

Yes, but as long as all currently issued sub-CAs are from one root, it will (a) invalidate existing sub-CAs and (b) prevent new sub-CAs from being created, unless the CA issues them directly from its root. The latter would need to be enforced by policy (i.e. the CA agrees not to do it, and if it does, the root is pulled).

I would like to make clear that this is an alternative to pulling the root, if it is decided that some kind of punitive measure is needed/wanted but putting the CA out of business completely is not. I do NOT suggest doing this to Trustwave, as they voluntarily disclosed their misbehavior. I will discuss details in mozilla.dev.security.policy.
Personally, I think Trustwave should be commended for being the first CA to come forward, admit to, and renounce this practice of issuing unrestricted 3rd-party sub-CAs.

When I read Mozilla's policy, and the CA/B Forum baseline requirements, I see enough wiggle room in there that someone might plausibly claim that some agreed-upon scenarios for MitM certs were not prohibited by the agreement. In fact, Geotrust was openly advertising a "Georoot" product on their website until fairly recently.

Those who are advocating Trustwave's removal from the list would seem to be of the belief that Trustwave was somehow alone in this practice. As I do not hold that belief, I think it would be a mistake to continue to threaten Trustwave and discourage other CAs from coming forward at this time.
I can agree with your idea that Trustwave did the right thing here; however, they did it only after they had messed up in the first place.
There may be CAs that use exactly the same practices, and there may be some that do worse things; Trustwave may be one of the good ones. Even if that were true, the Trustwave certificate has to be removed nonetheless, because Mozilla needs to make an example of this case and make it clear that such practices cannot and will not be allowed in any way.
It's pretty clear in the policy that MITMs aren't permitted.  And that is also the universal expectation of the public.  More terminally, that is what the PKI / CA offering is:  we sell you these so that you can't be MITM'd.

So, yeah, there might be some wiggle room in the court of law to limit damages.  Granted, lawyers, etc.

But there will be no wiggle in the court of public opinion.

Either the system stands against the MITM or not, that is the question.  If the system does not, then we don't need it.  We can go back to PGP, self-signed certs, friends on Facebook, ebay rep, LinkedIn or any other of 1000 ideas.

The entire history of PKI in SSL and friends can be stated in simple words

     "you need CA-signed certificates to stop the MITM."

At the heart of this question is a principle.
As a comment on the original Version 1 policy, from memory: there was a requirement to publish disclosures, e.g. CPs and/or CPSs. The leeway that was written in existed because there was (and is) no unified way to express the set of documentation. For example, some groups used a single CPS, others split it amongst CPs and CPSs, and the line between them wasn't clear.

However, there was no question in the authors' minds about relevant disclosures. A disclosure that an MITM product existed would not be a disclosure that could be omitted. CAs were to issue certificates to parties they had verified as controllers/owners. Full stop.

Indeed, the existence of factory MITMs had a marked effect on the policy. There were claims at the time that 2 organisations were in the root list and were doing MITMs in some fashion or other (distinct from the current event). This was clearly not acceptable, but those who made the claims would not come forward with evidence, citing NDAs and jobs. So the allegations died for lack of substance.

At the time, again from memory, it was pointed out that the control was audit, and if you wrote a secret CPS that said "we MITM our customers", *and the auditor signed off on this*, then this could be seen as a perfect backdoor to bypass the policy. I do not think anyone claimed this was an acceptable state of affairs, but at the time there was no understood or agreed defence against auditors that signed off, in secret, on clear violations of the spirit of the system.

There was also some discussion as to the power of acceptance and removal.  At the time, there was an expectation for a community voice.  But, given the weaknesses above and elsewhere, it was argued (successfully) that Mozilla should have the sole and complete power to make decisions of the nature of root inclusion and exclusion.  Otherwise, any controversial decision would turn into a foodfight.

I still think that is the right decision, and especially in light of this event.  However it should be noted that reputation for the entire system is the coinage that is invested each time.
(In reply to Ian Grigg from comment #39)
> It's pretty clear in the policy that MITMs aren't permitted.

Yes, but does that same interpretation exist among CAs?

> And that is also the universal expectation of the public.

So we have a classic software engineering question here: does the implementation match the stated and implied requirements?

I propose a two-step alternate fix for this bug which I think addresses the problem closer to the root cause than just patching over immediate symptoms:

1. Remove the certs of all the CAs that have *not* issued a binding statement (much like Trustwave has done) committing that they will never issue a cert to another party (even in an HSM) which *could* technically be used for MitM (AKA "traffic management") of any domain names or IPs that party doesn't legitimately control according to public domain registration/ICANN/IANA.

2. Add code to NSS (for Firefox and other Mozilla projects) to watch for such certs and if any are detected in the wild, automatically store and forward the entire chain as proof to Mozilla, EFF's SSL Observatory, and other public CA auditing projects. If any such cert is found to have been issued, the CA that issued it would be summarily removed from the list of trusted roots. Identifying such 'rogue' sub-CA certs could be easily done with a small whitelist of the hashes of the CA's internal use sub-CAs. No new protocol needed.
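A minimal sketch of the whitelist check described in step 2 above, assuming the client sees the full chain as DER blobs; the whitelist hash value is a placeholder:

    # Sketch of step 2: flag any intermediate whose hash is not on the
    # issuing CA's published whitelist of internal-use sub-CAs.
    import hashlib

    WHITELIST = {
        "0000placeholder-sha256-of-a-disclosed-intermediate",
    }

    def rogue_intermediates(chain_der: list[bytes]) -> list[str]:
        # chain_der: [end-entity, intermediates..., root]
        suspects = []
        for der in chain_der[1:-1]:              # intermediates only
            h = hashlib.sha256(der).hexdigest()
            if h not in WHITELIST:
                suspects.append(h)               # evidence to store & forward
        return suspects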

So if, as Ian suggests, the current batch of CAs have already agreed to and are abiding by this policy then they should have no problem with this fix.

Right?
(In reply to Marsh Ray from comment #41)
> (In reply to Ian Grigg from comment #39)
> > It's pretty clear in the policy that MITMs aren't permitted.
> 
> Yes, but does that same interpretation exist among CAs?

If you can't find a CP/CPS of a CA that declares it acceptable and part of their business practices, then you might have some evidence that it's actually not acceptable (and only done hidden and shrouded in secrecy by some). The Mozilla Policy certainly doesn't allow it, nor do the EV Guidelines or the Baseline Requirements.
@Marsh,

> > It's pretty clear in the policy that MITMs aren't permitted.

> Yes, but does that same interpretation exist among CAs?

well, maybe not.  Which is why I say it is a question of principle.  Either there is a policy, or not.

Which is it?

The line was drawn a long time ago.  Now it's time to stand on one side or the other.
Trustwave needs to be removed. Every other decision would only make Mozilla look bad.

If there are no consequences for breaking the policy, there's no need for a policy in the first place. CAs need to know how far they can go.

Same goes for CAs with sub-standard security who get hacked because they didn't update their servers in a year. Just remove them. These companies make so much money, yet get away with providing a really bad service, because the browser vendors don't do anything.

You're either for the users or against them. Pick a side.
I agree with Marsh: TrustWave needs to be congratulated, and as a reward, I think they should have their CA distrusted. If others are less willing to come forward with this kind of issue, we'll find them in the wild, and hopefully Mozilla will take even harsher action.

Selling a CA cert for MITM is absolutely a violation of the letter and the spirit of the Mozilla policies. I'm glad they came forward but I'm sure all of the people harmed aren't warmed over by their "honesty" this late in the game.
As much as I would like to "death sentence" a CA, we shouldn't do it to the one who came forward, apparently without outside pressure. They did wrong, but at least it's bringing more attention to the issue. The focus should be on creating a mechanism to find other certificates being used for MitM, as Marsh Ray suggested. If a CA doesn't quickly revoke and issue a public statement, then take their head off.

(In reply to Marsh Ray from comment #41)
> I propose a two-step alternate fix for this bug which I think addresses the
> problem closer to the root cause than just patching over immediate symptoms:
> 
> 1. Remove the certs of all the CAs that have *not* issued a binding
> statement (much like Trustwave has done) committing that they will never
> issue a cert to another party (even in an HSM) which *could* technically be
> used for MitM (AKA "traffic management") of any domain names or IPs that
> party doesn't legitimately control according to public domain
> registration/ICANN/IANA.
> 
> 2. Add code to NSS (for Firefox and other Mozilla projects) to watch for
> such certs and if any are detected in the wild, automatically store and
> forward the entire chain as proof to Mozilla, EFF's SSL Observatory, and
> other public CA auditing projects. If any such cert is found to have been
> issued, the CA that issued it would be summarily removed from the list of
> trusted roots. Identifying such 'rogue' sub-CA certs could be easily done
> with a small whitelist of the hashes of the CA's internal use sub-CAs. No
> new protocol needed.
The deployed PKI has failed to scale to the current userbase; this incident is just an example of that failure to scale.

At a certain point, engineers involved in maintaining and operating the infrastructure cannot just pin blame on "human error".  We have probably reached that point.  We know that there is a fundamental problem with the design.  The CA architecture is inherently defective.
Seems like a moderate path would be to declare a short-term amnesty for CAs that disclose and agree to stop, while concurrently developing detection. After that, start delisting any caught in non-compliance. Perhaps this is a way forward that other browser vendors might agree to adopt too?

I'm a bit confused as to why a lawfully deployed corporate data loss prevention solution would need a certificate connected to a public root. Why not create your own CA cert and require employees on your network to install it? Using a real certificate would seem to be extremely costly (see their description of the fancy HSM with non-recoverable keys) and unjustified if the interception was being performed with the users' knowledge and consent.
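For comparison, a private CA of the kind suggested here takes only a few lines with the Python cryptography package; a sketch, with the subject name and validity period as placeholders:

    # Sketch: generate a self-signed private CA, not chained to any
    # public root. Employees install it explicitly, so the interception
    # is visible and opt-in rather than silently trusted.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME,
                                         u"Example Corp Internal DLP CA")])
    cert = (
        x509.CertificateBuilder()
        .subject_name(name).issuer_name(name)    # self-signed
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow()
                         + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0),
                       critical=True)
        .sign(key, hashes.SHA256())
    )
    with open("internal-ca.pem", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))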
1) DigiNotar was a bad experience; however, I feel that this is worse. DigiNotar was compromised; TrustWave blatantly disregarded policy on purpose. Until policy changes and technical controls have been added and a new compliance audit has occurred, they cannot be trusted. They were given trust and abused it. It could be argued that they should be given a second chance, but this isn't elementary school; this affects the trust of every Mozilla user.

2) Even if the CA cert was contained in an HSM, that doesn't mean it did not issue another CA certificate underneath itself. Nor does it mean it did not issue a certificate for ANY website that may have been exported and used anywhere.

3) I'm guessing, due to the limited market, this is a BlueCoat SSL proxy device; if that is the case, its on-the-fly certificates are kept in the appliance's hardware module. That doesn't mean that traffic from somewhere else could not have been redirected through this company's network for them to analyze. Unlikely, but possible. I'd imagine that many high-level hackers have real jobs, too, and would not hesitate to take advantage of this kind of scenario.

4) The biggest issue, to me, is this: publicly trusted certificates should not be used for internal corporate use. Period. As a generic statement, most companies couldn't care less about information security beyond checking off a list. If a company feels that it needs to use a commercial vendor instead of its own internal CA, that just says it doesn't get the concepts and/or does not have the technical skill to do this right. These DLP solutions are intrusive; however, when used with an internal PKI solution, you can say they are protecting corporate information, which the company has the right to do. With a publicly trusted certificate, though, they are able to MITM any company, not just their own, which is not acceptable.

5) When using an internal PKI for this, the user should be able to view the certificate chain and determine whether the connection is being intercepted or is a true connection to the server. Yes, it can be argued that many users do not know how to do this, and even those that do often don't bother; however, that is a weak argument. Those who are knowledgeable and take the time to look should be able to trust the results. This type of issuance destroys that trust.

6) As a side note, TrustWave appears to issue certificates directly from its root; this is not good practice. The root should be kept offline and never, ever, have a path to the internet. Roots cannot be revoked, they can only be untrusted; this is sloppy. Yes, many users disable CRL checking because they don't understand its function and importance; to me this is a failing of technical support teams and of general user (and admin) education.

7) A friendlier method would be to block just the issued subCA cert; however, this does not address the bigger issue: the trust chain was broken at the root level, so that is where this branch, and all its leaves, must be pruned.


My opinion: This certificate must be blocked due to violation of policy. After mitigating the circumstances and undergoing a new compliance audit, they may be allowed to apply again with a new root certificate signed with a new key (check the modulus).
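The modulus check in that last sentence is easy to script; a sketch with the Python cryptography package, assuming RSA keys and placeholder PEMs:

    # Sketch: confirm a re-applied root really carries a new RSA key by
    # comparing the public moduli of the old and new certificates.
    from cryptography import x509

    old_pem = b"-----BEGIN CERTIFICATE-----..."   # distrusted root (placeholder)
    new_pem = b"-----BEGIN CERTIFICATE-----..."   # replacement root (placeholder)

    old_n = x509.load_pem_x509_certificate(old_pem).public_key().public_numbers().n
    new_n = x509.load_pem_x509_certificate(new_pem).public_key().public_numbers().n
    print("new key" if old_n != new_n else "SAME KEY - reject")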
I strongly feel that Trustwave's CA root should be revoked. This was a blatant and willful policy violation, and it totally undermines any credibility they have. SSL CAs get their business for the sole reason of selling trust. If a CA cannot be trusted, then as far as I'm concerned they don't deserve to be in the CA business. As a website owner, I am certainly not happy that an arbitrary corporation is able to generate certificates for my domain name.

There is no guarantee that this hardware device has not been used to generate subordinate CAs, nor that Trustwave's customer hasn't routed other traffic through it. It would have been all too easy for this customer to have given the hardware device to an ISP, which could have intercepted numerous SSL streams. As others have said, this corporation could quite easily have used its own internal CA to do MitM web monitoring, but it paid for a commercial chain of trust, which gives it the ability to read SSL streams in a covert manner.

I suspect the only reason they've come clean is they suspect someone on the inside was about to disclose their activities.

If Mozilla does not revoke the Trustwave CA promptly, then as I see it Mozilla itself is actively supporting a CA who by their own admission grants bogus sub-CA certificates. One cannot have a policy which is inconsistently applied. As a user, I take comfort in knowing my browser vendor vets the root CAs that are bundled. If that vetting isn't as stringent as the policy states, then Mozilla is actively misleading me, the browser user, by saying CAs are trustworthy when actually they aren't, which puts Mozilla in the same category as Trustwave.
I would encourage Mozilla to consider the full potential danger of the scope of this incident. Such actions are unacceptable -- and in light of a recent search, the issue is even more serious than I believe is being considered.

The certificate cannot be trusted, and any other CA that has done the same should be immediately revoked from Mozilla trust.

https://www.trustwave.com/government/public-sector.php

States the following:

"Trustwave products and services facilitate compliance and security. We are mindful of the critical role Trustwave plays in securing public sector networks and the sensitive information throughout the enterprise, as well as the role it plays in facilitating compliance with regulatory standards.

Secured networks are essential to the mission of public sector organizations from federal defense and civilian agencies to federal, state and local governments. Trustwave plays a critical role in strengthening the defense of these networks, securing all layers of public and private networks.

Trustwave secures the privacy and confidentiality of critical data, protecting thousands of government employees and the citizens they serve. These are a few of public sector organizations that rely on Trustwave for threat management and compliance reporting: DOD, DHS, DOT, FAA, DOJ, FBI, DOL, NASA, DOE, Army, Air Force, Navy, HHS, RCMP, DND, the National Archives, the State of Virginia and the Royal Netherlands Army."

The site then goes on to prominently feature the official logos of the following United States Government organizations (and EDS):

Federal Aviation Administration
Department of the Army
Department of Justice
Department of Labor
Department of the Navy
Department of the Treasury
Department of Homeland Security
National Institutes of Health
Spec Operations Command
NASA
U.S. Army
U.S. Nuclear Regulatory Commission
U.S. Department of Defense
U.S. Regulatory Commission
U.S. Intelligence Community
U.S. Missile Defense Agency
U.S. Naval Special Warfare Agency
EDS
I agree with all those insisting that Mozilla revoke Trustwave as a CA. However, until Mozilla takes action on what appears to be a no-brainer decision in my personal and professional opinion, I have elected to take it upon myself to delete/distrust all Trustwave (SecureTrust Corporation) certificates in all of my browsers.
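For NSS-based browsers this can also be done from the command line with certutil, rather than clicking through each UI. A minimal sketch, assuming a Firefox profile directory and the nickname shown in the certificate manager (both vary per installation, and older NSS databases omit the sql: prefix):

$ # list the certificates and their nicknames in the profile's store
$ certutil -L -d sql:"$HOME/.mozilla/firefox/<profile>"
$ # clear all trust flags (",," = untrusted for SSL, email, and code signing)
$ certutil -M -n "SecureTrust CA" -t ",," -d sql:"$HOME/.mozilla/firefox/<profile>"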
I'd also say that Trustwave (or any other CA) should be immediately removed from the trusted roots if any such action is disclosed.

The whole purpose of CAs is trust and by such actions they clearly show that one cannot trust them.
IMHO it doesn't matter whether such companies are forced to do so by law or not, especially as their root certs are included worldwide and the legal situation may be completely different from country to country.

If, e.g., the NSA secretly orders VeriSign to give them _any_ certificate, this may be lawful in the US (well... maybe) but not in, e.g., Europe... and their certificates are also trusted there.

If a CA is ordered by law to make such fraudulent certs and is also ordered not to disclose this (which is usually the case), they should have to revoke their own root certs.
Better safe than sorry.


In this special case one should have even less mercy on Trustwave. As pointed out, there is no technical reason for a real CA to issue such fraudulent certs just to monitor a company's employees; any self-signed CA will do.


I really hope that Mozilla will send a clear signal here and throw them out.

When I go through Mozilla's list of trusted certs I feel really strange anyway... especially when root certs are included from countries which certainly do not meet western standards with respect to democracy, freedom of speech and the rule of law.
Guess I don't need to name them...

Chris.
btw: This shows once again that the system of a strictly hierarchical PKI is inherently broken.

Many of its problems are solved by the arbitrary trust hierarchies allowed by OpenPGP.
Not only are "normal-user to normal-user" trust relations possible, but also the same thing we have now with strictly hierarchical X.509 (by means of trust signatures).


There's even an RFC (RFC 6091) that specifies TLS with OpenPGP keys (which is even implemented by GnuTLS).

The funny thing is that the whole world cries out whenever something like this or DigiNotar happens... and the browser manufacturers promise to improve things... but everything stays the same.
We still have dozens of CAs which can issue certs for just about anything.

I mean, none of the browser manufacturers really cares much about security, otherwise they would force the use of OCSP or CRLs (and fail if both don't work) by default.
And they would deny access to any server that is still (possibly) vulnerable to the SSL renegotiation bug.
I know this would break things, but, dear Mozilla... you have not even enabled
security.ssl.treat_unsafe_negotiation_as_broken
by default... while actually you should enable:
security.ssl.require_safe_negotiation
...regardless of whether sites break.
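For reference, both preferences can be set persistently from a user.js file in the profile directory; a sketch, equivalent to flipping them in about:config:

// mark servers that only offer unsafe renegotiation as broken in the UI
user_pref("security.ssl.treat_unsafe_negotiation_as_broken", true);
// refuse to connect to such servers at all (the stricter option argued for above)
user_pref("security.ssl.require_safe_negotiation", true);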


Oh, and really... all those things like the compiled-in lists of EV-enabled CAs, or the really, really... really stupid idea that Google now seems to pursue with Chromium (dropping support for OCSP and CRLs altogether and relying on lists of revoked certs)...
What the hell?
Was nothing learned from the early days of hosts files (instead of DNS)?
Why should I trust Google (or anyone else) to be able to keep these lists up to date for all their CAs (not to speak of my own added CAs)? Why should I trust Google (or anyone else) to do the-right-thing™ and nothing evil?


Just my 2ct,
Chris.
(In reply to Gregory Maxwell from comment #48)
> 
> I'm a bit confused as to why a lawfully deployed corporate data loss
> prevention solution would need a certificate connected to the root. Why not
> create your own CA cert and require employees on your network to install it?

No doubt many do, however...

1. If you have 10,000+ employees, it's a minor military operation to get that installed on everybody's computer and keep it updated. Think about all the systems running on Linux, newfangled smartphones with no UI for it, etc. Often applications have their own separate trusted root stores. Visitors come in and out of the company every day for meetings and training and need access to the Internet. It's a huge hassle.

2. Often businesses need to do business with each other and "trust" each other's internal servers and users for data exchange. Sometimes this can be accomplished with "bridge" CAs, but it's a big deal.

and of course...
3. Sometimes maybe they really do enjoy the ability to do, um, "root/Administrator-free MitM" just a little bit. Maybe for them this is OK, on their device, on their network, and so on.
@ Brian Trzupek,

thanks for stepping into the lion's den.  It is clear that disclosure will play an important part in this.

My questions relate to audit.  When this root was audited during the exposure to this product, what audit criteria were used by the external auditor?  (I'm guessing WebTrust.)

Can you provide a link to the audit report(s) applicable over root and time period?

Was the external auditor in a position where he or she was aware of the nature of the product?  That is, an internal MITM traffic monitoring device?

I understand that you yourselves did an internal audit of the company. I'm interested in how the external auditor viewed and signed off on that report. I've yet to formulate a question in this regard though, but any further details would be appreciated.

As I say, disclosure is going to be key, but I also understand that legal may insist on 'no comment.'  Which in itself is going to be interesting.
(In reply to Marsh Ray from comment #55)

> Often applications have their own separate trusted root stores. Visitors
> come in and out of the company every day for meetings and training and need
> access to the Internet. It's a huge hassle.

Just a minor nitpick. It is one thing to spy on your own employees, who have signed (and are presumed to have read) employment agreements disclosing such monitoring, and who potentially have less of a reasonable expectation of privacy when using an employer provided network.

It is an entirely different thing to spy on non-employee visitors who happen to be using the wifi network in your lobby or meeting rooms.

I'm no lawyer, but if Starbucks decided to covertly MITM the SSL connections of customers using its wifi network, I highly doubt that a disclosure buried in the click-through terms of use would be sufficient to protect the company from legal trouble.
(In reply to Christopher Soghoian from comment #57)
> 
> It is an entirely different thing to spy on non-employee visitors who happen
> to be using the wifi network in your lobby or meeting rooms.

Oh absolutely!

To be clear, I was specifically not saying whether it was right or wrong, legal or illegal, moral or not, and so on. I was simply explaining why a large enterprise would find it attractive to have their own public root.
Mozilla and other vendors now sit on the horns of a dilemma. I have attempted to frame both sides of the argument here:

http://financialcryptography.com/mt/archives/001359.html

The major problem I see is that whichever way a vendor chooses to go, there will be losses.
In PKI, trust is a commodity, and Trustwave (et al) violated that trust. It's sad that Mozilla is even debating what to do with these folks (and their friends).

Evading computer security systems and tampering with communications are violations of federal law in the US. So said the US Attorney in New Jersey when he charged Wiseguys Tickets with gaming the TicketMaster systems [1,2]. If the US Attorney is to be believed, Trustwave (et al) violated 18 USC 1030 (a) (4) and 1030 (c) (3) (a).

If Mozilla does not revoke these folks, a dangerous precedent will become a time honored policy.

[1] http://www.wired.com/threatlevel/2010/03/wiseguys-indicted/
[2] http://www.wired.com/images_blogs/threatlevel/2010/03/wiseguys-indictment-filed.pdf
(In reply to Christopher Soghoian from comment #57)
> (In reply to Marsh Ray from comment #55)
> 
> > Often applications have their own separate trusted root stores. Visitors
> > come in and out of the company every day for meetings and training and need
> > access to the Internet. It's a huge hassle.
> 
> Just a minor nitpick. It is one thing to spy on your own employees, who have
> signed (and are presumed to have read) employment agreements disclosing such
> monitoring, and who potentially have less of a reasonable expectation of
> privacy when using an employer provided network.
> 
> It is an entirely different thing to spy on non-employee visitors who happen
> to be using the wifi network in your lobby or meeting rooms.
I don't believe the US code is written that way. While an employee may agree to have his/her communications monitored, a company can only do so as a passive adversary. As soon as a company goes active with a MitM attack, they violate US federal law by evading a computer security system and tampering with communications.

By the way, did the other endpoint in the secure channel agree to monitoring?
(In reply to Eddy Nigg (StartCom) from comment #8)
> 
> And why should some CAs that tightly control their PKI and don't agree to
> certain customer demands and burden themselves with sufficient (expensive)
> controls be blamed for another CAs carelessness? Again, I'm not voicing an
> opinion if TrustWave has done anything, but the requirements must be applied
> equally.
Mr. Nigg is quite correct here. StartCom and other CAs which follow the rules and operate ethically should not be put at a disadvantage or penalized. I imagine some would be quite happy to win customers away from an unethical CA which was more than willing to disregard its agreements and responsibilities to users and the community.

If Mozilla does not take administrative action, it will set a horrible precedent and encourage more bad behavior.
(In reply to Sebastian Wiesinger from comment #22)
> The whole CA model is broken. I believe TrustWave when they say that there
> are many competitors doing the same thing they did.
> 
> In this case TrustWave is the one who went public with it. So what now? I
> think you will have to set a precedent. If you let it go now, there is
> probably no way you can refuse other CAs who do the same.
Mozilla should send out a questionnaire to all included roots and subordinate roots and get a public statement from them. More of these incidents are going to surface in the future, and a public statement will help vet the truth later. For those that deny and are later caught, the consequences and subsequent administrative action will be much easier.

It's similar to what Mr. Nigg alluded to - not all CAs engage in deceptive practices and policy violations. Those operating ethically and within the bounds of their agreements should be rewarded (and should not have to suffer for a competitor's underhanded ways).
(In reply to Oliver from comment #23)
> In my opinion all browser vendors should consider empowering users with
> tools that make it easy to choose the (for the individual user) acceptable
> CA root (or intermediate) certificates. In addition a tool such as
> "Certificate Patrol" (FF add-on) should be active by default as well. This
> would not only raise awareness but actually provide means by which users can
> mistrust CAs on their own without going through hoops.
http://ssl.entrust.net/blog/?p=615
On Friday, 10 Feb 2012, I emailed an inquiry to the Texas State Board of Public Accountancy, Enforcement Division, regarding the public attestation performed by Boysen & Miller, PLLC, Certified Public Accountants (Texas Firm License ID C04172); for Trustwave Holdings, Inc. of Chicago, Illinois.

In that inquiry, I specifically referenced the public attestation linked from the "Trustwave Online Legal Repository: https://ssl.trustwave.com/CA/", with the link text "AICPA/CICA WebTrust for Certification Authorities Audit Report".  That link points to the url:

https://cert.webtrust.org/ViewSeal?id=1098

Later on Friday, 10 Feb 2012, I received a reply from Virginia C. Moher, who provided an advisory staff opinion which is not binding on the Texas State Board of Public Accountancy (the "Board").

I have forwarded a copy of that response to the person assigned to this bug: Kathleen Wilson.
I have posted a draft CA Communication in the mozilla.dev.security.policy forum for review/discussion. My intent is to make it clear that this type of behavior will not be tolerated for subCAs chaining to roots in NSS, give all CAs fair warning and a grace period, and state the consequences if such behavior is found after that grace period. There is also an action item for CAs to update their CP/CPS to make it clear that they will not issue subCAs for this purpose.

I greatly appreciate everyone's comments and input into this bug. We have taken your input into account, and some of your input is directly incorporated into the CA communication.
wtc, could you test this patch? You'll also need the patch for bug 727204, which is already checked in.

Thanks, 

bob
Attachment #598395 - Flags: superreview?(kaie)
Attachment #598395 - Flags: review?(wtc)
I propose to adjust the labels used in the patch.

Given that this patch does NOT distrust Trustwave, but rather distrusts subCAs that were issued by Trustwave, I propose to change all 4 occurrences of the labels to use more specific wording like:

  Untrusted Trustwave's subCA 1
  Untrusted Trustwave's subCA 2

I will wait for Bob to send me more details by email, so I can double check the other contents of the patch are correct.
Bob, you encoded the first octet (0x6b) in octal incorrectly.
It should be \153 rather than \253.
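(As a quick shell sanity check of both the octal encoding and the decimal serial number quoted elsewhere in this bug:

$ printf '%o %o\n' 0x6b 0xab
153 253
$ printf '%d\n' 0x6b49d205
1800000005

so \253 is the encoding of 0xab, not of 0x6b.)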

I generated a patch independently using addbuiltin -t "p,p,p",
which helped me catch the incorrect serial numbers in your patch.

The two certs should be described with "MITM" or a euphemism
("data loss prevention"?), with this bug number.

I tested this patch with the vfychain command, first with
CERT_VerifyCertificate and then with CERT_PKIXVerifyCert (-pp):

BEFORE

$ vfychain -u 1 <cert files>
Chain is good!
$ vfychain -u 1 -pp <cert files>
Chain is good!

AFTER

$ vfychain -u 1 <cert files>
Chain is bad!
PROBLEM WITH THE CERT CHAIN:
CERT x. <Subject name of CERT x> [Certificate Authority]:
  ERROR -8172: Peer's certificate issuer has been marked as not trusted by the user.
    <Issuer name of CERT x>
$ vfychain -u 1 -pp <cert files>
Chain is bad!
PROBLEM WITH THE CERT CHAIN:
CERT x. <Subject name of CERT x> [Certificate Authority]:
  ERROR -8171: Peer's certificate has been marked as not trusted by the user.

NOTE: I like the -8172 error returned by CERT_VerifyCertificate
better, but this is just a minor issue with CERT_PKIXVerifyCert.
Attachment #598395 - Attachment is obsolete: true
Attachment #598395 - Flags: superreview?(kaie)
Attachment #598395 - Flags: review?(wtc)
(a)
> Bob, you encoded the first octet (0x6b) in octal incorrectly.
> It should be \153 rather than \253.

I agree with Wan-Teh.
When using the tool from bug 728044, the output dumped directly from the CRL starts with \153.


(b)
> The two certs should be described with "MITM" or an euphemism
> ("data loss prevention"?), with this bug number.

Ok, how about "MITM subCA 1/2 - bug 724929" ?

I used this string and created yet another patch.

Note that the enhanced tool also adds comments with a human-readable issuer and serial number (same as in the octal dump of the binary encoding) for easier future maintenance of the certdata.txt file.


(c)
This is our first change to certdata in this round of NSS, therefore I added the version number update to the file nssckbi.h.


(d)
> I tested this patch with the vfychain command, first with
> CERT_VerifyCertificate and then with CERT_PKIXVerifyCert (-pp):

Wan-Teh, I compared my patch v3 with your patch v2.
In the certdata.txt section, our patches only differ in comments and in the label statements.

Would you like to repeat the test with this newer patch anyway?
Attachment #598431 - Attachment is obsolete: true
Attachment #598452 - Flags: review?(rrelyea)
Bob, Kai: please also add (manually) when these two certs expire,
so we know when these two trust records can be removed.  Thanks.

Kai, your addbuiltin output should not quote the value of Issuer
because the Issuer string contains quotes.
Addressed Wan-Teh's requests.
Attachment #598452 - Attachment is obsolete: true
Attachment #598452 - Flags: review?(rrelyea)
Attachment #598464 - Flags: review?(wtc)
Comment on attachment 598464 [details] [diff] [review]
Make sure the offending trustwave intermediates are not trusted in mozilla, v4

r+ rrelyea

I agree with wtc suggested changes, implemented here by kai (0x6b = octal 153 not 253 of course, but also the new cert names).

Also thanks for verifying the code.
Attachment #598464 - Flags: review?(wtc) → review+
As a note to others: If you want to protect a browser that doesn't already have this patch, you can enable automatic CRL fetching as follows:

Click on http://crl.trustwave.com/ORGCA_L2.crl
This will load the CRL into your browser.

At this point you are now protected from accessing the MITM certs (even if someone blocks your access to the CRL server). However, in about 2-3 days you may not be able to process any certs issued by the ORGCA_L2 intermediate (I don't remember exactly how we deal with CRLs past their next update, but at some point we view the CRL as too old and treat all certs issued by the CA as revoked). You can prevent this by going to Preferences (Tools->Options on Windows, Edit->Preferences on Linux) -> Advanced -> Encryption -> Revocation Lists. Select the Trustwave Holdings entry, click "Enable Automatic Update", and choose to update 1 day before the next update. You can delete this CRL once your browser has been updated with this patch.

bob
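(For command-line users: the downloaded CRL can also be imported with NSS's crlutil instead of the browser UI; a sketch, where the database path is the browser profile directory and the exact flags depend on the NSS version:

$ crlutil -I -i ORGCA_L2.crl -d "$PROFILE_DIR" -t 1

Here -I imports the CRL file given with -i, and -t 1 marks it as an SSL CRL rather than a key revocation list.)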
Comment on attachment 598464 [details] [diff] [review]
Make sure the offending trustwave intermediates are not trusted in mozilla, v4

This patch also works.

>+CKA_LABEL UTF8 "MITM subCA 1 - bug 724929"

I suggest that we put "bug 724929" only in the comment in this file
because it is more useful to NSS developers.

The CKA_LABEL should contain more user-friendly info.  It may be
displayed by the UI.  How about "MITM subCA 1 issued by Trustwave"?
Adjusted patch according to Wan-Teh's proposals.
Only the label and the comment have changed.
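For readers following along without the attachment: a distrust record in certdata.txt takes roughly the shape below (a sketch of the entry being discussed, with the multi-line octal dump of the issuer DER elided; the serial number shown is the DER-encoded INTEGER 0x6b49d205, i.e. the \153... bytes from the octal discussion above):

CKA_CLASS CK_OBJECT_CLASS CKO_NSS_TRUST
CKA_TOKEN CK_BBOOL CK_TRUE
CKA_PRIVATE CK_BBOOL CK_FALSE
CKA_MODIFIABLE CK_BBOOL CK_FALSE
CKA_LABEL UTF8 "MITM subCA 1 issued by Trustwave"
CKA_ISSUER MULTILINE_OCTAL
(octal dump of the issuer name elided)
END
CKA_SERIAL_NUMBER MULTILINE_OCTAL
\002\004\153\111\322\005
END
CKA_TRUST_SERVER_AUTH CK_TRUST CKT_NSS_NOT_TRUSTED
CKA_TRUST_EMAIL_PROTECTION CK_TRUST CKT_NSS_NOT_TRUSTED
CKA_TRUST_CODE_SIGNING CK_TRUST CKT_NSS_NOT_TRUSTED
CKA_TRUST_STEP_UP_APPROVED CK_BBOOL CK_FALSE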
Attachment #598464 - Attachment is obsolete: true
Attachment #598527 - Flags: review?(wtc)
Comment on attachment 598527 [details] [diff] [review]
Make sure the offending trustwave intermediates are not trusted in mozilla, v5 [checked in]

r=wtc.
Attachment #598527 - Flags: review?(wtc) → review+
This bug seems to be sliding toward being quietly marked "resolved" as a result of Kathleen's email and Bob's patch (for better or worse).  Is Mozilla at least going to require Trustwave to answer the questions in comment #35 and comment #56?
Comment on attachment 598527 [details] [diff] [review]
Make sure the offending trustwave intermediates are not trusted in mozilla, v5 [checked in]

I checked in the patch to distrust the intermediates for NSS 3.13.3


cvs commit: Examining .
Checking in certdata.c;
/cvsroot/mozilla/security/nss/lib/ckfw/builtins/certdata.c,v  <--  certdata.c
new revision: 1.85; previous revision: 1.84
done
Checking in certdata.txt;
/cvsroot/mozilla/security/nss/lib/ckfw/builtins/certdata.txt,v  <--  certdata.txt
new revision: 1.82; previous revision: 1.81
done
Checking in nssckbi.h;
/cvsroot/mozilla/security/nss/lib/ckfw/builtins/nssckbi.h,v  <--  nssckbi.h
new revision: 1.35; previous revision: 1.34
done
Attachment #598527 - Attachment description: Make sure the offending trustwave intermediates are not trusted in mozilla, v5 → Make sure the offending trustwave intermediates are not trusted in mozilla, v5 [checked in]
(In reply to sjs from comment #78)
> This bug seems to be sliding toward being quietly marked "resolved" as a
> result of Kathleen's email and Bob's patch

I don't think it seems to do that.
Let's assume the bug will stay open until a final decision has been made.
(In reply to Eddy Nigg (StartCom) from comment #35)
> (In reply to Brian Trzupek from comment #34)
> > Our general CPS doesn't include information about this as we only did this
> > off a specific subordinate certificate. The CPS that governed that
> > subordinate is where the information was that disclosed how the system
> > operated. As a company we generally have the CPS for a subordinate include
> > the operational details for which that subordinate was created.
> 
> Where is/was this CPS at that time of inclusion process with Mozilla? 

The Trustwave roots were included in NSS in 2008, per bug #418907, and the XRamp root was included in 2005, per bug #274723. The certificates for the subCA in question were created in 2011.
(In reply to Ian Grigg from comment #56)
> @ Brian Trzupek,
> 
> thanks for stepping into the lion's den.  It is clear that disclosure will
> play an important part in this.
> 
> My questions relate to audit.  When this root was audited during the
> exposure to this product, what audit criteria were used by the external
> auditor?  (I'm guessing WebTrust.)
>

WebTrust CA


> Can you provide a link to the audit report(s) applicable over root and time
> period?
> 

https://cert.webtrust.org/SealFile?seal=1099&file=pdf


> Was the external auditor in a position where he or she was aware of the
> nature of the product?  That is, an internal MITM traffic monitoring device?
> 

After reviewing the CPS and the audit statement, it looks to me like the auditors observed Trustwave's operations and the certs that Trustwave specifically issued, but did not observe the subCA's operations and certificates.


> I understand that you yourselves did an internal audit of the company.  I'm
> interested in how the external auditor viewed and signed off on that report.
> I've yet to formulate a question in this regard tho, but any further details
> would be appreciated.
> 

It's not clear to me that "the external auditor viewed and signed off on that report."

I think that this situation clearly demonstrates the need to add a statement in Mozilla's CA Certificate Policy to say that all subCAs must be technically constrained or they must be audited by an independent party and publicly disclosed. As you know, discussions have been ongoing about this in the mozilla.dev.security.policy forum, and proposed in item #9 of http://www.mozilla.org/projects/security/certs/policy/WorkInProgress/InclusionPolicy.html
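(For illustration of what "technically constrained" can look like in practice: the subCA certificate can carry a critical Name Constraints extension limiting it to the customer's own namespace. A hypothetical OpenSSL x509v3 extension profile for such a subCA, with example.com as a placeholder domain:

[ constrained_subca ]
# can only sign end-entity certs, not further subCAs
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
# names outside example.com will fail path validation in conforming clients
nameConstraints = critical, permitted;DNS:.example.com

A subCA issued under such a profile cannot mint trusted certificates for arbitrary third-party domains, which is exactly the capability at issue in this bug.)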
Combining comments #81 and #82, there appears to be an interesting timeline here. If the subRoot was created after August 2011, then there is no overlap between the audit opinion and the event in question.

On the one hand, it is fairly clear that whatever did happen, the subRoot did not survive for an entire audit cycle. This in itself indicates that audit might be doing some good. OTOH, the auditor expressed an opinion on management's assertion that subscriber information was properly authenticated (point (a) of Boysen & Miller's opinion, as referenced in #82).

Material questions to ask then include:

* Was the subRoot created before August 2011?
* Was the auditor made aware of the project, and especially the distinct nature of it?
* How long was the subRoot in effective operation?

The question I am trying to resolve is whether audit is worth the cost.  As far as I can see, this is an open question.  Now we have a data point (several if we include other events in 2011 such as DigiNotar).  We also have a widespread suspicion that CAs and auditors treat subCAs as contractually and liability-wise distinct, something that is against policy, so in effect many CAs and auditors may be engaged in "don't ask, don't tell" practices.
(In reply to Kai Engert (:kaie) from comment #79)
> Comment on attachment 598527 [details] [diff] [review]
> Make sure the offending trustwave intermediates are not trusted in mozilla,
> v5 [checked in]

Looking at the patch, I see two (2) subCAs:

+# Explicitly Distrust "MITM subCA 1 issued by Trustwave", Bug 724929
+# Issuer: E=ca@trustwave.com,CN="Trustwave Organization Issuing CA, Level 2",O="Trustwave Holdings, Inc.",L=Chicago,ST=Illinois,C=US
+# Serial Number: 1800000005 (0x6b49d205)

and also:

+# Explicitly Distrust "MITM subCA 2 issued by Trustwave", Bug 724929
+# Issuer: E=ca@trustwave.com,CN="Trustwave Organization Issuing CA, Level 2",O="Trustwave Holdings, Inc.",L=Chicago,ST=Illinois,C=US
+# Serial Number: 1800000006 (0x6b49d206)

But in Nicholas J. Percoco's blog post, I see a definite statement that there was only one (1) certificate. 

http://blog.spiderlabs.com/2012/02/clarifying-the-trustwave-ca-policy-update.html
"This is a proactive revocation, of the only certificate we issued for these purposes..."

2 != 1

Can someone please clear up this discrepancy for me?
Depends on: 728617
I'm also interested in the 2 vs 1 discrepancy.  Has it been explained somewhere else?
* Was the subRoot created before August 2011?
A: Yes, it was created in April, 2011.

* Was the auditor made aware of the project, and especially the distinct nature of it?
A: No. The auditor reviewed the issuance of the subCA's certificate, but the operation/use of the subCA cert was out of scope. This goes back to the ongoing discussions we are having in mozilla.dev.security.policy regarding subCAs.

* How long was the subRoot in effective operation?
A: The subCA cert was created in April, 2011.

* 2 certs vs 1 question
A: One cert was issued, then later reissued in order to make some technical changes. We decided to explicitly distrust both of these certs (the original one, and the one that replaced it).
(In reply to Kathleen Wilson from comment #86)

> * 2 certs vs 1 question
> A: One cert was issued, then later reissued in order to make some technical
> changes. We decided to explicitly distrust both of these certs (the original
> one, and the one that replaced it).

Thanks for relaying these answers.

Following up on the 2 certs vs 1 question:  

Were both these certs issued under the CPS available at:
https://ssl.trustwave.com/CA/micros/micros_cps.pdf

And why were the two certificates revoked on different dates?

Serial Number: 6B49D205
        Revocation Date: Feb 15 21:51:25 2012 GMT

Serial Number: 6B49D206
        Revocation Date: Feb 10 21:08:40 2012 GMT
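(Entries in this form can be pulled from the published CRL with OpenSSL; a sketch, assuming the file served from the CRL URL posted earlier in this bug is DER-encoded:

$ wget http://crl.trustwave.com/ORGCA_L2.crl
$ openssl crl -inform DER -in ORGCA_L2.crl -noout -text | grep -A 1 'Serial Number: 6B49D20'

)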
Micros Certification Practices Statement
Version 2.1.1
Effective Date July 14, 2010

Downloaded from https://ssl.trustwave.com/CA/micros/micros_cps.pdf
sha1sum: 12244dcdb08dd91415e5b6618c5bdf80bc0a9446
sha256sum: 1ceac5ff1dafec0393b0bdf0b969caa6207e0aa103e157fede538353a1b83258

In "Appendix A", at pp. 46-47 (pp. 57-58 in PDF), contains text of an "1800000005 (0x6b49d205)" certificate issued by "Trustwave 
Organization Issuing CA, Level 2"

Certificate: 
    Data: 
        Version: 3 (0x2) 
        Serial Number: 1800000005 (0x6b49d205) 
        Signature Algorithm: sha1WithRSAEncryption 
        Issuer: C=US, ST=Illinois, L=Chicago, O=Trustwave Holdings, Inc., CN=Trustwave 
Organization Issuing CA, Level 2/emailAddress=ca@trustwave.com 
        Validity 
            Not Before: Apr  1 18:23:53 2010 GMT 
            Not After : Mar 29 18:23:53 2020 GMT 
        Subject: C=US, ST=Maryland, L=Columbia, O=Micros Systems, Inc., CN=Micros CA 
        Subject Public Key Info: 
            Public Key Algorithm: rsaEncryption 
            RSA Public Key: (2048 bit) 
                Modulus (2048 bit): 
                    00:e2:1c:4c:5e:25:a3:53:4b:64:63:f5:ec:c3:11: 
                    2e:df:cd:d1:e5:31:a5:16:08:67:04:43:0e:3a:33: 
                    f0:5c:fd:9a:8c:a6:a4:a6:5d:3c:08:29:6c:0e:3d: 
                    b7:5a:95:35:a1:62:4a:83:19:56:1d:49:0e:2e:e5: 
                    4a:d5:57:83:c5:b7:ed:1a:b5:66:73:c5:24:4c:c1: 
                    99:bf:2b:89:de:0c:1b:d3:d8:58:8e:9d:28:70:22: 
                    84:ed:e1:29:3e:97:7b:ff:78:22:3b:90:d1:7a:11: 
                    f1:b1:ae:c1:0d:6a:f4:f9:bd:2a:a7:1a:d5:ca:d5: 
                    55:59:9c:cb:cc:5b:c7:b1:c9:2f:cf:6f:6c:19:6a: 
                    af:8f:9d:52:18:f9:6b:05:8a:4a:b9:b9:e7:d0:a9: 
                    d2:44:b2:bc:fa:ed:55:20:00:d0:78:0d:b5:34:27: 
                    1e:e7:9e:f9:f3:9f:b2:b3:87:92:ef:0b:8c:7f:d9: 
                    1e:65:ef:d1:d4:a7:8f:a2:7f:c4:12:80:f2:af:72: 
                    41:4e:df:f2:8c:cb:f1:ec:7a:3a:f5:86:68:8f:de: 
                    33:db:a4:2d:dc:2f:26:b8:78:ca:48:d6:f7:29:a2: 
                    7b:7f:df:9c:84:40:9c:f1:ed:ae:52:a1:6c:1c:46: 
                    b6:a8:5a:1e:77:62:db:f1:bd:05:46:da:b7:c9:d0: 
                    d2:df 
                Exponent: 65537 (0x10001) 
        X509v3 extensions: 
            X509v3 Basic Constraints: critical 
                CA:TRUE, pathlen:1 
            X509v3 Subject Key Identifier:  
                C6:6B:91:57:43:4E:5B:93:15:CA:C2:14:40:9E:2C:43:7E:EF:18:18 
            X509v3 Authority Key Identifier:  
                keyid:92:08:64:B1:BB:9F:A4:91:5B:5E:AF:53:ED:E2:92:F3:DB:66:AD:31 
 
            X509v3 Key Usage:  
                Certificate Sign, CRL Sign 
[page break]

 X509v3 Certificate Policies:  
                Policy: 1.3.6.1.4.1.35539.3.3.3.3.3 
                  CPS: http://ssl.trustwave.com/CA/micros 
 
    Signature Algorithm: sha1WithRSAEncryption 
        b2:95:fc:57:27:08:25:b5:a8:a4:49:9f:0a:68:e9:0f:31:18: 
        68:c7:2b:44:e4:31:d9:a5:f2:00:bc:0b:6f:55:2d:32:d2:1f: 
        14:b4:3c:cf:92:85:f3:2c:39:c4:55:e6:aa:6b:87:8d:5a:8c: 
        17:3a:99:a3:24:4f:17:49:85:17:12:ad:e4:7e:f7:d1:3d:78: 
        c3:b9:4e:a7:6f:fe:29:97:ee:52:ad:8c:6d:fc:64:fa:c9:7e: 
        f1:ba:80:02:15:af:b8:c7:6d:87:a0:3a:09:23:ae:a1:f4:b5: 
        82:5e:5f:1c:58:b4:2d:49:c1:ab:04:cc:cf:64:b5:06:f0:78: 
        92:9e:03:85:f3:e0:f5:a5:92:4d:7f:c5:0f:c0:c5:99:47:ab: 
        67:4e:83:da:8e:d5:f0:82:84:e4:01:c5:96:28:c5:78:e5:b3: 
        ba:0c:4b:11:f2:89:3e:d6:a6:1a:74:8a:8c:36:27:b0:44:1f: 
        ad:cc:b6:87:0b:59:7e:41:bd:b1:07:88:0a:c9:17:01:e6:5d: 
        b7:01:0b:d5:51:53:21:25:1c:19:10:3a:89:d9:fd:f0:4d:30: 
        82:20:81:50:60:36:0f:9c:4f:67:f5:c7:aa:21:86:50:22:6a: 
        31:87:67:3c:a7:3c:93:6c:80:6f:8a:d6:bd:fb:86:29:ee:87: 
        01:5e:8c:9b
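(A dump in this format is what OpenSSL itself produces; assuming the certificate from the CPS appendix were saved to a PEM file, say micros_ca.pem (hypothetical filename), the text above could be regenerated and compared with:

$ openssl x509 -in micros_ca.pem -noout -text
)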
(In reply to Kai Engert (:kaie) from comment #7)

> It's good that we now have a public example of such an
> deployment, it will
> help to raise awareness that we urgently need improvements for
> today's SSL
> trust model.

In a lot of cases the organisations that deploy such setups do not even want to break SSL connections and look at what's inside. That's computationally heavy, and it presents all kinds of unpleasant legal risks.

They just need their gateway control point to send messages to the user when he browses SSL web sites (for example: this URL is forbidden by company policy; your proxy auth has expired, please log in again here; etc.).

However, since Firefox and other browsers decided a few years ago to block the HTTPS redirects that were previously used for this (even though many people commented that this would break existing proxy setups, and no alternatives were proposed by browser vendors), gateways now need to perform MITM and spoof web site certificates just to be able to talk to the user's browser (since anything that does not arrive over the connection's certificate is ignored by the web client nowadays). That decision was a godsend to MITM DPI vendors.

So you reap what you sow. If you don't want whatever solution you propose to be worked around again by enterprises, you really need to provide a way for internal-to-Internet network gateways to communicate with the web client without hijacking user HTTP(S) sessions.

The needs are very basic; in most organisations the gateway only needs to:
1. provide a way for the user to authenticate safely (over SSL), and to re-authenticate when the session expires while browsing
2. tell the browser that such and such URL is blocked by org policy for the user's level of access (when the browser attempts to access it)

Most orgs would accept that they can't malware-check user HTTPS connections if they had that simple level of control over what internal network users can access, without breaking SSL sessions.
> However since firefox and other browsers decided to block the https
> redirects that were used previously for this a few years ago (even though
> many people commented that it was going to break existing proxy setups and
> no alternatives were proposed by browser editors), they now need to perform
> MITM and spoof web site certificates just to be able to talk with the user
> browser (since anything that does not transits from the connection
> certificate is ignored by the web client nowadays). That decision was a
> godsend to MITM DPI vendors.

This isn't a valid reason: a Big Company has no problem providing its clients with an additional certificate. There is no reason Firefox should trust any of these certificates by default!
> Big Company have no problem to provide their clients with a
> additional certificate

ROTFL

Any Big Company that decides on this kind of approach will just ban Firefox and provision IE only (and at that point → back to the AD-only solution). But since that's a lot of work and won't please iPad-happy managers, I rather doubt it will become the preferred approach.

If you want interception to die, provide Big Companies with a solution that does not involve deploying files that are not part of the standard installation on hundreds of thousands of computers. Interception is so much cheaper and more effective right now that it's no wonder every gateway vendor is proposing it today.
I trust Mozilla to do the right thing regarding its users' security, and in this case, that means completely removing Trustwave's roots from the trusted store.

A CA that sells certificates for MITM is definitely not trustworthy.
Component: Security → CA Certificates
Product: Core → mozilla.org
Version: Trunk → other
After much discussion in security-group and mozilla.dev.security.policy it was decided that only the offending subordinate CA certificate would be distrusted. The Trustwave root certificates were not removed.

NSS actively distrusts the offending subordinate CA certificate as per bug #728617.
https://bugzilla.mozilla.org/show_bug.cgi?id=728617#c4

CA Communications regarding MITM certs, and the corresponding CA responses are here:
https://wiki.mozilla.org/CA:Communications

Please use the mozilla.dev.security.policy forum if you need to discuss this issue further.
Status: ASSIGNED → RESOLVED
Closed: 11 years ago
Resolution: --- → WONTFIX
It's a real shame...

While it's hardly news that Mozilla doesn't care about real security (as can be seen in countless examples, like still propagating the inherently broken X.509 instead of allowing better alternatives such as OpenPGP-based TLS, or still not really marking servers that use vulnerable TLS renegotiation as broken, etc.), the way Mozilla deals with its CA repository is really outrageous.

Not only do you willingly accept any devil, where even people with the least common sense would expect that the CA is under the control of some evil government where human rights count for nothing and which is notoriously known for cyber war/attacks (CNNIC)...
no, you also keep (some) certs from those organisations/companies who have proven beyond doubt that they're absolutely not capable of managing a CA, and moreover not trustworthy enough to at least tell the public when they got compromised.

Trustwave is for sure not the only case... the same game happened with Turktrust.


I really can't believe that Mozilla is so naive as to believe that such organisations have only just noticed these incidents... or that they handle their different root CAs so completely differently that while one got compromised... all others can be considered secure.

Sorry guys... I can't believe that you're that stupid... and given that the harm of removing unimportant roots like Turktrust, Trustwave or CNNIC goes to zero (unlike if we had to do this with, e.g., VeriSign)... I really must come to the conclusion that you do this for malicious reasons.
It's the only possible reason, IMHO... otherwise the security of users would come first and such questionable CAs would be thrown out without lots of discussion.


A shame.
Oh, and to any Mozilla representative... please don't even try to hide behind some "policy rules" which allegedly would have prevented you from doing the plainly right thing (as you did, e.g., with the request to drop CNNIC)...
You're the ones making those rules... and those poor excuses make me just sick.
(In reply to Christoph Anton Mitterer from comment #95)
> Oh and towards any Mozilla representative... please don't even try to hide
> against some "policy rules" which allegedly would have prevented you to the
> plain right thing (as you did e.g. with the request to drop CNNIC)...
> You're making those rules... and those poor excuses make me just sick.
Trustwave violated at least two inclusion rules. First was issuing certificates for domains not under the requester/operator's control; second was the material adverse change in policy (MAC) that Trustwave performed to shed legal liability. There may be more, but I don't follow these things too closely.

(In reply to Christoph Anton Mitterer from comment #94)
> I really can't believe that Mozilla is so naive to believe that such
> organisations have then just noticed these incidents 
Mozilla was not being naive - they were being accommodating. I suspect that long before Trustwave went public on its blog, and long before this bug report, Trustwave contacted Mozilla (and other major browsers) to privately broker a backroom deal covered under NDA. This bug report and the accompanying debate were simply a dog and pony show. The outcome was predetermined. (BTW, I personally filed the bug report for Windows platforms. Available upon request.)

It was necessary for Trustwave to act - and act quickly - for two reasons. First, Trustwave needed to protect its multi-million dollar business; and second, Trustwave needed all talks with browser manufacturers held under NDA so no one would know about the pre-determined outcome.

While the details of the backroom meetings are surely covered under layers of NDA, I'm curious if anyone from Mozilla (Kathleen?) would be willing to state when the foundation became aware of the problem (which would surely coincide with the time the foundation was contacted by Trustwave).

In the end, this is another example of the safety net failing. There was no transparency on Mozilla's part. Mozilla has proven itself to be no more trustworthy than the folks they are trying to protect us against.
(In reply to Christoph Anton Mitterer from comment #94)
> It's a real shame...
> [...]
> A shame.

I totally agree.

A real shame.
I'll respond to a couple questions in this comment, and then I will not respond in this bug again. All further discussion should be in the mozilla.dev.security.policy forum.

1) To my knowledge, Mozilla has not signed an NDA with a CA. 

2) The first I heard about this particular incident was on February 2, 2012. Trustwave contacted me soon after the first post about the incident appeared in the mozilla.dev.security.policy forum.
https://groups.google.com/forum/?fromgroups=#!topic/mozilla.dev.security.policy/ehwhvERfjLk/overview

Kathleen
(In reply to Christoph Anton Mitterer from comment #94)
> It's a real shame...
> 
> While it's not much new that Mozilla doesn't care about real security (which
> can be seen on countless examples like that you still try to propagate the
> inherently broken X.509 instead of allowing better alternatives like OpenPGP
> based TLS... or still not really marking servers that use vulnerable TLS
> renegotiation as broken, etc. pp.) the way how Mozilla deals with it's CA
> repo is really outrageous.
> 
> Not only do you willingly accept any devil where even people with the least
> common sense will expect that the CA is under the control of some evil
> government where human rights count nothing and which is notoriously known
> for cyber war/attacks (CNNIC)...
> no you also keep (some) certs from those organisations/companies how have
> proven more beyond doubt that they're absolutely not capable of managing a
> CA and moreover not trustworthy to at least tell the public when they got
> compromised.
> 
> Trustwave is for sure not the only reason... the same game happened with
> Turktrust.
> 
> 
> I really can't believe that Mozilla is so naive to believe that such
> organisations have then just noticed these incidents ... or that they
> handled their different root CAs completely different and while one got
> compromised ... all other can be considered secure.
> 
> Sorry guys... can't believe that you're that stupid... and given that the
> harm of removing unimportant roots like Turktrust, Trustwave or CNNIC goes
> to zero (unlike when we'd had to to this with e.g. Versigin)... I really
> must come to the conclusion that you do this for malicious reason.
> It's the only possible reason, IMHO,... otherwise security of users would
> come first and such questionable CAs would be thrown out without lots of
> discussion.
> 
> 
> A shame.


+1

(Sorry for the spam, but this has to be said. This outcome, though not surprising, is really shameful. Mozilla really should have set an example here. This bug is far from resolved in my opinion.)
(In reply to Kathleen Wilson from comment #98)

Hi Kathleen,
 
> 1) To my knowledge, Mozilla has not signed an NDA with a CA. 

Mozilla however is a full member of CABForum, which for many years had an agreement for non-disclosure.

With this agreement in place, it prepared documents in secret which affected the rights of users, and then rammed them through Mozilla's "open" forum for public comment. While the secrecy may have changed somewhat in the last year, after years of complaints, it is notable that the change happened *only* after the core documents were finished and agreed by Mozilla, other CAs, etc., all in secret, and all to the detriment of users.

The issue is not in the detail or procedure, but the intent, the breach in spirit of the policy, and the failure of Mozilla's manifesto.  I and others have discussed the confidentiality of discussions between Mozilla and CAs for years on the policy list.  We have complained!  The admission that this has been going on since forever (?) has eaten at the policy like a cancer.  In essence, there is no dealing to be done with Mozilla in good faith on this issue, because we the open community cannot see, cannot test, cannot dispute, cannot know what is going on behind the scenes.  We punch blind.

Other than the original admission that Mozilla maintains confidential discussions with CAs on a routine and regular basis, Mozilla has been silent on this issue.

>  All further discussion should be in the mozilla.dev.security.policy forum.

But that only applies to non-CAs!  CAs have the option of confidential discussions in any forum of their choice.

What is the point of any further discussion?
RESOLVED WONTFIX? Really? This is hardly acceptable, for all the reasons mentioned already. What's the point in having a stringent inclusion policy if the response to _violating_ that policy (intentionally or not) is just "well, errors happen, you'd better try not to repeat them"?

Of course, the obvious workaround here is:
 1) Delete Trustwave from certstore manually
(In reply to Christian Kujau from comment #101)
> Of course, the obvious workaround here is:
>  1) Delete Trustwave from certstore manually
Be careful with this. Some browsers infer that your certificate store is damaged and automatically repair it.

Currently, Firefox's behavior is to honor the delete (cf. http://www.mozilla.org/projects/security/pki/psm/help_21/certs_help.html). However, Opera will repair the store (cf. http://my.opera.com/community/forums/topic.dml?id=1580452).
There are going to be no further official responses here so spamming this bug isn't going to change or do anything. Take it to mozilla.dev.security.policy.
@Al, Kathleen...

Sorry... I have absolutely no interest in "discussing" this any further, anywhere.

Why? Well, there is absolutely no point in having anything "discussed" or in finding a "compromise"! This is about security, not about politics.

The simple question is: can one afford to put one's users at risk by giving ultimate trust to an organisation which is notoriously known to be (or has even proven itself) untrustworthy?
The simple answer that anyone with even a little common sense would reach is: no, throw them out ASAP.
Mozilla however, apparently can afford this.

So what good would it do to discuss anything further, if the decision makers are apparently already completely misguided?


Further, the we-don't-discuss-here-but-only-there attitude seems a bit childish.

My 2ct.
(In reply to Christoph Anton Mitterer from comment #104)
> Further, the we-don't-discuss-here-but-only-there seems to be a bit childish.

No, you just don't understand the purpose of this system. This is a bug tracker where implementation details are discussed, not the issue itself. The groups/lists are the place to discuss go/no-go decisions. Please submit everything you want to say there and not in this bug. You won't change any decision by posting anything in this bug. And please be polite and precise, and write a pro/con or risk/impact list or some other reasonable explanation of why you want to revisit the decision (a great title for a new topic: "Revisit Trustwave Root Cert Decision"). Optimally, you can list any arguments for/against removal of the certificate stated here and in the list, with a comment saying why each is relevant or not, or how its impact can be mitigated or bypassed.

I advise anyone trying to convince Mozilla to start an Etherpad draft outlining the "perfect posting" and then submit it to the list. That way, it should be obvious to Mozilla what to do.

Just don't be a lazy, angry troll, please.
Product: mozilla.org → NSS
Product: NSS → CA Program