Closed Bug 642395 Opened 13 years ago Closed 13 years ago

Deal with bogus certs issued by Comodo partner

Categories

(Core :: Security, defect)

x86
All
defect
Not set
normal

Tracking


VERIFIED FIXED
Tracking Status
firefox5 --- unaffected
blocking2.0 --- final+
blocking1.9.2 --- .16+
status1.9.2 --- .16-fixed
blocking1.9.1 --- .18+
status1.9.1 --- .18-fixed
fennec 2.0+ ---

People

(Reporter: dveditz, Assigned: KaiE)

References

Details

(Whiteboard: [sg:high])

Attachments

(10 files, 3 obsolete files)

14.04 KB, application/zip
Details
9.57 KB, application/zip
Details
7.09 KB, text/plain
Details
3.91 KB, patch
rrelyea
: review+
briansmith
: feedback+
bzbarsky
: feedback+
beltzner
: approval2.0+
Details | Diff | Splinter Review
1.25 KB, patch
KaiE
: review+
beltzner
: superreview+
beltzner
: approval2.0+
Details | Diff | Splinter Review
2.78 KB, application/zip
Details
1.84 KB, patch
briansmith
: review-
Details | Diff | Splinter Review
5.36 KB, patch
Details | Diff | Splinter Review
6.86 KB, text/plain
Details
12.54 KB, text/plain
Details
According to mail from Comodo to the security alias "A long-term trusted partner of Comodo which has an RA function suffered an internal security breach and the attacker caused 7 certificates to be issued." The bogus certs were revoked within hours.

Unfortunately there are scenarios where the MITM can prevent the victim from contacting the OCSP responder, and the certs will continue to be trusted. Given the high profile of the sites involved (including our own addons.mozilla.org), we should explicitly disable these certs if we can.

The CNs involved are
	addons.mozilla.org
	login.live.com
	mail.google.com
	www.google.com
	login.yahoo.com
	login.skype.com
	global trustee

Can they be added as untrusted to the built-in certificate store, or does that only work for signing certificates? Can we ship a CRL containing these certs?
blocking1.9.1: --- → ?
blocking1.9.2: --- → ?
blocking2.0: --- → ?
Whiteboard: [sg:high]
Attachment #519860 - Attachment description: Bad certs (PEM) zipped → Bad certs (text dumps) zipped
Revocation is in this CRL:
http://crl.comodoca.com/UTN-USERFirst-Hardware.crl

Here is the relevant section of the CRL:
openssl crl -text -inform DER -in UTN-USERFirst-Hardware.crl

    Serial Number: 047ECBE9FCA55F7BD09EAE36E10CAE1E (mail.google.com)
        Revocation Date: Mar 15 19:04:03 2011 GMT
    Serial Number: F5C86AF36162F13A64F54F6DC9587C06 (www.google.com)
        Revocation Date: Mar 15 19:04:24 2011 GMT
    Serial Number: E9028B9578E415DC1A710A2B88154447 (login.skype.com)
        Revocation Date: Mar 15 19:05:26 2011 GMT
    Serial Number: 9239D5348F40D1695A745470E1F23F43 (addons.mozilla.org)
        Revocation Date: Mar 15 20:15:20 2011 GMT
    Serial Number: D7558FDAF5F1105BB213282B707729A3 (login.yahoo.com)
        Revocation Date: Mar 15 20:15:26 2011 GMT
    Serial Number: 392A434F0E07DF1F8AA305DE34E0C229
        Revocation Date: Mar 15 20:15:38 2011 GMT
    Serial Number: 3E75CED46B693021218830AE86A82A71
        Revocation Date: Mar 15 20:15:47 2011 GMT
    Serial Number: B0B7133ED096F9B56FAE91C874BD3AC0 (login.live.com)
        Revocation Date: Mar 15 20:16:03 2011 GMT
    Serial Number: D8F35F4EB7872B2DAB0692E315382FB0 (global trustee)
        Revocation Date: Mar 15 20:19:04 2011 GMT

This OCSP responder, which is embedded in the certs, should also return "revoked":
http://ocsp.comodoca.com

Gerv
The idea of explicitly adding the bad certs to the roots module is a good one, but I don't know how to make this work. I don't see a way to mark a cert as "explicitly not trusted". All I know of is "no explicit trust", but this is not sufficient, as the cert will still be trusted if it can be chained to a trusted root.

The idea to use a CRL is problematic, because:
- I don't know an easy way to pre-ship a CRL with binary NSS, because NSS reads CRLs from the database
- the approach might require that the application ships the CRL and imports it at runtime. I think this approach is more work than the following one.

I'd like to propose the following approach (which, in a discussion with Gerv and Dan, we're currently favoring):

- block the certs at the application level
  (PSM = all mozilla apps = all SSL sockets)

- embed the certs into PSM

- extend PSM's existing code for SSL_AuthCertificateHook
  http://www.mozilla.org/projects/security/pki/nss/ref/ssl/sslfnc.html#1088805

- compare the server cert with our embedded blacklist

- in order to embed the blacklist, use
  "issuer name" and "serial number" as the primary key to 
  decide about rejection


Implementation details:

I already have a tool that dumps this ID information for a given cert.
(I'm using this tool to extract the information that we need to grant the EV status to a root).

At runtime, when we arrive at the function, we have available:
- the full binary blob of the cert
- the ascii (utf8?) presentation of the issuer name
- the binary encoding of the serial number

I think it's tricky to store a binary encoding in source code.
It's better to use base64 encoding.

This has a minimal performance disadvantage: we must convert the site's serial number from binary to base64. (But really, this little CPU overhead shouldn't be worrisome; there are many more expensive operations involved in processing certificates than this, anyway.)

In theory, in order to be 100% sure we're checking the correct certificates, we ought to compare the binary encoding of the issuer name.

However, in this particular scenario, we are able to simplify, IMHO. The certificates to be blocked all use an issuer name that consists of pure ASCII characters. I propose we save CPU cycles, and do a standard string comparison for the issuer name.

When we find a match, we'll set error code SEC_ERROR_REVOKED_CERTIFICATE and abort the verification.
The user will see the existing error page "site of cert is revoked".
The user will NOT be able to override.
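The lookup proposed above can be sketched as follows. This is a minimal illustration only, under the assumptions described in this comment; the names (BadCert, IsBlacklisted, StripLeadingZeros) are hypothetical and are not the identifiers used in the actual PSM patch:

```cpp
// Sketch of the issuer-name + serial-number blacklist check proposed above.
// A cheap string comparison on the issuer name runs first; serial numbers
// are compared after stripping leading zero bytes (see later comments on
// DER sign padding). On a match, the real code would set
// SEC_ERROR_REVOKED_CERTIFICATE and abort the handshake.
#include <cstddef>
#include <cstring>

struct BadCert {
    const char *issuer;           // ASCII issuer name
    const unsigned char *serial;  // serial number bytes
    size_t serialLen;
};

// Skip leading zero bytes so encodings with and without DER sign padding
// compare equal.
static void StripLeadingZeros(const unsigned char *&p, size_t &len) {
    while (len > 1 && p[0] == 0x00) { ++p; --len; }
}

static bool IsBlacklisted(const BadCert *list, size_t n,
                          const char *issuer,
                          const unsigned char *serial, size_t serialLen) {
    StripLeadingZeros(serial, serialLen);
    for (size_t i = 0; i < n; ++i) {
        if (strcmp(list[i].issuer, issuer) != 0)
            continue;  // issuer mismatch: skip all remaining work
        const unsigned char *s = list[i].serial;
        size_t sLen = list[i].serialLen;
        StripLeadingZeros(s, sLen);
        if (sLen == serialLen && memcmp(s, serial, sLen) == 0)
            return true;
    }
    return false;
}
```

Note the early `continue` on issuer mismatch: for every CA not connected to this issue, the check costs one string comparison per handshake.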
Attached patch example patch (obsolete) — Splinter Review
Demonstration patch.
This patch blocks the certificate currently being used at https://kuix.de
With this patch applied, I can't connect to the server anymore.

I will soon attach a patch that blocks all 7 problematic certs from Comodo.
I have received a test certificate from Comodo for the host kuix.de

I have installed the test certificate on a SECONDARY port at my server,
https://kuix.de:9449/

Gerv has suggested we could avoid the runtime binary-to-base64 conversion by encoding the binary bytes of the serial numbers directly in the source using hexadecimal escape sequences such as \x00. Thanks to Gerv for that good idea.

I've enhanced the patch in bug 421989, and I now have a tool that prints this information in a form that can be easily copied into C source code.
This text file contains a dump of the identifying information of 8 certs:
- the 7 affected certs 
- the test certificate from the same issuer

This information could be used for review purposes.
Attached patch Proposed patch (my v3) (obsolete) — Splinter Review
Attachment #519872 - Attachment is obsolete: true
Test instructions:

(a)
  Go to a site that uses a cert from the blacklist:
    https://kuix.de:9449/
  Expected:
    ERROR, revoked

(b)
  Go to a site that uses a good cert:
    https://aws.bayerischer-golfverband.de/
  Expected:
    Site should display fine

(c)
  Go to a site that was issued from a different CA:
    https://www.mozilla.org
  Expected:
    Site should display fine


FYI, (a) and (b) use certs that were issued from the same root CA.

If (a) fails and (b) works, this proves that we correctly distinguish between separate serial numbers of the same issuer.

If (c) works, this proves that our blacklist doesn't hurt other CAs.
I would like to note that I'm confident in the encoding in this patch, because I didn't do it manually.

The source code of that static list was automatically produced from the DER-encoded certificates, and the code it produced for the test certificate at https://kuix.de:9449/ correctly caused it to be blocked. I think it's reasonable to conclude the code works correctly for all certs (after code review).
I've tested the patch with 4.0 (2.0), 3.6 (1.9.2) and 3.5 (1.9.1), and it worked in all versions.

The 1.9.1 branch requires a slightly merged patch, but only because a neighbouring function is different.
Gerv noted that my code dumps some certificates with a leading zero, and thereby produces 17-byte serial numbers.

I believe this is correct, I think the binary encoding of those serial numbers is indeed 17 bytes.

When dumping such certs with NSS pp tool, it prints a leading 00:

I imported such a cert, using Firefox and cert manager, to a server tab, and then clicked "view", and it displays a 17 bytes serial number with leading 00:
The reason I wrote comment 11 is that Gerv noticed that in the text dumps we received, all serial numbers were 16 bytes long.

I believe those text dumps were produced using openssl. It appears that openssl strips the initial 00: in the human-readable dump.
> The idea to use a CRL is problematic, because:
> - I don't know an easy way to preship a CRL with binary NSS, because NSS reads
> CRLs from the database

NSS reads CRLs from tokens. adding a CRL to the builtins should work.

> - the approach might require that the applications ships the CRL and imports it

> - block the certs at the application level
>  (PSM = all mozilla apps = all SSL sockets)
> - embed the certs into PSM

do you think this is the last time we are going to have this problem? It seems
to me if we can't block the certs in builtins, we should modify NSS so that we
can.

bob
Bob: should we ask Comodo to produce for us a special CRL with the seven certs in it? Their current CRL is 113k, even compressed.

Gerv
Attached patch Patch v4Splinter Review
Attachment #519897 - Attachment is obsolete: true
On the NSS call, we decided speed was more important than pretty, and that we'd keep this bug open for the 'pretty' fix; the fastest way to solve this is Kai's patch.

NOTE: kai's patch only turns off SSL. Since these certs are only valid for SSL, that should be sufficient for these certs.

bob
Comment on attachment 519939 [details] [diff] [review]
Patch v4

r+ rrelyea
Attachment #519939 - Flags: review+
Blocks: 642503
Adding Rob @ Comodo to CC list.
Assignee: nobody → kaie
Comment on attachment 519939 [details] [diff] [review]
Patch v4

I want to chemspill this
Attachment #519939 - Flags: approval2.0?
Attachment #519939 - Flags: approval1.9.2.16?
Attachment #519939 - Flags: approval1.9.1.18?
Robin Alden at Comodo (now CCed) tells me:

- Microsoft are aware of the problem.

- Opera are looking at options; they may add the roots to their "untrusted" store. Or, because they always check revocation status, they may rely on OCSP.

- Chrome are writing a patch; there is a stable version due to ship next week, and they could add the patch to that, but they may do an even earlier release for this problem.

Comodo are also discussing the matter with law enforcement. 

Gerv
Can I get a summary of the risk inherent in this code, please, and what QA would need to be revalidated if we took it in an RC2?
Mike,

this code path is executed each time we do a handshake with any SSL server.

However, the test starts with a simple string comparison.
If the issuer name is not matching, then all remaining code is skipped.

This alone eliminates the risk for all CAs that are not connected to this issue.


If the issuer name matches, we compare the serial number of the server cert against our blacklist.

The rest of the code skips initial zeros, loops through our blacklist, and does a final memory comparison.

This isn't risky code: no memory allocation etc., just simple iteration, looping and comparison, and code review shows we deal carefully with boundaries. I don't see risks of side effects.


I made it easy for testing.
We received a test certificate, that I installed on a server, and that we will keep around for testing the QA candidate, and longer.

The test instructions are described in comment 8.


In my opinion, the only work that needs to be repeated, if you want to, is to test that you can still connect to a variety of https/SSL sites.

If you still can connect, and are not being blocked, all is fine.
tracking-fennec: --- → ?
OS: Mac OS X → All
Attachment #519939 - Flags: feedback?(bsmith)
Attachment #519939 - Flags: feedback?(benjamin)
Attachment #519939 - Flags: feedback?(bzbarsky)
(In reply to comment #22)
> I made it easy for testing.
> We received a test certificate, that I installed on a server, and that we will
> keep around for testing the QA candidate, and longer.

That is to say: we asked Comodo to issue us another certificate from the same root the bogus ones were issued from, but for a domain Kai controls. We then added that cert's serial number to the blacklist (which is why it's 8 certificates long rather than 7). This means we don't need to set up a server using one of the compromised certs (which is impossible, as we don't have the private keys) in order to test the blocking. Instead, we can use https://kuix.de:9449/ , which is using the cert Kai obtained and we have now blacklisted.

Gerv
Attachment #519939 - Flags: feedback?(benjamin)
Comment on attachment 519939 [details] [diff] [review]
Patch v4

FWIW, the patch looks good to me.
Attachment #519939 - Flags: feedback+
Filed bug 642503 on moving a more generic version of this blacklist functionality into NSS proper for a future version.
blocking2.0: ? → final+
tracking-fennec: ? → 2.0+
I'm not familiar with this area, and don't understand the leading zeroes thing (or why the leading zeros are kept in the literal table only to be stripped out with an extra loop). But it looks okay, and low risk.
the tests from comment 8 all work fine in Fennec with this patch
(In reply to comment #27)
> I'm not familiar with this area, and don't understand the leading zeroes thing
> (or why the leading zeros are kept in the literal table only to be stripped out
> with an extra loop). But it looks okay, and low risk.

Sorry, the code in the static list was automatically created using a tool. I wanted to avoid the risk of manually tweaking it, because manual tweaking has the risk of human error. Also, if I tweaked it now, I might forget about that later, if there's ever a need to recreate or extend the list.

This filtering was a late addition, suggested during code review, to make sure we always make a correct match, regardless of leading zeros.
we are going to wait for bsmith's review
tracking-fennec: 2.0+ → ?
tracking-fennec: ? → ---
The tool I'm refering to is NSS "pp" tool, plus the patch attached to bug 421989.
It produced attachment 519892 [details] (which contains the leading zeros).
Comment on attachment 519939 [details] [diff] [review]
Patch v4

This seems reasonable to me, provided the leading 0s can be ignored (that is, that there are never serial numbers that are the same except one has more leading zeros than the other).

Is that the case?
Attachment #519939 - Flags: feedback?(bzbarsky) → feedback+
tracking-fennec: --- → 2.0+
I'm also not quite sure why the auto-generated serial numbers have leading zeros for 6 of the certs, but not the other two...
Bob would be able to explain the issue with the leading zeros better, I'll try to repeat what he said.

It depends on whether a signed or an unsigned variable is used when decoding the serial number. The serial number is DER encoded. The raw encoding sometimes takes up 17 bytes, when a 16-byte serial number has its highest bit set.

(Don't ask me why that is, it appears to be an implementation detail/behaviour of the encoders/decoders used.)

We see a leading zero for all serial numbers whose first hex digit is 8 or higher, and no leading zero for 7 or lower.

While discussing/reviewing the patch, Bob suggested that stripping the initial zeros will always make the comparison work.

Since Bob is the expert on the properties of certificates and serial numbers, I conclude he is sure that stripping leading zeros is fine. The answer to comment 32 is yes: serial numbers are really numbers, not byte arrays.
Another supporting point: when the "openssl x509" tool dumps one of these certificates and prints the serial number in human-readable form, it strips leading zeros, too.
The main concern now is with the logic in nsNSSBadCertHandler, namely the hitting of the following assertion:

NS_NOTREACHED("why did NSS call our bad cert handler if all looks good? Let's cancel the connection");

and the fact that nsHandleSSLError and cancel_and_failure() are not called before returning SECFailure. Kai and I are discussing it on IRC.
I believe comment 36 isn't a new scenario.

I guess when we hit a revoked cert in the wild, we'll probably get exactly the same code path as now.

The bad-cert handler is designed to ignore any errors that it doesn't deal with explicitly - including the "revoked" scenario. Obviously that failure reason was "forgotten" when the bad cert handler was written.

I don't see a strong reason to fix this now, but it's a very good opportunity to do so. The only thing needed in addition is ignoring the error and a little bit of cleanup.

So, I'm proposing this tiny add-on patch. The error code that we set in the other code is available in this bad-cert handler if it is fetched as the first operation in the function. This is documented at http://www.mozilla.org/projects/security/pki/nss/ref/ssl/sslfnc.html#1088928

The patch fetches this status, and only if we see the "revoked" status do we give special treatment: ignore and clean up. The cleanup code "cancel_and_failure" is the same as in all other exit paths (except in the not-reached-assertion path that Brian discovered).

Thanks a lot for discovering this.
This patch builds. I'm testing it in a moment.
No, this incremental patch does not work. We no longer get the error page.
OK, the patch is not necessary. With this patch we would prevent the "default catch-all" that displays the error page.

The only thing that is necessary, and should be done in a follow-up patch: Remove the "not-reached" assertion when a cert is revoked.
Attachment #520017 - Attachment is obsolete: true
> Bob would be able to explain the issue with the leading zeros better,
> I'll try to repeat what he said.

Serial numbers are unsigned integers. DER INTEGERs are all signed, so when you encode an unsigned integer with DER you need to make sure the leading bit is zero. This CA issues serial numbers which are 16 bytes long. If the lead byte's top bit is 0 (0x00-0x7F) then the serial number stays 16 bytes long; if the top bit is 1 (0x80-0xFF) then a zero byte is added as padding.

Different uses of these values tend to decode the resulting bytes differently, so it's usually best to canonicalize the bytes before comparing them. The easiest way of doing that is to strip the leading zero bytes.

bob
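Bob's explanation can be illustrated with a tiny sketch of the sign-padding rule (a simplified illustration for positive integers only, not a real DER encoder; DerIntContent is a hypothetical name):

```cpp
// Simplified illustration of DER INTEGER content encoding for unsigned
// values: DER INTEGERs are signed, so a 0x00 pad byte is prepended when
// the most significant bit of the value is set. Not a real DER encoder.
#include <cstddef>
#include <cstring>

// Writes the DER INTEGER *content* octets for an unsigned big-endian
// value into out (which must have room for len + 1 bytes); returns the
// content length.
static size_t DerIntContent(const unsigned char *val, size_t len,
                            unsigned char *out) {
    size_t i = 0;
    while (i + 1 < len && val[i] == 0x00)  // drop redundant leading zeros
        ++i;
    size_t n = 0;
    if (val[i] & 0x80)                     // top bit set: add sign padding
        out[n++] = 0x00;
    memcpy(out + n, val + i, len - i);
    return n + (len - i);
}
```

With a 16-byte serial whose first byte is 0xB0 this yields 17 content bytes (leading 0x00), while a serial starting with 0x04 stays at 16 bytes - matching the pattern seen in the dumps discussed above.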
Comment on attachment 519939 [details] [diff] [review]
Patch v4

The patch looks sufficient to me. Like Kai said, the unusual code path with the assertion should also be tripped by a normally-revoked cert. I will file a separate bug to deal with that.
Attachment #519939 - Flags: feedback?(bsmith) → feedback+
Rob: can we get 'er landed? a=beltzner for branches.
tracking-fennec: 2.0+ → ---
Attachment #519939 - Flags: approval2.0?
Attachment #519939 - Flags: approval2.0+
Attachment #519939 - Flags: approval1.9.2.16?
Attachment #519939 - Flags: approval1.9.2.16+
Attachment #519939 - Flags: approval1.9.1.18?
Attachment #519939 - Flags: approval1.9.1.18+
Comodo say:

Gerv,
	We have analysed our OCSP responder logs and we see no evidence of 
any of these certificates having seen traffic in the wild other than 
2 hits from the attacker himself (from the already mentioned 
212.95.136.18 and the almost adjacent 212.95.136.20) and also hits 
from Comodo staff testing and from Microsoft, Google, and (we think) 
Mozilla staff testing.

The earliest OCSP hit we see is at 15/Mar/2011:21:39:43 UTC, which 
is after the latest revocation - which was at 20:19 UTC.

Regards
Robin
mozilla-central:
http://hg.mozilla.org/mozilla-central/rev/f6215eef2276

mozilla-2.1:
http://hg.mozilla.org/releases/mozilla-2.1/rev/76bd4daa2d95

mozilla-2.0:
http://hg.mozilla.org/releases/mozilla-2.0/rev/3d4c3670c0bd

Waiting for builds before landing on relbranches for mozilla-2.0 and mozilla-2.1, currently testing 1.9.2 and 1.9.1.

Three out of seven builds done.
Status: NEW → RESOLVED
Closed: 13 years ago
Resolution: --- → FIXED
tracking-fennec: --- → 2.0+
Robert, thanks for explaining the leading 0 thing!
(In reply to comment #44)
> Comodo say:
>     We have analysed our OCSP responder logs and we see no evidence of 
> any of these certificates having seen traffic in the wild other than 
> 2 hits from the attacker himself (from the already mentioned 
> 212.95.136.18 and the almost adjacent 212.95.136.20)

Does Comodo keep baseline traffic stats such that you would detect a drop in normal OCSP pings coming from the region of that IP (Tehran), or other regions in case that address is misdirection? Lack of normal OCSP traffic might be evidence of an attempt to use those certs by blackholing the OCSP responder.

But as long as you're otherwise seeing normal OCSP traffic the lack of pings for those specific certs is comforting.
I've verified the bug on Linux with the steps in comment #8 and the examples in comment #0, with the exception of "global trustee". In RC1 I can load the page https://kuix.de:9449/ but in RC2 I get a page with this error:

Secure Connection Failed
An error occurred during a connection to kuix.de:9449.
Peer's Certificate has been revoked.
(Error code: sec_error_revoked_certificate)

I've also tested around functionality in AMO, like installing and updating extensions, as well as functionality in the sites mentioned in comment #0. I've seen no problems with those.
UPDATE: The attacker also caused the bad certificate for login.yahoo.com to be reissued twice.  The attached patch will blacklist these 2 additional certs.

These 2 certs were revoked at the same time the other 7 certs were revoked, but somehow we omitted to communicate the details to you.  Sincere apologies.
Attachment #520160 - Flags: superreview?
Attachment #520160 - Flags: review?
Attachment #520160 - Flags: feedback?
Attachment #520160 - Flags: approval2.0?
Attachment #520160 - Flags: approval1.9.2.16?
Attachment #520160 - Flags: approval1.9.1.18?
Rob: Can you totally utterly definitely confirm there are no more?

I note that these two certs are the two revocations in the middle of the set from the CRL, as quoted in comment 2. As that CRL is used regularly (multiple times per day), I asked Robin Alden of Comodo on the phone specifically about those two revocations - whether it was coincidence (i.e. they were unrelated), or whether there were more mis-issued certificates. He told me it was coincidence. :-|

Gerv
> He told me it was coincidence. :-|
I was wrong, and apologize.  
I'll let Rob give his confirmation separately.

Regards
Robin
Comment on attachment 520160 [details] [diff] [review]
Additional Patch - 2 more bad certs

I confirm this patch is correct.
My tool produced exactly the same code when dumping the additional new attached certs.
Attachment #520160 - Flags: review? → review+
> Rob: Can you totally utterly definitely confirm there are no more?

Gerv,
I confirm that the Comodo Partner Account that suffered the security breach has (totally, utterly and definitely) *not* mis-issued any further certificates, and we are not aware that any other Accounts have been breached.

The "2 more bad certs" in comment 50 need to be blacklisted.  After that, there are no more.
Comment on attachment 520160 [details] [diff] [review]
Additional Patch - 2 more bad certs

First and foremost: ugh.

Second, we will take this patch on:

 mozilla-central
 releases/mozilla-2.0 (default and GECKO20_2011031715_RELBRANCH) 
 releases/mozilla-2.1 (default and GECKO21_20110317_RELBRANCH)

After we get the confirmed changeset landings, we will:

 respin mozilla-central nightlies
 make a Firefox 4 RC2 build3 for Windows, OSX and Linux
 make a Firefox 4 RC1 build2 for Android
Attachment #520160 - Flags: superreview?
Attachment #520160 - Flags: superreview+
Attachment #520160 - Flags: feedback?
Attachment #520160 - Flags: approval2.0?
Attachment #520160 - Flags: approval2.0+
That's actually Firefox 4 RC1 build3 for Android
And actually Android and Maemo, but who's counting?
Changeset numbers:

mozilla-2.1 default:                      63440:f4cceb1f9b59
mozilla-2.1 GECKO21_20110317_RELBRANCH:   63441:0f7aaaf20cb3

mozilla-2.0 default:                      63341:afbc0b4fd618
mozilla-2.0 GECKO20_2011031715_RELBRANCH: 63342:6be9e31d01b4

mozilla-central default:                  63437:55f344578932

Gerv
Comment on attachment 520160 [details] [diff] [review]
Additional Patch - 2 more bad certs

Approved for 1.9.2.16 and 1.9.1.18, a=dveditz for release-drivers
Attachment #520160 - Flags: approval1.9.2.16?
Attachment #520160 - Flags: approval1.9.2.16+
Attachment #520160 - Flags: approval1.9.1.18?
Attachment #520160 - Flags: approval1.9.1.18+
blocking1.9.1: ? → .18+
blocking1.9.2: ? → .16+
I've reverified the fix in build 3 of RC2 of Firefox 4 now.
verified FIXED in build 3 of RC1 for Fennec 4 based on the test instructions listed in comment #8:

Mozilla/5.0 (Android; Linux armv7l; rv:2.1) Gecko/20110318 Firefox/4.0b13pre Fennec/4.0 ID:20110318114419
(In reply to comment #62)
> verified FIXED in build 3 of RC1 for Fennec 4 based on the test instructions
> listed in comment #8:
> 
> Mozilla/5.0 (Android; Linux armv7l; rv:2.1) Gecko/20110318 Firefox/4.0b13pre
> Fennec/4.0 ID:20110318114419

also double checked ssl sites for comment 0 on fennec.  all good.
(In reply to comment #20)
> Robin Alden at Comodo (now CCed) tells me:
> - Microsoft are [...]
> - Opera are [...]
> - Chrome are [...]

Robin: what about Apple/Safari? Have they been informed?
Let's not forget to land the additional patch into 1.9.1 and 1.9.2
(In reply to comment #64)
> (In reply to comment #20)
> > Robin Alden at Comodo (now CCed) tells me:
> > - Microsoft are [...]
> > - Opera are [...]
> > - Chrome are [...]
> 
> Robin: what about Apple/Safari? Have they been informed?
Yes.  I had a positive response and a follow up request for more information from product-security@apple.com.
Attachment #519939 - Flags: approval1.9.2.17+
Attachment #519939 - Flags: approval1.9.2.16+
Attachment #519939 - Flags: approval1.9.1.19+
Attachment #519939 - Flags: approval1.9.1.18+
Attachment #520160 - Flags: approval1.9.2.17+
Attachment #520160 - Flags: approval1.9.2.16+
Attachment #520160 - Flags: approval1.9.1.19+
Attachment #520160 - Flags: approval1.9.1.18+
Verified for 1.9.2.16 and 1.9.1.18 using repro steps in comment 8.
blocking1.9.1: .18+ → ?
blocking1.9.2: .16+ → ?
blocking1.9.1: ? → .18+
blocking1.9.2: ? → .16+
Attached patch fix a leakSplinter Review
The patch introduced a small leak. When we detect a blacklisted cert, we exit the function early and don't clean up the cert.

The fix is to move the C++ cleanup variable as well; I missed it when moving up the line that obtains the cert.

I filed bug 643694 to fix it. I propose to do it in the next regular update cycle. I'm attaching the patch here, as this issue is not yet public.
Attachment #520887 - Flags: review?(bsmith)
Comment on attachment 520887 [details] [diff] [review]
fix a leak

As mentioned on IRC, this:

> +  CERTCertificateCleaner serverCertCleaner(serverCert);
>    CERTCertificate *serverCert = SSL_PeerCertificate(fd);

should be:

>    CERTCertificate *serverCert = SSL_PeerCertificate(fd);
> +  CERTCertificateCleaner serverCertCleaner(serverCert);
Attachment #520887 - Flags: review?(bsmith) → review-
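The ordering matters because the RAII guard captures the pointer at construction time. A toy sketch of the corrected pattern (ScopedCert, AcquireCert, etc. are hypothetical names, not the real NSS/PSM types):

```cpp
// Toy illustration of the RAII ordering discussed in the review above:
// the cleanup guard must be constructed *after* the resource is acquired,
// so that every exit path (including early returns) releases the cert.
#include <cassert>

struct Cert { int dummy; };

static int gLiveCerts = 0;  // counts unreleased certs, for demonstration

static Cert *AcquireCert() { ++gLiveCerts; return new Cert(); }
static void ReleaseCert(Cert *c) { --gLiveCerts; delete c; }

class ScopedCert {
    Cert *mCert;
public:
    explicit ScopedCert(Cert *c) : mCert(c) {}
    ~ScopedCert() { if (mCert) ReleaseCert(mCert); }  // runs on every exit path
    ScopedCert(const ScopedCert &) = delete;
    ScopedCert &operator=(const ScopedCert &) = delete;
};

static bool CheckCertWithEarlyReturn() {
    Cert *serverCert = AcquireCert();
    ScopedCert cleaner(serverCert);  // declared after acquisition, as in the fix
    // ... a blacklist match would return early here; the destructor still
    // releases serverCert, so the early return no longer leaks:
    return false;
}
```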
Depends on: 644012
The updates (not the blog post yet) should be live for 3.6 and 3.5 users.
Attachment #519860 - Attachment is private: true
Attachment #519863 - Attachment is private: true
Attachment #519892 - Attachment is private: true
Attachment #520163 - Attachment is private: true
Group: core-security
Attachment #519860 - Attachment is private: false
Attachment #519863 - Attachment is private: false
Attachment #519892 - Attachment is private: false
Attachment #520163 - Attachment is private: false
I reiterate my objection to Mozilla allowing the included certification authorities to outsource to third-party registration authorities.
May I ask why this bug (and the whole issue) was kept secret for over a week? To save time, let me quote The Register [1], since it exactly reflects my thoughts:

<snip>
The decision by Google, Microsoft, Mozilla and Comodo to keep the world in the dark for eight days comes as a slap in the face to their users.

“The attackers had all they needed,” said Marsh Ray, a researcher and software developer at two-factor authentication service PhoneFactor. “Knowing which certificates have been compromised gives an immediate step people can take to secure their systems.”

None of the companies would explain why they waited so long to disclose the attack.
</snip>

If there is some policy due to which this was kept under the hood, then it's completely flawed and needs to be rethought ASAP.

[1] http://www.theregister.co.uk/2011/03/23/gmail_microsoft_web_credential_forgeries/page2.html
Mozilla has made a more detailed statement about the Comodo misissuance incident:
http://blog.mozilla.com/security/2011/03/25/comodo-certificate-issue-follow-up/

Please continue discussion in the mozilla.dev.security.policy discussion forum:
https://www.mozilla.org/about/forums/#dev-security-policy

Gerv
backport to 1.8.1.24.
If anyone has time to take a look at it, I'd be very grateful
Comment on attachment 522155 [details] [diff] [review]
fix for 1.8.1(.24)

Looks OK to me.
Attached file Hacker claim
Found a hacker's claim of responsibility at http://pastebin.com/74KXCaEZ

Could be BS, but there are some testable claims
 - the name of the RA
 - the ceo's account
 - the "comodo username"
 - did the RA really have a trustdll.dll
 - was it C#
 - did it really hardcode in their password/username?

This is all depressingly plausible. Is trustdll.dll something Comodo distributes, or was that winning idea solely the RA's? Does it really take only a name and password, and do RAs typically leave those hardcoded into internet-connected systems?

Found a similar (unverified) claim in response to a Heise article on the subject, guy claiming to be a reseller (presumably restricted by DV checks at the RA or Comodo level?) who could get around that by calling the APIs directly and bypassing the app they were given.
http://www.heise.de/security/news/foren/S-Kenne-ich-von-Comodo-nicht-anders-ich-kann-selber-solche-Zertifikate-ausstellen/forum-196553/msg-20015933/read/
(In reply to comment #79)
> Created attachment 522228 [details]
>   --> https://bugzilla.mozilla.org/attachment.cgi?id=522228
> Hacker claim
> 
> Found a hacker's claim of responsibility at http://pastebin.com/74KXCaEZ
> 
> Could be BS, but there are some testable claims
>  - the name of the RA
>  - the ceo's account
>  - the "comodo username"
>  - did the RA really have a trustdll.dll
>  - was it C#
>  - did it really hardcode in their password/username?

There's also http://pastebin.com/DBDqm6Km, which apparently contains the decompiled source code to trustdll.dll.  This can also be tested.
Attached file decompiled TrustDLL
(In reply to comment #80)
> There's also http://pastebin.com/DBDqm6Km, which apparently contains the
> decompiled source code to trustdll.dll.  This can also be tested.

Attached (pastebins suck).
> guy claiming to be a reseller (presumably restricted by DV checks at
> the RA or Comodo level?) who could get around that by calling the APIs
> directly and bypassing the app they were given.

That claim should be easy to verify, somebody just needs to enroll as reseller and try it.
I've heard the same claims from intermediate CAs using the API.
> Hacker claim

One wonders why he didn't just post some data signed with one of the private keys he generated. He seems particularly anxious to prove he was the one who pulled this off.

bob
(In reply to comment #85)
> Does that help? http://pastebin.com/X8znzPWH

Since pastebins suck, here's the content:

For some real dumbs, I bet they don't have IQ above 75, WHO STILL thinks I'm not the hacker, here is mozilla addon's certificate, check it's serial with one published on all the internet:

http://www.multiupload.com/J9I8NFWPT0

I really worry about you guys (people who still have doubts) even for surfing in internet, have you ever visited a doctor?

Private key for above certificate:
http://www.multiupload.com/SI4FKWJ5KY

@ioerror, when I say you have relations with intelligence agencies and you pass traffic, I have my reasons: http://bit.ly/dK0oB5 #comodogate


Thanks to Robert Graham for pointing out that private key is encrypted with a passphrase, here is private key without passphrase, I don't want to give away my passphare:

-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAq8ZtNvMVc3iDc850hdWu7LLw4CQfE4O4IKy7mv6Iu6uhHQsf
RQCqSbc1Nwxq70dMudG+41cSBI2Sx7bsAby22seBOCCtcoXmDvyBbAetaHY4xUTX
zMZKxZc+ZPRR5vB+suxW9yWCTUmYyxaY3SPxiZHRF5dAmSbW4qIrXt+9ifIbGlMt
zFBBetA9KgxVcBQB6VhJEHoLk4KL4R7tOoAQgs6WijTwzNfTubRQh1VUCbidQihV
AOWMNVS/3SWRRrcN5V2DqOWL+4TkPK522sRDK1t0C/i+XWjxeFu1zn3xXZlA2sru
OIFQvpihbLgkrfOvjA/XESgshBhMfbXZjzC1GwIDAQABAoIBAQCJoijaEXWLmvFA
thiZL7jEATCNd4PK4AyFacG8E9w8+uzR15qLcFgBTqF95R49cNSiQtP/VkGikkkc
ao25aprcu2PnNA+lpnHKajnM9G3WOHuOXHXIps08es3MmBKTxvjNph6cUlqQULrz
Zry+29DpmIN/snpY/EzLNIMptn4o6xnsjAIgJDpQfFKQztxdmZU6S6eVVn0mJ5cx
q+8TTjStaMbh+Yy73s+rcaCXzL7yqWDb1l5oQJ/DMYNfufY6lcLgZUMwFxYKjCFN
ScAPCiXFUKTzY3Hy1Z4tLndFxipyEPywDep1TB2nMb+F3OOXUs3z+kKVjGFaGnLZ
591n3x3hAoGBAOOgsb4QybjHh9+CxhUkfsqcztGGdaiI3U5R1qefXL7R47qCWfGc
FKdoJh3JwJzHEDX68ZmHz9dPhSXw6YrlLblCi6U/3g7BOMme5KRZKBTjHFo7O9II
B0laE5ISRH4OccsOC3XUf9XBkm8szzEBj95DgzB0QydPL4jp7NY0h0QrAoGBAMEv
jEFkr/JCRe2RWUSx/a1WT/DHnVLMnDb/FryN2M1fAerpMYNUc2rnndjp2cYbsGLs
cSF6Xecm3mUGqn8Y5r8QqBwxCp5OunCFCXEJvkiU3NSs8oskCsB8QJ6vk3qmauUK
jClX91heSCigwhC2t+1txnF290m/y0T46EfqOSrRAoGAUlyVk4D9jEdeCWiHBaVj
3ynnx3ZQYj/LW4hPE+2coErPjG+X3c0sx/nuOL8EW3XHjtCS1IuIj45tTfIifqg3
6B2E67D1Rv9w7br5XeIIl64pVxixp2hSQp8+D49eiwHs+JzHVsYhzxUwR9u9yCyZ
gsGI2WJn3fRP7ck+ca8l9msCgYB4B2Hec3+6RqEKBSfwvaI+44TRtkSyYDyjEwT+
bCeLGn+ng/Hmhj8b6gKx9kH/i86g+AUmZtAXQZgmLukaBM/BYMkCkxnk2EeQh6gh
Goumrw8x+K7N8rvXcpv3vGEmcGW0H0SMn4In3pR44cER/2Tx2SXV87Obl9Xk6b3w
iL+yMQKBgFjXcmiBW8lw3l2CaVckd/1SzrT80AfRpMT9vafurxe+iAhl9SDAdoZe
3RlshoItDQLW1ROlkLhM7Pdq/XZvLRm128hiIGKTDBnxtfN8TKAg+V7V+/TTfdqv
8jq7epvZsq5vjOC1FZh2gOhf50QwpqDJktjdyka1sPiBKQSoxfbZ
-----END RSA PRIVATE KEY-----
And the private key has been verified as matching the public key attached to this bug:
http://erratasec.blogspot.com/2011/03/verifying-comodo-hackers-key.html

So this guy either did it, or is part of the group that did it.

Gerv
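The modulus comparison Errata Security describes can be reproduced with the openssl CLI. The sketch below generates a throwaway key and self-signed cert as hypothetical stand-ins (the real leaked key and the revoked addons.mozilla.org cert are not attached here); the same two `-modulus` commands, pointed at the real files, are the whole check.

```shell
# Generate a throwaway RSA key and a self-signed cert from it -- illustrative
# stand-ins for the leaked private key and the revoked certificate.
openssl genrsa -out key.pem 2048 2>/dev/null
openssl req -new -x509 -key key.pem -out cert.pem -days 1 \
  -subj "/CN=addons.mozilla.org"

# A private key matches a certificate iff their RSA moduli are identical.
key_mod=$(openssl rsa  -noout -modulus -in key.pem)
crt_mod=$(openssl x509 -noout -modulus -in cert.pem)
[ "$key_mod" = "$crt_mod" ] && echo MATCH || echo MISMATCH
```

For the real files you would substitute the leaked key and the cert from the multiupload link, and expect `MATCH`.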
Depends on: 646460
Status: RESOLVED → VERIFIED
(In reply to comment #69)
> Created attachment 520887 [details] [diff] [review]
> fix a leak

This was landed as part of bug 644012.
Presumably comment 8 can be morphed into a test, either automated or Litmus, or both. Setting flags.
Flags: in-testsuite?
Flags: in-litmus?
(In reply to comment #89)
> Presumably comment 8 can be morphed into a test, either automated or Litmus, or
> both. Setting flags.

Litmus testcase has been added -- please review:
https://litmus.mozilla.org/show_test.cgi?id=15365
Flags: in-litmus? → in-litmus+
(In reply to comment #90)
> (In reply to comment #89)
> > Presumably comment 8 can be morphed into a test, either automated or Litmus, or
> > both. Setting flags.
> 
> Litmus testcase has been added -- please review:
> https://litmus.mozilla.org/show_test.cgi?id=15365

This test presupposes that https://kuix.de:9449/ will be up forever. Since the bogus test cert is keyed to that site, we cannot move the testcase to our QA server.
(In reply to comment #91)
> > Litmus testcase has been added -- please review:
> > https://litmus.mozilla.org/show_test.cgi?id=15365
> 
> This test presupposes that https://kuix.de:9449/ will be up forever. Since the
> bogus test cert is keyed to that site, we cannot move the testcase to our QA
> server.

Correct. As far as I understand it, the purpose is to test a certificate which has been revoked by the CA. Is there a way that we can spoof a revoked certificate? If we can, then we could create a test on the QA server...
Testing is tricky.

Do you want to test the blacklist mechanism?

The test cert on kuix.de:9449 will expire soon. I'm not sure what will happen then: you might get EITHER the expired error OR the revoked error. Only if the error is REVOKED can you continue to use this blacklist test. If EXPIRED has higher precedence and is the error that gets reported, this test ability will be gone. You should probably test this around April 20.

If you want to test revocation mechanisms in general, that's another story, and I agree that could be automated, but it would also require someone to permanently operate a server with a revoked cert.
(In reply to comment #93)

> If you want to test revocation mechanisms in general, that's another story, and
> I agree, that could be automated, but it would also require to have someone
> permanently operate a server with a revoked cert.

QA has a public-facing server we can use for SSL work now.
Why does this need to be a litmus test?
Kyle, it doesn't. If someone wanted to craft an automated test, that would be great in my opinion.
Yeah, we should be able to test this through automation.
Flags: in-litmus+ → in-litmus-
(In reply to comment #97)
> Yeah, we should be able to test this through automation.

This should bear bug 617414 in mind. I don't know how SSL tests are currently performed, or whether you'd need a couple more certificates from Comodo (one to blacklist and one not to), as this code is only run if the issuer name matches.
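For anyone writing such a test, the issuer+serial matching described above can be sketched with the openssl CLI. This is illustrative only, not the actual PSM patch (which is C++), and uses a throwaway self-signed cert plus a made-up blacklist entry:

```shell
# Illustrative only -- not the real PSM code. A blacklist entry is keyed on
# the (issuer, serial) pair, so the check only fires when both match.
openssl req -x509 -newkey rsa:2048 -nodes -keyout site.key -out site.pem \
  -days 1 -subj "/O=Example CA/CN=addons.mozilla.org" 2>/dev/null

issuer=$(openssl x509 -noout -issuer -in site.pem)
serial=$(openssl x509 -noout -serial -in site.pem | cut -d= -f2)

# Hypothetical blacklist of "issuer|serial" entries; a real list would carry
# the serials of the revoked certificates.
blacklist="$issuer|$serial"

if printf '%s\n' "$blacklist" | grep -qxF "$issuer|$serial"; then
  echo BLACKLISTED
else
  echo OK
fi
```

A second cert from a different issuer, even with the same serial, would print `OK`, which is the property the test needs to exercise.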
Resetting the Litmus flag to indicate that the test still exists in Litmus (and I think it should until it is replaced, or until it is no longer a valid test as per comment 93). Given that this is a security issue, surely a Litmus test is better in the interim? Let me know if I'm wrong.
Flags: in-litmus- → in-litmus+
ah... we have the private key for the revoked addons certificate!

bob
the litmus test should be reworked to use the addons cert, for which we have the private key. To do this, the testcase will need to pin 'now' to a fixed date (e.g. 2011-03-28) on the client system, and the host will need to be able to serve an expired certificate (possibly by pinning its date to the same).