Bug 487381: certificates with very large issuer names can corrupt cert8 database
Status: CLOSED, RESOLVED FIXED
Opened 16 years ago; Closed 16 years ago
Categories: NSS :: Libraries, defect, P2
Tracking: not tracked; Target Milestone: 3.12.4
People: Reporter: nelson, Assigned: nelson
Keywords: dataloss; Whiteboard: [sg:dos]
Attachments: 5 files, 5 obsolete files
- 174.88 KB, application/zip
- 15.43 KB, patch (rrelyea: superreview+)
- 65.40 KB, application/octet-stream
- 64.99 KB, application/octet-stream
- 2.89 KB, patch (mozbgz: review+)
This bug report is a place holder for certain potential vulnerabilities
that have not yet been publicly disclosed. They may prove to be false
alarms, but this bug is "born security-sensitive", just in case.
Comment 1 • 16 years ago (Assignee)
I'm not yet sure whether this is or is not a "vulnerability", but I'm
continuing to play it safe for a while longer. I haven't found any crash
and the corruption only causes some certs to "get lost" (not be findable)
after they were imported into the DB. More details, sample certs, patches,
to follow.
Summary: Place holder for potential vulnerabilities in certificate issuer names → certificates with very large issuer names can corrupt cert8 database
Comment 2 • 16 years ago (Assignee)
This zip file contains 6 binary (DER) certs in 6 files, all contributed
by Kaspar. Don't import these into a cert DB you care about!
90547 Apr 9 14:02 mozchomp.der the root
90477 Apr 9 14:03 mozchomp2.der the intermediate CA
66812 Apr 9 00:59 alexei.crt an EE cert issued by mozchomp2
66807 Apr 9 01:00 bob.crt an EE cert issued by mozchomp2
66812 Apr 9 01:00 julien.crt an EE cert issued by mozchomp2
66812 Apr 9 01:00 nelson.crt an EE cert issued by mozchomp2
You could import these with certutil commands like these:
mkdir /tmp/487381
cd /tmp/487381
unzip /tmp/487381a.zip
certutil -d . -N
certutil -d . -A -n mozchomp -i mozchomp.der -t "C,C,C"
certutil -d . -A -n mozchomp2 -i mozchomp2.der -t ""
certutil -d . -A -n nelson -i nelson.crt -t ""
(etc.)
The intermediate CA cert has a subject name that exceeds 64KB. NSS cannot
even print that name correctly (which is the subject of bug 487487).
That name is also the issuer name in each of the 4 EE certs.
There are potentially two separate issues here:
1. NSS uses the concatenation of a certificate's serial number and its
DER encoded issuer name as a database "key" (index). The old Berkeley DB
code used for NSS's cert8.DB has an (unpublished and not well understood)
upper bound on the size of each individual record's "key" and "body".
(Body is also called record "entry" and record "data" in NSS code).
When these size limits are exceeded, the old Berkeley DB code doesn't detect
this and report an error, but instead generates corrupted DBs and may crash.
For years, NSS has limited the size of record "bodies", storing the too-large
bodies in disk files called "blobs". But NSS has no limits on or protection
against too large DB record "keys". So, when a cert with a very large issuer
name gets entered into the DB, the "body" gets safely stored as a blob, but
the issuer name becomes part of a DB "key", which may cause overflows of
various sorts.
We don't know what the upper bound for DB "keys" really is. For DB "entries"
the limit was originally set to 60KB, but it has subsequently been lowered to
14 KB (which seems too low). I'm thinking of making it 60KB, but I'm
open to suggestions.
2. 16-bit size fields in record bodies. Many of the different types of
cert8 DB records contain variable length fields, and in some cases, multiple
different variable length fields in each record. These fields are stored
as an explicit length, followed by the data. The length is stored as a
16-bit quantity in two bytes, least significant byte first (regardless of
the CPU type). Consequently, any variable length field whose size exceeds
64K-1 bytes results in a false length value being stored. For records that
contain only one variable length field, this is easy to detect and to fix,
because we also have the overall length. But some record types have multiple
variable length fields, and for them, it's pretty hopeless, unless we have
reason to believe that at most one of those fields can ever exceed 64KB in
length.
The record type with the big problem is the "subject name" record. This
record's key (index value) is its subject name (strike 1). Its body
contains a variable number (one or more, strike 2) of pairs of variable
length serial number and issuer name (strike 3), one pair for each
certificate with that subject name.
So, if any cert has an issuer name exceeding 64KB, the size of that name
will overflow the 16-bit size field for that pair in the subject name record,
and it will make a very long "key" (index) value for the record that
actually holds the certificate.
In the next comment, I will talk about possible remedies, and attach a patch.
Comment 3 • 16 years ago (Assignee)
To resolve this, several things must be done.
1) It is necessary to impose an upper bound on the size of DB "keys"
(index values) of all records added to the cert DB. It simply must not be
possible to add records whose large "key" corrupts the database.
2) It is necessary to be more thorough about detecting inconsistencies in
the lengths of variable length components of records, especially the subject
records.
In this first patch, I tried to achieve the goal of getting the cert DB
to work correctly (or as expected) with all the sample certs in the
previously attached zip file. To that end I found it necessary to:
a) Set the DB key size limit to a value above 64KB. I tried 90KB.
b) For any given cert subject, allow ONE cert with that subject name to have
an issuer name that exceeds 64KB, as long as that is the ONLY cert with that
subject name.
This patch achieves those goals, at least for the sample certs.
Comment 4 • 16 years ago (Assignee)
Before I attach the next alternative, I want to add several comments:
1. All of these problems are artifacts of the old Berkeley DB and NSS's
DB schema for that old DB. I believe none of these problems are present
with NSS's new sqlite3-based DB design, which is now present and working
in NSS 3.12.x. Nevertheless, until Firefox adopts NSS's sqlite3 DBs, FF
users will experience the problems demonstrated with the attached certs,
and so we want to mitigate those problems somewhat.
2. The first alternative patch allows the attached certs to be imported
and displayed, but it still isn't very practical. It is not possible to
"renew" one of those certs, for example, because doing so violates the
new limitation (imposed by that patch) of only one cert per subject name
if that cert has a too-long issuer name.
3. I think 60 KB is MORE than enough for any reasonable issuer name.
The only reason to allow a limit above 60 KB is to say "I did it!". :-/
Oh, I just realized that my patch alternative one above has a bug.
It imposes a limit on the sum of all issuer name string sizes, not on any
single issuer name size. Will fix.
Comment 5 • 16 years ago
(In reply to comment #4)
> Nevertheless, until Firefox adopts NSS's sqlite3 DBs, FF
> users will experience the problems demonstrated with the attached certs,
> and so we want to mitigate those problems somewhat.
What's preventing Firefox from swapping to sqlite3 for NSS DBs? Is there a bug on file?
Comment 7 • 16 years ago (Assignee)
I'll not request review on this until I've done a bit more testing.
Comment 8 • 16 years ago (Assignee)
Attachment #372187 -
Attachment is obsolete: true
Comment 9 • 16 years ago (Assignee)
Comment on attachment 372188 [details] [diff] [review]
patch alternative 2, v2 - no keys or issuer names > 60 KB (checked in)
Kaspar, Bob,
I'd appreciate your reviews on either of these alternatives.
Alternative 2 completely eliminates the problems of 16-bit size field overflows by disallowing issuer names that exceed 60KB. I think it's "safer".
I guess the question of whether this is inside or outside the FIPS boundary remains. :(
Attachment #372188 -
Flags: superreview?(rrelyea)
Attachment #372188 -
Flags: review?(mozbugzilla)
Comment 10 • 16 years ago (Assignee)
(In reply to comment #5)
> What's preventing Firefox from swapping to sqlite3 for NSS DBs?
PSM developer bandwidth, I think. Not sure. Not an NSS limitation. (*)
> Is there a bug on file?
I'm sure there is.
(*): Note: in the meantime, you can personally enable the use of the
new sqlite3 cert9.db files in your Firefox profile by setting the variable
NSS_DEFAULT_DB_TYPE=sql
in your environment before running Firefox.
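A minimal shell sketch of that workaround (the Firefox launch line is illustrative and commented out):

```shell
# Opt in to the sqlite3-backed cert9.db/key4.db for NSS applications
# launched from this shell; no application code change is needed.
export NSS_DEFAULT_DB_TYPE=sql
echo "NSS DB type: $NSS_DEFAULT_DB_TYPE"
# firefox &   # launch Firefox from this same environment
```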
Comment 11 • 16 years ago (Assignee)
The NSS team knows that there is often a long lag between the time when a
new feature is available in NSS and the time when it is available in
programs that use NSS, so NSS often (not always) provides ways to enable
new NSS features through the use of environment variables, so that end users
don't have to wait for application programmers to catch up.
See https://developer.mozilla.org/En/NSS_reference:NSS_environment_variables
for a list.
Priority: -- → P2
Target Milestone: --- → 3.12.4
Comment 12 • 16 years ago (Assignee)
Comment on attachment 372186 [details] [diff] [review]
patch alternative 1, v2 - allow 90KB DB key, limit of one per subject
Asking for review on both alternatives (the differences between the two are quite small)
Attachment #372186 -
Flags: superreview?(rrelyea)
Attachment #372186 -
Flags: review?(mozbugzilla)
Comment 13 • 16 years ago
Comment on attachment 372188 [details] [diff] [review]
patch alternative 2, v2 - no keys or issuer names > 60 KB (checked in)
r+ relyea
I prefer this semantic.
One note: There's a lot of rewrite in this patch. The new code is tighter, but it made reviewing harder as I had to make sure the new and old code had the same semantic.
One issue:
- entry->emailAddrs = (char **)PORT_ArenaAlloc(arena, sizeof(char *));
+ entry->emailAddrs = PORT_ArenaNewArray(arena, char *, 2);
Is this meant to fix an actual bug (I don't see one in the code), or is the '2' just paranoia?
(NOTE: the array is not null terminated, only the data pointed to by the array).
bob
Attachment #372188 -
Flags: superreview?(rrelyea) → superreview+
Updated • 16 years ago
Attachment #372186 -
Flags: superreview?(rrelyea) → superreview+
Comment 14 • 16 years ago
Comment on attachment 372186 [details] [diff] [review]
patch alternative 1, v2 - allow 90KB DB key, limit of one per subject
r+ This does what it advertises. I strongly prefer the simpler and less risky (and easier to explain to users) 60K limit patch.
The complication to add 30K (for exactly one cert) seems a poor trade-off.
bob
Updated • 16 years ago (Assignee)
Attachment #372186 -
Attachment is obsolete: true
Attachment #372186 -
Flags: review?(mozbugzilla)
Comment 15 • 16 years ago (Assignee)
Comment on attachment 372188 [details] [diff] [review]
patch alternative 2, v2 - no keys or issuer names > 60 KB (checked in)
Thanks for the review, Bob.
Checking in lowcert.c; new revision: 1.5; previous revision: 1.4
Checking in pcertdb.c; new revision: 1.10; previous revision: 1.9
Checking in pcertt.h; new revision: 1.3; previous revision: 1.2
Attachment #372188 -
Attachment description: patch alternative 2, v2 - no keys or issuer names > 60 KB → patch alternative 2, v2 - no keys or issuer names > 60 KB (checked in)
Attachment #372188 -
Flags: review?(mozbugzilla)
Updated • 16 years ago (Assignee)
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Comment 16 • 16 years ago
(In reply to comment #2)
> Created an attachment (id=372177) [details]
> zip file of certs that demonstrate some of the problems
>
> This zip file contains 6 binary (DER) certs in 6 files, all contributed
> by Kaspar. Don't import these into a cert DB you care about!
>
> 90547 Apr 9 14:02 mozchomp.der the root
> 90477 Apr 9 14:03 mozchomp2.der the intermediate CA
For the sake of completeness/consistency: mozchomp2.der is not an intermediate CA cert - just an EE cert with a very long subject name, originally attached to bug 486304. I'm attaching the "real" intermediate CA cert here (in case someone would like to do full path validation for any of the 4 EE certs...).
Comment 17 • 16 years ago
(In reply to comment #9)
> Kaspar, Bob,
> I'd appreciate your reviews on either of these alternatives.
> Alternative 2 completely eliminates the problems of 16-bit size field overflows
> by disallowing issuer names that exceed 60KB. I think it's "safer".
I wasn't fast enough with my review, apparently - sorry about that. Nelson, you have added tests for len > NSS_MAX_LEGACY_DB_KEY_SIZE to EncodeDBCertKey and EncodeDBSubjectKey so far. The following functions currently lack a similar check, however:
EncodeDBGenericKey
EncodeDBNicknameKey
EncodeDBSMimeKey
All of them suffer from the same problem, and will corrupt a legacy DB as well. EncodeDBGenericKey is used for CRLs, so if the issuer name is >64K, we run into problems as well (I'm attaching a CRL for the issuing CA, for illustration/testing purposes), and EncodeDBNicknameKey/EncodeDBSMimeKey will both overflow when executing this command:
certutil -d mydb -A -t ,, -n `perl -e 'print "A"x66000'` -i mozchomp.crt
(Note that mozchomp.crt has a fairly "normal" issuer name length, i.e. you don't even need a peculiar cert for triggering the EncodeDBNicknameKey/EncodeDBSMimeKey overflow... just an OS where ARG_MAX is more than 65K - otherwise the shell won't exec the above certutil command.)
I certainly agree with Bob that "patch alternative 2" is preferable (limiting keys to 60K max.), but the fix is not yet complete.
Comment 18 • 16 years ago
Reopening (for reasons explained in comment 17).
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Comment 19 • 16 years ago (Assignee)
Kaspar, thanks for the additional attachments.
The decision to not impose a limit on the index for nicknames and SMIME
records was intentional. Nicknames and even email addresses are locally
provided to NSS by the calling application, and do not necessarily come
from the cert contents. If a user wants to hurt himself with clever
certutil commands, oh well! :)
But you may well be (probably are :) right about CRLs, so I will look at
that soon (12 hours from now, maybe). And clearly, since it's likely
trivial to do, I might as well impose limits on the nickname and SMIME
record "keys", too.
Comment 20 • 16 years ago
(In reply to comment #19)
> The decision to not impose a limit on the index for nicknames and SMIME
> Nicknames and even email addresses are locally provided to NSS by the
> calling application
Well, maybe nicknames in PKCS#12 files could also pose a problem, so I think better safe than sorry is the right approach here.
Another issue which remains even with the patch applied is that when trying to add a cert with an issuer name >60K, you will end up with an orphan/incomplete nickname entry. Not sure if that's easily fixable, though.
Comment 21 • 16 years ago (Assignee)
(In reply to comment #13)
> One issue:
> - entry->emailAddrs = (char **)PORT_ArenaAlloc(arena, sizeof(char *));
> + entry->emailAddrs = PORT_ArenaNewArray(arena, char *, 2);
>
> Is this meant to fix an actual bug (I don't see one in the code), or is
> the '2' just paranoia?
just paranoia.
> (NOTE: the array is not null terminated, only the data pointed to by the
> array).
Yeah, and I used PORT_ArenaNewArray, not PORT_ArenaZNewArray, so it was
pointless. :-/
(In reply to comment 20)
> you will end up with an orphan/incomplete nickname entry.
Please spell out the scenario of concern a little more. Perhaps it is
one of these. I see two scenarios:
1. Adding a new cert with a previously unseen subject name, which adds a
new nickname entry, a new subject entry, and a new cert entry (a.k.a. a new
"issuer and serial number" entry). Clearly, we'd like all 3 or none to be
added. I believe there's code that tries to back out partially successful
additions. I'll have to find it again.
2. Adding a second (or N'th) cert with the same subject name as an existing one. Nickname record should be unchanged. Subject name entry will be
updated and a new cert record will be added. Ideally, in the event of
failure, the results are same as before. We really want to avoid losing
access to the cert that was previously accessible by that nickname in the
DB.
Comment 22 • 16 years ago
(In reply to comment #21)
> (In reply to comment 20)
> > you will end up with an orphan/incomplete nickname entry.
>
> Please spell out the scenario of concern a little more. Perhaps it is
> one of these. I see two scenarios:
>
> 1. Adding a new cert with a previously unseen subject name, which adds a
> new nickname entry, new subject entry, and new cert entry (a.k.a. new
> "issuer and serial number" entry. clearly, we'd like all 3 or none to be
> added.
It's this one I came across when experimenting with the intermediate CA cert.
> I believe there's code that tries to back out partially successful additions.
Right, it's at the end of AddCertToPermDB(). My observation in comment 20 was most likely a false alarm, sorry - I didn't realize that nickname and subject entries are correctly removed if the cert entry can't be added successfully.
Comment 23 • 16 years ago (Assignee)
Kaspar,
I think this patch contains the additional changes you were suggesting.
Please let me know if I've neglected any other details. Thanks.
I will try to test it with your new cert and CRL next.
Attachment #372350 -
Flags: superreview?(rrelyea)
Attachment #372350 -
Flags: review?(mozbugzilla)
Updated • 16 years ago (Assignee)
Attachment #372271 -
Attachment mime type: application/pkix-crl → application/octet-stream
Updated • 16 years ago (Assignee)
Attachment #372270 -
Attachment mime type: application/pkix-cert → application/octet-stream
Comment 24 • 16 years ago (Assignee)
Comment on attachment 372270 [details]
Proper intermediate CA cert (cf. attachment 372177 [details])
Sorry, I have to change the MIME type for these attachments; otherwise, BMO won't let me download them.
Comment 25 • 16 years ago (Assignee)
Sorry, that last patch patched the wrong function. :(
Last-second bug fix pressures fry my brain.
Attachment #372350 -
Attachment is obsolete: true
Attachment #372351 -
Flags: superreview?(rrelyea)
Attachment #372351 -
Flags: review?(mozbugzilla)
Attachment #372350 -
Flags: superreview?(rrelyea)
Attachment #372350 -
Flags: review?(mozbugzilla)
Comment 26 • 16 years ago (Assignee)
I tested the last patch above with this command:
crlutil -I -u "https://bugzilla.mozilla.org/attachment.cgi?id=372271" \
-i crl.der -d DBDIR -B
The new code in EncodeDBGenericKey worked as expected.
Attachment #372351 -
Flags: review?(mozbugzilla) → review+
Comment 27 • 16 years ago
Comment on attachment 372351 [details] [diff] [review]
supplemental patch suggested by Kaspar, v2 (checked in)
r+
Can/will these fixes (and those from bug 486304) also be applied to the NSS_3_11_BRANCH? For some Mozilla clients (e.g. Thunderbird 2) this is still useful, IMO.
Comment 28 • 16 years ago (Assignee)
Comment on attachment 372351 [details] [diff] [review]
supplemental patch suggested by Kaspar, v2 (checked in)
Checking in pcertdb.c; new revision: 1.11; previous revision: 1.10
Attachment #372351 -
Attachment description: supplemental patch suggested by Kaspar, v2 → supplemental patch suggested by Kaspar, v2 (checked in)
Attachment #372351 -
Flags: superreview?(rrelyea)
Comment 29 • 16 years ago (Assignee)
thanks, Kaspar
Status: REOPENED → RESOLVED
Closed: 16 years ago → 16 years ago
Resolution: --- → FIXED
Comment 30 • 16 years ago (Assignee)
(In reply to comment #27)
> Can/will these fixes (and those from bug 486304) also be applied to the
> NSS_3_11_BRANCH?
I doubt that these patches will simply "apply" because the files have changed,
but equivalent patches for the 3.11 branch could certainly be developed.
I'm not volunteering to do that at this time, but if you wish to do so,
please file a new security sensitive bug with a different version and target
milestone. Thanks.
Comment 31 • 16 years ago (Assignee)
Updated • 16 years ago (Assignee)
Attachment #372457 -
Attachment is obsolete: true
Comment 32 • 16 years ago
(In reply to comment #30)
> I doubt that these patches will simply "apply"
They did, actually... the legacydb code hasn't changed that much since 3.11.
> please file a new security sensitive bug with a different version and target
> milestone.
Done - bug 488241.
Comment 33 • 16 years ago
Taking a guess at a security "severity" for this bug. Is the corruption controlled in any way? That is, could the attacker confuse the database and inject what would be interpreted as a different cert somehow? So far as I understand it this could be used to destroy someone's db, potentially losing them their client certs or other important data, but not otherwise hack into their machine or sites to which they have credentials.
Keywords: dataloss
Whiteboard: [sg:dos]
Comment 34 • 16 years ago (Assignee)
In reply to comment 33, Dan, these are great questions, and not all the
answers are immediately apparent. Here are some facts that may help to
judge the severity.
There are generally two ways for a cert to get imported into a cert DB:
a) manually by the user, and
b) automatically, when the cert is a CA cert issued by a trusted issuer, as
part of an SSL or SMIME cert chain that chains to a trusted root.
And of course, we know that certs that appear to be issued by trusted CAs
are one of:
a) legitimate certs issued by legitimate and careful CAs, or
b) certs issued by stupid CAs, or
c) certs not issued by a trusted CA, but which appear to be issued by
trusted CAs because of use of weak hash algorithms or compromised private
keys.
So, I think the big risk here might be from users being tricked somehow
into downloading bogus certs. But I think in most cases, PSM checks that
the certs were issued by valid issuers, so even most of those cases are
covered. The question may be, under what circumstances can a user be
persuaded to download a cert that imports successfully despite having not
been issued by a trusted issuer. And that's a PSM question to which I do
not know the answer.
Comment 35 • 16 years ago
(In reply to comment #34)
> The question may be, under what circumstances can a user be
> persuaded to download a cert that imports successfully despite having not
> been issued by a trusted issuer. And that's a PSM question to which I do
> not know the answer.
Adding (permanent) exceptions comes to mind here - even if everything looks reasonable with the cert, the user can be fooled into adding a cert >64K (and has basically no chance to detect this before it's too late).
Updated • 12 years ago
Group: core-security