Closed Bug 129080 Opened 22 years ago Closed 22 years ago

Unable to import large CRLs into cert7.db

Categories

(NSS :: Build, defect, P1)

3.3.1
Sun
SunOS

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: Jim.McCarthy, Assigned: rrelyea)

Details

(Whiteboard: [ADT2])

Attachments

(4 files, 6 obsolete files)

I am trying to import a CRL into the cert7.db using CERT_ImportCRL().  The 
function returns a valid CERTSignedCrl structure but I cannot read the CRL out 
of the cert7.db.  I am using the SEC_LookupCrls function to get a 
CERTCrlHeadNode structure.  The function returns a valid structure but the 
first node in the structure is null.

I have 2 CRLs I'm testing with; one is 108KB and the other one is 36KB.  The 
smaller CRL works fine; the larger one is the one I'm having problems with.
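In outline, the call sequence being described is roughly the following (a non-compiling pseudocode sketch in C style; `read_der_file` is a hypothetical helper, and the setup and error handling are assumptions, not taken from the report):

```c
/* Pseudocode sketch of the reported flow, not a buildable program. */
SECItem derCrl = read_der_file("chamb.crl");        /* hypothetical helper */

/* Import appears to succeed: a valid CERTSignedCrl comes back... */
CERTSignedCrl *crl = CERT_ImportCRL(handle, &derCrl, NULL,
                                    SEC_CRL_TYPE, NULL);

/* ...but the lookup that follows finds nothing for the large CRL. */
CERTCrlHeadNode *head = NULL;
SEC_LookupCrls(handle, &head, SEC_CRL_TYPE);
/* head is non-NULL, but its first node is NULL for the 108KB CRL. */
```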

I created the cert7.db using the certutil that came with 3.3.1.  I also tried 
to do this with the crlutil that comes with 3.3.1, but I'm getting the same 
error.  Here is what I'm doing with crlutil:

# crlutil -I -i chamb.crl -t 1 -d . (108KB CRL - no errors reported)
# crlutil -L -d .
CRL names                                CRL Type

# crlutil -I -i denver.crl -t 1 -d . (36KB CRL - no errors reported)
# crlutil -L -d .
CRL names                                CRL Type
CN=DOD CLASS 3 CA-4, OU=PKI, OU=DoD, O=U.S. Government, C=US CRL

Is there still a problem with the CRL size?
Assigned the bug to Bob.
Assignee: wtc → relyea
Yes, there is a bug in NSS 3.3 and earlier where CRLs > 64K are not
visible in the database after loading.

This has been fixed in NSS 3.3.1 and later. I just verified that NSS 3.4 works
correctly. Pre-NSS 3.4, crlutil is statically linked, so it may be that an old
crlutil is being used?

bob
This problem with loading CRLs > 64K was fixed as PSM bug 74343.
The date it was marked VERIFIED was 2001-06-22, so it is definitely
fixed in NSS 3.3.1 and may even be fixed in NSS 3.3.
I verified that bug 74343 was fixed in NSS 3.3.

Status: UNCONFIRMED → NEW
Ever confirmed: true
OK, the problem is that DBM cannot return an object bigger than its cache. There
are two options for fixing this problem:

1) Increase the size of the cache when we open the file in NSS (change to NSS).
2) Change DBM to increase its buffer size while reading in a file, by locking
the buffer pages (change to DBM).

Both changes would require a respin of nss3.dll and certain tools (3.3.x) and
softokn3.dll and tools (3.4).

Change 1 has the advantages of:
   Not requiring a rebuild of DBM.
   Not changing the semantics of DBM for other DBM users.
   Small risk of unintended side effects.

Change 2 has the advantages of:
   Working with database objects of arbitrarily large size (limited by
application memory).
   Users who do not have large CRLs do not pay the DBM cache penalty.

I'm attaching 3 separate patches:
Patch a: change 1 for 3.3.x
Patch b: change 1 for 3.4
Patch c: change 2 for mozilla dbm.

bob
Note pcertdb.c is in mozilla/security/nss/lib/certdb in nss 3.3
NOTE: pcertdb is in mozilla/security/nss/lib/softoken in nss 3.4
Attached patch Option 2; Mozilla dbm patch (obsolete) — Splinter Review
Adding Nelson to review (patches for option 1).
Patch review comments:

Patch id 74774 increases the sizes of the DB buffer pools to 1/2 megabyte 
each for both the temporary and permanent cert DBs.  It appears to be 
correct, given what it is attempting.  However, I believe that most NSS 
clients would find the increase in memory allocation objectionable.
It's probably ok for servers.

Patch id 74775 contains some changes that are not obviously related to
the CRL size issue.  Also, it doesn't declare the new HASHINFO to be const.
It really should declare that to be const.  Otherwise, it seems correct.
This patch changes the size of the perm cert DB's memory buffer pool to 
1/2 megabyte, and therefore is probably objectionable for the browser.  
Also, is 1/2 megabyte enough for the servers for the foreseeable future?

Patch id 74777 requires more review than its size suggests.
I've checked in the NSS 3.3 version of the patch.

A better solution for both clients and servers would be to fix the dbm collect()
code so that it does not use the entire buffer pool to read in large objects, or
can dynamically increase the buffer pool size when reading large objects. We'll
pursue this idea for NSS 3.4.
Priority: -- → P1
Target Milestone: --- → 3.4
This patch does not prevent the cache from growing when it encounters large
objects. Flushing out the buffers has some unintended side effects, including
some very long write times. The patch does pin all the buffers that it needs
to pin; that is necessary because it is possible for a random buffer to cause
another buffer on the list (not necessarily on the LRU) to get freed, because
it believes that buffer is part of its overlay tree. There is now code in the
overlay handling that prevents pinned buffers from being freed.
Attached patch rest of changes for patch 75309 (obsolete) — Splinter Review
Oops, the buffer stuff I described in comment 12 is in hash_buf.c, included in
this patch.
Attached patch Arrn NFS messed me up... (obsolete) — Splinter Review
This *should* be the correct patch.
Attachment #75318 - Attachment is obsolete: true
Will these new patches be making their way into version 3.4?  I see that the 
RC1 version was just released.
Yes, they are targeted for the final release of NSS 3.4.
I fixed a few typographical errors in the comments,
and changed spaces to tab characters in two places.
(DBM sources indent with tab characters.)  I also declare
'save_flags' as 'char' because that's the type of the
'flags' field in struct _bufhead (BUFHEAD).

I compiled DBM with gcc on Linux and MSVC 6.0 on Windows
and found that Bob's new code only introduced one warning:
  'bp' might be used uninitialized in 'collect_data'.
I reviewed the code and verified that this warning is
benign, that is, 'bp' is never used uninitialized.
I ran the 'lots' test program (mozilla/dbm/tests/lots.c)
under BoundsChecker with both "-quick" and "-quick -large"
command line options.  No memory leaks or memory access
errors were reported in DBM.
I combined the previous patches into this patch.

Nelson, Bob, could you review this patch and see
if any additional changes need to be made?  Thanks.
Attachment #75309 - Attachment is obsolete: true
Attachment #75321 - Attachment is obsolete: true
Attachment #75473 - Attachment is obsolete: true
Attachment #75629 - Attachment is obsolete: true
Attachment #74777 - Attachment is obsolete: true
All patches have been checked in.  Marked the bug fixed.
Status: NEW → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED
It was pointed out to me that Bob's use of 'int' for
local variables containing a char is actually the C
way.  So I changed the type of 'save_flags' back to
'int'.
adt1.0.0+ (on ADT's behalf) approval for checkin into 1.0, pending second review.
Keywords: adt1.0.0+
Whiteboard: [ADT2]
Comment on attachment 75643 [details] [diff] [review]
Combined patch for easier review

r=wtc.	(Nelson has already reviewed this patch.)
Attachment #75643 - Flags: review+
Comment on attachment 75643 [details] [diff] [review]
Combined patch for easier review

a=shaver for the 1.0 branch.
a=asa (on behalf of drivers) for checkin to the 1.0 branch (re-approval)
I checked in the patch (with more comments) on the MOZILLA_1_0_0_BRANCH
and the tip of mozilla/dbm.
adding fixed1.0.0 keyword (branch resolution). This bug has comments saying it
was fixed on the 1.0 branch and a bonsai checkin comment that agrees. To verify
the bug has been fixed on the 1.0 branch please replace the fixed1.0.0 keyword
with verified1.0.0.
Keywords: fixed1.0.0