Closed Bug 96626 Opened 23 years ago Closed 23 years ago

Use /dev/urandom if possible

Categories

(NSS :: Libraries, defect, P1)

Version: 3.3.1
Platform: x86 Linux
Type: defect

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: ak, Assigned: nelson)

Details

Attachments

(2 files, 1 obsolete file)

unix_rand.c didn't use the urandom device even when it was available (e.g. on
Linux). This patch fixes that. It should improve the quality of the NSS session
keys, because urandom has much better entropy than anything else unix_rand.c
currently uses.
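For context, a minimal sketch of the approach, assuming the existing
RNG_RandomUpdate seeding entry point; the 1KB buffer is illustrative, not what
this first patch actually reads (it reads 32K, see below):

    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch: pull seed bytes from /dev/urandom and mix them into the
    ** NSS entropy pool via the existing RNG_RandomUpdate entry point.
    */
    static void seed_from_urandom(void)
    {
        unsigned char buf[1024];
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd >= 0) {
            ssize_t n = read(fd, buf, sizeof buf);
            if (n > 0)
                RNG_RandomUpdate(buf, (size_t)n);
            close(fd);
        }
        /* if the device is missing, fall back to the old noise sources */
    }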
Attached patch Add /dev/urandom (obsolete) — Splinter Review
Keywords: patch
Confirming. Marking NEW.
Status: UNCONFIRMED → NEW
Ever confirmed: true
Ian, could you evaluate this proposal?
Assignee: wtc → ian.mcgreer
Component: Tools → Libraries
Priority: -- → P1
Target Milestone: --- → 3.4
Version: unspecified → 3.3.1
I've previously looked at this suggestion.  
I wonder how much time /dev/urandom takes to generate 32KB.
That's really my only concern.
Reading 32K from /dev/urandom costs about 32K / 20 bytes, i.e. roughly 1600
SHA-1 computations (each SHA-1 output is 160 bits = 20 bytes), plus some
bookkeeping.

Unlike /dev/random, it will never block waiting for new entropy; when it runs
out of previously collected "noise" it degenerates to a pseudorandom generator
seeded by some old remembered entropy and 64-bit timestamps. On interactive
boxes there should always be enough noise (every mouse move and keypress adds
to it); on racked servers it is a bit more problematic, since the pool only
gets input from disk activity. It should still be better than the method
previously used.

If that is too expensive, you can use some cheaper secure PRNG and just seed
it with urandom.
If the output of /dev/urandom is the output of a SHA-1 PRNG, there's
no point in taking more than 160 bits of it.  
How about some code that takes whatever data is available from /dev/random
plus 160 bits from /dev/urandom?   Is it feasible to read whatever data
is already available from /dev/random without blocking? 
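For reference, reading whatever is immediately available from /dev/random can
be done with O_NONBLOCK; a sketch (the function name is illustrative):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch: read whatever entropy /dev/random has right now, without
    ** blocking. Returns bytes read (0 if the pool is empty), or -1 if
    ** the device cannot be opened.
    */
    static ssize_t read_dev_random_nonblocking(unsigned char *buf, size_t len)
    {
        ssize_t n;
        int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
        if (fd < 0)
            return -1;
        n = read(fd, buf, len);
        if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
            n = 0;  /* nothing available right now */
        close(fd);
        return n;
    }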
SHA-1 is only used to protect the integrity of the internal entropy pool.
The output is SHA-1, but the input of the hash varies. Normally (i.e. with
enough entropy in the pool), each 160 bits of output is backed by 160 bits of
unique, truly random input. Only when it runs out of entropy does /dev/urandom
reuse data in the entropy pool, and even then it applies some feedback (e.g.
with each /dev/urandom read the 64-bit cycle counter is mixed in again).
So it makes sense to use more data from /dev/urandom.

/dev/random doesn't have this fallback; it blocks for new entropy if it runs
out. A hybrid is also possible: use non-blocking reads, check the entropy
count with the RNDGETENTCNT ioctl, and ask the user to move the mouse or press
random keys to refill the pool. Unfortunately this is rather annoying for the
user and, worse, doesn't work very well for a remote X display.

The default size of the entropy pool is 512 bytes for the primary pool plus
128 bytes in a secondary pool, so it probably doesn't make much sense to read
more than 300-600 bytes.
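The entropy-count check mentioned above would look roughly like this
(Linux-specific; RNDGETENTCNT comes from <linux/random.h>):

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/random.h>

    /* Sketch: ask the kernel how many bits of entropy are in the pool,
    ** e.g. to decide whether a read from /dev/random would block.
    */
    static int entropy_bits_available(void)
    {
        int count = -1;
        int fd = open("/dev/random", O_RDONLY);
        if (fd >= 0) {
            if (ioctl(fd, RNDGETENTCNT, &count) < 0)
                count = -1;
            close(fd);
        }
        return count;  /* bits of entropy, or -1 on error */
    }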
I agree that it doesn't make sense to take more than (say) 1K from
/dev/urandom. Our own PRNG will take the seeding bytes provided by
RNG_FileForRNG and mix them up with a SHA-1 hash. Thus, reading more bytes
from /dev/urandom than the size of the internal entropy pool will just cause
double hashing.

This patch, however, still reads 32K. Would you like to provide a patch that
only reads 1K? It won't work to limit totalFileBytes to 1*1024, because
RNG_FileForRNG needs to be general enough to read files that are larger than
1K. I think the way to do it is to have a separate function for reading in the
device file, in lieu of modifying RNG_FileForRNG.
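Something along these lines, perhaps; a sketch of the separate-function idea
with a hypothetical name and an illustrative 1K cap (the real patch's
function is attached below):

    #include <fcntl.h>
    #include <unistd.h>

    /* Sketch: feed at most `limit` bytes from a device file into the
    ** PRNG, leaving RNG_FileForRNG's general file handling untouched.
    */
    static void rng_update_from_device(const char *device, size_t limit)
    {
        unsigned char buf[1024];
        size_t total = 0;
        int fd = open(device, O_RDONLY);
        if (fd < 0)
            return;
        while (total < limit) {
            size_t want = limit - total;
            ssize_t n;
            if (want > sizeof buf)
                want = sizeof buf;
            n = read(fd, buf, want);
            if (n <= 0)
                break;
            RNG_RandomUpdate(buf, (size_t)n);
            total += (size_t)n;
        }
        close(fd);
    }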
I've written an alternative patch for including up to 1KB of data from
/dev/urandom.  It depends on another patch.  I'll attach both patches here.
Ian, if you don't mind, I'll assign this bug to myself.

This first patch changes the declarations of RNG_RandomUpdate and
RNG_FileForRNG to treat their inputs as const, and propagates const
as necessary.
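For illustration, the flavor of the declaration change (paraphrased, not the
exact NSS prototypes):

    /* before: callers had to cast away read-only-ness of their buffers */
    extern SECStatus RNG_RandomUpdate(void *data, size_t bytes);

    /* after: read-only seed buffers can be passed directly */
    extern SECStatus RNG_RandomUpdate(const void *data, size_t bytes);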
This second patch reads in 1KB from /dev/urandom prior to reading any 
data from any of the other files named.
Attachment #46878 - Attachment is obsolete: true
Attachment #57725 - Attachment is patch: true
Comment on attachment 57725 [details] [diff] [review]
redeclare RNG functions to treat their inputs as const

Setting the "patch" flag on attachment 57725 [details] [diff] [review].
Nelson, go ahead and take the bug.  The patch looks good to me.
Taking bug.
Assignee: ian.mcgreer → nelsonb
The above two patches were checked in on the NSS_3_3_BRANCH.
I'm working on porting them to the trunk now.  
Status: NEW → ASSIGNED
Code checked in on both NSS_3_3_BRANCH and on trunk.
If Mozilla wants this code, someone in CPD will have to move the
NSS_CLIENT_TAG on the affected files.
Status: ASSIGNED → RESOLVED
Closed: 23 years ago
Resolution: --- → FIXED
Target Milestone: 3.4 → 3.3.2
Comment on attachment 57726 [details] [diff] [review]
Take 1KB from /dev/urandom, if it exists, prior to other files

This patch looks good to me and I found that Ian
already reviewed the patch.  So Bob, you don't
need to review it again.

Nelson, I have some questions about this patch.

>-    char *files[] = {
>+    static const char * const files[] = {
> 	"/etc/passwd",
> 	"/etc/utmp",
> 	"/tmp",

I have to admit I have trouble deciphering this
double const syntax.  Does the second 'const' mean
the array itself is const, that is, its elements
can't be changed to point to other text strings?

Did you add the second 'const' so that the
compiler can put the array in the text segment?
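For reference, a sketch of what each 'const' means here (standard C, not from
the patch):

    static const char * const files[] = { "/etc/passwd", "/tmp" };

    /* first const:  the characters pointed to are read-only        */
    /* second const: the array elements (the pointers) are read-only */

    /* files[0][0] = 'x';           -- error: the text is const     */
    /* files[0]    = "/etc/group";  -- error: the pointer is const  */

    /* With both consts the array is fully immutable, so the compiler
    ** may place it in a read-only data segment.
    */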

>+#define TOTAL_FILE_LIMIT 1000000	/* one million */

This constant was 1024*1024 in the original code.
I am just curious as to why you changed it to
1000000.

>-	    if (totalFileBytes > 1024*1024) break;
>+	    /* after TOTAL_FILE_LIMIT has been reached, only read in first
>+	    ** buffer of data from each subsequent file.
>+	    */
>+	    if (totalFileBytes > TOTAL_FILE_LIMIT) 
>+		break;

Do you think that we should still read in at least one
buffer of data from each subsequent file after we have
exceeded the total limit?
Comment on attachment 57726 [details] [diff] [review]
Take 1KB from /dev/urandom, if it exists, prior to other files

The only change I'd suggest is that the new
function RNG_FileUpdate can be made static.
This is not a big deal.