Closed Bug 266206 Opened 16 years ago Closed 15 years ago

NSS initialization hangs if file descriptors are unlimited (MAX_INT)


(NSS :: Libraries, defect, P1)



(Not tracked)



(Reporter: julien.pierre, Assigned: julien.pierre)



(2 files)

There is code in unix_rand.c which does the following:

	for (fd = getdtablesize(); --fd > 2; close(fd))

If getdtablesize() returns INT_MAX, then NSS will attempt over two billion
close calls, which takes far too long. This is an actual use case reported to
us. On Solaris, su to root, and in bash, do "ulimit -n unlimited" ...

One possible fix for Solaris (and I believe Linux as well) is to look at the
files open by the current process in /proc, and close selectively. For the
other Unix platforms we may need to leave the code in. It doesn't seem as easy
on the other OSes to set a limit that high anyway; I couldn't actually raise
the file descriptor limit to INT_MAX on Linux, for instance.
Priority: -- → P1
Target Milestone: --- → 3.9.4
The function in question, safe_popen, has other
problems.  Its use of sigaction on SIGCHLD is
not thread-safe, and it calls many unsafe
functions after forking.

We use safe_popen to invoke the ps and netstat
commands.  We need to first ask ourselves
whether we must use safe_popen to do that.
Is there a security risk for ps or netstat to
have access to the open file descriptors?
(Note: safe_popen still gives them access to
descriptors 0, 1, and 2.)

Second, we should do without safe_popen if
possible.  This means we need to do one of
the following:
1. Show that the entropy collected from ps
and netstat output is negligible.
2. Implement what ps and netstat do in NSS.
3. Find new entropy sources to replace ps
and netstat.

You raised very good points and questions.

Jason pointed out to me earlier that closing all the file descriptors before
forking didn't really make much sense, since the permissions of the child
process aren't changing. The child process could still reopen the very same
files that the parent did, if it wanted. The only way to prevent that would be
if the parent process ran as root and changed to a different user before
forking, or did a chroot before forking; but neither is happening here.
Given this, the short-term fix here might be to just remove this file
descriptor closing loop altogether. Do you agree?

Then, we need to figure out whether we need to run these programs for entropy or
not. Several bugs discuss this: bug 156600 for Solaris, and bug 182758 for
Linux. My feeling is that right now we still need to keep running these
programs for entropy generation on Solaris 8, because the /dev/*random device
may not be available.

Regarding other sources of entropy, I think many of them are going to be
platform-specific. Reimplementing netstat and ps equivalents would certainly
require too much platform-specific code, IMO. One thing we could try that's
cross-platform is to read the contents of allocated but uninitialized memory,
but we couldn't rely on that content being one way or the other.
The fd's don't just reference open files.  They
may reference sockets, pipes, etc., which may not
be possible to open new references to.

In any case, I don't understand why we are worried
about 'ps' or 'netstat' having access to the fd's.
My guess is that this is to protect against the risk that the "ps" and
"netstat" being executed are trojan programs.
I believe the descriptors are sequentially allocated by the OS. Most NSS
applications initialize NSS very early after main(), before they have 65536
descriptors open. Thus, for all practical purposes, this patch still causes
NSS to close all the program's descriptors, except 0, 1, and 2.
I have checked in this fix to NSS_3_9_BRANCH

Checking in unix_rand.c;
/cvsroot/mozilla/security/nss/lib/freebl/unix_rand.c,v  <--  unix_rand.c
new revision:; previous revision: 1.13

And to the tip:

Checking in unix_rand.c;
/cvsroot/mozilla/security/nss/lib/freebl/unix_rand.c,v  <--  unix_rand.c
new revision: 1.15; previous revision: 1.14

I'm leaving this bug open for now so we can continue to discuss the remaining
issues.
Comment on attachment 163785 [details] [diff] [review]
close only up to 65536 descriptors

>-	for (fd = getdtablesize(); --fd > 2; close(fd))
>+	for (fd = PR_MIN(65536,getdtablesize()); --fd > 2; close(fd))

It would be a good idea to save the result of
getdtablesize() in a local variable and pass
that instead to PR_MIN.  The reason is that
PR_MIN is a macro, so it will invoke getdtablesize()
twice if getdtablesize() is less than 65536.
checked in to NSS_3_9_BRANCH

Checking in unix_rand.c;
/cvsroot/mozilla/security/nss/lib/freebl/unix_rand.c,v	<--  unix_rand.c
new revision:; previous revision:

and to the tip

Checking in unix_rand.c;
/cvsroot/mozilla/security/nss/lib/freebl/unix_rand.c,v	<--  unix_rand.c
new revision: 1.16; previous revision: 1.15
Marking fixed. I left the target milestone at 3.9.4 since that's the first
release that had the fix. 3.9.5 will have the incremental patch.
Closed: 15 years ago
Resolution: --- → FIXED