Bug 466180 (Closed) - Opened 16 years ago, Closed 16 years ago

SSL_ConfigMPServerSIDCache with default parameters fails on {Net,Open}BSD (in default environment)

Categories: NSS :: Libraries, defect, P3
Hardware: All
OS: NetBSD
Tracking: (Not tracked)
Status: RESOLVED FIXED
Target Milestone: 3.12.3
People: Reporter: mozbgz, Assigned: mozbgz
Attachments: 3 files

NetBSD has a limit of 64 open files per process, by default. Trying to configure an SSL session cache with

  SSL_ConfigMPServerSIDCache(0, 0, 0, NULL);

will therefore fail, because libssl will try to create about 80 pipes (each requiring one pair of FDs) and then run out of descriptors.
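
For reference, a minimal sketch (not part of the original report) of how the failure surfaces to an application: the call returns SECFailure, and the NSPR error code can then be read with PR_GetError():

  #include <stdio.h>
  #include "prerror.h"
  #include "ssl.h"

  int main(void)
  {
      /* All-default parameters; on NetBSD/OpenBSD with the default 64-FD
       * limit, the pipe() calls inside libssl run out of descriptors and
       * the configuration fails. */
      if (SSL_ConfigMPServerSIDCache(0, 0, 0, NULL) != SECSuccess) {
          fprintf(stderr, "session cache config failed, NSPR error %d\n",
                  PR_GetError());
          return 1;
      }
      return 0;
  }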

I would suggest treating NetBSD (and OpenBSD) the same as Linux et al., i.e. setting MAX_SID_CACHE_LOCKS to 8 for these two platforms, too.

Nelson, does the attached patch look like an acceptable solution?
Attachment #349439 - Flags: review?(nelson)
It's acceptable to me if it's acceptable to others in the BSD community.
Can we get something indicating the presence or absence of consensus on
this?  I wouldn't want to commit this change and then get a bunch of hate
mail from BSD users :)
Attachment #349439 - Flags: review?(nelson) → review+
Ok. Before I try to get consensus "from the BSD community" on this, let me ask another question first: on systems where shared unnamed POSIX semaphores are available, they are preferred over the emulation with pipes, correct?
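
For context, a hedged sketch of what "shared unnamed POSIX semaphore" means here (illustration only, with made-up names, not the actual sslmutex.c code):

  #include <semaphore.h>
  #include <sys/mman.h>

  /* A sem_t placed in shared memory and initialized with pshared != 0,
   * so it also works across fork()ed server processes. */
  static sem_t *make_shared_sem(void)
  {
      sem_t *sem = mmap(NULL, sizeof(sem_t), PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANON, -1, 0);
      if (sem == MAP_FAILED || sem_init(sem, /* pshared */ 1, /* value */ 1) < 0)
          return NULL;  /* no usable shared semaphores; fall back to pipes */
      return sem;
  }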
It's been a long time since I worked on this server sid cache locking stuff.
On some platforms, the NSS team itself decided which choice was most 
desirable, and on other platforms we let others speak for the platform 
concerning what was best.  

I think that generally we chose whichever method seemed least constrained,
offering the greatest number of locks.  I seem to recall that on some 
platforms, there was an astonishingly low limit on the available number 
of POSIX semaphores, making them not the preferred choice, even though they
are available.  I don't remember which system that was.  It was undoubtedly
one of the systems on which we use pipes instead of POSIX semaphores, which
are:
 defined(LINUX) || defined(AIX) || defined(VMS) || defined(BEOS) || 
 defined(BSDI) || defined(NETBSD) || defined(OPENBSD)
(In reply to comment #3)
> I think that generally we chose whichever method seemed least constrained,
> offering the greatest number of locks.  I seem to recall that on some 
> platforms, there was an astonishingly low limit on the available number 
> of POSIX semaphores, making them not the preferred choice, even though they
> are available.

What was "astonishingly low" in this context? Since Linux/AIX/VMS are limiting the number of locks to 8 in any case, is that a reasonable setting?

I'm asking because NetBSD introduced support for POSIX semaphores with version 2.0 (released in December 2004). In releases 2.0 through 4.0, the maximum number of POSIX semaphores is set to 30, but has been increased to 128 in the meantime (http://cvsweb.netbsd.org/bsdweb.cgi/src/sys/kern/uipc_sem.c?rev=1.22&content-type=text/x-cvsweb-markup).

So, for NetBSD, switching to semaphores while at the same time limiting MAX_SID_CACHE_LOCKS to 8 would be an option (for NetBSD release 2.0 and later).
> What was "astonishingly low" in this context?

As I recall, it was a system-wide limit that was a 2-digit number. :-/
One process could consume nearly (or perhaps even all of) that system-wide
resource.
Ok, let's stay with pipes for NetBSD for the time being (it's not an option for OpenBSD anyway, since its implementation does not support shared POSIX semaphores).

(In reply to comment #1)
> It's acceptable to me if it's acceptable to others in the BSD community.
> Can we get something indicating the presence or absence of consensus on
> this?

I'm adding Martynas Venckus to the Cc, who is the maintainer of both the NSS and NSPR ports/packages for OpenBSD.

Martynas, the question is whether the proposed fix (attachment 349439 [details] [diff] [review]) is an acceptable solution for OpenBSD and NetBSD. Both platforms have OPEN_MAX set to 64, so trying to create about 80 pipes for the SSL server cache in its default configuration will obviously fail. Are you ok with lowering the limit to 8?

Note that this change only applies to the SSL *server* session ID cache, and an application is actually free to increase that limit at runtime through SSL_SetMaxServerCacheLocks. So the question is mostly whether the default currently in use for Linux/AIX/VMS is also suitable for OpenBSD/NetBSD.
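
For illustration, a server that has already raised its FD limit could opt back into more locks along these lines (a sketch; 32 is an arbitrary example value, not a recommendation from this bug):

  /* SSL_SetMaxServerCacheLocks() must be called before the cache is
   * configured; 32 is purely an example value. */
  if (SSL_SetMaxServerCacheLocks(32) != SECSuccess ||
      SSL_ConfigMPServerSIDCache(0, 0, 0, NULL) != SECSuccess) {
      fprintf(stderr, "cache setup failed, NSPR error %d\n", PR_GetError());
  }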
Yes,  of course.  Please lower it to 8.
Kaspar, one more question.  Given that the FD limit is a per-process 
limitation and not a system wide limitation, doesn't it make sense that a 
server process would raise that limit, or that an administrator would raise
the limit for a server process?  Such a low limit on the number of FDs also 
severely limits the number of sockets/connections that it can serve.  Now, 
if a server does raise those FD limits, do we still wish to give it such a 
low limit on the number of pipes?
Cc'ing another "BSD community" member who happens to have a Bugzilla account...

(In reply to comment #8)
> Now if a server does raise those FD limits, do we still wish to give it such
> a low limit on the number of pipes?

That same question would also apply to Linux/AIX/VMS, actually. Lowering the default limit of cache locks primarily affects the granularity of locking, if I'm correctly understanding this comment (from sslsnce.c):

 * The set of Cache entries are divided up into "sets" of 128 entries.
 * Each set is protected by a lock.  There may be one or more sets protected
 * by each lock.  That is, locks to sets are 1:N.
 * There is one lock for the entire cert cache.
 * There is one lock for the set of wrapped sym wrap keys.
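
For illustration, the 1:N mapping of sets to locks can be pictured roughly like this (made-up names, not the actual sslsnce.c code):

  /* Fewer locks means each lock covers more sets, i.e. coarser locking. */
  #define SID_CACHE_ENTRIES_PER_SET 128

  static unsigned int lockForEntry(unsigned int entryIndex,
                                   unsigned int numSIDCacheLocks)
  {
      unsigned int set = entryIndex / SID_CACHE_ENTRIES_PER_SET;
      return set % numSIDCacheLocks;
  }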

My intention was to make sure that "SSL_ConfigMPServerSIDCache(0, 0, 0, NULL);" does not fail in an obscure way on NetBSD or OpenBSD in their default configuration. If someone is tweaking an NSS-based server application for a BSD environment, then most likely he will not only want to increase the number of FDs, but also tune the maximum number of locks (and will best do that with SSL_SetMaxServerCacheLocks).

Therefore I would still recommend applying the patch - lowering MAX_SID_CACHE_LOCKS shouldn't hurt for standard configurations.

Additionally, what should be fixed in any case is setting a proper NSPR error when pipe() fails. I'll attach an additional patch for that right now.
Summary: SSL_ConfigMPServerSIDCache with default parameters fails on NetBSD (in default environment) → SSL_ConfigMPServerSIDCache with default parameters fails on {Net,Open}BSD (in default environment)
If we return early from sslMutex_Init after a pipe() failure, then we need to make sure that an appropriate error is set.
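Roughly along these lines (a sketch only, not necessarily identical to the attached patch; it assumes the errno-mapping helper from libssl's unix_err.c):

  int fds[2];
  if (pipe(fds) < 0) {
      nss_MD_unix_map_default_error(errno);  /* map errno to an NSPR error */
      return SECFailure;
  }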
Attachment #350651 - Flags: review?(nelson)
Both patches are fine with me, though I wouldn't mind a default config using POSIX semaphores if __NetBSD_Version__ >= 500000000 (that is NetBSD 5.0 or newer).
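
For example, the guard could look something like this (a sketch only; the exact macro layout in sslmutex.h may differ, and USE_POSIX_SEMAPHORES is a hypothetical name used for illustration):

  #if defined(NETBSD) && defined(__NetBSD_Version__) && \
      __NetBSD_Version__ >= 500000000
  /* NetBSD 5.0 and later: shared unnamed POSIX semaphores are usable */
  #define USE_POSIX_SEMAPHORES 1
  #endif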
Comment on attachment 350651 [details] [diff] [review]
Set proper NSPR error when pipe() (in sslMutexInit) fails (checked in on trunk)

looks good to me.

I'll also be happy to review a patch to do as Martin suggested in comment 11.
Attachment #350651 - Flags: review?(nelson) → review+
(In reply to comment #12)
> I'll also be happy to review a patch to do as Martin suggested in comment 11.

Like so? Note that this patch is independent of the second fix (attachment 350651 [details] [diff] [review], which also touches sslmutex.c) - this is why I didn't combine them into one.
Attachment #350778 - Flags: review?(nelson)
Comment on attachment 350778 [details] [diff] [review]
Use POSIX semaphores on NetBSD 5.0 and later releases (checked in on trunk)

This patch looks good.  I'd like someone to state that they've successfully tested an NSS server with this patch on NetBSD 5.0 before approving it.
Running the ssl.sh test script should be a good way to do that.
Comment on attachment 350778 [details] [diff] [review]
Use POSIX semaphores on NetBSD 5.0 and later releases (checked in on trunk)

I committed all 3 patches for this bug on the trunk in one blow.

Checking in sslmutex.c; new revision: 1.23; previous revision: 1.22
Checking in sslmutex.h; new revision: 1.11; previous revision: 1.10
Checking in sslsnce.c;  new revision: 1.49; previous revision: 1.48

Kaspar,
Is there more to be done? or is this bug resolved/fixed now?
Attachment #350778 - Attachment description: Use POSIX semaphores on NetBSD 5.0 and later releases → Use POSIX semaphores on NetBSD 5.0 and later releases (checked in on trunk)
Attachment #350778 - Flags: review?(nelson) → review+
Assignee: nobody → mozbugzilla
Priority: -- → P3
Target Milestone: --- → 3.12.3
Attachment #349439 - Attachment description: Proposed fix - set MAX_SID_CACHE_LOCKS to 8 for {Net,Open}BSD → Proposed fix - set MAX_SID_CACHE_LOCKS to 8 for {Net,Open}BSD (checked in on trunk)
Attachment #350651 - Attachment description: Set proper NSPR error when pipe() (in sslMutexInit) fails → Set proper NSPR error when pipe() (in sslMutexInit) fails (checked in on trunk)
Thanks for committing the patches, Nelson.

(In reply to comment #15)
> Is there more to be done? or is this bug resolved/fixed now?

Nothing left - therefore resolving as FIXED.

(In reply to comment #14)
> (From update of attachment 350778 [details] [diff] [review])
> This patch looks good.  I'd like someone to state that they've successfully
> tested an NSS server with this patch on NetBSD 5.0 before approving it.
> Running the ssl.sh test script should be a good way to do that.

I did that (on 5.0_BETA, actually), and used selfserv with "-M 2" (and higher numbers as well) - no problems with creating the 10 POSIX semaphores and connecting to selfserv afterwards. Note that ssl.sh never starts selfserv in multi-process mode, AFAICT, so it wouldn't run sslMutex_Init in shared mode (and no semaphores would be created).
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
I have received email from one of the people involved with this bug 
indicating that there remain issues on NetBSD 5.0 because the shared 
libraries and/or programs need to be linked with system shared libraries
with which they were not linked before.  I have suggested changes to 
coreconf/NetBSD.mk.

Perhaps that is a separate bug, but if the commits made for THIS bug 
have caused builds to stop working on NetBSD, then IMO, that should be
resolved in this bug.
(In reply to comment #17)
> Perhaps that is a separate bug, but if the commits made for THIS bug 
> have caused builds to stop working on NetBSD, then IMO, that should be
> resolved in this bug.

It's a separate bug, most likely. Tweaking OS_LIBS in NetBSD.mk (adding "-lpthread" or possibly "-lrt -pthread") will make NSS compile, but then other very nasty problems show up... it's actually due to the way a "bundled version" of NSPR is built when using nss_build_all. I'll come up with a patch, but probably not before tomorrow.