Closed Bug 123753 Opened 23 years ago Closed 22 years ago

softoken should not have compile-time configuration parameters for clients

Categories

(NSS :: Libraries, defect, P2)

Tracking

(Not tracked)

RESOLVED FIXED

People

(Reporter: wtc, Assigned: rrelyea)

Attachments

(2 files)

The built-in softoken has several configuration parameters
whose values are controlled by the MOZ_CLIENT macro at
compile time.  This means servers and clients will be
using different softoken binaries.

----------- softoken/pkcs11i.h -----------------
#ifdef MOZ_CLIENT
/* clients care more about memory usage than lookup performance on
 * cryptographic objects. Clients also have fewer objects around to play with.
 *
 * we eventually should make this configurable at runtime! Especially now that
 * NSS is a shared library.
 */
#define ATTRIBUTE_HASH_SIZE 32 
#define SESSION_OBJECT_HASH_SIZE 16
#define TOKEN_OBJECT_HASH_SIZE 32
#define SESSION_HASH_SIZE 32
#else
#define ATTRIBUTE_HASH_SIZE 32
#define SESSION_OBJECT_HASH_SIZE 32
#define TOKEN_OBJECT_HASH_SIZE 1024
#define SESSION_HASH_SIZE 512
#define MAX_OBJECT_LIST_SIZE 800  /* how many objects to keep on the free list
                                   * before we start freeing them */
#endif
----------- softoken/pkcs11i.h -----------------

We should have only one softoken binary.  The configuration
parameters should be customized at run time by environment
variables or flags passed to the NSS initialization function.
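For illustration, a minimal sketch of what a run-time knob could look like inside softoken (all identifiers below are hypothetical, not the actual NSS code; only the numeric values come from pkcs11i.h above):
----------- runtime tuning sketch (hypothetical) -----------------
/* Replace the compile-time constants with values chosen at init time. */
typedef struct SFTKTuningStr {
    unsigned int attributeHashSize;
    unsigned int sessionObjectHashSize;
    unsigned int tokenObjectHashSize;
    unsigned int sessionHashSize;
} SFTKTuning;

/* Server ("optimize for speed") values are the defaults. */
static SFTKTuning sftk_tuning = { 32, 32, 1024, 512 };

/* Called from the init path when the application asks to optimize for space;
 * these are the values previously selected by MOZ_CLIENT at compile time. */
static void sftk_OptimizeForSpace(void)
{
    sftk_tuning.attributeHashSize     = 32;
    sftk_tuning.sessionObjectHashSize = 16;
    sftk_tuning.tokenObjectHashSize   = 32;
    sftk_tuning.sessionHashSize       = 32;
}
----------- runtime tuning sketch (hypothetical) -----------------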
Set target milestone 3.4.1, priority P2.
Priority: -- → P2
Target Milestone: --- → 3.4.1
Changed the QA contact to Bishakha.
QA Contact: sonja.mirtitsch → bishakhabanerjee
Targeting 3.5 for now. It would be nice to have a single version of softoken
for both servers and clients.
Target Milestone: 3.4.1 → 3.5
I just want to point out that environment variables are not an option on NT for 
configuring this. Servers will run as services, and a client running on the same 
machine as the same user as the service would inherit the same environment 
variables and therefore get the server settings.
Therefore it would have to be a configuration option passed to the NSS 
initialization function. That will require a new signature.
I suggest that by default we optimize NSS for speed and
we add a new flag NSS_INIT_OPTIMIZEFORSPACE for NSS_Initialize
for NSS clients that want to optimize NSS for space.
I think this is an excellent suggestion.
Perhaps this could be extended to the build?
Bug 122974 regards building under Solaris with -xO3.
No, I was thinking more along the lines of a run-time call the application would
make. Probably along the lines of:

set client params/set server params, or

optimize space.
optimize speed.

bob
Now that was a non sequitur. My response was supposed to be to Julien, not Kirk.

Kirk, I don't think we can control compiler options at runtime ;). This feature
would change all the compile-time static arrays into allocated arrays whose size
varies according to the runtime options supplied.

bob
Wan-Teh,

Yes, this flag passed in to the init function for server vs client optimization 
sounds like the right thing to do.

I added a flag to NSS_Initialize(): NSS_INIT_OPTIMIZESPACE.

OptimizeSpace is automatically turned on if you use NSS_Init(),
NSS_InitReadWrite(), or NSS_NoDB_Init().
If you call NSS_Initialize() without specifying the flag, it is automatically
turned off (that is, you are automatically in server mode).

Since our server products have all renamed the cert and key database, the only
way they can initialize is with NSS_Initialize(). Our client products and sample
apps, for the most part, all use NSS_Init_*.
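For reference, the two initialization paths described above look roughly like this (the database directory and "secmod.db" name are placeholders; NSS_Initialize(), NSS_Init(), and the flags shown are the real NSS calls, used as described in this comment):
----------- example: choosing the optimization at init time -----------------
#include "nss.h"

void init_examples(void)
{
    SECStatus rv;

    /* Server-style init: no NSS_INIT_OPTIMIZESPACE flag, so the large
     * "optimize for speed" tables are used. */
    rv = NSS_Initialize("/path/to/dbdir", "", "", "secmod.db",
                        NSS_INIT_READONLY);

    /* Client-style init: request the small tables explicitly. */
    rv = NSS_Initialize("/path/to/dbdir", "", "", "secmod.db",
                        NSS_INIT_READONLY | NSS_INIT_OPTIMIZESPACE);

    /* Or use one of the wrappers, which turn OptimizeSpace on automatically. */
    rv = NSS_Init("/path/to/dbdir");
    (void)rv;
}
----------- example: choosing the optimization at init time -----------------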
Bob,

Comments :

I have a client that needs the performance settings of a server. It currently uses 
NSS_Init. This is the test client for the web server. I guess this client will 
have to be changed to call NSS_Initialize.
Just in case another client app in the future wanted more flexibility WRT 
database names, I'd prefer for it to be possible to specify both modes through 
the flags. E.g., have both NSS_INIT_OPTIMIZESPACE and NSS_INIT_OPTIMIZESPEED, and 
if neither is set, default to NSS_INIT_OPTIMIZESPEED. This would be useful in 
case the defaults get changed in the future.

Also, I just remembered that the server does not always call NSS_Initialize. It 
first tries to do that, but if that fails (most likely because there are no 
databases), it will then call NSS_NoDB_Init. This is so that outgoing 
SSL connections, for example for LDAPS, can still work. I don't think we want to 
have the NSS space optimizations in that case. That is not what we have today.
Perhaps a separate API is needed to toggle the optimization setting before the 
initialization function is called.
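The fallback pattern Julien describes looks roughly like the sketch below (paths and the function name are placeholders; the NSS calls are real, and NSS_NoDB_Init() is one of the wrappers that turns OptimizeSpace on automatically, which is the behavior being questioned here):
----------- example: server fallback to NSS_NoDB_Init -----------------
#include "nss.h"

SECStatus server_nss_init(const char *dbdir)
{
    /* Try the databases first. */
    SECStatus rv = NSS_Initialize(dbdir, "", "", "secmod.db", 0);
    if (rv != SECSuccess) {
        /* No databases: fall back so outgoing SSL (e.g. LDAPS) still works. */
        rv = NSS_NoDB_Init(NULL);
    }
    return rv;
}
----------- example: server fallback to NSS_NoDB_Init -----------------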
Julien,

If you are calling NSS_Initialize() you can explicitly select. Not specifying
OPTIMIZESPACE implies OPTIMIZESPEED. If you want to, you can define
NSS_INIT_OPTIMIZESPEED as 0 and you have exactly the semantics you were requesting.

As far as providing a new API, that shouldn't be necessary. Everything the three
main init functions (NSS_Init, NSS_InitRW, NSS_NoDB_Init) do can be accomplished
using NSS_Initialize.

As far as the servers using NSS_NoDB_Init() for LDAP connections, I don't see
that as a problem. Our speed implementations have been tuned for serving SSL
connections, not for using SSL as a client. There are plenty of other issues on
the client path that would prevent even noticing the contention issues which the
large data structures in OPTIMIZESPEED mode prevent.

bob
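An application that wants Julien's explicit "speed" spelling could follow Bob's suggestion with an application-side macro; this alias is not an NSS-defined flag, and the function name below is illustrative:
----------- example: application-defined OPTIMIZESPEED alias -----------------
#include "nss.h"

/* Bob's point: "speed" is simply the absence of OPTIMIZESPACE, so an
 * application can define its own zero-valued alias for readability. */
#ifndef NSS_INIT_OPTIMIZESPEED
#define NSS_INIT_OPTIMIZESPEED 0
#endif

SECStatus init_for_speed(const char *dbdir)
{
    return NSS_Initialize(dbdir, "", "", "secmod.db",
                          NSS_INIT_OPTIMIZESPEED); /* == 0: server defaults */
}
----------- example: application-defined OPTIMIZESPEED alias -----------------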
Checked into NSS 3.6.
Status: NEW → RESOLVED
Closed: 22 years ago
Resolution: --- → FIXED
Target Milestone: 3.5 → 3.6
This patch has been checked in.
I missed this in the previous patch.  This patch has also
been checked in.