Closed Bug 338601 Opened 14 years ago Closed 10 years ago
System Files is slow
I was looking at some Quantify data, and it looks like most of NSS initialization time on Windows is spent in ReadSystemFiles (win_rand.c). I'll attach the annotated source for the file. As you can see, most of the time is spent in findnext, so it's the actual enumeration that's slow. One obvious win would be to walk the system directory only once instead of twice. I'm still a little worried about the performance, though: I have 2300 files in my system directory and I suspect that's pretty typical. Is there a way we could get a smaller set of filesystem data for entropy? Maybe use the desktop directory instead, or something?
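For illustration, a single-pass walk that mixes each entry's metadata into an entropy pool can be sketched as below. This is not the actual win_rand.c code: POSIX opendir/readdir stands in for the Win32 findfirst/findnext loop, and mix_into_pool is a hypothetical placeholder for NSS's real mixing function.

```c
/* Sketch only: single-pass directory walk feeding an entropy pool.
 * POSIX opendir/readdir stands in for the Win32 findfirst/findnext
 * loop in win_rand.c; the XOR "pool" is a placeholder, not NSS's
 * real mixing function. */
#include <dirent.h>
#include <sys/stat.h>
#include <string.h>
#include <stdio.h>

static unsigned char pool[64];

/* Hypothetical stand-in for the RNG update call. */
static void mix_into_pool(const void *data, size_t len)
{
    const unsigned char *p = data;
    for (size_t i = 0; i < len; i++)
        pool[i % sizeof pool] ^= p[i];
}

/* Walk `dir` exactly once, mixing each entry's name and stat
 * metadata (sizes and timestamps vary between machines and boots).
 * Returns the number of entries processed, or -1 on error. */
static int walk_once(const char *dir)
{
    DIR *d = opendir(dir);
    if (!d)
        return -1;
    int n = 0;
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        char path[1024];
        struct stat st;
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        mix_into_pool(e->d_name, strlen(e->d_name));
        if (stat(path, &st) == 0) {
            mix_into_pool(&st.st_size, sizeof st.st_size);
            mix_into_pool(&st.st_mtime, sizeof st.st_mtime);
        }
        n++;
    }
    closedir(d);
    return n;
}
```

The point of the single pass is that the expensive part is the enumeration itself, so halving the number of walks halves that cost regardless of how the metadata is mixed in.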
This is a copy and paste from the annotated source window. I added column labels at the left but I had to abbreviate a bit to get them to fit. From left to right, they are: Line time, L+D time, Percent of Function time, Percent of F+D time, Line number
oops wrong nelsonb i think
I think the desktop or My Documents would be (a) much faster and (b) perhaps even more random. The number of files in the system directory almost never changes with use. I might also suggest using volume information such as total disk size and amount of free space. Hopefully, these sources can make up for losing the system directory entropy.
What is "D time"? Contributed patches are welcome. Ideally any changes will work on Win98, Win2K, WinXP, and WinCE. New code should not depend on the latest MSVC or latest run time.
(In reply to comment #4)
> What is "D time"?

D = Descendant. So "F time" is just the time spent executing in function foo, while "F+D time" also includes time spent in other functions called from foo.
Please apply the second patch for bug 331404 to your tree, and then remeasure your results, as above. I'd like to know what impact that change has on the results, if any.

Also, what are the units of "L+D time"? Seconds? (or milliseconds?) Am I to believe that your process spent over 35 seconds in ReadSystemFiles? That's WAY out of line with anything I've ever experienced. I experience 30-40 milliseconds, at most! That doesn't seem like an inordinately long period of time to spend on initializing a PRNG.

Is there anything unusual about your system that might explain that LONG time?? Are you running on an old 100 MHz 486? I'm trying to gauge the real severity of the problem here. 35 seconds is so bad that it's P1 to be investigated, but it also begs the question of why your system is 3 orders of magnitude slower than mine! OTOH, 35 milliseconds doesn't seem like a problem for something done once per process.
Version: unspecified → 3.11
35ms is huge for startup time. People work for weeks to get that kind of improvement.
(In reply to comment #6)
> Please apply the second patch for bug 331404 to your tree, and then
> remeasure your results, as above. I'd like to know what impact that
> change has on the results, if any.

I'll try that when I have a chance.

> Also, what are the units of "L+D time"? Seconds? (or milliseconds?)

It's certainly not seconds. It's probably milliseconds.

> Is there anything unusual about your system that might explain that LONG time??
> Are you running on an old 100 Mhz 486?

It's a 1.7GHz Pentium M laptop. Basically, the reason I think this is worth investigating is that thus far in Firefox we've been able to defer NSS initialization until the user visits an SSL site. But it's easy for an extension to (knowingly or not) trigger this much earlier. So ideally we'd like to lower the cost of NSS initialization - I found that if it happens during startup, it accounts for about 5% of overall startup time.
See bug 322529 comment 33. Does ReadSystemFiles give any bits of entropy, given the known read-every-M-files modulus and the likely-known state of the system in light of Windows Update? From bug 322529 comment 30 it seems modern Windows has an API akin to Ted T'so's /dev/random. If so, can we retire ReadSystemFiles altogether? /be
A slow ReadSystemFiles operation has come up in bug 501605 - some users complain about a delay of 5 minutes. (I'm keeping the discussion here, to prevent too much noise on bug 501605.) If I compare the Unix version of RNG_SystemInfoForRNG() with the Windows version, I can see that the Windows version reads various files in temp and other directories. Unix reads /dev/urandom first, before also reading a number of files, but a much smaller number (for instance, it reads /tmp as a file instead of trying to walk it as a directory). Maybe Windows can call CryptGenRandom here too, so that it doesn't have to read as many files to collect enough entropy (see the discussion in bug 501605 about limiting the number of files during the walk). Reading the files is still necessary, so as not to depend too much on the implementation of CryptGenRandom or /dev/urandom.
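The seeding order described above (OS RNG first, file walk only as a supplement) could be sketched roughly as follows. This is an illustrative sketch of the Unix-style pattern, not NSS code: /dev/urandom plays the role that CryptGenRandom would on Windows, and the pool/mixing function is a placeholder.

```c
/* Sketch: take the bulk of the seed from the OS RNG first
 * (/dev/urandom here; CryptGenRandom would fill the same role on
 * Windows), so the file walk only needs to supplement it. The pool
 * and mixing function are illustrative, not NSS's. */
#include <stdio.h>

static unsigned char pool[64];

static void mix_into_pool(const unsigned char *p, size_t len)
{
    for (size_t i = 0; i < len; i++)
        pool[i % sizeof pool] ^= p[i];
}

/* Returns the number of OS-RNG bytes mixed in, or -1 if the OS RNG
 * is unavailable (caller would then rely on the file walk alone). */
static long seed_from_os_rng(void)
{
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f)
        return -1;
    unsigned char buf[64];
    size_t got = fread(buf, 1, sizeof buf, f);
    fclose(f);
    mix_into_pool(buf, got);
    return (long)got;
}
```

With a strong OS RNG seeding the pool up front, the directory walk becomes defense in depth rather than the primary entropy source, which is what makes limiting it safe.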
Note that I found a similar problem, if we find that the directory walk is slow and can be improved (limit number of files, faster enumeration, whatever). RNG_SystemRNG will call rng_systemFromNoise, which calls rng_systemJitter (possibly multiple times), which also does an enumeration of the system files, calling ReadOneFile for exactly one file. ReadOneFile reads the entire file (1K at a time), but doesn't do anything with it. It's only used to advance the clock a bit, for use in RNG_GetNoise. Enumeration seems to be slow, but this function reads in an entire file too. What if that just happens to be a 100 MB file?
Flags: blocking1.9.2? → blocking1.9.2+
(In reply to comment #11)
> Enumeration seem to be slow, but this function reads in an entire file too.
> What if that just happens to be a 100 MB file ?

I forgot to add that RNG_FileForRNG (which is used in ReadSystemFiles) won't read more than 250K for all files combined (with a minimum of 1K per file), while ReadOneFile will always read the entire file, 1K at a time.
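The bounded-read behavior described above (a 250K cap across all files, so a pathological 100 MB file costs at most the remaining budget) can be sketched like this. The constants mirror the ones quoted in the comment, but the code and mixing function are illustrative placeholders, not RNG_FileForRNG itself.

```c
/* Sketch of capped file reading for entropy: like RNG_FileForRNG,
 * bound the total bytes consumed across all files, instead of
 * reading whole files the way ReadOneFile does. Constants mirror
 * the ones quoted in this bug; the pool is a placeholder. */
#include <stdio.h>

#define TOTAL_BUDGET  (250L * 1024)   /* cap across all files */
#define MIN_PER_FILE  1024L           /* smallest useful read */

static unsigned char pool[64];
static long total_used;               /* running total across calls */

static void mix_into_pool(const unsigned char *p, size_t len)
{
    for (size_t i = 0; i < len; i++)
        pool[i % sizeof pool] ^= p[i];
}

/* Mix in bytes from `path`, honoring the global budget.
 * Returns bytes consumed (0 once the budget is exhausted), so even
 * a 100 MB file can never cost more than the remaining budget. */
static long mix_file_bounded(const char *path)
{
    long budget = TOTAL_BUDGET - total_used;
    if (budget < MIN_PER_FILE)
        return 0;
    FILE *f = fopen(path, "rb");
    if (!f)
        return 0;
    unsigned char buf[1024];
    long used = 0;
    while (used < budget) {
        size_t want = sizeof buf;
        if ((long)want > budget - used)
            want = (size_t)(budget - used);
        size_t got = fread(buf, 1, want, f);
        if (got == 0)
            break;
        mix_into_pool(buf, got);
        used += (long)got;
    }
    fclose(f);
    total_used += used;
    return used;
}
```

Applying the same cap to the ReadOneFile path would remove the worst case described in comment #11, since the read loop stops at the budget rather than at end of file.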
Don't think we need this to fix bug 501605, but it would help. Not blocking on this specific expression of the fix, though.
(I know nothing of ReadSystemFiles, but reading through bug 501605 brought this to mind ... Hope it helps in some way.) If you're enumerating potentially large directory trees, it need not necessarily be slow. At least in NTFS file systems, it can be done efficiently. See what this fellow has done: http://www.voidtools.com/ . (And while I'm here, it has always stuck, perhaps incorrectly, in my mind that having large numbers of files in a single directory was inherently inefficient. Another sticky thought is that a once-large directory whose files have all since been deleted is still inefficient, because the directory structure <file pointers is likely the wrong word> is not reduced (as in, it does not resize dynamically). Maybe this all applied to FAT file systems?) http://www.pcqanda.com/dc/dcboard.php?az=show_topic&forum=2&topic_id=475805#476038
This bug and bug 501605 are mostly duplicates. The same patch fixes both. However, this bug was about 3.11 and bug 501605 is about 3.12.3, which is much worse. So I'm going to mark them fixed separately. Checking in win_rand.c; new revision: 1.26; previous revision: 1.25
Status: NEW → RESOLVED
Closed: 11 years ago
Resolution: --- → FIXED
Target Milestone: --- → 3.12.4
Sigh. It appears to me that this fix was committed only on the trunk for NSS 3.12 and not on the branch for 3.11.11 :(
Assignee: nobody → nelson
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Target Milestone: 3.12.4 → 3.11.11
Samuel, this fix is already in ff 3.5.1, IINM.
This bug is fixed in NSS 3.12.3. See bug 501605. We're not going to do another 3.11.x release.
Status: REOPENED → RESOLVED
Closed: 11 years ago → 10 years ago
Resolution: --- → WONTFIX