scalability limited on 4-socket Westmere by hot lock for RSA blinding params list



8 years ago


(Reporter: aleksey.v.ignatenko, Assigned: aleksey.v.ignatenko)




Firefox Tracking Flags

(Not tracked)


(Whiteboard: 4_3.12.10)


(4 attachments, 10 obsolete attachments)

219.38 KB, application/pdf
227.25 KB, application/pdf
192.50 KB, application/pdf
14.46 KB, patch
: review+
: superreview+
Details | Diff | Splinter Review


8 years ago
User-Agent:       Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20100914 Firefox/3.6.10
Build Identifier: 3.12.6

I'm continuing the work of Serguei, who reported scalability issues in NSS bugs 559508 and 559510. I use NSS 3.12.6; HW: Westmere EX (80 logical cores).
CPU utilization with the patches from those bugs (559508, 559510) applied is ~86-87% when running the crypto.rsa benchmark from SPECjvm2008. The low CPU utilization is due to a synchronization hot spot in the get_blinding_params function (rsa.c) on "blindingParamsList.lock". I propose 2 alternative solutions, giving up to 12-14% performance gain for crypto.rsa and increasing CPU utilization up to 95-96%.

1. Switch nssRSAUseBlinding to FALSE, which stops the use of blinding (see the patch in the attachment rsa_UseBlinding_Off.patch). This increases crypto.rsa performance by up to 14% and CPU utilization to ~96%.

2. Increase the number of blinding parameter reuses by 10 times. See the proposed patch (rsa_max_reuse.patch). With RSA_BLINDING_PARAMS_MAX_REUSE == 50 there are 355000 calls to generate_blinding_params during a crypto.rsa run; with RSA_BLINDING_PARAMS_MAX_REUSE == 500 there are only 37000. That reduces pressure on "blindingParamsList.lock", and CPU utilization goes up. The crypto.rsa score increases by ~12-13%. Here are the experiments with RSA_BLINDING_PARAMS_MAX_REUSE and the performance impact on crypto.rsa; you can see that RSA_BLINDING_PARAMS_MAX_REUSE = 500 is the optimal value.
REUSE	crypto.rsa score
10	1505
25	5493
50	5778
100	5615
200	6134
300	6332
400	6439
500	6460
600	6462
700	6372
900	6371

Formally, the first approach gives better opportunities for performance, as it completely removes "blindingParamsList.lock".
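As background for why blinding matters here: to compute m = c^d mod n without leaking timing about d, the input is multiplied by f = r^e mod n and the result by g = r^-1 mod n, and one common reuse update squares both cached values. A toy, non-NSS sketch with textbook-sized parameters (all names and constants are illustrative, not the real rsa.c code):

```c
#include <stdint.h>

/* Toy RSA parameters: p = 61, q = 53, n = 3233, phi = 3120. */
static const uint64_t N = 3233, E = 17, D = 2753, PHI = 3120;

/* Square-and-multiply modular exponentiation (small-modulus demo). */
uint64_t modexp(uint64_t b, uint64_t e, uint64_t n) {
    uint64_t r = 1;
    b %= n;
    while (e) {
        if (e & 1) r = (r * b) % n;
        b = (b * b) % n;
        e >>= 1;
    }
    return r;
}

/* Decrypt c with blinding based on a random r with gcd(r, N) == 1. */
uint64_t blinded_decrypt(uint64_t c, uint64_t r) {
    uint64_t f = modexp(r, E, N);       /* blinding factor */
    uint64_t g = modexp(r, PHI - 1, N); /* r^-1 mod N, via Euler's theorem */
    uint64_t blinded = (c * f) % N;     /* the exponentiation an attacker times */
    return (modexp(blinded, D, N) * g) % N;  /* == c^D mod N */
}

/* One "reuse" step: squaring f and g yields the pair for r^2. */
void reuse_pair(uint64_t *f, uint64_t *g) {
    *f = (*f * *f) % N;
    *g = (*g * *g) % N;
}
```

Reuse is what RSA_BLINDING_PARAMS_MAX_REUSE bounds: each squaring step stands in for one more reuse of the cached pair instead of generating a fresh r.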

Reproducible: Always


8 years ago
Keywords: perf

Comment 1

8 years ago
Created attachment 478748 [details] [diff] [review]

Comment 2

8 years ago
Created attachment 478749 [details] [diff] [review]
Attachment #478749 - Attachment is patch: true
Attachment #478749 - Attachment mime type: application/octet-stream → text/plain
Comment on attachment 478748 [details] [diff] [review]

we won't be doing this.
Attachment #478748 - Flags: review-
Comment on attachment 478749 [details] [diff] [review]

Blinding is done to thwart one of the earliest known attacks against RSA.
The original implementation computed a new blinding value for EVERY use,
and so had no "blinding list" and needed no lock to protect that list. 
The change to use the list and the lock was seen as a big performance 
improvement, but at added risk.  50 was opined by a small number of noted
cryptographers to be an UPPER BOUND on the number of safe reuses.  500 is
obviously WAY OVER that limit.  

So, think about other solutions to this issue.  Does it now make sense to 
return to computing a new value for each use and having no list and no lock?
Summary: Bad scalability on 4-socket Westmere caused by low value RSA_BLINDING_PARAMS_MAX_REUSE (nssRSAUseBlinding mode) → Bad scalability on 4-socket Westmere caused by hot lock for RSA blinding params list
Summary: Bad scalability on 4-socket Westmere caused by hot lock for RSA blinding params list → scalability limited on 4-socket Westmere by hot lock for RSA blinding params list


8 years ago
Ever confirmed: true

Comment 5

8 years ago
Here is update to the bug with a new solution proposal to fix the issue. 

I tried generating parameters for every use, per Nelson's proposal. This approach showed ~15% crypto.rsa performance degradation for 1 thread. For 80 threads it causes ~20% crypto.rsa degradation due to a ~10% decrease in CPU utilization: a different synchronization point is stressed instead of the blinding parameters lock, namely the Random Global Generator lock. The performance gain from blinding parameter reuse is significant, so I looked for a different solution.

The idea of the proposed patch (blinding_params_concurrent.patch) is to make threads that would otherwise wait on the blinding parameters lock do some useful work: generate blinding values {f,g} for a parameter in parallel. Generation of blinding values can be concurrent; the lock is required only for reads/updates of the blinding parameters list. A cache of blinding values (blindingParams) was added to each blinding parameter to avoid the overhead of generating a value for every use. When a thread comes to get a blinding value and the parameter's reuse counter is > 0, the current value is reused. When the reuse count is exhausted (== 0), we take new blinding values from the cache or, if the cache is empty, regenerate values for the current parameter. If the reuse count is < 0, each arriving thread generates one value and adds it to the parameter's cache of blinding values. Generation of blinding values is now concurrent; synchronization covers only the blinding parameters list and the parameter cache.

I've also done a small extension of the patch (blinding_params_concurrent_cache_limit.patch) which limits the number of blinding values stored in the cache (bounded by the RSA_BLINDING_PARAMS_MAX_CACHE_SIZE constant). RSA_BLINDING_PARAMS_MAX_CACHE_SIZE should balance blinding parameter throughput against cache size. For the 80-thread system I chose a max cache size of 30 (50 reuses + 30 blinding value generations).

This simple model avoids long thread suspensions on the blinding parameters lock and shows good performance results on Westmere EX: the crypto.rsa score increases by ~11% and CPU utilization by ~10%. There is also no performance degradation for a 1-thread run with the proposed approach.
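A rough, single-threaded model of the scheme just described may make the counter/cache interplay concrete. All identifiers are invented for illustration; the real patch additionally takes the list lock around these operations:

```c
#include <stdlib.h>

/* counter > 0: reuse the current pair; counter == 0: refill from the
   cache, or generate in place if the cache is empty.  bs_pregenerate()
   models a thread that, instead of blocking on the lock, produces a
   pair and parks it in the cache. */
enum { MAX_REUSE = 50, MAX_CACHE = 30 };

typedef struct BlindValue {
    unsigned f, g;
    struct BlindValue *next;
} BlindValue;

typedef struct {
    unsigned f, g;      /* current blinding pair */
    int counter;        /* remaining reuses of (f, g) */
    BlindValue *cache;  /* pregenerated pairs */
    int cacheLen;
    int generated;      /* how many from-scratch generations happened */
} BlindState;

static void generate_pair(BlindState *bs, unsigned *f, unsigned *g) {
    bs->generated++;
    *f = 4; *g = 2;     /* placeholder values, not real blinding math */
}

void bs_init(BlindState *bs) {
    bs->f = bs->g = 0;
    bs->counter = 0;
    bs->cache = NULL;
    bs->cacheLen = 0;
    bs->generated = 0;
}

/* A would-be waiter generates a pair and adds it to the cache. */
int bs_pregenerate(BlindState *bs) {
    BlindValue *bv;
    if (bs->cacheLen >= MAX_CACHE)
        return 0;                       /* cache full, per the limit patch */
    bv = malloc(sizeof *bv);
    if (bv == NULL)
        return 0;
    generate_pair(bs, &bv->f, &bv->g);
    bv->next = bs->cache;
    bs->cache = bv;
    bs->cacheLen++;
    return 1;
}

void bs_get(BlindState *bs, unsigned *f, unsigned *g) {
    if (bs->counter > 0) {              /* cheap path: reuse current pair */
        bs->counter--;
    } else if (bs->cache != NULL) {     /* exhausted: refill from the cache */
        BlindValue *bv = bs->cache;
        bs->cache = bv->next;
        bs->cacheLen--;
        bs->f = bv->f;
        bs->g = bv->g;
        free(bv);
        bs->counter = MAX_REUSE - 1;
    } else {                            /* cache empty: generate in place */
        generate_pair(bs, &bs->f, &bs->g);
        bs->counter = MAX_REUSE - 1;
    }
    *f = bs->f;
    *g = bs->g;
}
```

In this model, one generation covers MAX_REUSE uses, and pregenerated pairs absorb the refills that would otherwise serialize on the lock.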

Comment 6

8 years ago
Created attachment 487321 [details] [diff] [review]

Comment 7

8 years ago
Created attachment 487322 [details] [diff] [review]
Created attachment 495321 [details] [diff] [review]
Aleksey's blinding_params_concurrent.patch as CVS diff

I converted Aleksey's last two patches to CVS diffs so that bugzilla can 
compare them.  I am attaching them now.
Attachment #487321 - Attachment is obsolete: true
Created attachment 495322 [details] [diff] [review]
Aleksey's blinding_params_concurrent_cache_limit.patch as CVS diff
Attachment #487322 - Attachment is obsolete: true
Created attachment 495324 [details] [diff] [review]
Modified blinding_params_concurrent.patch

While I reviewed Aleksey's blinding_params_concurrent.patch, I edited it
to fix small stylistic issues and to make it compile with some of our
pre-C99 compilers, which don't accept variable definitions in the middle
of a basic block.  I also made some other changes, which I will explain in
a subsequent review comment.
Comment on attachment 495321 [details] [diff] [review]
Aleksey's blinding_params_concurrent.patch as CVS diff

There were numerous problems with this patch. Some were so minor that, if they had been the only problems, I would have just fixed them and committed the result without any negative review.  But others necessitated a negative review.

The minor trivial stuff included:
1) lines longer than 80 columns.
2) lines not indented a multiple of 4 characters.
3) open brace not on the same line as "if".
4) struct name didn't end in "Str" to distinguish it from the typedef name.
5) variables defined in the middle of a basic block.  While allowed by C99,
NSS is supported on platforms that lack C99 compilers, so we don't
allow it in NSS code.  Trivially fixed by moving the definition up to
the beginning of whatever block it is in, while leaving the assignment
where it is.

The more serious problems were:

6) not checking whether an allocation call returned NULL.
7) leaking digit arrays in mp_ints.
8) always trying to release the lock at label "Cleanup", whether or not we were holding the lock when we got there.

I have attempted to address all these problems in my version of the patch which I attached above.  However, I think there are many more improvements that can be made to the blinding code, so I'm not submitting my patch for consideration.  

Aleksey, feel free to incorporate my changes into your next patch.

Here are some suggestions for further improvement.  
a) Add an array of blindingParams structs and a free list pointer in the declaration of RSABlindingParamsStr.  Then, never allocate and free blinding params structs.  During initialization of a RSABlindingParamsStr put all the blindingParams structs in that RSABlindingParamsStr's array onto that RSABlindingParamsStr's free list.  Then take an item from the free list instead of allocating, and put items back on the head of that list instead of freeing.  The dimension of the array will obviously be your new RSA_BLINDING_PARAMS_MAX_CACHE_SIZE macro.  

Instead of dealing with a counter that has gone too negative, you'll be dealing with an empty free list.  In that case, you'll have to wait like
we did before.  I'll comment on the problem with that separately.
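A minimal sketch of suggestion (a), with illustrative names and locking omitted: the fixed array and its free list live in the per-key struct, so steady state never calls an allocator:

```c
#include <stddef.h>

/* Dimension of the embedded array == the new cache-size macro. */
enum { MAX_CACHE_SIZE = 8 };

typedef struct BpStr {
    struct BpStr *next;
    int counter;
} Bp;

typedef struct RSABlindingParamsStr {
    Bp array[MAX_CACHE_SIZE];   /* all blindingParams storage lives here */
    Bp *free;                   /* head of the free list */
} RSABlindingParams;

/* At init, thread every array element onto the free list. */
void rbp_init(RSABlindingParams *rbp) {
    int i;
    for (i = 0; i < MAX_CACHE_SIZE - 1; i++)
        rbp->array[i].next = &rbp->array[i + 1];
    rbp->array[MAX_CACHE_SIZE - 1].next = NULL;  /* last element ends the list */
    rbp->free = &rbp->array[0];
}

/* Take from the free list instead of allocating. */
Bp *rbp_take(RSABlindingParams *rbp) {
    Bp *bp = rbp->free;
    if (bp != NULL) {
        rbp->free = bp->next;
        bp->next = NULL;
    }
    return bp;                  /* NULL means "empty: caller must wait" */
}

/* Push back onto the head of the list instead of freeing. */
void rbp_put(RSABlindingParams *rbp, Bp *bp) {
    bp->next = rbp->free;
    rbp->free = bp;
}
```

An empty free list then replaces the "counter gone too negative" case: rbp_take() returning NULL is the moment a thread has to wait.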
Attachment #495321 - Flags: review-
Comment on attachment 495322 [details] [diff] [review]
Aleksey's blinding_params_concurrent_cache_limit.patch as CVS diff

In my review of this patch, I will only review the differences between this 
patch and Aleksey's previous one.  Those differences are small.

I may be wrong (and if so, please tell me), but it appears to me that the blinding_params_concurrent_cache_limit patch has the property that when the "cache" (I think the word "queue" is better) of unused blinding params reaches the limit, the next thread to run into it will spin, continually decrementing the counter and going back to start, locking and unlocking the list lock as fast as it can, until the counter underflows!!  Maybe I'm not thinking about this clearly.  If not, be gentle. :)

One other issue I see with this design: the counter is only associated with the values in the RSABlindingParameters.  Suppose those get "used up" and some thread starts to recalculate some new values, but for some reason that thread gets pre-empted.  Another thread comes along, finds the count is negative, calculates a new set of blinding parameters, and adds them to the queue (cache).  So now there is a usable set of blinding parameters, but the counter is still negative because it's associated with the set at the head of the list.  So the next thread that comes along will generate yet another set of parameters, rather than using the usable set.  This will continue until that first thread finishes.

I have a solution to this, which is to have a separate counter for each set of blinding params, and no special set at the head.  You get the lock, find the RSABlindingParams for your key, then walk down the list of bps for that RSABlindingParams looking for one with a positive counter.  You stop and use the first one you find, after decrementing its value.
Attachment #495322 - Flags: review-
Created attachment 495345 [details] [diff] [review]
Big restructuring of RSA blinding

This patch implements the ideas I have previously described.  
I don't have a cpu with many cores and threads, so I cannot determine what 
effect it will have on performance.  Please let me know.

Comment 14

8 years ago
Thanks for the review! I've measured your approach and there is just a 0.5% performance gain instead of the expected 10-11%. So I've started looking into your proposal.

Comment 15

8 years ago
First of all, please ignore the statement about a 0.5% performance gain in the previous comment. That was due to a HW configuration problem.

I've reviewed the "Big restructuring of RSA blinding" patch and consider it definitely the best solution, due to much more flexibility for multiple threads generating a new value. Nelson has already written about it.

I've measured the performance gain of the patch and here are the results (used 3.12.6 as base + 2 patches from bugs 559508 and 559510, see the bug description for details):
the overall performance boost is ~12-13% on 80 threads, and CPU utilization goes from ~80% up to ~90-93% (the difference from the previous measurements in comment 5 is due to a HW settings change).
The performance gain seems just ~1-2% less than that obtained with blinding_params_concurrent.patch.

I have no comments on the implementation; the only minor one is to use a macro for the PR_Sleep value. That could be useful when systems with > 100 threads (50 reuses + 50 threads generating parameters) come to the market.
+ #define WAIT_FOR_BP_VALUE 50
+ PR_Sleep(PR_MillisecondsToInterval(WAIT_FOR_BP_VALUE));

Now I wonder: what are the next steps to integrate this solution?
Thanks.  Your idea of putting the threads to use pregenerating values
rather than being blocked on a lock was my inspiration.  Prior to that
time, I had been trying to minimize the effort expended on generating
blinding pairs.  When I realized that that was counter-productive, the rest
fell out.  I think maybe there's an opportunity for a paper here.  Or maybe
this is all well known to everyone but me. :)

The next step is to measure the effect of the value of the macro
RSA_BLINDING_PARAMS_MAX_CACHE_SIZE  on performance.  I set it to 50 
as a wild guess.  Maybe that's way too high.  Or maybe it's too low. 
I expect there is a curve with a "knee" at a certain point, below 
which the performance drops off quickly, and above which the performance
does not improve significantly.  I'd like to set the value of 
RSA_BLINDING_PARAMS_MAX_CACHE_SIZE to a value just above that knee.  

Please do more tests to find the ideal number for your CPU.  Perhaps you 
could create a table of your results showing how some performance metric 
value varies as RSA_BLINDING_PARAMS_MAX_CACHE_SIZE varies.

Likewise, let me suggest that you play with the value of WAIT_FOR_BP_VALUE 
and publish those results, too.  

I suspect that we should set those numbers differently according to the type 
of CPU in use and the number of threads that it has.  It seems intuitively obvious to me that there's no point in having more entries in that cache than there are threads to generate them.  So, the number of CPU threads is an upper bound for RSA_BLINDING_PARAMS_MAX_CACHE_SIZE.  I'd guess that the amount of 
wall clock time that it takes a single thread to generate one blinding pair 
is a lower bound for the wait time WAIT_FOR_BP_VALUE, but I have no idea 
what that is for your CPU.
> I think maybe there's an opportunity for a paper here.

To be clear, I think You, Aleksey, have an opportunity to write a paper here.
You can list me as a collaborator. :)
Assignee: nobody → aleksey.v.ignatenko
Priority: -- → P2
Target Milestone: --- → 3.13
Version: unspecified → trunk
Aleksey wrote:
> The performance gain seems just ~1-2% less than those obtained with
> blinding_params_concurrent.patch.  

Less?  I expected more.  Hmm.  Maybe RSA_BLINDING_PARAMS_MAX_CACHE_SIZE 
is too small in my patch?  My patch fixed a couple leaks in the concurrent 
patch.  Maybe the time taken to free those blocks is the cause?
If you remove the mp_clear calls that I added to plug those leaks, does that bring performance to parity?

Comment 19

8 years ago
The results of experiments with RSA_BLINDING_PARAMS_MAX_CACHE_SIZE and WAIT_FOR_BP_VALUE are in attachment crypto.rsa.pdf. 


When RSA_BLINDING_PARAMS_MAX_CACHE_SIZE = 50 there are no PR_Sleep invocations at all, which means all 80 threads are engaged in some work. RSA_BLINDING_PARAMS_MAX_CACHE_SIZE > 20 seems to be enough to avoid performance degradation on the crypto.rsa benchmark. I would propose setting RSA_BLINDING_PARAMS_MAX_CACHE_SIZE to 30, to leave some reserve for higher thread counts (you can see in the data that there is no performance degradation at that value).


I used RSA_BLINDING_PARAMS_MAX_CACHE_SIZE = 5 to emulate a situation where some threads execute PR_Sleep. It seems better not to add PR_Sleep at all: the data shows that PR_Sleep results in both CPU utilization and the Score going down.

I also tried to reproduce the difference in scores between blinding_params_concurrent.patch and the "Big restructuring of RSA blinding" patch. Here's what I got after a lot of experiments: blinding_params_concurrent.patch gives better scores, but not consistently. This could be run-to-run instability of the benchmark (I've observed ~2% instability), but the problem is that I have not seen the same variation for the "Big restructuring of RSA blinding" patch. Then I commented out the mp_clear calls per your advice, and now I see the same performance gain as for blinding_params_concurrent.patch, i.e. up to 2% of additional gain.
Priority: P2 → --
Version: trunk → unspecified

Comment 20

8 years ago
Created attachment 496153 [details]
crypto.rsa.pdf - experiments

Comment 21

8 years ago
In the crypto.rsa document, the data include the score and CPU utilization changes as the RSA_BLINDING_PARAMS_MAX_CACHE_SIZE and WAIT_FOR_BP_VALUE values change. The main metric there is the % difference between a run with the "Big restructuring of RSA blinding" optimization and one without it.

BTW, there is a bug I filed about a memory management problem which could explain the effect of mp_free on performance.


8 years ago
Priority: -- → P2
Version: unspecified → trunk
In reply to comment 19:
You reported that when you reduced the cache size to 5, CPU utilization
went down until you removed the PR_Sleep.  Then it increased again.
Isn't the increased CPU utilization caused by threads spinning?
Are you saying that your measurements find spinning beneficial to overall system performance?

Comment 23

8 years ago
> Are you saying that your measurements find spinning is beneficial to overall system performance?
No, that's not my point. 

I've found why we have such a strange score dependency on WAIT_FOR_BP_VALUE. The problem was that we took 1 ms as the minimum for measurements without considering that the blinding parameter generation time can be less than 1 ms! I've measured the approximate blinding parameter generation time; it is about 250 µs. I wrapped the internals of generate_blinding_params with gettimeofday calls and added a loop generating 1000 values there for a more accurate measurement. That means we need to use a microsecond interval:
- PR_Sleep(PR_MillisecondsToInterval(N1));
+ PR_Sleep(PR_MicrosecondsToInterval(N2));

Then I did measurements with the new interval, starting from 20 µs up to 5000 µs. The results look reasonable now; see page 2 in the attached file crypto.rsa1.pdf for details. Page 1 presents the measurements with the interval in ms. I also decreased RSA_BLINDING_PARAMS_MAX_CACHE_SIZE to 1 to emulate harder conditions for testing the effect of WAIT_FOR_BP_VALUE (more threads start waiting on it). BTW, I changed Diff % to Score on the graphs. Now we see that performance degradation starts from WAIT_FOR_BP_VALUE = 400, which is expected, as the approximate blinding parameter generation time is about 250 µs.

Based on the above, I think the default value should be the following:

#define	WAIT_FOR_BP_VALUE	300

Please ask me if we need some more info for integration.
Priority: P2 → --
Version: trunk → unspecified

Comment 24

8 years ago
Created attachment 496463 [details]
crypto.rsa1.pdf - additional measurements

Comment 25

8 years ago
So the changes, in addition to the "Big restructuring of RSA blinding" patch, should be:

+ /* Wait for new blinding value generation time (µs) */
+ #define    WAIT_FOR_BP_VALUE    300
- PR_Sleep(PR_MillisecondsToInterval(50));
+ PR_Sleep(PR_MicrosecondsToInterval(WAIT_FOR_BP_VALUE));


8 years ago
Priority: -- → P2
Version: unspecified → trunk
Now that you've found the optimal value for the PR_Sleep call, 
perhaps you should repeat the test for the optimal cache size.

Comment 27

8 years ago
Created attachment 496791 [details]
crypto.rsa2.pdf - measurements

I've done measurements of score & CPU utilization depending on cache size (crypto.rsa2.pdf attachment). You can see that Score and CPU utilization stabilize at RSA_BLINDING_PARAMS_MAX_CACHE_SIZE = 18. Therefore it is reasonable to set RSA_BLINDING_PARAMS_MAX_CACHE_SIZE = 20, or a bit higher if we want a small reserve for a larger number of threads.
Created attachment 497084 [details] [diff] [review]
next revision of blinding params restructuring patch

I believe this patch incorporates Aleksey's latest numbers.
Aleksey, would you please do one more test build and run, to make sure I
haven't caused any performance regressions?
If you find this to be as good as or better than the test results you just
published, let us know here, and I will launch this patch on to the next review stage.
Attachment #495345 - Attachment is obsolete: true

Comment 29

8 years ago
I've checked "next revision of blinding params restructuring patch". There is no performance regression there. The performance gain is the same as expected ~12-13%. Please proceed with the next steps.
Comment on attachment 497084 [details] [diff] [review]
next revision of blinding params restructuring patch

Bob, please review.

Alexei Volkov, 
You may wish to test this patch on various Sun systems (e.g. Niagara class) 
to see how it affects performance.
Attachment #497084 - Flags: review?(rrelyea)
Attachment #497084 - Flags: feedback?(alexei.volkov.bugs)
Attachment #495324 - Attachment is obsolete: true
Attachment #478748 - Attachment is obsolete: true
Attachment #478749 - Attachment is obsolete: true
Attachment #495321 - Attachment is obsolete: true

Comment 32

8 years ago
Comment on attachment 497084 [details] [diff] [review]
next revision of blinding params restructuring patch

None of these are actual bugs, so I didn't r-, but I'd like to see these changes made before I r+ the patch.

I think in general, the patch is fine, though I found a couple of things I would like to see fixed:

In init_blinding_params() :
   We explicitly initialize rsabp->array elements 0 to RSA_BLINDING_PARAMS_MAX_CACHE_SIZE-2. Mostly we are setting elements to zero, but we are also setting each bp->next pointer to the next element in the array. The last element of rsabp->array is left uninitialized. Since we allocated the blinding params using ZNew, we know that the entire block is initialized to zero, which is the proper initialization for the last element (bp->next = NULL, the rest set to zero). This seems either tricky or accidental, so I would like to see either: 1) an explicit initialization of the last element (probably most easily accomplished after the for statement), or 2) clear comments saying that this is exactly what is happening (and warning that the function depends on the fact that the entire memory block is cleared). In option 2 we can probably dispense with the explicit setting of counter and of MP_DIGITS of &bp->{f,g}.

I prefer option 1, but I'm OK with option 2 if we are worried about the performance of setting extra fields to zero, just as long as there is a comment explicitly saying that the fields have to be zero, but we know they already are.

In get_blinding_params():
line 1224:

I believe the for() loop is unnecessary. We are trying to pull a bp off of the free list, and we check to see if bp->counter < 0, with the comment that another thread is working on this one. However, any element on the free list is known to be free; if another thread were working on the bp, it would have removed it from the free list before starting. So instead of

prevbp = NULL;
for (bp = rsabp->free; bp; prevbp = bp, bp = bp->next) {
    if (bp->counter < 0)
        continue;   /* another thread is working on this one. */
    /* unlink this bp */
    if (prevbp)
        prevbp->next = bp->next;
    else
        rsabp->free = bp->next;
    bp->next = NULL;
    break;
}

we would have:

if ((bp = rsabp->free) != NULL) {
    /* unlink this bp */
    rsabp->free = bp->next;
    bp->next = NULL;
}

line 1265:
   The comment is correct; we should use PRCondVar. This will remove the need to have different sleep intervals for different processors, and allow us to get rid of the #if at line 70. We should keep a count of waiters. We can wake one waiter at line 1218 (PR_NotifyCondVar), and all waiters at line 1257 (PR_NotifyAllCondVar). While we are here, we can do it right: we already have the PRLock for the condition variable (blindingParamsList.lock). PRCondVar is declared in prcvar.h.
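A hypothetical rendering of this PRCondVar suggestion using POSIX threads (PR_WaitCondVar / PR_NotifyCondVar have the same shape; all names here are illustrative, not the NSS code):

```c
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;   /* stands in for blindingParamsList.lock */
    pthread_cond_t  cv;
    int ready;              /* usable blinding pairs available */
    int waiters;            /* threads blocked waiting for a pair */
} BpList;

void bpl_init(BpList *l) {
    pthread_mutex_init(&l->lock, NULL);
    pthread_cond_init(&l->cv, NULL);
    l->ready = 0;
    l->waiters = 0;
}

/* Producer: a new pair became usable; wake exactly one waiter. */
void bpl_publish(BpList *l) {
    pthread_mutex_lock(&l->lock);
    l->ready++;
    if (l->waiters > 0)
        pthread_cond_signal(&l->cv);
    pthread_mutex_unlock(&l->lock);
}

/* Consumer: block until a pair exists, instead of a fixed-length sleep. */
void bpl_take(BpList *l) {
    pthread_mutex_lock(&l->lock);
    l->waiters++;
    while (l->ready == 0)           /* loop guards against spurious wakeups */
        pthread_cond_wait(&l->cv, &l->lock);
    l->waiters--;
    l->ready--;
    pthread_mutex_unlock(&l->lock);
}
```

The point of the waiter count is the same as in the comment above: the producer only signals when someone is actually blocked, and a broadcast (pthread_cond_broadcast / PR_NotifyAllCondVar) fits the "wake everyone" site.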

Comment 33

8 years ago
Hi Robert, thanks for the review! I've looked through your comments and all of them look reasonable. I'll work on the new patch next week. Besides, I have found an issue, provoked by the latest patch version, in the crypto.signverify suite of SPECjvm2008. I observed intermittent performance degradation of crypto.signverify during joint runs of crypto.rsa and crypto.signverify; crypto.rsa performance is always stable, and single runs of crypto.signverify also showed no degradation. Additionally, I checked that there is no degradation with the initial blinding_params_concurrent.patch. So it seems specific to the latest patch, and I'm looking into the problem now.

Comment 34

8 years ago
Created attachment 507978 [details] [diff] [review]
fixed performance stability issue in blinding params restructuring patch

The problem behind the intermittent performance degradation is a missing break when the key is found in the key-search loop of the get_blinding_params function, as is done in the initial algorithm. See the fix in the attached patch, compared to the latest one.
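The fix in miniature, with invented names: the key-search loop must stop as soon as the key matches, otherwise every lookup scans the whole list (which is exactly what showed up as intermittent degradation when several keys were cached):

```c
#include <stddef.h>

typedef struct Node {
    int key;
    struct Node *next;
} Node;

/* Returns the first node matching key; *visited counts nodes examined. */
Node *find_key(Node *head, int key, int *visited) {
    Node *n, *found = NULL;
    *visited = 0;
    for (n = head; n != NULL; n = n->next) {
        (*visited)++;
        if (n->key == key) {
            found = n;
            break;      /* the fix: without this, the scan continues */
        }
    }
    return found;
}
```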

Comment 35

8 years ago
Created attachment 509739 [details] [diff] [review]
Fixed all rrelyea comments + signverify instability fix

Nelson, Robert, could you please look at the attached patch, which contains fixes for all review comments and the fix for the instability I described previously.

Tested it with several SPECjvm2008 runs, the result is stable.

Comment 36

8 years ago
Please let me know if I can do anything to assist with the patch integration.
Comment on attachment 509739 [details] [diff] [review]
Fixed all rrelyea comments + signverify instability fix

I'll review, and if it passes, I will ask Bob for a second review.
Attachment #509739 - Flags: review?(nelson)
Attachment #497084 - Attachment is obsolete: true
Attachment #497084 - Flags: review?(rrelyea)
Attachment #497084 - Flags: feedback?(alexei.volkov.bugs)
Attachment #507978 - Attachment is obsolete: true
Attachment #495322 - Attachment is obsolete: true
Comment on attachment 509739 [details] [diff] [review]
Fixed all rrelyea comments + signverify instability fix

Looks good to me.  You caught a few other mistakes, too, I see.  Good.

Bob, please SR (since I'm a co-author of this patch.)
Attachment #509739 - Flags: superreview?(rrelyea)
Attachment #509739 - Flags: review?(nelson)
Attachment #509739 - Flags: review+

Comment 39

8 years ago
Comment on attachment 509739 [details] [diff] [review]
Fixed all rrelyea comments + signverify instability fix

r+ rrelyea.

I was initially worried that we need to decrement waitCount after the wakeup, but I've convinced myself that the current handling of waitCount is correct even if PR_WaitCondVar misbehaves.

Attachment #509739 - Flags: superreview?(rrelyea) → superreview+

Comment 40

8 years ago
Nelson, Bob, what is the process for patch integration? I see the patch is reviewed but do not see that it has been committed, so I'm asking what to expect and when, just for my understanding.


8 years ago
Severity: enhancement → normal
Priority: P2 → P1
Whiteboard: 4_3.12.10
Target Milestone: 3.13 → 3.12.10

Comment 41

8 years ago
Sorry, I need to apply it. I thought Aleksey was Alexei Volkov, who has commit permission...

Comment 42

8 years ago
Checking in rsa.c;
/cvsroot/mozilla/security/nss/lib/freebl/rsa.c,v  <--  rsa.c
new revision:; previous revision:
Checking in stubs.c;
/cvsroot/mozilla/security/nss/lib/freebl/stubs.c,v  <--  stubs.c
new revision:; previous revision:
Checking in stubs.h;
/cvsroot/mozilla/security/nss/lib/freebl/stubs.h,v  <--  stubs.h
new revision:; previous revision:
Last Resolved: 8 years ago
Resolution: --- → FIXED