Created attachment 8491191 [details] sha benchmark.htm

User Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:35) Gecko/20100101 Firefox/35.00.10
Build ID: 20140917030202

Steps to reproduce:
I would like to use window.crypto.subtle.digest() as a seedable source of random values, but generating many digests in the current Firefox Nightly causes a noticeable delay.

Actual results:
It turns out that Chrome calculates a SHA digest about 5.6 times faster on my desktop (Win7 x64, Intel E8400 CPU). See the attached benchmark or this jsperf: http://jsperf.com/sha-digest (Nightly with e10s is labeled "Firefox 35.00.10").
Fwiw, on Mac I only see about a 2x difference. Looking at a profile, at least 25% of the time is the NS_NewThread gunk caused by every single hash() call starting a new thread. I _think_ bug 1001691 is supposed to cover that? Until that's fixed, I'm not sure there's a point in measuring here, since the many random threads also mean lots of time waiting, etc.
Oh, and another 25% is PR_JoinThread. And there's my 2x difference! On Windows, threads are more expensive, as I recall, which might explain why it's a 5.6x difference there.
I see a huge difference (Win 7):
Chrome 41 takes 170ms
Nightly takes 6200ms (e10s on) and 7200ms (e10s off)
Firefox 36.0.1 takes 11700ms
With rapportmgmtservice.exe from IBM Trusteer (used by Banco do Brasil) running, Firefox takes 65000ms.
Right, there's really no point measuring this until bug 1001691 is fixed.
With the current patches from bug 1001691 I see:
Fx Nightly: 5.3k computations
Fx Nightly with patches: 13.2k computations
Chrome Canary: 20.9k computations
Still not where we'd like to be, but roughly 2.5 times faster. I can look into it more after the patches have landed.
I did some re-profiling, and a lot of the time is actually spent in the promise code. We should remeasure once bug 911216 lands.
I see current Nightly taking ~350ms here, vs ~200ms for Chrome. However, if I disable async stacks, Nightly takes ~230ms. Blocking on bug 1280819 because of that. The jsperf benchmark isn't available, so I can't comment on that.
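For anyone reproducing the async-stacks comparison, the toggle is an about:config pref. A user.js sketch (the pref name javascript.options.asyncstack is my assumption about the relevant knob, not something confirmed in this bug):

```javascript
// user.js sketch: disable async stack capture to reproduce the faster timing.
// Pref name is an assumption, not confirmed in this bug.
user_pref("javascript.options.asyncstack", false);
```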
We're now faster on both the jsperf test and the attached benchmark: 28k ops/sec vs 21k ops/sec on the former and 135ms vs ~210ms on the latter.