Closed Bug 815579 Opened 12 years ago Closed 12 years ago

IonMonkey: Compile function called through VM call Invoke faster.

Categories

(Core :: JavaScript Engine, defect)


Tracking


RESOLVED FIXED
mozilla20

People

(Reporter: h4writer, Assigned: h4writer)

References

Details

Attachments

(1 file)

Calling from IM into the Interpreter/JM through a VM call is slow. We should consider compiling such a function earlier, so that we can use a fast IM -> IM call instead. At the moment we use the use count to decide when to compile, but we could cheat and increase the use count faster whenever a function is called through a VM call. This makes sense given the following measurements:

V8 compile times: mean 272us, std 317us, max 2952us, min 14us
SS compile times: mean 364us, std 457us, max 2482us, min 22us
VM call overhead (execution time not taken into account, only the transition from IM to the Interpreter): mean 2us, max 3us, min 1us

So calling a function 10,240 times through a VM call yields an overhead of about 20,480us, roughly an order of magnitude more than the maximum compile time of a single function on V8/SS. Therefore it would be smart to (pessimistically) increment the use count by 10 every time we call through Invoke. We could even (optimistically) increment by 100, in which case the mean compile time is paid back by no longer taking the VM call. This gives an improvement of 6%-8% on v8-raytrace, because (among others) raytrace.js:426 now gets compiled and benefits from running with IM to IM calls. I'll post the patch and numbers for Kraken, SS and V8 this afternoon.
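For the record, the break-even arithmetic above as a small self-contained program. The costs are the measurements quoted in this comment, and the 10,240-call figure is the one used above; none of these values are read from the engine:

    #include <cstdio>

    int main()
    {
        // Measured costs quoted above (assumed representative, not re-measured here).
        const double vmCallOverheadUs = 2.0;     // mean IM -> Interpreter transition
        const double maxCompileUs     = 2952.0;  // worst measured compile time (V8 suite)
        const double meanCompileUs    = 364.0;   // mean compile time (SS suite)
        const int    threshold        = 10240;   // calls before normal use-count compilation

        printf("overhead of %d slow calls: %.0fus (max compile time: %.0fus)\n",
               threshold, threshold * vmCallOverheadUs, maxCompileUs);

        // Bumping the use count by N per slow call compiles the callee after roughly
        // threshold/N slow calls, so only that fraction of the overhead is ever paid.
        const int increments[] = {10, 100};
        for (int inc : increments) {
            printf("increment %3d: ~%.0fus spent on slow calls before compilation "
                   "(mean compile time: %.0fus)\n",
                   inc, (double)threshold / inc * vmCallOverheadUs, meanCompileUs);
        }
        return 0;
    }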
Blocks: 768745
Sunspider (InvokeFunction calls and time, before => after):
  3d-raytrace:           1308 => 1000 calls, 22.4ms +/- 4.7% => 22.1ms +/- 5.4%
  string-validate-input:  786 =>  324 calls,  9.4ms +/- 4.8% =>  9.0ms +/- 3.6%
  The other SS tests don't use InvokeFunction and are unaffected.

V8 (InvokeFunction calls and score, before => after):
  Richards:      18,194 =>   4,551 calls, score  7430 =>  7268 (before the patch the score jumped between 7000 and 7500; after the patch it is a consistent 7250)
  DeltaBlue:    341,900 =>  89,506 calls, score  8360 =>  8162
  Crypto:       119,343 => 203,541 calls, score 10539 => 10811
  RayTrace:     137,079 =>  32,097 calls, score  5432 =>  5755
  EarleyBoyer:  455,202 => 121,823 calls, score  9625 => 10056
  RegExp:             0 =>       0 calls, score  1183 =>  1192
  Splay:         59,304 =>  11,114 calls, score  8160 =>  8418
  NavierStokes:     460 =>     490 calls, score 15744 => 15567
  Total score: 6906 => 6998

RayTrace, EarleyBoyer and Splay show a nice speedup; Richards and DeltaBlue regress. Overall speedup of 1%-2%.

Kraken, before the patch:
===============================================
RESULTS (means and 95% confidence intervals)
-----------------------------------------------
Total:                      2523.7ms +/- 2.0%
-----------------------------------------------
  ai:                        145.1ms +/- 0.9%
    astar:                   145.1ms +/- 0.9%
  audio:                     924.5ms +/- 5.3%
    beat-detection:          297.0ms +/- 12.4%
    dft:                     225.5ms +/- 0.8%
    fft:                     182.7ms +/- 19.7%
    oscillator:              219.3ms +/- 0.7%
  imaging:                   671.3ms +/- 4.4%
    gaussian-blur:           272.2ms +/- 9.2%
    darkroom:                226.3ms +/- 0.8%
    desaturate:              172.8ms +/- 11.3%
  json:                      142.5ms +/- 1.2%
    parse-financial:          91.0ms +/- 1.2%
    stringify-tinderbox:      51.5ms +/- 2.8%
  stanford:                  640.3ms +/- 0.5%
    crypto-aes:              142.7ms +/- 1.2%
    crypto-ccm:              143.6ms +/- 1.0%
    crypto-pbkdf2:           258.8ms +/- 0.9%
    crypto-sha256-iterative:  95.2ms +/- 0.5%

Kraken, after the patch:
===============================================
RESULTS (means and 95% confidence intervals)
-----------------------------------------------
Total:                      2499.9ms +/- 1.9%
-----------------------------------------------
  ai:                        144.9ms +/- 0.4%
    astar:                   144.9ms +/- 0.4%
  audio:                     912.1ms +/- 4.1%
    beat-detection:          281.3ms +/- 0.7%
    dft:                     244.5ms +/- 15.1%
    fft:                     166.0ms +/- 0.5%
    oscillator:              220.3ms +/- 0.9%
  imaging:                   656.3ms +/- 1.7%
    gaussian-blur:           261.7ms +/- 0.7%
    darkroom:                226.7ms +/- 1.0%
    desaturate:              167.9ms +/- 5.9%
  json:                      151.3ms +/- 13.8%
    parse-financial:         100.1ms +/- 20.6%
    stringify-tinderbox:      51.2ms +/- 3.5%
  stanford:                  635.3ms +/- 5.0%
    crypto-aes:              141.9ms +/- 1.7%
    crypto-ccm:              136.4ms +/- 1.4%
    crypto-pbkdf2:           255.5ms +/- 7.0%
    crypto-sha256-iterative: 101.5ms +/- 14.3%
This patch increments the use count by 5 when calling through Invoke. (The tests above were done with an increment of 10.) Decreasing the increment to 5 yields the same performance increase on v8-raytrace, but no longer regresses v8-richards.
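To illustrate how the increment size changes when the compile threshold is crossed, a small model follows; the 10,240 threshold is the figure assumed from comment 0, and this is not engine code:

    #include <cstdio>

    // How many slow Invoke calls it takes to cross the compile threshold when each
    // slow call bumps the use count by `inc` instead of 1 (illustrative model only).
    static int slowCallsUntilCompile(int threshold, int inc)
    {
        int useCount = 0, calls = 0;
        while (useCount < threshold) {
            useCount += inc;
            ++calls;
        }
        return calls;
    }

    int main()
    {
        const int threshold = 10240;          // assumed, matching the count in comment 0
        const int increments[] = {1, 5, 10};  // no bump, the patched value, the tested value
        for (int inc : increments)
            printf("increment %2d: compiled after %5d slow calls\n",
                   inc, slowCallsUntilCompile(threshold, inc));
        return 0;
    }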
Assignee: general → hv1989
Attachment #685610 - Flags: review?(dvander)
Comment on attachment 685610 [details] [diff] [review]
Increment usecount on slow call

Review of attachment 685610 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/ion/VMFunctions.cpp
@@ +71,5 @@
>
> + // When caller runs in IM, but callee not, we take a slow path to the interpreter.
> + // This has a significant overhead. In order to decrease the number of times this happens,
> + // the useCount gets incremented faster to compile this function in IM and use the fastpath.
> + fun->nonLazyScript()->incUseCount(js_IonOptions.slowCallIncUsecount);

The "C" of slowCallIncUsecount should be capitalized. (You probably forgot to qref before posting.)
Attachment #685610 - Flags: review?(dvander) → review+
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla20