Experiment with removing VMFunction wrappers
Categories: Core :: JavaScript Engine: JIT, task, P3
People: Reporter: jandem, Assigned: jandem
References: Blocks 3 open bugs
Details: Whiteboard: [js-perf-next][sp3]
Last week Ted and I came up with some ideas for 'inlining' VMFunction
wrappers by emitting the equivalent code directly at the call site.
The main benefit is faster JIT => C++ calls, by reducing the number of memory loads/stores for the arguments. It would also avoid some frame pointer overhead. For Baseline Interpreter/IC code in particular, which is already shared, calling out to a separate shared trampoline saves us little.
The first step is to identify some hot VM functions, prototype calling them directly, and measure how this affects performance and code size. A rough sketch of the two call paths follows below.
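To make the overhead concrete, here is a minimal, hypothetical C++ sketch (AddValues, CallThroughWrapper, and CallDirectly are illustrative names, not SpiderMonkey code): the wrapper path spills arguments to a memory buffer that a shared trampoline reloads before forwarding the call, while the direct path passes them in registers through the normal ABI.

// Hypothetical sketch of the two call paths; not SpiderMonkey code.
#include <cstdint>
#include <cstdio>

// The VM function implemented in C++.
static int32_t AddValues(int32_t a, int32_t b) { return a + b; }

// Wrapper path: the call site spills each argument to a memory buffer and
// a shared trampoline reloads them before forwarding to the C++ function.
static int32_t CallThroughWrapper(int32_t a, int32_t b) {
  volatile int32_t argBuffer[2];   // volatile: keep the stores/loads visible
  argBuffer[0] = a;                // stores emitted at the JIT call site
  argBuffer[1] = b;
  int32_t arg0 = argBuffer[0];     // loads done by the shared trampoline
  int32_t arg1 = argBuffer[1];
  return AddValues(arg0, arg1);
}

// Direct path: arguments go straight into ABI registers, no buffer.
static int32_t CallDirectly(int32_t a, int32_t b) {
  return AddValues(a, b);
}

int main() {
  printf("wrapper: %d\n", CallThroughWrapper(2, 3));
  printf("direct:  %d\n", CallDirectly(2, 3));
  return 0;
}

The point of the sketch is only to show where the extra memory traffic comes from; the real work is in teaching the JIT backends to emit the equivalent of CallDirectly at each call site.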
Comment 1•6 months ago
We do a lot of VM calls. Even if this is a relatively small performance win per-call, it seems quite plausible that it's a meaningful improvement overall.
js-perf-next: Evaluate the performance/code-size implications of doing this by hand for a few hot functions, then implement it if the numbers look good.