Bug 1913924 (Open) · Opened 6 months ago · Updated 15 days ago

Experiment with removing VMFunction wrappers

Categories

(Core :: JavaScript Engine: JIT, task, P3)

Tracking

Status: ASSIGNED

People

(Reporter: jandem, Assigned: jandem)

References

(Blocks 3 open bugs)

Details

(Whiteboard: [js-perf-next][sp3])

Last week Ted and I came up with some ideas for 'inlining' VMFunction wrappers by emitting the equivalent code directly at the call site.

The main benefit is faster JIT => C++ calls, because we'd do fewer memory loads/stores for the arguments. It would also avoid some frame pointer overhead. For Baseline Interpreter/IC code in particular, which is already shared, we gain little by calling out to separate trampoline code.
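
As a rough illustration of the load/store difference (a standalone C++ sketch, not SpiderMonkey code; the ExitFrame struct and the AddValues/CallThroughWrapper names are made up for this example), compare a call that goes through a wrapper reading its arguments back out of a frame in memory with a direct call that keeps the arguments in registers:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical stand-in for the C++ VM function the JIT wants to reach.
static int32_t AddValues(int32_t a, int32_t b) { return a + b; }

// Wrapper-style path: arguments are spilled to a frame in memory and
// reloaded before the real call. These extra loads/stores are the
// per-call overhead we'd like to eliminate.
struct ExitFrame {
  int32_t args[2];
};

static int32_t CallThroughWrapper(ExitFrame* frame) {
  return AddValues(frame->args[0], frame->args[1]);
}

int main() {
  // Wrapper path: store the arguments to memory, then call through the wrapper.
  ExitFrame frame{{2, 3}};
  printf("via wrapper: %d\n", CallThroughWrapper(&frame));

  // Direct path: arguments are passed in registers, no exit-frame traffic.
  printf("direct:      %d\n", AddValues(2, 3));
  return 0;
}
```

In the real engine the wrapper is generated trampoline code rather than a C++ function, but the extra argument traffic it implies is the part we'd like to remove.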

The first step is to identify some hot VM functions, prototype calling them directly, and measure how this affects performance and code size.

Blocks: sm-opt-jits
Severity: -- → N/A
Priority: -- → P3

We do a lot of VM calls. Even if this is a relatively small performance win per call, it seems quite plausible that it adds up to a meaningful improvement overall.

js-perf-next: Evaluate the performance/code-size implications of doing this by hand for a few hot functions, then implement it if the numbers look good.
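
For a very rough sense of the per-call cost outside the engine, a standalone microbenchmark along these lines could be used (again hypothetical code, reusing the wrapper/direct split from the sketch above; the noinline attribute is GCC/Clang-specific and the iteration count is arbitrary). It doesn't capture JIT-generated code or the code-size side, which would need to be measured in the engine itself:

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>

struct ExitFrame {
  int32_t args[2];
};

// noinline keeps the compiler from collapsing both paths into the same code.
__attribute__((noinline)) static int32_t AddValues(int32_t a, int32_t b) {
  return a + b;
}

__attribute__((noinline)) static int32_t CallThroughWrapper(ExitFrame* f) {
  return AddValues(f->args[0], f->args[1]);
}

int main() {
  constexpr int32_t kIters = 50000000;
  volatile int32_t sink = 0;

  auto t0 = std::chrono::steady_clock::now();
  for (int32_t i = 0; i < kIters; i++) {
    ExitFrame f{{i, i + 1}};        // store arguments to memory first
    sink = CallThroughWrapper(&f);  // then call through the wrapper
  }
  auto t1 = std::chrono::steady_clock::now();
  for (int32_t i = 0; i < kIters; i++) {
    sink = AddValues(i, i + 1);     // arguments stay in registers
  }
  auto t2 = std::chrono::steady_clock::now();

  auto ms = [](auto d) {
    return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
  };
  printf("wrapper path: %lld ms\n", (long long)ms(t1 - t0));
  printf("direct path:  %lld ms\n", (long long)ms(t2 - t1));
  printf("(sink: %d)\n", (int)sink);
  return 0;
}
```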

Whiteboard: [js-perf-next]
Whiteboard: [js-perf-next] → [js-perf-next][sp3]
See Also: → 1935626