asm.js preserves JS semantics, so in principle the speedups OdinMonkey achieves should be applicable to vanilla JS given enough engineering effort. I don't know how much of this is actually feasible, but it seems a worthy goal to pursue. It would improve performance by some amount for code that looks nothing like asm.js, and by a lot for code that is very similar to asm.js but uses unsupported features like property accesses. Note that this would not affect the frontend performance / memory improvements made by asm.js (overhead from constructing JSScripts, baseline code, etc.), just the peak performance reached by the generated code.
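As a hypothetical illustration (not code from the bug) of the kind of "very similar to asm.js" code meant above: the inner loop below is asm.js-style (typed-array loads, int-coerced arithmetic), but the property access on `state` makes it invalid asm.js, so today only Ion can compile it. The names `sumBytes` and `state` are made up for this sketch.

```javascript
// Hypothetical sketch: asm.js-like inner loop that Odin-style
// optimizations could in principle apply to, except for one
// property access that disqualifies it from asm.js validation.
function sumBytes(heap, state) {
  var h = new Uint8Array(heap);
  var total = 0;
  for (var i = 0; i < h.length; i = (i + 1) | 0) {
    total = (total + h[i]) | 0;  // asm.js-style int add on a heap load
  }
  state.sum = total;             // property access: not valid asm.js
  return state.sum;
}
```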
Some numbers from experimenting with fannkuch(10) in the emscripten benchmark suite. This benchmark involves pretty much no calls, so it is good for isolating the cost of memory accesses and simple integer operations. Numbers are from an x86/darwin post-804676 non-threadsafe build with the script size limit turned off (most of fannkuch is normally only compiled if off-thread compilation is available, due to differing script size thresholds). Pre-804676 seems to behave similarly.

                  lsra  backtracking
  asm.js           600   562
  trunk            976   875
  w/memops         803   701
  w/o interrupts   789   640

"trunk w/memops" adds paths to IonBuilder to generate the same MIR nodes as asm.js for memory loads and stores when possible. "trunk w/o interrupts" additionally removes code for interrupt checks at loop headers (code which is not required by asm.js). This still leaves a 30% gap with lsra and 14% with backtracking. This looks to be some combination of worse regalloc due to snapshots keeping things alive longer, and just plain more instructions: comparing "w/o interrupts" with backtracking to asm.js with backtracking, the former executes 66m (16%) more spill instructions and 234m (11%) more instructions overall. Needs more investigating.
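For reference, the shape of the hot code in fannkuch is roughly the following (a hypothetical sketch, not the benchmark's actual source): tight Int32Array loads and stores with integer arithmetic and no calls. These are exactly the memory accesses the "w/memops" change lets IonBuilder compile to the same MIR nodes asm.js uses.

```javascript
// Hypothetical sketch of a fannkuch-style inner loop: reverse the
// first k elements of an Int32Array in place. All the work is
// typed-array loads/stores plus int-coerced arithmetic.
function flip(perm, k) {
  for (var lo = 0, hi = (k - 1) | 0; lo < hi;
       lo = (lo + 1) | 0, hi = (hi - 1) | 0) {
    var t = perm[lo] | 0;     // typed-array load
    perm[lo] = perm[hi] | 0;  // load + store
    perm[hi] = t;             // store
  }
}
```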