Closed Bug 1138132 Opened 9 years ago Closed 8 years ago

Odin: rework to compile asm.js modules when called so that constant asm.js module imports and the buffer are known.

Categories

(Core :: JavaScript Engine: JIT, defect, P5)


Tracking


RESOLVED WONTFIX

People

(Reporter: dougc, Unassigned)

Details

Reworking Odin to compile when the module is first called (the point at which it is currently linked) might help unblock a few challenges.

Constant imports would already be initialized and their values known. This allows a form of compile-time conditional compilation; for example, conditionally compiling particular SIMD support based on imported flags.
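
A minimal sketch of the idea (hypothetical module and import names, not code from this bug): if the value of an imported flag were known when Odin compiles, the dead branch below could be dropped at compile time.

  function Module(stdlib, foreign, heap) {
    "use asm";
    var useSimd = foreign.useSimd | 0;              // constant int import
    var f32 = new stdlib.Float32Array(heap);
    function scale(n, k) {
      n = n | 0;
      k = +k;
      var i = 0;
      if (useSimd) {
        // SIMD body elided; a compiler that knows useSimd is 0 could drop this branch
      } else {
        for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0)
          f32[(i << 2) >> 2] = +f32[(i << 2) >> 2] * k;
      }
    }
    return { scale: scale };
  }
  var m = Module(this, { useSimd: 0 }, new ArrayBuffer(0x10000));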

The current behaviour (compiling before the import values are known) effectively blocks index masking when a run-time choice or change of heap size is needed. Without the proposed support the JS source would need to be rewritten for good performance in Odin.
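
For reference, a rough sketch of what imported index masking looks like in the asm.js source (hypothetical code): with compile-when-called the value of the mask would be a known constant, so the `& mask` could be folded, and a mask of -1 would make it a no-op.

  function MaskedModule(stdlib, foreign, heap) {
    "use asm";
    var mask = foreign.indexMask | 0;        // e.g. heap length - 1 for a power-of-two heap
    var u8 = new stdlib.Uint8Array(heap);
    function get(i) {
      i = i | 0;
      return u8[(i & mask) >> 0] | 0;        // masked heap access in place of a bounds check
    }
    return { get: get };
  }
  var mm = MaskedModule(this, { indexMask: 0xffff }, new ArrayBuffer(0x10000));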

I believe Ion also has access to the imported values at compile time, and better support for this in Ion has been requested in bug 1077945.

V8 TurboFan delays compilation until the module is called, and so has access to the imported variable values at compile time. The support is not yet complete, but with a small patch it performs very well when index masks are imported (faster than Odin x64 on the zlib benchmark). TurboFan does not require the use of a 'const'; a 'var' that is not mutated appears equivalent.

This might also help unblock the heap-change issue. It's a bit of a stretch, but if compilers could recognise when a heap-change is dead code based on an imported constant then the compiler could still optimize this case. Further, if masking were included, then setting the masks to -1 would disable them and they could be optimized away when not needed. This would allow one asm.js source module to handle a good range of heap-access optimization strategies.

Might the asm.js module still be cached by noting the imported constants that the compiler specializes on and that are not patched?

It would also help resolve issues with buffer-at-zero, because the buffer would be known at compile time and the compiler could generate optimized code for this case. If the module were relinked with a buffer not at zero then the module would be recompiled. This specialization could also be noted when caching. All that is then needed is an API to allocate an asm.js-specialized buffer, which could be polyfilled.
This could also eliminate the need to 'declare' the minimum asm.js buffer size via a constant index reference, because the actual buffer size is known.
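
As a sketch of the constant-index idiom mentioned above (hypothetical code): a single constant-index heap access is enough to imply a minimum heap length at link time, and it would become unnecessary if the actual buffer were known when compiling.

  function MinHeapModule(stdlib, foreign, heap) {
    "use asm";
    var u32 = new stdlib.Uint32Array(heap);
    function touch() {
      // constant index reference: implies the heap must be at least 16 MiB
      return u32[0xfffffc >> 2] | 0;
    }
    return { touch: touch };
  }
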
How would this play with plans to do incremental compilation as we download, so we don't have a huge compilation pause after the lengthy download for large modules?
Other concerning issues:
 - we'd need to syntax-parse the whole asm.js module only to full-parse it when we compile (more total work)
 - since compilation wouldn't happen during parsing, <script async> would no longer be a reliable mechanism to take compilation off the main thread and receive notification when compilation completed
 - we'd need a new background compilation mechanism, because we certainly wouldn't want to do all the Odin compilation synchronously from the main thread (which would also lose caching, which for Gecko reasons can't happen from the main thread)
 - while the background compilation was in progress (which can take 10-20s), we'd be performing full parses and baseline jit compilation of everything being run on the main thread which could make for a very rough 10-20s

An alternative route to consider is to analyze the context of the asm.js module to see that it is a top-level IIFE and is thus run only once with a fixed heap parameter.
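
The shape being referred to is roughly the following (a generic sketch, not code from any particular application): the module is created exactly once, at top level, with one fixed buffer, so a compiler could specialize to that single call.

  var heap = new ArrayBuffer(0x1000000);
  var exports = (function Module(stdlib, foreign, buffer) {
    "use asm";
    var i32 = new stdlib.Int32Array(buffer);
    function peek(p) {
      p = p | 0;
      return i32[p >> 2] | 0;
    }
    return { peek: peek };
  })(this, {}, heap);
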
Might it be possible to speculatively compile asm.js modules? And to add a declarative definition of the application requirements to give a good hit ratio? When the asm.js module is called it could use a speculatively compiled module if there is a hit; otherwise it would need to block while compiling.

A buffer allocation API could be added to allocate the planned buffer so that it is likely to match a speculatively compiled asm.js module. Such a call might also be a useful trigger for a planning step, communicating that the application has reached a particular stage, for example triggering the AOT compilation of the next tier of asm.js modules. Taking it further, an API to de-allocate buffers could be added, so that a staged computation requiring multiple buffers could be planned in a way that takes peak usage into account.
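
A purely hypothetical sketch of what such an API might look like (the function name is invented for illustration, and a trivial polyfill is shown since the hook does not exist):

  // hypothetical engine hook; the polyfill just falls back to a plain ArrayBuffer
  var allocateAsmJSBuffer = this.allocateAsmJSBuffer ||
    function (length) { return new ArrayBuffer(length); };

  var stageHeap = allocateAsmJSBuffer(0x4000000);
  // link the next asm.js module against `stageHeap`; an engine-provided version of
  // the hook could also treat this call as the signal that the next stage is
  // starting and begin (or prioritize) AOT compilation of that stage's modules.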

The JS runtime could use the app definition to determine the buffers that will be allocated, their initial lengths and future required lengths, and some asm.js environment inputs (index masks based on buffer lengths, etc.). The definition might give an expected order in which the modules will be called, to help order compilation. Perhaps the definition could also hint at the optimization levels.
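
All field names below are invented purely for illustration; a hypothetical application definition of this sort might carry enough information for the runtime to plan buffer allocation and compilation order ahead of time:

  var appDefinition = {
    buffers: [
      { name: "mainHeap", initialLength: 0x1000000, maxLength: 0x4000000 }
    ],
    modules: [
      { name: "decoder",  buffer: "mainHeap", callOrder: 1 },
      { name: "renderer", buffer: "mainHeap", callOrder: 2, optimizationLevel: "high" }
    ]
  };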

This definition might also be usable to determine ahead of time whether the required resources are available. Web browsers could experiment with strategies to negotiate this with the user when necessary. For example, when attempting to run a demanding game on a limited device, the browser could give the user the option to restart in a single-app mode, rather than just crashing with an OOM.
Luke, what are your thoughts on this, 2 years later?
Flags: needinfo?(luke)
Priority: -- → P5
Not something we're going to do for asm.js, and wasm has a clearer compilation model.
Status: NEW → RESOLVED
Closed: 8 years ago
Flags: needinfo?(luke)
Resolution: --- → WONTFIX