Backend optimizations in JM+TI already do, in several places, things that SSA subsumes: tracking values as they are pushed, and tracking the values of variables in straight-line code. Doing full-blown SSA would simplify these and would make future analysis and inference work easier and more precise (interval analysis for bug 650496, bug 649260). IonMonkey will also need SSA, but its needs as I understand them will be somewhat different: the plan there is an SSA-based IR which can be transformed by high-level optimizations, while here we need static information that maps easily to and from the bytecode. We can just implement this now (since we need it now) and figure out the interaction with IonMonkey later.
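To illustrate the kind of tracking SSA subsumes, here is a minimal sketch, not SpiderMonkey code: the mini-bytecode, `StraightLineSSA`, and every name below are hypothetical. It assigns SSA-style value ids while walking straight-line stack code once.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical mini-bytecode; these names do not match SpiderMonkey's JSOps.
enum Op { PushConst, GetLocal, SetLocal, Add };

struct Insn { Op op; int operand; };

// Walk straight-line code once, giving every pushed value a fresh SSA id
// and remembering which id each local variable currently holds.
struct StraightLineSSA {
    int nextId = 0;
    std::vector<int> stack;        // SSA ids of values on the operand stack
    std::map<int, int> localValue; // local slot -> SSA id of its value

    int fresh() { return nextId++; }

    void run(const std::vector<Insn>& code) {
        for (const Insn& in : code) {
            switch (in.op) {
              case PushConst:
                stack.push_back(fresh());
                break;
              case GetLocal:
                // Reuse the known SSA value of the local: no new definition.
                stack.push_back(localValue.at(in.operand));
                break;
              case SetLocal:
                localValue[in.operand] = stack.back();
                stack.pop_back();
                break;
              case Add:
                stack.pop_back();
                stack.pop_back();
                stack.push_back(fresh()); // result is a new SSA value
                break;
            }
        }
    }
};
```

A `GetLocal` reuses the id recorded for its slot rather than minting a new one, which is exactly the "values of variables in straight-line code" tracking described above.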
Created attachment 526798 [details] [diff] [review]
Restructure frontend analyses so that all analyses have the same lifetime as inference data (they persist until the next GC) and allocate from the same per-compartment arena as type inference. Also flattens analyze::Script, analyze::LifetimeScript and types::TypeScript into a single data structure, analyze::ScriptAnalysis, which holds all analysis and type information for the script (destroyed on each GC).
There was a real hodgepodge of different lifetimes and arenas around; this simplifies things and makes it easier to add SSA as an optional frontend pass. Skeleton SSA structures are here, but there is no algorithm yet, and all this does is compile.
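As a sketch of the lifetime scheme described above (illustrative only; `AnalysisArena` and the `ScriptAnalysis` fields here are made up, not the patch's actual layout): everything is bump-allocated from one pool and dropped wholesale, standing in for "destroyed on each GC".

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <new>
#include <vector>

// A bump allocator standing in for the per-compartment analysis arena.
// Individual allocations are never freed; the whole pool is dropped at
// once via clear(), which stands in for the next GC. Assumes each
// allocation fits in one chunk.
class AnalysisArena {
    static constexpr size_t ChunkSize = 4096;
    std::vector<std::vector<uint8_t>> chunks_;
    size_t used_ = 0;
public:
    void* alloc(size_t n) {
        n = (n + 7) & ~size_t(7); // keep 8-byte alignment
        if (chunks_.empty() || used_ + n > ChunkSize) {
            chunks_.emplace_back(ChunkSize);
            used_ = 0;
        }
        void* p = chunks_.back().data() + used_;
        used_ += n;
        return p;
    }
    void clear() { chunks_.clear(); used_ = 0; } // "GC": drop everything
};

// One structure holding all per-script analysis results, as the patch's
// ScriptAnalysis does; these two fields are purely illustrative. Bytecode,
// lifetime, SSA and type data would all be allocated from the same arena.
struct ScriptAnalysis {
    bool usesScopeChain;
    int numLifetimes;
};
```

Giving every analysis the same arena and the same until-next-GC lifetime is what removes the hodgepodge: nothing needs its own destructor ordering or free list.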
Created attachment 527340 [details] [diff] [review]
Has an SSA implementation now; it is not tested at all and still needs integration with inference, the other analyses, and the compiler itself.
Created attachment 527456 [details] [diff] [review]
Inference and the Compiler now use SSA values. Works on small examples; some functionality is still missing.
Created attachment 527650 [details] [diff] [review]
Passes jit-tests with --jitflags=na (i.e. no methodjit, though the methodjit changes aren't that big).
Created attachment 527772 [details] [diff] [review]
Patch landed on JM.
Speeds up SS and other benchmarks slightly for me, I think due to the lowered recompilation penalty: we only need to redo the compilation pass, not the analysis passes. Doing this analysis doesn't seem to cost much, and the implementation is pretty clean. One way it could be improved:
- Depends on lifetime analysis having been done first. At loop heads we eagerly generate phi nodes for all variables modified in the loop, and figuring out that modified set requires the lifetime information. Making this lazier would cut the dependency, which would be good: for IonMonkey, I believe the lifetime analysis will want to be done on the SSA form instead.
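A sketch of the eager scheme and of why it needs lifetime data first (hypothetical names and shapes; not the real analyze:: interfaces):

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <utility>
#include <vector>

// (pc, slot) pairs for every variable write in a script: the kind of
// information the lifetime pass already computes as a byproduct.
typedef std::vector<std::pair<size_t, int>> WriteList;

// Eager phi placement: before scanning the loop body, create one phi at
// the loop head for every slot written anywhere in [loopHead, backedge].
// This is the dependency on lifetimes: without the precomputed write list
// we would only discover the writes while already inside the body, and
// would have to patch in phis (and rewrite uses) after the fact.
std::set<int> phisAtLoopHead(const WriteList& writes,
                             size_t loopHead, size_t backedge) {
    std::set<int> phis;
    for (const auto& w : writes)
        if (w.first >= loopHead && w.first <= backedge)
            phis.insert(w.second); // one phi per modified slot
    return phis;
}
```

Slots untouched inside the loop get no phi, so their SSA value flows through the loop head unchanged; only the modified set pays the phi cost.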
Looks to have broken one of the Kraken crypto tests; would anyone mind doing a testcase reduction? (I need to learn to use those tools myself one of these days...) There is a new MochiTest-4 failure too; surprisingly green otherwise.