Before scripts are compiled, we should interpret them some. Keep interpreting until either the function head or a loop head is reached N times, then compile. Measurements in bug 615199, comment 20 suggest that compilation cost is ~100 times the interpretation cost (for the same amount of code, i.e. no loops), so setting N = 10 or so should not affect compilation time hugely (and will be beneficial for scripts that execute few times).

Interpreting scripts some first lets us record information about runtime types that won't be picked up by inference, and would otherwise force us to recompile scripts more to get the most precise type info we can. This information won't be foolproof of course, and some recompilation will still be required, but most of the events covered below will happen in the first few executions of a piece of code, and knowing that before compilation will remove the need to recompile.

The main information wanted before compiling:

1. Which integer divisions can produce doubles. Generally, an integer division will either always produce ints or usually produce doubles. It's important to know when the former holds, but the inference can't tell (it used to guess, badly, but that was removed by bug 619737).

2. Which dense arrays are unpacked. If an array can have holes, it will usually get them early on in its initialization. There is a mechanism to guess this which will be removed soon due to imprecision.

3. Which property and element reads can access holes. There is a mechanism for this which errs on the side of precision (reads like 'if (a.f)' are marked as maybe-void), which incurs recompilation costs. Another mechanism marks an array as always-maybe-void if an unexpected hole is ever read out of it, which could probably be removed after this bug is done (it saves about 50 recompilations on crypto-md5).

4. Which element assignments can write to holes (aka array initialization code). This is not needed right now but will be wanted for LICM, which has to guess at which array slot pointers and length properties will stay loop invariant.

Except for 4., all of this is already reported to the inference by the interpreter, so the simple way to record this information is to analyze the script on first execution, interpret it some, and then compile and enter the JIT code.
Would it make sense for the running time of the script in the interpreter to affect the number of times it should be interpreted before compilation? That is, are there reasonable examples of monolithic scripts that are executed often and would benefit greatly from compiling them sooner?
Well, scripts which are larger take longer to interpret but also take longer to compile. More data than what's referenced in comment 0 (two scripts with arithmetic, local and name accesses) would be good, but if compilation cost scales with script size the same way interpretation cost does, then script size shouldn't matter.
This has been addressed by bug 631951 (interpret the scripts before compiling) and bug 613221 (keep track of dynamic overflows in TypeResult structures).
Status: NEW → RESOLVED
Last Resolved: 8 years ago
Resolution: --- → WORKSFORME