Currently, the ALIASEDVAR ops are only emitted for local accesses. Once we have script scope nesting information (bug 753145), we can emit ALIASEDVAR ops for upvar accesses as well. (The nesting information is necessary for TI to connect the typesets of upvar reads/writes.) This will be much faster than using JSOP_NAME. Hopefully, we can measure and show that ALIASEDVAR makes the non-reentrant closure optimization (and all of the associated TypeScriptNesting stuff) unnecessary.
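For context, here is a minimal sketch (illustrative names) of the kind of upvar access in question — a closure touching a variable that lives in an enclosing function's scope, which currently goes through the generic JSOP_NAME path:

```javascript
// A closure reading and writing an "upvar": a variable captured from an
// enclosing function's scope. Without scope-nesting information, accesses
// to `counter` from the inner function fall back to the slow, generic
// name-lookup op rather than a direct aliased-slot access.
function makeCounter() {
    var counter = 0;          // aliased: captured by the inner function
    return function () {
        counter += 1;         // upvar read + upvar write
        return counter;
    };
}

var next = makeCounter();
next(); // 1
next(); // 2
```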
Looks like this might be about 3-4% on v8, based on changing MGetNameCache to be an optimizable instruction.
Created attachment 639232 [details] [diff] [review]
WIP 1

This patch takes care of the VM; I just need to fix up the now-broken use of ScopeCoordinateToFrameIndex in JM/TI. Pretty close, though.
Created attachment 639620 [details] [diff] [review]
WIP 2

This one passes tests. On a microbenchmark, it is about 20% faster accessing a single upvar, and it takes the earley-boyer score from roughly 11000 to 12000. However, it is actually a bit slower on navier-stokes because 100% of typesets are "unknown" (compared to 100% having int32 with the non-reentrant closure optimization). To fix this, I need to use addSubsetBarrier (instead of just addSubset), make the ALIASEDVAR ops have JOF_TYPESET, and add the barrier to jit access paths.
Created attachment 640004 [details] [diff] [review]
patch

This patch adds a type barrier which recovers type info. With this change on JM+TI, I see the same perf as when the non-reentrant closure optimization applies (both in a micro-bench and on v8/navier-stokes). In a micro-bench where the non-reentrant closure optimization doesn't apply (e.g., doing y += x where both are upvars), the patch is about 2x faster. Lastly, on JM+TI, our earley-boyer score, measured in the shell, is about 7% better. Flagging Brian for review to make sure I got the TI/jit stuff right. Flagging David for feedback mostly so you can see changes necessary for IM (note the added type barrier to the GETALIASEDVAR path; see comment in jsinfer.cpp). (Patch is green on try.)
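A hypothetical micro-benchmark of the shape described above (both operands are upvars; the names and iteration count are illustrative, not from the actual benchmark):

```javascript
// Both `x` and `y` are upvars of the inner function, so every call
// performs two upvar reads and one upvar write -- the access pattern
// that the GETALIASEDVAR/SETALIASEDVAR ops speed up, and one where the
// non-reentrant closure optimization does not apply.
function bench(iterations) {
    var x = 1;
    var y = 0;
    function step() {
        y += x;   // upvar reads of x and y, upvar write of y
    }
    for (var i = 0; i < iterations; i++)
        step();
    return y;
}

bench(1000); // → 1000
```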
Comment on attachment 640004 [details] [diff] [review]
patch

Review of attachment 640004 [details] [diff] [review]:
-----------------------------------------------------------------

Nice!

::: js/src/jsanalyze.cpp
@@ +319,5 @@
>         break;
>
>       case JSOP_GETALIASEDVAR:
>       case JSOP_CALLALIASEDVAR:
> +     case JSOP_SETALIASEDVAR:

Can you fold these cases in with the previous one, since the case bodies are the same?
Unfortunately bug 772303 (https://hg.mozilla.org/integration/mozilla-inbound/rev/fad7d06d7dd5) had to be backed out for WinXP PGO-only jsreftest failures. This bug (amongst others) either caused conflicts with the backout of fad7d06d7dd5 that were unresolvable (at least for someone who doesn't work on the JS engine), or else Bugzilla dependencies indicated that one of its dependents had now been backed out. Since there was no one in #developers who could resolve the conflicts, this bug has unfortunately been backed out as well: https://hg.mozilla.org/integration/mozilla-inbound/rev/f0be4b70b814
Backed out due to bug 773609 and bug 773844. https://hg.mozilla.org/mozilla-central/rev/0602e44ac248
Created attachment 642798 [details] [diff] [review]
fix and test

Arg, there is a pre-existing bug where the parser completely ignores function statements when it links uses to definitions in the enclosing scope. The fix is to just tweak the existing deoptimization check in LeaveFunction so that it catches function statements in addition to eval/with.
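A sketch (illustrative names) of the pattern that trips this kind of use/def linking: a function *statement* — a declaration nested inside another statement — whose binding only exists once that statement executes, so it cannot be linked purely syntactically:

```javascript
// `f` contains a free use of `g`. The only definition of `g` is a
// function statement inside the `if`, which defines the binding only
// when that branch actually runs. A parser that links f's use of `g`
// to a definition found purely syntactically would get this wrong;
// the conservative fix is to deoptimize, as with eval/with.
function outer(cond) {
    function f() { return g(); }          // free use of `g` in a closure
    if (cond) {
        function g() { return "inner"; }  // function statement
    }
    return f();
}

// With cond === false, `g` is never defined, so f() throws.
```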
(This patch fixes both regressions in comment 10.)
On Maemo5 with a XUL/Qt build (ARM Thumb2/NEON), we've noticed a significant slowdown in our SunSpider benchmark performance. Prior to this commit we achieved 6060ms on the N900 at stock CPU frequency. After this commit it was down to 6734ms. Current mozilla-central gives 8900ms, so we've got more regressions to find. :(
After this patch landed, we had to fix the string.replace(lambda) hack: bug 775801. Do you still see a regression with this fix?
With the latest m-c, the SunSpider result is 6120ms.