See URL for the shell test case. I get these numbers:

-j:    990 ms
-m:    1729 ms
-m -n: 2328 ms

Profiling shows much time is spent in GetElem/SetElem stub calls with -n. It also looks like we're not inlining every charCodeAt call. Maybe either ropes or missing type information.
The main problem here is that this benchmark uses the variable "i" for both doubles and integers. If I use a different variable for the "switch(msgLen % 4)" in SHA1 (where it uses i as a double), -m -n improves a lot:

-m -n: 1234 ms

The remaining difference vs. TM is mostly bug 608782 and bug 609296.
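To illustrate, here is a minimal sketch of the pattern being described (not the actual SHA1 source; function and variable names are made up for illustration). Reusing one variable for both a double and an integer forces type inference to treat it as a double everywhere, including in the hot integer loop:

```javascript
// Problematic pattern: `i` holds a double in one place, so the flow-insensitive
// type inference marks `i` as double for the whole function, deoptimizing the
// integer loop below.
function hashSlow(msgLen) {
  var i = msgLen / 4;              // `i` holds a double here
  var sum = 0;
  for (i = 0; i < msgLen; i++) {   // ...so this loop runs on doubles
    sum += i;
  }
  return sum;
}

// Fixed version: the double use is isolated in its own variable, so the loop
// counter `i` stays integer-typed.
function hashFast(msgLen) {
  var frac = msgLen / 4;           // double use in a separate variable
  var sum = 0;
  for (var i = 0; i < msgLen; i++) {
    sum += i;
  }
  return sum;
}
```

Both versions compute the same result; only the inferred types (and hence the generated code) differ.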
Created attachment 525342 [details] Micro-benchmark If I change 1.1 to 1 in this micro-benchmark, -n -m becomes 9x faster. Can we add a heuristic or something to detect things like this?
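The attachment itself is not reproduced here, but the shape of the problem is a loop whose accumulator type flips from int to double based on a single constant. A plausible sketch of that pattern (hypothetical, not the actual attachment):

```javascript
// Hypothetical micro-benchmark sketch: changing the literal 1.1 to 1 makes
// the accumulator `t` integer-typed, which type inference can exploit.
function sumLoop() {
  var t = 0;
  for (var i = 0; i < 1000000; i++) {
    t += 1.1;   // with `1` instead of `1.1`, `t` stays an integer
  }
  return t;
}
```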
Ah, there are a few ways to fix this problem. kraken-stanford-crypto-pbkwhatever (bug 642412) also has this problem, where the arg is an array that gets copied to a local and then reused as the loop iterator.

- This can be fixed within inference itself by running on an SSA-ified form of the script, i.e. making it flow-sensitive enough to distinguish the different uses of the local variable. This is how pointer analyses usually work. For IonMonkey we're probably going to want SSA in the backend for codegen, and may want to consider pushing it up to the bytecode so that inference ends up more precise.

- This can be fixed within JM by speculating within the loop that the variable is an integer, and making sufficient checks to ensure it stays an integer (in this case, no extra checks are required).

I think in the end we'll want both of these, but I'm tempted to investigate the SSA option first. That is harder but will improve inference results both inside and outside the script, and should feed into IonMonkey well.

There is a separate issue that our performance adding doubles in a loop is lousy; I noticed that yesterday and it definitely needs fixing.
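The reuse pattern from the kraken benchmark can be sketched like this (illustrative only, not the actual kraken source). An SSA-ified analysis would rename the two uses of `x` into distinct variables and keep their types separate:

```javascript
// Illustrative sketch of the reuse pattern: the local `x` first holds an
// array argument, then is reused as the integer loop iterator. A
// flow-insensitive inference merges both types into `x`; SSA renaming would
// split it into x1 (Array) and x2 (int).
function process(arg) {
  var x = arg;                 // x : Array here
  var n = x.length;
  var total = 0;
  for (x = 0; x < n; x++) {    // x : int here, but the types get merged
    total += arg[x];
  }
  return total;
}
```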
With the patch for bug 650715, the problem with this micro-benchmark is gone. For the SHA1 benchmark:

-m -n: 1839 ms (was 2328 ms)

Still slower than the 1234 ms from comment 1, but good progress.
Linux64 --enable-optimize --disable-debug builds:

[jwalden@find-waldo-now src]$ opt/js /tmp/pk-sha1.js
540ms
[jwalden@find-waldo-now src]$ opt/js --no-ti /tmp/pk-sha1.js
1390ms
[jwalden@find-waldo-now src]$ opt/js --no-ion /tmp/pk-sha1.js
609ms

Seems like TI has been changed just a bit, such that this is no longer an issue. Feel free to reopen if there's something I've missed. (I haven't tested other engines because the initial bug report was purely about SpiderMonkey-to-self comparisons.)