User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.7; rv:18.0) Gecko/18.0 Firefox/18.0
Build ID: 20121015042010

Steps to reproduce:

See this jsperf, which illustrates the problem quite bluntly: http://jsperf.com/functions-created-globally-or-via-new-function

This test creates four global functions (foo1, foo2, foo3, and foo4). foo1 and foo3 are created via inline code in a script tag, whereas foo2 and foo4 are created via `new Function(" .. ")`. All four are created as global functions hung off the `window` object. The tests then call those functions to see how fast they operate.

NOTE: all four functions have essentially identical code. The only difference between the first pair (foo1 and foo2) and the second pair (foo3 and foo4) is whether or not a global var is referenced from inside the function. Aside from how they are constructed, foo1 and foo2 are identical to each other, as are foo3 and foo4.

Actual results:

See the results for the non-FF browsers: their variance in perf is FAR smaller than it is in the FF results. Also note that the point of this test is to look at the relative perf diffs within the pairs foo1/foo2 and foo3/foo4, not necessarily that foo1/foo2 should match foo3/foo4 (although that would be nice, and IS true in Safari 6).

Expected results:

There should theoretically be no diff between foo1 and foo2, or between foo3 and foo4, but there's quite a bit of variance there. In the other browsers, such variance doesn't exist. In fact, in Safari 6 both pairs are the same, which is even more surprising (and nice). But again, to be clear, the variance between foo3 and foo4 is the major concern to focus on here.
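For readers who don't want to dig through the jsperf page, here is a hedged sketch of the construction pattern described above. The real test bodies live on jsperf; these stand-ins only mirror the shape (inline definition vs. the Function constructor, with and without a global reference), and `globalThis` stands in for `window`. The variable name `someGlobal` is illustrative, not the benchmark's actual identifier.

```javascript
// Hypothetical global referenced by the foo3/foo4 pair (name made up here).
globalThis.someGlobal = 2;

// foo1/foo3: defined inline, as script-tag code would be.
globalThis.foo1 = function (x) { return x * 2; };
globalThis.foo3 = function (x) { return x * someGlobal; };

// foo2/foo4: identical bodies, but compiled by the Function constructor.
// This is the code path the bug reports as much slower in Firefox.
globalThis.foo2 = new Function("x", "return x * 2;");
globalThis.foo4 = new Function("x", "return x * someGlobal;");
```

Each pair computes the same result; the complaint is purely about the relative speed of the Function-constructed versions.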
OS: Mac OS X → All
Product: Firefox → Core
Hardware: x86 → All
Bradley Meck updated the jsperf with variations not using `new Function(..)` but just `Function(..)`. Should be the same, and results unsurprisingly are the same. But it's more complete as a control in the test. http://jsperf.com/functions-created-globally-or-via-new-function/2
I should point out that the reason this is an issue is that a lot of template engines, code transpilers, and other JS code generators use `Function(..)` to execute compiled code. I discovered this problem while troubleshooting perf issues for my "grips" template engine, and then looked around and saw several other major template engines using similar techniques and, presumably, suffering the same less-than-optimal performance characteristics in FF.
Can we verify that with the patch from that bug?
Yeah, I think the problem is just that Function() constructor code isn't compiled with COMPILE_N_GO. I think someone tried to turn this on and ran into a few problems, but I forget the exact details. Ideally we'd remove COMPILE_N_GO (bug 679939).
Agreed this is bug 646597. The patch there is pretty thoroughly bitrotted. I'll give it a shot...
Glad to know this is identified as a valid bug. Hope it can be addressed quickly!?!? :) Just to let you see how it's affecting "real" (not just contrived) code, here's a performance test for my "grips" templating engine: http://jsperf.com/grips-performance/3 Notice the huge diffs between the red and blue bars for FF, compared to the sameness of them for all others. I've talked to a few other template engine authors in the past couple of days and they've confirmed they're plagued by the same problem.
Just a heads up that script injection and global eval no longer avoid this issue in the Nightly, so as of the Nightly there is no workaround for this issue :'(
Has this bug been addressed? I just re-ran this performance test in FF23, and the blue bar (`Function` compiled code) was actually 21% *faster* than loading the same code in a .js file and executing. Firstly, that would make it appear that perhaps this bug has been addressed. Is that true? Secondly, it's kinda bizarre now that the same code in a `Function` context runs quite a bit faster than the same code just loaded as stand-alone .js. What's up with that?
This has, indeed, been addressed; see bug 646597. In the current Nightly, at least, I can't reproduce the `new Function` version being faster. Maybe that was a fluke in your run? If it was not, I don't know how it could happen at all, but I do know that it doesn't, anymore. ;)
Status: NEW → RESOLVED
Resolution: --- → FIXED
Sorry, I may have caused confusion when I referred to a performance test. The original perf test above does indeed appear to be identical now. This test, which I linked later in the thread, still shows Function() code running faster (the second run I just did was 10%, not 21%) than the inlined code: http://jsperf.com/grips-performance/3 It's the blue and red bars, for comparison. Specifically, it's the "direct partial, compiled" (Function-compiled) vs. "direct partial, pre-compiled" (inlined code) tests. Can you try that perf test in Nightly?
Ah, I see. I get 304,336 (compiled) vs 312,400 (pre-compiled), with error margins making the results basically equal, so this, too, seems to behave as one would expect in current Nightly.
Thanks! Good to hear!
(In reply to Kyle Simpson from comment #14)
> Thanks! Good to hear!

Thank you for the feedback! And keep it coming. :)
Hmmm... I just got upgraded to FF 24 (I'm on the Aurora channel), and I decided to re-run both tests. I noticed performance discrepancies (20-50%) on both between inline and Function-compiled code. I examined the original test and discovered that its logic is a little flawed in terms of predictable run-time, so I modified it here: http://jsperf.com/functions-created-globally-or-via-new-function/4 I've now re-run that test in FF24, Chrome 30, and Saf 6.0.5; you can see the results there. In Chrome and Saf, the blue and red tests are basically the same. In FF, the inline code runs significantly faster than the Function-compiled version of the same logic: 30% faster in one test run, and 55% faster in the second. So, based on that observation, unless I'm missing something, I think FF23 had sorta addressed this problem (as I saw last nite), but it looks to have regressed in FF24. Can you confirm?
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
In the current Nightly, I can't reproduce that, no. So, I tested this with an, again, slightly modified version of the benchmark: http://jsperf.com/functions-created-globally-or-via-new-function/5 How does this test run for you? In general, jsperf is just a very difficult testing environment that makes it hard to see whether a slowdown comes from the code you want to test or from the harness around it. Sadly, I don't really have a better alternative.
Well, now I'm really confused, because a little while ago when I ran version 4 of that test, I was seeing big diffs in my FF24, as I reported. But now, when I run version 4, I'm getting identical results. You can see that the initial runs I did were quite different by looking at the test results saved on version 4 for "FF24". But now they're not different. The only thing that might have changed? That I've relaunched the browser, perhaps. I don't get it. I also don't think that either version 4 or version 5 should be different. Can you explain what would be different? I see the difference in code, but I don't see how it should affect the perf of the tests at all. They should be identical, the way I understand things.
(In reply to Kyle Simpson from comment #18)
> Well now I'm really confused, because a little while ago when I ran version
> 4 of that test, I was seeing big diffs in my FF24, as I reported. But now,
> when I run version 4, I'm getting identical results. You can see the initial
> runs I did were quite different by looking at the test results saved on
> version 4 for "FF24". But now they're not different.

Oddly, I saw a difference at some point, too. Maybe there was some change in how jsperf works? Unlikely, but I don't have any better explanation.

> The only thing that might have changed? That I've relaunched the browser,
> perhaps. I don't get it.

Neither do I :/

> I also don't think that either version 4 or version 5 should be different.
> Can you explain what would be different? I see the difference in code, but I
> don't see how it should affect the perf of the tests at all. Should be
> identical, in the way I understand things.

The reason they might be different is that version 4 technically has to create a closure for the IIFE, and the tested function uses that closure. However, given that most (all?) browsers should pretty much make that closure go away, it probably shouldn't cause any real differences. As I mentioned in 782906#c6, I'm working on a testing environment that I hope will produce more reliable results while still being as simple to use as jsperf.
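To make the structural difference concrete, here is a hedged sketch of the two shapes being compared. The real benchmark bodies are on jsperf; these stand-ins only show the scope-chain difference, with made-up names.

```javascript
// Version-4 shape: the tested function closes over an IIFE's scope.
var viaIIFE = (function () {
  var captured = 2; // lives in the closure the engine must, in theory, keep
  return function (x) { return x * captured; };
})();

// Version-5 shape: no enclosing closure for the engine to optimize away.
function direct(x) { return x * 2; }

// Both compute the same result; only the scope chain differs.
viaIIFE(21); // 42
direct(21);  // 42
```

As noted above, an engine that can prove the closure is effectively constant should make the two perform identically; the question is whether the JIT actually does so.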
Status: REOPENED → NEW
I now have FF25. The first time (after a page load) that I run the version-5 test, I get this: http://gyazo.com/0fbf293d8d14f19f0250b8fe47eafc7c No difference. Then I run it again in that same page session, and I get this: http://gyazo.com/a78364ebf0e275159333ecdb00b1c27e Huge difference. Then I run a third time, and get another huge variation. I can't figure out why this test is jumping all over the place in terms of performance. Small variations are not surprising, but major diffs definitely are. I make lots of jsperfs and I have never seen perf diffs this erratic. Any idea what's making it so susceptible to diffs?
Could you find the regressed changeset by bisecting with mozregression? http://mozilla.github.io/mozregression/
If you're asking me, I don't have any evidence or clue that this was actually a regression. It's been around long enough that I just assumed it's always been this way.
jsperf loads a whole bunch of script (all the graph junk) into the same global as the code it's testing, and the graph junk can impact the numbers measured for the test scripts (by competing for GC resources, TI resources, etc). So measuring anything on a jsperf test after the first time is going to give you different numbers from the first time....
Just updated jsPerf to use the `forceIFrame` option of the Google Charts. See if that helps.
The `forceIFrame` option just moved the chart to an iframe. A better solution, one we will go for next, is to move the chart scripts to an iframe as well. With all that said, if Firefox is the only browser experiencing this issue, then this is a Firefox problem and some heuristics need tweaking.
I've run into similar issues with Chrome as well, for what it's worth.
I've added `#nochart` support. It will still post to Browserscope but won't load the charts. I see the dramatically slower perf for functions created with `Function(...)` in FF22 but not in FF23. http://jsperf.com/functions-created-globally-or-via-new-function/5#nochart For what it's worth I've seen this FF issue outside of jsPerf too.
Sure, the Function() thing is our issue. The instability in the measured numbers was (and is, if the no-chart thing is not the default) the jsperf issue.
FWIW, I just re-ran the "nochart" version of the test in FF25: http://gyazo.com/069d1fbfd74f40e5ec5468fbcbecdc20 These results are now flip-flopped from before. It used to be that inline (test 1) was faster than Function-compiled (test 2), and now these tests are repeatably giving me the opposite result, as you can see in that screenshot: Function-compiled (test 2) is outperforming the inline equivalent (test 1) by over 2x. NOTE: that's with no global involved. Tests 3 and 4 involve a global, and they seem to be even.
Spoke too soon. The first 5 or 6 runs looked like the screenshot, but 10 or so more runs show the results jumping around a lot. So it would appear the "nochart" option did not prevent the significant performance variance I'm observing.
Latest stable Chrome, IE, Firefox produce consistent results from test to retest with and without the charts. The odd results come from Firefox 25 & 26. This looks like an issue with Firefox and not jsPerf.
John, latest stable Firefox gives different results on some tests depending on whether there is any data to chart (that is, the first time you run the test you get different numbers from all later runs of the same test). The fact that there is an impact from the charts is, imo, an issue with Firefox only to the extent that there are limits to how many resources a page can consume. But to the extent that the charts are able to have an impact at all, the issue is with jsperf, which runs the chart code and the test code in the same global. None of this is likely directly relevant to this bug.
> an issue with Firefox to the extent that there are limits to how many resources a page
> can consume.

Note, by the way, that that particular effect is gone with nightlies.
> latest stable firefox gives different results on some tests depending on whether there
> is any data to chart (as in, the first time you run the test, you get different numbers
> from all later runs of the same test)

Not on the one I was testing (rev 25).

> The fact that there is an impact from the charts is, imo, an issue with Firefox to the
> extent that there are limits to how many resources a page can consume.

There is no impact from the charts, as no other browser has this issue in the test presented, not even stable FF.

> But the fact that the charts are able to have an impact at all the issue is with jsperf,
> which is running the chart code and the test code in the same global.

I've still not seen a real issue shown in a stable browser.
25 isn't latest stable. 23 is latest stable. If you want to reproduce an issue from the charts in 23, it's trivial:

1) Load http://jsperf.com/demo-of-microbenchmark-silliness
2) Wait for the charts to load.
3) Run the test and observe the number for the first test ("prop").
4) Clone the test (presumably to create http://jsperf.com/demo-of-microbenchmark-silliness/5 or so).
5) Run the test.
6) Compare the numbers to those from step 3.

The charts are affecting the JIT's heuristics, pretty clearly.
> 25 isn't latest stable.

I was speaking about the 5th revision of the test (typoed as 25th).

> If you want to reproduce an issue from the charts in 23, it's trivial

Thanks for the repro. Chrome & Safari don't seem to be affected, while Firefox 23 performs similarly to the others when charts are present; however, when charts aren't loaded it runs `id = node.id` as fast as `id = 3`, which seems odd, or at least different from Chrome/Safari.
id = node.id performs as fast as id = 3 when TI correctly deduces that "node" is always an HTMLElement. At that point the pure getter is loop-hoisted, and the test becomes a no-op, just like the "id = 3" test (which is why microbenchmark tests are .... <sigh>). The charts code is interfering with the type inference machinery in this instance in Firefox 23, causing it to deoptimize all DOM property access on <div> elements because it does some weird things internally with <div>s. In Firefox 25 we modified the type-inference code to avoid this situation in more cases (see bug 893897), but the larger point stands: to the extent that JITs have heuristics based on observed code and data patterns, the charts code running in the same environment as the code being tested will skew the results for the code being tested. Now we've _really_ strayed from the issue in this bug...
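A hedged sketch of the loop-hoisting point above. A plain object stands in for the DOM node here, since the effect described is about what the JIT can prove about the property read, not about the DOM itself.

```javascript
// Once the engine proves `node` always has the same shape and that reading
// `.id` has no side effects, the read is loop-invariant and can be hoisted
// out of the loop, leaving an empty loop body: exactly the `id = 3` case.
var node = { id: 3 };
var id;

for (var i = 0; i < 1e6; i++) {
  id = node.id; // candidate for loop-invariant code motion
}
for (var j = 0; j < 1e6; j++) {
  id = 3; // trivially invariant; the loop optimizes to a no-op
}
```

Which is why such a microbenchmark ends up measuring two empty loops, and why anything (like the chart code) that spoils the type-inference proof for one loop but not the other skews the comparison.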
Cool cool, I'll have the charts moved to their own iframe shortly. Also, thanks for the insight into Firefox's DOM/JIT optimizations.
I ran https://jsperf.com/functions-created-globally-or-via-new-function/5 five times and got the same result every time:

foo1 (inline, no global): 5,600
foo2 (new Function, no global): 5,600
foo3 (inline, global): 2,500
foo4 (new Function, global): 2,500

WFM?