Closed
Bug 451974
Opened 16 years ago
Closed 16 years ago
TM: Basic for loop slower with JIT enabled
Categories
(Core :: JavaScript Engine, defect)
Tracking
()
VERIFIED
FIXED
People
(Reporter: Dpeelen, Assigned: gal)
References
()
Details
(Keywords: testcase, verified1.9.1, Whiteboard: fixed-in-tracemonkey)
Attachments
(6 files, 2 obsolete files)
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.0; nl; rv:1.9.0.1) Gecko/2008070208 Firefox/3.0.1
Build Identifier: Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1a2pre) Gecko/20080824031931 Minefield/3.1a2pre
Testing a basic for loop with javascript.options.jit.content set to true results in 50% slower performance. See the URL for the loop test.
Basic for loop. for (var i=0; i<sarr.length; i++) { }
AND
While loop that imitates a for loop. var i = 0; while (i < hColl.length) { i++; }
Both run about 50% slower (30ms with no jit, 45ms with jit). All other tests run between 0 and 5 times faster. I made 10 tests with jit on and 10 with jit off, calculated the average and the standard deviation, and checked whether the difference in averages was more than 3 times the standard deviation. I'm pretty sure these results are accurate.
Reproducible: Always
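For reference, a minimal sketch of how such a measurement could be made (my own illustration, not the code from the linked test page; the array name sarr and the sample sizes are placeholders):

var sarr = new Array(1000);               // placeholder test data
function timeBasicForLoop() {
    var start = Date.now();
    for (var i = 0; i < sarr.length; i++) { }
    return Date.now() - start;            // elapsed milliseconds for one run
}
function stats(samples) {
    var n = samples.length, mean = 0, sq = 0;
    for (var i = 0; i < n; i++) mean += samples[i];
    mean /= n;
    for (var i = 0; i < n; i++) sq += (samples[i] - mean) * (samples[i] - mean);
    return { mean: mean, stddev: Math.sqrt(sq / n) };
}

Collecting 10 samples of timeBasicForLoop() with the jit pref on and 10 with it off, then checking whether the difference of the two means exceeds 3 times the standard deviation, is one way to reproduce the comparison described above.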
Updated•16 years ago
Version: unspecified → Trunk
Reporter
Comment 1•16 years ago
Native Array:
While looping by popping values (this fails on sparse arrays). var x; while (x = arr.pop()) { }
AND
Sparse Native Array:
For reference against forEach(). var f=function(x){}; for (var i=0, len=sarr.length; i<len; i++) { f(sarr[i]); }
Also seem to show a small slowdown: 25% for the while loop and 15% for the for loop. Though it is difficult to prove this statistically, as the randomness of the test is bigger than that, I still think these results are solid after 30 tests with jit on and 30 with jit off.
Also, I'm not sure what I was doing when I wrote the previous message, but I was talking about the two loops in "Sparse Native Array",
Basic for loop. for (var i=0; i<sarr.length; i++) { }
AND
While loop that imitates a for loop. var i = 0; while (i < sarr.length) { i++; }
NOT the While loop in "HTML Collection"; that one shows no difference in performance with jit on/off.
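(A quick illustration, mine rather than from the test page, of why the pop-based loop fails on a sparse array: pop() returns undefined for a hole, which is falsy and ends the loop before the remaining values are seen.)

var arr = [];
arr[0] = 'a';
arr[4] = 'b';                 // indexes 1-3 are holes, length is 5
var x, count = 0;
while (x = arr.pop()) {       // yields 'b' first, then undefined from the hole at index 3
    count++;
}
// count is 1 here even though the array held two values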
Reporter
Comment 2•16 years ago
I see a BIG slowdown in Native Array:
For reference against forEach(). var f=function(x){}; for (var i=0, len=arr.length; i<len; i++) { f(arr[i]); }
With jit enabled. Where jit previously sped that one up from 8ms to 2ms, a newer nightly jumps from 9ms to 85ms with jit enabled.
This first happens on:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1b1pre) Gecko/20080908091724 Minefield/3.1b1pre
and did not happen on:
Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1b1pre) Gecko/20080907032646 Minefield/3.1b1pre
Summary: Basic for loop slower with JIT enabled → TM: Basic for loop slower with JIT enabled
Assignee
Comment 3•16 years ago
Could you post a full test case? I can't reproduce this. Also, we just added push and pop.
Assignee
Updated•16 years ago
Assignee: general → gal
Reporter
Comment 4•16 years ago
I'm unsure how to make a more detailed test case. I'm able to reproduce this both on a Windows Vista computer with an Intel Core 2 Duo CPU E6750 and on a Windows XP computer with an AMD Sempron 3000+. Any other information I could have overlooked? I used a fresh nightly (zip, not install version) and a fresh profile each time.
Still able to reproduce the reported problems with the latest nightly 20080919032843.
I don't see the problem (or I might be blind =)) you are describing using Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1b1pre) Gecko/20080919032843 Minefield/3.1b1pre
See the attachment for more details of my sample result comparison.
By the way, there are some slowdowns, but I'm not sure if you were referring to those.
Reporter
Comment 7•16 years ago
Look at 17, 19, and 22. Those are the ones I'm talking about. 19 and 22 were there from the start (50% slowdown). 17 is new since comment #2; 22ms vs 157ms LOOKS big to me.
Try build 20080907 or an earlier one and see the difference in 17; that used to be a 4x speedup.
Reporter
Comment 8•16 years ago
Look at the new numbers; recently "Sparse Native Array: For reference against forEach()" dropped like a stone as well (look at the attachment for a clearer overview; referring to these tests in words is a bit difficult).
I didn't bother to find a regression window for that one yet; would that be useful?
Using Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1b2pre) Gecko/20081112 Minefield/3.1b2pre the performance seems to have "normalized"
Reporter
Comment 10•16 years ago
I certainly hope that by "normalized" you mean "no change", because that is what I see. I even got more red now that I'm using a more accurate test (I modified the page to let the tests run 4 times as long).
Just look at 17, 19, 21 and 29 in the attachment. Especially 17 and 29; their 600% and 400% slowdowns worry me.
Attachment #341271 - Attachment is obsolete: true
Comment 11•16 years ago
We need to make sure we take care of any huge regressions we see here.
Flags: blocking1.9.1+
Updated•16 years ago
Status: UNCONFIRMED → NEW
Ever confirmed: true
Comment 12•16 years ago
My mistake, I might have been looking at the wrong comparison.
You are right, the perf loss is still present, though we have some perf gains as well.
I have attached new data using 20081116033829 Minefield/3.1b2pre. Hopefully I got the formulas right.
Assignee
Comment 13•16 years ago
Test 17 seems to have the worst perf loss (> 70%).
Here is a shell test case extracted from the web page:
var chars = '0123456789abcdef';
function getRandomString() {
    var len = Math.ceil(Math.random() * 7) + 3; // 4-10
    var result = "";
    while (len--) {
        result += chars.charAt(Math.floor(Math.random() * chars.length));
    }
    return result;
}
var size = 1000;
var mult = 100;
var arr = [];
var lsize = size;
while (lsize--) { arr.push('' + getRandomString() + ''); }
for (var a = 0; a < mult; a++) {
    var f = function(x){}; for (var i = 0, len = arr.length; i < len; i++) { f(arr[i]); }
}
In the shell I get a 5.5x speedup, not a 70% slowdown.
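(One way to time this directly in the shell, assuming the tracing JIT of that era was toggled with the shell's -j flag, is to wrap just the benchmark loop so the array setup stays outside the measurement:)

var t0 = Date.now();
for (var a = 0; a < mult; a++) {
    var f = function(x){};
    for (var i = 0, len = arr.length; i < len; i++) { f(arr[i]); }
}
print('elapsed: ' + (Date.now() - t0) + ' ms');   // compare runs with and without -j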
Assignee
Comment 14•16 years ago
I can reproduce the slowdown in the browser. We are taking a mismatch side exit per iteration.
callee0=object<0x162019d8:Function> this0=object<0x160fdba0:Window> vars0=int<1000> vars1=int<100> vars2=object<0x16202660:Array> vars3=int<-1> vars4=object<0x162026c0:Array> vars5=int<11303> vars6=int<0> vars7=double<1.22775e+12> vars8=object<0x16202700:Object> vars9=object<0x16202720:Object> vars10=boolean<2> vars11=string<0x97ea000> vars12=int<18> vars13=object<0x162029c0:Function> vars14=int<489> vars15=int<1000> stack0=object<0x162029c0:Function> stack1=object<0x0:null> stack2=string<0x1620cca8>
Looking for compat peer 75@290, from 0x17241a10 (ip: 0x16e8e192, hits=2)
checking vm types 0x17241a10 (ip: 0x16e8e192): callee0=O/O this0=O/O vars0=I/I vars1=I/I vars2=O/O vars3=I/I vars4=O/O vars5=I/I vars6=I/I vars7=D/D vars8=O/O vars9=O/O vars10=B/B vars11=S/S vars12=I/I vars13=O/O vars14=I/I vars15=I/I
entering trace at file:///Users/gal/Desktop/loop-test.html:75@290, native stack slots: 23 code: 0x16209ee0
stack: callee0=object<0x162019d8:Function> this0=object<0x160fdba0:Window> vars0=int<1000> vars1=int<100> vars2=object<0x16202660:Array> vars3=int<-1> vars4=object<0x162026c0:Array> vars5=int<11303> vars6=int<0> vars7=double<1.22775e+12> vars8=object<0x16202700:Object> vars9=object<0x16202720:Object> vars10=boolean<2> vars11=string<0x97ea000> vars12=int<18> vars13=object<0x162029c0:Function> vars14=int<490> vars15=int<1000>
leaving trace at file:///Users/gal/Desktop/loop-test.html:75@300, op=call, lr=0xa5a129c, exitType=3, sp=3, calldepth=0, cycles=46200
Assignee
Comment 15•16 years ago
We perform a shapeless callee check on f in the call, which seems to fail in the browser; that seems reasonable, since we clone the function here and every instance of the inner loop has its own function. Strangely, in the shell the same code seems to get traced (???). I will have to gdb through this and find out what exactly happens in the interpreter.
Assignee
Comment 16•16 years ago
ANONFUNOBJ seems to clone the function only in certain cases. The behavior seems to differ in the browser and in the shell (parent). Not sure why. Any ideas?
BEGIN_CASE(JSOP_ANONFUNOBJ)
    /* Load the specified function object literal. */
    LOAD_FUNCTION(0);

    /* If re-parenting, push a clone of the function object. */
    parent = js_GetScopeChain(cx, fp);
    if (!parent)
        goto error;
    obj = FUN_OBJECT(fun);
    if (OBJ_GET_PARENT(cx, obj) != parent) {
        obj = js_CloneFunctionObject(cx, fun, parent);
        if (!obj)
            goto error;
    }
    PUSH_OPND(OBJECT_TO_JSVAL(obj));
END_CASE(JSOP_ANONFUNOBJ)
Assignee
Comment 17•16 years ago
Assignee
Comment 18•16 years ago
Diagnosed with Brendan's help:
The problem occurs if ANONFUNOBJ happens inside a function. Even if that function is not heavyweight, the parent is the outer function, not the global object.
static JSFunction *
NewCompilerFunction(JSContext *cx, JSTreeContext *tc, JSAtom *atom,
                    uintN lambda)
{
    JSObject *parent;
    JSFunction *fun;

    JS_ASSERT((lambda & ~JSFUN_LAMBDA) == 0);
    parent = (tc->flags & TCF_IN_FUNCTION)
             ? FUN_OBJECT(tc->u.fun)
             : tc->u.scopeChain;
This forces a clone at every iteration.
There might be hidden dependencies on the proto-func's parent being the outer proto-func. Igor might know.
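(A rough JS-level picture of the diagnosis, with hypothetical names of my own: when the lambda literal sits inside a function, its compile-time parent is that enclosing function object, so the OBJ_GET_PARENT check in JSOP_ANONFUNOBJ never matches and a fresh clone is pushed on every evaluation.)

function runBenchmark(arr) {                 // hypothetical wrapper, like the test page's runner
    for (var a = 0; a < 100; a++) {
        var f = function(x) {};              // ANONFUNOBJ inside a function: cloned each pass
        for (var i = 0, len = arr.length; i < len; i++) f(arr[i]);
    }
}
// The same lambda evaluated at top level keeps its original (global) parent,
// which would explain why the extracted shell test case in comment 13 still traced.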
Assignee
Comment 19•16 years ago
Assignee
Comment 20•16 years ago
Attachment #350825 - Attachment is obsolete: true
Attachment #350835 - Flags: review?(brendan)
Updated•16 years ago
Attachment #350835 - Flags: review?(brendan) → review+
Assignee
Comment 21•16 years ago
Fixed in TM. Independent confirmation welcome. We are still slow for some cases like sparse arrays using integer indexes, since we keep falling off trace there and can't do much about it. We might want to spawn off a separate bug that kills and blacklists a tree location if we mindlessly exit from there all the time.
http://hg.mozilla.org/tracemonkey/rev/6052f603166e
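(For the remaining sparse-array slowness, a hypothetical sketch of the kind of loop meant here, not the actual test-page code: a sparse array read by integer index, where most reads hit holes.)

var sarr = [];
sarr[0] = 'a';
sarr[999] = 'b';                 // length is 1000 but only two slots are filled
for (var i = 0; i < sarr.length; i++) {
    var v = sarr[i];             // nearly every access hits a hole (undefined)
}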
Whiteboard: fixed-in-tracemonkey
Updated•16 years ago
Flags: in-testsuite?
Reporter
Comment 22•16 years ago
I can confirm nr #17 and #29 (nr in the attachment) to be fixed. #17 went from needing 712% more time to needing only 18% of the time compared to no jit. #29 is now sped up by about 10%; it used to be a 4x slowdown.
Only #19 and #21 remain, which is what the original bug report was all about. They even changed from a 40% slowdown when I first reported them to a 100% slowdown now.
Comment 23•16 years ago
We need new bugs on remaining slowdowns, even if they are low priority hard cases. One bug per problem testcase, please. Thanks,
/be
Comment 24•16 years ago
There was a tracemonkey -> mozilla-central merge today.
Status: NEW → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Reporter
Comment 25•16 years ago
Are the new bugs mentioned in comment #23 already filed, or should I create them?
Comment 26•16 years ago
If you can file, that would be great.
Reporter
Comment 27•16 years ago
Ok, I'm opening new bugs now.
Also, while both #17 and #29 (from the attachments) were solved in the 20081202 tracemonkey tinderbox build, only #17 is now solved after the merge in Gecko/20081212 Minefield/3.2a1pre.
Should I also file a separate bug on #29, or will that part still get merged later?
Comment 28•16 years ago
The recent merge should have merged in anything that was on the branch on Dec 2. So please file a bug on comment 29, yes.
We really need to start running performance regression tests on all this stuff. :(
Reporter
Comment 29•16 years ago
Meh, I filed bug 469347, bug 469351 and bug 469370. But the last one, about problem #29, was suddenly gone when I ran the test again, so I guess that one was just a screw-up on my side.
Guess the merge was successful on this one after all. :)
Comment 30•16 years ago
Keywords: fixed1.9.1
Comment 31•16 years ago
Tests modeled on Andreas' adaptation. Note that 02 fails, perhaps related to comment 18.
Comment 32•16 years ago
http://hg.mozilla.org/mozilla-central/rev/72511e233790
/cvsroot/mozilla/js/tests/js1_8_1/trace/regress-451974-01.js,v <-- regress-451974-01.js
initial revision: 1.1
/cvsroot/mozilla/js/tests/js1_8_1/trace/regress-451974-02.js,v <-- regress-451974-02.js
initial revision: 1.1
Flags: in-testsuite?
Flags: in-testsuite+
Flags: in-litmus-
Comment 33•16 years ago
modify test to prevent random failures on x64
http://hg.mozilla.org/mozilla-central/rev/c3f8c5bc3e37
Comment 34•16 years ago
Verifying 1.9.1 and 1.9.2 based on js1_8_1/trace/regress-451974-01.js, but js1_8_1/trace/regress-451974-02.js still fails. Filed Bug 482914.
Status: RESOLVED → VERIFIED
Keywords: fixed1.9.1 → verified1.9.1