Closed Bug 470737 Opened 16 years ago Closed 16 years ago

TM: "switch(1/0){case Infinity:}" 4X slower with JIT enabled

Categories

(Core :: JavaScript Engine, defect)

Platform: x86
OS: macOS
Type: defect
Priority: Not set
Severity: normal

Tracking

VERIFIED FIXED

People

(Reporter: jruderman, Assigned: Waldo)

Details

(Keywords: perf, testcase, verified1.9.1, Whiteboard: fixed-in-tracemonkey)

Attachments

(1 file)

for (i=0;i<500000; ++i) { switch (1/0) { case Infinity: } }

4X slower with JIT enabled.  Keeps hitting a side exit?
If you enable TRACEMONKEY=verbose you can see the abort reason.
It keeps repeating stuff like this:

double<inf> int<49> 
stack0=double<inf> stack1=double<inf> 
Looking for compat peer 1@11, from 0x30d110 (ip: 0x30d0bb, hits=2)
checking vm types 0x30d110 (ip: 0x30d0bb): global0=D/D global1=I/I 
entering trace at typein:1@11, native stack slots: 3 code: 0x261f70
global: double<inf> int<50> 
stack: 
leaving trace at typein:1@18, op=case, lr=0x269a60, exitType=0, sp=2, calldepth=0, cycles=4914
Side exit, not abort; is there a reason we don't give guards a "reason" argument (debug-only, perhaps) to aid in this determination?

But in any case, the problem here is simple: pointers to equivalent doubles are not generally equal (which, incidentally, is fixed by the errant prescription noted in bug 407391, if not for this reason).
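A rough sketch of the idea behind the attached patch (hypothetical helper name, not the actual TraceMonkey code): when two boxed doubles are compared for strict equality, the guard has to compare the values the pointers refer to, because equal values need not share an allocation.

    typedef double jsdouble;

    // Sketch only: d1 != d2 does not imply *d1 != *d2, so compare the stored
    // values rather than pointer identity.  (Value comparison also gets NaN
    // right, since NaN == NaN is false in C++ just as NaN === NaN is in JS.)
    static bool StrictlyEqualDoubles(const jsdouble *d1, const jsdouble *d2)
    {
        return *d1 == *d2;
    }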
Assignee: general → jwalden+bmo
Status: NEW → ASSIGNED
Attachment #354192 - Flags: review?(gal)
Comment on attachment 354192 [details] [diff] [review]
d1 != d2 && *d1 == *d2

We have asNumber() which you could use here.
Attachment #354192 - Flags: review?(gal) → review+
This is a gotta-have for 1.9.1: we don't want to side-exit any time we're comparing doubles for strict equality (or any time we're passed an integer and somehow compute a double equal to that integer).
Flags: blocking1.9.1?
Whiteboard: fixed-in-tracemonkey
Why is this causing a 4x slowdown? Shouldn't it quit attempting to use the JIT?
Flags: blocking1.9.1? → blocking1.9.1+
I don't know; ^ gal, dvander, brendan?
Something very bad is happening here:

recorder: started(1), aborted(0), completed(16), different header(0), trees trashed(0), slot promoted(0), unstable loop variable(0), breaks(0), returns(0), unstableInnerCalls(0)
monitor: triggered(499998), exits(499998), type mismatch(0), global mismatch(1)

As you can see, we attach 16 branches to the tree (1 tree started, 16 traces completed, so those are all branches). At some point the branches-per-tree limit kicked in and prevented us from slowing to a crawl or running out of memory (16 traces per tree max). Basically we keep recording new branches for the case, but they keep mismatching, hence the 4x slowdown. Waldo's patch fixes the root cause.
Once we hit the branches-per-tree limit, shouldn't we stop trying to create new branches?
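A minimal sketch of what comment 10 proposes, using hypothetical names (TreeInfo, MAX_BRANCHES, blacklisted are illustrative, not the actual TraceMonkey API): once a tree has exhausted its branch budget, mark it so that side exits stop invoking the recorder instead of re-triggering the monitor on every iteration.

    struct TreeInfo {
        unsigned branchCount;   // branches already attached to this tree
        bool     blacklisted;   // set once we give up extending the tree
    };

    static const unsigned MAX_BRANCHES = 16;

    // Called from the side-exit path before starting a new branch recording.
    static bool ShouldRecordNewBranch(TreeInfo *ti)
    {
        if (ti->blacklisted)
            return false;
        if (ti->branchCount >= MAX_BRANCHES) {
            ti->blacklisted = true;   // don't keep trying on later iterations
            return false;
        }
        return true;
    }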
merged to m-c
Status: ASSIGNED → RESOLVED
Closed: 16 years ago
Resolution: --- → FIXED
Did we ever file a bug on comment 10?  That sounds like a problem to me, if that's really what's going on.
(In reply to comment #10)
> Once we hit the branches-per-tree limit, shouldn't we stop trying to create new
> branches?

(In reply to comment #12)
> Did we ever file a bug on comment 10?  That sounds like a problem to me, if
> that's really what's going on.

Andreas, what do you think?

/be
test included in js1_8_1/trace/trace-test.js 
http://hg.mozilla.org/mozilla-central/rev/8f967a7729e2
Flags: in-testsuite+
Flags: in-litmus-
v 1.9.1, 1.9.2 assuming js1_8_1/trace/trace-test.js covers the performance issue.
Status: RESOLVED → VERIFIED