Closed Bug 472180 Opened 16 years ago Closed 15 years ago

TM: Move fragment hit and blacklist counts to hashtable in oracle

Categories

(Core :: JavaScript Engine, defect)

Platform: x86 macOS
Priority: Not set
Severity: normal

Tracking


VERIFIED FIXED

People

(Reporter: gal, Assigned: graydon)

References

Details

(Keywords: verified1.9.1)

Attachments

(1 file, 2 obsolete files)

When dealing with many fragments, the binary search algorithm used in nanojit's map implementation becomes a major performance bottleneck.
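
For orientation, here is a minimal standalone sketch (not nanojit's or SpiderMonkey's actual code; all names are illustrative) of the direction this bug takes: replace the O(log n) binary search on the monitoring hot path with an O(1) direct-mapped counter table keyed by pc.

#include <cstddef>
#include <cstdint>
#include <cstring>

static const size_t ORACLE_SIZE = 4096;  /* mirrors the patches below */

struct HitCounters {
    uint32_t hits[ORACLE_SIZE];
    HitCounters() { memset(hits, 0, sizeof(hits)); }

    static size_t hashPC(const void* pc) {
        uintptr_t h = 5381;                  /* DJB seed, as in the patch */
        h = ((h << 5) + h) + uintptr_t(pc);  /* h * 33 + pc */
        return size_t(h % ORACLE_SIZE);
    }

    /* O(1) and collision-tolerant: a collision merely merges the heat of
       two pcs, costing optimization quality but never correctness. */
    uint32_t hit(const void* pc) { return ++hits[hashPC(pc)]; }
};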
Flags: in-testsuite+
Flags: in-litmus-
One possibility would be modifying the branch opcodes to have a "fragment index" parameter that gets ignored by the interpreter.  This would be an index into a per-function local table.  The tracer would store the Fragment pointers in these tables and there wouldn't be a need for the Fragmento lookup.
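
A rough sketch of that idea, purely hypothetical since it was never implemented in this bug (every name below is made up): each branch opcode would carry a small immediate operand, ignored by the interpreter and used by the tracer as a direct index into a per-function table.

struct Fragment;  /* opaque here */

struct PerFunctionFragments {
    static const unsigned MAX_BRANCH_SITES = 256;
    Fragment* table[MAX_BRANCH_SITES];  /* filled in lazily by the tracer */

    /* Direct indexing: no Fragmento lookup, no binary search. */
    Fragment*& at(unsigned fragmentIndex) { return table[fragmentIndex]; }
};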
Assignee: general → graydon
Blocks: 468988
blocks a blocker
Flags: blocking1.9.1?
Attached patch Part 1 (obsolete) — Splinter Review
I'm doing this in parts. Part 1 here moves all the hit/blacklist logic into the oracle and corrects the busted calculations performed in the fragment hit and blacklist functions. Subsequent parts will shift to consulting the oracle first in the hot path, before looking up a fragment at all.

Note that this patch causes quite a number of jit stats to drift. I can investigate them one by one if necessary?
Attachment #357456 - Flags: review?(gal)
Comment on attachment 357456 [details] [diff] [review]
Part 1

>diff -r 0a9626d5642c js/src/jstracer.cpp
>--- a/js/src/jstracer.cpp	Fri Jan 16 14:43:15 2009 -0800
>+++ b/js/src/jstracer.cpp	Fri Jan 16 16:59:25 2009 -0800
>@@ -401,6 +401,66 @@
>     return int(h);
> }
> 
>+JS_REQUIRES_STACK static inline size_t
>+hitHash(const void* ip)
>+{    
>+    uintptr_t h = 5381;
>+    hash_accum(h, uintptr_t(ip));
>+    return size_t(h);
>+}

Where does 5381 come from and why is this JS_REQUIRES_STACK?

>+
>+Oracle::Oracle()
>+{
>+    clear();
>+}
>+
>+/* Fetch the jump-target hit count for the current global/pc pair. */
>+int32_t
>+Oracle::getHits(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    uint32_t hc = hits[h];
>+    uint32_t bl = blacklistLevels[h];
>+
>+    /* Clamp ranges for subtraction. */
>+    if (bl > 30) 
>+        bl = 30;
>+    hc &= 0x7fffffff;
>+    
>+    return hc - (1<<bl);
>+}
>+
>+/* Fetch and increment the jump-target hit count for the current global/pc pair. */
>+int32_t 
>+Oracle::hit(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    if (hits[h] < 0xffffffff)
>+        hits[h]++;
>+    
>+    return getHits(ip);
>+}
>+
>+/* Reset the hit count for a jump-target and relax the blacklist count. */
>+void 
>+Oracle::resetHits(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    if (hits[h] > 0)
>+        hits[h]--;
>+    if (blacklistLevels[h] > 0)
>+        blacklistLevels[h]--;
>+}
>+
>+/* Blacklist with saturation. */
>+void 
>+Oracle::blacklist(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    if (blacklistLevels[h] < 0xffffffff)
>+        blacklistLevels[h]++;
>+}
>+
> 
> /* Tell the oracle that a certain global variable should not be demoted. */
> JS_REQUIRES_STACK void
>@@ -434,6 +494,8 @@
> void
> Oracle::clear()
> {
>+    memset(hits, 0, sizeof(hits));
>+    memset(blacklistLevels, 0, sizeof(blacklistLevels));
>     _stackDontDemote.reset();
>     _globalDontDemote.reset();
> }
>@@ -3141,9 +3203,9 @@
>         c->root = f;
>     }
> 
>-    debug_only_v(printf("trying to attach another branch to the tree (hits = %d)\n", c->hits());)
>-
>-    if (++c->hits() >= HOTEXIT) {
>+    debug_only_v(printf("trying to attach another branch to the tree (hits = %d)\n", oracle.getHits(c->ip));)
>+
>+    if (oracle.hit(c->ip) >= HOTEXIT) {
>         /* start tracing secondary trace from this point */
>         c->lirbuf = f->lirbuf;
>         unsigned ngslots;
>@@ -3289,10 +3351,10 @@
>             if (old == NULL)
>                 old = tm->recorder->getFragment();
>             js_AbortRecording(cx, "No compatible inner tree");
>-            if (!f && ++peer_root->hits() < MAX_INNER_RECORD_BLACKLIST)
>+            if (!f && oracle.hit(peer_root->ip) < MAX_INNER_RECORD_BLACKLIST)
>                 return false;
>             if (old->recordAttempts < MAX_MISMATCH)
>-                old->resetHits();
>+                oracle.resetHits(old->ip);
>             f = empty ? empty : tm->fragmento->getAnchor(cx->fp->regs->pc);
>             return js_RecordTree(cx, tm, f, old, demotes);
>         }
>@@ -3319,13 +3381,13 @@
>             /* abort recording so the inner loop can become type stable. */
>             old = fragmento->getLoop(tm->recorder->getFragment()->root->ip);
>             js_AbortRecording(cx, "Inner tree is trying to stabilize, abort outer recording");
>-            old->resetHits();
>+            oracle.resetHits(old->ip);
>             return js_AttemptToStabilizeTree(cx, lr, old);
>         case BRANCH_EXIT:
>             /* abort recording the outer tree, extend the inner tree */
>             old = fragmento->getLoop(tm->recorder->getFragment()->root->ip);
>             js_AbortRecording(cx, "Inner tree is trying to grow, abort outer recording");
>-            old->resetHits();
>+            oracle.resetHits(old->ip);
>             return js_AttemptToExtendTree(cx, lr, NULL, old);
>         default:
>             debug_only_v(printf("exit_type=%d\n", lr->exitType);)
>@@ -3818,7 +3880,7 @@
>        activate any trees so increment the hit counter and start compiling if appropriate. */
>     if (!f->code() && !f->peer) {
> monitor_loop:
>-        if (++f->hits() >= HOTLOOP) {
>+        if (oracle.hit(f->ip) >= HOTLOOP) {
>             /* We can give RecordTree the root peer. If that peer is already taken, it will
>                walk the peer list and find us a free slot or allocate a new tree if needed. */
>             return js_RecordTree(cx, tm, f->first, NULL, NULL);
>@@ -3830,7 +3892,7 @@
>     debug_only_v(printf("Looking for compat peer %d@%d, from %p (ip: %p, hits=%d)\n",
>                         js_FramePCToLineNumber(cx, cx->fp), 
>                         FramePCOffset(cx->fp),
>-                        f, f->ip, f->hits());)
>+                        f, f->ip, oracle.getHits(f->ip));)
>     Fragment* match = js_FindVMCompatiblePeer(cx, f);
>     /* If we didn't find a tree that actually matched, keep monitoring the loop. */
>     if (!match) 
>@@ -3969,7 +4031,7 @@
> {
>     if (frag->kind == LoopTrace)
>         frag = frago->getLoop(frag->ip);
>-    frag->blacklist();
>+    oracle.blacklist(frag->ip);
> }
> 
> JS_REQUIRES_STACK void
>diff -r 0a9626d5642c js/src/jstracer.h
>--- a/js/src/jstracer.h	Fri Jan 16 14:43:15 2009 -0800
>+++ b/js/src/jstracer.h	Fri Jan 16 16:59:25 2009 -0800
>@@ -159,17 +159,26 @@
> #endif
> 
> /*
>- * The oracle keeps track of slots that should not be demoted to int because we know them
>- * to overflow or they result in type-unstable traces. We are using a simple hash table.
>- * Collisions lead to loss of optimization (demotable slots are not demoted) but have no
>- * correctness implications.
>+ * The oracle keeps track of hit counts for program counter locations, as
>+ * well as slots that should not be demoted to int because we know them to
>+ * overflow or they result in type-unstable traces. We are using simple
>+ * hash tables.  Collisions lead to loss of optimization (demotable slots
>+ * are not demoted, etc.) but have no correctness implications.
>  */
> #define ORACLE_SIZE 4096
> 
> class Oracle {
>+    uint32_t hits[ORACLE_SIZE];
>+    uint32_t blacklistLevels[ORACLE_SIZE];
>     avmplus::BitSet _stackDontDemote;
>     avmplus::BitSet _globalDontDemote;
> public:
>+    Oracle();
>+    int32_t hit(const void* ip);
>+    int32_t getHits(const void* ip);
>+    void resetHits(const void* ip);
>+    void blacklist(const void* ip);
>+
>     JS_REQUIRES_STACK void markGlobalSlotUndemotable(JSContext* cx, unsigned slot);
>     JS_REQUIRES_STACK bool isGlobalSlotUndemotable(JSContext* cx, unsigned slot) const;
>     JS_REQUIRES_STACK void markStackSlotUndemotable(JSContext* cx, unsigned slot);


Did you benchmark this? Is our oracle size chosen right? When do we flush the hit counters?
Attachment #357456 - Flags: review?(gal) → review+
r+ but we won't commit until more work is in place.
Yeah. The constant, like the accumulate step, comes from the same place as in the other hashes: the old chestnut, the DJB hash. There is an infinite variety of nonsense about hashes on the net and in the literature, and very few concrete contenders in the "recommendations" space. If you look into it, it's sorta funny/sad.
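
For reference, here is the old chestnut itself (a standalone sketch, not our actual hash_accum): seed 5381, accumulate step h = h * 33 + input.

#include <cstddef>
#include <cstdint>

static uint32_t
djb2(const unsigned char* data, size_t len)
{
    uint32_t h = 5381;
    for (size_t i = 0; i < len; i++)
        h = ((h << 5) + h) + data[i];  /* h * 33 + data[i] */
    return h;
}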

Anyway, the JS_REQUIRES_STACK is a mistake; it's vestigial from an earlier version of the patch that hashed the global object and shape as well. The benchmark is, as you say, possibly a touch slower, but since this part doesn't include the "real" fix (kicking out the binary search), there's no way it could possibly be faster.

The hashtable size is the same as for all other oracle decisions. No idea if this is wise. I can try to measure the range/spread if you like, or run a sequence of builds at different sizes? You suggested it was rare to have a lot of traces, and we flush frequently, so..

We flush the hit counters when the oracle is cleared, which ... oh my, I thought that was every time we flushed the cache, but it seems to be only when we GC. Maybe more often is better? Or some other time?
Attached patch Update the patch (obsolete) — Splinter Review
OK, I've changed my mind: the "part 2" of the patch is sufficiently minimal that there's not much point in doing it in two passes.

This update makes a few changes: 
  - Copies in an "initial blacklist level" of 5, from nj's logic. This means
    that the first blacklisting penalizes the pc by 2**5 = 32 hits, and the
    penalty goes up from there (see the worked sketch after this list).
  - Splits the clearing logic in 2, so that we can clear the hit counts
    independently when the cache flushes.
  - Removes the vestigial JS_REQUIRES_STACK on the hash function
  - Implements the "fast path" of bypassing binary search in the fragmento
    until such time as the initial hit count is reached.
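
Here's the worked sketch promised above (standalone and illustrative only; note that it uses the "level 0 means no penalty" form that a later comment in this bug settles on):

#include <cstdint>
#include <cstdio>

/* effectiveHits = hits - (level ? 2**min(level, 30) : 0), per the patch. */
static int32_t
effectiveHits(uint32_t hits, uint32_t level)
{
    if (level > 30)
        level = 30;          /* clamp so the shift stays well-defined */
    hits &= 0x7fffffff;      /* clamp so the subtraction can't wrap */
    return int32_t(hits) - (level ? (1 << level) : 0);
}

int main()
{
    printf("%d\n", effectiveHits(2, 0));   /* 2: never blacklisted */
    printf("%d\n", effectiveHits(2, 5));   /* -30: needs 32 more hits */
    printf("%d\n", effectiveHits(34, 5));  /* 2: penalty paid off */
    return 0;
}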

This variant of the patch shows a very slight performance improvement (<1%) over the existing code. Tests are all OK, except that some of the frequency counts change. Here are the before/after counts for the tests that changed:

testNestedExitStackOuter
  [recorderStarted: 5 recorderAborted: 2 traceTriggered: 9]
  [recorderStarted: 4 recorderAborted: 1 traceTriggered: 9]
testHOTLOOPCorrectness
  [recorderStarted: 1 recorderAborted: 0 traceTriggered: 0]
  [recorderStarted: 0 recorderAborted: 0 traceTriggered: 0]
testRUNLOOPCorrectness
  [recorderStarted: 1 recorderAborted: 0 traceTriggered: 1]
  [recorderStarted: 1 recorderAborted: 0 traceTriggered: 0]
testDateNow
  [recorderStarted: 1 recorderAborted: 0 traceTriggered: 1]
  [recorderStarted: 1 recorderAborted: 0 traceTriggered: 0]
testNewDate
  [recorderStarted: 1 recorderAborted: 0 traceTriggered: 1]
  [recorderStarted: 1 recorderAborted: 0 traceTriggered: 0]
testThinLoopDemote
  [recorderStarted: 3 recorderAborted: 0 traceCompleted: 1 traceTriggered: 0 unstableLoopVariable: 2]
  [recorderStarted: 2 recorderAborted: 0 traceCompleted: 1 traceTriggered: 0 unstableLoopVariable: 1]
testWeirdDateParse
  [recorderStarted: 10 recorderAborted: 1 traceCompleted: 5 traceTriggered: 13 unstableLoopVariable: 6 noCompatInnerTrees: 1]
  [recorderStarted: 8 recorderAborted: 1 traceCompleted: 5 traceTriggered: 13 unstableLoopVariable: 4 noCompatInnerTrees: 0]
testAddAnyInconvertibleObject
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 93]
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 92]
testAddInconvertibleObjectAny
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 93]
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 92]
testAddInconvertibleObjectInconvertibleObject
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 93]
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 92]
testBitOrAnyInconvertibleObject
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 93]
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 92]
testBitOrInconvertibleObjectAny
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 93]
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 92]
testBitOrInconvertibleObjectInconvertibleObject
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 93]
  [recorderStarted: 1 recorderAborted: 0 sideExitIntoInterpreter: 92]

I believe most of these are not a big deal, and *think* I roughly understand what's happening: the existing lookup fetches the fragment first and falls through to searching the peer list (and possibly extending it) if there's already code in the fragment. The new logic consults a pc-based hit count before looking at any fragments at all, returning early if the hot loop count has not been met; it essentially consolidates "hotness" into one counter per pc rather than one value per fragment, which makes a difference if there's a nonempty peer list.

I'll study the changing cases a bit more to be sure that's what's going on. But does the patch otherwise look ok?
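
As a toy model of that consolidation (standalone, every name hypothetical):

#include <cstdio>

/* Old scheme: heat lives on each Fragment peer, so a type-incompatible
   second peer warms up from zero. New scheme: heat lives on the pc and is
   shared by every peer at that site. */
int main()
{
    const int HOTLOOP = 2;
    int oldPeerHits[2] = { 0, 0 };  /* old: one counter per fragment */
    int pcHits = 0;                 /* new: one counter per pc */

    /* Two iterations match peer 0; a third arrives needing a new peer 1. */
    oldPeerHits[0] += 2;
    oldPeerHits[1] += 1;
    pcHits += 3;

    printf("old: record peer 1 yet? %s\n",
           oldPeerHits[1] >= HOTLOOP ? "yes" : "no");  /* no */
    printf("new: record at this pc? %s\n",
           pcHits >= HOTLOOP ? "yes" : "no");          /* yes */
    return 0;
}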
Attachment #357456 - Attachment is obsolete: true
Attachment #357848 - Flags: review?(gal)
Attachment #357848 - Flags: review?(gal) → review+
Comment on attachment 357848 [details] [diff] [review]
Update the patch

>diff -r e0b6dc460118 js/src/jstracer.cpp
>--- a/js/src/jstracer.cpp	Sun Jan 18 14:45:56 2009 -0500
>+++ b/js/src/jstracer.cpp	Tue Jan 20 11:51:21 2009 -0800
>@@ -105,6 +105,9 @@
> /* Max blacklist level of inner tree immediate recompiling  */
> #define MAX_INNER_RECORD_BLACKLIST  -16
> 
>+/* Blacklist level to obtain on first blacklisting. */
>+#define INITIAL_BLACKLIST_LEVEL 5
>+
> /* Max native stack size. */
> #define MAX_NATIVE_STACK_SLOTS 1024
> 
>@@ -401,6 +404,68 @@
>     return int(h);
> }
> 
>+static inline size_t
>+hitHash(const void* ip)
>+{    
>+    uintptr_t h = 5381;
>+    hash_accum(h, uintptr_t(ip));
>+    return size_t(h);
>+}
>+
>+Oracle::Oracle()
>+{
>+    clear();
>+}
>+
>+/* Fetch the jump-target hit count for the current global/pc pair. */
>+int32_t
>+Oracle::getHits(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    uint32_t hc = hits[h];
>+    uint32_t bl = blacklistLevels[h];
>+
>+    /* Clamp ranges for subtraction. */
>+    if (bl > 30) 
>+        bl = 30;
>+    hc &= 0x7fffffff;
>+    
>+    return hc - (1<<bl);
>+}
>+
>+/* Fetch and increment the jump-target hit count for the current global/pc pair. */

/* ... for the current pc. */

>+int32_t 
>+Oracle::hit(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    if (hits[h] < 0xffffffff)
>+        hits[h]++;
>+    
>+    return getHits(ip);
>+}
>+
>+/* Reset the hit count for a jump-target and relax the blacklist count. */
>+void 
>+Oracle::resetHits(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    if (hits[h] > 0)
>+        hits[h]--;
>+    if (blacklistLevels[h] > 0)
>+        blacklistLevels[h]--;
>+}
>+
>+/* Blacklist with saturation. */
>+void 
>+Oracle::blacklist(const void* ip)
>+{
>+    size_t h = hitHash(ip);
>+    if (blacklistLevels[h] == 0)
>+        blacklistLevels[h] = INITIAL_BLACKLIST_LEVEL;
>+    else if (blacklistLevels[h] < 0xffffffff)
>+        blacklistLevels[h]++;
>+}
>+
> 
> /* Tell the oracle that a certain global variable should not be demoted. */
> JS_REQUIRES_STACK void
>@@ -432,7 +497,14 @@
> 
> /* Clear the oracle. */
> void
>-Oracle::clear()
>+Oracle::clearHitCounts()
>+{
>+    memset(hits, 0, sizeof(hits));
>+    memset(blacklistLevels, 0, sizeof(blacklistLevels));    
>+}
>+
>+void
>+Oracle::clearDemotability()
> {
>     _stackDontDemote.reset();
>     _globalDontDemote.reset();
>@@ -3141,9 +3213,9 @@
>         c->root = f;
>     }
> 
>-    debug_only_v(printf("trying to attach another branch to the tree (hits = %d)\n", c->hits());)
>-
>-    if (++c->hits() >= HOTEXIT) {
>+    debug_only_v(printf("trying to attach another branch to the tree (hits = %d)\n", oracle.getHits(c->ip));)
>+
>+    if (oracle.hit(c->ip) >= HOTEXIT) {
>         /* start tracing secondary trace from this point */
>         c->lirbuf = f->lirbuf;
>         unsigned ngslots;
>@@ -3289,10 +3361,10 @@
>             if (old == NULL)
>                 old = tm->recorder->getFragment();
>             js_AbortRecording(cx, "No compatible inner tree");
>-            if (!f && ++peer_root->hits() < MAX_INNER_RECORD_BLACKLIST)
>+            if (!f && oracle.hit(peer_root->ip) < MAX_INNER_RECORD_BLACKLIST)
>                 return false;
>             if (old->recordAttempts < MAX_MISMATCH)
>-                old->resetHits();
>+                oracle.resetHits(old->ip);
>             f = empty ? empty : tm->fragmento->getAnchor(cx->fp->regs->pc);
>             return js_RecordTree(cx, tm, f, old, demotes);
>         }
>@@ -3319,13 +3391,13 @@
>             /* abort recording so the inner loop can become type stable. */
>             old = fragmento->getLoop(tm->recorder->getFragment()->root->ip);
>             js_AbortRecording(cx, "Inner tree is trying to stabilize, abort outer recording");
>-            old->resetHits();
>+            oracle.resetHits(old->ip);
>             return js_AttemptToStabilizeTree(cx, lr, old);
>         case BRANCH_EXIT:
>             /* abort recording the outer tree, extend the inner tree */
>             old = fragmento->getLoop(tm->recorder->getFragment()->root->ip);
>             js_AbortRecording(cx, "Inner tree is trying to grow, abort outer recording");
>-            old->resetHits();
>+            oracle.resetHits(old->ip);
>             return js_AttemptToExtendTree(cx, lr, NULL, old);
>         default:
>             debug_only_v(printf("exit_type=%d\n", lr->exitType);)
>@@ -3808,6 +3880,10 @@
>         js_FlushJITCache(cx);
>     
>     jsbytecode* pc = cx->fp->regs->pc;
>+
>+    if (oracle.hit(pc) < HOTLOOP)
>+        return false;
>+
>     Fragmento* fragmento = tm->fragmento;
>     Fragment* f;
>     f = fragmento->getLoop(pc);
>@@ -3817,24 +3893,19 @@
>     /* If we have no code in the anchor and no peers, we definitively won't be able to 
>        activate any trees so increment the hit counter and start compiling if appropriate. */
>     if (!f->code() && !f->peer) {
>-monitor_loop:
>-        if (++f->hits() >= HOTLOOP) {
>-            /* We can give RecordTree the root peer. If that peer is already taken, it will
>-               walk the peer list and find us a free slot or allocate a new tree if needed. */
>-            return js_RecordTree(cx, tm, f->first, NULL, NULL);
>-        }
>-        /* Threshold not reached yet. */
>-        return false;
>+        /* We can give RecordTree the root peer. If that peer is already taken, it will
>+           walk the peer list and find us a free slot or allocate a new tree if needed. */
>+        return js_RecordTree(cx, tm, f->first, NULL, NULL);
>     }
>     
>     debug_only_v(printf("Looking for compat peer %d@%d, from %p (ip: %p, hits=%d)\n",
>                         js_FramePCToLineNumber(cx, cx->fp), 
>                         FramePCOffset(cx->fp),
>-                        f, f->ip, f->hits());)
>+                        f, f->ip, oracle.getHits(f->ip));)
>     Fragment* match = js_FindVMCompatiblePeer(cx, f);
>     /* If we didn't find a tree that actually matched, keep monitoring the loop. */
>     if (!match) 
>-        goto monitor_loop;
>+        return js_RecordTree(cx, tm, f->first, NULL, NULL);
> 
>     VMSideExit* lr = NULL;
>     VMSideExit* innermostNestedGuard = NULL;
>@@ -3969,7 +4040,7 @@
> {
>     if (frag->kind == LoopTrace)
>         frag = frago->getLoop(frag->ip);
>-    frag->blacklist();
>+    oracle.blacklist(frag->ip);
> }
> 
> JS_REQUIRES_STACK void
>@@ -4181,6 +4252,7 @@
>         tm->globalSlots->clear();
>         tm->globalTypeMap->clear();
>     }
>+    oracle.clearHitCounts();
> }
> 
> JS_FORCES_STACK JSStackFrame *
>diff -r e0b6dc460118 js/src/jstracer.h
>--- a/js/src/jstracer.h	Sun Jan 18 14:45:56 2009 -0500
>+++ b/js/src/jstracer.h	Tue Jan 20 11:51:21 2009 -0800
>@@ -159,22 +159,36 @@
> #endif
> 
> /*
>- * The oracle keeps track of slots that should not be demoted to int because we know them
>- * to overflow or they result in type-unstable traces. We are using a simple hash table.
>- * Collisions lead to loss of optimization (demotable slots are not demoted) but have no
>- * correctness implications.
>+ * The oracle keeps track of hit counts for program counter locations, as
>+ * well as slots that should not be demoted to int because we know them to
>+ * overflow or they result in type-unstable traces. We are using simple
>+ * hash tables.  Collisions lead to loss of optimization (demotable slots
>+ * are not demoted, etc.) but have no correctness implications.
>  */
> #define ORACLE_SIZE 4096
> 
> class Oracle {
>+    uint32_t hits[ORACLE_SIZE];
>+    uint32_t blacklistLevels[ORACLE_SIZE];
>     avmplus::BitSet _stackDontDemote;
>     avmplus::BitSet _globalDontDemote;
> public:
>+    Oracle();
>+    int32_t hit(const void* ip);
>+    int32_t getHits(const void* ip);
>+    void resetHits(const void* ip);
>+    void blacklist(const void* ip);
>+
>     JS_REQUIRES_STACK void markGlobalSlotUndemotable(JSContext* cx, unsigned slot);
>     JS_REQUIRES_STACK bool isGlobalSlotUndemotable(JSContext* cx, unsigned slot) const;
>     JS_REQUIRES_STACK void markStackSlotUndemotable(JSContext* cx, unsigned slot);
>     JS_REQUIRES_STACK bool isStackSlotUndemotable(JSContext* cx, unsigned slot) const;
>-    void clear();
>+    void clearHitCounts();
>+    void clearDemotability();
>+    void clear() { 
>+        clearDemotability(); 
>+        clearHitCounts();
>+    }
> };
> 
> typedef Queue<uint16> SlotList;
Ah, found it.

@@ -430,7 +430,7 @@
         bl = 30;
     hc &= 0x7fffffff;
     
-    return hc - (1<<bl);
+    return hc - (bl ? (1<<bl) : 0);
 }

The hit count was 1 too low. With that edit, we have only one bit of drift: 

testNestedExitStackOuter  
  [recorderStarted: 5 recorderAborted: 2 traceTriggered: 9]  
  [recorderStarted: 5 recorderAborted: 2 traceTriggered: 11] 

The nature of that test is ... somewhat opaque to me. I'm not sure whether I'm doing something bad here by shifting to 11 triggers. I imagine it's due to the collapse of the peer counts into a single heat value, so we don't wait for peers to heat up. But I can't really tell if there's even a peer involved here. Opinions?
Fixed a couple more minor differences of opinion between what the old code does and what the new code does, with respect to whether blacklisting should prevent recording or execution. It now passes trace-test as-is, with no changes.
Attachment #357848 - Attachment is obsolete: true
Attachment #358299 - Flags: review?(gal)
Attachment #358299 - Flags: review?(gal) → review+
Summary: TM: Binary search yields poor performance for fragment lookup and LirNameMap → TM: Move fragment hit and blacklist counts to hashtable in oracle
http://hg.mozilla.org/tracemonkey/rev/47a266935e7d
Status: NEW → RESOLVED
Closed: 15 years ago
Resolution: --- → FIXED
Flags: blocking1.9.1? → blocking1.9.1+
Andreas, js1_5/Regress/regress-451322.js still times out for me. Not sure how to verify this based on a timeout.
Bob, did you mean to make that comment private? Andreas can't see it, so I'm clearing the private flag, assuming it was set by accident.

/be
Actually I did, but this is fine. Sorry, Andreas.
js1_5/Regress/regress-451322.js stopped timing out around 2009/04/15. Verified on 1.9.1 and 1.9.2.
Status: RESOLVED → VERIFIED