Closed Bug 606477 Opened 14 years ago Closed 14 years ago

JM: Compile INITPROP/INITELEM

Categories

(Core :: JavaScript Engine, defect)

Platform: x86 macOS
Priority: Not set
Severity: normal

Tracking


RESOLVED FIXED

People

(Reporter: bhackett1024, Assigned: bhackett1024)

References

(Blocks 1 open bug)

Details

(Whiteboard: fixed-in-tracemonkey)

Attachments

(1 file, 7 obsolete files)

Some straggler stubbed opcodes.  ss-date-format-tofte and v8-splay do a lot of these.  I don't know how much there is to gain; it's hard to compare the cost of initializing elements between engines without also comparing the cost of doing the allocations themselves.  TM is about 20% faster than JM at microbenchmarks making small objects.
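For context, a sketch (my own, not taken from this bug) of the sort of small-object microbenchmark being referred to:

function makeSmallObjects() {
  // Allocate many short-lived small objects in a hot loop; timings on code
  // shaped like this are what the 20% comparison refers to (assumed shape).
  for (var i = 0; i < 1000000; i++) {
    var o = { x: i, y: i + 1, z: i + 2 };
  }
}
makeSmallObjects();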
Blocks: 605426
When does INITELEM get used as opposed to INITPROP?
INITELEM is used for [...] initializers, but I forgot about NEWARRAY, which is used for [...] initializers within functions.  TM is faster than JM at both NEWARRAY and NEWINIT.  For NEWINIT it doesn't stub the INITPROPs, and for NEWARRAY it preallocates an array and fills its elements in directly.  JM stubs the INITPROPs, and does the fairly silly thing on NEWARRAY of writing the array elements to the stack, then doing a stub call which makes the array and copies in the stack values.

I don't see an easy way to make [...] fast in JM without getting rid of NEWARRAY; this opcode looks like a holdover for making the interpreter (which doesn't constant fold) faster.  We would need to watch TM to make sure it doesn't regress.  Initializers are not observable while they are being constructed, so NEWINIT could preallocate an object with enough slots (the opcode would carry its final size), write the final shape for object initializers (the script must pin this against GC), and INITELEM/INITPROP would write directly to the object's slots.
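A sketch of the opcode mapping described above (per that comment; the exact emitter conditions for choosing NEWARRAY are not spelled out here):

var g = [1, 2, 3];          // array initializer outside a function: NEWINIT + INITELEM
function f() {
  var a = [1, 2, 3];        // array initializer within a function: NEWARRAY (pre-patch)
  var o = { p: 1, q: 2 };   // object initializer: NEWINIT + INITPROP
  return [a, o];
}
f();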
Luke and I were discussing direct slot writes a few weeks ago but I can't remember if we filed. The "let's build the array on the stack and then move it over" is really painful to see. That one seems easiest to fix, so maybe we can test the waters by changing that first.

For objects, is it the case that INITPROP can ever see two different shapes?
I think removing NEWARRAY without also fixing INITELEM would slow things down (we could just do that and handle INITPROP separately, though).  INITPROP should, I think, always see the same shape as long as the script is COMPILE_N_GO.  Initializers ignore setters on Object.prototype and Array.prototype, so the only difference should be which Object.prototype object the initializer has as its prototype (which determines its initial shape, and all subsequent shapes).  With COMPILE_N_GO that Object.prototype is always the same; non-COMPILE_N_GO object initializers would need to be handled differently (probably just stubbing things as they're done now).
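A tiny sketch of the same-shape claim using the shell's shapeOf (the same helper used by the test quoted later in this bug); this assumes a COMPILE_N_GO script:

function make() { return { a: 1, b: 2 }; }
// Repeated evaluations of the same object initializer should produce objects
// with the same shape.
assertEq(shapeOf(make()), shapeOf(make()));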
I think you need an ic at the NEWINIT to catch:

  function f() { return [1,2,3] }
  assertEq(f().x, undefined);
  Object.prototype.x = 3;
  assertEq(f().x, 3);
(In reply to comment #5)
Oops, wrong!  The literal's shape remains the same.
Assignee: general → bhackett1024
Attached patch initelem patch (obsolete) — Splinter Review
This removes NEWARRAY and fixes TM and JM to do better on INITELEM.  For this microbenchmark:

function foo() {
  for (var i = 0; i < 500000; i++) {
    var a = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16];
  }
}
foo();

I get these times:

before js -j: 120.2 ms 
after  js -j: 116 ms 
before js -m: 131.08 ms 
after  js -m: 120.62 ms 
v8:  34.46 ms
jsc: 36.6 ms

Most of the extra cost we take is in page faults; V8 and JSC both do much better with small working sets.  If the arrays are kept live (storing each as an element in an outer array), JSC goes to 220ms and V8 goes to 300ms.  V8 suite improves ~15ms on my machine.
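For reference, a sketch of the "kept live" variant mentioned above (assumed shape; the absolute numbers will differ by machine):

function fooLive() {
  var outer = [];
  for (var i = 0; i < 500000; i++) {
    // Store each array into an outer array so none of them can be collected.
    outer[i] = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16];
  }
  return outer;
}
fooLive();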
A couple more things:

- This can change the behavior of code using sharp variables (which *can* observe initializers while they are being constructed), as it sets the final length of [...] arrays at the NEWINIT rather than bumping it as the array is constructed.  In the middle of construction the array will be full of holes.  The following used to print 0; now it prints 1.

#1=[print(#1#.length)];

- Arrays still need to be protected against slowification while they're being constructed.  Except when used in an object initializer, INITELEM assumes it is accessing a preallocated dense array, and Very Bad Things would happen if it was slowified while being constructed.
To deal with slowification, I think Igor's approach in bug 603318, comment 7 of only slowifying during growth is best.  Arrays can only grow during construction if sharp variables are used to add new elements to them, so INITELEM would then be slow pathed if script->hasSharps (this would preserve sharp semantics for [...] initializers, and needs to be done anyways as sharp variables can trigger slowification in other ways).
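For illustration (sharp-variable syntax, a SpiderMonkey extension at the time; the exact index is my own), an initializer that grows the array while it is still being constructed:

// The sharp reference writes past the elements emitted so far, growing the
// array in the middle of the initializer.
var a = #1=[1, 2, #1#[10] = "x", 4];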
Depends on: 603318
Blocks: 508849
Attached patch initelem + initprop patch (obsolete) — Splinter Review
This handles object initializers by filling in the final shape at the start of the initializer and doing direct writes for each property.  After an INITMETHOD, an INITELEM, or a conversion to dictionary mode, further properties are stubbed.  Still a WIP for working with sharps correctly, but it passes trace-tests.  For the V8 suite, this saves me 50ms with -m (2.5%) and 30ms with -m -j -p (2%).
Attachment #485598 - Attachment is obsolete: true
Attached patch patch (obsolete) — Splinter Review
This patch works with sharp variables, and should be correct except for array slowification during GC.  This is ready for review, but can't land until bug 603318 does (this could be hacked around by only handling arrays with size < MIN_SPARSE_INDEX, but it would be better to just wait).  dvander/jorendorff, do you mind reviewing the appropriate bits now?
Attachment #486113 - Attachment is obsolete: true
Attachment #486195 - Flags: review?(jorendorff)
Attachment #486195 - Flags: review?(dvander)
Bug 602262 optimizes "f.apply(x, [1,2,3])" by recognizing the sequence JSOP_NEWARRAY, JSOP_APPLY(2).  Having JSOP_NEWINIT makes the pattern matching harder (due to purely technical limitations).  It's easy enough to recover this information in the bytecode analyzer, but it could slow down compilation, at least for the naive strategy I'm imagining.  Would it be ok to keep JSOP_NEWARRAY around as a no-op for annotation purposes?
Discussed with Luke; I'll modify this patch to keep NEWARRAY as an opcode but never actually generate it.  Bug 602262 can then pattern match and emit NEWARRAY for the second argument to apply() calls (or remove NEWARRAY entirely, depending on how things look).
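The call pattern bug 602262 targets, for reference (a minimal sketch; the function names are illustrative):

function sum(a, b, c) { return a + b + c; }
function caller(x) {
  // An array literal used directly as the second argument to apply; this is
  // the NEWARRAY-feeding-APPLY sequence discussed above.
  return sum.apply(x, [1, 2, 3]);
}
caller(null);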
Blocks: 607825
Attached patch patch handling slowify (obsolete) — Splinter Review
This handles slowify during GC by generating slower code in TM and JM for array initializers which might be slowified (those with at least MIN_SPARSE_INDEX elements, aka 256), unblocking on bug 603318.  Bits which can be fixed after 603318 are marked for followup bug 607825.
Attachment #486195 - Attachment is obsolete: true
Attachment #486521 - Flags: review?(jorendorff)
Attachment #486521 - Flags: review?(dvander)
Attachment #486195 - Flags: review?(jorendorff)
Attachment #486195 - Flags: review?(dvander)
Attached patch revised patch (obsolete) — Splinter Review
Fix an inverted test which Luke noticed was causing all array initializer elements to be slow pathed.
Attachment #486521 - Attachment is obsolete: true
Attachment #486537 - Flags: review?(jorendorff)
Attachment #486537 - Flags: review?(dvander)
Attachment #486521 - Flags: review?(jorendorff)
Attachment #486521 - Flags: review?(dvander)
>+  return String(#1=[1,2,#1#.length,3,4,delete #1#[0]]);
>+  return String(#1=[1,2,#1#.foo = 3,4,5,6]);

These should be syntax errors, in my opinion. Brendan, can we forbid this? My proposal is to make #1# legal only as an element of an ArrayLiteral or as the RHS of a PropertyAssignment in an ObjectLiteral. That way, I *think*, they wouldn't escape to evil user code until the full ArrayLiteral or ObjectLiteral was done being evaluated. I believe that covers all the uses toSource() has for it, which is the point, right?
jorendorff: why mess with sharps at this point in the release cycle if we can simply deoptimize them?

IIUC, we need to deoptimize arrays that may be slowified, so why not sharps too, and avoid risking breaking some weirdo using sharps?

/be
This patch deoptimizes in any script which uses sharps, so does things right already.  This is more defensive than anything else, as we don't compile sharp variables with either JIT and the interpreter's behavior on INITELEM/INITPROP is unchanged.
Blocks: 606631
No longer blocks: 606631
Still working on this. I'll finish tomorrow morning.
In jit-test/tests/basic/testInitDictionary.js:
>+function stringify(a) {
>+  assertEq(shapes[shapeOf(a)], undefined);
>+  shapes[shapeOf(a)] = 1;
>+  var b = "";
>+  for (var c in a) { b += c + ":" + a[c] + ","; }
>+  return b;
>+}
>+
>+function test1() {
>+  return stringify({a: 0, b: 1, a: function() {} });
>+}
>+for (var i = 0; i < 3; i++)
>+  assertEq(test1(), "b:1,a:function () {\n},");

Please consider making this test not depend on decompiler quirks.

>+for (var i = 0; i < 3; i++)

Use HOTLOOP + 2 or something so we hit the tracer as well.

testInitSlowify.js is good stuff. Please add another one for Objects, just to
be sure. (GC can shrink unused slots from an object; see js_TraceObject. Your
code is correct, as far as I can tell, but tests are good. We'll get it wrong
someday.)

In jsemit.cpp, EmitNewInit:
>+    JS_ASSERT(unsigned(key) < JS_BIT(8));
>+    if (pn->pn_count >= JS_BIT(24))
>+        return false;
>+
>+    ptrdiff_t off = js_EmitN(cx, cg, JSOP_NEWINIT, 4);
>+    if (off < 0)
>+        return false;

The first check here needs to report an error before returning false.

The second one doesn't, because js_EmitN always reports before returning false.

>+void
>+JSObject::revertFromDictionaryMode(const js::Shape *shape)
>+{
>+    JS_ASSERT(inDictionaryMode());
>+    JS_ASSERT_IF(!JSID_IS_VOID(shape->id), shape->slotSpan < lastProp->slotSpan);
>+
>+    lastProp = const_cast<js::Shape *>(shape);
>+}

Below I argue for doing something else instead of adding this method.

But if you decide to use this, it should also revert objShape to shape->shape.
(Just to avoid breaking the shape invariant. If you're calling this, the object
is never going to see the light of day, so it's probably not an actual bug.)

In JSTracer.cpp, TraceRecorder::record_JSOP_INITELEM:
>+    // The object is either a dense array or an object.  Only handle the dense case here.

Micro-nit: Capitalize "Object" please.

>+    // Set the element.
>+    LIns *slots_ins = slots(obj_ins);
>+    box_value_into(v, v_ins, slots_ins, idx.toInt32() * sizeof(Value), ACCSET_SLOTS);

I don't suppose we CSE away the load emitted by slots(obj_ins), do we? Can it
be done easily?
In methodjit/FastOps.cpp, mjit::Compiler::jsop_initprop:
>+    JS_ASSERT(!info.object->inDictionaryMode());
>+    const Shape *shape = info.object->lastProperty();
>+
>+    JSProperty *prop = NULL;
>+    if (!js_DefineNativeProperty(cx, info.object, ATOM_TO_JSID(atom), UndefinedValue(), NULL, NULL,
>+                                 JSPROP_ENUMERATE, 0, 0, &prop, 0)) {
>+        return false;
>+    }
>+
>+    if (info.object->inDictionaryMode()) {

js_DefineNativeProperty will get the job done, but js::PropertyTree::getChild
(declared in jspropertytree.h) is more to the point. And I think it would make
this oddball inDictionaryMode() case impossible, so you wouldn't need revertFromDictionaryMode.

I only skimmed the methodjit part, so that part still needs dvander's review. But everything looks good. r=me with the comments above addressed.
Comment on attachment 486537 [details] [diff] [review]
revised patch

Sorry for the delay. "It's morning somewhere"? :-\
Attachment #486537 - Flags: review?(jorendorff) → review+
(In reply to comment #20)
> I don't suppose we CSE away the load emitted by slots(obj_ins), do we? Can it
> be done easily?

I believe this will get CSE'ed as long as the expressions in the elements don't have side effects on object slots.  I haven't seen such initializers in hot code in benchmarks or in the wild (generally either [x,y,z] or [1,2,3]).
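For illustration (my own example, not from the patch), the distinction between element expressions with and without side effects on object slots:

// No stores to object slots between elements, so the per-element slots load
// is a candidate for CSE.
var simple = [1, 2, 3];

// Each element expression stores into p's slots, which can defeat that CSE.
var p = { q: 0 };
var effectful = [p.q = 1, p.q = 2, p.q = 3];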
> In JSTracer.cpp, TraceRecorder::record_JSOP_INITELEM:
> >+    // The object is either a dense array or an object.  Only handle the dense case here.
>
> Micro-nit: Capitalize "Object" please.

But not Array in "dense array"? Just trying to keep up. I like concrete names too.

More substantively: if you bypass js_DefineNativeProperty, you have to be careful of a few things:

1. No getter/setter defining or default-value normalizing, just data properties.

2. Beware the JSDNP_SET_METHOD code in js_DefineNativeProperty. JSOP_INITMETHOD defeats this optimization so you should be ok. Are you ruling out duplicate ids (ES5 strict mode does)?

3. If you exceed PropertyTree::MAX_HEIGHT without switching to dictionary mode you'll need to lastProp->maybeHash(cx).

JS_PROPERTY_TREE(cx).getChild would be lean and mean, along with maybeHash usage. Hope it can work. I have a bug to clean up the mislocated (in files and in data structures) property tree code, not to worry.

/be
My intent with using js_DefineNativeProperty was to make sure the pregenerated shape is straightforwardly the same as what the interpreter would generate (it's also a handy way to keep the final shape *and* its predecessors rooted for GC).  A JIT'ed initializer object should get a dictionary shape at the same time as an interpreter initializer object.

Hmm, looking at this again, there's a problem with the call to revertFromDictionaryMode.  If js_DefineNativeProperty GCs after setting the dictionary property, then even though the previous shape is on the C stack it can still be collected (I forgot that shapes aren't GC things).  The simple solution is to maintain two objects per initializer, one with the old shape and one with the new shape (updating the old to the new if dictionary mode wasn't triggered on the object).  That would get rid of the need for revertFromDictionaryMode.
Attached patch patch (obsolete) — Splinter Review
This fixes the rooting problem, and gets rid of revertFromDictionaryMode.
Attachment #486537 - Attachment is obsolete: true
Attachment #488406 - Flags: review?(dvander)
Attachment #486537 - Flags: review?(dvander)
(In reply to comment #24)
> > In JSTracer.cpp, TraceRecorder::record_JSOP_INITELEM:
> > >+    // The object is either a dense array or an object.  Only handle the dense case here.
> >
> > Micro-nit: Capitalize "Object" please.
> 
> But not Array in "dense array"? Just trying to keep up. I like concrete names
> too.

Oh, I just said that because elsewhere, "object" usually means "any object" and not "Object object". So "Object" seemed more precise. No such worries with "dense array". Though likely as not the whole idea wasn't even worth wasting a micro-nit on. :)
Nice answer. Using the property tree would have been a sidetrack.

Ultimately we want NEWINIT to create the object with the correct *final* shape and the slots pre-allocated, right? The compiler can just stick a JSObject with the appropriate shape in the JSScript. Then each INITPROP/INITELEM is just a store and we have no GC worries. Do we already have a bug on file for that?
That's what this bug does for JM (the object is in the JITScript, not the JSScript), though only when the final shape is predictable.
(In reply to comment #28)
> Nice answer. Using the property tree would have been a sidetrack.

Yeah, although the layering is a bit much (but that is other bugs). Also shapes not being GC-things is hurting.

Indeed Bill is doing pretty much what Brian's latest patch here does, over in bug 606631 -- just keep track of the object that ended up being created.

/be
(In reply to comment #28)
> Ultimately we want NEWINIT to create the object with the correct *final* shape
> and the slots pre-allocated, right? The compiler can just stick a JSObject with
> the appropriate shape in the JSScript. Then each INITPROP/INITELEM is just a
> store and we have no GC worries. Do we already have a bug on file for that?

Bug 577359 (I need to finish reviewing -- bhackett, want to address comments so far and rebase?).

/be
(In reply to comment #31)
> Bug 577359 (I need to finish reviewing -- bhackett, want to address comments so
> far and rebase?).
> 
> /be

I'll rebase bug 577359 after this one lands.  I might cut that bug down to be simpler now that it isn't perf critical for any benchmarks --- still construct the parse nodes (special casing pn_count on the node type is nasty and error prone given the current pn_blah design), and emit a new bytecode for initializers that run once and are composed entirely of constants.
(In reply to comment #29)
> That's what this bug does for JM (the object is in the JITScript, not the
> JSScript), though only when the final shape is predictable.

In that case we are doing different things in interpreter/tracer/methodjit, which is a red flag. What happens if we're on trace and bail out to methodjit code in the middle of an object initializer? In that case, the object created by NEWINIT on trace won't have the right shape. It might not have enough slots, so this might crash.

Even if that's not a bug, we should probably have a test for it.

If bug 577359 fixes the inconsistency, maybe we can just land them both at once...
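A hypothetical sketch of such a test (the trigger for the side exit and all names are assumptions):

function mixed(i) {
  // Switches from int to string partway through the loop, so a recorded trace
  // can side-exit in the middle of the array initializer below.
  return i < 90 ? i : "s" + i;
}
function build(i) {
  return [mixed(i), mixed(i), mixed(i)];
}
for (var i = 0; i < 100; i++)
  build(i);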
Attachment #488406 - Flags: review?(dvander)
(In reply to comment #33)
> (In reply to comment #29)
> > That's what this bug does for JM (the object is in the JITScript, not the
> > JSScript), though only when the final shape is predictable.
> 
> In that case we are doing different things in interpreter/tracer/methodjit,
> which is a red flag. What happens if we're on trace and bail out to methodjit
> code in the middle of an object initializer? In that case, the object created
> by NEWINIT on trace won't have the right shape. It might not have enough slots,
> so this might crash.
> 
> Even if that's not a bug, we should probably have a test for it.
> 
> If bug 577359 fixes the inconsistency, maybe we can just land them both at
> once...

This is right, good catch!  I'll put together a patch modifying the emitter with new bytecodes for initializers with pregenerated shapes.
Attached patch jsemit patch (obsolete) — Splinter Review
This moves all logic about whether to optimize initializers into the emitter and bytecode (the JIT changes are smaller now).  NEWINIT is still there for the unoptimized case.  Alternatively, a NEWARRAY or NEWOBJECT bytecode can be emitted, which indicates the final length or shape, respectively (this NEWARRAY is a new opcode; the old one is gone, as bug 602262 won't be needing it).  These all have the same INITELEM/INITPROP contents and end with an ENDINIT.  The tracer's behavior on INITPROP is unchanged, but since this is now a SETPROP rather than an ADDPROP it goes much faster (the tracer stubs ADDPROPs).  I get these times:

function foo() {
  for (var i = 0; i < 500000; i++) {
    var a = {a:0,b:1,c:2,d:3};
  }
}
foo();

before -j: 84ms
after -j: 55ms
before -m: 103ms
after -m: 56ms
Attachment #488406 - Attachment is obsolete: true
Attachment #488653 - Flags: review?(jorendorff)
Attachment #488653 - Flags: review?(dvander)
INITPROP/ELEM must define, not set (__proto__ is the only exception), in order to avoid running a prototype setter. This is an important defense against attacks, codified in ES5.

/be
(In reply to comment #36)
> INITPROP/ELEM must define not set (unless __proto__, it's the only exception)
> in order to avoid running a prototype setter. This is an important defense
> against attacks, codified in ES5.
> 
> /be

The shape is pregenerated at the preceding NEWOBJECT bytecode, so the object already has the property and won't run into a setter on either the prototype or the object itself (initializers with getter/setter are deoptimized).
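A sketch of the define-vs-set distinction (my own example; it uses the long-standing __defineSetter__ extension):

var hit = false;
Object.prototype.__defineSetter__("p", function (v) { hit = true; });
var o = { p: 1 };                 // defines an own data property; the prototype setter is not run
assertEq(hit, false);
var q = { __proto__: null };      // __proto__ is the one exception: it sets the prototype
assertEq(Object.getPrototypeOf(q), null);
delete Object.prototype.p;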
Attached patch debug mode patchSplinter Review
This always deoptimizes if script->debugMode, so the debugger can do whatever it wants as the object is constructed without breaking things.
Attachment #488653 - Attachment is obsolete: true
Attachment #488656 - Flags: review?(jorendorff)
Attachment #488656 - Flags: review?(dvander)
Attachment #488653 - Flags: review?(jorendorff)
Attachment #488653 - Flags: review?(dvander)
Re comments 12 and 13: bug 602262 has been WONTFIX'd, so feel free to nix JSOP_NEWARRAY.
Brian, the debugger can't inspect the operand stack, so I don't think it can get at this object until we're done initializing it. (We used to expose a JSObjectHook that the debugger could use to get notified of the new object before it was populated, but that's gone--yay bug 578159.)

Given that, is there a need to deoptimize in debugMode?
In jit-test/tests/jaeger/bug563000/eif-trap-typechange.js:
> function caller(obj) {
>   assertJit();
>   var x = ({ dana : "zuul" });
>   return x;
> }
>-trap(caller, 23, "x = 'success'; nop()");
>+trap(caller, 21, "x = 'success'; nop()");
> assertEq(caller(this), "success");

Instead of hard-coding the pc of "return x", this could say:

  // 0 is the pc of "assertJit()", we want the pc of "return x", +2 lines below.
  var pc = line2pc(pc2line(caller, 0) + 2);
  trap(caller, pc, ...);

> const char *
> js_AtomToPrintableString(JSContext *cx, JSAtom *atom)
> {
>+    if (!atom)
>+        return "(unknown)";
>     return js_ValueToPrintableString(cx, StringValue(ATOM_TO_STRING(atom)));
> }

What's this for?

If it's necessary, change js_DumpAtom in jsobj.cpp too. If it's just for debugging, change js_DumpAtom instead and revert this. :)
> > const char *
> > js_AtomToPrintableString(JSContext *cx, JSAtom *atom)
> > {
> >+    if (!atom)
> >+        return "(unknown)";
> >     return js_ValueToPrintableString(cx, StringValue(ATOM_TO_STRING(atom)));
> > }
> 
> What's this for?
> 
> If it's necessary, change js_DumpAtom in jsobj.cpp too. If it's just for
> debugging, change js_DumpAtom instead and revert this. :)

IIRC this was something I stuck in because using TMFLAGS=full was crashing while printing stuff about functions with no name atom.  I can try to repro and, if that works, fix it upstream.  Either way I'll revert this.
Comment on attachment 488656 [details] [diff] [review]
debug mode patch

In jsobj.cpp:
> JSObject* FASTCALL
>-js_InitializerObject(JSContext* cx, int32 count)
>-{
>-    gc::FinalizeKind kind = GuessObjectGCKind(count, false);
>-    return NewBuiltinClassInstance(cx, &js_ObjectClass, kind);
>-}
>-
>-JS_DEFINE_CALLINFO_2(extern, OBJECT, js_InitializerObject, CONTEXT, INT32, 0,
>-                     nanojit::ACCSET_STORE_ANY)
>+js_InitializerObject(JSContext* cx, JSObject *proto, JSObject *baseobj)
>+{
>+    if (!baseobj) {
>+        gc::FinalizeKind kind = GuessObjectGCKind(0, false);
>+        return NewObjectWithClassProto(cx, &js_ObjectClass, proto, kind);
>+    }
>+
>+    return CopyInitializerObject(cx, baseobj);
>+}
>+
>+JS_DEFINE_CALLINFO_3(extern, OBJECT, js_InitializerObject, CONTEXT, OBJECT, OBJECT,
>+                     0, nanojit::ACCSET_STORE_ANY)

Since we always know in TraceRecorder which behavior we want, it seems like this should be two separate functions. Leaving conditional branches for runtime just goes against the grain at this point. ;)

I'm really sorry I haven't finished this review yet. Still plugging away. Lots of threads going, I'm afraid.
Blocks: 577359
(In reply to comment #42)
> > > const char *
> > > js_AtomToPrintableString(JSContext *cx, JSAtom *atom)
> > > {
> > >+    if (!atom)
> > >+        return "(unknown)";
> > >     return js_ValueToPrintableString(cx, StringValue(ATOM_TO_STRING(atom)));
> > > }
> > 
> > What's this for?
> > 
> > If it's necessary, change js_DumpAtom in jsobj.cpp too. If it's just for
> > debugging, change js_DumpAtom instead and revert this. :)
> 
> IIRC this was something I stuck in because using TMFLAGS=full was crashing
> while printing stuff about functions with no name atom.  I can try to repro, if
> that works fix it upstream.  Either way I'll revert this.

This was bug 610582, which I probably should have filed instead of sticking in a workaround.
Comment on attachment 488656 [details] [diff] [review]
debug mode patch

Nice work.

In jsemit.cpp, js_EmitTree, it looks like "case TOK_ARRAYCOMP" no longer has
much in common with "case TOK_RB". Consider making it a separate chunk of code.

>           if (pn2->pn_type == TOK_COMMA && pn2->pn_arity == PN_NULLARY) {
>               if (js_Emit1(cx, cg, JSOP_HOLE) < 0)
>                   return JS_FALSE;

I think the change you're making means never having to emit JSOP_HOLE. Instead,
Elisions in ArrayLiterals become no-ops. If so, please delete it! (This is how things were before JSOP_NEWARRAY, so you can probably just resurrect the old decompiler code from hg history.)
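A minimal sketch of the Elision case referred to above (my own example):

// With elisions as no-ops, the hole is simply never written: it is not an own
// element, but it still counts toward the length.
var a = [1, , 3];
assertEq(1 in a, false);
assertEq(a.length, 3);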

In jsopcode.tbl:
>+OPDEF(JSOP_INITPROP,  93,  ...  JOF_ATOM|JOF_PROP|JOF_SET|JOF_DETECTING)
>+OPDEF(JSOP_INITELEM,  94,  ...  JOF_BYTE |JOF_ELEM|JOF_SET|JOF_DETECTING)

While you're in here, please delete the extra space after JOF_BYTE.

The entry for JSOP_NEWINIT shows that it only takes the one UINT8 immediate
operand, but is 3 bytes long. A one-line comment explaining why this is
necessary might be nice, at your discretion.

We already share dictionary-mode shapes in one case (Call objects), so if we
wanted to share shapes for large object initializers, we could. Just a thought;
I don't think it's even worth a follow-up bug ...unless it would simplify
things (or that path eventually becomes hot in some JS program of interest).

Again, I didn't look at the Jaeger side of things.
Attachment #488656 - Flags: review?(jorendorff) → review+
Comment on attachment 488656 [details] [diff] [review]
debug mode patch

Nice patch, just some nits on the method jit changes:

>+        fe->initializerArray = false;
>+        fe->initializerObject = NULL;
>+    } else {
>+        fe->initializerArray = (*PC == JSOP_NEWARRAY);
>+        fe->initializerObject = baseobj;
>+    }

Make these private, and introduce a FrameState::pushInitializer(reg, op, obj). FS can sniff for debug mode too. ::Compiler should use getters to access.

>+    /*
>+     * The initialized index is always a constant, but we won't remember which
...
>+    if (!id->isConstant() || !obj->initializerArray) {

Should have test cases for these two conditions (sorry if there is one and I missed it).
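For the second condition (INITELEM on something that is not an array initializer), a sketch of a source form that exercises it, per the earlier note that INITELEM is also used inside object initializers; the non-constant-index condition is compiler-internal state and is not illustrated here:

// Numeric keys in an object literal go through INITELEM on a plain object.
var obj = { 0: "a", 1: "b", length: 2 };
assertEq(Array.prototype.join.call(obj, ","), "a,b");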
Attachment #488656 - Flags: review?(dvander) → review+
Blocks: 612930
Whiteboard: fixed-in-tracemonkey
This appears to have regressed TM-only on AWFY, while both JM and JM+TM improved. Was that expected? Comment #7 appeared to suggest that TM would benefit too.
Yes, this was expected.  Both JITs improved, but the interpreter slowed down.  The old NEWARRAY opcode which this patch removed was an important interpreter optimization, and TM-only (which is really TM+interpreter) spends all its time in the interpreter on v8-splay, which makes lots of arrays.
http://hg.mozilla.org/mozilla-central/rev/32aa5d70f490
Status: NEW → RESOLVED
Closed: 14 years ago
Resolution: --- → FIXED
Depends on: 618350