Closed Bug 1480524

Convert NewObject IC to CacheIR


(Core :: JavaScript Engine: JIT, defect)




Tracking Status
firefox63 --- fixed


(Reporter: mgaudet, Assigned: mgaudet)


(Blocks 2 open bugs)



(3 files, 4 obsolete files)

Convert the NewObject code to use CacheIR
Attachment #8997142 - Flags: review?(jdemooij)
I'm open to some suggestions about better lifetime management for the template
object, especially if we can avoid modifying the IonIC trace code!
Attachment #8997143 - Flags: review?(jdemooij)
Attachment #8997144 - Flags: review?(jdemooij)
The first patch also raises a question about the allocation metadata guard: the shared IC uses a runtime guard, but my reading of the setter suggests that we may be able to get away without it, so long as we don't attach an IC while one exists, since we will toss the JIT code when we set the guard.
Compacting GC can move GC things so we risk subtle bugs if we bake in GC pointers without updating them (for example when a JSObject address is reused for a different object after a compacting GC we could unexpectedly reuse the stub code because the CacheIR will be identical).

One option is to store a counter (as uint64_t) in JitRuntime (something like [0]), increment that, and bake that in, to force different CacheIR for each stub. Another option is to special case by checking |kind| here [1].


Flags: needinfo?(mgaudet)
The IonBuilder fast path giving up when the template object has dynamic slots doesn't make sense to me.

If I remove that check, jit-tests still pass (except for recover-objects.js but that's caused by the test I think). Looking at the history of that code, Hannes added it but it's unclear why.

Would you mind checking if removing that condition helps performance without having the NewObject shared stub in Ion? The NewObject Baseline IC is special enough (the unshared code part) that I'd still prefer not having a CacheIR/Ion IC for this if possible.
Depends on: 1481556
So I ran the experiment suggested (disable the Ion stubs, but also allow dynamic slotted template objects to hit the fast path in Ion). The results on perfherder are here: 

In summary: no high-confidence regressions, though "tp6_youtube opt 1_thread e10s stylo" on linux64-qr may show a 5% regression; it still isn't clear after 20 runs.

I'll update the patches on this bug to only have Baseline support (using stub fields to ensure correct GC behaviour, and a counter to disambiguate baseline stubs).
Flags: needinfo?(mgaudet)
Attachment #8997144 - Attachment is obsolete: true
Attachment #8997144 - Flags: review?(jdemooij)
Attachment #8997143 - Attachment is obsolete: true
Attachment #8997143 - Flags: review?(jdemooij)
Attachment #8997142 - Attachment is obsolete: true
Attachment #8997142 - Flags: review?(jdemooij)
This instead uses a counter to disambiguate caches, and uses the pre-existing
stub field machinery to hold the JSObject*.
Attachment #8998356 - Flags: review?(jdemooij)
This ends up converting recover-object-bug1175233.js into a permafail (as you
suggested earlier). 

I'll have to investigate why tomorrow, but in the meantime I'm wondering if
we'll be able to land this in light of the performance testing in Comment 7.
Attachment #8998357 - Flags: feedback?(jdemooij)
Whoops: I meant to say recover-objects.js (recover-object-bug1175233.js is the intermittent classification it gets)
The failing line is this one [1], asserting the object is not recovered on bailout. 

After enabling the fast-path code in Ion in Part 2 (Attachment #8998357 [details] [diff]) this appears to no longer be true. Can I just flip the sense of the assertion?
Comment on attachment 8998357 [details] [diff] [review]
[Part 2] Remove Ion NewObject stub, and remove limitation on template object dynamic slots for fast path

Review of attachment 8998357 [details] [diff] [review]:

\o/ We can keep an eye on Talos/AWFY but I think this is fine.

As for the test, I agree we can just change it from false => true. r=me on that too.
Attachment #8998357 - Flags: review+
Attachment #8998357 - Flags: feedback?(jdemooij)
Attachment #8998357 - Flags: feedback+
Comment on attachment 8998356 [details] [diff] [review]
[Part 1] Use CacheIR version of NewObject for Baseline

Review of attachment 8998356 [details] [diff] [review]:

Nice, r=me with comments addressed

::: js/src/jit/CacheIR.cpp
@@ +5450,5 @@
> +
> +bool
> +NewObjectIRGenerator::tryAttachStub()
> +{
> +    if (!templateObject_) {

With the other comment addressed, we can remove this check.

@@ +5470,5 @@
> +        trackAttached(IRGenerator::NotAttached);
> +        return false;
> +    }
> +    // Stub doesn't support metadata builder
> +    // MG:XXX: But it does guard, so technically the stub would

I think it makes sense to remove this XXX comment and keep it like this.

@@ +5479,5 @@
> +    //         be able to get away without the runtime guard,
> +    //         so long as we don't attach stubs when the metadata
> +    //         builder is attached.
> +    if (cx_->realm()->hasAllocationMetadataBuilder())
> +        return false;

Should there be a trackAttached(IRGenerator::NotAttached); call here?

::: js/src/jit/CacheIR.h
@@ +1256,5 @@
> +        addStubField(uintptr_t(templateObj), StubField::Type::JSObject);
> +        // Bake in a monotonically increasing number to ensure we differentiate
> +        // between different baseline stubs that otherwise might share
> +        // stub code.
> +        writeUint32Immediate(cx_->runtime()->jitRuntime()->nextDisambiguationId());

We should write the whole uint64_t. There's no writeUint64Immediate; one option is to do something like:

uint64_t id = ..;
writeUint32Immediate(id & UINT32_MAX);
writeUint32Immediate(id >> 32);

::: js/src/jit/CacheIRCompiler.h
@@ -710,5 @@
>      void emitLoadStubField(StubFieldOffset val, Register dest);
>      void emitLoadStubFieldConstant(StubFieldOffset val, Register dest);
>      uintptr_t readStubWord(uint32_t offset, StubField::Type type) {
> -        MOZ_ASSERT(stubFieldPolicy_ == StubFieldPolicy::Constant);

I'd prefer keeping this assertion and instead inlining the code below in objectStubFieldUnchecked.

It's still a bit confusing to use the "ForIon" name because this is Baseline, but it's also not the end of the world - it is a very unusual case for Baseline anyway.

::: js/src/jit/JitRealm.h
@@ +333,5 @@
>      void ionLazyLinkListRemove(JSRuntime* rt, js::jit::IonBuilder* builder);
>      void ionLazyLinkListAdd(JSRuntime* rt, js::jit::IonBuilder* builder);
> +
> +    uint32_t nextDisambiguationId() {
> +        return disambiguationId_++; 

Nit: remove trailing whitespace

::: js/src/jit/SharedIC.cpp
@@ +1829,5 @@
>          if (obj && !obj->isSingleton() &&
>              !obj->group()->maybePreliminaryObjectsDontCheckGeneration())
>          {
> +            templateObject = NewObjectOperation(cx, script, pc, TenuredObject);

Nit: convention is to OOM-check immediately after the call instead of in the else-branch below:

if (!templateObject)
    return false;
Attachment #8998356 - Flags: review?(jdemooij) → review+
Attachment #8998358 - Attachment is obsolete: true
Comment on attachment 8999746 [details] [diff] [review]
[Part 3] Move ICNewObject_Fallback out of SharedIC and remove non-CacheIR code generation

Review of attachment 8999746 [details] [diff] [review]:

::: js/src/jit/BaselineIC.cpp
@@ +5200,5 @@
> +
> +    RootedObject obj(cx);
> +
> +    RootedObject templateObject(cx, stub->templateObject());
> +    if (templateObject) {

So we only attach an optimized stub if there's no template object yet? What if GC discards optimized stubs? I think we will then keep the template object and never attach an IC stub again.

If this matches the old behavior, would you mind filing a follow-up bug?
Attachment #8999746 - Flags: review?(jdemooij) → review+
Blocks: 1483345
Pushed by
[Part 1] Use CacheIR version of NewObject for Baseline r=jandem
[Part 2] Remove Ion NewObject stub, and remove limitation on template object dynamic slots for fast path r=jandem
[Part 3] Move ICNewObject_Fallback out of SharedIC and remove non-CacheIR code generation r=jandem