Closed Bug 1141986 Opened 5 years ago Closed 4 years ago

Implement Atomics.exchange

Categories: Core :: JavaScript Engine, defect

Status: RESOLVED FIXED in mozilla42
Tracking Status: firefox42 --- fixed

People: (Reporter: lth, Assigned: lth)

References: (Blocks 1 open bug)

Keywords: dev-doc-complete, Whiteboard: [DocArea=JS]

Attachments: (5 files)

The function Atomics.exchange() is new in the Atomics spec and must be implemented with JIT support on x86, x64, and ARM-32, for plain JS and asm.js.  The asm.js bits will also require support in the validator.
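For context, a minimal sketch of the operation being added (illustrative only, not part of the patch): Atomics.exchange atomically stores a new value into an element of a shared integer array and returns the element's previous value.

```javascript
// Atomics.exchange(typedArray, index, value) performs an atomic
// read-modify-write: it stores `value` and returns the old value.
const sab = new SharedArrayBuffer(16);
const i32 = new Int32Array(sab);
i32[0] = 37;

// Store 42 at index 0 and get the old value back in one atomic step.
const old = Atomics.exchange(i32, 0, 42);
// old is 37; i32[0] is now 42.
```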
Keywords: dev-doc-needed
Whiteboard: [DocArea=JS]
Attached patch: Complete feature
Interpreter, Ion, Odin, test cases.  I will divide this into suitable patches and ask for review once float32 and float64 atomics have landed.  (This patch sits on top of that patch set, as well as on top of others.)
Assignee: nobody → lhansen
Status: NEW → ASSIGNED
Jukka, do you have a sense for whether Atomics.exchange() would be useful?  (Initially for integer data.)  E.g., do you have workarounds in place for it now? Do you see the operation used much?  It was requested during the design discussion, but probably in part because C++ provides it.
Flags: needinfo?(jujjyl)
It would be quite useful! Unity3D uses it in the core thread task queue to implement a lock-free linked list, and another partner codebase does as well, although that codebase uses a manual CAS loop to emulate the exchange for compatibility; it could use a direct exchange() call if one were available. Unity3D uses a 64-bit exchange operation. We are tracking this in Emscripten as https://github.com/kripken/emscripten/issues/3498 . Btw, see also this GitHub label, https://github.com/kripken/emscripten/issues?q=is%3Aopen+is%3Aissue+label%3Amultithreading , which tracks various currently known multithreading todos.
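The manual CAS-loop workaround mentioned above can be sketched like this (a hedged illustration; `exchangeViaCas` is a hypothetical helper name, not taken from any partner codebase):

```javascript
// Emulate Atomics.exchange() using only Atomics.compareExchange(),
// as a codebase might do for compatibility when exchange() is missing.
function exchangeViaCas(ta, index, value) {
  while (true) {
    const old = Atomics.load(ta, index);
    // Succeed only if no other thread changed the cell between the load
    // and the compareExchange; otherwise retry with the fresh value.
    if (Atomics.compareExchange(ta, index, old, value) === old)
      return old;
  }
}

const ta = new Int32Array(new SharedArrayBuffer(8));
ta[0] = 9;
const prev = exchangeViaCas(ta, 0, 4);  // prev is 9, ta[0] is now 4
```

A direct exchange() avoids the retry loop entirely, which is why a single hardware-backed primitive is preferable.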
Flags: needinfo?(jujjyl)
Luke, any objections in principle to this feature?  If not I'll extract the pertinent integer-only code from the patch above and get it ready for review.
Flags: needinfo?(luke)
Sounds good!  (sorry for delay)
Flags: needinfo?(luke)
Comment on attachment 8628136 [details] [diff] [review]
Atomics.exchange, interp + ion + non-asm test cases

Not completely trivial, since it has both the Ion code and the back-end code for all platforms; LMK if this is too much and I can split it.
Attachment #8628136 - Flags: review?(hv1989)
Comment on attachment 8628138 [details] [diff] [review]
Atomics.exchange, odin + asm.js test cases

(Trying bbouvier since Luke is away.)

As for the other patch, this can be split if so desired.
Attachment #8628138 - Flags: review?(benj)
Comment on attachment 8628136 [details] [diff] [review]
Atomics.exchange, interp + ion + non-asm test cases

Review of attachment 8628136 [details] [diff] [review]:
-----------------------------------------------------------------

@SStangl: I reviewed everything, though I'm not too familiar with ARM. As far as I understand it is correct, but can you have a second look at "MacroAssembler-arm.cpp" and "MacroAssembler-arm.h" and confirm this does what is expected on ARM?

::: js/src/builtin/AtomicsObject.cpp
@@ +287,5 @@
> +
> +template<XchgStoreOp op>
> +static int32_t
> +do_exchangeOrStore(Scalar::Type viewType, int32_t numberValue, void* viewData, uint32_t offset,
> +                   bool* badArrayType)

Hmmm. The SM style is for static functions to be capitalized CamelCase:
DoExchangeOrStore

(unless I'm overlooking some style rule?)

While you are at it, can you also CamelCase these:
atomics_fence_impl -> AtomicsFenceImpl
do_cmpxchg -> DoCmpxchg
atomics_binop_impl -> AtomicsBinopImpl

@@ +295,5 @@
> +    if (op == DoStore) \
> +        jit::AtomicOperations::storeSeqCst(ptr, value); \
> +    else \
> +        value = jit::AtomicOperations::exchangeSeqCst(ptr, value); \
> +  JS_END_MACRO

- I think this should be aligned on 4 spaces.
- Please let all "\" end on the same column by adding spaces before.

@@ +301,5 @@
> +    switch (viewType) {
> +      case Scalar::Int8: {
> +          int8_t value = (int8_t)numberValue;
> +          INT_OP((int8_t*)viewData + offset, value);
> +          return value;

Align the case labels at 2 spaces, so that there is a 4-space difference between the switch and the content of the switch. Same for all the other cases under this one:

::: js/src/jit-test/tests/atomics/basic-tests.js
@@ +51,5 @@
>  
> + 	// val = 9
> +	assertEq(Atomics.exchange(a, x, 4), 9);
> +	// val = 4
> +	assertEq(Atomics.exchange(a, x, 9), 4);

Would it be possible to add cases that test numbers on the edge of the Scalar size, e.g. where we can see that the coercion to int8/uint8/... works?

::: js/src/jit/CodeGenerator.cpp
@@ +9287,5 @@
> +    Register elements = ToRegister(lir->elements());
> +    AnyRegister output = ToAnyRegister(lir->output());
> +    Register temp = lir->temp()->isBogusTemp() ? InvalidReg : ToRegister(lir->temp());
> +
> +    MOZ_ASSERT(lir->value()->isRegister());

This assert can be removed; ToRegister already checks this.

::: js/src/jit/MCallOptimize.cpp
@@ +2822,5 @@
> +    if (!atomicsMeetsPreconditions(callInfo, &arrayType))
> +        return InliningStatus_NotInlined;
> +
> +    MDefinition* value = callInfo.getArg(2);
> +    if (!(value->type() == MIRType_Int32 || value->type() == MIRType_Double))

if (!IsNumberType(value))

@@ +2835,5 @@
> +    MDefinition* toWrite = value;
> +    if (value->type() == MIRType_Double) {
> +        toWrite = MTruncateToInt32::New(alloc(), value);
> +        current->add(toWrite->toInstruction());
> +    }

This seems a job for TypePolicy. Can you implement TruncateToInt32Policy and remove this code?

::: js/src/jit/MIR.h
@@ +12971,5 @@
>  };
>  
> +class MAtomicExchangeTypedArrayElement
> +  : public MAryInstruction<3>,
> +    public Mix3Policy<ObjectPolicy<0>, IntPolicy<1>, IntPolicy<2>>::Data

replace IntPolicy<2> with TruncateToInt32<2> (that still needs to get added).

::: js/src/jit/x86-shared/MacroAssembler-x86-shared.h
@@ +781,5 @@
> +    void atomicExchange8SignExtend(const T& mem, Register value, Register output) {
> +        if (value != output)
> +            movl(value, output);
> +        xchgb(output, Operand(mem));
> +        movzbl(output, output);

s/movzbl/movsbl
(Make sure the extra tests I requested definitely fail here)
Attachment #8628136 - Flags: review?(sstangl)
Attachment #8628136 - Flags: review?(hv1989)
Attachment #8628136 - Flags: review+
(In reply to Hannes Verschore [:h4writer] from comment #11)
> Comment on attachment 8628136 [details] [diff] [review]
> Atomics.exchange, interp + ion + non-asm test cases
> 
> Review of attachment 8628136 [details] [diff] [review]:
> -----------------------------------------------------------------

Thanks!

One item of pushback from me:

> ::: js/src/jit/MCallOptimize.cpp
> @@ +2822,5 @@
> > +    if (!atomicsMeetsPreconditions(callInfo, &arrayType))
> > +        return InliningStatus_NotInlined;
> > +
> > +    MDefinition* value = callInfo.getArg(2);
> > +    if (!(value->type() == MIRType_Int32 || value->type() == MIRType_Double))
> 
> if (!IsNumberType(value))

Not quite - because that includes Float32 as well, and that is explicitly excluded here (a comment would have been helpful I guess).

But I will reconsider once I've added TruncateToInt32Policy.

> ::: js/src/jit/x86-shared/MacroAssembler-x86-shared.h
> @@ +781,5 @@
> > +    void atomicExchange8SignExtend(const T& mem, Register value, Register output) {
> > +        if (value != output)
> > +            movl(value, output);
> > +        xchgb(output, Operand(mem));
> > +        movzbl(output, output);
> 
> s/movzbl/movsbl
> (Make sure the extra tests I requested definitely fail here)

Lovely catch.  Thanks again.
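The requested edge-of-size tests would catch exactly this kind of mixup: with a zero-extension (movzbl) where a sign-extension (movsbl) is needed, exchanging a byte value >= 0x80 in an Int8Array would come back positive instead of negative. A sketch of such a test (assertEq is the SpiderMonkey shell helper, shimmed here for portability):

```javascript
// Edge-of-size tests: values whose top bit is set make the int8/uint8
// coercion and the sign-extension of the returned old value observable.
function assertEq(actual, expected) {
  if (actual !== expected)
    throw new Error("got " + actual + ", expected " + expected);
}

const i8 = new Int8Array(new SharedArrayBuffer(8));
Atomics.exchange(i8, 0, 0x80);               // stores -128 after int8 coercion
assertEq(Atomics.exchange(i8, 0, 0), -128);  // zero-extension would yield 128

const u8 = new Uint8Array(new SharedArrayBuffer(8));
Atomics.exchange(u8, 0, -1);                 // stores 255 after uint8 coercion
assertEq(Atomics.exchange(u8, 0, 0), 255);
```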
(In reply to Lars T Hansen [:lth] from comment #12)
> (In reply to Hannes Verschore [:h4writer] from comment #11)
> > Comment on attachment 8628136 [details] [diff] [review]
> > Atomics.exchange, interp + ion + non-asm test cases
> > 
> > Review of attachment 8628136 [details] [diff] [review]:
> > -----------------------------------------------------------------
> 
> Thanks!
> 
> One item of pushback from me:
> 
> > ::: js/src/jit/MCallOptimize.cpp
> > @@ +2822,5 @@
> > > +    if (!atomicsMeetsPreconditions(callInfo, &arrayType))
> > > +        return InliningStatus_NotInlined;
> > > +
> > > +    MDefinition* value = callInfo.getArg(2);
> > > +    if (!(value->type() == MIRType_Int32 || value->type() == MIRType_Double))
> > 
> > if (!IsNumberType(value))
> 
> Not quite - because that includes Float32 as well, and that is explicitly
> excluded here (a comment would have been helpful I guess).

Ok, in that case can you adjust it to:
if (value->type() != MIRType_Int32 && value->type() != MIRType_Double)

which reads slightly easier.
Comment on attachment 8628138 [details] [diff] [review]
Atomics.exchange, odin + asm.js test cases

Review of attachment 8628138 [details] [diff] [review]:
-----------------------------------------------------------------

Started my review, I'll finish it on Monday. A few remarks already, if you're willing to update the patch in the meanwhile ;) Cheers!

::: js/src/asmjs/AsmJSFrameIterator.cpp
@@ +689,5 @@
>  #if defined(JS_CODEGEN_ARM)
>        case AsmJSExit::Builtin_IDivMod:   return "software idivmod (in asm.js)";
>        case AsmJSExit::Builtin_UDivMod:   return "software uidivmod (in asm.js)";
>        case AsmJSExit::Builtin_AtomicCmpXchg:  return "Atomics.compareExchange (in asm.js)";
> +      case AsmJSExit::Builtin_AtomicXchg:  return "Atomics.exchange (in asm.js)";

nit: please align the |return|

::: js/src/asmjs/AsmJSValidate.cpp
@@ +5137,5 @@
> +
> +    if (!valueArgType.isIntish())
> +        return f.failf(arrayArg, "%s is not a subtype of intish", valueArgType.toChars());
> +
> +    PrepareArrayIndex(f, &pointerDef, needsBoundsCheck, mask);

Can you put this call right after the CheckSharedArrayAtomicAccess call, please?

@@ +5140,5 @@
> +
> +    PrepareArrayIndex(f, &pointerDef, needsBoundsCheck, mask);
> +
> +    *def = f.atomicExchangeHeap(viewType, pointerDef, valueArgDef, needsBoundsCheck);
> +    *type = Type::Intish;

I thought the intent was that atomic operations would return int or signed, as they throw on OOB? Or isn't it the case right now and you're planning to implement this later?

::: js/src/builtin/AtomicsObject.cpp
@@ +685,5 @@
> +{
> +    void* heap;
> +    size_t heapLength;
> +    GetCurrentAsmJSHeap(&heap, &heapLength);
> +    if ((size_t)offset >= heapLength)

nit: C++ style coercion, please

size_t(offset)

::: js/src/jit-test/tests/asm.js/testAtomics.js
@@ +11,5 @@
>      var atomic_fence = stdlib.Atomics.fence;
>      var atomic_load = stdlib.Atomics.load;
>      var atomic_store = stdlib.Atomics.store;
>      var atomic_cmpxchg = stdlib.Atomics.compareExchange;
> +    var atomic_exchange = stdlib.Atomics.exchange;

Can you add tests that the value parameter can be intish, please? (For instance, let the value be i + i, where i is an int32 param.) And other tests checking that the result value is intish? (For instance, that it can't be returned directly without a |0 coercion.)
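The shape of the requested intish tests might look roughly like this (a hypothetical module for illustration; engines that don't validate asm.js atomics will simply run it as plain JS, so the asm.js type claims here are an assumption, not verified output):

```javascript
// Sketch: the value argument is intish (i + i), and the intish result
// must be coerced with |0 before being returned from the function.
function Module(stdlib, ffi, heap) {
  "use asm";
  var atomic_exchange = stdlib.Atomics.exchange;
  var i32a = new stdlib.Int32Array(heap);
  function f(i) {
    i = i | 0;
    // i + i is intish; passing it directly tests that intish is accepted.
    return atomic_exchange(i32a, 0, i + i) | 0;
  }
  return f;
}

const f = Module(globalThis, {}, new SharedArrayBuffer(0x10000));
// f(3) stores 6 and returns the initial 0; f(1) then returns 6.
```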
Comment on attachment 8628138 [details] [diff] [review]
Atomics.exchange, odin + asm.js test cases

Review of attachment 8628138 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good, according to what I've seen by comparing with the compareExchange code. Could benefit from a few refactorings in all the CodeGen files (can be done as a follow-up). r=me with a few more test cases.

::: js/src/builtin/AtomicsObject.cpp
@@ +685,5 @@
> +{
> +    void* heap;
> +    size_t heapLength;
> +    GetCurrentAsmJSHeap(&heap, &heapLength);
> +    if ((size_t)offset >= heapLength)

Also, feel free to fix all the other instances of that issue in this file; in addition, the few "if (something) return 0;" statements on one line should be split over two lines.

@@ +690,5 @@
> +        return 0;
> +    bool badType = false;
> +    switch (Scalar::Type(vt)) {
> +      case Scalar::Int8:
> +        return do_exchangeOrStore<DoExchange>(Scalar::Int8, value, heap, offset, &badType);

Maybe the API of do_exchangeOrStore should be changed so that the bool argument is optional? badType isn't read in this function. Or you can pass nullptr in all cases (there won't be a deref in do_eOS because we've checked the scalar type here).

Otherwise, I think all these returns should be changed to:

int32_t result = do_eOS<DoExchange>(...); break;

And outside the switch:

MOZ_ASSERT(!badType);
return result;

@@ +696,5 @@
> +        return do_exchangeOrStore<DoExchange>(Scalar::Uint8, value, heap, offset, &badType);
> +      case Scalar::Int16:
> +        return do_exchangeOrStore<DoExchange>(Scalar::Int16, value, heap, offset>>1, &badType);
> +      case Scalar::Uint16:
> +        return do_exchangeOrStore<DoExchange>(Scalar::Uint16, value, heap, offset>>1, &badType);

Please double-check my understanding: we don't need the Scalar::Int32 / Scalar::Uint32 because those are word-sized and thus implemented on all tier1 platforms?

::: js/src/jit/arm/CodeGenerator-arm.cpp
@@ +1985,5 @@
> +    Register value = ToRegister(ins->value());
> +
> +    Label rejoin;
> +    uint32_t maybeCmpOffset = 0;
> +    if (mir->needsBoundsCheck()) {

Looks like the ARM backend could also benefit from a small refactoring :) (see comment regarding the x86 and the x64 ones)

::: js/src/jit/arm/LIR-arm.h
@@ +490,5 @@
>          return mir_->toAsmJSCompareExchangeHeap();
>      }
>  };
>  
> +class LAsmJSAtomicExchangeCallout : public LInstructionHelper<1, 2, 0>

As this is calling into C++, you probably need to use LCallInstructionHelper here (otherwise regalloc won't spill registers around this LInstruction)

::: js/src/jit/arm/Lowering-arm.cpp
@@ +703,5 @@
>  void
> +LIRGeneratorARM::visitAsmJSAtomicExchangeHeap(MAsmJSAtomicExchangeHeap* ins)
> +{
> +    MOZ_ASSERT(ins->ptr()->type() == MIRType_Int32);
> +    MOZ_ASSERT(byteSize(ins->accessType()) <= 4);

For more consistency with other backends, can you check that ins->accessType() <= Scalar::Uint32 here? (or change the assertions in the x86/x64 codegen functions, as you prefer)

::: js/src/jit/none/Lowering-none.h
@@ +76,5 @@
>      void visitAtomicTypedArrayElementBinop(MAtomicTypedArrayElementBinop* ins) { MOZ_CRASH(); }
>      void visitCompareExchangeTypedArrayElement(MCompareExchangeTypedArrayElement* ins) { MOZ_CRASH(); }
>      void visitAtomicExchangeTypedArrayElement(MAtomicExchangeTypedArrayElement* ins) { MOZ_CRASH(); }
>      void visitAsmJSCompareExchangeHeap(MAsmJSCompareExchangeHeap* ins) { MOZ_CRASH(); }
> +    void visitAsmJSAtomicExchangeHeap(MAsmJSAtomicExchangeHeap* ins) { MOZ_CRASH(); }

Can you add the same stub for MIPS, please?

::: js/src/jit/x64/CodeGenerator-x64.cpp
@@ +647,5 @@
> +    if (rejoin.used())
> +        masm.bind(&rejoin);
> +    MOZ_ASSERT(mir->offset() == 0,
> +               "The AsmJS signal handler doesn't yet support emulating "
> +               "atomic accesses in the case of a fault from an unwrapped offset");

I would put this assertion closer to where mir->offset() is used (that is, at the declaration of srcAddr), but that's personal taste :)

::: js/src/jit/x86/CodeGenerator-x86.cpp
@@ +730,5 @@
> +    Register value = ToRegister(ins->value());
> +    Register addrTemp = ToRegister(ins->addrTemp());
> +    Label rejoin;
> +
> +    asmJSAtomicComputeAddress(addrTemp, ptrReg, mir->needsBoundsCheck(), mir->offset(),

I was looking at visitAsmJSAtomicCompareExchangeHeap as the reference implementation, and I saw that asmJSAtomicComputeAddress could be used in visitAsmJSAtomicCompareExchangeHeap, probably? Not a matter of this patch, but probably a nice refactoring follow-up.

Actually, with such a function in the x64 backend, I guess most of the code there could be refactored as well to avoid redundancy?
Attachment #8628138 - Flags: review?(benj) → review+
Quite a small patch pertaining to the requested type policy, but I figure it's good to get a review on this - My First Type Policy and all.

(This sits on top of the base patch you reviewed previously, with adjustments.)
Attachment #8630384 - Flags: review?(hv1989)
(In reply to Benjamin Bouvier [:bbouvier] from comment #14)
> Comment on attachment 8628138 [details] [diff] [review]
> Atomics.exchange, odin + asm.js test cases
> 
> Review of attachment 8628138 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> @@ +5140,5 @@
> > +
> > +    PrepareArrayIndex(f, &pointerDef, needsBoundsCheck, mask);
> > +
> > +    *def = f.atomicExchangeHeap(viewType, pointerDef, valueArgDef, needsBoundsCheck);
> > +    *type = Type::Intish;
> 
> I thought the intent was that atomic operations would return int or signed,
> as they throw on OOB? Or isn't it the case right now and you're planning to
> implement this later?

It isn't the case right now and I plan to implement it later.  Bug 1178793.
(In reply to Benjamin Bouvier [:bbouvier] from comment #15)
> Comment on attachment 8628138 [details] [diff] [review]
> Atomics.exchange, odin + asm.js test cases
> 
> Review of attachment 8628138 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Looks good, according to what I've seen by comparing with the
> compareExchange code. Could benefit from a few refactorings in all the
> CodeGen files (can be done as a follow-up). r=me with a few more test cases.

Yes, refactoring is becoming really desirable, but has been consciously put off until I change the Intish -> Int thing (bug 1178793) since that will rip out most bounds checking code.  (Ditto for the rest of your comments on refactoring.)

> @@ +690,5 @@
> > +        return 0;
> > +    bool badType = false;
> > +    switch (Scalar::Type(vt)) {
> > +      case Scalar::Int8:
> > +        return do_exchangeOrStore<DoExchange>(Scalar::Int8, value, heap, offset, &badType);
> 
> Maybe the API of do_exchangeOrStore should be changed so that the bool
> argument can be facultative? badType isn't read in this function. Or you can
> pass nullptr in all cases (there won't be a deref in do_eOS because we've
> checked the scalar type here).
> 
> Otherwise, I think all these return should be changed to:
> 
> int32_t result = do_eOS<DoExchange>(...); break;
> 
> And outside the switch:
> 
> MOZ_ASSERT(!badType);
> return result;

Fair point.  I'll probably prefer to pass a NULL pointer.

> @@ +696,5 @@
> > +        return do_exchangeOrStore<DoExchange>(Scalar::Uint8, value, heap, offset, &badType);
> > +      case Scalar::Int16:
> > +        return do_exchangeOrStore<DoExchange>(Scalar::Int16, value, heap, offset>>1, &badType);
> > +      case Scalar::Uint16:
> > +        return do_exchangeOrStore<DoExchange>(Scalar::Uint16, value, heap, offset>>1, &badType);
> 
> Please double-check my understanding: we don't need the Scalar::Int32 /
> Scalar::Uint32 because those are word-sized and thus implemented on all
> tier1 platforms?

That's right.  At the moment, these are strictly for calling out from ARMv6 code in the byte/halfword case.  They could also be used on MIPS, where there are no instructions for it.

> ::: js/src/jit/arm/LIR-arm.h
> @@ +490,5 @@
> >          return mir_->toAsmJSCompareExchangeHeap();
> >      }
> >  };
> >  
> > +class LAsmJSAtomicExchangeCallout : public LInstructionHelper<1, 2, 0>
> 
> As this is calling into C++, you probably need to use LCallInstructionHelper
> here (otherwise regalloc won't spill registers around this LInstruction)

OK, will investigate and look for similar issues elsewhere.
Attachment #8630460 - Flags: review?(sstangl)
Attachment #8630460 - Flags: review?(branislav.rankov)
Attachment #8630460 - Flags: review?(branislav.rankov) → review+
Attachment #8630460 - Flags: review?(sstangl) → review+
Comment on attachment 8630384 [details] [diff] [review]
Atomics.exchange, introduce TruncateToInt32Policy

Review of attachment 8630384 [details] [diff] [review]:
-----------------------------------------------------------------

Looks good. Thanks!

[1] Instead of removing any checks it might be better to test for the following:
if (arg->mightBeType(MIRType_Object) || arg->mightBeType(MIRType_Symbol))
   return InliningStatus_NotInlined;

These are the corner cases that MTruncateToInt32 doesn't support yet. They are also the most annoying cases that don't happen often. 
This code makes sure we don't introduce repeated bailouts.

::: js/src/jit/MCallOptimize.cpp
@@ -2772,5 @@
>          return InliningStatus_NotInlined;
>  
> -    MDefinition* oldval = callInfo.getArg(2);
> -    if (!(oldval->type() == MIRType_Int32 || oldval->type() == MIRType_Double))
> -        return InliningStatus_NotInlined;

[1]

@@ -2776,5 @@
> -        return InliningStatus_NotInlined;
> -
> -    MDefinition* newval = callInfo.getArg(3);
> -    if (!(newval->type() == MIRType_Int32 || newval->type() == MIRType_Double))
> -        return InliningStatus_NotInlined;

[1]

@@ -2823,5 @@
>          return InliningStatus_NotInlined;
>  
> -    MDefinition* value = callInfo.getArg(2);
> -    if (!(value->type() == MIRType_Int32 || value->type() == MIRType_Double))
> -        return InliningStatus_NotInlined;

[1]

@@ -2894,5 @@
>      if (!atomicsMeetsPreconditions(callInfo, &arrayType, DontCheckAtomicResult))
>          return InliningStatus_NotInlined;
>  
>      MDefinition* value = callInfo.getArg(2);
> -    if (!(value->type() == MIRType_Int32 || value->type() == MIRType_Double))

[1]

@@ -2958,5 @@
>          return InliningStatus_NotInlined;
>  
> -    MDefinition* value = callInfo.getArg(2);
> -    if (!(value->type() == MIRType_Int32 || value->type() == MIRType_Double))
> -        return InliningStatus_NotInlined;

[1]
Attachment #8630384 - Flags: review?(hv1989) → review+
(In reply to Hannes Verschore [:h4writer] from comment #20)
> Comment on attachment 8630384 [details] [diff] [review]
> Atomics.exchange, introduce TruncateToInt32Policy
> 
> Review of attachment 8630384 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Looks good. Thanks!
> 
> [1] Instead of removing any checks it might be better to test for the
> following:
> if (arg->mightBeType(MIRType_Object) || arg->mightBeType(MIRType_Symbol))
>    return InliningStatus_NotInlined;
> 
> These are the corner cases that MTruncateToInt32 doesn't support yet. They
> are also the most annoying cases that don't happen often. 
> This code makes sure we don't introduce repeated bailouts.

@h4writer:

Well, ho hum.  The net complexity of the code ends up being higher than it was before the introduction of the truncation operation, since there's a new policy /and/ a guard, and the guard is not semantics-oriented but implementation-oriented.

It's very unlikely that those kinds of values will actually be provided as arguments.  I assume you're not saying that MTruncateToInt32 will do something Bad when it sees these, it will just bail out.  If that's the case then I may just want to live with the performance hit.

Unless there's a side effect from the presence of the bailout in the code, eg, extra guards that I don't want - since they were not there before.
(In reply to Lars T Hansen [:lth] from comment #21)
> (In reply to Hannes Verschore [:h4writer] from comment #20)
> > Comment on attachment 8630384 [details] [diff] [review]
> > Atomics.exchange, introduce TruncateToInt32Policy
> > 
> > Review of attachment 8630384 [details] [diff] [review]:
> > -----------------------------------------------------------------
> > 
> > Looks good. Thanks!
> > 
> > [1] Instead of removing any checks it might be better to test for the
> > following:
> > if (arg->mightBeType(MIRType_Object) || arg->mightBeType(MIRType_Symbol))
> >    return InliningStatus_NotInlined;
> > 
> > These are the corner cases that MTruncateToInt32 doesn't support yet. They
> > are also the most annoying cases that don't happen often. 
> > This code makes sure we don't introduce repeated bailouts.
> 
> @h4writer:
> 
> Well, ho hum.  The net complexity of the code ends up being higher than it
> was before the introduction of the truncation operation, since there's a new
> policy /and/ a guard, and the guard is not semantics-oriented but
> implementation-oriented.

Not sure that is true. 
1) You removed all the "if (double) do truncate", with a typepolicy (which is the normal way to do it)
2) You changed "if (not double or not int) don't inline" to "if (object or symbol) don't inline"

So as a result (1) is a win. (2) is similar in complexity.

> It's very unlikely that those kinds of values will actually be provided as
> arguments.  I assume you're not saying that MTruncateToInt32 will do
> something Bad when it sees these, it will just bail out.  If that's the case
> then I may just want to live with the performance hit.

Both with and without [1] we have a performance hit for 'abnormal' code. The only difference is that without [1] we will degrade the performance of the whole script (and inlined functions). With [1] we only degrade the call to Atomics.exchange. That makes having [1] a little bit more desirable. (Of course, there is no performance hit for normal code.)
Comment on attachment 8628136 [details] [diff] [review]
Atomics.exchange, interp + ion + non-asm test cases

Review of attachment 8628136 [details] [diff] [review]:
-----------------------------------------------------------------

Lgtm. Just looked at the ARM files.

::: js/src/jit/arm/MacroAssembler-arm.cpp
@@ +4834,5 @@
> +    // If LDREXB/H and STREXB/H are not available we use the
> +    // word-width operations with read-modify-add.  That does not
> +    // abstract well, so fork.
> +    //
> +    // Bug 1077321: We may further optimize for ARMv8 (AArch32) here.

Nit: ARMv8 is "AArch64", which doesn't share a backend with ARM (the architectures are very different). Probably can just remove the last line of this comment.

@@ +4847,5 @@
> +MacroAssemblerARMCompat::atomicExchangeARMv7(int nbytes, bool signExtend, const T& mem,
> +                                             Register value, Register output)
> +{
> +    Label Lagain;
> +    Label Ldone;

Prefer just "again" and "done".

@@ +4885,5 @@
> +                                             Register value, Register output)
> +{
> +    // Bug 1077318: Must use read-modify-write with LDREX / STREX.
> +    MOZ_ASSERT(nbytes == 1 || nbytes == 2);
> +    MOZ_CRASH("NYI");

This is safe because atomics code is still accessible Nightly-only, right?
Attachment #8628136 - Flags: review?(sstangl) → review+
(In reply to Sean Stangl [:sstangl] from comment #23)
> Comment on attachment 8628136 [details] [diff] [review]
> Atomics.exchange, interp + ion + non-asm test cases
> 
> Review of attachment 8628136 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> Lgtm. Just looked at the ARM files.
> 
> ::: js/src/jit/arm/MacroAssembler-arm.cpp
> @@ +4834,5 @@
> > +    // If LDREXB/H and STREXB/H are not available we use the
> > +    // word-width operations with read-modify-add.  That does not
> > +    // abstract well, so fork.
> > +    //
> > +    // Bug 1077321: We may further optimize for ARMv8 (AArch32) here.
> 
> Nit: ARMv8 is "AArch64", which doesn't share a backend with ARM (the
> architectures are very different). Probably can just remove the last line of
> this comment.

The ARMv8 docs distinguish (see eg "A1.1 About the ARM architecture" in the ARMv8 doc) between the "ARMv7-compatible" AArch32 and the new AArch64, and ARMv8 does indeed have new instructions for AArch32, some of which can be used to optimize this code if we detect dynamically that we're running in 32-bit mode on ARMv8.  The mentioned bug has some details.

> @@ +4885,5 @@
> > +                                             Register value, Register output)
> > +{
> > +    // Bug 1077318: Must use read-modify-write with LDREX / STREX.
> > +    MOZ_ASSERT(nbytes == 1 || nbytes == 2);
> > +    MOZ_CRASH("NYI");
> 
> This is safe because atomics code is still accessible Nightly-only, right?

Technically because ARMv6 is more or less a dead platform, but yes, Nightly-only makes this acceptable for now.  I had to flesh out the asm.js primitives to handle ARMv6 because of some fuzzing issues; those implementations just call out to C.  But I've not yet done the Ion primitives.  I should expedite this work so that we can stop worrying about it.
Try run with adjusted patches is green:
https://treeherder.mozilla.org/#/jobs?repo=try&revision=ca99f9cad320
Depends on: 1183308