StackTypeSet::convertDoubleElements preventing dense arrays from being compiled as such in Ion

RESOLVED INVALID

Component: Core :: JavaScript Engine

(Reporter: shu, Unassigned)


Attachments

(2 attachments)

Description (Reporter, 5 years ago)
Created attachment 711195 [details]
microbenchmark

The current logic in |StackTypeSet::convertDoubleElements| returns |AmbiguousDoubleConversion| for a type set that includes a mix of packed arrays with an element type set of {int} and packed arrays with an element type set of {int,float}. The {int} packed array will set |dontConvert = true| via |!types->hasType(Type::DoubleType())|, and the {int,float} packed array will set |maybeConvert = true|.

This causes a getelem IC, or worse, a VM call.

There's a ParallelArray benchmark that has a loop similar to the |sum| loop in the attached microbenchmark.

Benchmark before patch:
  % time: 57

Benchmark after patch:
  % time: 32
Comment 1 (Reporter, 5 years ago)
Created attachment 711196 [details] [diff] [review]
patch
Attachment #711196 - Flags: feedback?(bhackett1024)
Comment 2 (Brian Hackett (:bhackett), 5 years ago)

Comment on attachment 711196 [details] [diff] [review]
patch

Review of attachment 711196 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jsinfer.cpp
@@ +2047,5 @@
> +            if (!types->hasType(Type::Int32Type()))
> +                dontConvert = true;
> +            alwaysConvert = false;
> +            continue;
> +        }

This logic will allow writing doubles to an array that only contains int32s, which will both break the type information for the array and unnecessarily deoptimize it.

Can you post the benchmark and point out where the problem is? This has to be a setelem rather than a getelem, as an AmbiguousDoubleConversion on getelem just means we can't do an optimized double load.

I think the only way to fix this is a new MIR/LIR setelem node that checks for double arrays and then either writes its int32 input or converts the input to a double first.
Attachment #711196 - Flags: feedback?(bhackett1024)
Comment 3 (Reporter, 5 years ago)
(In reply to Brian Hackett (:bhackett) from comment #2)
> This logic will allow writing doubles to an array that only contains int32s,
> which will both break the type information for the array and unnecessarily
> deoptimize it.
> 

I see.

> Can you post the benchmark and point out where the problem is at?  This has
> to be a setelem rather than a getelem as an AmbiguousDoubleConversion on
> getelem just means we can't do an optimized double load.

The benchmark is already attached. The problem is that all the reads from a[i+j] get compiled as GetElemCache, since |TypeInferenceOracle::elementReadIsDenseNative| returns true only if convertDoubleElements returns something other than AmbiguousDoubleConversion.

Among other things, GetElemCache isn't thread-safe.

> 
> I think the only way to fix this is a new MIR/LIR setelem node that checks
> for double arrays and then either writes its int32 input or converts the
> input to a double first.
Comment 4 (Reporter, 5 years ago)
I was looking at the wrong tree. This problem is local to the ParallelArray branch.
Status: NEW → RESOLVED
Last Resolved: 5 years ago
Resolution: --- → INVALID