Closed Bug 1481556 Opened Last year Closed Last year

Compute the appropriate stub field index instead of assuming it based on execution order

Categories

(Core :: JavaScript Engine: JIT, enhancement, P3)


Tracking


RESOLVED FIXED
mozilla63
Tracking Status
firefox63 --- fixed

People

(Reporter: mgaudet, Assigned: mgaudet)

References

(Blocks 1 open bug)

Details

Attachments

(1 file, 2 obsolete files)

From bug 1461374, comment 8

> The issue here is that readStubWord is stateful: it actually disregards the offset provided to it, and uses instead a member field nextStubField_ to keep track of how many we've processed. 

There are a number of places where this bites us, so it would be nice to get rid of this restriction by actually using the offset we bake into the CacheIR code.
Attachment #8998287 - Attachment is obsolete: true
Comment on attachment 8998290 [details] [diff] [review]
Compute the appropriate stub field index instead of assuming it based on execution order

Review of attachment 8998290 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry for the delay. It's great to finally have this fixed; this is much nicer. One suggestion below to make this O(n) in most cases instead of O(n^2).

::: js/src/jit/CacheIR.h
@@ +552,5 @@
>      // This should not be used when compiling Baseline code, as Baseline code
>      // shouldn't bake in stub values.
> +    StubField readStubFieldForIon(uint32_t offset, StubField::Type type) const {
> +        size_t index = 0;
> +        size_t current_offset = 0;

Style nit: currentOffset

This loop has potential O(n^2) behavior and we allow up to 20 stub fields - not a whole lot but enough that it could hurt a little. How do you feel about adding these arguments at the end of this function's argument list:

uint32_t* lastOffset, uint32_t* lastIndex

Then IonCacheIRCompiler could have two uint32_t fields with similar names and pass in pointers to them. And here we could do something like:

if (*lastOffset <= offset) {
    index = *lastIndex;
    currentOffset = *lastOffset;
}

And at the end:

    *lastOffset = offset;
    *lastIndex = index;

Just a primitive cache but very effective because we access these fields linearly.

And then we should probably move this from CacheIR.h to CacheIR.cpp :)
Another option is to just have these fields in CacheIRWriter, but then you probably have to make them |mutable| (or else we have to remove the |const| in a number of places).
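(A self-contained sketch of the cache being suggested, with invented class names and field sizes for illustration — the real code lives in CacheIRWriter and handles typed StubFields. Because fields can have different sizes, an offset is not a simple multiple of the index, so the lookup accumulates sizes; the mutable last-offset/last-index pair makes linear access patterns O(1) amortized.)

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical stand-in for a stub field: a value plus its size in bytes.
struct StubField {
    uint64_t value;
    uint32_t sizeInBytes; // e.g. 8 for a word, 16 for a 128-bit field
};

class FieldReader {
    std::vector<StubField> fields_;

    // Basic caching to avoid quadratic lookup behaviour: remember the
    // last (offset, index) pair. Mutable so the const reader can update it.
    mutable uint32_t lastOffset_ = 0;
    mutable uint32_t lastIndex_ = 0;

  public:
    explicit FieldReader(std::vector<StubField> fields)
        : fields_(std::move(fields)) {}

    const StubField& readStubField(uint32_t offset) const {
        uint32_t index = 0;
        uint32_t currentOffset = 0;

        // If we've seen an offset no later than this one before, we can
        // start the search there instead of from the beginning.
        if (lastOffset_ <= offset) {
            index = lastIndex_;
            currentOffset = lastOffset_;
        }

        // Accumulate field sizes until we land on the requested offset.
        while (currentOffset != offset) {
            currentOffset += fields_[index].sizeInBytes;
            index++;
        }

        lastOffset_ = offset;
        lastIndex_ = index;
        return fields_[index];
    }
};
```

With fields laid out at offsets 0, 8, and 24 (sizes 8, 16, 8), sequential reads each advance the cache by one step; a read of an earlier offset simply falls back to scanning from the start, so correctness never depends on the cache.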
Updated with caching. Used the mutable-field-in-CacheIRWriter approach because I think
it makes the most sense from a design standpoint. 

try build: https://treeherder.mozilla.org/#/jobs?repo=try&revision=ec38171ed60c2e7f5016a1abb0aa5fcf944cdf73
Attachment #8999000 - Flags: review?(jdemooij)
Attachment #8998290 - Attachment is obsolete: true
Attachment #8998290 - Flags: review?(jdemooij)
Comment on attachment 8999000 [details] [diff] [review]
Compute the appropriate stub field index instead of assuming it based on execution order

Review of attachment 8999000 [details] [diff] [review]:
-----------------------------------------------------------------

Great, thanks! r=me with style nits fixed.

::: js/src/jit/CacheIR.cpp
@@ +37,5 @@
>      assertSameCompartmentDebugOnly(cx_, obj);
>  }
>  
> + StubField
> + CacheIRWriter::readStubFieldForIon(uint32_t offset, StubField::Type type) const

The indentation looks very wrong. Doing an hg qdiff (or non-mq equivalent) before posting a patch is useful to find such issues.

@@ +44,5 @@
> +        size_t currentOffset = 0;
> +
> +        // If we've seen an offset earlier than this before,
> +        // we know we can start the search there at least.
> +        // Otherwise, we start the search from the beginning.

Nit: max line length for comments is 80 chars so this can probably fit on two lines :)

::: js/src/jit/CacheIR.h
@@ +438,5 @@
>      static const size_t MaxStubDataSizeInBytes = 20 * sizeof(uintptr_t);
>      bool tooLarge_;
>  
> +    // Basic caching to avoid quadratic lookup behaviour in
> +    // readStubFieldForIon.

Nit: I think this will fit on one line.

@@ +440,5 @@
>  
> +    // Basic caching to avoid quadratic lookup behaviour in
> +    // readStubFieldForIon.
> +    mutable uint32_t lastOffset_;
> +    mutable size_t lastIndex_;

Nit: maybe make this uint32_t too, for better packing on 64-bit platforms? None of these values should come close to UINT32_MAX.
Attachment #8999000 - Flags: review?(jdemooij) → review+
Pushed by mgaudet@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/cd69094e843a
Compute the appropriate stub field index instead of assuming it based on execution order r=jandem
https://hg.mozilla.org/mozilla-central/rev/cd69094e843a
Status: NEW → RESOLVED
Closed: Last year
Resolution: --- → FIXED
Target Milestone: --- → mozilla63