Closed Bug 1349924 Opened 7 years ago Closed 1 year ago

Inline through polymorphic wrapper functions

Categories

(Core :: JavaScript Engine: JIT, enhancement, P2)

Tracking

RESOLVED INVALID
Performance Impact low

People

(Reporter: jandem, Unassigned)

References

(Blocks 3 open bugs)

Details

(Keywords: perf)

Attachments

(3 files, 9 obsolete files)

90.87 KB, patch
12.65 KB, patch
19.93 KB, patch (jandem: review+)
Ember and React have wrapper functions that (simplified) look like this:

  function wrap(fun) {
    function wrapper() {
      return fun.apply(this, arguments);
    }
    return wrapper;
  }

We can often inline the wrapper, but then we get confused because |fun| is highly polymorphic.

I think the simplest way to victory here is as follows:

(1) Ion should look at the Baseline Call IC to get the list of observed callees (the actual functions, not the canonical lambda function) if necessary. It can guard on this when inlining.

(2) Ion should optimize JSOP_GETALIASEDVAR to a constant if we're inlining a function and have the JSFunction* (we can walk its environment chain). For this it would be great if the frontend could tell us whether the aliased var is read-only, so we don't need any guards.

A hackish prototype seems to work.
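To make (2) concrete, here is a minimal JS-level sketch (reusing the wrap() above; this is not engine code, just the script-side view) of why the aliased |fun| behaves like a per-closure constant:

var inc = wrap(function (x) { return x + 1; });
var dbl = wrap(function (x) { return x * 2; });
// Each wrap() call creates a fresh environment whose |fun| slot is never
// written after initialization, so an inlined wrapper body could substitute
// the specific JSFunction* found on that environment chain.
print(inc(1)); // 2: same wrapper script, |fun| is the increment function
print(dbl(3)); // 6: same wrapper script, |fun| is the doubler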
Shu, how feasible is the frontend part of (2)?

The debugger can also modify aliased vars, but hopefully we can handle that somehow.
Flags: needinfo?(shu)
(In reply to Jan de Mooij [:jandem] from comment #1)
> Shu, how feasible is the frontend part of (2)?
> 
> The debugger can also modify aliased vars, but hopefully we can handle that
> somehow.

Recapping our brief chat on IRC. I think the cheapest way to do this in the frontend would be to attach a list of obviously read-only binding names to the Scopes. Ion can then inspect that if it'd like.

I need to think more about the Debugger, but I think it should be fine:

If the environment object is still created, the Debugger can mutate the corresponding slot. The Debugger will force a bailout to Baseline, which should then work fine.

If not, and the environment object is created only on bailout, then the Debugger can probably stash some stuff on the RematerializedFrame for use only on bailout.

Keeping NI to remind myself to prototype something next week.
(In reply to Shu-yu Guo [:shu] from comment #2)
> If not, and the environment object is created only on bailout, then the
> Debugger can probably stash some stuff on the RematerializedFrame for use
> only on bailout.

For the case we care about in this bug (see example in comment 0) there will definitely be an environment object: it's created when we execute the outer function and in IonBuilder we are inlining the *inner* function that has the outer function's env on its environment chain.
The two patches add an assignedTo bit to BindingName in addition to closedOver. If a binding name is assigned to outside of initialization, its assignedTo bit should be set. This is tracked for all binding names except global ones -- it doesn't matter for those since they can't be accessed by slot.

This work had to be done in the Parser. I had originally hoped to do this in BCE, which would've been a lot easier, but we can't delay it until BCE because of syntax parsing: we can't wait until BCE emits an assignment to an upvar to mark the outer function scope's binding name as assignedTo.

Currently the following expressions result in a binding name being marked assignedTo. Hope I didn't miss anything.
  - Increment/decrement
  - Name assignment and var redeclaration
  - Destructuring assignment and var redeclaration
  - Annex B function in block
  - Body-level function statements
  - for-in/of target assignment and var redeclaration

To use this info in Ion, you can use a BindingIter to iterate over a Scope's names, and query the assignedTo() predicate.
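As a script-level illustration of the rules above (the assignedTo bit itself lives on BindingName inside the engine; this sketch only shows which source forms would set it):

function outer() {
  let a = 1;         // initialization only: not assignedTo
  let b = 2;
  b = 3;             // assignment after initialization: assignedTo
  let c = 0;
  c++;               // increment/decrement: assignedTo
  let d;
  [d] = [4];         // destructuring assignment: assignedTo
  var e;
  for (e of [5]) {}  // for-of target assignment: assignedTo
  return () => a + b + c + d + e;  // all of these bindings are closed over
}
print(outer()());    // 14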

These patches aren't very well tested. Let me know if this approach works for you.
Flags: needinfo?(shu)
Add the assignedTo() predicate to BindingIter.
Attachment #8854284 - Attachment is obsolete: true
Thanks Shu, this is great. What about 'arguments' and 'eval'? Do they also mark names assignedTo?

I'll try to get to this soonish.
(In reply to Jan de Mooij [:jandem] from comment #8)
> Thanks Shu, this is great. What about 'arguments' and 'eval', do they also
> mark names assignedTo?
> 

For 'eval', it marks all bindings as assignedTo. I forgot about arguments aliasing, good call -- this isn't currently handled correctly.
v3: consider all positional formals assignedTo if 'arguments' is used.
Attachment #8854286 - Attachment is obsolete: true
Attachment #8854285 - Attachment is obsolete: true
Attachment #8854598 - Attachment is obsolete: true
Argh attached the wrong patches. I'm just going to roll these up.
Whiteboard: [qf]
I'm not sure if this is possible in the current time-frame (the merge is 17 April), so I made it P2. Feel free to bump it to P1; it looks important enough.
Priority: -- → P2
Whiteboard: [qf] → [qf:p1]
(In reply to Shu-yu Guo [:shu] from comment #13)
> Argh attached the wrong patches. I'm just going to roll these up.

Did you forget to attach it?

---

I keep seeing JS code that would benefit from this optimization and it's probably one of the most important blockers for framework-like code (inlining failures can cause bad polymorphism). I have some other bugs to fix first, though.
Attachment #8854599 - Attachment is obsolete: true
Attachment #8854600 - Attachment is obsolete: true
I've been working on this a bit and my logging indicates this will also help |bind| perf. That makes sense, because bind uses the same wrapper pattern as in comment 0. Hopefully tomorrow I'll have a working prototype.
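For reference, a rough userland analogue of that wrapper shape (the real self-hosted bind differs; myBind is just a hypothetical stand-in to show it matches the comment 0 pattern):

function myBind(fun, thisArg) {
  // Like wrap() in comment 0, the returned closure forwards to the aliased
  // |fun| via apply, so the same constant-folding of the aliased var applies.
  return function bound() {
    return fun.apply(thisArg, arguments);
  };
}
var add = myBind(function (a, b) { return a + b; }, null);
print(add(2, 3)); // 5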
Reworked to handle destructuring on top of anba's recently landed patches.

Jan, I left |scope.dump(pc)| in the patch so you can see during parsing which declarations are marked as assigned to.
Attachment #8857233 - Attachment is obsolete: true
No longer blocks: TimeToFirstPaint_FB
Whiteboard: [qf:p1] → [qf:p2]
I can work on this if no one else is; it seems like a nice improvement for these frameworks. I don't think we should compute this information in the frontend, given the complexity, the unlikelihood of the information ever being used, and the fact that IonBuilder can either do its own analysis or insert a cheap guard.
Assignee: nobody → bhackett1024
Cool. This also seems related to your TI-for-call-objects patch; I wonder if that's worth revisiting.
(In reply to Jan de Mooij [:jandem] from comment #20)
> Cool. This also seems related to your TI-for-call-objects patch, I wonder if
> that's worth revisiting?

For call-object type information to help in this example, the call objects for different instances of |wrapper| would need different object groups and be able to differentiate the possible values of the |fun| aliased variable.  TI just doesn't provide much help with polymorphic code; having call object type information would help with monomorphic code where an aliased var is always written with values of a particular type, but it is complicated to prove that the aliased var is written before its initial undefined value is read.
For the wrap function in comment 0 there is a pretty simple fix that works: remove the usesReturn part of this test in frontend/SharedContext.h:

    bool isLikelyConstructorWrapper() const {
        return usesArguments && usesApply && usesThis && !usesReturn;
    }

This causes the wrapper functions to be treated as constructor wrappers, so the resulting functions are singletons and have their scripts cloned separately.  It changes our time on the microbenchmark below from 200ms to 8ms (v8 is 14ms; all are x64 builds).

function wrap(fun) {
    function wrapper() {
	return fun.apply(this, arguments);
    }
    return wrapper;
}

var adder = wrap(function(a, b) { return a + b; });
var subber = wrap(function(a, b) { return a - b; });

function foo() {
    var i = 0;
    var a = 0;
    for (var i = 0; i < 10000000; i++) {
	a = adder(a, 1);
	a = subber(a, 1);
    }
}

var start = new Date;
foo();
print(new Date - start);

I don't know what the real wrapper functions look like in Ember and React or whether this change would work for them.

The drawback with making this change is that loosening (and renaming) isLikelyConstructorWrapper() has potential to backfire.

I doubt there will be major negative effects from having more singleton functions floating around; the main one I can think of is that if a call site has many singletons with the same canonical function then the type set will just say "object" and it won't be inlineable.
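A sketch of that concern, reusing wrap() from the microbenchmark above (the type-set behavior described here is assumed, not measured):

var wrappers = [];
for (var i = 0; i < 100; i++)
  wrappers.push(wrap(function (x) { return x + i; }));
// If every wrap() result is a singleton with its own cloned script, this call
// site only learns "some object" and cannot pick a single target to inline.
var total = 0;
for (var j = 0; j < wrappers.length; j++)
  total += wrappers[j](1);
print(total);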

Cloning the script each time a function is created can have negative effects on memory, compilation time, etc. if many of the functions are created.  This could be throttled at some point, I guess, but it doesn't seem like it should be necessary for generating good code: we should be able to look at the specific singleton target function we are inlining and refine type information based on it, whether for a |this.initialize| access in the Class.create motivating example for this optimization (i.e. octane-raytrace) or for the |fun| aliased var in the wrapper example here.

Ignoring isLikelyConstructorWrapper() and just generating guards based on information that appears at runtime is fine for aliased var accesses, though for more complicated accesses (e.g. |this.initialize|) we would need the slower generic paths.  This option also doesn't let us use baseline information to guide it: what if the value we read from the function's environment chain during IonBuilder doesn't always appear in practice?  There are compilation hints on JSScript like failedShapeGuard(), and while something similar would work here, it would be brittle.

So there are a number of ways forward with no obvious best one.  What I'd like to do next is get a more complete picture of what we are leaving on the table by generating MCallGeneric at monomorphic-in-practice call sites on account of poor type information or other issues.  Cases like these are just so painful for Ion perf that it's worth focusing on them.  I'll try to instrument speedometer and get some data tomorrow.
(In reply to Brian Hackett (:bhackett) from comment #19)
> I can work on this if no one else is, it seems like a nice improvement for
> these frameworks.  I don't think we should be computing this information in
> the frontend, on account of the complexity, unlikelihood of the information
> ever being used, and the alternative that IonBuilder can either do its own
> analysis or put in a cheap guard.

In general I agree with the sentiment of not doing analysis in the frontend. FWIW though, this is a small amount of complexity added onto the existing frontend name analysis. I feel like it will be easy to maintain if we go the frontend route.

If getting at the info otherwise is even more complex or slows down Ion compilation, it's reasonable to keep track of "assigned-to outside of declaration" on bindings.
(In reply to Brian Hackett (:bhackett) from comment #22)
> The drawback with making this change is that loosening (and renaming)
> isLikelyConstructorWrapper() has potential to backfire.

If you look at the blame, you'll see that I added that usesReturn check because we were seeing overrecursion errors on actual websites: this optimization kept cloning scripts and running them in the interpreter instead of the JIT. The optimization also relies on source code length, so it's very brittle. Cloning scripts is the kind of quick hack I'd like to avoid; these things backfire all the time.
So, I instrumented the JS engine to assign a unique number to each CallGeneric we end up emitting in Ion, and to keep track of which of those call sites always call the same JSFunction and how many times they are called.  I ran Speedometer and aggregated the resulting data according to the source location of the call (which can correspond to many different CallGenerics, due to inlining, recompilation, etc.) and sorted by hit count.  The top hits are below, with assorted source fragments and comments; they account for over 96% of the total.

HITS 14601028: http://speedometer2.benj.me/resources/todomvc/functional-prog-examples/elm/dist/elm.js:92 0.698338420721284

function A2(fun, a, b)
{
  return fun.arity === 2
    ? fun.func(a, b)
    : fun(a)(b);
}

The |fun.func| call is the hot one here.  If we had enough information from baseline about the value of |fun| when inlining, we could determine the likely callee here and inline it.  Having that information in the general case would be tricky because |fun| is not a singleton (it is a wrapper function created by the F2 function in this file), but in this Elm file call sites usually pass a global variable (e.g. _elm_lang$html$Html_Lazy$lazy2), and we could look that global up to get an idea of the specific |fun| to expect.
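For context, a guess at the surrounding pattern; F2 and the global name below are assumptions based on typical Elm output, not taken from the real bundle, and the snippet relies on the A2 definition quoted above:

// Assumed shape of the Elm wrapper constructor that produces |fun|:
function F2(fun) {
  function wrapper(a) { return function (b) { return fun(a, b); }; }
  wrapper.arity = 2;
  wrapper.func = fun;
  return wrapper;
}
// Call sites typically pass a global like this, whose value never changes, so
// resolving the global to a constant object would pin down fun.func:
var _elm_lang$core$Basics$add = F2(function (a, b) { return a + b; });
print(A2(_elm_lang$core$Basics$add, 1, 2)); // 3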

HITS 2385335: http://speedometer2.benj.me/resources/todomvc/functional-prog-examples/elm/dist/elm.js:98 0.8124242971945846

function A3(fun, a, b, c)
{
  return fun.arity === 3
    ? fun.func(a, b, c)
    : fun(a)(b)(c);
}

This has the same concerns as A2 above.

HITS 982005: http://speedometer2.benj.me/resources/todomvc/architecture-examples/emberjs/assets/vendor.js:32730 0.8593916628376342

    function superWrapper() {
      var orig = this._super;
      this._super = superFunc;
      var ret = func.apply(this, arguments);
      this._super = orig;
      return ret;
    }

|func| is an aliased var.

HITS 755722: http://speedometer2.benj.me/resources/todomvc/architecture-examples/react-redux/dist/static/js/main.946269ff.js:1 0.8955363581278789

function(e,t,n,i){var a=e._currentElement;if(t!==a||i!==e._context){var u=o.shouldUpdateRefs(a,t);u&&o.detachRefs(e,a),e.receiveComponent(t,n,i),u&&e._currentElement&&null!=e._currentElement.ref&&n.getReactMountReady().enqueue(r,e)}}

I'm not sure which call is being referenced here, or whether there's much to do here.

HITS 659697: http://speedometer2.benj.me/resources/todomvc/architecture-examples/jquery/node_modules/handlebars/dist/handlebars.js:1304 0.927088366735394

      function prog(context) {
        var options = arguments.length <= 1 || arguments[1] === undefined ? {} : arguments[1];

        var currentDepths = depths;
        if (depths && context != depths[0]) {
          currentDepths = [context].concat(depths);
        }

        return fn(container, context, container.helpers, container.partials, options.data || data, blockParams && [options.blockParams].concat(blockParams), currentDepths);
      }

'fn' here is an aliased var, similar to the ember case.

HITS 309329: http://speedometer2.benj.me/resources/todomvc/architecture-examples/emberjs/assets/vendor.js:31850 0.9418829637557746

    notifySubscribers: function (callbackToSkip, contextToSkip) {
      var subscriber = this.subscriberHead;

      while (subscriber) {
        var next = subscriber.next;

        var callback = subscriber.callback;
        var context = subscriber.context;

        subscriber = next;

        if (callback === callbackToSkip && context === contextToSkip) {
          continue;
        }

        if (context === undefined) {
          callback(this);
        } else {
          callback.call(context, this);
        }
      }
    },

|callback.call| is the call of interest here.  I don't think there's much we can do here.

HITS 297708: http://speedometer2.benj.me/resources/todomvc/architecture-examples/react-redux/dist/static/js/main.946269ff.js:2 0.9561217512271836

function(){var e=this._callbacks,t=this._contexts;if(e){e.length!==t.length?o("24"):void 0,this._callbacks=null,this._contexts=null;for(var n=0;n<e.length;n++)e[n].call(t[n]);e.length=0,t.length=0}}

I don't think there's much we can do here.

HITS 183945: http://speedometer2.benj.me/resources/todomvc/architecture-examples/emberjs/assets/vendor.js:45493 0.9649194784008851

  function registryAlias(name) {
    return function () {
      var _registry__;

      return (_registry__ = this.__registry__)[name].apply(_registry__, arguments);
    };
  }

This is an interesting twist on the aliased var and function property accesses seen elsewhere.
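A hypothetical usage of registryAlias above, to show the twist: the eventual callee depends on both the aliased |name| and a property of this.__registry__, so neither an aliased-var constant nor a property guard alone pins it down:

var app = {
  __registry__: {
    resolve: function (full) { return 'resolved ' + full; }
  },
  // |name| is 'resolve', captured as an aliased var in the returned closure:
  resolveRegistration: registryAlias('resolve')
};
print(app.resolveRegistration('router:main')); // 'resolved router:main'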

So for 93% of the calls captured here we should be able to use the place where we inlined a function to get a clue about the functions that the inlined function will call.  Only 8% of the calls actually use an aliased var to transmit that information, though.
Attached patch patch (obsolete) — Splinter Review
This patch adds several optimizations based on the Ember and Elm framework code which led to most of speedometer's monomorphic CallGenerics identified above:

- IonBuilder::getStaticName tries to ensure that the resulting value will be a specific object rather than just some object with a particular group.  This is only done for functions rather than all objects, as that seems like it will lead to fewer bailouts.  We can't really use type information (as the functions don't have singleton type) or baseline information (as we are often inlining polymorphic functions) here.  If we want more support from the rest of the VM for this optimization, we would need a new technique.  One option would be to have shapes indicate whether their data values have ever changed, which would be neat to do but can have performance fallout and would want plenty of time to bake.

- Building JSOP_GETPROP can use the getStaticName logic, and some other places like JSOP_GETALIASEDVAR have been made more robust to handle constant objects coming in from new places.

- Lambdas used in (function() { ... }).call(...) and .apply are now treated as run-once lambdas.  This pattern is used by Elm, and this change is necessary so that we can use the getStaticName logic in this framework.  (Note that we do not assume/require that run-once lambdas only run once, so moderately speculative optimizations like this are OK to do.)

This patch reduces the number of monomorphic CallGenerics executed on speedometer by roughly 80%, from ~20m to ~4m.  The microbenchmark in comment 22 improves from 195ms to 7ms (d8 is 12ms), and the below microbenchmark based on Elm improves from 230ms to 25ms (d8 is 145ms).

(function() {
'use strict';

function F2(fun)
{
  function wrapper(a) { return function(b) { return fun(a,b); }; }
  wrapper.arity = 2;
  wrapper.func = fun;
  return wrapper;
}

function A2(fun, a, b)
{
  return fun.arity === 2
    ? fun.func(a, b)
    : fun(a)(b);
}

var adder = F2(function(a, b) { return a + b; });
var subber = F2(function(a, b) { return a - b; });

var others = [
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
    F2(function(a, b) {}),
];

for (var i = 0; i < 10000; i++) {
    for (var j = 0; j < others.length; j++)
	A2(others[j], 0, 0);
}

function foo() {
    var i = 0;
    var a = 0;
    for (var i = 0; i < 10000000; i++) {
	a = A2(adder, a, 1);
	a = A2(subber, a, 1);
    }
}

var start = new Date;
foo();
print(new Date - start);

}).call(this);
Attachment #8888400 - Flags: review?(jdemooij)
Looks good for the most part, but the bailout part worries me a bit. We will bail 10 times and then set the hadFrequentBailouts flag, which also disables LICM. This is the kind of thing that may not affect our (micro-)benchmarks but is going to backfire on some random benchmark/website out there and make us look bad.
(In reply to Jan de Mooij [:jandem] from comment #28)
> Looks good for the most part, except the bailout part worries me a bit. We
> will bail 10 times then set the hadFrequentBailouts flag which also disables
> LICM. This is the kind of thing that may not affect our (micro-)benchmarks,
> but is going to backfire on some random benchmark/website out there and will
> make us look bad.

Would it be OK to add another bit on the outer JSScript for hadObjectMismatchBailout or something, so that we bail out immediately and only deoptimize this particular optimization?
(In reply to Brian Hackett (:bhackett) from comment #29)
> Would it be OK to add another bit on the outer JSScript for
> hadObjectMismatchBailout or something, so that we bail out immediately and
> only deoptimize this particular optimization?

That would be better, but it's still a bit unfortunate that we can waste an Ion compilation on this. I think the main problem is that Ion is flying blind and doesn't have any information to tell whether the JSFunction* has changed or might change. I wonder if we should limit this to smaller scripts where a bailout is less of an issue, although that doesn't account for (other) inlined scripts...

Can you check if this bailout happens on any of our shell benchmarks (including assorted) or Speedometer/Gmail/etc?
I added a bailout kind and some instrumentation to see when this bailout is triggered, wrote a testcase for it, and then ran the assorted shell benchmark suite, Speedometer, and Gmail, and did not see this bailout get triggered on any of these workloads.  Because of all the type information, observed types, etc. that can trigger invalidations, it's hard to get this specific bailout to trigger even when writing a testcase; I'll include the one I wrote when I check this stuff in.
Comment on attachment 8888400 [details] [diff] [review]
patch

So can you post the patch that adds the BailoutKind?
Attachment #8888400 - Flags: review?(jdemooij)
Attached patch patch (obsolete) — Splinter Review
This patch adds a bailout kind, though it is treated the same as Bailout_ObjectIdentityOrTypeGuard and does not update any bits on JSScript.  Putting another bit on JSScript seems excessive since I haven't seen this bailout hit in practice, but I can add that too if you want.
Attachment #8888400 - Attachment is obsolete: true
Attachment #8889427 - Flags: review?(jdemooij)
Comment on attachment 8889427 [details] [diff] [review]
patch

Review of attachment 8889427 [details] [diff] [review]:
-----------------------------------------------------------------

::: js/src/jit/BaselineBailouts.cpp
@@ +2020,1 @@
>          HandleBaselineInfoBailout(cx, outerScript, innerScript);

IIUC this is actually worse than before because in IonBuilder we check the hadFrequentBailouts flag, but we will never set that if we immediately invalidate here.

::: js/src/jit/IonBuilder.cpp
@@ +7386,5 @@
> +    // likely to change over time and are more likely to require precise
> +    // information for inlining decisions.
> +    if (!outermostBuilder()->script()->hadFrequentBailouts()) {
> +        Value v = staticObject->as<NativeObject>().getSlot(slot);
> +        if (v.isObject() && v.toObject().is<JSFunction>()) {

Maybe also add

  && v.toObject().as<JSFunction>().isInterpreted()

to reduce the bailout likelihood even more.
Attached patch patchSplinter Review
Attachment #8889427 - Attachment is obsolete: true
Attachment #8889427 - Flags: review?(jdemooij)
Attachment #8889500 - Flags: review?(jdemooij)
Comment on attachment 8889500 [details] [diff] [review]
patch

Review of attachment 8889500 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks.

I really don't want to be pedantic, but a lot of these "should be uncommon" cases do show up in the wild at some point. Over the past months we have fixed lots of weird/uncommon cases we didn't expect, so I just want to avoid adding more instances of that.
Attachment #8889500 - Flags: review?(jdemooij) → review+
Pushed by bhackett@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/7aa7d265948a
Try to specialize property loads to specific function objects, r=jandem.
https://hg.mozilla.org/mozilla-central/rev/7aa7d265948a
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla56
Is it possible this made things slower on Speedometer? See for instance:

https://arewefastyet.com/#machine=35&view=single&suite=speedometer-misc&subtest=Elm-TodoMVC-CompletingAllItems-sync
https://arewefastyet.com/#machine=17&view=single&suite=speedometer-misc&subtest=Elm-TodoMVC-CompletingAllItems-sync

The data is noisy but the last 4 data points or so seem to show a regression.
Flags: needinfo?(bhackett1024)
(In reply to Jan de Mooij [:jandem] from comment #39)
> Is it possible this made things slower on Speedometer? See for instance:
> 
> https://arewefastyet.com/#machine=35&view=single&suite=speedometer-misc&subtest=Elm-TodoMVC-CompletingAllItems-sync
> https://arewefastyet.com/#machine=17&view=single&suite=speedometer-misc&subtest=Elm-TodoMVC-CompletingAllItems-sync
> 
> The data is noisy but the last 4 data points or so seem to show a regression.

Sure, it's possible.  This patch allows us to inline more, but that does not always mean things will speed up.  These are both Elm tests, and if we are inlining a lot more now in code that isn't all that hot, it could be a net slowdown.
Flags: needinfo?(bhackett1024)
(In reply to Brian Hackett (:bhackett) from comment #40)
> Sure, it's possible.  This patch allows us to inline more but that does not
> always mean things will speed up.  These are both Elm tests and if we are
> inlining a lot more now in code that isn't all that hot it could be a net
> slowdown.

OK, but can you look into this more? Otherwise we should consider a backout, which would be unfortunate :/
Flags: needinfo?(bhackett1024)
OK, I ran the Elm subtest of Speedometer and compared scores for Elm-TodoMVC-CompletingAllItems-Sync both with and without this bug and with and without your inlining heuristics patch in bug 1382650.  All these scores are averages of 10 runs; none of the runs had large outliers that can distort the results.

Without 1349924, Without 1382650 : 202ms
With 1349924, Without 1382650    : 228ms
Without 1349924, With 1382650    : 205ms
With 1349924, With 1382650       : 198ms

So, by itself this bug does slow down this test, and by itself the inlining heuristics may be a minor slowdown, but when combined there may be a minor speedup.  I think the best way forward would be to land the new inlining heuristics.  Even if the improvement on speedometer is small, this patch will improve codegen for other people using the Elm and Ember frameworks.
Flags: needinfo?(bhackett1024)
Thanks for looking at this! I don't have immediate plans to finish bug 1382650 though, since it's a lot of work to get right.

I guess we can look at Elm to see where our current inlining heuristics are not ideal.
(In reply to Brian Hackett (:bhackett) from comment #42)
> OK, I ran the Elm subtest of Speedometer and compared scores for
> Elm-TodoMVC-CompletingAllItems-Sync both with and without this bug and with
> and without your inlining heuristics patch in bug 1382650.  All these scores
> are averages of 10 runs; none of the runs had large outliers that can
> distort the results.
> 
> Without 1349924, Without 1382650 : 202ms
> With 1349924, Without 1382650    : 228ms
> Without 1349924, With 1382650    : 205ms
> With 1349924, With 1382650       : 198ms
> 
> So, by itself this bug does slow down this test, and by itself the inlining
> heuristics may be a minor slowdown, but when combined there may be a minor
> speedup.  I think the best way forward would be to land the new inlining
> heuristics.  Even if the improvement on speedometer is small, this patch
> will improve codegen for other people using the Elm and Ember frameworks.

Given that Jan doesn't have immediate plans to finish bug 1382650, this may mean that we would ship this regression for Speedometer in 56 (and by extension in 57), which would be really unfortunate given all of the ongoing effort that has gone into optimizing for this benchmark across the board.  Perhaps the best path going forward for now would be to consider a backout here and make this depend on bug 1382650, to ensure that they will both land together so that we wouldn't incur a regression from this landing without the other piece?  What do you think?

Thanks! :-)
Flags: needinfo?(bhackett1024)
Pushed by bhackett@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/9c03ad1d4c47
Backed out changeset 7aa7d265948a due to regression.
Flags: needinfo?(bhackett1024)
Depends on: 1382650
Status: RESOLVED → REOPENED
Resolution: FIXED → ---
Target Milestone: mozilla56 → ---
Keywords: perf
Brian, please revisit. The next landing goal is FF60.
Flags: needinfo?(bhackett1024)
Whiteboard: [qf:p2] → [qf:p1]
This is blocked on bug 1382650.
Flags: needinfo?(bhackett1024)
Whiteboard: [qf:p1] → [qf:i60][qf:p1]
Whiteboard: [qf:i60][qf:p1] → [qf:f60][qf:p1]
Whiteboard: [qf:f60][qf:p1] → [qf:f61][qf:p1]
Whiteboard: [qf:f61][qf:p1] → [qf:f64][qf:p1]
Whiteboard: [qf:f64][qf:p1] → [qf:p1:f64]
Whiteboard: [qf:p1:f64] → [qf:p3:f67]
Whiteboard: [qf:p3:f67] → [qf:p3]
Assignee: bhackett1024 → nobody
Performance Impact: --- → P3
Whiteboard: [qf:p3]
Severity: normal → S3

Warp changed the inlining story a lot here, so the situation is quite different these days.

Status: REOPENED → RESOLVED
Closed: 7 years ago → 1 year ago
Resolution: --- → INVALID