Closed Bug 1288994 Opened 4 years ago Closed 4 years ago

2.39% tp5o Main_RSS (osx-10-10) regression on push 0affe22555807bf2b349f94e115ce0f038ff989d (Thu Jul 21 2016)


(Core :: Graphics: Layers, defect)

50 Branch
Tracking Status
firefox47 --- unaffected
firefox48 --- unaffected
firefox49 --- unaffected
firefox50 --- affected


(Reporter: ashiue, Unassigned)



(Keywords: perf, regression, talos-regression, Whiteboard: [gfx-noted])

Talos has detected a Firefox performance regression from push 0affe22555807bf2b349f94e115ce0f038ff989d. As the author of one of the patches included in that push, we need your help to address this regression.

Summary of tests that regressed:

  tp5o Main_RSS osx-10-10 opt e10s - 2.39% worse

You can find links to graphs and comparison views for each of the above tests at:

On the page above you can see an alert for each affected platform as well as a link to a graph showing the history of scores for this test. There is also a link to a treeherder page showing the Talos jobs in a pushlog format.

To learn more about the regressing test(s), please see:

For information on reproducing and debugging the regression, either on try or locally, see:

*** Please let us know your plans within 3 business days, or the offending patch(es) will be backed out! ***

Our wiki page outlines the common responses and expectations:
I did some retriggers for this alert; here is a zoomed-in view:,ce9560a05bd11e46e1898331e206d3b393b2cc6f,1,1%5D&zoom=1469071706407.4827,1469082697853.9104,317390246.9470289,343314999.193818

This issue might be caused by push 0affe22555807bf2b349f94e115ce0f038ff989d.

Hi George, as you are the patch author, can you take a look at this and determine the root cause? Thanks!
Blocks: 1280481, 1279341
Flags: needinfo?(gwright)
I'm investigating this. It's probably fine, as we expected a small increase in memory usage from that change, but I want to see if we're seeing a benefit to offset this cost.
Flags: needinfo?(gwright)
Whiteboard: [gfx-noted]
Will bug 1289525 fix this? Or are we willing to live with this regression in 50?
Flags: needinfo?(gwright)
bug 1289525 will probably eventually fix this. I think I'm going to re-introduce the timer that clears the pool after a certain amount of time, which will likely cause this to go away. However, I don't think reverting the patch is the right thing to do as I have seen performance improvements (just from subjective testing).
Flags: needinfo?(gwright)
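The timer-based approach described in the comment above can be sketched as follows. This is a minimal, hypothetical illustration, not Gecko's actual gfx/layers code: the class name `TexturePool`, the idle timeout, and the `MaybeClear` hook are all assumptions made for the example. The idea is simply that pooled textures are dropped once the pool has been idle longer than a timeout, bounding the resident-memory cost of keeping the pool warm.

```cpp
#include <cassert>
#include <chrono>
#include <cstddef>
#include <vector>

// Hypothetical sketch of a texture pool that clears itself after a
// period of inactivity. In Gecko, MaybeClear() would be driven by a
// timer rather than called inline; textures are modelled as ints here.
class TexturePool {
public:
  using Clock = std::chrono::steady_clock;

  explicit TexturePool(std::chrono::milliseconds aIdleTimeout)
      : mIdleTimeout(aIdleTimeout), mLastUse(Clock::now()) {}

  // Return a texture to the pool for later reuse.
  void Recycle(int aTextureId) {
    mFree.push_back(aTextureId);
    mLastUse = Clock::now();
  }

  // Take a texture from the pool; returns false if the pool is empty
  // (including when it was just cleared due to the idle timeout).
  bool Acquire(int& aOut) {
    MaybeClear(Clock::now());
    if (mFree.empty()) {
      return false;
    }
    aOut = mFree.back();
    mFree.pop_back();
    mLastUse = Clock::now();
    return true;
  }

  std::size_t Size() const { return mFree.size(); }

  // Drop every pooled texture if the pool has been idle too long.
  void MaybeClear(Clock::time_point aNow) {
    if (aNow - mLastUse >= mIdleTimeout) {
      mFree.clear();
    }
  }

private:
  std::chrono::milliseconds mIdleTimeout;
  Clock::time_point mLastUse;
  std::vector<int> mFree;
};
```

This kind of timeout trades a small steady-state memory cost for avoiding texture reallocation during active scrolling, which matches the perf-versus-RSS trade-off discussed in this bug.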
The regressing patch 0affe2255580 has been merged into Aurora, so please make sure to uplift any fixes to Aurora as well. Thank you.

  tp5o Main_RSS osx-10-10 opt e10s - 2.35% worse
From comment 4, it sounds like we should close this as WONTFIX, which I am fine with. I do see another patch up for review on bug 1289525; possibly we should see whether that improves performance and consider uplifting it to Aurora to help out with this bug?
Flags: needinfo?(gwright)
Let's leave this open for now and reassess when bug 1289525 is done.
Flags: needinfo?(gwright)
Just to expand on that, if the fix turns out to be relatively straightforward like tweaking some values or adding a timeout, then we should probably uplift and resolve this bug.

If the fix turns out to be more invasive, I'd rather not uplift, as I don't think this regression is that important, especially since we handle memory-pressure events by freeing all of the memory held in the pool. In that case, we should probably wontfix this.
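The memory-pressure behaviour mentioned above can be sketched like this. Everything here is illustrative: the `ObserverService` is a stand-in for Gecko's `nsIObserverService` (not its real API), and `TexturePool` is a hypothetical name. The point is that even if the pool retains memory in steady state, a "memory-pressure" notification causes it to release everything it holds.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Toy stand-in for an observer service: callbacks registered for a
// topic are invoked when that topic is notified.
class ObserverService {
public:
  void AddObserver(const std::string& aTopic, std::function<void()> aFn) {
    mObservers.push_back({aTopic, std::move(aFn)});
  }
  void Notify(const std::string& aTopic) {
    for (auto& o : mObservers) {
      if (o.topic == aTopic) {
        o.fn();
      }
    }
  }

private:
  struct Entry {
    std::string topic;
    std::function<void()> fn;
  };
  std::vector<Entry> mObservers;
};

// Hypothetical pool that registers for "memory-pressure" and frees
// all pooled textures (modelled as ints) when the notification fires.
class TexturePool {
public:
  explicit TexturePool(ObserverService& aObs) {
    aObs.AddObserver("memory-pressure", [this] { Clear(); });
  }
  void Recycle(int aTextureId) { mFree.push_back(aTextureId); }
  void Clear() { mFree.clear(); }
  std::size_t Size() const { return mFree.size(); }

private:
  std::vector<int> mFree;
};
```

Because the pool empties itself under memory pressure, the ~2% RSS regression only applies while the system has memory to spare, which is part of the argument for accepting it.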
:gwright, now that bug 1289525 is done, what are the next steps here?
Flags: needinfo?(gwright)
I spoke with overholt about this, but neglected to update this bug. Sorry. I think we want to keep the changes that regressed this as there is a definite perf gain with scrolling. I think the 2% increase in memory usage is justified.
Flags: needinfo?(gwright)
Closed: 4 years ago
Resolution: --- → WONTFIX