Closed Bug 1257810 Opened 4 years ago Closed 4 years ago

2-3% linux64 damp/sessionrestore regression on inbound (v.48) from push b20234ac6cf4 on March 8, 2016


(Core :: JavaScript: GC, defect)

Tracking Status
firefox48 --- fixed


(Reporter: jmaher, Unassigned)



(Keywords: perf, regression, Whiteboard: [talos_regression])


(1 file)

Talos has detected a Firefox performance regression from push. As the author of one of the patches included in that push, we need your help to address this regression.

This is a list of all known regressions and improvements related to the push:

On the page above you can see an alert for each affected platform as well as a link to a graph showing the history of scores for this test. There is also a link to a treeherder page showing the Talos jobs in a pushlog format.

To learn more about the regressing test(s), please see:

Reproducing and debugging the regression:
If you would like to re-run this Talos test on a potential fix, use try with the following syntax:
try: -b o -p linux64 -u none -t other,other-e10s --rebuild 5  # add "mozharness: --spsProfile" to generate profile data

(we suggest --rebuild 5 to be more confident in the results)

To run the test locally and do a more in-depth investigation, first set up a local Talos environment:

Then run the following command from the directory where you set up Talos:
talos --develop -e [path]/firefox -a sessionrestore

(add --e10s to run tests in e10s mode)

Making a decision:
As the patch author, we need your feedback to help us handle this regression.
*** Please let us know your plans by Monday, or the offending patch(es) will be backed out! ***

Our wiki page outlines the common responses and expectations:

Hi Terrence, this is bug #2; sorry for the flood! Can you take a look at this bug? It seems to affect linux64 only; all other signals seem to be noise.
Flags: needinfo?(terrence)
"Talos has detected a Firefox performance regression from push."

What is the commit id I should be looking at?
Flags: needinfo?(jmaher)
oh, silly me!

it is this push:
Flags: needinfo?(jmaher)
Ah, I see that it is in the bug title and I simply can't read.
Ah, this is bug 1224038. Last time I landed, it regressed all the things, so it looks like the fix I applied mostly worked. There's one other thing we can try here to maybe eke out a bit more performance.
cool, glad that you fixed most of the regressions and have an idea to look at the others.  Thanks again for looking into this!
:terrence, did you have any luck trying something here?  Are we out of options?
The counter is a single word, so ReleaseAcquire should be good enough. Until we fixed the bugs in beginOffThreadParseTask, we were still seeing crashes, even with SequentiallyConsistent ordering, so we can be basically certain that those crashes were not caused by too-loose ordering guarantees.
Flags: needinfo?(terrence)
Attachment #8739149 - Flags: review?(sphink)
Review of attachment 8739149 [details] [diff] [review]:

Seems safe to me. There's only one code path that looks at this value, and it's good as long as all threads see _nextCellUniqueId_ updates (which ReleaseAcquire will ensure).

But was this expected to fix a perf regression? I thought that on x86 architectures, there was no difference in behavior between ReleaseAcquire and SequentiallyConsistent?
Attachment #8739149 - Flags: review?(sphink) → review+
Closed: 4 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla48