Closed Bug 1523314

7.7 - 17.21% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-fcp (linux64, linux64-qr, osx-10-10, windows10-64, windows10-64-qr, windows7-32) regression on push c31d2c5695b185f5dab74513f911388ddcb23a46 (Sat Jan 19 2019)

Categories

(Core :: DOM: Security, defect, P3)

Tracking

RESOLVED WONTFIX

People

(Reporter: Bebe, Unassigned)

Details

(Keywords: perf, regression, Whiteboard: [domsecurity-backlog1])

Raptor has detected a Firefox performance regression from push:

https://hg.mozilla.org/integration/autoland/pushloghtml?changeset=c31d2c5695b185f5dab74513f911388ddcb23a46

You are the author of one of the patches included in that push, and we need your help to address this regression.

Regressions:

17% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-fcp linux64-qr opt 71.38 -> 83.66
16% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-fcp linux64 opt 72.75 -> 84.33
14% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-fcp osx-10-10 opt 113.19 -> 129.04
11% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-fcp windows10-64 opt 72.19 -> 79.83
9% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-fcp windows7-32 opt 71.77 -> 78.29
8% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-fcp windows10-64-qr opt 71.73 -> 77.25

Improvements:

17% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-loadtime linux64-qr opt 390.91 -> 324.78
13% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-loadtime linux64 opt 375.25 -> 326.79
10% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-loadtime windows7-32 opt 376.21 -> 337.33
10% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-loadtime windows7-32 pgo 348.08 -> 314.62
9% raptor-tp6-yandex-firefox raptor-tp6-yandex-firefox-loadtime windows10-64 opt 376.25 -> 342.08
7% raptor-tp6-yandex-firefox linux64-qr opt 171.04 -> 158.36
5% raptor-tp6-yahoo-news-firefox raptor-tp6-yahoo-news-firefox-fcp linux64 pgo 265.17 -> 252.54
3% raptor-tp6-imdb-firefox raptor-tp6-imdb-firefox-loadtime linux64-qr opt 624.48 -> 607.50
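
(For reference, each percentage above is the relative change between the "before" and "after" values on that row. A minimal, purely illustrative sketch of the arithmetic in Python:)

    def pct_change(before: float, after: float) -> float:
        """Relative change of a Raptor metric, in percent."""
        return (after - before) / before * 100

    # First regression row above: fcp on linux64-qr opt went 71.38 -> 83.66.
    print(f"{pct_change(71.38, 83.66):.1f}%")  # ~17.2%, reported as 17%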

You can find links to graphs and comparison views for each of the above tests at: https://treeherder.mozilla.org/perf.html#/alerts?id=18882

On the page above you can see an alert for each affected platform as well as a link to a graph showing the history of scores for this test. There is also a link to a Treeherder page showing the Raptor jobs in a pushlog format.

To learn more about the regressing test(s) or reproducing them, please see: https://wiki.mozilla.org/Performance_sheriffing/Raptor

*** Please let us know your plans within 3 business days, or the offending patch(es) will be backed out! ***

Our wiki page outlines the common responses and expectations: https://wiki.mozilla.org/Performance_sheriffing/Talos/RegressionBugsHandling

Component: General → DOM: Security
Product: Testing → Core
Version: Version 3 → unspecified
Flags: needinfo?(gijskruitbosch+bugs)

I looked into this yesterday; the fnbpaint and fcp markers in these profiles (comment #1) are confusing, as they seem to appear before the load captured in that profile.

Unfortunately, the noisiness makes it hard to figure out what the difference between before and after is, as there are big differences between individual runs.

I also tried to find a way of feeding the profiles to https://perf-html.io/compare/ but I couldn't find the right incantation.

Julien, can you help with the latter? Is this possible directly from the zipfiles exposed on taskcluster or do I need to reupload individual profiles or something?

I'll reiterate (from bug 1519074) that these metrics are just returning to the status quo prior to the regression caused by bug 1515863. From that perspective, I think we could close this as wontfix - but I'm curious what the cause of the noise here is and if we can improve the situation still further.

Any suggestions for how to do that would be gratefully received. Florian, if you have a moment, could you look at these profiles and flag anything I could do to investigate them further? I couldn't really find much myself... In the content process, it looks like we might be spending a few ms checking CSPs for whether they allow particular subresource loads (i.e. images etc.), in a way that now happens before these paints instead of after. But (a) I don't really understand why (we would still have been doing that work prior to the changes in bug 1519074, so it's unclear why it now happens before paint instead of after), and (b) that doesn't really explain the noise (i.e. why it differs between runs).
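
(To make the shape of that work concrete, here is a toy Python sketch of per-subresource CSP matching. The policy contents and helper names are hypothetical; Gecko's real implementation is C++, in nsCSPContext and friends, and is considerably more involved:)

    from urllib.parse import urlparse

    # Hypothetical parsed policy: directive -> set of allowed sources.
    POLICY = {
        "img-src": {"'self'", "https://yastatic.net"},
        "default-src": {"'self'"},
    }

    def allows(policy, directive, page_origin, url):
        # Fall back to default-src when the specific directive is absent.
        sources = policy.get(directive, policy.get("default-src", set()))
        parts = urlparse(url)
        origin = f"{parts.scheme}://{parts.netloc}"
        if "'self'" in sources and origin == page_origin:
            return True
        return origin in sources

    # Every image/script/etc. load pays a check like this before its fetch
    # proceeds, which is why moving the checks from after paint to before
    # paint can show up in fcp.
    for url in ("https://yandex.ru/logo.png", "https://yastatic.net/i.png"):
        print(url, allows(POLICY, "img-src", "https://yandex.ru", url))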

Flags: needinfo?(gijskruitbosch+bugs)
Flags: needinfo?(florian)
Flags: needinfo?(felash)

> Julien, can you help with the latter? Is this possible directly from the zipfiles exposed on taskcluster or do I need to reupload individual profiles or something?

Yeah, unfortunately you have to upload them separately for now -- this feature is still a WIP!
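
(For anyone landing here later: the Raptor job's profile artifact is a zip of per-run gecko profiles. A minimal Python sketch, with a hypothetical artifact filename, for splitting it into individual files you can then upload to the profiler UI one at a time:)

    import pathlib
    import zipfile

    # Hypothetical filename; use the profile artifact downloaded from the
    # Raptor job on Treeherder/Taskcluster.
    ARTIFACT = "profile_raptor-tp6-yandex-firefox.zip"

    out = pathlib.Path("profiles")
    out.mkdir(exist_ok=True)
    with zipfile.ZipFile(ARTIFACT) as zf:
        for name in zf.namelist():
            # Keep only the profile payloads; skip directories and metadata.
            if name.endswith((".profile", ".json")):
                zf.extract(name, out)
                print("extracted", out / name)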

Flags: needinfo?(felash)
Priority: -- → P3
Whiteboard: [domsecurity-backlog1]
Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → WONTFIX
Flags: needinfo?(florian)