Scroll lag on a specific samsung.com site
Categories
(Core :: Layout, defect)
Tracking
| Tracking | Status |
|---|---|
| firefox-esr115 | unaffected |
| firefox-esr128 | wontfix |
| firefox131 | wontfix |
| firefox132 | wontfix |
| firefox133 | fixed |
People
(Reporter: I, Assigned: jfkthame)
References
(Regression)
Details
(Keywords: perf, perf-alert, regression)
Attachments
(4 files)
User Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:130.0) Gecko/20100101 Firefox/130.0
Steps to reproduce:
- Install a new Firefox profile
- Go to https://www.samsung.com/th/smartphones/galaxy-s23/
- Scroll to see the result, similar to the attached 2024-09-27 21-52-28.mkv file (recorded at 60 fps)
Actual results:
There's scroll lag when scrolling with the mouse wheel, but not when dragging the scroll bar, on the https://www.samsung.com/th/smartphones/galaxy-s23/ site.
Profiler : https://share.firefox.dev/4eC7cMs
Expected results:
Smooth scrolling when using the mouse wheel.
Comment 2•1 year ago
Trying with Core::Layout, please move if there's a better component.
Comment 3•1 year ago
Thanks for the bug report! I can reproduce the scroll jankiness here, yup.
Here's a profile I captured (similar to the one in comment 0 but with C++ samples to show what we're doing in Gecko, recorded at a 0.1ms sampling interval):
https://share.firefox.dev/3XERMAl
The site seems to be firing scroll events that run JS on every sample, and that JS includes a call to Element.getBoundingClientRect, which ends up triggering an expensive layout flush. I'm not sure why the layout flush is expensive -- the restyle between each scroll event shows it being associated with some class attribute being dynamically added & removed, so presumably that class is associated with some styles that are expensive for us to invalidate.
Comment 4•1 year ago
The relevant code is:
b.prototype.fixMenu = function (a) {
  var b = document.body.scrollTop ||
          document.documentElement.scrollTop;
  if (void 0 === a ? 0 : a) {
    this.ele.section.removeClass('floating-navigation--fixed');
    this.menuTop =
      this.ele.section.target[0].getBoundingClientRect().top + b;
  }
  a = this.ele.section.hasClass('floating-navigation--fixed');
  this.menuTop < b && (!1 !== this.statePC || !0 !== this.isLandscape)
    ? a || this.ele.section.addClass('floating-navigation--fixed')
    : a && this.ele.section.removeClass('floating-navigation--fixed');
};
Where a is the scroll event.
The relevant CSS rule, afaict, is:
[class*='--fixed'] .floating-navigation__wrap { position: fixed; top: 0; left: 0; right: 0 }
So they're re-laying out at least the whole menu on every scroll, twice (not great).
That said, 99% of that layout time seems to be spent in ICU4X code, which is probably excessive. Jonathan, Ting-Yu, are we doing something dumb, or is it expected?
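For reference, the usual page-side mitigation for this pattern is to coalesce scroll events into a single requestAnimationFrame callback, so that layout-reading work runs at most once per frame. Below is a minimal hypothetical sketch; the `section` wrapper and `raf` scheduler are stand-ins, not the site's actual code.

```javascript
// Sketch: coalesce scroll events so that the class toggle (and any layout
// read it triggers) runs at most once per frame. `section` is a stand-in
// for the site's jQuery-like element wrapper; `raf` is a stand-in for
// requestAnimationFrame.
function makeScrollHandler(section, raf) {
  let scheduled = false;
  let lastScrollTop = 0;
  return function onScroll(scrollTop) {
    lastScrollTop = scrollTop;
    if (scheduled) return;       // a burst of events -> one callback
    scheduled = true;
    raf(() => {
      scheduled = false;
      const fixed = section.hasClass('floating-navigation--fixed');
      if (section.menuTop < lastScrollTop) {
        if (!fixed) section.addClass('floating-navigation--fixed');
      } else if (fixed) {
        section.removeClass('floating-navigation--fixed');
      }
    });
  };
}
```

This doesn't make the layout flush itself cheaper, but it bounds how often the site can trigger one.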
Comment 5•1 year ago
OK, if I disable the ICU4X segmenter, then this works a lot more reasonably.
Comment 6•1 year ago
Set release status flags based on info from the regressing bug 1854032
Comment 7•1 year ago
This is Thai segmentation being expensive. In the old (non-ICU4X) code path, we have a cache in ComplexBreaker::GetBreaks in front of the equivalent NS_GetComplexLineBreaks call, so it may be similarly expensive, but this is mitigated by caching the results.
Putting a similar cache in front of the ICU4X usage here would fix this.
One consideration is that the old code partitions the text according to Unicode ranges, and only caches runs that it considers "complex" (Thai, Khmer, Lao, Tibetan). We could do the same here, but that would mean an additional pass over the text to identify runs to be cached; or we could just try caching always, and see what the memory/performance implications are like.
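As a rough illustration of the caching idea, here is a sketch in JavaScript, using Intl.Segmenter as a stand-in for the ICU4X segmenter. The LineBreakCache class, its parameters, and the LRU eviction policy below are all hypothetical, not the Gecko implementation.

```javascript
// Hypothetical sketch: memoize segmentation results so repeated layout of
// the same text (e.g. on every scroll event) pays the segmentation cost
// only once. Intl.Segmenter stands in for the ICU4X segmenter.
class LineBreakCache {
  constructor(maxEntries = 256) {
    this.map = new Map();   // Map iterates in insertion order: oldest first
    this.maxEntries = maxEntries;
    this.segmenter = new Intl.Segmenter('th', { granularity: 'word' });
  }
  getBreaks(text) {
    const hit = this.map.get(text);
    if (hit !== undefined) {
      this.map.delete(text);     // refresh recency for LRU eviction
      this.map.set(text, hit);
      return hit;
    }
    const breaks = Array.from(this.segmenter.segment(text), s => s.index);
    if (this.map.size >= this.maxEntries) {
      this.map.delete(this.map.keys().next().value);  // evict oldest entry
    }
    this.map.set(text, breaks);
    return breaks;
  }
}
```

The tradeoff described above still applies here: caching every string costs memory, while pre-scanning to cache only complex-script runs costs an extra pass over the text.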
Comment 8•1 year ago
This does not change user-visible behavior, though the MruCache may perform slightly better
than the previous hashtable implementation. Primarily this is preparation for using the same
cache in conjunction with the new ICU4X-based line segmenter.
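For context, mozilla::MruCache is roughly a fixed-size, direct-mapped cache: each key hashes to exactly one slot, and a miss simply overwrites that slot, with no probing, chaining, or resizing. A hypothetical JavaScript analogue (not the actual mfbt implementation):

```javascript
// Hypothetical sketch of a direct-mapped cache in the spirit of
// mozilla::MruCache: one slot per hash bucket, and a miss overwrites
// whatever was there. Cheap lookups, no eviction bookkeeping.
class DirectMappedCache {
  constructor(numSlots = 4093) {   // 4093 matches the size in the log below
    this.slots = new Array(numSlots).fill(null);
  }
  hashOf(key) {
    let h = 0;
    for (let i = 0; i < key.length; i++) {
      h = (h * 31 + key.charCodeAt(i)) >>> 0;
    }
    return h % this.slots.length;
  }
  lookup(key, computeValue) {
    const i = this.hashOf(key);
    const slot = this.slots[i];
    if (slot !== null && slot.key === key) {
      return slot.value;                 // hit: reuse the cached value
    }
    const value = computeValue(key);     // miss: compute and overwrite slot
    this.slots[i] = { key, value };
    return value;
  }
}
```

This structure trades hit rate for constant-time, allocation-free lookups, which is plausibly why it can outperform a general hashtable on this workload.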
Comment 9•1 year ago
Comment 10•1 year ago
Comment 11•1 year ago
Thanks for working on a fix, Jonathan!
Comment 12•1 year ago
Comment 13•1 year ago
Comment 14•1 year ago
Backed out for causing Base-toolchains bustage.
- Backout link
- Push with failures
- Failure Log
- Failure line:
[task 2024-10-16T13:13:41.804Z] 13:13:41 ERROR - /builds/worker/checkouts/gecko/intl/lwbrk/LineBreakCache.h:45:7: error: 'mozilla::intl::LineBreakCache' has a base 'mozilla::MruCache<mozilla::intl::{anonymous}::LBCacheKey, mozilla::intl::{anonymous}::LBCacheEntry, mozilla::intl::LineBreakCache, 4093>' whose type uses the anonymous namespace [-Werror=subobject-linkage]
[task 2024-10-16T13:13:41.804Z] 13:13:41 INFO - class LineBreakCache
[task 2024-10-16T13:13:41.805Z] 13:13:41 INFO - ^~~~~~~~~~~~~~
Comment 15•1 year ago
Comment 16•1 year ago
https://hg.mozilla.org/mozilla-central/rev/eb85113b320e
https://hg.mozilla.org/mozilla-central/rev/e73c4076035c
https://hg.mozilla.org/mozilla-central/rev/9e89f9e1953f
Comment 17•1 year ago
(In reply to Cosmin Sabou [:CosminS] from comment #16)
https://hg.mozilla.org/mozilla-central/rev/eb85113b320e
https://hg.mozilla.org/mozilla-central/rev/e73c4076035c
https://hg.mozilla.org/mozilla-central/rev/9e89f9e1953f
Perfherder has detected a browsertime performance change from push 9e89f9e1953f2588fd94d262bd8361b3b3b05f52.
Improvements:
| Ratio | Test | Platform | Options | Absolute values (old vs new) | Performance Profiles |
|---|---|---|---|---|---|
| 3% | speedometer Vanilla-ES2015-TodoMVC/Adding100Items/Async | linux1804-64-shippable-qr | fission webrender | 2.96 -> 2.87 | Before/After |
Details of the alert can be found in the alert summary, including links to graphs and comparisons for each of the affected tests.
If you need the profiling jobs you can trigger them yourself from treeherder job view or ask a sheriff to do that for you.
You can run these tests on try with ./mach try perf --alert 2511
For more information on performance sheriffing please see our FAQ.
Comment 18•1 year ago
(In reply to Pulsebot from comment #13)
Backout by agoloman@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/a592cf6623ea
Backed out 3 changesets for causing Base-toolchains bustage.
Perfherder has detected a browsertime performance change from push a592cf6623ea583a06b324d408678dfd2c685e39.
Regressions:
| Ratio | Test | Platform | Options | Absolute values (old vs new) | Performance Profiles |
|---|---|---|---|---|---|
| 4% | speedometer Angular2-TypeScript-TodoMVC/Adding100Items/Async | linux1804-64-nightlyasrelease-qr | fission webrender | 2.83 -> 2.93 | Before/After |
| 4% | speedometer Angular2-TypeScript-TodoMVC/Adding100Items/Async | linux1804-64-nightlyasrelease-qr | fission webrender | 2.82 -> 2.93 | |
| 4% | speedometer React-TodoMVC/Adding100Items/Async | linux1804-64-nightlyasrelease-qr | fission webrender | 3.12 -> 3.24 | |
| 4% | speedometer VanillaJS-TodoMVC/Adding100Items/Async | linux1804-64-nightlyasrelease-qr | fission webrender | 3.00 -> 3.11 | |
| 3% | speedometer Vanilla-ES2015-Babel-Webpack-TodoMVC/Adding100Items/Async | linux1804-64-nightlyasrelease-qr | fission webrender | 3.09 -> 3.18 | |
As author of one of the patches included in that push, we need your help to address this regression.
Details of the alert can be found in the alert summary, including links to graphs and comparisons for each of the affected tests. Please follow our guide to handling regression bugs and let us know your plans within 3 business days, or the patch(es) may be backed out in accordance with our regression policy.
If you need the profiling jobs you can trigger them yourself from treeherder job view or ask a sheriff to do that for you.
You can run these tests on try with ./mach try perf --alert 2746
For more information on performance sheriffing please see our FAQ.
Comment 19•1 year ago
(In reply to Andra Esanu (needinfo me) from comment #18)
(In reply to Pulsebot from comment #13)
Backout by agoloman@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/a592cf6623ea
Backed out 3 changesets for causing Base-toolchains bustage.
Perfherder has detected a browsertime performance change from push
Looks like the perf regression is attributed to the backout, which means the actual patches here really bring a perf improvement.
Comment 20•1 year ago
(In reply to Daniel Holbert [:dholbert] from comment #19)
(In reply to Andra Esanu (needinfo me) from comment #18)
(In reply to Pulsebot from comment #13)
Backout by agoloman@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/a592cf6623ea
Backed out 3 changesets for causing Base-toolchains bustage.
Perfherder has detected a browsertime performance change from push
Looks like the perf regression is attributed to the backout, which means the actual patches here really bring a perf improvement.
These "regressions" from the backout (implying improvements from the actual patches) also look similar to the improvement already noted in comment 17. (Note that although only one test is mentioned there, the actual alert summary shows improvements on 4 of the speedometer tests.)
For actual (minor) regressions that were associated with this, see bug 1926569.
Comment 21•1 year ago
Reproduced the issue with Firefox 132.0a1 (2024-09-27) on Windows 10.
The issue is verified fixed with Firefox 134.0a1 (20241106212017) and Firefox 133.0b4 (20241106091549) on Windows 10.