Closed Bug 1546313 Opened 5 years ago Closed 5 years ago

YouTube with Polymer v3.2.0 is loading slower than with Polymer v1.x

Categories

(Core :: JavaScript Engine, defect, P1)


Tracking


RESOLVED FIXED
Performance Impact high
Tracking Status
firefox-esr60 --- unaffected
firefox66 --- unaffected
firefox67 --- unaffected
firefox68 --- disabled

People

(Reporter: tgnff242, Unassigned)

References

(Regression)

Details

(Keywords: perf:pageload, regression)

User Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Firefox/68.0

Steps to reproduce:

  1. Open Firefox with a new and clean profile.
  2. Open a Private Window and load YouTube. Repeat until Polymer.version (type it in the console) reports 3.2.0.
  3. Reload and note the time needed to load.
  4. Compare that time with the time under Polymer.version 1.x.

Actual results:

The YouTube page takes about three times longer to load with Polymer 3.2.0 than with Polymer 1.x.

Expected results:

It should be, if not faster, at least as fast.

This is a regression on Firefox's side. Bisection points to Bug 1504947.

Mozregression:
27:08.26 INFO: No more inbound revisions, bisection finished.
27:08.26 INFO: Last good revision: e9408a26af674fbf535d853ede4ab629ac652c05
27:08.26 INFO: First bad revision: 8eaa269b362fd429c04779eaff8e2e7cd0075ad7
27:08.26 INFO: Pushlog:
https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=e9408a26af674fbf535d853ede4ab629ac652c05&tochange=8eaa269b362fd429c04779eaff8e2e7cd0075ad7

Has Regression Range: --- → yes
Has STR: --- → yes
Regressed by: 1504947
Flags: needinfo?(jwalden)
Whiteboard: [qf]

Seems like an odd cause if the bisection is correct; we'd really hope that dropping the old polyfill would have sped things up, not slowed them down. This is a high-visibility issue, of course. We could also use a confirmation, and profiles of the load with and without the revision.

Flags: needinfo?(sdetar)

Jeff, see Randell's comments.

Flags: needinfo?(sdetar)

I re-ran the bisection and reached the same result as before. However, it now seems that a simple refresh (F5) might not necessarily be worse; a hard refresh (Ctrl+Shift+R) or opening a video page always exhibits the bad behavior.

As requested, I captured performance profiles under Polymer 3.2.0. Both cover roughly 5 seconds before a reload through about 5 seconds after the page is fully loaded and rendered.
Last good revision: https://perfht.ml/2GU28Ga
First bad revision: https://perfht.ml/2GVbhOW

I can't seem to get the Polymer 1.11.3 version right now. Tell me if it's needed and I can try again later.

Whiteboard: [qf] → [qf:p1:pageload]

This smells a little bit like LSNG: the profile shows a long gap after starting to load the page. However, that landed, IIRC, on March 8th.

Could you try setting dom.storage.next_gen = false, restarting the browser, and retesting? Thanks!

Flags: needinfo?(tgnff242)

This is fairly clearly due to an algorithmic change in the JS engine: computing column numbers as counts of code points, not code units. The former requires iterating through the code units from the start of each line to compute a count; the latter was just a subtraction and potentially a division by 2. There's nothing in particular to investigate here -- except whether some characteristic of the relevant source text is worth considering as a heuristic when optimizing this.
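
A minimal sketch of the two definitions described above, purely for illustration: this is not SpiderMonkey's actual column-computation code, and the helper names and the char16_t-only source assumption are hypothetical.

// Illustrative only; not the engine's real implementation.
#include <cstddef>

// Pre-change behavior: column = count of UTF-16 code units since the line
// start. Constant time: a subtraction (and, if offsets are byte offsets into
// two-byte units, potentially a division by 2).
size_t ColumnInCodeUnits(size_t lineStartOffset, size_t targetOffset) {
  return targetOffset - lineStartOffset;
}

// Post-change behavior: column = count of code points since the line start.
// Each query walks the code units of the line, skipping trailing (low)
// surrogates, so every column computation is O(line length).
size_t ColumnInCodePoints(const char16_t* src, size_t lineStartOffset,
                          size_t targetOffset) {
  size_t column = 0;
  for (size_t i = lineStartOffset; i < targetOffset; i++) {
    char16_t unit = src[i];
    if (unit >= 0xDC00 && unit <= 0xDFFF) {
      continue;  // second half of a surrogate pair, already counted
    }
    column++;
  }
  return column;
}

On minified bundles with very long lines, that per-query walk is presumably what makes the column computation show up so heavily in the captured profiles.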

This is probably a duplicate of all the other open regressions from bug 1504947.

Flags: needinfo?(tgnff242)
Flags: needinfo?(jwalden)

BTW, my understanding from YouTube is that they're now at 100% Polymer 3.2, so doing A/B comparisons against 1.* won't be possible.

My high-end Linux profile of this case doesn't show the long gap in the middle of loading, and I don't see any significant CPU use in unusual threads (using perf, since I can't get the Gecko Profiler to work right now in nightly on Mac or Linux with private windows, even though it's greenlit for private windows). The gap makes me think whatever is taking the time isn't on MainThread. Jeff, does the impact of bug 1504947 happen on other threads? If so, we can look for them in profiles.

Flags: needinfo?(jwalden)

I believe the DOM script loader code compiles scripts off the main thread, so yes, you'd see effects there.

Flags: needinfo?(jwalden)

If we can get a profile (I can't capture Gecko Profiles with Private Windows open for some reason), we'd want to add "HTML" and "Quota" to the list of threads (they're substring-matched).
Also: OffMainThread parsing is blocked by major GCs... that's another possibility. The second (bad) profile has a GCMajor, but not in the gap; it's in the area after the gap (when JS is running). The first (good) profile has no GCMajor during the load, so that doesn't really explain the multi-second gap.

(In reply to Randell Jesup [:jesup] (needinfo me) from comment #9)

> we'd want to add "HTML" and "Quota" to the list of threads (they're substring-matched).

Here you go:
Last good revision: https://perfht.ml/2V7PBIb
First bad revision: https://perfht.ml/2Va7XZ0

Also, setting dom.storage.next_gen to false didn't have any positive or negative effect.

Priority: -- → P1

(In reply to tgn-ff from comment #0)

> https://hg.mozilla.org/integration/mozilla-inbound/pushloghtml?fromchange=e9408a26af674fbf535d853ede4ab629ac652c05&tochange=8eaa269b362fd429c04779eaff8e2e7cd0075ad7

Before I discovered this bug report, I bisected this YouTube slowdown with mozregression (on Windows 10) and independently landed on this exact same regression range pointing to bug 1504947.

Depends on: 1549758

Our recorded YouTube page should be on the old version; if you've run Raptor, it should be in the ../testing/mozproxy directory. It might be of some use.

The backout is on Nightly. If there are still concerns here for FF68, we should make sure they are identified.

As far as the reported regression goes, everything seems good on my end.

Thanks for verifying! :)

Julien, given comment 14, should we untrack for 68 (and presumably keep this open while Waldo finishes the work in comment 6)?

Flags: needinfo?(jcristau)

Fixed by the reversion in bug 1549758, which makes the columnAt function in the profile in comment 10 contain purely constant-time operations. Bug 1551916 will have patches that address the concern in the future.

Status: UNCONFIRMED → RESOLVED
Closed: 5 years ago
Resolution: --- → FIXED
Flags: needinfo?(jcristau)
Performance Impact: --- → P1
Keywords: perf:pageload
Whiteboard: [qf:p1:pageload]