Closed Bug 1623669 Opened 4 years ago Closed 4 years ago

A lot of picture cache invalidation when the scrollable area isn't the root of the document

Categories

(Core :: Graphics: WebRender, defect, P3)

defect

Tracking


RESOLVED FIXED
Tracking Status
firefox78 --- wontfix

People

(Reporter: nical, Unassigned)

References

(Blocks 3 open bugs)

Details

Attachments

(2 files)

921.32 KB, application/x-zip-compressed
Details
1.08 MB, application/x-zip-compressed
Details

Scrolling with picture caching works extremely well on pages where the whole document scrolls together (for example a wikipedia page). On pages with more elaborate structure, most of the picture cache gets invalidated every frame during scrolling. Typical examples are pages like bugzilla bugs, which contain a header with a large scroll frame below it. The picture cache tiles end up being relative to the header, so almost everything is constantly invalidated during scrolling.

When OS composition is disabled, these pages end up using quite a bit more CPU and GPU during scrolling with picture caching than without, because of the many render target switches, the extra composite pass, reduced batching opportunities, and the invalidation code.
With OS composition enabled we at least don't suffer from the extra composite pass.
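To illustrate why anchoring tiles to the static header causes this, here is a minimal, hypothetical sketch (the `Tile`/`Rect` types are illustrative, not WebRender's actual API): a tile caches descriptors of the primitives it drew last frame, and any change in a primitive's tile-relative rect marks the tile dirty. If tiles are relative to the header, every scrolled primitive shifts in tile space each frame; if tiles followed the scroll root, those offsets would be stable.

```rust
// Simplified, hypothetical model of picture-cache tile invalidation.
// Not WebRender code; names and structure are illustrative only.

#[derive(PartialEq, Clone, Copy, Debug)]
struct Rect {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
}

/// A tile remembers the tile-relative rects of its primitives from
/// the previous frame and compares against the current frame.
struct Tile {
    prev: Vec<Rect>,
    dirty: bool,
}

impl Tile {
    fn update(&mut self, current: &[Rect]) {
        // Any difference in a primitive's tile-relative rect
        // invalidates (dirties) the tile.
        self.dirty = self.prev.as_slice() != current;
        self.prev = current.to_vec();
    }
}

fn main() {
    let mut tile = Tile {
        prev: vec![Rect { x: 0.0, y: 100.0, w: 800.0, h: 50.0 }],
        dirty: false,
    };

    // Tiles anchored to the static header: scrolling the content by 10px
    // shifts the primitive's tile-relative rect, so the tile goes dirty.
    tile.update(&[Rect { x: 0.0, y: 90.0, w: 800.0, h: 50.0 }]);
    assert!(tile.dirty);

    // Tiles anchored to the scroll root: the primitive's offset relative
    // to its tile is unchanged from frame to frame, so the tile stays valid.
    tile.update(&[Rect { x: 0.0, y: 90.0, w: 800.0, h: 50.0 }]);
    assert!(!tile.dirty);
}
```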

Nical - I think you were going to add some example URLs to this bug?

Flags: needinfo?(nical.bugzilla)
Flags: needinfo?(bpeers)

A few sites where I am seeing the particular issue of "tiles not following how the majority of the content moves" causing most of the screen to invalidate during scrolling:

I must say that a month or two ago I was running into a lot more sites that were exhibiting this behavior, nice work!
It used to cause a ton of over-invalidation on many sites with headers or sidebars; I am much less worried about it now.

Flags: needinfo?(nical.bugzilla)
Attached file 1595680.1.out.zip

I captured a Bugzilla page and ran it through tileview. All the content appears to be on a single slice (slice 2): the header, the comments, but also the scrollbar and thumb. So scrolling changes their clip_rect (for example the grey background that slides under the header [1], but also all text and images), and/or changes the primitive count as they enter/leave tiles.

I'm not sure if this is layout not using stacking contexts aggressively enough to help webrender determine slice boundaries -- or if they do and something gets lost during unpacking.

[1]

Content: Descriptor changed for uid 402

    prim_clip_rect changed from 850x148 at (174,512) to 850x119 at (174,512)
    Item: PrimitiveKey { common: PrimKeyCommonData { flags: IS_BACKFACE_VISIBLE, prim_size: SizeKey { w: 1572.0, h: 800.0 } }, kind: Rectangle { color: Value(ColorU { r: 40, g: 41, b: 42, a: 51 }) } }
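The clip-rect change in that log can be reproduced with a small, illustrative sketch (not WebRender code): a primitive clipped against a fixed header shrinks or grows as it slides underneath, so its clipped rect differs between consecutive scroll offsets and the cached descriptor comparison fails. The numbers below are chosen to match the log entry above.

```rust
// Illustrative sketch of why the prim_clip_rect in the tileview log
// changes every frame while scrolling. Not WebRender's actual clipping
// code; the function and values are hypothetical.

#[derive(PartialEq, Clone, Copy, Debug)]
struct Rect {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
}

/// Clip a primitive rect to the region below a fixed header.
fn clip_below_header(prim: Rect, header_bottom: f32) -> Rect {
    let top = prim.y.max(header_bottom);
    let bottom = prim.y + prim.h;
    Rect {
        x: prim.x,
        y: top,
        w: prim.w,
        h: (bottom - top).max(0.0),
    }
}

fn main() {
    let header_bottom = 512.0;

    // The same background rect at two consecutive scroll offsets
    // (scrolled up by 29px between frames).
    let frame_a = clip_below_header(
        Rect { x: 174.0, y: 483.0, w: 850.0, h: 177.0 },
        header_bottom,
    );
    let frame_b = clip_below_header(
        Rect { x: 174.0, y: 454.0, w: 850.0, h: 177.0 },
        header_bottom,
    );

    // 850x148 at (174,512) vs 850x119 at (174,512), as in the log:
    // the clipped rect differs, so the descriptor comparison fails
    // and the tile is invalidated.
    assert_eq!(frame_a, Rect { x: 174.0, y: 512.0, w: 850.0, h: 148.0 });
    assert_eq!(frame_b, Rect { x: 174.0, y: 512.0, w: 850.0, h: 119.0 });
    assert_ne!(frame_a, frame_b);
}
```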

(I'll use this as an opportunity to tweak tileview a bit -- it could be easier to determine what's on which slice).

Flags: needinfo?(bpeers)
Attached file treeherder.out.zip

Likewise for Treeherder -- everything is on slice 2, clip rects and/or prim counts change when scrolling.

Glenn, can you write up the ideas that you have for fixing this?

Flags: needinfo?(gwatson)

There are a couple of possibilities here that I have been pondering:

  1. When we end up with a slice with multiple scroll roots, we can get a reasonably good estimate of the area of each scroll root, by accumulating the bounding rects of the primitive clusters for each scroll root in the slice. We could use this to select the scroll root with the largest area as the picture cache root node. This would mean we only invalidate / scroll the smaller region of the slice. This would probably work well on many pages.

  2. We generally want to avoid creating extra slices in this case because we lose subpixel AA rendering (and have extra GPU memory etc). However, it's probably quite common that pages with this layout don't have any overlap of the clusters. In this case, there's no reason we shouldn't create extra picture cache slices, since it won't affect subpixel AA at all.

We should identify some pages that have this problem and test these ideas against them, to work out which of them is feasible and whether they help.
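Idea 1 above can be sketched as follows. This is a hypothetical illustration, not WebRender's implementation: accumulate a bounding rect per scroll root from the primitive clusters in the slice, then choose the root covering the largest area as the picture cache root, so tiles follow the region that actually scrolls.

```rust
// Hypothetical sketch of idea 1: pick the scroll root with the largest
// accumulated cluster area as the picture cache root node. Types and
// names are illustrative, not WebRender's.
use std::collections::HashMap;

#[derive(Clone, Copy)]
struct Rect {
    x0: f32,
    y0: f32,
    x1: f32,
    y1: f32,
}

fn union(a: Rect, b: Rect) -> Rect {
    Rect {
        x0: a.x0.min(b.x0),
        y0: a.y0.min(b.y0),
        x1: a.x1.max(b.x1),
        y1: a.y1.max(b.y1),
    }
}

fn area(r: Rect) -> f32 {
    (r.x1 - r.x0).max(0.0) * (r.y1 - r.y0).max(0.0)
}

/// clusters: (scroll_root_id, cluster bounding rect).
/// Returns the scroll root whose accumulated bounds cover the most area.
fn pick_scroll_root(clusters: &[(u64, Rect)]) -> Option<u64> {
    let mut bounds: HashMap<u64, Rect> = HashMap::new();
    for &(root, rect) in clusters {
        bounds
            .entry(root)
            .and_modify(|b| *b = union(*b, rect))
            .or_insert(rect);
    }
    bounds
        .into_iter()
        .max_by(|a, b| area(a.1).partial_cmp(&area(b.1)).unwrap())
        .map(|(root, _)| root)
}

fn main() {
    // A small fixed header on scroll root 1, and a large scrolling
    // content area on scroll root 2.
    let clusters = [
        (1, Rect { x0: 0.0, y0: 0.0, x1: 1600.0, y1: 120.0 }),
        (2, Rect { x0: 0.0, y0: 120.0, x1: 1600.0, y1: 900.0 }),
        (2, Rect { x0: 0.0, y0: 900.0, x1: 1600.0, y1: 4000.0 }),
    ];
    // Root 2 accumulates by far the largest area, so tiles would follow
    // it and only the small header region would invalidate when scrolling.
    assert_eq!(pick_scroll_root(&clusters), Some(2));
}
```

A real implementation would also need a tie-breaking rule and a fallback when no scroll root dominates, but the area heuristic captures the common header-plus-content layout described in this bug.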

Flags: needinfo?(gwatson)

Because this bug's Severity has not been changed from the default since it was filed, and its Priority is P3 (Backlog), indicating it has been triaged, the bug's Severity is being updated to S3 (normal).

Severity: normal → S3

I tested the pages from comment 2 again and we are doing much better now. Enough to consider this fixed.

Status: NEW → RESOLVED
Closed: 4 years ago
Resolution: --- → FIXED
