Use cached surfaces when zooming with SWGL on Android
Categories: Core :: Graphics: WebRender, enhancement
People: Reporter: jrmuizel, Assigned: jrmuizel
References: Blocks 2 open bugs
Attachments: 2 obsolete files
This should give us much better zoom performance.
Updated•4 years ago
Comment 1•4 years ago
I don't think it's super complex, although I've only briefly thought through how it would work so far, without delving into any details.
The trickiest parts to get right are likely to be:
- Ensuring we can update a small number of tiles per frame and/or asynchronously. With SWGL, it'd probably be relatively easy to plumb through an interface to rasterize the tiles on a background thread (if that's not already supported). For hardware, we'd probably want to do something like rasterize one tile per frame, and then swap in the new set of tiles when they are all complete.
- Making sure GPU memory doesn't blow up too much.
I can't think of anything else especially difficult about it right now; I'll ponder it a bit more. Even if it's not technically difficult, it's still probably a fair amount of code to get everything working correctly (for example, ensuring we can always safely recomposite tiles from an async zoom without needing to update the GPU / texture cache, which has lifetimes tied to the frame currently being built).
Having said that, I've noticed on a fairly high-end Android device that even pinch-zooming relatively simple pages can feel quite janky, even though drawing that page and others on that device generally holds 60 fps easily. An example of such a page is [1]. So it would be good to do some profiling before we dive too far into this, and see if there are any obvious bugs or improvements we can make (gfx related or not) to the current pinch-zoom performance.
Comment 2•4 years ago
Probably not useful in its current hacky form, but it's a start toward being able to suspend and reuse picture cache tiles.
Comment 3•4 years ago
I've been thinking a bit more about this, and I think the hacky WIP patch above is not the right way to go. It would be fragile code, with a lot of edge cases to handle correctly.
But I've been thinking about a slightly different approach that I think would give a similar result while being easier to implement, without a heap of complex edge cases.
- Assume we can handle the scrolling performance not by caching stale tiles, but by tweaking the current tile size and display ports we retain (to be proven, but seems likely that we could make this good enough).
- Introduce an API or renderer option that says pinch-zoom should be implemented by having the compositor step apply the scale + offset, rather than having them baked in to the picture cache tile coordinates.
- The normal picture caching logic remains the same - if a new tile is needed due to zoom / panning etc, it will still be drawn. However, changing the pinch-zoom factors won't invalidate tiles, as they'll be in a "compositor surface" coordinate space that doesn't include the zoom transform.
- We rely on the existing support for transforms in the compositor (used for compositor surfaces) to apply the zoom transform to the existing tiles (in this performance mode) or it gets baked in to the picture cache tile transforms (existing quality mode).
This would mean:
- When a pinch-zoom begins, we reuse existing rasterized tiles and start magnifying them via the compositor surface transform support.
- We only rasterize new tiles, or discard tiles, based on the existing logic (they won't invalidate if in performance mode, as the zoom transform is extracted and passed to the compositor).
- No extra complexity from suspending dirty rect updates, primitive dependency checks, etc.
- We will end up rasterizing tiles during pinch-zoom if the content inside the tile is changing, but otherwise we'll reuse tiles implicitly.
- At the end of a pinch-zoom, we bake the compositor surface transform back in to the prim deps, which will implicitly invalidate those tiles and re-render them at the correct high-quality resolution for the final zoom factors.
There's a fair amount of work to implement this (perhaps 1-2 weeks?) but conceptually it seems a lot simpler. The majority of the work is likely to be quite mechanical (e.g. refactoring all the picture cache tile coordinates to be in compositor surface space, rather than device space).
Jeff, does that sound like a reasonable plan to you?
Comment 5•4 years ago
Although not complete, there is enough of this implemented now that it could be built and tested on a Mali 400 device, to see what effect it has on zoom performance.
If you want to try it, you'll need to:
- Get latest m-c.
- Apply https://phabricator.services.mozilla.com/D116796 (this is on autoland, should be in m-c soon).
- Apply https://phabricator.services.mozilla.com/D117378 (approved but not landed yet).
- Apply the WIP patch here.
Toggling the ENABLE_CACHE_ZOOM_MODE variable in the last patch determines whether the cached zoom mode is enabled.
The remaining work is relatively simple: fixing up any filtering artifacts, adding support for enabling this dynamically, re-rasterizing tiles at full resolution when the zoom completes, etc. There might still be some stray invalidations occurring, which might hurt zoom performance as well.
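For readers trying the patch stack: the comment only tells us that ENABLE_CACHE_ZOOM_MODE is a variable toggled in the WIP patch, so the sketch below is a guess at the kind of branch such a flag might guard, not the actual patch code.

```rust
// Hypothetical stand-in for the toggle mentioned above; in the real patch
// this is a variable flipped by hand before building.
const ENABLE_CACHE_ZOOM_MODE: bool = true;

/// Illustrative only: pick the scale at which tiles are rasterized.
/// In cached-zoom (performance) mode, tiles keep their original resolution
/// and the compositor applies the zoom when drawing them; otherwise the
/// zoom is baked into rasterization, invalidating tiles as it changes.
fn effective_tile_scale(zoom: f32) -> f32 {
    if ENABLE_CACHE_ZOOM_MODE {
        1.0
    } else {
        zoom
    }
}

fn main() {
    // With the flag on, a 2x pinch-zoom reuses tiles rasterized at 1x.
    assert_eq!(effective_tile_scale(2.0), 1.0);
    println!("cached zoom mode: {}", ENABLE_CACHE_ZOOM_MODE);
}
```

This also shows why filtering artifacts are listed as remaining work: magnifying 1x tiles on the compositor trades sharpness for zoom smoothness until the final re-rasterization.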
Comment 6•4 years ago
The parts of this that are needed for initial performance testing are all landed.
There will be further work after some initial performance testing on real devices (either further performance / bug fixes, or enabling on certain devices), so I'll move this to Jeff for now, rather than closing the bug.