Open Bug 1558133 Opened 5 years ago Updated 2 years ago

Reducing overdraw with huge semi-transparent images

Categories

(Core :: Graphics: WebRender, enhancement, P3)



People

(Reporter: nical, Unassigned)

References

(Blocks 1 open bug)


Attachments

(1 file)

Silly idea of the day.

On some websites (for example www.gitbook.com) there are huge images where most of the pixels are transparent.

These images cause a ton of overdraw, which really hurts performance on high-resolution screens. It's not very hard to detect that an image is going to be very expensive: if its resolution is very high and it has an alpha channel, there is a decent chance that it will cause grief.
What if, in these cases, we could determine that the image has very large transparent areas? Ideally we'd be able to ask the image decoder to give us a coarse mask of NxN tiles saying whether each tile is fully opaque, partially transparent or fully transparent. Unfortunately PNG, which is the format usually used by the offending images, isn't encoded in tiles IIRC, so we'd have to figure that out on our own from the decoded image.
This could be done with compute shaders or fragment shaders that down-scale the image, somewhat like we render blurs but simpler, since we only need to know whether each tile is fully opaque, fully transparent or in-between (so in practice it should be quite a bit faster than a blur).
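A minimal CPU-side sketch of what that coarse mask could look like, assuming a tightly packed RGBA8 buffer; TileCoverage, classify_tiles and the 64x64 tile size are illustrative names, not existing WebRender code:

/// Hypothetical sketch: classify NxN tiles of a decoded RGBA8 image as
/// fully transparent, fully opaque, or partially transparent.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TileCoverage {
    Transparent,
    Opaque,
    Partial,
}

const TILE_SIZE: usize = 64;

fn classify_tiles(rgba: &[u8], width: usize, height: usize) -> Vec<TileCoverage> {
    let tiles_x = (width + TILE_SIZE - 1) / TILE_SIZE;
    let tiles_y = (height + TILE_SIZE - 1) / TILE_SIZE;
    let mut mask = Vec::with_capacity(tiles_x * tiles_y);

    for ty in 0..tiles_y {
        for tx in 0..tiles_x {
            let mut all_opaque = true;
            let mut all_transparent = true;

            let y_end = ((ty + 1) * TILE_SIZE).min(height);
            let x_end = ((tx + 1) * TILE_SIZE).min(width);
            'tile: for y in ty * TILE_SIZE..y_end {
                for x in tx * TILE_SIZE..x_end {
                    // Assumes a tightly packed RGBA8 buffer (stride == width * 4).
                    let alpha = rgba[(y * width + x) * 4 + 3];
                    all_opaque &= alpha == 255;
                    all_transparent &= alpha == 0;
                    // Once the tile is known to be mixed, stop scanning it.
                    if !all_opaque && !all_transparent {
                        break 'tile;
                    }
                }
            }

            mask.push(if all_transparent {
                TileCoverage::Transparent
            } else if all_opaque {
                TileCoverage::Opaque
            } else {
                TileCoverage::Partial
            });
        }
    }
    mask
}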

With that information we could render the image as a grid of quads, let the vertex shader query whether its tile is fully transparent, and bail out when that is the case by generating a 0x0 quad. We could even move the opaque tiles of the image to the opaque pass using the same filtering approach.
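Reusing the TileCoverage type from the sketch above, a rough CPU-side illustration of that culling and pass-assignment decision (again hypothetical names, not the actual WebRender batching code):

/// Illustrative sketch: given the coarse tile mask, emit per-tile quads,
/// drop fully transparent tiles, and route the rest to the right pass.
#[derive(Debug)]
struct TileQuad {
    tile_x: usize,
    tile_y: usize,
}

struct ImageBatches {
    opaque_pass: Vec<TileQuad>,
    alpha_pass: Vec<TileQuad>,
}

fn build_image_batches(mask: &[TileCoverage], tiles_x: usize) -> ImageBatches {
    let mut batches = ImageBatches {
        opaque_pass: Vec::new(),
        alpha_pass: Vec::new(),
    };
    for (i, coverage) in mask.iter().enumerate() {
        let quad = TileQuad { tile_x: i % tiles_x, tile_y: i / tiles_x };
        match coverage {
            // Fully transparent tiles contribute nothing: skipping them here is
            // the CPU-side equivalent of the vertex shader emitting a 0x0 quad.
            TileCoverage::Transparent => {}
            TileCoverage::Opaque => batches.opaque_pass.push(quad),
            TileCoverage::Partial => batches.alpha_pass.push(quad),
        }
    }
    batches
}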

It may sound like a lot of effort for a specific case, but I think that large semi-transparent images killing perf because of overdraw aren't that uncommon, and when that happens there isn't much else we can do about it. This optimization could kick in only for very large semi-transparent images, and the coarse transparency mask only needs to be computed once.
The same optimization could extend to blob images, except there we'd probably be able to generate the mask on the CPU by looking at the drawing commands.
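A hedged sketch of how that could look for blob images: instead of scanning pixels, walk the recorded drawing commands and conservatively mark every tile touched by a command's bounds as potentially non-transparent. DrawCommand and DeviceRect are made-up stand-ins here, not the real blob recording types.

#[derive(Clone, Copy)]
struct DeviceRect {
    x0: i32,
    y0: i32,
    x1: i32,
    y1: i32,
}

struct DrawCommand {
    bounds: DeviceRect,
}

fn blob_tile_mask(
    commands: &[DrawCommand],
    tiles_x: usize,
    tiles_y: usize,
    tile_size: i32,
) -> Vec<bool> {
    // true == the tile may contain non-transparent pixels.
    let mut touched = vec![false; tiles_x * tiles_y];
    for cmd in commands {
        // Conservative tile range covered by this command's bounding rect.
        let tx0 = (cmd.bounds.x0 / tile_size).clamp(0, tiles_x as i32) as usize;
        let ty0 = (cmd.bounds.y0 / tile_size).clamp(0, tiles_y as i32) as usize;
        let tx1 = ((cmd.bounds.x1 + tile_size - 1) / tile_size).clamp(0, tiles_x as i32) as usize;
        let ty1 = ((cmd.bounds.y1 + tile_size - 1) / tile_size).clamp(0, tiles_y as i32) as usize;
        for ty in ty0..ty1 {
            for tx in tx0..tx1 {
                touched[ty * tiles_x + tx] = true;
            }
        }
    }
    touched
}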

Attached image wr-background2.png

Attaching one of the images in question. A simpler optimization (compared to tiling the image) could be obtaining a bounding box of the opaque texels in the image. It would not need any batching changes; it would just affect the bounding box of the image primitive.
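A sketch of that simpler alternative, again assuming a tightly packed RGBA8 buffer; opaque_bounding_rect is a hypothetical helper, not existing code:

/// Compute the bounding rect of the non-transparent texels of a decoded
/// RGBA8 image, so the image primitive's local rect could be shrunk to it.
fn opaque_bounding_rect(
    rgba: &[u8],
    width: usize,
    height: usize,
) -> Option<(usize, usize, usize, usize)> { // (x0, y0, x1, y1), exclusive max
    let (mut x0, mut y0) = (width, height);
    let (mut x1, mut y1) = (0, 0);
    let mut found = false;

    for y in 0..height {
        for x in 0..width {
            // Any texel with non-zero alpha extends the rect.
            if rgba[(y * width + x) * 4 + 3] != 0 {
                x0 = x0.min(x);
                y0 = y0.min(y);
                x1 = x1.max(x + 1);
                y1 = y1.max(y + 1);
                found = true;
            }
        }
    }
    if found { Some((x0, y0, x1, y1)) } else { None }
}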

I suspect that it would be much cheaper to generate the coarse tile mask (with, say, 64x64 tiles) than to compute the bounding rect of the non-transparent area, though. The former is a very local problem which is as simple as down-scaling; on the other hand, I am not sure how to generate a bounding rect without sequentially going over large parts of the image.
There are also pages with images that have large transparent areas where the bounding rect of the non-transparent area covers the whole image (like a big triangle).

FWIW, I tried adding a discard() as well as disabling scissored clears; both produced higher GPU times (9ms versus about 8ms).

Blocks: wr-overdraw
No longer blocks: wr-perf
Severity: normal → S3
