This is the macOS version of bug 1623530.
At the moment, each video gets on the screen via a dedicated CALayer with its own IOSurface in screen space. WebRender draws the video into that IOSurface, doing scaling and color conversion in the process, and CoreAnimation then puts the layer on the screen without further scaling.
We would like to skip the WebRender draw (it's an extra copy) and do the scaling and color conversion using CoreAnimation.
In order to do this, we'll need to extend the OS compositor API with a method that lets WebRender say "Add a surface containing this external image, scaled to the following rect" (bug 1623638), and we'll need to have an IOSurface handle for that external image.
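As a rough sketch of what that API extension might look like, here is a hypothetical Rust trait method (all names and types here are illustrative, not WebRender's actual compositor interface):

```rust
// Hypothetical sketch of the "add a surface containing this external
// image, scaled to the following rect" method described above.
// Type and method names are illustrative only.

/// Opaque handle identifying an external image (e.g. a video frame's IOSurface).
#[derive(Clone, Copy, Debug, PartialEq)]
struct ExternalImageId(u64);

/// An integer rectangle in device pixels.
#[derive(Clone, Copy, Debug, PartialEq)]
struct DeviceRect {
    x: i32,
    y: i32,
    width: i32,
    height: i32,
}

/// The OS-compositor interface, extended so WebRender can hand an
/// external image straight to the system compositor. On macOS this
/// would back a CALayer with the image's IOSurface and let
/// CoreAnimation do the scaling and color conversion.
trait Compositor {
    fn add_external_surface(&mut self, image: ExternalImageId, dest_rect: DeviceRect);
}

/// Minimal test double that records the calls it receives.
struct RecordingCompositor {
    surfaces: Vec<(ExternalImageId, DeviceRect)>,
}

impl Compositor for RecordingCompositor {
    fn add_external_surface(&mut self, image: ExternalImageId, dest_rect: DeviceRect) {
        self.surfaces.push((image, dest_rect));
    }
}
```

The key point is that WebRender only passes a handle and a destination rect; it never touches the pixels itself.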
For hardware decoded video and for WebGL, we should already have an IOSurface, unless we accidentally read back the contents into a shmem at some point in the pipeline.
For software decoded video and 2D canvas we currently only have a shmem. As a first step, we can do a CPU-side copy into an IOSurface in the compositor. As a second step, we should move the IOSurface closer to where the data is being generated (e.g. software video decoding) so that we can skip that copy.
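The interim CPU-side copy is mostly a per-row memcpy. Real code would wrap IOSurfaceLock / IOSurfaceGetBaseAddress / IOSurfaceUnlock around it; this sketch models the destination as a plain byte buffer with its own row stride, since the main wrinkle is that IOSurfaces commonly pad rows beyond width times bytes-per-pixel, so the shmem stride and the surface stride can differ:

```rust
// Sketch of the CPU-side copy from a shmem buffer into an
// IOSurface-like destination (locking elided). Both buffers have
// their own row stride, so each row is copied individually.

/// Copy `height` rows of `row_bytes` each, respecting both strides.
fn copy_shmem_to_surface(
    src: &[u8],
    src_stride: usize,
    dst: &mut [u8],
    dst_stride: usize,
    row_bytes: usize,
    height: usize,
) {
    assert!(row_bytes <= src_stride && row_bytes <= dst_stride);
    for row in 0..height {
        let s = &src[row * src_stride..][..row_bytes];
        let d = &mut dst[row * dst_stride..][..row_bytes];
        d.copy_from_slice(s);
    }
}
```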
We'll also need to keep buffer lifetimes in mind. Giving an IOSurface to the window server means that the IOSurface content must not be touched until the window server is done with it. Until now, WR has done a copy during the composite, so the IOSurface could be unlocked right after that composite; with this change it will need to stay locked for longer.
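One way to handle the longer lock lifetime is a small rotating pool of surfaces, where a surface only becomes writable again after the window server releases it. This is a self-contained sketch of that bookkeeping, not actual Gecko code; in real code the release would be driven by a window-server signal such as a transaction-completion callback:

```rust
// Sketch of surface-lifetime bookkeeping: a surface handed to the
// window server stays "in flight" (must not be written) until it is
// explicitly released, at which point it can be reused for a new frame.

use std::collections::VecDeque;

struct SurfacePool {
    free: VecDeque<usize>,      // surfaces safe to write into
    in_flight: VecDeque<usize>, // surfaces the window server may still read
}

impl SurfacePool {
    fn new(size: usize) -> Self {
        SurfacePool {
            free: (0..size).collect(),
            in_flight: VecDeque::new(),
        }
    }

    /// Take a surface for the next frame; None if the window server
    /// still holds every surface (caller must wait or grow the pool).
    fn acquire(&mut self) -> Option<usize> {
        let id = self.free.pop_front()?;
        self.in_flight.push_back(id);
        Some(id)
    }

    /// Called once the window server is done with the oldest frame.
    fn release_oldest(&mut self) {
        if let Some(id) = self.in_flight.pop_front() {
            self.free.push_back(id);
        }
    }
}
```

The pool size trades memory for latency tolerance: with two surfaces the producer stalls as soon as the window server falls one frame behind.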