(In reply to Jean-Yves Avenard [:jya] from comment #13)
> I'm a bit unfamiliar with the mapping of the GPU image to be used later by the wayland code. Does that performs a readback or the mapping is handled like it would with a GL surface handle?

That was the most difficult part of the work. FFmpeg-decoded VASurfaces are GEM objects which can be mapped as dmabuf objects (a fd in user space), so there's no copy there. We use the EXT_image_dma_buf_import extension to map the dmabuf fd as an EGLImage, again without a copy.

The issue here is that the VASurface/GEM to dmabuf mapping isn't exact: the dmabuf object lives until the fd is closed, but the underlying GEM object can be changed. So there's a problem that VASurfaces/GEM are altered behind the same dmabuf fd, because VASurfaces are reused by the va-api hw decoder. That's the reason for the frame holder class here - it keeps the VASurface/GEM mapped to the exact dmabuf object until the dmabuf/EGLImage on top of it has been used by the gecko compositor.

So yes, we do direct rendering from va-api decoded frames in gecko without any copy. I'm not sure what you mean by 'GL surface handle'. We use EGLImage, which is an abstraction over GPU memory and can be mapped as a texture/framebuffer, so it's pretty versatile.

> And how does this work in a multi-process environment?

The VASurfaces/GEM mapped as dmabuf can be shared as a fd or an EGLImage. I use the fd, and SurfaceDescriptorDMABuf is used for the sharing.

> decoding is currently done in the content process (but not for much longer), while compositing is in the GPU process.
> Once bug 1595994 lands, decoding will be done in the RDD (remote data decoder) process.

That's not a problem. But Wayland does not use GPU process.

> I'm unfamiliar with how this done, have done work with latest vaapi version. Last I implemented a vaapi decoder was almost 10 years ago (in mythtv), things have changed since.
> > Ultimately, on windows we had to implement a GPU process and run the decoding there, because HW decoders and drivers have proven to be very crashy.
>
> so to enable this we will have to wait on 1595994

I guess there's no difference from the Wayland POV. It does not matter which process does the decoding, as the results are always shared by SurfaceDescriptorDMABuf.

Thanks.
Bug 1616185 Comment 14 Edit History