Use VA-API encoder with WebRTC
Categories
(Core :: WebRTC: Audio/Video, enhancement, P4)
People
(Reporter: stransky, Unassigned)
References
(Blocks 1 open bug)
Details
Let's use the VA-API encoder with WebRTC. That involves implementing kNative WebRTC dmabuf surfaces and encoding frames directly in GPU memory with the ffmpeg encoder, similar to Apple/Android.
Comment 2•4 years ago
For the record, I'm currently working on bug 1724900 (using Pipewire and xdg-portals for camera access) and by default Pipewire appears to prefer dmabuf as the buffer sharing mechanism there as well (in YUY2 format).
So once we have that, we can send both the screen sharing and camera streams over to VA-API and encode them there. And we should be able to use the camera input in WebRender directly.
Comment 3•4 years ago
But only if widget.use-xdg-desktop-portal is set to the non-default true, I suppose? Firefox uses no portals by default.
Comment 4•4 years ago (Reporter)
Getting a dmabuf from the camera is not difficult, it can be done here:
The tricky part is to route the dmabuf buffer through WebRTC.
Comment 5•4 years ago (Reporter)
I think the path is to implement a VideoFrameBuffer kNative type based on dmabuf:
and then pass it in a VideoFrame.
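
A minimal sketch of what that could look like against the upstream WebRTC API (the DmabufVideoFrameBuffer class and its Fd() accessor are hypothetical names, not existing code):

#include "api/video/video_frame.h"
#include "api/video/video_frame_buffer.h"

// Hypothetical dmabuf-backed buffer advertised as Type::kNative.
class DmabufVideoFrameBuffer : public webrtc::VideoFrameBuffer {
 public:
  DmabufVideoFrameBuffer(int dmabuf_fd, int width, int height)
      : fd_(dmabuf_fd), width_(width), height_(height) {}

  Type type() const override { return Type::kNative; }
  int width() const override { return width_; }
  int height() const override { return height_; }

  // Mandatory SW fallback for consumers that can't handle native buffers:
  // map the dmabuf and convert to I420 (implementation omitted here).
  rtc::scoped_refptr<webrtc::I420BufferInterface> ToI420() override;

  // Hypothetical accessor a dmabuf-aware encoder would use.
  int Fd() const { return fd_; }

 private:
  const int fd_;
  const int width_;
  const int height_;
};

The buffer would then be wrapped via webrtc::VideoFrame::Builder().set_video_frame_buffer(...) and sent down the usual capture path; only dmabuf-aware sinks (such as a VA-API encoder) would touch Fd(), everything else falls back to ToI420().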
Comment 6•4 years ago (Reporter)
(In reply to Martin Stránský [:stransky] (ni? me) from comment #5)
> I think the path is to implement a VideoFrameBuffer kNative type based on dmabuf:
Filed as Bug 1729167.
Comment 7•4 years ago (Reporter)
For debugging we can create dmabuf surfaces and upload the captured frames there (as we already do for SW decoded frames).
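
A rough sketch of that upload path using GBM (the helper name, the render node path and the single-plane R8 format are assumptions for illustration; error handling and buffer cleanup are omitted):

#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <gbm.h>

// Allocate a linear dmabuf via GBM and copy one captured 8-bit plane
// (e.g. the Y plane) into it; real code would handle all YUV planes.
int UploadPlaneToDmabuf(const uint8_t* src, uint32_t src_stride,
                        uint32_t width, uint32_t height) {
  int drm_fd = open("/dev/dri/renderD128", O_RDWR);  // assumed render node
  gbm_device* gbm = gbm_create_device(drm_fd);
  gbm_bo* bo = gbm_bo_create(gbm, width, height, GBM_FORMAT_R8,
                             GBM_BO_USE_LINEAR);

  uint32_t dst_stride = 0;
  void* map_data = nullptr;
  auto* dst = static_cast<uint8_t*>(gbm_bo_map(
      bo, 0, 0, width, height, GBM_BO_TRANSFER_WRITE, &dst_stride, &map_data));
  for (uint32_t row = 0; row < height; row++) {
    memcpy(dst + row * dst_stride, src + row * src_stride, width);
  }
  gbm_bo_unmap(bo, map_data);

  // The returned fd can now be passed around as a dmabuf surface.
  return gbm_bo_get_fd(bo);
}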
Comment 8•1 year ago (Reporter)
With the latest work on FFmpegVideoEncoder it may be fairly easy to implement a HW encoder, as the hard work is already done.
With media.webrtc.platformencoder set to true, WebRTC routes encoding to the FFmpegVideoEncoder module.
We may need to implement VA-API encoding there; a simple example is at:
https://ffmpeg.org/doxygen/trunk/doc_2examples_2vaapi__encode_8c_source.html
(we may need something better configured, but it's enough as a starting point).
We may also consider uploading dmabuf frames directly, doing all YUV operations (scaling etc.) on the GPU, and then supplying the frame for encoding.
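
A trimmed-down setup sketch in the spirit of that vaapi_encode.c example (the function name and the render node path are placeholders; error handling is omitted):

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
}

// Create an h264_vaapi encoder that consumes frames in GPU memory.
AVCodecContext* CreateVaapiH264Encoder(int width, int height) {
  // Open the VAAPI device (render node path is an assumption).
  AVBufferRef* device = nullptr;
  av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VAAPI,
                         "/dev/dri/renderD128", nullptr, 0);

  const AVCodec* codec = avcodec_find_encoder_by_name("h264_vaapi");
  AVCodecContext* ctx = avcodec_alloc_context3(codec);
  ctx->width = width;
  ctx->height = height;
  ctx->time_base = AVRational{1, 30};
  ctx->pix_fmt = AV_PIX_FMT_VAAPI;  // input frames live in GPU memory

  // Describe the pool of VAAPI surfaces the encoder will consume.
  AVBufferRef* frames_ref = av_hwframe_ctx_alloc(device);
  auto* frames = reinterpret_cast<AVHWFramesContext*>(frames_ref->data);
  frames->format = AV_PIX_FMT_VAAPI;
  frames->sw_format = AV_PIX_FMT_NV12;
  frames->width = width;
  frames->height = height;
  av_hwframe_ctx_init(frames_ref);
  ctx->hw_frames_ctx = av_buffer_ref(frames_ref);

  avcodec_open2(ctx, codec, nullptr);
  return ctx;
}

SW frames would be uploaded with av_hwframe_transfer_data() before avcodec_send_frame(); for the direct dmabuf path, FFmpeg can import DRM PRIME frames (AV_PIX_FMT_DRM_PRIME) and map them to VAAPI surfaces with av_hwframe_map(), which would avoid the CPU copy entirely.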