Use VA-API encoder with WebRTC
Categories: Core :: WebRTC: Audio/Video, enhancement, P4
People: Reporter: stransky, Unassigned
References: Depends on 2 open bugs, Blocks 1 open bug

Details
Let's use the VA-API encoder with WebRTC. That involves implementing kNative WebRTC dmabuf surfaces and encoding frames directly in GPU memory with the ffmpeg encoder, similar to Apple/Android.
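Encoding "directly in GPU memory" with ffmpeg would presumably go through its VA-API hwcontext. A hedged sketch of what the encoder setup might look like (not runnable as-is: it needs libavcodec and a VA-API device; the render-node path, NV12 `sw_format`, and pool size are assumptions, and error handling is omitted):

```cpp
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
}

// Sketch: create an h264_vaapi encoder whose input frames live in GPU memory.
static AVCodecContext* create_vaapi_encoder(int width, int height) {
  AVBufferRef* device = nullptr;
  // Open the VA-API device (the render-node path is an assumption; it varies).
  if (av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VAAPI,
                             "/dev/dri/renderD128", nullptr, 0) < 0)
    return nullptr;

  AVBufferRef* frames = av_hwframe_ctx_alloc(device);
  auto* fctx = reinterpret_cast<AVHWFramesContext*>(frames->data);
  fctx->format = AV_PIX_FMT_VAAPI;    // frames live in GPU memory
  fctx->sw_format = AV_PIX_FMT_NV12;  // assumed underlying surface format
  fctx->width = width;
  fctx->height = height;
  fctx->initial_pool_size = 8;        // assumed pool size
  av_hwframe_ctx_init(frames);

  const AVCodec* codec = avcodec_find_encoder_by_name("h264_vaapi");
  AVCodecContext* ctx = avcodec_alloc_context3(codec);
  ctx->width = width;
  ctx->height = height;
  ctx->pix_fmt = AV_PIX_FMT_VAAPI;
  ctx->time_base = {1, 30};
  ctx->hw_frames_ctx = av_buffer_ref(frames);
  avcodec_open2(ctx, codec, nullptr);
  return ctx;
}
```

A camera dmabuf would presumably enter this pipeline as an `AV_PIX_FMT_DRM_PRIME` frame mapped into the VA-API frames context with `av_hwframe_map()`, avoiding a CPU copy.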
Updated • 3 years ago
Updated • 2 years ago

Comment 2 • 1 year ago
For the record, I'm currently working on bug 1724900 (using PipeWire and xdg-desktop-portal for camera access), and by default PipeWire appears to prefer dmabuf as the buffer-sharing mechanism there as well (in YUV2 format).
So once we have that, we can send both the screen-sharing and camera streams over to VA-API and encode them there. And we should be able to use the camera input in WebRender directly.
But only if widget.use-xdg-desktop-portal is set to the non-default true, I suppose? Firefox uses no portals by default.
Reporter • Comment 4 • 1 year ago
Getting a dmabuf from the camera is not difficult; it can be done here:
The tricky part is routing the dmabuf buffer through WebRTC.
Reporter • Comment 5 • 1 year ago
I think the path is to implement a VideoFrameBuffer kNative type based on dmabuf:
and then pass it in a VideoFrame.
Reporter • Comment 6 • 1 year ago
(In reply to Martin Stránský [:stransky] (ni? me) from comment #5)
> I think the path is to implement VideoFrameBuffer kNative type based on dmabuf:

Filed as Bug 1729167.
Reporter • Comment 7 • 1 year ago
For debugging we can create dmabuf surfaces and upload captured frames there (as we do for SW-decoded frames).