Bug 1658900 (Open) - Opened 1 year ago - Updated 14 days ago

Use VA-API encoder with WebRTC

Categories

(Core :: WebRTC: Audio/Video, enhancement, P4)

People

(Reporter: stransky, Unassigned)

References

(Depends on 2 open bugs, Blocks 1 open bug)

Details

Let's use the VA-API encoder with WebRTC. That involves implementing kNative WebRTC dmabuf surfaces and encoding frames directly in GPU memory with the ffmpeg encoder, similar to what Apple/Android do.
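
For reference, a rough sketch of what the GPU-side encode could look like with FFmpeg's VA-API H.264 encoder. The device path, codec name (h264_vaapi), pool size and time base below are assumptions for illustration only, and error handling is omitted:

```cpp
// Rough sketch (not actual tree code): open FFmpeg's VA-API H.264 encoder with
// a hw frames context so the frames never leave GPU memory.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>
}

static AVCodecContext* OpenVaapiEncoder(int width, int height) {
  AVBufferRef* device = nullptr;
  // "/dev/dri/renderD128" is just an illustrative render node.
  av_hwdevice_ctx_create(&device, AV_HWDEVICE_TYPE_VAAPI,
                         "/dev/dri/renderD128", nullptr, 0);

  AVBufferRef* frames = av_hwframe_ctx_alloc(device);
  auto* fc = reinterpret_cast<AVHWFramesContext*>(frames->data);
  fc->format = AV_PIX_FMT_VAAPI;    // frames live on the GPU
  fc->sw_format = AV_PIX_FMT_NV12;  // underlying surface layout
  fc->width = width;
  fc->height = height;
  fc->initial_pool_size = 8;
  av_hwframe_ctx_init(frames);

  const AVCodec* codec = avcodec_find_encoder_by_name("h264_vaapi");
  AVCodecContext* ctx = avcodec_alloc_context3(codec);
  ctx->width = width;
  ctx->height = height;
  ctx->time_base = {1, 30};
  ctx->pix_fmt = AV_PIX_FMT_VAAPI;
  ctx->hw_frames_ctx = av_buffer_ref(frames);
  avcodec_open2(ctx, codec, nullptr);
  return ctx;
}

// A captured dmabuf would arrive as an AV_PIX_FMT_DRM_PRIME frame and would be
// mapped into this frames context with av_hwframe_map() before being fed to
// avcodec_send_frame()/avcodec_receive_packet().
```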

The bug title says decoder and the comment says encoder ._.

Summary: Use VA-API decoder with WebRTC → Use VA-API encoder with WebRTC
Severity: -- → S4
Priority: -- → P4
Depends on: 1724900

For the record, I'm currently working on bug 1724900 (using PipeWire and xdg-desktop-portal for camera access), and by default PipeWire appears to prefer dmabuf as the buffer-sharing mechanism there as well (in YUV2 format).
So once we have that, we can send both the screen-sharing and camera streams over to VA-API and encode them there. And we should be able to use the camera input in Webrender directly.
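
As a sketch of the negotiation side (assuming the pw_stream pointer is passed as user data; buffer counts are illustrative, not what we'd ship), the consumer can advertise dmabuf support from its param_changed callback so PipeWire can pick SPA_DATA_DmaBuf:

```cpp
// Minimal sketch (not actual tree code): advertise dmabuf-capable buffers in
// the pw_stream param_changed callback so PipeWire can negotiate
// SPA_DATA_DmaBuf for the camera/screen stream.
#include <spa/param/video/format-utils.h>
#include <pipewire/pipewire.h>

static void on_param_changed(void* data, uint32_t id,
                             const struct spa_pod* param) {
  auto* stream = static_cast<struct pw_stream*>(data);
  if (id != SPA_PARAM_Format || !param)
    return;

  uint8_t buffer[1024];
  struct spa_pod_builder b;
  spa_pod_builder_init(&b, buffer, sizeof(buffer));

  const struct spa_pod* params[1];
  params[0] = static_cast<const struct spa_pod*>(spa_pod_builder_add_object(&b,
      SPA_TYPE_OBJECT_ParamBuffers, SPA_PARAM_Buffers,
      SPA_PARAM_BUFFERS_buffers,  SPA_POD_CHOICE_RANGE_Int(8, 2, 16),
      SPA_PARAM_BUFFERS_dataType, SPA_POD_CHOICE_FLAGS_Int(
          (1 << SPA_DATA_DmaBuf) | (1 << SPA_DATA_MemFd) | (1 << SPA_DATA_MemPtr))));

  pw_stream_update_params(stream, params, 1);
}
```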

But only if widget.use-xdg-desktop-portal is set to the non-default true, I suppose? Firefox doesn't use the portals by default.

Getting a dmabuf from the camera is not difficult; it can be done here:

https://searchfox.org/mozilla-central/rev/fdd13237fcff2692404313b731a4ee0cba9e8ecb/third_party/libwebrtc/webrtc/modules/video_capture/linux/video_capture_linux.cc#120

The tricky part is routing the dmabuf buffer through WebRTC.
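
For the capture side, assuming the V4L2 driver supports buffer export, the already-queued MMAP capture buffers can be exported as dmabuf fds with VIDIOC_EXPBUF. A hypothetical helper (not existing code) could look like this:

```cpp
// Hypothetical helper (not existing code): export one of the queued V4L2
// capture buffers as a dmabuf fd.  'fd' is the already-open /dev/video* device.
#include <fcntl.h>
#include <linux/videodev2.h>
#include <sys/ioctl.h>
#include <cstring>

static int ExportCaptureBufferAsDmabuf(int fd, unsigned int index) {
  struct v4l2_exportbuffer expbuf;
  std::memset(&expbuf, 0, sizeof(expbuf));
  expbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  expbuf.index = index;  // which of the mmap'ed capture buffers to export
  expbuf.flags = O_CLOEXEC;
  if (ioctl(fd, VIDIOC_EXPBUF, &expbuf) < 0)
    return -1;           // driver does not support dmabuf export
  return expbuf.fd;      // dmabuf fd usable by VA-API / EGL / Webrender
}
```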

Depends on: 1729167

(In reply to Martin Stránský [:stransky] (ni? me) from comment #5)
> I think the path is to implement VideoFrameBuffer kNative type based on dmabuf:

Filed as Bug 1729167.
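
For reference, a minimal sketch of what such a dmabuf-backed kNative buffer could look like. The DmabufDesc struct, the include path and the construction details are assumptions for illustration; only the VideoFrameBuffer overrides follow the upstream API shape:

```cpp
// Minimal sketch (include path approximate, DmabufDesc is made up): a
// kNative webrtc::VideoFrameBuffer that just carries a dmabuf description.
#include "api/video/video_frame_buffer.h"

struct DmabufDesc {      // hypothetical description of the shared surface
  int fd = -1;           // dmabuf file descriptor
  int width = 0;
  int height = 0;
  uint32_t fourcc = 0;   // e.g. NV12 / YUYV
  uint32_t stride = 0;
};

class DmabufVideoFrameBuffer : public webrtc::VideoFrameBuffer {
 public:
  explicit DmabufVideoFrameBuffer(const DmabufDesc& desc) : mDesc(desc) {}

  Type type() const override { return Type::kNative; }
  int width() const override { return mDesc.width; }
  int height() const override { return mDesc.height; }

  // Only consumers that understand the native buffer (the VA-API encoder path)
  // avoid this; everything else falls back to a CPU copy.
  rtc::scoped_refptr<webrtc::I420BufferInterface> ToI420() override {
    // Map the dmabuf (mmap/EGL) and convert to I420 here; omitted in sketch.
    return nullptr;
  }

  const DmabufDesc& Desc() const { return mDesc; }

 private:
  DmabufDesc mDesc;
};
// Instances would be created through rtc::RefCountedObject<> like other
// VideoFrameBuffer implementations.
```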

For debugging we can create dmabuf surfaces and upload the captured frames there (as we do for SW-decoded frames).
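
A hypothetical sketch of that debugging path using GBM (the ARGB format, usage flags and helper are assumptions; a real camera frame would be YUV):

```cpp
// Hypothetical debugging path (not existing code): create a dmabuf-backed
// surface with GBM and copy a SW frame into it, so the dmabuf pipeline can be
// exercised without zero-copy capture.
#include <gbm.h>
#include <cstdint>
#include <cstring>

static int CreateAndFillDmabuf(struct gbm_device* device, const uint8_t* src,
                               uint32_t width, uint32_t height,
                               uint32_t src_stride) {
  struct gbm_bo* bo = gbm_bo_create(device, width, height, GBM_FORMAT_ARGB8888,
                                    GBM_BO_USE_LINEAR);
  if (!bo)
    return -1;

  uint32_t dst_stride = 0;
  void* map_data = nullptr;
  void* dst = gbm_bo_map(bo, 0, 0, width, height, GBM_BO_TRANSFER_WRITE,
                         &dst_stride, &map_data);
  if (dst) {
    for (uint32_t row = 0; row < height; row++) {
      std::memcpy(static_cast<uint8_t*>(dst) + row * dst_stride,
                  src + row * src_stride, width * 4);  // 4 bytes per ARGB pixel
    }
    gbm_bo_unmap(bo, map_data);
  }
  // The bo must stay alive (or the fd dup'ed) for as long as the fd is used.
  return gbm_bo_get_fd(bo);
}
```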
