The current code assumes that each video frame represents one fixed-duration image (30 fps). This needs to be wired to whatever GIPS is actually doing, i.e., some multiple of the timestamp rate. See MediaPipelineReceiveVideo::PipelineRenderer::PipelineRenderer and MediaPipelineReceiveVideo::PipelineRenderer::RenderVideoFrame
So, looking at the MediaStreamGraph code and associated stuff, I think we can set the rate to the nominal video RTP timestamp rate for incoming video (90000). It's not a frame rate, it's a tick rate. (Initial attempts had set it to 10 or 30.)

Related: I believe the correct thing to do for video frames (at least from PeerConnection/GIPS) is to set the duration to '1'. I don't think there's anything in video that demands the durations match the elapsed time. If we need them to match, we could retroactively set the previous frame's mDuration to the actual difference in timestamps when the next frame is added, but I don't think that's needed. (mDuration is an iffy concept in general for video, doubly so for video transmitted over a network.) It works for audio because GIPS always supplies audio even if it has to make it up (PLC, etc).
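To make the proposal concrete, here is a minimal sketch of the idea: the track rate is the 90 kHz RTP video clock (a tick rate, not a frame rate), and each incoming frame is appended at its RTP timestamp with duration 1. The `Frame` and `VideoTrack` types below are hypothetical simplifications, not the real MediaStreamGraph API.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// The nominal RTP video clock: 90000 ticks per second (RFC 3551).
// This is the track rate being proposed, not a frame rate.
constexpr int64_t kVideoTrackRate = 90000;

struct Frame {
  int64_t mStart;     // RTP timestamp, in 90 kHz ticks
  int64_t mDuration;  // in ticks; '1' per the proposal above
};

// Hypothetical stand-in for the receive-side video track.
class VideoTrack {
 public:
  // Append an incoming frame at its RTP timestamp with duration 1;
  // nothing downstream is assumed to require durations to cover the
  // elapsed time between frames.
  void AppendFrameAt(int64_t rtpTimestamp) {
    mFrames.push_back({rtpTimestamp, 1});
  }
  std::vector<Frame> mFrames;
};
```

At 30 fps, successive frames would arrive roughly 3000 ticks apart (90000 / 30), so the timestamps carry the real timing even though every duration is 1.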
You can't retroactively set a frame's duration. I think you should emit video frames alongside your audio: when you receive a new chunk of audio but don't have a new video frame, emit the previous video frame with the same duration as the new audio. You can store a reference to the previous video frame's Image object and reuse it; that's essentially free, since no pixel data is copied.
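A sketch of that suggestion, under the same caveat as above: `Image`, `VideoChunk`, `VideoSegment`, and `Track` are hypothetical simplified stand-ins (using `std::shared_ptr` to model Gecko's refcounted Image), not the actual Gecko types.

```cpp
#include <cassert>
#include <cstdint>
#include <memory>
#include <vector>

// Hypothetical stand-in for the refcounted Image; pixel data elided.
struct Image { int mFrameNumber; };

struct VideoChunk {
  std::shared_ptr<Image> mImage;  // re-appending copies a pointer, not pixels
  int64_t mDuration;              // in track-rate ticks
};

class VideoSegment {
 public:
  void AppendFrame(std::shared_ptr<Image> img, int64_t duration) {
    mChunks.push_back({std::move(img), duration});
  }
  std::vector<VideoChunk> mChunks;
};

class Track {
  std::shared_ptr<Image> mLastImage;  // previous frame, kept for reuse

 public:
  void OnVideoFrame(VideoSegment& out, std::shared_ptr<Image> img,
                    int64_t duration) {
    mLastImage = img;
    out.AppendFrame(std::move(img), duration);
  }

  // A new audio chunk of `audioDuration` ticks arrived with no new video
  // frame: re-emit the previous frame's Image with the audio's duration.
  void OnAudioChunk(VideoSegment& out, int64_t audioDuration) {
    if (mLastImage) {
      out.AppendFrame(mLastImage, audioDuration);  // same Image, no copy
    }
  }
};
```

The key point is the last line: both chunks end up holding the same underlying Image, so padding the video track to match the audio cadence costs only a reference.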
The code has changed on the send side, but the receive side may still be a problem.
Whiteboard: [WebRTC], [blocking-webrtc+] → [WebRTC], [blocking-webrtc-]
Other changes have effectively dealt with this: frames are now inserted with duration 1, and that duration is then extended on NotifyPull.
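The shape of that fix can be sketched as follows. This is a hypothetical simplification: `SourceTrack`, its members, and the `NotifyPull` signature are illustrative, not the real Gecko interfaces.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Chunk { int64_t mDuration; };  // in track-rate ticks

// Hypothetical stand-in for a source track feeding the graph.
class SourceTrack {
 public:
  // Each incoming frame is inserted with duration 1, per the comment above.
  void AppendFrame() {
    mChunks.push_back({1});
    mEnd += 1;
  }

  // Sketch of the NotifyPull handler: the graph asks for media up to
  // `desiredTime`, and the last frame's duration is stretched to cover
  // the gap, so the track never underruns between frames.
  void NotifyPull(int64_t desiredTime) {
    if (!mChunks.empty() && desiredTime > mEnd) {
      mChunks.back().mDuration += desiredTime - mEnd;
      mEnd = desiredTime;
    }
  }

  int64_t mEnd = 0;  // end of buffered media, in ticks
  std::vector<Chunk> mChunks;
};
```

This gets the retroactive-duration effect without mutating an already-emitted frame: only the still-current last chunk is extended, which sidesteps the objection raised earlier in the thread.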
Status: NEW → RESOLVED
Last Resolved: 3 years ago
Resolution: --- → FIXED
Whiteboard: [WebRTC], [blocking-webrtc-]