Closed Bug 1194918 Opened 9 years ago Closed 9 years ago

Create VideoSink subclass from MediaSink and encapsulate MDSM::UpdateRenderVideoFrame related-logic into it.

Categories

(Core :: Audio/Video: Playback, defect, P1)

Tracking

RESOLVED FIXED
FxOS-S9 (16Oct)
feature-b2g 2.5+
Tracking Status
firefox44 --- fixed

People

(Reporter: kikuo, Assigned: kikuo)

References

(Blocks 3 open bugs)

Details

(Whiteboard: [ft:conndevices][partner-blocker])

Attachments

(3 files, 18 obsolete files)

17.95 KB, patch (kikuo: review+)
36.61 KB, patch (kikuo: review+)
1.81 KB, patch (kikuo: review+)
Once Bug 1188268 lands, it would also be good to encapsulate the logic for consuming VideoData in MediaDecoderStateMachine into DecodedVideoDataSink, so that other kinds of VideoDataSink can be created for further purposes, such as an EncryptedVideoDataSink or RawVideoDataSink for (EME) HW rendering.
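For illustration, a minimal sketch of the class split this description envisions; the class shapes, method names, and signatures below are assumptions for the sketch, not actual Gecko code.
===========================================================
#include <cstdint>

// Base class owning the common "consume decoded VideoData and render it"
// contract that MDSM currently implements inline.
class VideoDataSink {
public:
  virtual ~VideoDataSink() {}
  // Pull frames up to the given clock time and hand them to the renderer.
  virtual void RenderVideoFrames(int64_t aClockTimeUs) = 0;
};

// Default implementation: software-decoded frames go to VideoFrameContainer.
class DecodedVideoDataSink : public VideoDataSink {
public:
  void RenderVideoFrames(int64_t aClockTimeUs) override { (void)aClockTimeUs; /* ... */ }
};

// Possible future implementations named in the description, e.g. for an
// EME hardware rendering path; they would override RenderVideoFrames().
class EncryptedVideoDataSink : public VideoDataSink { /* sketch only */ };
class RawVideoDataSink : public VideoDataSink { /* sketch only */ };
===========================================================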
Blocks: 1146795
Blocks: 1194920
Priority: -- → P1
Target Milestone: --- → FxOS-S7 (18Sep)
To demonstrate sending video frames into VideoFrameContainer via a video-loop thread in DecodedVideoDataSink. But currently we still need to hold MediaDecoder's ReentrantMonitor while calling |MDSM::GetClock()| from different threads. I'm thinking of creating a class ReferenceMediaClock to centralize these time-related values and logic. Will consult with JW and post an update here.
Assignee: nobody → kikuo
Blocks: TV_FxOS2.5
feature-b2g: --- → 2.5+
Whiteboard: [ft:conndevices][partner-blocker]
Status: NEW → ASSIGNED
Hi JW, I created a RefClock in MDSM and tried centralizing some of the clock control logic. In WIP patch Attachment 8653467 [details] [diff], I removed |MDSM::UpdateRenderVideoFrames()| & |MDSM::RenderVideoFrames()| and moved this logic into VideoSink, which executes |VideoLoop()| on another thread, say VThread. IMHO the clock should eventually be accessible from multiple threads, because the pull-based data-consumption model can have two or more consumers that need the clock to sync frames with each other (e.g. on VThread, the audio or system clock is needed to calculate presentation timestamps). Could you give me some suggestions regarding the modifications in this patch?
Attachment #8656955 - Flags: feedback?(jwwang)
You might want to base your work on bug 1199562, where MDSM can always rely on the MediaSink to provide the playback position. The code is also a lot simpler because MDSM doesn't need to worry about whether to use the system, audio, or stream clock.
Attachment #8656955 - Flags: feedback?(jwwang)
I think this patch should make the architecture clearer.
1. DecodedVideoDataSink is created and has its own thread loop to keep calculating timestamps from VideoQueue and rendering frames on the video thread.
2. MDSM now only obtains rendering information from VideoSink and updates MediaDecoder.
I'll also rebase the code according to Bug 1199121 (to get the clock from MediaSink).
Attachment #8653467 - Attachment is obsolete: true
Attachment #8656955 - Attachment description: Create RefClock in MDSM to centralize the control logic of clock_260672 → [1] Create RefClock in MDSM to centralize the control logic of clock_260672
Hi JW, I rebased the code on the new MediaSink architecture. Basically I:
1. Moved |MDSM::UpdateRenderedVideoFrame()/MDSM::RenderVideoFrames()| out of MDSM into DecodedVideoDataSink.
2. Exposed several VideoSink APIs so MDSM can obtain statistics information.
3. Removed several assertions in MDSM & AudioSinkWrapper for the video loop thread.
Could you give me some suggestions regarding the design? In particular, I'm not quite sure if it's OK to remove those assertions, but I think if the thread-safety of those functions is guaranteed, it should be fine.
Attachment #8656955 - Attachment is obsolete: true
Attachment #8657703 - Attachment is obsolete: true
Attachment #8658672 - Flags: feedback?(jwwang)
Comment on attachment 8658672 [details] [diff] [review]
Add VideoSink, DecodedVideoDataSink and remove logic of video frame rendering out of MDMS_261366

Review of attachment 8658672 [details] [diff] [review]:
-----------------------------------------------------------------

We spent a lot of effort removing the bi-directional dependencies between MDSM and its collaborators to make the code more readable and maintainable. DecodedVideoDataSink should not depend on MDSM anyway.

::: dom/media/mediasink/AudioSinkWrapper.cpp
@@ +66,5 @@
>  int64_t
>  AudioSinkWrapper::GetVideoPosition(TimeStamp aNow) const
>  {
> +  // Called from VideoLoop thread and state machine thread, comment the assertion.
> +  //AssertOwnerThread();

It is wrong to remove the assertion, because this function is not thread-safe at all.
Attachment #8658672 - Flags: feedback?(jwwang) → feedback-
(In reply to JW Wang [:jwwang] from comment #6)
> Comment on attachment 8658672 [details] [diff] [review]
> Add VideoSink, DecodedVideoDataSink and remove logic of video frame
> rendering out of MDMS_261366
>
> Review of attachment 8658672 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> We spend a lot effort in removing bi-directional dependency from MDSM and
> its folks to make the code more readable and maintainable.
> DecodedVideoDataSink should not depend on MDSM anyway.

Yeah, that's also my plan. Currently MDSM::GetClock() is only called in |DecodedVideoDataSink::TryRenderVideoFrames()|. Now that MediaSink has landed, I'm thinking of moving MDSM::GetClock() into MediaSink and passing MediaSink into DecodedVideoDataSink (like the class "RefClock" in Attachment 8656955 [details] [diff]), so VideoSink is no longer tied to MDSM. Other thoughts?

> ::: dom/media/mediasink/AudioSinkWrapper.cpp
> @@ +66,5 @@
> >  int64_t
> >  AudioSinkWrapper::GetVideoPosition(TimeStamp aNow) const
> >  {
> > +  // Called from VideoLoop thread and state machine thread, comment the assertion.
> > +  //AssertOwnerThread();
>
> It is wrong to remove the assertion, because this function is not
> thread-safe at all.

Oh, my bad, it's an oversight. I'll make these functions thread-safe in the next patch.
Here is my proposal:
1. Create a MediaSink subclass to wrap around AudioSinkWrapper.
2. Do video rendering in the state machine thread as before.
3. The subclass renders video frames based on the playback position reported by AudioSinkWrapper.
MDSM should always see a single MediaSink for rendering the media data.
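A minimal sketch of that shape, under the assumption of a simplified MediaSink interface; the names and signatures below are illustrative, not the real dom/media/mediasink interface.
===========================================================
#include <cstdint>

// Simplified stand-in for the real MediaSink interface (assumption).
class MediaSinkSketch {
public:
  virtual ~MediaSinkSketch() {}
  virtual void Start(int64_t aStartTimeUs) = 0;
  virtual void Stop() = 0;
  virtual int64_t GetPosition() const = 0;  // playback clock in microseconds
};

// The proposed wrapper: it owns the audio-side sink and derives video timing
// from it, so MDSM only ever talks to a single MediaSink.
class VideoRenderingSinkSketch : public MediaSinkSketch {
public:
  explicit VideoRenderingSinkSketch(MediaSinkSketch* aAudioSink)
    : mAudioSink(aAudioSink) {}

  void Start(int64_t aStartTimeUs) override { mAudioSink->Start(aStartTimeUs); }
  void Stop() override { mAudioSink->Stop(); }

  // Video frames are rendered against the position reported by the wrapped
  // sink, which is the "syncs to AudioSinkWrapper" part of the proposal.
  int64_t GetPosition() const override { return mAudioSink->GetPosition(); }

  void UpdateRenderedVideoFrames() {
    int64_t clockTimeUs = GetPosition();
    // ...drop frames whose end time is before clockTimeUs and send the
    // remaining ones to the compositor (omitted in this sketch)...
    (void)clockTimeUs;
  }

private:
  MediaSinkSketch* mAudioSink;  // e.g. AudioSinkWrapper or DecodedStream
};
===========================================================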
Hi JW, thanks for the guidance in Comment 8.
1. I've added a new structure PlaybackStatistics in MediaSink and provided an API to query the information. This is used by MDSM to obtain dropped/corrupted frame information while in STATE_DECODING.
2. I've created a class "VideoSinkWrapper" which subclasses MediaSink and mimics the logic in AudioSinkWrapper.
3. DecodedVideoDataSink now uses MDSM's task queue, and unnecessary members are removed. But I still keep the state machinery in VideoLoop to have a simple and clear view.
As far as I've tested, this patch passes mochitest (dom/media/test/{test_eme_playback.html,test_playback.html}) and I'll try running all the mochitests later.
I'm also figuring out how to separate this patch into several parts for review. Could you give me some feedback regarding this patch? Thanks : )
Attachment #8658672 - Attachment is obsolete: true
Attachment #8659833 - Flags: feedback?(jwwang)
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #9) > Created attachment 8659833 [details] [diff] [review] > Add VideoSink/ DecodedVideoDataSInk/ VideoSinkWrapper and remove rendering > logic out of MDSM_261592.diff > > Hi JW, > Thanks for the guidance in Comment 8. > > 1. I've added new structure PlaybackStatistics in MediaSink and provide API > to query the information. This is used for MDSM to obtain dropped/corrupted > frames information while in STATE_DECODING. > > 2. I've create a class "VideoSinkWrapper" which subclass MediaSink, and > mimic the logic in AudioSinkWrapper. > > 3. Using MDSM's taskqueue in DecodedVideoDataSink and remove unnecessary > members. But I still keep the state machinery in VideoLoop to have a simple > and clear view. > > As far as i tested, this patch passes > mochitest(dom/media/test/{test_eme_playback.html,test_playback.html}) and > I'll try run all the mochitest later. > > I'm also figuring out how to separate this patch into several part for > review > Could you give me some feedback regarding this patch ? > Thanks : ) It seems there's also same refactor in Bug 1172830 (To make buffering-check as a single method out of MDSM::UpdateRenderedVideoFrames()) :p
Comment on attachment 8659833 [details] [diff] [review]
Add VideoSink/ DecodedVideoDataSInk/ VideoSinkWrapper and remove rendering logic out of MDSM_261592.diff

Review of attachment 8659833 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/MediaDecoderStateMachine.cpp
@@ +292,5 @@
>
>   mMetadataManager.Connect(mReader->TimedMetadataEvent(), OwnerThread());
>
>   mMediaSink = CreateAudioSink();
> +  mVideoSink = CreateVideoSink();

It is clumsy for MDSM to manage a MediaSink and a VideoSink respectively. MDSM should just manage one MediaSink which is responsible for audio/video rendering.

::: dom/media/mediasink/AudioSinkWrapper.cpp
@@ +82,5 @@
>
>   int64_t pos = -1;
>   TimeStamp t = TimeStamp::Now();
>
> +  if (mAudioSink && !mAudioEnded) {

Why check mAudioSink additionally?

::: dom/media/mediasink/MediaSink.h
@@ +68,5 @@
>   virtual void SetPlaybackParams(const PlaybackParams& aParams) = 0;
>
> +  // Provide playback statistics of this sink (For video)
> +  // Can be called in any state.
> +  virtual const PlaybackStatistics& GetPlaybackStatistics() { return mStatistics; }

Don't put any implementation here. We should have all pure virtual functions so sub-classes have the freedom to have their own implementations.

@@ +128,5 @@
>   virtual void Shutdown() {}
>
> protected:
>   virtual ~MediaSink() {}
> +  PlaybackStatistics mStatistics;

Ditto.
Attachment #8659833 - Flags: feedback?(jwwang) → feedback-
(In reply to JW Wang [:jwwang] from comment #11) > Comment on attachment 8659833 [details] [diff] [review] > Add VideoSink/ DecodedVideoDataSInk/ VideoSinkWrapper and remove rendering > logic out of MDSM_261592.diff > > Review of attachment 8659833 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: dom/media/MediaDecoderStateMachine.cpp > @@ +292,5 @@ > > > > mMetadataManager.Connect(mReader->TimedMetadataEvent(), OwnerThread()); > > > > mMediaSink = CreateAudioSink(); > > + mVideoSink = CreateVideoSink(); > > It is clumsy for MDSM to manage a MediaSink and a VideoSink respectively. > MDSM should just manage one MediaSink which is responsible for audio/video > rendering. > Ha, just want to discuss with you if I should put Videosink into MediaSink. In that case, no VideoSinkWrapper(which subclasses MediaSink) is needed, AudioSinkWrapper could manage DecodedVideoDataSink directly just like DecodedAudioDataSink. I'll modify this in the next patch. > ::: dom/media/mediasink/AudioSinkWrapper.cpp > @@ +82,5 @@ > > > > int64_t pos = -1; > > TimeStamp t = TimeStamp::Now(); > > > > + if (mAudioSink && !mAudioEnded) { > > Why checking mAudioSink additionally? > Oh ! When I ran mochitest for my previous WIP, at that time mAudioSink is not created in MDSM's constructor yet. I encountered an exception (only once) in MDSM::GetClock where MDSM was trying to access a null mAudioSink. Since now mAudioSink lives with MDSM and there's no extra thread in VideoSink to access MDSM, this check can be removed. > ::: dom/media/mediasink/MediaSink.h > @@ +68,5 @@ > > virtual void SetPlaybackParams(const PlaybackParams& aParams) = 0; > > > > + // Provide playback statistics of this sink (For video) > > + // Can be called in any state. > > + virtual const PlaybackStatistics& GetPlaybackStatistics() { return mStatistics; } > > Don't put any implementation here. We should have all pure virtual functions > so sub-classes have the freedom to have their own implementations. Got it ! Originally I was thinking not to explicit implement this function in AudioSinkWrapper because it doesn't make sense for Audio. By putting VideoSink into AudioSinkWrapper, MediaSink can be a pure interface class. > > @@ +128,5 @@ > > virtual void Shutdown() {} > > > > protected: > > virtual ~MediaSink() {} > > + PlaybackStatistics mStatistics; > > Ditto.
Hi JW, sorry to bother you again : ) I split all the modifications into 3 patches.
1. Add the additional PlaybackStatistics in MediaSink and provide an API to query it. I think the information provided in the structure is enough for now; maybe there's another insight I missed.
2. Create the base class VideoSink & its derived class DecodedVideoDataSink. Also add the creation procedure into AudioSinkWrapper without the operating logic. For now we put the lambda function for instance creation into AudioSinkWrapper; I think there should be a follow-up bug to switch between different managed sinks dynamically.
3. Move MDSM::UpdateRenderedVideoFrames()/ MDSM::RenderVideoFrames()/ MDSM::CheckFrameValidity() into DecodedVideoDataSink, and also add state-transition machinery in DecodedVideoDataSink to make it easier and clearer to develop and maintain.
I'm running mochitests locally, but found some failures regarding ogg/ogv/webm during autoplay_contentEditabl.html/delay_load.html. Still investigating; not sure if it's related to the modification.
Could you give me some suggestions about these modifications, or do you think it's OK for review?
Attachment #8659833 - Attachment is obsolete: true
Attachment #8661055 - Flags: feedback?(jwwang)
Comment on attachment 8661051 [details] [diff] [review]
[Part][1] Add PlaybackStatistics In MediaSink & Provide API for query_rv262240

Review of attachment 8661051 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/mediasink/MediaSink.h
@@ +53,5 @@
> +    , mCorruptFrames(0)
> +    , mDisableHWAcceleration(false)
> +  {}
> +  int64_t mRemainingTime;
> +  int32_t mDroppedFrames;

I think you can just pass MediaDecoder::mFrameStats to the video sink so it can count frames on its own without bothering MDSM.

@@ +55,5 @@
> +  {}
> +  int64_t mRemainingTime;
> +  int32_t mDroppedFrames;
> +  int32_t mCorruptFrames;
> +  bool mDisableHWAcceleration;

Deciding how many corrupted frames are enough to disable hardware acceleration should belong to MDSM's policy.
Attachment #8661051 - Flags: feedback-
Comment on attachment 8661052 [details] [diff] [review]
[Part][2] Add base class VideoSink & derivded class DecodedVideoDataSink and put video sink into AudioSinkWrapper_rv262240

Review of attachment 8661052 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/MediaDecoderStateMachine.cpp
@@ +2995,5 @@
>   // Create a new sink according to whether audio is captured.
>   // TODO: We can't really create a new DecodedStream until OutputStreamManager
>   // is extracted. It is tricky that the implementation of DecodedStream
>   // happens to allow reuse after shutdown without creating a new one.
> +  mMediaSink = aCaptured ? mStreamSink : CreateMediaSink();

It is wrong because MDSM will not render video frames if audio is captured. You should create a subclass of MediaSink for video rendering which syncs to the position reported by another MediaSink, which could be AudioSinkWrapper or DecodedStream.
Attachment #8661052 - Flags: feedback-
Comment on attachment 8661055 [details] [diff] [review]
[Part][3] Move MDSM::UpdateRenderedVideoFrames() related functions into DecodedVideoDataSink_rv262240

Review of attachment 8661055 [details] [diff] [review]:
-----------------------------------------------------------------

I don't think DecodedVideoDataSink should copy the implementation of DecodedAudioDataSink, which is much more complicated. Most of it should just be the code moved from MediaDecoderStateMachine::UpdateRenderedVideoFrames().

::: dom/media/MediaDecoderStateMachine.cpp
@@ -2026,5 @@
>   AssertCurrentThreadInMonitor();
>   DECODER_LOG("FinishDecodeFirstFrame");
>
> -  if (!IsRealTime() && !mSentFirstFrameLoadedEvent) {
> -    RenderVideoFrames(1);

How is this done in the video sink?

@@ -2136,5 @@
>   mCurrentSeek.Resolve(mState == DECODER_STATE_COMPLETED, __func__);
>   ScheduleStateMachine();
>
>   if (video) {
> -    RenderVideoFrames(1);

Ditto.

::: dom/media/MediaDecoderStateMachine.h
@@ +624,5 @@
>   bool IsAudioDecoding();
>   bool IsVideoDecoding();
>
> +  // To check if MDSM should start buffering
> +  bool CheckNeedToBuffer(int64_t aRemainingTime);

Will be solved by bug 1172830.

@@ -1219,5 @@
>   nsAutoPtr<MetadataTags> mMetadataTags;
>
>   mozilla::MediaMetadataManager mMetadataManager;
>
> -  mozilla::RollingMean<uint32_t, uint32_t> mCorruptFrames;

I think this should be done in MDSM when a video sample is received, because a corrupted frame doesn't become valid after being rendered. Corrupted frames should be counted independently of the rendering.
Attachment #8661055 - Flags: feedback?(jwwang) → feedback-
(In reply to JW Wang [:jwwang] from comment #16) > Comment on attachment 8661051 [details] [diff] [review] > [Part][1] Add PlaybackStatistics In MediaSink & Provide API for > query_rv262240 > > Review of attachment 8661051 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: dom/media/mediasink/MediaSink.h > @@ +53,5 @@ > > + , mCorruptFrames(0) > > + , mDisableHWAcceleration(false) > > + {} > > + int64_t mRemainingTime; > > + int32_t mDroppedFrames; > > I think you can just pass MediaDecoder::mFrameStats to the video sink so it > can count frames on its own without bothering MDSM. > Good point !! That would make this part more simpler ! > @@ +55,5 @@ > > + {} > > + int64_t mRemainingTime; > > + int32_t mDroppedFrames; > > + int32_t mCorruptFrames; > > + bool mDisableHWAcceleration; > > This should belong to the policy of MDSM to decide how many corrupted frames > are enough to disable hardware acceleration. Hmm ... in that case, I think we should add a rolling mean variable in MediaDecoder::mFrameStats for MDSM to obtain the information and make decision. And DecodedVideoDataSink is responsible for updating rolling mean to MediaDecoder::mFrameStats because it's the place of checking frame validity. I'll create a new bug to encapsulate MDSM::mCorruptFrames into MediaDecoder::mFrameStats, do you think it's reasonable ?
Sounds good to me. Please also extract MediaDecoder::FrameStatistics to its own files so it is easier to be used among multiple classes.
Wait a second. I think it makes more sense for MDSM to count corrupted frames because it is about decoding instead of rendering. I think we should count corrupted frames once they are received by MDSM. As I said, a corrupted frame won't become valid after being rendered.
Depends on: 1204882
Target Milestone: FxOS-S7 (18Sep) → FxOS-S8 (02Oct)
I created a MediaSink which is basically a VideoSinkWrapper but contains an AudioSinkWrapper inside. The reason I did this (always creating a video-based sink) is that it avoids MDSM switching between an audiosink/streamsink only and a videosink (with an audiosink/streamsink inside). This patch only includes the framework, to make sure operations on the videosink still work correctly on the audiosink too. All mochitests under dom/media/test/ pass.
Attachment #8661051 - Attachment is obsolete: true
Attachment #8661052 - Attachment is obsolete: true
Attachment #8661055 - Attachment is obsolete: true
Based on Attachment 8665284 [details] [diff], I moved MDSM::RenderVideoFrame() & MDSM::UpdateRenderedFrames() into DecodedVideoSink. Also, in order to handle RenderVideoFrame when playback has not started (MDSM may be in the decoding state), I try to cover all cases in DecodedVideoDataSink::TryRenderVideoFrame(). Currently I've encountered a few stalls while testing "seek"-related mochitests and am trying to fix them. One question: the state change relies on information in ImageContainer, e.g. [1] in HTMLMediaElement::UpdateReadyStateInternal(), and checking the logic there feels kind of weird to me. The condition shouldn't be bound to GetImageContainer()->HasCurrentImage(), because HasCurrentImage() will only be true once RenderVideoFrames() has been called, and that is far from something HTMLMediaElement should know about. [1] https://dxr.mozilla.org/mozilla-central/source/dom/html/HTMLMediaElement.cpp#3622 JW, could you give me some feedback on both of these patches, to make sure the architecture refactor is headed in the right direction?
Attachment #8665291 - Flags: feedback?(jwwang)
Comment on attachment 8665291 [details] [diff] [review]
[WIP][Part2] Move MDSM rendering logic to DecodedVideoSink_263446.

Review of attachment 8665291 [details] [diff] [review]:
-----------------------------------------------------------------

I guess you uploaded the wrong patch; there is no rendering code inside DecodedVideoDataSink.cpp. And why do you have to separate VideoSink from its wrapper? I think you should be able to just create a VideoSink class that inherits MediaSink for the rendering jobs.
Attachment #8665291 - Flags: feedback?(jwwang)
Oops, sorry for the mistake. Here is the correct one.
Attachment #8665291 - Attachment is obsolete: true
Attachment #8665315 - Flags: feedback?(jwwang)
(In reply to JW Wang [:jwwang] from comment #24)
> And why do you have to separate VideoSink from its
> wrapper? I think you should be able to just create a VideoSink class to
> inherit MediaSink for the rendering jobs.

Oh, it's because VideoSinkWrapper also needs to forward operations to the passed-in AudioSinkWrapper. I think creating a DecodedVideoDataSink not only makes the pattern more similar to *AudioSink* but also separates the logic of operating the AudioSinkWrapper from the video rendering in VSW. But if you think there are too many layers, I can make it flatter. : )
What is the point of making VideoSink resemble *AudioSink*?
IMHO, the derived class of MediaSink can be a simple, centralized wrapper instance for interacting with MDSM, while the detailed implementation is kept in another, specific class. I was expecting someone could use this pattern to reuse the control flow of VSW and implement their own *VideoDataSink for A/V sync + rendering through the base VideoSink interfaces. Am I overthinking it? Or sure, there's no explicit need to do this. @@
Simplify the design, discarding the old VideoSink <- DecodedVideoDataSink / VideoSinkWrapper paradigm. Just create a VideoSinkWrapper (which inherits MediaSink) and make it the default MediaSink in MDSM.
Attachment #8665284 - Attachment is obsolete: true
Hi JW, per our discussion, several modifications have been made.
1. Shift the timing of checking frame validity to MDSM::OnVideoDecoded, since the calculation can be done there and isn't bound to rendering.
2. Add a MediaSink::ReDraw() instead of MediaSink::Prepare() (which is counterintuitive).
Attachment #8665315 - Attachment is obsolete: true
Attachment #8665315 - Flags: feedback?(jwwang)
Attachment #8668285 - Flags: review?(jwwang)
Attachment #8668292 - Flags: review?(jwwang)
Comment on attachment 8668285 [details] [diff] [review]
[Part1] Create a VideoSinkWrapper derived from MediaSink as the default MediaSink in MDSM_265274

Review of attachment 8668285 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/MediaDecoderStateMachine.cpp
@@ +1854,5 @@
>   if (!mMediaSink->IsStarted()) {
>     mAudioCompleted = false;
>     mMediaSink->Start(GetMediaTime(), mInfo);
>
> +    auto promiseVideo = mMediaSink->OnEnded(TrackInfo::kVideoTrack);

Call it |videoPromise|.

@@ +1855,5 @@
>     mAudioCompleted = false;
>     mMediaSink->Start(GetMediaTime(), mInfo);
>
> +    auto promiseVideo = mMediaSink->OnEnded(TrackInfo::kVideoTrack);
> +    auto promiseAudio = mMediaSink->OnEnded(TrackInfo::kAudioTrack);

Ditto.

::: dom/media/mediasink/VideoSinkWrapper.cpp
@@ +10,5 @@
> +
> +extern PRLogModuleInfo* gMediaDecoderLog;
> +#define VSINK_LOG(msg, ...) \
> +  MOZ_LOG(gMediaDecoderLog, LogLevel::Debug, \
> +    ("\033[1;34mVSW=%p \033[m" msg, this, ##__VA_ARGS__))

No color codes, which are not supported on all platforms.

@@ +83,5 @@
> +VideoSinkWrapper::HasUnplayedFrames(TrackType aType) const
> +{
> +  AssertOwnerThread();
> +  MOZ_ASSERT(mAudioSinkWrapper, "AudioSinkWrapper should exist.");
> +  MOZ_ASSERT(aType == TrackInfo::kAudioTrack, "For now, MDSM doesn't need"

I would prefer a comment like "Not implemented" which doesn't assume the client code of this class.

@@ +105,5 @@
> +  AssertOwnerThread();
> +  MOZ_ASSERT(mAudioSinkWrapper, "AudioSinkWrapper should exist.");
> +
> +  VSINK_LOG_V(" playing (%d) -> (%d)", mAudioSinkWrapper->IsPlaying(), aPlaying);
> +  // Resume/pause matters only when playback started.

This comment is not relevant here.

@@ +168,5 @@
> +  return mAudioSinkWrapper->IsPlaying();
> +}
> +
> +void
> +VideoSinkWrapper::Shutdown()

Assert this is called after playback has been stopped.

::: dom/media/mediasink/VideoSinkWrapper.h
@@ +21,5 @@
> +template <class T> class MediaQueue;
> +
> +namespace media {
> +
> +class VideoSinkWrapper : public MediaSink

Just call it VideoSink since it is not wrapping anything.

@@ +35,5 @@
> +    : mOwnerThread(aThread)
> +    , mAudioSinkWrapper(aAudioSink)
> +    , mVideoQueue(aVideoQueue)
> +    , mContainer(aContainer)
> +    , mProducerID(aProducerID)

The producer id should be allocated by this class instead of being assigned by the client code.

@@ +65,5 @@
> +
> +  bool IsPlaying() const override;
> +
> +  void Shutdown() override;
> +private:

An empty line before "private".

@@ +73,5 @@
> +  {
> +    MOZ_ASSERT(mOwnerThread->IsCurrentThreadIn());
> +  }
> +
> +  virtual MediaQueue<MediaData>& VideoQueue() const {

Why "virtual"?

@@ +80,5 @@
> +
> +  void OnVideoEnded();
> +
> +  const nsRefPtr<AbstractThread> mOwnerThread;
> +  nsRefPtr<MediaSink> mAudioSinkWrapper;

Don't call it mAudioSinkWrapper, which is connected to a concrete class name. Moreover, it could refer to DecodedStream, which is not an AudioSinkWrapper at all.

@@ +88,5 @@
> +  // Help ImageContainer distinguish different streams of FrameIDs,
> +  // according to MDSM.
> +  const ProducerID mProducerID;
> +
> +  // True is we are decoding a realtime stream, according to MDSM

What does "according to MDSM" mean?
Attachment #8668285 - Flags: review?(jwwang) → review+
Target Milestone: FxOS-S8 (02Oct) → FxOS-S9 (16Oct)
Comment on attachment 8668292 [details] [diff] [review]
[Part2] Move MDMS av-sync and render logic to VideoSinkWrapper_265274

Review of attachment 8668292 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/MediaDecoderStateMachine.cpp
@@ +389,5 @@
> +  nsRefPtr<media::MediaSink> mediaSink =
> +    new VideoSinkWrapper(mTaskQueue, audioSink, mVideoQueue,
> +                         mDecoder->GetVideoFrameContainer(), mProducerID,
> +                         mRealTime, mDecoder->GetFrameStatistics(),
> +                         AUDIO_DURATION_USECS, sVideoQueueSendToCompositorSize);

It is wrong to pass |sVideoQueueSendToCompositorSize| because the value should change as the pref updates.

@@ +882,5 @@
>   (video ? video->mDiscontinuity : 0));
>
> +  // Check frame validity here for every frame decoded in order to have a
> +  // better chance to make the decision of turning off HW acceleration.
> +  CheckFrameValidity(aVideoSample->As<VideoData>());

This change is a prerequisite of this bug. Create a new bug for this change so it is easier to track why and how this change is made.

@@ +1087,5 @@
>
>   mDecoder->DispatchPlaybackStopped();
>
>   if (IsPlaying()) {
> +    mMediaSink->ReDraw();

MediaSink should be responsible for flushing video frames when playback is paused.

@@ -2239,5 @@
>   UpdatePlaybackPositionInternal(newCurrentTime);
>
>   // Try to decode another frame to detect if we're at the end...
>   DECODER_LOG("Seek completed, mCurrentPosition=%lld", mCurrentPosition.Ref());
> -

Don't delete these empty lines, which are there for readability.

@@ -2247,5 @@
>   mQuickBuffering = false;
>
>   mCurrentSeek.Resolve(mState == DECODER_STATE_COMPLETED, __func__);
>   ScheduleStateMachine();
> -

Ditto.

@@ -2404,5 @@
>   // We're playing, but the element/decoder is in paused state. Stop
>   // playing!
>   StopPlayback();
> }
> -

Ditto.

@@ +2641,5 @@
>   // the monitor and get a staled value from GetCurrentTimeUs() which hits the
>   // assertion in GetClock().
>
> +  // TODO : Maybe could obtain the remainging from last frame's timestamp in
> +  // ImageContainer, so that we don't need to ask from VideoSink.

I can't see how VideoSink is relevant here.

::: dom/media/MediaDecoderStateMachine.h
@@ +483,3 @@
>   // decoder monitor must be held with exactly one lock count. Called on the
>   // state machine thread.
> +  void UpdateRenderedVideoStatus();

Change the function name to match the comment.

::: dom/media/mediasink/MediaSink.h
@@ +94,5 @@
>   virtual void SetPlaying(bool aPlaying) = 0;
>
> +  // Single frame rendering operation may need to be done before playback
> +  // started (1st frame) or right after seek completed or playback stopped.
> +  // Can be called in any state, only for video.

s/only for video/Do nothing if this sink has no video track/ to be consistent with the comments above.

@@ +95,5 @@
>
> +  // Single frame rendering operation may need to be done before playback
> +  // started (1st frame) or right after seek completed or playback stopped.
> +  // Can be called in any state, only for video.
> +  virtual void ReDraw() {};

"Redraw"

::: dom/media/mediasink/VideoSinkWrapper.cpp
@@ +220,5 @@
> +void
> +VideoSinkWrapper::OnVideoQueueEvent()
> +{
> +  AssertOwnerThread();
> +  TryUpdateRenderedVideoFrames();

I don't think we need to render when the queue is finished. We should kick off the update timer when playback is started and schedule the next render cycle based on the frame times.

@@ +330,5 @@
> +void
> +VideoSinkWrapper::UpdateRenderedVideoFrames()
> +{
> +  AssertOwnerThread();
> +  if (!mAudioSinkWrapper || !mAudioSinkWrapper->IsPlaying() ||

We should assert this is called only when playback is not paused.

@@ +341,5 @@
> +  // Skip frames up to the frame at the playback position, and figure out
> +  // the time remaining until it's time to display the next frame and drop
> +  // the current frame.
> +  NS_ASSERTION(clockTime >= 0, "Should have positive clock time.");
> +  int64_t remainingTime = mDelayDuration;

We should only render again before the video frames are all consumed by the compositor, which is roughly the end of the last video frame sent to the compositor. It is a waste of time to wake up so often to send duplicated video frames to the compositor.

::: dom/media/mediasink/VideoSinkWrapper.h
@@ +74,5 @@
> +  void ConnectListener();
> +  void DisconnectListener();
> +
> +  // Sets VideoQueue images into the VideoFrameContainer. Called on the shared
> +  // state machine thread. Decode monitor must be held. The first aMaxFrames

Fix the comment.

@@ +136,5 @@
> +
> +  bool mHasVideo;
> +
> +  // TODO : This is exactly the same as DelayedScheduler in MDSM, maybe it will
> +  // be better to treat it as a utility class for other class usage.

So just extract the code to a common class.
Attachment #8668292 - Flags: review?(jwwang) → review-
Thanks for review, but I have several questions before addressing issues. (In reply to JW Wang [:jwwang] from comment #32) > Comment on attachment 8668292 [details] [diff] [review] > [Part2] Move MDMS av-sync and render logic to VideoSinkWrapper_265274 > > Review of attachment 8668292 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: dom/media/MediaDecoderStateMachine.cpp > @@ +389,5 @@ > > + nsRefPtr<media::MediaSink> mediaSink = > > + new VideoSinkWrapper(mTaskQueue, audioSink, mVideoQueue, > > + mDecoder->GetVideoFrameContainer(), mProducerID, > > + mRealTime, mDecoder->GetFrameStatistics(), > > + AUDIO_DURATION_USECS, sVideoQueueSendToCompositorSize); > > It is wrong to pass |sVideoQueueSendToCompositorSize| because the value > should change as pref updates. I thought about this indeed, but the use of preference-related method is limited in main thread in non MOZ_B2G environment [1]. On the other hand, although this value changes as pref updates, but it's only taken into account when a new MDSM is created. We don't need this value changing dynamically while the MDSM is running, right ? And this why I think passing it as a parameter would be ok. Does it make sense to you ? [1] https://dxr.mozilla.org/mozilla-central/source/modules/libpref/Preferences.cpp#437 > > @@ +882,5 @@ > > (video ? video->mDiscontinuity : 0)); > > > > + // Check frame validity here for every frame decoded in order to have a > > + // better chance to make the decision of turning off HW acceleration. > > + CheckFrameValidity(aVideoSample->As<VideoData>()); > > This change is a prerequisite of this bug. Create a new bug for this change > so it is easier to track why and how this change is made. Bug 1211364, I'll fix this first. > > @@ +1087,5 @@ > > > > mDecoder->DispatchPlaybackStopped(); > > > > if (IsPlaying()) { > > + mMediaSink->ReDraw(); > > MediaSink should be responsible for flushing video frames when playback is > paused. > @@ +2641,5 @@ > > // the monitor and get a staled value from GetCurrentTimeUs() which hits the > > // assertion in GetClock(). > > > > + // TODO : Maybe could obtain the remainging from last frame's timestamp in > > + // ImageContainer, so that we don't need to ask from VideoSink. > > I can't see how VideoSink is relevant here. Originally, MDSM triggers a timer w/ the last frame remaining time to keep updating rendered frames status and playback position [2], but now the calculation will be moved into VideoSinkWrapper. So that I was thinking that using a default value(40ms) to keep refreshing (may be longer if we don't need that much precision) or we can obtain the frame remaining time by a new MediaSink's API to get from VideoSinkWrapper(or from container). [2] https://dxr.mozilla.org/mozilla-central/source/dom/media/MediaDecoderStateMachine.cpp#2688. > ::: dom/media/mediasink/VideoSinkWrapper.cpp > @@ +220,5 @@ > > +void > > +VideoSinkWrapper::OnVideoQueueEvent() > > +{ > > + AssertOwnerThread(); > > + TryUpdateRenderedVideoFrames(); > > I don't think we need to render when the queue is finished. We should kick ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ I agreed. will add the check. > off the update timer when playback is started and schedule next render cycle > based on the frame times. I put the checking logic (check if there's a scheduler or not) in TryUpdateRenderedVideoFrames(), and that's the reason I called it Try*** because all conditional logic could be done inside. For callers, just give it a try. 
Do you think I should bring that up in OnVideoQueueEvent?

> > @@ +341,5 @@
> > +  // Skip frames up to the frame at the playback position, and figure out
> > +  // the time remaining until it's time to display the next frame and drop
> > +  // the current frame.
> > +  NS_ASSERTION(clockTime >= 0, "Should have positive clock time.");
> > +  int64_t remainingTime = mDelayDuration;
>
> We should only render again before the video frames are all consumed by the
> compositor which is roughly the end of the last video frame sent to the
> compositor. It is a waste of time to wake up so often to send duplicated
> video frames to the compositor.

Sure, mDelayDuration is just a default value like AUDIO_DURATION_USECS in MDSM, and the actual remaining time is calculated afterwards; see the code snippet below.
===========================================================
if (VideoQueue().GetSize() > 0) {
  ...
  while (VideoQueue().GetSize() > 0) {
    MediaData* nextFrame = VideoQueue().PeekFront();
    if (!mRealTime && nextFrame->mTime > clockTime) {
      // >>> update the time to schedule the next render cycle.
      remainingTime = nextFrame->mTime - clockTime;
      break;
    }
    ...
  }
  RenderVideoFrames(mVideoQueueSendToCompositorSize, clockTime, nowTime);
  TimeStamp target = nowTime + TimeDuration::FromMicroseconds(remainingTime);
  mUpdateScheduler.Ensure(target);
}
===========================================================
I think I did it the right way, or did I go wrong?
Flags: needinfo?(jwwang)
Depends on: 1211364
No longer depends on: 1204882
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #33) > I thought about this indeed, but the use of preference-related method is > limited in main thread in non MOZ_B2G environment [1]. > On the other hand, although this value changes as pref updates, but it's > only taken into account when a new MDSM is created. We don't need this value > changing dynamically while the MDSM is running, right ? > And this why I think passing it as a parameter would be ok. > Does it make sense to you ? https://hg.mozilla.org/mozilla-central/file/d56c816b35c3/modules/libpref/Preferences.h#l263 "The value will be modified when the pref value is changed"
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #33) > Originally, MDSM triggers a timer w/ the last frame remaining time to keep > updating rendered frames status and playback position [2], but now the > calculation will be moved into VideoSinkWrapper. > So that I was thinking that using a default value(40ms) to keep refreshing > (may be longer if we don't need that much precision) or we can obtain the > frame remaining time by a new MediaSink's API to get from > VideoSinkWrapper(or from container). When we have VideoSink, MDSM will only need to update playback position periodically without concerning the frame time.
Btw, it is easier to track review comments using review board.
Flags: needinfo?(jwwang)
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #33) > Sure, the mDelayDuration is just a default value like AUDIO_DURATION_USECS > in MDSM, and the actual remaining time is calculated after the code, you can > see the code snippet below, > > =========================================================== > if (VideoQueue().GetSize() > 0) { > ... > while (VideoQueue().GetSize() > 0) { > MediaData* nextFrame = VideoQueue().PeekFront(); > if (!mRealTime && nextFrame->mTime > clockTime) { > remainingTime = nextFrame->mTime - clockTime; // >>> update the > time to schedule next render cycle. > break; > } > ... > } > RenderVideoFrames(mVideoQueueSendToCompositorSize, clockTime, nowTime); > TimeStamp target = nowTime + > TimeDuration::FromMicroseconds(remainingTime); > mUpdateScheduler.Ensure(target); > } > =========================================================== > I think I did it in right way, or did I go wrong ? The current code wakes up the rendering loop at the end of "1st" frame instead of the "last" frame. This is something we can optimize.
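In other words, the next wake-up target should be derived from the last frame handed to the compositor in a cycle rather than the first undisplayed one. A schematic sketch of that idea (the SketchFrame type and the function below are illustrative assumptions, not the patch's actual code):
===========================================================
#include <algorithm>
#include <cstdint>
#include <vector>

struct SketchFrame { int64_t mTimeUs; int64_t mDurationUs; };

// Given the batch of frames just sent to the compositor (assumed non-empty)
// and the current playback clock, compute how long to sleep before the next
// render cycle.  Waking at the end of the *first* frame re-sends mostly
// duplicate frames every cycle; waking near the end of the *last* frame lets
// the compositor consume the whole batch before the sink runs again.
int64_t NextWakeupDelayUs(const std::vector<SketchFrame>& aSentFrames,
                          int64_t aClockTimeUs)
{
  const SketchFrame& last = aSentFrames.back();
  int64_t lastFrameEndUs = last.mTimeUs + last.mDurationUs;
  return std::max<int64_t>(lastFrameEndUs - aClockTimeUs, 0);
}
===========================================================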
(In reply to JW Wang [:jwwang] from comment #34)
> (In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #33)
> > I thought about this indeed, but the use of preference-related method is
> > limited in main thread in non MOZ_B2G environment [1].
> > On the other hand, although this value changes as pref updates, but it's
> > only taken into account when a new MDSM is created. We don't need this value
> > changing dynamically while the MDSM is running, right ?
> > And this why I think passing it as a parameter would be ok.
> > Does it make sense to you ?
>
> https://hg.mozilla.org/mozilla-central/file/d56c816b35c3/modules/libpref/Preferences.h#l263
> "The value will be modified when the pref value is changed"

Oh, I understand your point! Sorry, I was focused on the accessibility of this static variable. I could either
1) pass |&sVideoQueueSendToCompositorSize| into VSW and read the value via a pointer, or
2) make VSW derive from nsIObserver, so VSW can obtain the latest pref value via VSW::Observe() and store it in a member variable.
I'll choose the former because I think the latter is kind of overkill in this case. Any comments?
Blocks: 1211814
Addressed the issues and carried r+ from Comment 31. But I want to discuss 2 issues...

1. Considering mProducerID, |ImageContainer::AllocateProducerID()| needs to be called on the main thread, but VideoSink will be created in 2 places, i.e. a) MDSM's constructor, and b) MDSM::SetAudioCaptured(bool aCaptured), which runs on the TaskQueue. I removed this member in this patch for now and thought of the 2 solutions below; after discussion, I will apply one to the [Part2] patch.
A) Provide 2 new APIs in MediaSink like {Set,Get}PlaybackParams(), called {Set,Get}SinkParams(), to back up / restore the specific sink parameters when VideoSink needs to be recreated on a non-main thread.
B) Provide 1 new API in MediaSink called |ResetWithReferenceSink(MediaSink* aSink)| and rename |MediaSink::Shutdown()| to |MediaSink::Cleanup()|. In this case we don't have to create VideoSink again in |MDSM::SetAudioCaptured(bool aCaptured)|; we can just call "ResetWithReferenceSink" and pass in the corresponding sink. Also, "Cleanup" fits the meaning of releasing all the resources. And for the EME CDM HW rendering data path, I think no matter which of the following cases applies -- (1) clear audio + encrypted video, (2) encrypted audio & video, (3) switching between (1)/(2) dynamically -- the derived AudioSink/VideoSink can also fit in this paradigm.
[1] https://dxr.mozilla.org/mozilla-central/source/gfx/layers/ImageContainer.cpp#679

2. Regarding the member variable name which stands for AudioSinkWrapper or DecodedStream: in this patch I named it mStreamOrAudioSink, but I'm not quite sure if it's a good name or not, because currently I think the only thing AudioSinkWrapper and DecodedStream have in common is providing a reference clock time. Do you think mReferenceClockSink is a suitable name for it? Would you give me some suggestions? Big thanks ~
Attachment #8668285 - Attachment is obsolete: true
Flags: needinfo?(jwwang)
Attachment #8670402 - Flags: review+
1. Make ImageContainer::AllocateProducerID() callable on all threads.
2. I don't like the idea of ResetXXX(), which complicates the states of a class.
3. Just call it mAudioSink.
Flags: needinfo?(jwwang)
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #38) > Oh, I understand your point ! Sorry I focused on the accessibility of this > static variable. I could either > 1) pass |&sVideoQueueSendToCompositorSize| into VSW and read the value via a > pointer or > 2) make VSW derived from nsIObserver, hence VSW can obtain the latest pref > value via VSW::Observe() and store to its member variable. > > I'll choose the former because I think the later is kinda overkill in this > case. Any comments ? It looks like a data race to me if we read it off the main thread while the pref system updates it on the main thread without any synchronization. I will file a new bug to discuss about it.
Depends on: 1212220
(In reply to JW Wang [:jwwang] from comment #40)
> 1. Make ImageContainer::AllocateProducerID() callable on all threads.

I can achieve this by simply making the static sProducerID [1] atomic, though I'm not sure if it's OK to do so.

Hi ROC, per Comment 39 and Comment 41, the plan is that we may create VideoSink many times (only one exists at a time) per MDSM, and let each VideoSink have its own mProducerID. MDSM gets the ImageContainer from HTMLMediaElement and uses it to create VideoData and send that data to the rendering components via |SetCurrentFrames| [2]. According to the comment at [3], I'm not sure whether there will be any rendering problems if I set 2 different mProducerIDs on the same ImageContainer with increasing mFrameID. Also, what does "changing to a new mFrameID namespace" stand for? And is it appropriate to make mProducerID a member variable allocated in each VideoFrameContainer and provide an API for client code to query it? Sorry, I'm not so familiar with graphics-related components... Could you give me some feedback regarding these questions? Thanks : )

[1] https://dxr.mozilla.org/mozilla-central/source/gfx/layers/ImageContainer.cpp#676
[2] https://dxr.mozilla.org/mozilla-central/source/dom/media/MediaDecoderStateMachine.cpp?#2542
[3] https://dxr.mozilla.org/mozilla-central/source/gfx/layers/ImageContainer.h#329
Flags: needinfo?(roc)
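For reference, a minimal sketch of what the atomic-counter approach would look like (illustrative only; the real allocator lives in ImageContainer.cpp and its exact names and types may differ):
===========================================================
#include <atomic>
#include <cstdint>

using ProducerID = uint32_t;  // assumption: an integral ID type

// Atomic monotonically increasing counter, so the allocator becomes safe to
// call from any thread and the main-thread restriction is no longer needed.
static std::atomic<ProducerID> sProducerIDCounter{0};

ProducerID AllocateProducerIDSketch()
{
  return ++sProducerIDCounter;  // each call hands out a unique ID
}
===========================================================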
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #42) > Per Comment 39 and Comment 41, the plan is, we may create VideoSink many > times (only one exists at a time) per MDSM, and let each VideoSink has its > own mProducerID. Since MDSM get the ImageContainer from HTMLMediaElement and > use it to create VideoData and send these data to rendering components via > |SetCurrentFrames| [2]. According to comment [3], I'm not sure if there will > be any rendering problems if I set 2 different mProducerIDs to the same > ImageContainer with increasing mFrameID. I don't quite understand what you're asking here. As the comment says, all images in a set of images passed to SetCurrentImages must have the same mProducerID. But you can specify any mProducerID you want in separate calls to SetCurrentImages. > Also, what does "changing to a new > mFrameID namespace" stand for ? That means you can reuse frame IDs as long as the mProducerID is different. > And is it appropriate to make mProducerID as > a member variable allocated in each VideoFrameContainer and provide a API > for client code to query ? I'd prefer to not store an mProducerID on the VideoFrameContainer. Currently the video frame producer (e.g. MDSM) tracks mProducerID and I think that's a good thing. Why do you need to store mProducerID on the VideoFrameContainer? Making AllocateProducerID() callable on any thread should be easy.
Flags: needinfo?(roc)
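A schematic restatement of the rule roc describes above (the struct and function names below are illustrative assumptions, not the actual ImageContainer API):
===========================================================
#include <cassert>
#include <cstdint>
#include <vector>

struct SketchImage { uint32_t mFrameID; uint32_t mProducerID; };

// Mirrors the constraint described above: every image in a single call must
// share one producer ID, while separate calls may use different producer IDs
// and therefore reuse frame IDs (each producer ID is its own frame-ID
// namespace).
void SetCurrentImagesSketch(const std::vector<SketchImage>& aImages)
{
  if (aImages.empty()) {
    return;
  }
  for (const SketchImage& img : aImages) {
    assert(img.mProducerID == aImages.front().mProducerID &&
           "all images in one call must share the same producer ID");
  }
  // ...hand the batch to the compositor (omitted)...
}
===========================================================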
(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #43) > (In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #42) > > Per Comment 39 and Comment 41, the plan is, we may create VideoSink many > > times (only one exists at a time) per MDSM, and let each VideoSink has its > > own mProducerID. Since MDSM get the ImageContainer from HTMLMediaElement and > > use it to create VideoData and send these data to rendering components via > > |SetCurrentFrames| [2]. According to comment [3], I'm not sure if there will > > be any rendering problems if I set 2 different mProducerIDs to the same > > ImageContainer with increasing mFrameID. > > I don't quite understand what you're asking here. > > As the comment says, all images in a set of images passed to > SetCurrentImages must have the same mProducerID. But you can specify any > mProducerID you want in separate calls to SetCurrentImages. Thanks for solving my doubts. That's what I was wondering if I can set any mProducerID in separate calls to |SetCurrentImages()|. > > I'd prefer to not store an mProducerID on the VideoFrameContainer. Currently > the video frame producer (e.g. MDSM) tracks mProducerID and I think that's a > good thing. Why do you need to store mProducerID on the VideoFrameContainer? > Making AllocateProducerID() callable on any thread should be easy. Understood ! I filed a bug to handle this (Bug 1212288). Thanks for the guidance.
Rebased to latest trunk since some modifications were made in MDSM (no decoder monitor is needed!!). Carried r+ and addressed all issues in Comment 31 & Comment 40; also moved the allocation of mProducerID to the [Part2] patch after Bug 1212288 landed.
Attachment #8670402 - Attachment is obsolete: true
Attachment #8671839 - Flags: review+
Depends on: 1212288
Depends on: 1213897
Blocks: 1214131
Rebased to trunk (several prerequisites landed) and addressed the issues in Comment 32, except... 1. Still passing MDSM's |sVideoQueueSendToCompositorSize| into VideoSink, because Bug 1212220 changed the update behavior of that variable; there will be no difference between storing it by reference or by value in VideoSink.
Attachment #8668292 - Attachment is obsolete: true
Attachment #8673136 - Flags: review?(jwwang)
Comment on attachment 8673136 [details] [diff] [review]
[Part2] Move MDMS Render Logic To VideoSink_v2_267319

Review of attachment 8673136 [details] [diff] [review]:
-----------------------------------------------------------------

::: dom/media/mediasink/VideoSink.cpp
@@ +141,5 @@
> +    if (aPlaying)
> +      TryUpdateRenderedVideoFrames();
> +  } else {
> +    // Flush all video frames in VideoQueue when playback is paused.
> +    VideoQueue().Reset();

It is wrong to reset the queue here; that should already be handled by MDSM.

@@ +168,5 @@
>   }
> }
>
> void
> VideoSink::OnVideoEnded()

This function seems to do nothing. Why do we need it?

@@ +229,5 @@
> +void
> +VideoSink::OnVideoQueueEvent()
> +{
> +  AssertOwnerThread();
> +  if (!VideoQueue().AtEndOfStream()) {

Why do we render only when VideoQueue().AtEndOfStream() is false?

@@ +265,5 @@
> +{
> +  AssertOwnerThread();
> +  mPushListener = VideoQueue().PushEvent().Connect(
> +    mOwnerThread, this, &VideoSink::OnVideoQueueEvent);
> +  mFinishListener = VideoQueue().FinishEvent().Connect(

Why do we need to listen to the 'finish' event?

@@ +341,5 @@
> +void
> +VideoSink::UpdateRenderedVideoFrames()
> +{
> +  AssertOwnerThread();
> +  MOZ_ASSERT(mAudioSink && mAudioSink->IsPlaying(),

Assert mAudioSink is not null in the constructor, once and for all.

::: dom/media/mediasink/VideoSink.h
@@ +27,5 @@
>
> class VideoSink : public MediaSink
> {
> public:
> +  typedef mozilla::layers::ImageContainer::ProducerID ProducerID;

This typedef doesn't need to be public.

@@ +141,5 @@
> +  // A delay duration to trigger next time UpdateRenderedVideoFrames().
> +  // Based on the default value in MDSM.
> +  const int mDelayDuration;
> +
> +  // Based on the pref value obtained in MDSM.

Rephrase the comment because it doesn't really tell you much.
Attachment #8673136 - Flags: review?(jwwang) → review-
(In reply to JW Wang [:jwwang] from comment #47) > Comment on attachment 8673136 [details] [diff] [review] > [Part2] Move MDMS Render Logic To VideoSink_v2_267319 > > Review of attachment 8673136 [details] [diff] [review]: > ----------------------------------------------------------------- > > ::: dom/media/mediasink/VideoSink.cpp > @@ +141,5 @@ > > + if (aPlaying) > > + TryUpdateRenderedVideoFrames(); > > + } else { > > + // Flush all video frames in VideoQueue when playback is paused. > > + VideoQueue().Reset(); > > It is wrong to reset the queue here which should be already handled by MDSM. > According to Comment 32, you mentioned that "MediaSink is responsible for flushing video frames if the playback is paused", I've checked MDSM, VideoQueue() will be reset when (Entering dormant, shutdown, Initiating Seek), and I agreed your point. That's why I added VideoQueue().Reset() there. Do you mean that "reset the queue" should be moved to consumer (MediaSink) according to its operating state in the future ? > @@ +168,5 @@ > > } > > } > > > > void > > VideoSink::OnVideoEnded() > > This function seems to do nothing. Why do we need it? Oops, actually this function will never be called because the corresponding requestHolder will be disconnected when VideoSink::Stop() is called. I'll make them simpler. > > @@ +229,5 @@ > > +void > > +VideoSink::OnVideoQueueEvent() > > +{ > > + AssertOwnerThread(); > > + if (!VideoQueue().AtEndOfStream()) { > > Why do we render only when VideoQueue().AtEndOfStream() is false? ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > > @@ +265,5 @@ > > +{ > > + AssertOwnerThread(); > > + mPushListener = VideoQueue().PushEvent().Connect( > > + mOwnerThread, this, &VideoSink::OnVideoQueueEvent); > > + mFinishListener = VideoQueue().FinishEvent().Connect( > > Why do we need to listen to the 'finish' event? > I added the check |if (!VideoQueue().AtEndOfStream())|, because there's no need to render if the queue is finished. So if I remove the event listener for 'finish', then the check |if (!VideoQueue().AtEndOfStream()| above won't be needed anymore. Or I should remove the event listener but use |if (!VideoQueue().IsFinished())| instead ? > @@ +341,5 @@ > > +void > > +VideoSink::UpdateRenderedVideoFrames() > > +{ > > + AssertOwnerThread(); > > + MOZ_ASSERT(mAudioSink && mAudioSink->IsPlaying(), > > Assert mAudioSink is not null in the constructor once and for all. > OK > ::: dom/media/mediasink/VideoSink.h > @@ +27,5 @@ > > > > class VideoSink : public MediaSink > > { > > public: > > + typedef mozilla::layers::ImageContainer::ProducerID ProducerID; > > this typedef doesn't need to be public. > OK > @@ +141,5 @@ > > + // A delay duration to trigger next time UpdateRenderedVideoFrames(). > > + // Based on the default value in MDSM. > > + const int mDelayDuration; > > + > > + // Based on the pref value obtained in MDSM. > > Rephrase the comment because it doesn't really tell you much. Ok. Thanks.
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #48) > According to Comment 32, you mentioned that "MediaSink is responsible for > flushing video frames if the playback is paused", I've checked MDSM, > VideoQueue() will be reset when (Entering dormant, shutdown, Initiating > Seek), and I agreed your point. That's why I added VideoQueue().Reset() > there. By flushing, I am talking about the underlying compositor. Since we send a batch of video frames to the compositor at a time, it is possible for playback to be paused before all frames are rendered by the compositor. We need a way to tell the compositor to stop rendering remaining frames when playback is paused. Otherwise we will see video frames keep changing after pause. This is already handled by |RenderVideoFrames(1)|. Furthermore, VideoSink as a consumer of video frames, should only read from but never write to the queue, right? > Do you mean that "reset the queue" should be moved to consumer (MediaSink) > according to its operating state in the future ? No. See the reason above.
(In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #48) > I added the check |if (!VideoQueue().AtEndOfStream())|, because there's no > need to render if the queue is finished. So if I remove the event listener > for 'finish', then the check |if (!VideoQueue().AtEndOfStream()| above won't > be needed anymore. > > Or I should remove the event listener but use |if > (!VideoQueue().IsFinished())| instead ? I am afraid this would miss the last frame. If MDSM push the last frame to the queue and finish it, VideoSink will not render the last frame, right?
(In reply to JW Wang [:jwwang] from comment #50) > (In reply to Kilik Kuo [:kikuo][:kilikkuo] from comment #48) > > I added the check |if (!VideoQueue().AtEndOfStream())|, because there's no > > need to render if the queue is finished. So if I remove the event listener > > for 'finish', then the check |if (!VideoQueue().AtEndOfStream()| above won't > > be needed anymore. > > > > Or I should remove the event listener but use |if > > (!VideoQueue().IsFinished())| instead ? > > I am afraid this would miss the last frame. If MDSM push the last frame to > the queue and finish it, VideoSink will not render the last frame, right? Per discussion, I'll remove the event listener for 'finish' and the check in |OnVideoQueueEvent()|. Thanks for your feedback.
Addressed issues in Comment 48.
Attachment #8673136 - Attachment is obsolete: true
Attachment #8673648 - Flags: review?(jwwang)
Attachment #8673648 - Flags: review?(jwwang) → review+
Thanks for review and all the guidance :)
Summary: Create base class VideoSink and encapsulate MDSM::UpdateRenderVideoFrame related-logic into DecodedVideoDataSink → Create VideoSink subclass from MediaSink and encapsulate MDSM::UpdateRenderVideoFrame related-logic into it.
Try run: https://treeherder.mozilla.org/#/jobs?repo=try&revision=dd348414c10b
It seems a lot of the failures are related to Bug 1143613. I'll try to fix that first.
After tracing the code in dom/media/webaudio, I found I made a stupid mistake... The volume is not set correctly on DecodedStream, which means the mStream (an AudioNodeExternalInputStream) contained in MediaElementAudioSourceNode cannot obtain the correct volume while copying the input chunk to the script processing node. I need to override SetVolume / SetPreservesPitch for the AudioSink contained in VideoSink.
Attachment #8675314 - Flags: review?(jwwang)
Attachment #8675314 - Flags: review?(jwwang) → review+
Rebase for nsRefPtr & carry r+
Attachment #8671839 - Attachment is obsolete: true
Attachment #8675585 - Flags: review+
Rebase for nsRefPtr & carry r+.
Attachment #8673648 - Attachment is obsolete: true
Attachment #8675589 - Flags: review+
Rebase and carry r+
Attachment #8675314 - Attachment is obsolete: true
Attachment #8675591 - Flags: review+
Keywords: checkin-needed
Depends on: 1230141