Closed Bug 864654 Opened 12 years ago Closed 11 years ago

Video RTCP SR's are never sent because of conduit split

Categories

(Core :: WebRTC: Audio/Video, defect)

Priority: Not set
Severity: critical

Tracking

Status: RESOLVED FIXED
Target Milestone: mozilla27

Tracking Status
firefox25 --- wontfix
firefox26 --- affected
firefox27 --- verified disabled

People

(Reporter: jesup, Assigned: jesup)

References

(Blocks 1 open bug)

Details

(Whiteboard: [WebRTC] [blocking-webrtc-])

Attachments

(8 files, 7 obsolete files)

47.96 KB, patch
2.73 KB, patch
2.74 KB, patch
62.75 KB, patch
17.87 KB, patch
2.78 KB, patch
41.79 KB, patch
15.67 KB, patch (ekr: review+)
Attached patch WIP refactor of the AudioConduit (obsolete)
Refactor AudioConduits and VideoConduits to create wrapper objects for the Voice/VideoEngines and avoid ugly cross-object transference.
WIP: survives a quick test; still needs to be extended to VideoConduits, and maybe a little cleanup of the PCMedia conduit storage
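
For readers following along, here is a minimal sketch of the "wrapper object" idea this comment describes. It is illustrative only, not the WIP patch itself; Engine, SharedEngine, and Conduit are made-up stand-ins for the real webrtc::VoiceEngine/VideoEngine and conduit classes.

// Illustrative sketch only; types are stand-ins, not the actual tree.
#include <memory>
#include <utility>

struct Engine {                              // stand-in for a GIPS engine
  static Engine* Create() { return new Engine(); }
  static void Delete(Engine* e) { delete e; }
};

// Refcounted holder that owns the engine; the last conduit to release it
// deletes the engine, so no conduit needs a raw pointer into its peer.
class SharedEngine {
 public:
  SharedEngine() : mEngine(Engine::Create(), &Engine::Delete) {}
  Engine* get() const { return mEngine.get(); }
 private:
  std::shared_ptr<Engine> mEngine;
};

class Conduit {
 public:
  explicit Conduit(std::shared_ptr<SharedEngine> aEngine)
    : mEngine(std::move(aEngine)) {}       // no mOtherDirection back-pointer
 private:
  std::shared_ptr<SharedEngine> mEngine;
};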
Attachment #740671 - Attachment is obsolete: true
Comment on attachment 741171 [details] [diff] [review] make AudioConduits bidirectional, with Send and Receive Transports (WIP, working) 12233 ># HG changeset patch ># User Randell Jesup <rjesup@jesup.org> ># Date 1366781111 14400 ># Node ID 79c11487f5347366db0d4d28c7f46402d55ba138 ># Parent 4d94e34d8eb46863fc8ee614550f6eaa8205ef52 >Bug 864654: make AudioConduits bidirectional, with Send and Receive Transports (WIP, working) > >diff --git a/media/webrtc/signaling/src/media-conduit/AudioConduit.cpp b/media/webrtc/signaling/src/media-conduit/AudioConduit.cpp >--- a/media/webrtc/signaling/src/media-conduit/AudioConduit.cpp >+++ b/media/webrtc/signaling/src/media-conduit/AudioConduit.cpp >@@ -21,26 +21,26 @@ namespace mozilla { > static const char* logTag ="WebrtcAudioSessionConduit"; > > // 32 bytes is what WebRTC CodecInst expects > const unsigned int WebrtcAudioConduit::CODEC_PLNAME_SIZE = 32; > > /** > * Factory Method for AudioConduit > */ >-mozilla::RefPtr<AudioSessionConduit> AudioSessionConduit::Create(AudioSessionConduit *aOther) >+mozilla::RefPtr<AudioSessionConduit> AudioSessionConduit::Create() > { > CSFLogDebug(logTag, "%s ", __FUNCTION__); > #ifdef MOZILLA_INTERNAL_API > // unit tests create their own "main thread" > NS_ASSERTION(NS_IsMainThread(), "Only call on main thread"); > #endif > > WebrtcAudioConduit* obj = new WebrtcAudioConduit(); >- if(obj->Init(static_cast<WebrtcAudioConduit*>(aOther)) != kMediaConduitNoError) >+ if(obj->Init()) > { > CSFLogError(logTag, "%s AudioConduit Init Failed ", __FUNCTION__); > delete obj; > return NULL; > } > CSFLogDebug(logTag, "%s Successfully created AudioConduit ", __FUNCTION__); > return obj; > } >@@ -51,117 +51,97 @@ mozilla::RefPtr<AudioSessionConduit> Aud > WebrtcAudioConduit::~WebrtcAudioConduit() > { > #ifdef MOZILLA_INTERNAL_API > // unit tests create their own "main thread" > NS_ASSERTION(NS_IsMainThread(), "Only call on main thread"); > #endif > > CSFLogDebug(logTag, "%s ", __FUNCTION__); >+ > for(std::vector<AudioCodecConfig*>::size_type i=0;i < mRecvCodecList.size();i++) > { > delete mRecvCodecList[i]; > } > > delete mCurSendCodecConfig; > > // The first one of a pair to be deleted shuts down media for both > if(mPtrVoEXmedia) > { >- if (!mShutDown) { >- mPtrVoEXmedia->SetExternalRecordingStatus(false); >- mPtrVoEXmedia->SetExternalPlayoutStatus(false); >- } >+ mPtrVoEXmedia->SetExternalRecordingStatus(false); >+ mPtrVoEXmedia->SetExternalPlayoutStatus(false); > mPtrVoEXmedia->Release(); > } > > if(mPtrVoEProcessing) > { > mPtrVoEProcessing->Release(); > } > > //Deal with the transport > if(mPtrVoENetwork) > { >- if (!mShutDown) { >- mPtrVoENetwork->DeRegisterExternalTransport(mChannel); >- } >+ mPtrVoENetwork->DeRegisterExternalTransport(mChannel); > mPtrVoENetwork->Release(); > } > > if(mPtrVoECodec) > { > mPtrVoECodec->Release(); > } > > if(mPtrVoEBase) > { >- if (!mShutDown) { >- mPtrVoEBase->StopPlayout(mChannel); >- mPtrVoEBase->StopSend(mChannel); >- mPtrVoEBase->StopReceive(mChannel); >- mPtrVoEBase->DeleteChannel(mChannel); >- mPtrVoEBase->Terminate(); >- } >+ mPtrVoEBase->StopPlayout(mChannel); >+ mPtrVoEBase->StopSend(mChannel); >+ mPtrVoEBase->StopReceive(mChannel); >+ mPtrVoEBase->DeleteChannel(mChannel); >+ mPtrVoEBase->Terminate(); > mPtrVoEBase->Release(); > } > >- if (mOtherDirection) >+ // only one opener can call Delete. Have it be the last to close. >+ if(mVoiceEngine) > { >- // mOtherDirection owns these now! 
>- mOtherDirection->mOtherDirection = NULL; >- // let other side we terminated the channel >- mOtherDirection->mShutDown = true; >- mVoiceEngine = nullptr; >- } else { >- // only one opener can call Delete. Have it be the last to close. >- if(mVoiceEngine) >- { >- webrtc::VoiceEngine::Delete(mVoiceEngine); >- } >+ webrtc::VoiceEngine::Delete(mVoiceEngine); > } > } > > /* > * WebRTCAudioConduit Implementation > */ >-MediaConduitErrorCode WebrtcAudioConduit::Init(WebrtcAudioConduit *other) >+MediaConduitErrorCode WebrtcAudioConduit::Init() > { >- CSFLogDebug(logTag, "%s this=%p other=%p", __FUNCTION__, this, other); >+ // the AudioSessionConduit::Create() call already asserts mainthread >+ CSFLogDebug(logTag, "%s", __FUNCTION__); > >- if (other) { >- MOZ_ASSERT(!other->mOtherDirection); >- other->mOtherDirection = this; >- mOtherDirection = other; >+ // Only init once >+ MOZ_ASSERT(!mVoiceEngine); > >- // only one can call ::Create()/GetVoiceEngine() >- MOZ_ASSERT(other->mVoiceEngine); >- mVoiceEngine = other->mVoiceEngine; >- } else { >- //Per WebRTC APIs below function calls return NULL on failure >- if(!(mVoiceEngine = webrtc::VoiceEngine::Create())) >- { >- CSFLogError(logTag, "%s Unable to create voice engine", __FUNCTION__); >- return kMediaConduitSessionNotInited; >+ //Per WebRTC APIs below function calls return NULL on failure >+ if(!(mVoiceEngine = webrtc::VoiceEngine::Create())) >+ { >+ CSFLogError(logTag, "%s Unable to create voice engine", __FUNCTION__); >+ return kMediaConduitSessionNotInited; >+ } >+ >+ PRLogModuleInfo *logs = GetWebRTCLogInfo(); >+ if (!gWebrtcTraceLoggingOn && logs && logs->level > 0) { >+ // no need to a critical section or lock here >+ gWebrtcTraceLoggingOn = 1; >+ >+ const char *file = PR_GetEnv("WEBRTC_TRACE_FILE"); >+ if (!file) { >+ file = "WebRTC.log"; > }123 >- >- PRLogModuleInfo *logs = GetWebRTCLogInfo(); >- if (!gWebrtcTraceLoggingOn && logs && logs->level > 0) { >- // no need to a critical section or lock here >- gWebrtcTraceLoggingOn = 1; >- >- const char *file = PR_GetEnv("WEBRTC_TRACE_FILE"); >- if (!file) { >- file = "WebRTC.log"; >- } >- CSFLogDebug(logTag, "%s Logging webrtc to %s level %d", __FUNCTION__, >- file, logs->level); >- mVoiceEngine->SetTraceFilter(logs->level); >- mVoiceEngine->SetTraceFile(file); >- } >+ CSFLogDebug(logTag, "%s Logging webrtc to %s level %d", __FUNCTION__, >+ file, logs->level); >+ mVoiceEngine->SetTraceFilter(logs->level); >+ mVoiceEngine->SetTraceFile(file); > } > > if(!(mPtrVoEBase = VoEBase::GetInterface(mVoiceEngine))) > { > CSFLogError(logTag, "%s Unable to initialize VoEBase", __FUNCTION__); > return kMediaConduitSessionNotInited; > } > >@@ -184,82 +164,89 @@ MediaConduitErrorCode WebrtcAudioConduit > } > > if(!(mPtrVoEXmedia = VoEExternalMedia::GetInterface(mVoiceEngine))) > { > CSFLogError(logTag, "%s Unable to initialize VoEExternalMedia", __FUNCTION__); > return kMediaConduitSessionNotInited; > } > >- if (other) { >- mChannel = other->mChannel; >- } else { >- // init the engine with our audio device layer >- if(mPtrVoEBase->Init() == -1) >- { >- CSFLogError(logTag, "%s VoiceEngine Base Not Initialized", __FUNCTION__); >- return kMediaConduitSessionNotInited; >- } >+ // init the engine with our audio device layer >+ if(mPtrVoEBase->Init() == -1) >+ { >+ CSFLogError(logTag, "%s VoiceEngine Base Not Initialized", __FUNCTION__); >+ return kMediaConduitSessionNotInited; >+ } > >- if( (mChannel = mPtrVoEBase->CreateChannel()) == -1) >- { >- CSFLogError(logTag, "%s VoiceEngine Channel creation 
failed",__FUNCTION__); >- return kMediaConduitChannelError; >- } >+ if( (mChannel = mPtrVoEBase->CreateChannel()) == -1) >+ { >+ CSFLogError(logTag, "%s VoiceEngine Channel creation failed",__FUNCTION__); >+ return kMediaConduitChannelError; >+ } > >- CSFLogDebug(logTag, "%s Channel Created %d ",__FUNCTION__, mChannel); >+ CSFLogDebug(logTag, "%s Channel Created %d ",__FUNCTION__, mChannel); > >- if(mPtrVoENetwork->RegisterExternalTransport(mChannel, *this) == -1) >- { >- CSFLogError(logTag, "%s VoiceEngine, External Transport Failed",__FUNCTION__); >- return kMediaConduitTransportRegistrationFail; >- } >+ if(mPtrVoENetwork->RegisterExternalTransport(mChannel, *this) == -1) >+ { >+ CSFLogError(logTag, "%s VoiceEngine, External Transport Failed",__FUNCTION__); >+ return kMediaConduitTransportRegistrationFail; >+ } > >- if(mPtrVoEXmedia->SetExternalRecordingStatus(true) == -1) >- { >- CSFLogError(logTag, "%s SetExternalRecordingStatus Failed %d",__FUNCTION__, >- mPtrVoEBase->LastError()); >- return kMediaConduitExternalPlayoutError; >- } >+ if(mPtrVoEXmedia->SetExternalRecordingStatus(true) == -1) >+ { >+ CSFLogError(logTag, "%s SetExternalRecordingStatus Failed %d",__FUNCTION__, >+ mPtrVoEBase->LastError()); >+ return kMediaConduitExternalPlayoutError; >+ } > >- if(mPtrVoEXmedia->SetExternalPlayoutStatus(true) == -1) >- { >- CSFLogError(logTag, "%s SetExternalPlayoutStatus Failed %d ",__FUNCTION__, >- mPtrVoEBase->LastError()); >- return kMediaConduitExternalRecordingError; >- } >- CSFLogDebug(logTag , "%s AudioSessionConduit Initialization Done (%p)",__FUNCTION__, this); >+ if(mPtrVoEXmedia->SetExternalPlayoutStatus(true) == -1) >+ { >+ CSFLogError(logTag, "%s SetExternalPlayoutStatus Failed %d ",__FUNCTION__, >+ mPtrVoEBase->LastError()); >+ return kMediaConduitExternalRecordingError; > } >+ CSFLogDebug(logTag , "%s AudioSessionConduit Initialization Done",__FUNCTION__); >+ > return kMediaConduitNoError; > } > > // AudioSessionConduit Implementation > MediaConduitErrorCode >-WebrtcAudioConduit::AttachTransport(mozilla::RefPtr<TransportInterface> aTransport) >+WebrtcAudioConduit::AttachTransport(mozilla::RefPtr<TransportInterface> aTransport, >+ bool aIsReceive) > { > CSFLogDebug(logTag, "%s ", __FUNCTION__); > > if(!aTransport) > { > CSFLogError(logTag, "%s NULL Transport", __FUNCTION__); > return kMediaConduitInvalidTransport; > } > // set the transport >- mTransport = aTransport; >+ if (aIsReceive) >+ mReceiveTransport = aTransport; >+ else >+ mSendTransport = aTransport; >+ > return kMediaConduitNoError; > } > > MediaConduitErrorCode > WebrtcAudioConduit::ConfigureSendMediaCodec(const AudioCodecConfig* codecConfig) > { > CSFLogDebug(logTag, "%s ", __FUNCTION__); > MediaConduitErrorCode condError = kMediaConduitNoError; > int error = 0;//webrtc engine errors > webrtc::CodecInst cinst; > >+ if (!mSendTransport) >+ { >+ CSFLogError(logTag, "%s without having a Send-side Transport", __FUNCTION__); >+ } >+ > //validate codec param > if((condError = ValidateCodecConfig(codecConfig, true)) != kMediaConduitNoError) > { > return condError; > } > > //are we transmitting already, stop and apply the send codec > if(mEngineTransmitting) >@@ -349,16 +336,21 @@ MediaConduitErrorCode > WebrtcAudioConduit::ConfigureRecvMediaCodecs( > const std::vector<AudioCodecConfig*>& codecConfigList) > { > CSFLogDebug(logTag, "%s ", __FUNCTION__); > MediaConduitErrorCode condError = kMediaConduitNoError; > int error = 0; //webrtc engine errors > bool success = false; > >+ if (!mReceiveTransport) >+ { >+ 
CSFLogError(logTag, "%s without having a Receive-side Transport", __FUNCTION__); >+ } >+ > // are we receiving already. If so, stop receiving and playout > // since we can't apply new recv codec when the engine is playing > if(mEngineReceiving) > { > CSFLogDebug(logTag, "%s Engine Already Receiving. Attemping to Stop ", __FUNCTION__); > // AudioEngine doesn't fail fatal on stop reception. Ref:voe_errors.h. > // hence we need-not be strict in failing here on error > mPtrVoEBase->StopReceive(mChannel); >@@ -624,74 +616,54 @@ WebrtcAudioConduit::ReceivedRTCPPacket(c > return kMediaConduitSessionNotInited; > } > //good here > return kMediaConduitNoError; > } > > //WebRTC::RTP Callback Implementation > >+// Called on MSG thread > int WebrtcAudioConduit::SendPacket(int channel, const void* data, int len) > { > CSFLogDebug(logTag, "%s : channel %d %s",__FUNCTION__,channel, > (mEngineReceiving && mOtherDirection) ? "(using mOtherDirection)" : ""); > >- if (mEngineReceiving) >+ if(mSendTransport && (mSendTransport->SendRtpPacket(data, len) == NS_OK)) > { >- if (mOtherDirection) >- { >- return mOtherDirection->SendPacket(channel, data, len); >- } >- CSFLogDebug(logTag, "%s : Asked to send RTP without an RTP sender on channel %d", >- __FUNCTION__, channel); >+ CSFLogDebug(logTag, "%s Sent RTP Packet ", __FUNCTION__); >+ return len; >+ } else { >+ CSFLogError(logTag, "%s RTP Packet Send Failed ", __FUNCTION__); > return -1; >- } else { >- if(mTransport && (mTransport->SendRtpPacket(data, len) == NS_OK)) >- { >- CSFLogDebug(logTag, "%s Sent RTP Packet ", __FUNCTION__); >- return len; >- } else { >- CSFLogError(logTag, "%s RTP Packet Send Failed ", __FUNCTION__); >- return -1; >- } > } > } > >+// Called on WebRTC Process thread > int WebrtcAudioConduit::SendRTCPPacket(int channel, const void* data, int len) > { > CSFLogDebug(logTag, "%s : channel %d", __FUNCTION__, channel); > >- if (mEngineTransmitting) >+ if(mReceiveTransport && mReceiveTransport->SendRtcpPacket(data, len) == NS_OK) > { >- if (mOtherDirection) >- { >- return mOtherDirection->SendRTCPPacket(channel, data, len); >- } >- CSFLogDebug(logTag, "%s : Asked to send RTCP without an RTP receiver on channel %d", >- __FUNCTION__, channel); >+ CSFLogDebug(logTag, "%s Sent RTCP Packet ", __FUNCTION__); >+ return len; >+ } else { >+ CSFLogError(logTag, "%s RTCP Packet Send Failed ", __FUNCTION__); > return -1; >- } else { >- if(mTransport && mTransport->SendRtcpPacket(data, len) == NS_OK) >- { >- CSFLogDebug(logTag, "%s Sent RTCP Packet ", __FUNCTION__); >- return len; >- } else { >- CSFLogError(logTag, "%s RTCP Packet Send Failed ", __FUNCTION__); >- return -1; >- } > } > } > > /** > * Converts between CodecConfig to WebRTC Codec Structure. 
> */ > > bool > WebrtcAudioConduit::CodecConfigToWebRTCCodec(const AudioCodecConfig* codecInfo, >- webrtc::CodecInst& cinst) >+ webrtc::CodecInst& cinst) > { > const unsigned int plNameLength = codecInfo->mName.length()+1; > memset(&cinst, 0, sizeof(webrtc::CodecInst)); > if(sizeof(cinst.plname) < plNameLength) > { > CSFLogError(logTag, "%s Payload name buffer capacity mismatch ", > __FUNCTION__); > return false; >@@ -749,17 +721,17 @@ WebrtcAudioConduit::CopyCodecToDB(const > return true; > } > > /** > * Checks if 2 codec structs are same > */ > bool > WebrtcAudioConduit::CheckCodecsForMatch(const AudioCodecConfig* curCodecConfig, >- const AudioCodecConfig* codecInfo) const >+ const AudioCodecConfig* codecInfo) const > { > if(!curCodecConfig) > { > return false; > } > > if(curCodecConfig->mType == codecInfo->mType && > (curCodecConfig->mName.compare(codecInfo->mName) == 0) && >@@ -795,17 +767,17 @@ WebrtcAudioConduit::CheckCodecForMatch(c > > > /** > * Perform validation on the codecCofig to be applied > * Verifies if the codec is already applied. > */ > MediaConduitErrorCode > WebrtcAudioConduit::ValidateCodecConfig(const AudioCodecConfig* codecInfo, >- bool send) const >+ bool send) const > { > bool codecAppliedAlready = false; > > if(!codecInfo) > { > CSFLogError(logTag, "%s Null CodecConfig ", __FUNCTION__); > return kMediaConduitMalformedArgument; > } >diff --git a/media/webrtc/signaling/src/media-conduit/AudioConduit.h b/media/webrtc/signaling/src/media-conduit/AudioConduit.h >--- a/media/webrtc/signaling/src/media-conduit/AudioConduit.h >+++ b/media/webrtc/signaling/src/media-conduit/AudioConduit.h >@@ -32,63 +32,62 @@ > */ > > namespace mozilla { > > /** > * Concrete class for Audio session. Hooks up > * - media-source and target to external transport > */ >-class WebrtcAudioConduit:public AudioSessionConduit >- ,public webrtc::Transport >+class WebrtcAudioConduit : public AudioSessionConduit >+ , public webrtc::Transport > { > public: > //VoiceEngine defined constant for Payload Name Size. > static const unsigned int CODEC_PLNAME_SIZE; > > /** > * APIs used by the registered external transport to this Conduit to > * feed in received RTP Frames to the VoiceEngine for decoding > */ >- virtual MediaConduitErrorCode ReceivedRTPPacket(const void *data, int len); >+ virtual MediaConduitErrorCode ReceivedRTPPacket(const void *data, int len); > > /** > * APIs used by the registered external transport to this Conduit to > * feed in received RTCP Frames to the VoiceEngine for decoding > */ >- virtual MediaConduitErrorCode ReceivedRTCPPacket(const void *data, int len); >+ virtual MediaConduitErrorCode ReceivedRTCPPacket(const void *data, int len); > > /** > * Function to configure send codec for the audio session > * @param sendSessionConfig: CodecConfiguration > * @result: On Success, the audio engine is configured with passed in codec for send > * On failure, audio engine transmit functionality is disabled. > * NOTE: This API can be invoked multiple time. Invoking this API may involve restarting > * transmission sub-system on the engine. > */ >- virtual MediaConduitErrorCode ConfigureSendMediaCodec( >- const AudioCodecConfig* codecConfig); >+ virtual MediaConduitErrorCode ConfigureSendMediaCodec(const AudioCodecConfig* codecConfig); > /** > * Function to configure list of receive codecs for the audio session > * @param sendSessionConfig: CodecConfiguration > * @result: On Success, the audio engine is configured with passed in codec for send > * Also the playout is enabled. 
> * On failure, audio engine transmit functionality is disabled. > * NOTE: This API can be invoked multiple time. Invoking this API may involve restarting > * transmission sub-system on the engine. > */ >- virtual MediaConduitErrorCode ConfigureRecvMediaCodecs( >+ virtual MediaConduitErrorCode ConfigureRecvMediaCodecs( > const std::vector<AudioCodecConfig* >& codecConfigList); > > /** >- * Register External Transport to this Conduit. RTP and RTCP frames from the VoiceEnigne >+ * Register External Transport to this Conduit. RTP and RTCP frames from the VoiceEngine > * shall be passed to the registered transport for transporting externally. > */ >- virtual MediaConduitErrorCode AttachTransport( >- mozilla::RefPtr<TransportInterface> aTransport); >+ virtual MediaConduitErrorCode AttachTransport(mozilla::RefPtr<TransportInterface> aTransport, >+ bool aIsReceive); > > /** > * Function to deliver externally captured audio sample for encoding and transport > * @param audioData [in]: Pointer to array containing a frame of audio > * @param lengthSamples [in]: Length of audio frame in samples in multiple of 10 milliseconds > * Ex: Frame length is 160, 320, 440 for 16, 32, 44 kHz sampling rates > respectively. > audioData[] should be of lengthSamples in size >@@ -96,20 +95,20 @@ public: > samples of 16-bits each for a 10m audio frame. > * @param samplingFreqHz [in]: Frequency/rate of the sampling in Hz ( 16000, 32000 ...) > * @param capture_delay [in]: Approx Delay from recording until it is delivered to VoiceEngine > in milliseconds. > * NOTE: ConfigureSendMediaCodec() SHOULD be called before this function can be invoked > * This ensures the inserted audio-samples can be transmitted by the conduit > * > */ >- virtual MediaConduitErrorCode SendAudioFrame(const int16_t speechData[], >- int32_t lengthSamples, >- int32_t samplingFreqHz, >- int32_t capture_time); >+ virtual MediaConduitErrorCode SendAudioFrame(const int16_t speechData[], >+ int32_t lengthSamples, >+ int32_t samplingFreqHz, >+ int32_t capture_time); > > /** > * Function to grab a decoded audio-sample from the media engine for rendering > * / playoutof length 10 milliseconds. > * > * @param speechData [in]: Pointer to a array to which a 10ms frame of audio will be copied > * @param samplingFreqHz [in]: Frequency of the sampling for playback in Hertz (16000, 32000,..) > * @param capture_delay [in]: Estimated Time between reading of the samples to rendering/playback >@@ -141,30 +140,31 @@ public: > virtual int SendRTCPPacket(int channel, const void *data, int len) ; > > > > WebrtcAudioConduit(): > mOtherDirection(NULL), > mShutDown(false), > mVoiceEngine(NULL), >- mTransport(NULL), >+ mSendTransport(NULL), >+ mReceiveTransport(NULL), > mEngineTransmitting(false), > mEngineReceiving(false), > mChannel(-1), > mCurSendCodecConfig(NULL), > mCaptureDelay(150), > mEchoOn(true), > mEchoCancel(webrtc::kEcAec) > { > } > > virtual ~WebrtcAudioConduit(); > >- MediaConduitErrorCode Init(WebrtcAudioConduit *other); >+ MediaConduitErrorCode Init(); > > int GetChannel() { return mChannel; } > webrtc::VoiceEngine* GetVoiceEngine() { return mVoiceEngine; } > > private: > WebrtcAudioConduit(const WebrtcAudioConduit& other) MOZ_DELETE; > void operator=(const WebrtcAudioConduit& other) MOZ_DELETE; > >@@ -197,17 +197,18 @@ private: > > WebrtcAudioConduit* mOtherDirection; > // Other side has shut down our channel and related items already > bool mShutDown; > > // These are shared by both directions. 
They're released by the last > // conduit to die > webrtc::VoiceEngine* mVoiceEngine; >- mozilla::RefPtr<TransportInterface> mTransport; >+ mozilla::RefPtr<TransportInterface> mSendTransport; >+ mozilla::RefPtr<TransportInterface> mReceiveTransport; > webrtc::VoENetwork* mPtrVoENetwork; > webrtc::VoEBase* mPtrVoEBase; > webrtc::VoECodec* mPtrVoECodec; > webrtc::VoEExternalMedia* mPtrVoEXmedia; > webrtc::VoEAudioProcessing* mPtrVoEProcessing; > > //engine states of our interets > bool mEngineTransmitting; // If true => VoiceEngine Send-subsystem is up >diff --git a/media/webrtc/signaling/src/media-conduit/MediaConduitInterface.h b/media/webrtc/signaling/src/media-conduit/MediaConduitInterface.h >--- a/media/webrtc/signaling/src/media-conduit/MediaConduitInterface.h >+++ b/media/webrtc/signaling/src/media-conduit/MediaConduitInterface.h >@@ -129,17 +129,18 @@ public: > > > /** > * Function to attach Transport end-point of the Media conduit. > * @param aTransport: Reference to the concrete teansport implementation > * Note: Multiple invocations of this call , replaces existing transport with > * with the new one. > */ >- virtual MediaConduitErrorCode AttachTransport(RefPtr<TransportInterface> aTransport) = 0; >+ virtual MediaConduitErrorCode AttachTransport(RefPtr<TransportInterface> aTransport, >+ bool aIsReceive) = 0; > > NS_INLINE_DECL_THREADSAFE_REFCOUNTING(MediaSessionConduit) > > }; > > > /** > * MediaSessionConduit for video >@@ -218,17 +219,17 @@ class AudioSessionConduit : public Media > { > public: > > /** > * Factory function to create and initialize a Video Conduit Session > * return: Concrete VideoSessionConduitObject or NULL in the case > * of failure > */ >- static mozilla::RefPtr<AudioSessionConduit> Create(AudioSessionConduit *aOther); >+ static mozilla::RefPtr<AudioSessionConduit> Create(); > > virtual ~AudioSessionConduit() {} > > virtual Type type() const { return AUDIO; } > > > /** > * Function to deliver externally captured audio sample for encoding and transport >diff --git a/media/webrtc/signaling/src/media-conduit/VideoConduit.cpp b/media/webrtc/signaling/src/media-conduit/VideoConduit.cpp >--- a/media/webrtc/signaling/src/media-conduit/VideoConduit.cpp >+++ b/media/webrtc/signaling/src/media-conduit/VideoConduit.cpp >@@ -282,17 +282,18 @@ WebrtcVideoConduit::AttachRenderer(mozil > } > mEngineRendererStarted = true; > } > > return kMediaConduitNoError; > } > > MediaConduitErrorCode >-WebrtcVideoConduit::AttachTransport(mozilla::RefPtr<TransportInterface> aTransport) >+WebrtcVideoConduit::AttachTransport(mozilla::RefPtr<TransportInterface> aTransport, >+ bool aIsReceive) > { > CSFLogDebug(logTag, "%s ", __FUNCTION__); > if(!aTransport) > { > CSFLogError(logTag, "%s NULL Transport ", __FUNCTION__); > MOZ_ASSERT(PR_FALSE); > return kMediaConduitInvalidTransport; > } >diff --git a/media/webrtc/signaling/src/media-conduit/VideoConduit.h b/media/webrtc/signaling/src/media-conduit/VideoConduit.h >--- a/media/webrtc/signaling/src/media-conduit/VideoConduit.h >+++ b/media/webrtc/signaling/src/media-conduit/VideoConduit.h >@@ -95,17 +95,18 @@ public: > */ > virtual MediaConduitErrorCode ConfigureRecvMediaCodecs( > const std::vector<VideoCodecConfig* >& codecConfigList); > > /** > * Register External Transport to this Conduit. RTP and RTCP frames from the VoiceEnigne > * shall be passed to the registered transport for transporting externally. 
> */ >- virtual MediaConduitErrorCode AttachTransport(mozilla::RefPtr<TransportInterface> aTransport); >+ virtual MediaConduitErrorCode AttachTransport(mozilla::RefPtr<TransportInterface> aTransport, >+ bool aIsReceive); > > /** > * Function to deliver a capture video frame for encoding and transport > * @param video_frame: pointer to captured video-frame. > * @param video_frame_length: size of the frame > * @param width, height: dimensions of the frame > * @param video_type: Type of the video frame - I420, RAW > * @param captured_time: timestamp when the frame was captured. >diff --git a/media/webrtc/signaling/src/media/VcmSIPCCBinding.cpp b/media/webrtc/signaling/src/media/VcmSIPCCBinding.cpp >--- a/media/webrtc/signaling/src/media/VcmSIPCCBinding.cpp >+++ b/media/webrtc/signaling/src/media/VcmSIPCCBinding.cpp >@@ -1357,21 +1357,22 @@ static int vcmRxStartICE_m(cc_mcapid_t m > CSFLogError( logTag, "Could not create RTCP flow"); > return VCM_ERROR; > } > > if (CC_IS_AUDIO(mcap_id)) { > std::vector<mozilla::AudioCodecConfig *> configs; > > // Instantiate an appropriate conduit >- mozilla::RefPtr<mozilla::AudioSessionConduit> tx_conduit = >+ mozilla::RefPtr<mozilla::AudioSessionConduit> conduit = > pc.impl()->media()->GetConduit(level, false); > >- mozilla::RefPtr<mozilla::AudioSessionConduit> conduit = >- mozilla::AudioSessionConduit::Create(tx_conduit); >+ if (!conduit) { >+ conduit = mozilla::AudioSessionConduit::Create(); >+ } > if(!conduit) > return VCM_ERROR; > > pc.impl()->media()->AddConduit(level, true, conduit); > > mozilla::AudioCodecConfig *config_raw; > > for(int i=0; i <num_payloads ; i++) >@@ -2010,21 +2011,22 @@ static int vcmTxStartICE_m(cc_mcapid_t m > payload->audio.packet_size, > payload->audio.channels, > payload->audio.bitrate); > > // Take possession of this pointer > mozilla::ScopedDeletePtr<mozilla::AudioCodecConfig> config(config_raw); > > // Instantiate an appropriate conduit >- mozilla::RefPtr<mozilla::AudioSessionConduit> rx_conduit = >+ mozilla::RefPtr<mozilla::AudioSessionConduit> conduit = > pc.impl()->media()->GetConduit(level, true); > >- mozilla::RefPtr<mozilla::AudioSessionConduit> conduit = >- mozilla::AudioSessionConduit::Create(rx_conduit); >+ if (!conduit) { >+ conduit = mozilla::AudioSessionConduit::Create(); >+ } > > if (!conduit || conduit->ConfigureSendMediaCodec(config)) > return VCM_ERROR; > > pc.impl()->media()->AddConduit(level, false, conduit); > > mozilla::RefPtr<mozilla::MediaPipeline> pipeline = > new mozilla::MediaPipelineTransmit( >diff --git a/media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp b/media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp >--- a/media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp >+++ b/media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp >@@ -62,17 +62,17 @@ nsresult MediaPipeline::Init() { > &MediaPipeline::Init_s), > NS_DISPATCH_NORMAL); > > return NS_OK; > } > > nsresult MediaPipeline::Init_s() { > ASSERT_ON_THREAD(sts_thread_); >- conduit_->AttachTransport(transport_); >+ conduit_->AttachTransport(transport_, direction_); > > nsresult res; > MOZ_ASSERT(rtp_transport_); > // Look to see if the transport is ready > rtp_transport_->SignalStateChange.connect(this, > &MediaPipeline::StateChange); > > if (rtp_transport_->state() == TransportLayer::TS_OPEN) { >diff --git a/media/webrtc/signaling/test/mediaconduit_unittests.cpp b/media/webrtc/signaling/test/mediaconduit_unittests.cpp >--- a/media/webrtc/signaling/test/mediaconduit_unittests.cpp >+++ 
b/media/webrtc/signaling/test/mediaconduit_unittests.cpp >@@ -273,17 +273,17 @@ void AudioSendAndReceive::GenerateAndRea > int sampleLengthDecoded = 0; > int SAMPLES = (PLAYOUT_SAMPLE_FREQUENCY * 10); //10 seconds > int CHANNELS = 1; //mono audio > int sampleLengthInBytes = sizeof(audioInput); > //generated audio buffer > inbuf = (short *)moz_xmalloc(sizeof(short)*SAMPLES*CHANNELS); > memset(audioInput,0,sampleLengthInBytes); > memset(audioOutput,0,sampleLengthInBytes); >- MOZ_ASSERT(SAMPLES <= PLAYOUT_SAMPLE_LENGTH); >+ MOZ_ASSERT((unsigned int) SAMPLES <= PLAYOUT_SAMPLE_LENGTH); > > FILE* inFile = fopen( iFile.c_str(), "wb+"); > if(!inFile) { > cerr << "Input File Creation Failed " << endl; > return; > } > > FILE* outFile = fopen( oFile.c_str(), "wb+"); >@@ -485,33 +485,33 @@ class TransportConduitTest : public ::te > { > } > > //1. Dump audio samples to dummy external transport > void TestDummyAudioAndTransport() > { > //get pointer to AudioSessionConduit > int err=0; >- mAudioSession = mozilla::AudioSessionConduit::Create(NULL); >+ mAudioSession = mozilla::AudioSessionConduit::Create(); > if( !mAudioSession ) > ASSERT_NE(mAudioSession, (void*)NULL); > >- mAudioSession2 = mozilla::AudioSessionConduit::Create(NULL); >+ mAudioSession2 = mozilla::AudioSessionConduit::Create(); > if( !mAudioSession2 ) > ASSERT_NE(mAudioSession2, (void*)NULL); > > FakeMediaTransport* xport = new FakeMediaTransport(); > ASSERT_NE(xport, (void*)NULL); > xport->SetAudioSession(mAudioSession, mAudioSession2); > mAudioTransport = xport; > > // attach the transport to audio-conduit >- err = mAudioSession->AttachTransport(mAudioTransport); >+ err = mAudioSession->AttachTransport(mAudioTransport, false); > ASSERT_EQ(mozilla::kMediaConduitNoError, err); >- err = mAudioSession2->AttachTransport(mAudioTransport); >+ err = mAudioSession2->AttachTransport(mAudioTransport, true); > ASSERT_EQ(mozilla::kMediaConduitNoError, err); > > //configure send and recv codecs on the audio-conduit > //mozilla::AudioCodecConfig cinst1(124,"PCMU",8000,80,1,64000); > mozilla::AudioCodecConfig cinst1(124,"opus",48000,960,1,64000); > mozilla::AudioCodecConfig cinst2(125,"L16",16000,320,1,256000); > > >@@ -563,19 +563,19 @@ class TransportConduitTest : public ::te > FakeMediaTransport* xport = new FakeMediaTransport(); > ASSERT_NE(xport, (void*)NULL); > xport->SetVideoSession(mVideoSession,mVideoSession2); > mVideoTransport = xport; > > // attach the transport and renderer to video-conduit > err = mVideoSession2->AttachRenderer(mVideoRenderer); > ASSERT_EQ(mozilla::kMediaConduitNoError, err); >- err = mVideoSession->AttachTransport(mVideoTransport); >+ err = mVideoSession->AttachTransport(mVideoTransport, false); > ASSERT_EQ(mozilla::kMediaConduitNoError, err); >- err = mVideoSession2->AttachTransport(mVideoTransport); >+ err = mVideoSession2->AttachTransport(mVideoTransport, true); > ASSERT_EQ(mozilla::kMediaConduitNoError, err); > > //configure send and recv codecs on theconduit > mozilla::VideoCodecConfig cinst1(120, "VP8", 640, 480); > mozilla::VideoCodecConfig cinst2(124, "I420", 640, 480); > > > std::vector<mozilla::VideoCodecConfig* > rcvCodecList; >diff --git a/media/webrtc/signaling/test/mediapipeline_unittest.cpp b/media/webrtc/signaling/test/mediapipeline_unittest.cpp >--- a/media/webrtc/signaling/test/mediapipeline_unittest.cpp >+++ b/media/webrtc/signaling/test/mediapipeline_unittest.cpp >@@ -44,17 +44,17 @@ MtransportTestUtils *test_utils; > namespace { > class TestAgent { > public: > TestAgent() : > audio_flow_(new 
TransportFlow()), > audio_prsock_(new TransportLayerPrsock()), > audio_dtls_(new TransportLayerDtls()), > audio_config_(109, "opus", 48000, 960, 2, 64000), >- audio_conduit_(mozilla::AudioSessionConduit::Create(NULL)), >+ audio_conduit_(mozilla::AudioSessionConduit::Create()), > audio_(), > audio_pipeline_(), > video_flow_(new TransportFlow()), > video_prsock_(new TransportLayerPrsock()), > video_config_(120, "VP8", 640, 480), > video_conduit_(mozilla::VideoSessionConduit::Create()), > video_(), > video_pipeline_() {
Attachment #741171 - Attachment is obsolete: true
Target Milestone: --- → mozilla23
Comment on attachment 741699 [details] [diff] [review]
make Audio/VideoConduits bidirectional, with Send and Receive Transports

Ready to start reviews on this, I think. We may have more to do with pipelines, but I think that's largely separate from this in any case; the exposure of this to the pipelines is limited. Passes the mediaconduit and mediapipeline unit tests. Note that those tests use unidirectional video/audio; this patch does not deal with the case where there is no transport/pipeline for sending RTCP from the sender (RTCP SR); that must be addressed separately (see the sendonly/recvonly media bug). Thus, with logging on, you'll see RTCP send failures logged at points in the tests.
Attachment #741699 - Flags: review?(tterribe)
Attachment #741699 - Flags: review?(ekr)
Oops, just uploaded the delta patch for video instead of the entire thing
Attachment #741855 - Flags: review?(tterribe)
Attachment #741855 - Flags: review?(ekr)
Attachment #741699 - Attachment is obsolete: true
Attachment #741699 - Flags: review?(tterribe)
Attachment #741699 - Flags: review?(ekr)
Blocks: 859971
I'll write up some details about why this is useful and what practical differences it makes; I'll do that sometime before the middle of next week.
Attachment #741855 - Attachment is obsolete: true
Attachment #741855 - Flags: review?(tterribe)
Attachment #741855 - Flags: review?(ekr)
Attachment #770384 - Flags: review?(tterribe)
Attachment #770384 - Flags: review?(ekr)
Unbitrotted. This fixes the fact that we're not sending video RTCPs, in particular the periodic RTCPs (SR/RR). This will be a must-have for supporting unidirectional video or audio. In bidirectional video we'll miss a fair number of RTCPs, especially when packet loss is non-existent. It also simplifies the entire setup, where audio conduits needed to be correlated and made to reference each other directly while using a single channel. Side effect: moved the "Inserted A Frame" log from Debug to Error. :-)
(In reply to Randell Jesup [:jesup] from comment #8)
> This fixes that we're not sending video RTCP's, in particular periodic RTCPs
> (SR/RR). This will be a must-have for support of unidirectional video or
> audio.
> In bidirectional video we'll miss a fair number of RTCPs, especially when
> packet loss is non-existant.

Can you give a precise set of STR that can be used to demonstrate each of these improvements? Right now we don't have tests for any of this, so it's not obvious what correct behavior is or how to verify that we're improving things with this patch.
In a video call, set signaling:5,webrtc_trace:65535 and WEBRTC_TRACE_FILE, and look for SendRTCPPacket Failed logs. The ones that are failing are the normal periodic RTCPs; it tries to send them from the "sender's" channel, which fails with split conduits. It's calling ::SendRTCP() from webrtc::ModuleRtpRtcpImpl::Process().

Also, since the two channels are split, the "receive" channel that's sending RTCPs will only send RR reports, no SRs (verified experimentally). No SRs means no SSRC/timestamp/NTP/count blocks. Without that information, the receiver cannot do A/V sync properly and must run "open" sync. Other video RTCPs (PLI, NACK, etc.) usually get sent from the "receive" channel, and thus go through (but prepended with an RR instead of an SR).

In unidirectional video, obviously all the RTCPs will be sent on one channel. If we're sending unidirectionally, no RTCPs will ever go through, which means no A/V sync.

This also completes the cleanup of the code from the patch that fixed AEC by making audio conduits share a channel while still being split; for RTCP it basically did an "am I sending RTP? If so, have the other conduit send this RTCP" dance. Shutdown of conduits is also less prone to race issues (though those had been resolved).
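
To make the failure mode above concrete, here is a heavily simplified sketch (stand-in types, not the actual Mozilla/webrtc.org code) of what happens when the engine hands the periodic compound RTCP to the send-side conduit of a split pair:

// Illustrative sketch only; simplified stand-in types, not the real tree.
#include <cstdio>

struct Transport { bool SendRtcpPacket(const void*, int) { return true; } };

struct SendSideVideoConduit {
  Transport* mRtcpTransport = nullptr;   // never attached on the send side

  // Callback the engine invokes from its periodic Process() tick for the
  // compound RTCP that would carry the SR.
  int SendRTCPPacket(int channel, const void* data, int len) {
    if (mRtcpTransport && mRtcpTransport->SendRtcpPacket(data, len))
      return len;
    std::fprintf(stderr, "SendRTCPPacket Failed on channel %d\n", channel);
    return -1;   // the SR never reaches the wire, so the receiver can't sync
  }
};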
As I understood our discussion at the work week, what Tim is asking for (and what I want to see) is some set of STR/measurements that show that some user-observable behavior is noticeably better with this patch.
(In reply to Eric Rescorla (:ekr) from comment #11) > As I understood our discussion at the work week, what Tim is asking for > (and what I want to see) is some set of STR/measurements that show > that some user-observable behavior is noticeably better with this > patch. To expand on this, perhaps some tests that showed that: (a) A/V sync works poorly now. (b) A/V sync works with this patch.
We don't have any A/V sync tests. Building them (while possible) will not be a trivial exercise, since we'd need an external method of verifying sync down to the 10-ish ms range. You can report the internal belief of what the sync is, but that's not the actual sync (for example, it would totally miss the 44/44.1 kHz issue), and it would miss any external-to-GIPS offsets that exist. Ironically, a speaker and a listener in a call is the best test for lip sync I've seen, and the only one I've seen in use, though I'm sure some companies have automated ones.

From test runs, I can tell you that no SRs were getting through (I can make an actual trigger test instead of just some conditional GDB breakpoints). With no SRs, correct A/V sync is not possible. The best you can do is run pseudo-sync: either display-as-soon-as-possible, or assume the first N frames were synced and base future sync on that offset, or something like that. I'm sure GIPS does one of these, perhaps a calculated offset.

Will that work? Yes, mostly. Drift can mess it up unless they use a long-term calculation of the effective timestamp rate (and they may), and lack of initial effective sync could also mess it up. The offset approach helps when the jitter buffer widens to handle increased jitter, since it allows the video pipeline to delay to match. Is it what we should be doing? No.
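
For background on why the SR matters here: each SR ties an RTP timestamp to an NTP wallclock time, which is what lets a receiver put audio and video on a common clock and compute the playout skew between them. A hedged, standalone illustration (not Mozilla code; the 48000/90000 clock rates are just the usual Opus and video RTP clock rates):

// Illustrative sketch only; not the actual A/V sync code.
#include <cstdint>

struct SenderReport {
  uint64_t ntp_time_us;    // NTP capture time from the SR, in microseconds
  uint32_t rtp_timestamp;  // RTP timestamp sampled at that same instant
};

// Map an RTP timestamp from this stream onto the sender's wallclock.
static int64_t RtpToWallclockUs(const SenderReport& sr, uint32_t rtp_ts,
                                uint32_t clock_rate_hz) {
  // Signed 32-bit difference copes with RTP timestamp wrap-around.
  int64_t delta_ticks = static_cast<int32_t>(rtp_ts - sr.rtp_timestamp);
  return static_cast<int64_t>(sr.ntp_time_us) +
         delta_ticks * 1000000 / static_cast<int64_t>(clock_rate_hz);
}

// A/V skew in microseconds for one audio block and one video frame.
static int64_t AvSkewUs(const SenderReport& audio_sr, uint32_t audio_ts,
                        const SenderReport& video_sr, uint32_t video_ts) {
  return RtpToWallclockUs(audio_sr, audio_ts, 48000) -   // typical Opus clock
         RtpToWallclockUs(video_sr, video_ts, 90000);    // standard video clock
}

Without SRs, neither stream can be placed on the sender's wallclock, so only the heuristic ("open") sync described above is possible.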
Target Milestone: mozilla23 → mozilla26
Retitled. This is blocking other A/V sync work.
Severity: normal → critical
Summary: Refactor Audio/Video conduits → Video RTCP SR's are never sent because of conduit split
Attachment #770384 - Flags: review?(tterribe)
Attachment #770384 - Attachment is obsolete: true
Attachment #770384 - Flags: review?(ekr)
Comment on attachment 804280 [details] [diff] [review]
make Audio/VideoConduits bidirectional, with Send and Receive Transports

Minor un-bitrot.
Attachment #804280 - Flags: review?(ekr)
Same patch, but with whitespace differences suppressed (hg qdiff -w) since indent level was changed in several spots
As I said on IRC:

1. I think this really needs to come with tests that measure that it does something useful, not just that it sends RTCP. Those tests should come in the form of actual measurements that the audio gets out of sync without this and otherwise stays in sync. This is the second set of patches that has claimed to improve sync, and it's now clear that the first ones could not have because of this bug. So, this time I would like to see actual tests. I understand your position from comment 13 that you think that's hard to test, but it shouldn't be that hard to produce *something* by introducing synchronized A/V, adding network delay, and verifying that the A/V still comes out synced.

2. I'm not convinced that this is the right technical architecture. In particular, it's weird to have the pipelines be one-way but the conduits two-way, and to create yet another table of things in PCMedia. So we should consider what the right thing is from a more global level. I might be more willing to overlook this point if we had the measurements indicated in point #1 showing that this patch made a difference to people's experiences and thus was high priority, but absent those measurements, we should consider what the right thing is, not just what the expedient thing is.

I'm happy to have a discussion of these points. As I said on IRC, I suggest a call among you, Tim, and me.
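
As a sketch of the shape such a check could take, assuming hypothetical FakeNetwork and SyncProbe fixtures (they do not exist in the current unit tests):

// Illustrative sketch only; fixtures are hypothetical, not existing test code.
#include <cassert>
#include <cstdint>
#include <cstdlib>

struct FakeNetwork { void SetOneWayDelayMs(int /*ms*/) {} };  // injects delay
struct SyncProbe {
  // Hypothetical probe reporting measured audio-vs-video playout skew.
  int64_t MeasuredAvSkewMs() const { return 0; }
};

static void TestAvSyncSurvivesDelay(FakeNetwork& audio_net,
                                    const SyncProbe& probe) {
  const int64_t kToleranceMs = 40;       // roughly a frame or two of video
  audio_net.SetOneWayDelayMs(1000);      // delay the audio leg by one second
  // ... push synchronized audio+video through the pipelines here ...
  assert(std::llabs(probe.MeasuredAvSkewMs()) <= kToleranceMs);
}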
Running with this patch shows very good sync between audio and video with the 1-second delay applied (after a short settling period; an instant 1-second delay is outside the normal design parameters, and it doesn't slew video instantly).
I'm sorry I wasn't clearer on what I was looking for here, namely an automated or at least automatable test that runs in either the mochitests or mediaconduit_unittests without requiring modifications to the conduit code itself. With that said, I agree that this test provides sufficient evidence that this does something useful that I think we can discuss the architectural issues in parallel with preparation of a real test (which I think is required for this to actually land). I'll review your approach and provide written comments for you early next week.
This is the same evil test patch, adjusted so it applies without the main patch here, for comparing sync at a 1-second delay with and without the main patch.
Blocks: 917916
updated to apply on top of the bug 917916 patch, and do more testing of SR reception
Attachment #804280 - Attachment is obsolete: true
Attachment #804280 - Flags: review?(ekr)
Attachment #806800 - Flags: review?(ekr)
Implements divided send/receive GIPS channels within VideoConduits, since Google is moving toward making video channels unidirectional. They stay within a single bidirectional conduit in this variation (option 'c' from some email I sent). When applied on top of the other patch, it passes the unidirectional and bidirectional mediaconduit unit tests from the other bug. Uploaded for reference as an option for where we can go with this.
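
A rough sketch of what "option c" amounts to, with placeholder types rather than the actual ViE APIs: one conduit object, two underlying channels, one per direction.

// Illustrative sketch only; Engine and channel handling are placeholders.
struct Engine { int CreateChannel() { static int next = 0; return next++; } };

class BidirectionalVideoConduit {
 public:
  explicit BidirectionalVideoConduit(Engine* aEngine)
    : mEngine(aEngine),
      mSendChannel(aEngine->CreateChannel()),    // RTP out + RTCP SR
      mRecvChannel(aEngine->CreateChannel()) {}  // RTP in  + RTCP RR/PLI/NACK

  int SendChannel() const { return mSendChannel; }
  int RecvChannel() const { return mRecvChannel; }

 private:
  Engine* mEngine;
  int mSendChannel;
  int mRecvChannel;
};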
re-targeting for Firefox 27, landing Oct 15 or earlier.
Target Milestone: mozilla26 → mozilla27
Attachment #804726 - Attachment is obsolete: true
Attachment #804726 - Attachment is obsolete: false
Alternate implementation that mirrors the kludge used for the AudioConduit backend: share the mChannel between the send and receive conduits, and transfer RTP/RTCP sends on the 'wrong' conduit to the 'right' one. Also cleans up a bunch of stuff in AudioConduit. This is expected to be removed when Google changes the ViE channel implementation to be unidirectional, in conjunction with BUNDLE changes in the Pipeline code.
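
Since the shared-channel kludge makes teardown order tricky, here is a simplified sketch (stand-in types, not the actual patch) of the shutdown coordination it implies: the first conduit of the pair to die tears down the shared channel and flags the survivor, and the last one releases the engine.

// Illustrative sketch only; not the actual conduit destructor.
struct Engine {
  void DeleteChannel(int) {}
  static void Delete(Engine* e) { delete e; }
};

class PairedConduit {
 public:
  ~PairedConduit() {
    if (!mShutDown) {                    // the other side hasn't torn this down yet
      mEngine->DeleteChannel(mChannel);  // the shared channel goes away exactly once
    }
    if (mOtherDirection) {               // we die first: hand responsibility across
      mOtherDirection->mOtherDirection = nullptr;
      mOtherDirection->mShutDown = true;
    } else if (mEngine) {                // we die last: release the shared engine
      Engine::Delete(mEngine);
    }
  }

  Engine* mEngine = nullptr;             // shared with mOtherDirection
  PairedConduit* mOtherDirection = nullptr;
  int mChannel = -1;                     // shared engine channel id
  bool mShutDown = false;                // set by the other side when it shut us down
};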
Attachment #806800 - Flags: review?(ekr)
Comment on attachment 817705 [details] [diff] [review]
merge backend for send and receive VideoConduits to match AudioConduits & cleanup

This has been tested with the 1-second-delay patch; it successfully syncs to 1-second-delayed audio and holds lip sync.
Attachment #817705 - Flags: review?(ekr)
Comment on attachment 817705 [details] [diff] [review]
merge backend for send and receive VideoConduits to match AudioConduits & cleanup

Review of attachment 817705 [details] [diff] [review]:
-----------------------------------------------------------------

Preliminary formatting comments. Still reviewing logic.

::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp
@@ +308,3 @@
> return kMediaConduitInvalidSendCodec;
> }
> + CSFLogError(logTag, "%s SetSendCodec Failed %d ", __FUNCTION__,

This indentation is kinda weird

@@ +357,5 @@
> codecConfig->mChannels,
> codecConfig->mRate);
>
>
> + // by now we should be successfully started the transmission

Grammar.

@@ +400,5 @@
> }
>
> //Try Applying the codecs in the list
> + // we treat as success if atleast one codec was applied and reception was
> + // started successfully.

Grammar.

@@ +611,5 @@
> }
> return kMediaConduitUnknownError;
> }
> } else {
> + CSFLogError(logTag, "%s Engine Error: Not Receiving !!! ", __FUNCTION__);

These !!! seem excessive.

@@ +625,5 @@
> CSFLogDebug(logTag, "%s : channel %d",__FUNCTION__, mChannel);
>
> if(mEngineTransmitting)
> {
> + //let the engine know of RTCP packet to decode.

Grammar

@@ +648,3 @@
> int WebrtcAudioConduit::SendPacket(int channel, const void* data, int len)
> {
> + CSFLogDebug(logTag, "%s : channel %d %s",__FUNCTION__, channel,

Looks like you added a space after the second comma but not the first.

@@ +682,5 @@
> return mOtherDirection->SendRTCPPacket(channel, data, len);
> }
> + }
> +
> + // We come here, if we have only one pipeline/conduit setup

spurious comma.

@@ +684,5 @@
> + }
> +
> + // We come here, if we have only one pipeline/conduit setup
> + // say for unidirectional streams.
> + // We also end up here if we are receiving engine.

the receiving engine

::: media/webrtc/signaling/src/media-conduit/AudioConduit.h
@@ +47,5 @@
> /**
> * APIs used by the registered external transport to this Conduit to
> * feed in received RTP Frames to the VoiceEngine for decoding
> */
> + virtual MediaConduitErrorCode ReceivedRTPPacket(const void *data, int len);

This file seems to consist of whitespace and editorial only changes. Can we do it in another patch?

::: media/webrtc/signaling/src/media-conduit/VideoConduit.h
@@ +95,5 @@
> virtual MediaConduitErrorCode ConfigureRecvMediaCodecs(
> const std::vector<VideoCodecConfig* >& codecConfigList);
>
> /**
> + * Register External Transport to this Conduit. RTP and RTCP frames from the VoiceEngine

Let's fix the grammar here since we're changing stuff.

@@ +207,5 @@
>
> //Utility function to dump recv codec database
> void DumpCodecDB() const;
> +
> + WebrtcVideoConduit* mOtherDirection;

This seems deserving of a comment.

::: media/webrtc/signaling/src/media/VcmSIPCCBinding.cpp
@@ +1611,2 @@
> mozilla::RefPtr<mozilla::VideoSessionConduit> conduit =
> + mozilla::VideoSessionConduit::Create(static_cast<VideoSessionConduit *>(tx_conduit.get()));

This could do with a comment about ownership, since we're passing a raw pointer downward and then the conduit below is holding a weak reference.

::: media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.h
@@ +330,4 @@
> int index_inner = aIndex * 2 + (aReceive ? 0 : 1);
>
> + MOZ_ASSERT(!mConduits[index_inner]);
> + mConduits[index_inner] = aConduit;

I'm worried about crossing the streams here. Suggest we add some sort of type field or a virtual downcast fxn to make sure that can't happen.
(In reply to Eric Rescorla (:ekr) from comment #29)
> Comment on attachment 817705 [details] [diff] [review]
> merge backend for send and receive VideoConduits to match AudioConduits &
> cleanup
>
> Review of attachment 817705 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> Preliminary formatting comments. Still reviewing logic.
>
> ::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp
> @@ +308,3 @@
> > return kMediaConduitInvalidSendCodec;
> > }
> > + CSFLogError(logTag, "%s SetSendCodec Failed %d ", __FUNCTION__,
>
> This indentation is kinda weird

The line after this, but yes. I think tab conversion bit at some point.

Some of the grammar issues may have been stuff I got initially from Suhas (long enough ago I forget where I got them from, but I think that was it). All stuff I just merged forward from the old patch. I'll fix them, but none of them give an incorrect read.

> @@ +648,3 @@
> > int WebrtcAudioConduit::SendPacket(int channel, const void* data, int len)
> > {
> > + CSFLogDebug(logTag, "%s : channel %d %s",__FUNCTION__, channel,
>
> Looks like you added a space after the second comma but not the first.

Sure. I didn't fix *every* formatting error in these files.

> ::: media/webrtc/signaling/src/media-conduit/AudioConduit.h
> @@ +47,5 @@
> > /**
> > * APIs used by the registered external transport to this Conduit to
> > * feed in received RTP Frames to the VoiceEngine for decoding
> > */
> > + virtual MediaConduitErrorCode ReceivedRTPPacket(const void *data, int len);
>
> This file seems to consist of whitespace and editorial only changes. Can we
> do it in another patch?

Sure. I'll split them.

> ::: media/webrtc/signaling/src/media-conduit/VideoConduit.h
> @@ +207,5 @@
> >
> > //Utility function to dump recv codec database
> > void DumpCodecDB() const;
> > +
> > + WebrtcVideoConduit* mOtherDirection;
>
> This seems deserving of a comment.

Sure.

> ::: media/webrtc/signaling/src/media/VcmSIPCCBinding.cpp
> @@ +1611,2 @@
> > mozilla::RefPtr<mozilla::VideoSessionConduit> conduit =
> > + mozilla::VideoSessionConduit::Create(static_cast<VideoSessionConduit *>(tx_conduit.get()));
>
> This could do with a comment about ownership, since we're passing a raw
> pointer downward and then the conduit below is holding a weak reference.

Sure; I'll note this just mirrors the existing code for Audio conduits.

> ::: media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.h
> @@ +330,4 @@
> > int index_inner = aIndex * 2 + (aReceive ? 0 : 1);
> >
> > + MOZ_ASSERT(!mConduits[index_inner]);
> > + mConduits[index_inner] = aConduit;
>
> I'm worried about crossing the streams here. Suggest we add some sort of
> type field or a virtual downcast fxn to make sure that can't happen.

Since this is m-line based, it would be hard to cross them, but I understand your concern. I'll think about whether there's a good way to do so.
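
For illustration, one possible shape for the guard discussed in the last point, using simplified stand-in types; the actual patch reportedly uses a type() check with MOZ_ASSERT_IF rather than this exact code.

// Illustrative sketch only; not the PeerConnectionMedia implementation.
#include <cassert>
#include <vector>

struct MediaSessionConduit {
  enum Type { AUDIO, VIDEO };
  virtual ~MediaSessionConduit() {}
  virtual Type type() const = 0;
};

struct ConduitTable {
  std::vector<MediaSessionConduit*> mConduits =
      std::vector<MediaSessionConduit*>(16, nullptr);

  void AddConduit(int aIndex, bool aReceive, MediaSessionConduit* aConduit,
                  MediaSessionConduit::Type aExpected) {
    int index_inner = aIndex * 2 + (aReceive ? 0 : 1);
    assert(!mConduits[index_inner]);        // don't clobber an existing entry
    assert(aConduit->type() == aExpected);  // don't cross audio/video streams
    mConduits[index_inner] = aConduit;
  }
};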
non-AudioConduit part of the patch; cleaned up grammar/nits, added MOZ_ASSERT_IF's for conduits, unbitrotted
Attachment #817705 - Attachment is obsolete: true
Attachment #817705 - Flags: review?(ekr)
AudioConduit cleanup with nits addressed
Attachment #820573 - Flags: review?(ekr)
Attachment #820574 - Flags: review?(ekr)
Comment on attachment 820574 [details] [diff] [review]
cleanup AudioConduit

Review of attachment 820574 [details] [diff] [review]:
-----------------------------------------------------------------

Thanks for breaking this up. lgtm with nits below.

::: media/webrtc/signaling/src/media-conduit/AudioConduit.cpp
@@ +371,5 @@
> MediaConduitErrorCode condError = kMediaConduitNoError;
> int error = 0; //webrtc engine errors
> bool success = false;
>
> + // Are we receiving already? If so, stop receiving and playout

playout.

@@ +400,5 @@
> }
>
> + // Try Applying the codecs in the list
> + // We succeed if at least one codec was applied and reception was
> + // started successfully

periods at the end.

@@ +809,5 @@
> }
>
>
> /**
> + * Perform validation on the codecConfig to be applied

applied.

::: media/webrtc/signaling/src/media-conduit/AudioConduit.h
@@ +78,1 @@
> const std::vector<AudioCodecConfig* >& codecConfigList);

Fix indent.

@@ +85,1 @@
> mozilla::RefPtr<TransportInterface> aTransport);

Fix indent.

@@ +193,5 @@
>
> //Utility function to dump recv codec database
> void DumpCodecDB() const;
>
> + // The two sides of a send/receive pair of conduits each keep a pointer to the other

other.
Attachment #820574 - Flags: review?(ekr) → review+
Comment on attachment 820573 [details] [diff] [review]
merge backend for send and receive VideoConduits to match AudioConduits & cleanup

Review of attachment 820573 [details] [diff] [review]:
-----------------------------------------------------------------

please see questions below about SyncTo and the size of changes.

::: media/webrtc/signaling/src/media-conduit/MediaConduitInterface.h
@@ +260,5 @@
> public:
>
> /**
> + * Factory function to create and initialize an Audio Conduit Session
> + * return: Concrete AudioSessionConduitObject or NULL in the case

Are we converting to nullptr? Not a big deal, but I noticed it.

::: media/webrtc/signaling/src/media-conduit/VideoConduit.cpp
@@ +286,5 @@
> + if(mPtrRTP->SetRTCPStatus(mChannel, webrtc::kRtcpCompound_RFC4585) != 0)
> + {
> + CSFLogError(logTag, "%s RTCPStatus Failed %d ", __FUNCTION__,
> + mPtrViEBase->LastError());
> + return kMediaConduitRTCPStatusError;

I am assuming this is all just indentation from the else {
Please advise if not and I will review.

@@ +303,5 @@
> if (aConduit) {
> mPtrViEBase->SetVoiceEngine(aConduit->GetVoiceEngine());
> mPtrViEBase->ConnectAudioChannel(mChannel, aConduit->GetChannel());
> // NOTE: this means the VideoConduit will keep the AudioConduit alive!
> + } else if ((mOtherDirection && mOtherDirection->mSyncedTo) || mSyncedTo) {

How many times is SyncTo being called? If twice, doesn't the first one get disconnected when the second one is being called? And if once, why do we need this logic here?

@@ +488,5 @@
> int error = 0; //webrtc engine errors
> bool success = false;
> std::string payloadName;
>
> + // are we receiving already. If so, stop receiving and playout

already?

@@ +489,5 @@
> bool success = false;
> std::string payloadName;
>
> + // are we receiving already. If so, stop receiving and playout
> + // since we can't apply new recv codec when the engine is playing

playing.

@@ +824,5 @@
> return kMediaConduitCaptureError;
> }
>
> //insert the frame to video engine in I420 format only
> + if(!mPtrExtCapture ||

How can this be zero? If only with a bug, maybe an ASSERT?

@@ +836,5 @@
> mPtrViEBase->LastError());
> return kMediaConduitCaptureError;
> }
>
> + CSFLogDebug(logTag, "%s Inserted A Frame", __FUNCTION__);

a frame

@@ +1030,5 @@
> return false;
> }
>
> /**
> + * Checks if the codec is already in Conduit's database

Did anything other than reordering happen here?

::: media/webrtc/signaling/src/media-conduit/VideoConduit.h
@@ +95,5 @@
> virtual MediaConduitErrorCode ConfigureRecvMediaCodecs(
> const std::vector<VideoCodecConfig* >& codecConfigList);
>
> /**
> + * Register External Transport for this Conduit. RTP and RTCP frames from the VideoEngine

Grammar.

@@ +230,5 @@
> void DumpCodecDB() const;
> +
> + // The two sides of a send/receive pair of conduits each keep a pointer to the other
> + // The also share a single VideoEngine and mChannel. Shutdown must be coordinated
> + // carefully to avoid double-freeing or accessing after one frees.

This comment seems like it needs some cleanup.

@@ +232,5 @@
> + // The two sides of a send/receive pair of conduits each keep a pointer to the other
> + // The also share a single VideoEngine and mChannel. Shutdown must be coordinated
> + // carefully to avoid double-freeing or accessing after one frees.
> + WebrtcVideoConduit* mOtherDirection;
> + // Other side has shut down our channel and related items already

This one too.

@@ +236,5 @@
> + // Other side has shut down our channel and related items already
> + bool mShutDown;
> +
> + // These are shared by both directions. They're released by the last
> + // conduit to die

These? There is more than one?

::: media/webrtc/signaling/src/media/VcmSIPCCBinding.cpp
@@ +1577,5 @@
> pc.impl()->media()->GetConduit(level, false);
> + MOZ_ASSERT_IF(tx_conduit, tx_conduit->type() == AUDIO);
> +
> + // The two sides of a send/receive pair of conduits each keep a raw pointer to the other,
> + // and are responsible for cleanly shutting down

down.

::: media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.h
@@ +310,2 @@
> void AddTransportFlow(int aIndex, bool aRtcp,
> mozilla::RefPtr<mozilla::TransportFlow> aFlow) {

Can this be const ref? I think so.
Attachment #820573 - Flags: review?(ekr)
Comment on attachment 820573 [details] [diff] [review]
merge backend for send and receive VideoConduits to match AudioConduits & cleanup

Review of attachment 820573 [details] [diff] [review]:
-----------------------------------------------------------------

and the other part was all indenting for else {} as you surmised

::: media/webrtc/signaling/src/media-conduit/VideoConduit.cpp
@@ +303,5 @@
> if (aConduit) {
> mPtrViEBase->SetVoiceEngine(aConduit->GetVoiceEngine());
> mPtrViEBase->ConnectAudioChannel(mChannel, aConduit->GetChannel());
> // NOTE: this means the VideoConduit will keep the AudioConduit alive!
> + } else if ((mOtherDirection && mOtherDirection->mSyncedTo) || mSyncedTo) {

> How many times is syncTo being called? If twice, doesn't the first one get
> disconnected when the second one is being called. And if once, why do
> we need this logic here?

It should be called once - though I believe it was coded as it is because I wasn't sure that StorePipeline in PCMedia would find the same one each time, and I am pretty sure I can't guarantee that the SyncTo(nullptr) will go to the same Conduit used for SyncTo() before. And this is a RefPtr, not a raw pointer, so it holds the AudioConduit alive until released.

@@ +824,5 @@
> return kMediaConduitCaptureError;
> }
>
> //insert the frame to video engine in I420 format only
> + if(!mPtrExtCapture ||

> How can this be zero? If only with a bug, maybe an ASSERT?

Sure, we can ASSERT - it really could only happen if we try to insert a frame to encode from the wrong 'side' conduit (the receiving side).

@@ +1030,5 @@
> return false;
> }
>
> /**
> + * Checks if the codec is already in Conduit's database

> Did anything other than reordering happen here?

Nope; changed ordering to match AudioConduit.

::: media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.h
@@ +310,2 @@
> void AddTransportFlow(int aIndex, bool aRtcp,
> mozilla::RefPtr<mozilla::TransportFlow> aFlow) {

> Can this be const ref?

Probably, though it isn't something I was touching in this patch.
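
A small sketch of the SyncTo lifetime point being made here, with std::shared_ptr standing in for mozilla::RefPtr; this is illustrative, not the actual VideoConduit code.

// Illustrative sketch only; RefPtr is a stand-in for mozilla::RefPtr.
#include <memory>

struct AudioConduit { int GetChannel() const { return 1; } };
template <typename T> using RefPtr = std::shared_ptr<T>;

class VideoConduit {
 public:
  void SyncTo(RefPtr<AudioConduit> aConduit) {
    if (aConduit) {
      // connect A/V sync to aConduit->GetChannel() here
    } else if (mSyncedTo) {
      // disconnect sync before dropping the reference
    }
    mSyncedTo = aConduit;   // strong ref: keeps the AudioConduit alive
  }
 private:
  RefPtr<AudioConduit> mSyncedTo;  // released by SyncTo(nullptr) or destruction
};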