Closed Bug 839650 Opened 12 years ago Closed 12 years ago

ASSERTION: Buffer underran: 'bufferEnd >= mCurrentTime', file content/media/MediaStreamGraph.cpp, line 389

Categories

(Core :: Audio/Video, defect)

x86_64
macOS
defect
Not set
normal

Tracking


RESOLVED FIXED
mozilla22
Tracking Status
firefox21 --- disabled
firefox22 --- fixed
firefox23 --- fixed
firefox-esr17 --- unaffected
b2g18 --- unaffected

People

(Reporter: whimboo, Assigned: jesup)

References


Details

(Keywords: assertion, sec-high, Whiteboard: [WebRTC][blocking-webrtc+][qa-][adv-main22-])

Attachments

(8 files, 11 obsolete files)

6.58 KB, patch
jesup
: review-
Details | Diff | Splinter Review
2.65 KB, text/plain
Details
1.47 KB, text/plain
Details
8.37 KB, patch
whimboo
: review+
jsmith
: feedback+
Details | Diff | Splinter Review
7.52 KB, patch
roc
: review+
Details | Diff | Splinter Review
11.35 KB, patch
Details | Diff | Splinter Review
22.44 KB, patch
ekr
: review+
Details | Diff | Splinter Review
14.06 KB, patch
jsmith
: review+
Details | Diff | Splinter Review
Clicking start in the test page and accepting the gUM notification throws a couple of assertions in a Firefox debug build. I have seen this with 121190:eb8f60c782da. The assertion doesn't happen in a gUM-only test.

###!!! ASSERTION: Buffer underran: 'bufferEnd >= mCurrentTime', file /Volumes/data/code/firefox/nightly/content/media/MediaStreamGraph.cpp, line 389

mozilla::MediaStreamGraphImpl::RecomputeBlockingAt(nsTArray<mozilla::MediaStream*> const&, long long, long long, long long*)+0x00000065 [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/XUL +0x00E683E5]
mozilla::MediaStreamGraphImpl::RecomputeBlocking(long long)+0x000000F6 [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/XUL +0x00E68136]
mozilla::MediaStreamGraphImpl::RunThread()+0x0000045B [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/XUL +0x00E69EBB]
mozilla::(anonymous namespace)::MediaStreamGraphThreadRunnable::Run()+0x0000000D [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/XUL +0x00E6DFED]
nsThread::ProcessNextEvent(bool, bool*)+0x000005E3 [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/XUL +0x01A38493]
NS_ProcessNextEvent_P(nsIThread*, bool)+0x0000004F [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/XUL +0x019DB13F]
nsThread::ThreadFunc(void*)+0x0000010D [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/XUL +0x01A3708D]
_pt_root+0x000000E7 [/Volumes/data/code/firefox/obj/debug/dist/NightlyDebug.app/Contents/MacOS/libnspr4.dylib +0x000213B7]
_pthread_start+0x0000014F [/usr/lib/system/libsystem_c.dylib +0x0004E8BF]
blocking- for now, as it has only been seen at startup of peerconnections, but it happens fairly easily
Whiteboard: [WebRTC][blocking-webrtc?] → [WebRTC][blocking-webrtc-]
This was the cause of my crashes on downloaded win32 try builds. Downloaded from here (full symbols are there too, though they're painful to use - I had to use "expand" on xul.pd_ to get a usable xul.pdb file):
http://ftp.mozilla.org/pub/mozilla.org/firefox/try-builds/rjesup@wgate.com-2e418b85cbe6/try-win32-debug/

This started in the last few weeks. Not sure what's triggering it, but mCurrentTime and bufferEnd are wildly different. Local vars from VS2010:

-  this  0x04c5a540 string "XPCOM_DEBUG_DLG" {mThread={...} mStreams={...} mCurrentTime=0x6d69746e75725f6d ...}  mozilla::MediaStreamGraphImpl * const
+  mozilla::MediaStreamGraph  {mPendingUpdateRunnables={...} mGraphUpdatesSent=0x00474c445f475542 }  mozilla::MediaStreamGraph
+  mThread  {mRawPtr=0x00000000 }  nsCOMPtr<nsIThread>
+  mStreams  {...}  nsTArray<nsRefPtr<mozilla::MediaStream> >
   mCurrentTime  0x6d69746e75725f6d  __int64
   mStateComputedTime  0x2874726f62615f65  __int64
+  mInitialTimeStamp  {mValue=0x0000000000000000 }  mozilla::TimeStamp
+  mCurrentTimeStamp  {mValue=0x00642520656e696c }  mozilla::TimeStamp
   mProcessingGraphUpdateIndex  0x656c696600000000  __int64
   mPortCount  0x2c732520  int
+  mMonitor  {mMutex={...} mCondVar={...} }  mozilla::Monitor
+  mStreamUpdates  {...}  nsTArray<mozilla::StreamUpdate>
+  mUpdateRunnables  {...}  nsTArray<nsCOMPtr<nsIRunnable> >
+  mMessageQueue  {...}  nsTArray<mozilla::MediaStreamGraphImpl::MessageBlock>
   mLifecycleState  0x4e4f4954  mozilla::MediaStreamGraphImpl::LifecycleState
   mWaitState  WAITSTATE_RUNNING  mozilla::MediaStreamGraphImpl::WaitState
   mNeedAnotherIteration  true  bool
   mForceShutDown  true  bool
   mPostedRunInStableStateEvent  true  bool
+  mCurrentTaskMessageQueue  {...}  nsTArray<nsAutoPtr<mozilla::ControlMessage> >
   mDetectedNotRunning  true  bool
   mPostedRunInStableState  true  bool
+  aStream  0x10fd4318 {mLastConsumptionState=CONSUMED mMutex={...} mUpdateKnownTracksTime=0x7fffffffffffffff ...}  mozilla::MediaStream *
   aTime  0x00000000004f0fb4  __int64
   aEndBlockingDecisions  0x00000000004f6716  __int64
+  aEnd  0x1f49fbd4  __int64 *
   bufferEnd  0x00000000004b38aa  __int64
Flags: needinfo?(roc)
Whiteboard: [WebRTC][blocking-webrtc-] → [WebRTC][blocking-webrtc+]
This is happening because we're creating a SourceMediaStream, writing some audio to it, playing the audio (which means we've called AdvanceKnownTracksTime with some value > 0), and *then* adding a video track with start time 0. That's not good. There should be an assertion firing earlier where the track addition is processed, but it's not firing due to int64 overflow. I'll attach a patch for that.
Flags: needinfo?(roc)
Assignee: nobody → rjesup
Which bug has landed this change?
As you'll note, this patch totally breaks audio+video capture
Comment on attachment 717061 [details] [diff] [review]
assertions patch

Review of attachment 717061 [details] [diff] [review]:
-----------------------------------------------------------------

r- for graph failure with this patch applied. I suspect something having to do with the reduction in MEDIA_TIME_MAX/STREAM_TIME_MAX/GRAPH_TIME_MAX; perhaps this uncovered an existing bug. No assertions fire, so we're not adding a track with a bad time. Fails even trying to just capture a single audio stream.
Attachment #717061 - Flags: review?(rjesup) → review-
So, what start time should a new track be added with? (Safely!) We'll need to add video tracks at arbitrary points in a stream's lifetime. Right now we do:

source_->AddTrack(track_id_, USECS_PER_S, 0, new VideoSegment());
source_->AdvanceKnownTracksTime(STREAM_TIME_MAX);

(similar for audio; in the future either could come first, or we may have >1 of a type)
For Realtime streams, what I'd love to do is add a track "now", without having to know what the current "stream time" is. Attempting to find the current stream time from somewhere other than on the MSG thread is likely perilous due to possible race conditions anyways.
(In reply to Randell Jesup [:jesup] from comment #10)
> For Realtime streams, what I'd love to do is add a track "now", without
> having to know what the current "stream time" is. Attempting to find the
> current stream time from somewhere other than on the MSG thread is likely
> perilous due to possible race conditions anyways.

I don't think so. The stream can only advance as much as you allow it to by putting data into tracks, or by AdvanceKnownTracksTime if there are no tracks. The code that generates the stream should know "where it is up to" in terms of what's been generated, which is what matters here.
So the code that inserts the data into the tracks runs on other threads or in response to NotifyPull(). So we'd have to take a lock to get the current time on the main thread from one of those to use for AddTrack(), and we'd need to use the lock in all tracks currently in use for the stream (and have a "stream" pointer to get to the shared "current time" value). We'd have to call AddTrack() while still holding the lock (since if we release it, our knowledge of the current time may be arbitrarily out of date). This is doable, though a bit of a pain just to be able to start a track "now". These inputs are typically independent and asynchronous, so we don't have some master source inserting the data.

Alternatively, if we can run code on the MSG thread (see bug 823600), we could do all our AddTrack() calls from the MSG thread (though we'd still need a way to get the current stream time, which I think exists, and on the MSG thread it should be stable/safe). It's still more work than AddTrack(..., MSG_NOW, ...), but it's probably better than locks and stream pointers. It does make code that wants to add tracks inherently async, but that's probably OK.
See also bug 833824 ("Need to find a way to add a stream now")
We've been having some intermittent orange due to this bug, from the assertions creeping well past the tests that are annotated as having large numbers of failures (since the assertions seem to keep happening for quite a while sometimes, which led to a large number of tests being annotated as expecting large numbers of assertions). (Maybe until a GC happens?)

https://tbpl.mozilla.org/php/getParsedLog.php?id=20224674&tree=Mozilla-Inbound
Rev3 WINNT 6.1 mozilla-inbound debug test mochitest-3 on 2013-03-01 07:42:33 PST for push 396454d30d27
slave: talos-r3-w7-048

07:46:41 INFO - 578 ERROR TEST-UNEXPECTED-FAIL | /tests/dom/plugins/test/test_NPNVdocumentOrigin.html | Assertion count 114 is greater than expected range 0-0 assertions.
07:46:42 INFO - 585 ERROR TEST-UNEXPECTED-FAIL | /tests/dom/plugins/test/test_NPPVpluginWantsAllNetworkStreams.html | Assertion count 83 is greater than expected range 0-0 assertions.
07:46:45 INFO - 589 ERROR TEST-UNEXPECTED-FAIL | /tests/dom/plugins/test/test_bug532208.html | Assertion count 279 is greater than expected range 0-0 assertions.
Comment on attachment 720149 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

Happy with a review from whoever feels comfortable doing it. This doesn't change the existing AdvanceKnownTracksTime(STREAM_TIME_MAX) behavior, which is at least in theory incorrect. The current PipelineListeners could be refactored for less duplicated code in general, but this patch doesn't address that. I want to end the pain of the sheriffs and whimboo, etc. :-)

Try at https://tbpl.mozilla.org/?tree=Try&rev=ca1172335dae

Ran 25 local mochitests with no assertions (previously I would get assertions occasionally), plus local calls (frequent assertions before), and crashtest runs.
Attachment #720149 - Flags: review?(tterribe)
Attachment #720149 - Flags: review?(roc)
Biggest open question is whether I'm getting the current stream time correctly
Comment on attachment 720149 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

Review of attachment 720149 [details] [diff] [review]:
-----------------------------------------------------------------

I'm happy to take a look, but as the MediaPipeline stuff is mostly ekr's code, I think he needs to review it.
Attachment #720149 - Flags: review?(tterribe) → review?(ekr)
Switches (per roc on IRC) to stream->GetBufferEnd() for the time.
Attachment #720149 - Attachment is obsolete: true
Attachment #720149 - Flags: review?(roc)
Attachment #720149 - Flags: review?(ekr)
GetBufferEnd() causes pretty much immediate (different) assertions when the start time was not 0. These were caused by using "new VideoSegment()" for the initial segment - you need to AppendFrames() dummy data of duration == start_time. For video this is easy; I'm not sure about audio...

Also, if you start at time != 0, the code for handling NotifyPull() has to know about this, since it depends on a "played_" var that needs to know how much the stream thinks it's gotten from the track. (The API would be simpler if it said "I want data starting at time YY (normally the end of the previous set of data), and I want XX time units".) In almost all cases a start time of 0 for a track is OK, but once in a blue moon it will assert before doing a NotifyPull() on the track. (Side note: the pipeline code needed the same mod we made to gUM for a fallible AppendToTrack().)

With GetBufferEnd(), it returns the point to which there is data already buffered in the existing track(s). We might be able to provide data for that period if we were polled with NotifyPull(), but given the max time we pull for in NotifyPull() this likely makes little difference. There is the possibility of it introducing some delay/offset in audio, but given how NetEQ works I think this would disappear if it ever were to exist. Worth checking later.

Other things of note: all data is inserted in response to NotifyPull() calls. I found there was an error case where we wouldn't insert data on a conduit error - I fixed that and added a debug, but it doesn't get hit. I've seen nothing to indicate we didn't respond to a NotifyPull() with the full desired_time. One thing wrong I did find was that on destruction (DetachMedia), the pipeline removes its listener but does not call EndTrack() first (so you might have a stream that needs data but no listener to supply it). I fixed this; it doesn't seem to solve all instances of underflows, but it may solve some.
Attachment #720173 - Attachment is obsolete: true
Comment on attachment 721108 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

This also fixes some ancillary issues, such as not calling EndTrack() before removing the listener (which might be another source of underruns), handling failure to get audio data from the conduit/GIPS, and correct handling of failure to append data to a stream.
Attachment #721108 - Flags: review?(tterribe)
Attachment #721113 - Flags: review?(roc)
Attachment #721115 - Flags: review?(jsmith)
Comment on attachment 721115 [details] [diff] [review]
remove (underrun) assertion expectations from webrtc mochitests

Review of attachment 721115 [details] [diff] [review]:
-----------------------------------------------------------------

Changes look safe, but I haven't learned much yet about these new assertions that were added. I'm going to redirect the review to whimboo - he'll probably know more (I still need to learn what these actually are).
Attachment #721115 - Flags: review?(jsmith)
Attachment #721115 - Flags: review?(hskupin)
Attachment #721115 - Flags: feedback+
Comment on attachment 721115 [details] [diff] [review]
remove (underrun) assertion expectations from webrtc mochitests

Review of attachment 721115 [details] [diff] [review]:
-----------------------------------------------------------------

r=me, provided that your core patch fixes all of those assertions. Please make sure to run a try-server build before landing the patches.
Attachment #721115 - Flags: review?(hskupin) → review+
Comment on attachment 721108 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

Review of attachment 721108 [details] [diff] [review]:
-----------------------------------------------------------------

This is totally broken, contains a ton of duplicated code, and wouldn't even solve all the issues if it wasn't broken.

::: content/media/MediaSegment.h
@@ +35,5 @@
> {
>   return aTime*(1.0/(1 << MEDIA_TIME_FRAC_BITS));
> }
>
> +inline uint64_t MediaTimeToMilliseconds(MediaTime aTime)

Instead of adding this, I think we should just convert played_ to TrackTicks. NotifyPull() can simply use samples_length instead of the hard-coded 10. The loop condition there should be comparing desired_time against TicksToTimeRoundDown(rate_, played_), to make sure the conversions happen the same way they will in the MSG.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +851,5 @@
>   SourceMediaStream * source, TrackID track_id,
>   const RefPtr<MediaSessionConduit>& conduit)
>   : source_(source),
>     track_id_(track_id),
> +   rate_(16000), // XXX rate assumption

What kind of rate is this? Sampling rate? Bitrate? 16000 is perfectly plausible for both. Please pick a better variable name.

@@ +860,5 @@
> +
> +// Add a track starting "now" (whenever 'now' is)
> +
> +// We really should find a way to convert incoming display timestamps to
> +// StreamTime reliably!!!

Maybe we should file a bug on that?

@@ +872,5 @@
> +    : ControlMessage(aStream), mTrack(aTrack), mRate(aRate), mPlayed(aPlayed) {}
> +
> +  virtual void Run()
> +  {
> +    StreamTime current_end = mStream->GetBufferEnd();

This is just wrong. StreamTime is Q20 seconds, but you're passing it to a bunch of places below that expect TrackTicks (i.e., mRate ticks per second). What you actually want is:

TrackTicks current_end = TimeToTicksRoundUp(mRate, mStream->GetBufferEnd());

@@ +875,5 @@
> +  {
> +    StreamTime current_end = mStream->GetBufferEnd();
> +    // Add a track 'now' to avoid possible underrun, especially if we add
> +    // a track "later".
> +    // XXX how does this interact with AdvanceKnownTracksTicksTime()?

That's not very confidence-inspiring.

@@ +891,5 @@
> +    if (current_end != (StreamTime) 0) {
> +      int samples_length = (mRate/100)*sizeof(uint16_t);
> +      MOZ_MTLOG(MP_LOG_INFO, "Dummy audio samples in bytes: " << samples_length);
> +      nsAutoTArray<const int16_t*,1> channels;
> +      segment->AppendFrames(nullptr, channels, samples_length);

Can't you just call segment->AppendNullData(current_end)? That seems a lot easier. See above for issues with current_end, however. It should be in TrackTicks.

@@ +893,5 @@
> +      MOZ_MTLOG(MP_LOG_INFO, "Dummy audio samples in bytes: " << samples_length);
> +      nsAutoTArray<const int16_t*,1> channels;
> +      segment->AppendFrames(nullptr, channels, samples_length);
> +
> +      // duplicated so we don't need to allocate buffers in the normal case

I don't understand this comment. The AudioSegment does not need the channels list passed to AppendFrames() after the call returns (not that it allocates anything anyway), so I don't see why any of this code needs to be duplicated.

@@ +895,5 @@
> +      segment->AppendFrames(nullptr, channels, samples_length);
> +
> +      // duplicated so we don't need to allocate buffers in the normal case
> +      mStream->AsSourceStream()->AddTrack(mTrack, mRate,
> +                                          current_end, segment);

AddTrack takes a time in units of TrackTicks (i.e., mRate per second). You can't pass it a StreamTime.

@@ +906,5 @@
> +    }
> +    mStream->AsSourceStream()->AdvanceKnownTracksTime(STREAM_TIME_MAX);
> +  }
> +  TrackID mTrack;
> +  TrackRate mRate;

Currently this file uses Google-style member var names (rate_). I'm happy to convert to Moz-style (mRate), but we shouldn't be mixing and matching.

@@ +912,5 @@
> +  };
> +
> +  MOZ_ASSERT(this);
> +
> +  source_->GraphImpl()->AppendMessage(new Message(source_, aTrack, aRate, &played_));

This seems incredibly dangerous. Nothing in the message has a reference to the listener, so what guarantees played_ is still available when you want to write to it?

@@ +936,5 @@
>     return;
>   }
>
>   // This comparison is done in total time to avoid accumulated roundoff errors.
>   while (MillisecondsToMediaTime(played_) < desired_time) {

See above.

@@ +951,4 @@
>   MediaConduitErrorCode err =
>     static_cast<AudioSessionConduit*>(conduit_.get())->GetAudioFrame(
>       samples_data,
> +     rate_, // Sampling rate fixed at 16 kHz for now

Are you going to remember to go update this comment when you change how rate_ is set?

@@ +960,5 @@
> +    MOZ_MTLOG(PR_LOG_ERROR, "Audio conduit failed (" << err << ") to return data @ " << played_ <<
> +              " (desired " << desired_time << " -> " << MediaTimeToSeconds(desired_time) << ")");
> +    samples_length = (rate_/100)*sizeof(uint16_t); // if this is not enough we'll loop and provide more
> +    memset(samples_data, '\0', samples_length);
> +  }

I don't really think this is what you want to do here. GIPS will _always_ return data here. If it doesn't, something must be seriously wrong, so we should probably just end the stream (not sure what other mechanism we have for reporting the error to the user).

@@ +971,5 @@
>   segment.AppendFrames(samples.forget(), channels, samples_length);
>
> +  // Handle track not actually added yet or removed/finished
> +  if (source_->AppendToTrack(track_id_, &segment)) {
> +    played_ += 10; // ms

See above.

@@ +1018,5 @@
> +  AddTrack(track_id_, USECS_PER_S, width_, height_);
> +#endif
> +}
> +
> +// Add a track starting "now" (whenever 'now' is)

So, this code looks an _awful_ lot like the code above except with s/Audio/Video/g, especially if you convert played_ to TrackTicks and use AppendNullData() like I'm suggesting. This is also broken in all of the ways the audio one was.

1) There has to be some way to share all of this. I think all you'd need to do is pass in a pre-constructed MediaSegment of the appropriate type (or have a factory that takes a MediaSegment::Type and returns one... that should probably live in the MSG code).
2) Both tracks should really be handled in the same message. Otherwise there's a chance NotifyPull gets called between them and you get a permanent 10 ms offset added. You can just iterate over a simple list of {TrackID, TrackRate, MediaSegment */MediaSegment::Type, <some way to get to played_ that's safer than what you've got>} tuples.
3) You should be adding the listener in that message, too (yes, this requires an AddListener() call that works from the MSG thread instead of always queuing a message).
Attachment #721108 - Flags: review?(tterribe) → review-
(In reply to Timothy B. Terriberry (:derf) from comment #28)
> Comment on attachment 721108 [details] [diff] [review]
> proxy AddTrack() to MSG thread via a custom command so we can get access to
> the current stream time
>
> Review of attachment 721108 [details] [diff] [review]:
> -----------------------------------------------------------------
>
> This is totally broken, contains a ton of duplicated code, and wouldn't even
> solve all the issues if it wasn't broken.

tl;dr on my responses (ignoring nits):

1) We're using milliseconds on purpose, because TrackTicks (and StreamTime) don't convert to milliseconds without rounding error, and this causes a drift. I plan to keep this unless/until StreamTime is a clean conversion to ms (and even then add some sort of compile-time assertion if I can).
2) Real error identified about TrackTicks vs StreamTime; it was masked since only video currently can have a start time other than 0 (and still close to 0), and TrackTicks for video is *very* close to StreamTime because of the high rate used.
3) AppendNullData() - didn't know it existed (roc suggested using AppendToTrack(nullptr, ...)). It may make it possible to share code in the video and audio pipeline listeners, which had been my plan going into this revision. Non-trivial to do so, but not a huge problem.
4) Multiple AddTracks in one call - not worth it today; maybe not ever. It helps point out, however, that we're incorrectly adding the tracks before adding the listener, and I'll correct that.
5) Through thoroughly confusing analysis, I claim we don't *need* to hold a listener in the AddTrack message. However, I agree it's far more clearly safe if we do, so I will. I plan to modify NotifyPull's API in the future to get rid of the need for passing in the &played_ pointer in the first place (and have roc's blessing to do so).
> ::: content/media/MediaSegment.h
> @@ +35,5 @@
> > {
> >   return aTime*(1.0/(1 << MEDIA_TIME_FRAC_BITS));
> > }
> >
> > +inline uint64_t MediaTimeToMilliseconds(MediaTime aTime)
>
> Instead of adding this, I think we should just convert played_ to
> TrackTicks. NotifyPull() can simply use samples_length instead of the
> hard-coded 10. The loop condition there should be comparing desired_time
> against TicksToTimeRoundDown(rate_, played_), to make sure the conversions
> happen the same way they will in the MSG.

TrackTicks don't come out to an even number of milliseconds, which is what the NetEQ code feeds us. If we don't track in ms, we will get accumulating error (which is why we kept it in ms in the first place).

> ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
> @@ +851,5 @@
> >   SourceMediaStream * source, TrackID track_id,
> >   const RefPtr<MediaSessionConduit>& conduit)
> >   : source_(source),
> >     track_id_(track_id),
> > +   rate_(16000), // XXX rate assumption
>
> What kind of rate is this? Sampling rate? Bitrate? 16000 is perfectly
> plausible for both. Please pick a better variable name.

Ok

> @@ +860,5 @@
> > +
> > +// Add a track starting "now" (whenever 'now' is)
> > +
> > +// We really should find a way to convert incoming display timestamps to
> > +// StreamTime reliably!!!
>
> Maybe we should file a bug on that?

There is one (though more general).

> @@ +872,5 @@
> > +    : ControlMessage(aStream), mTrack(aTrack), mRate(aRate), mPlayed(aPlayed) {}
> > +
> > +  virtual void Run()
> > +  {
> > +    StreamTime current_end = mStream->GetBufferEnd();
>
> This is just wrong. StreamTime is Q20 seconds, but you're passing it to a
> bunch of places below that expect TrackTicks (i.e., mRate ticks per second).
> What you actually want is
> TrackTicks current_end = TimeToTicksRoundUp(mRate, mStream->GetBufferEnd());

Ok, thanks. I wasn't paying proper attention to the type for AddTrack (and hey, the compiler says these are all integers, so conversions are fine! :-( ). And for video, because of the track rate we use, the difference happens to be minimal, which explains why it worked and the numbers looked reasonable in my debugs. It would be major for audio, but the current code always adds audio first (and in fact I didn't modify the audio code until I had video working).

> @@ +875,5 @@
> > +  {
> > +    StreamTime current_end = mStream->GetBufferEnd();
> > +    // Add a track 'now' to avoid possible underrun, especially if we add
> > +    // a track "later".
> > +    // XXX how does this interact with AdvanceKnownTracksTicksTime()?
>
> That's not very confidence-inspiring.

This is a long-standing issue with the API that is not directly affected by this patch. For realtime code, I'd prefer not to deal with calling AdvanceKnownTracksTicksTime and having to have cross-thread locks just to avoid violating a part of the API that doesn't help us in realtime code (it's really oriented to streaming code). So it's simpler to advance it to the heat death of the universe, and then add tracks anyway. There's a separate bug on changing the API (really, the comments in the API) to officially allow this. The implementation already does allow it in practice.

> @@ +891,5 @@
> > +    if (current_end != (StreamTime) 0) {
> > +      int samples_length = (mRate/100)*sizeof(uint16_t);
> > +      MOZ_MTLOG(MP_LOG_INFO, "Dummy audio samples in bytes: " << samples_length);
> > +      nsAutoTArray<const int16_t*,1> channels;
> > +      segment->AppendFrames(nullptr, channels, samples_length);
>
> Can't you just call segment->AppendNullData(current_end)? That seems a lot
> easier.

Sure! Never happened to notice AppendNullData() before, and when I asked roc the right way to insert null data to cover the start of the track, he suggested using AppendFrames() with nullptr.

> See above for issues with current_end, however. It should be in TrackTicks.
> @@ +893,5 @@
> > +      MOZ_MTLOG(MP_LOG_INFO, "Dummy audio samples in bytes: " << samples_length);
> > +      nsAutoTArray<const int16_t*,1> channels;
> > +      segment->AppendFrames(nullptr, channels, samples_length);
> > +
> > +      // duplicated so we don't need to allocate buffers in the normal case
>
> I don't understand this comment. The AudioSegment does not need the channels
> list passed to AppendFrames() after the call returns (not that it allocates
> anything anyway), so I don't see why any of this code needs to be duplicated.

Sorry, the comment was a bit crufty, left over from an intermediate test where I allocated buffers. Given AppendNullData, there's no reason to duplicate any code or key off current_end != 0.

> @@ +906,5 @@
> > +    }
> > +    mStream->AsSourceStream()->AdvanceKnownTracksTime(STREAM_TIME_MAX);
> > +  }
> > +  TrackID mTrack;
> > +  TrackRate mRate;
>
> Currently this file uses Google-style member var names (rate_). I'm happy to
> convert to Moz-style (mRate), but we shouldn't be mixing and matching.

Cut-and-paste from MSG code. NP

> @@ +912,5 @@
> > +  };
> > +
> > +  MOZ_ASSERT(this);
> > +
> > +  source_->GraphImpl()->AppendMessage(new Message(source_, aTrack, aRate, &played_));
>
> This seems incredibly dangerous. Nothing in the message has a reference to
> the listener, so what guarantees played_ is still available when you want to
> write to it?

Ok (hold onto your seats, this ride is bumpy):

1. There's a listener attached to the stream***
2. This message goes on the command queue.
   a) If the Listener is removed with RemoveListener() after AddTrack, it has to go through the queue behind the custom AddTrack message, so the Listener must still be held by the MediaStream.
   b) If the Listener is removed with RemoveListener() before AddTrack, it would be dangerous, but we won't put an AddTrack in if we've already called RemoveListener.
   c) If the MediaStream is Destroyed after the AddTrack message is added, that won't remove the listener until it goes through the queue.
   d) If the MediaStream is Destroyed before the AddTrack, we'd get an assertion.

(This does bring up a longstanding concern of mine about how Destroy and the command queue interact, and that Destroy() can happen and "close the queue" before we can possibly know about it - we've already made RemoveListener "safe" against this, but roc rejected my patch to make it safe for all commands.)

Note that since the Pipeline code doesn't use NotifyRemoved(), it must manually RemoveListener(), and thus a Destroy before AddTrack, even if it were to happen, can't cause the Listener to lose its last ref. This in summary shows we're actually safe here today, but because of the complexity of this web it's fragile. Holding a ref would be much more clearly safe and would protect against inadvertently violating this web with future mods.

*** In investigating the code later in your response (about merging the listener with AddTrack), I find that it's calling AddTrack before(!) adding the Listener... which is quite wrong for a pull-enabled stream to begin with (and in fact I added an assertion to NotifyPull in the "add debugs to MSG" patch to complain if it fires and there are no listeners). This assertion hasn't fired in my tests, but theoretically it could happen, so this should be revised. Note however that since AddTrack was in new MediaPipelineReceive*(), it can't fail, and the Init() calls which do the AddListeners immediately follow with no error paths, so in practice it can't fail to add the Listener following the AddTrack; even without fixing the AddListener/AddTrack order this was safe.

> @@ +936,5 @@
> >     return;
> >   }
> >
> >   // This comparison is done in total time to avoid accumulated roundoff errors.
> >   while (MillisecondsToMediaTime(played_) < desired_time) {
>
> See above.
Ditto (and this is the existing code) > > @@ +951,4 @@ > > MediaConduitErrorCode err = > > static_cast<AudioSessionConduit*>(conduit_.get())->GetAudioFrame( > > samples_data, > > + rate_, // Sampling rate fixed at 16 kHz for now > > Are you going to remember to go update this comment when you change how > rate_ is set? I'd hope so. I can remove it (it was a hard-coded 16000 with this comment). > > @@ +960,5 @@ > > + MOZ_MTLOG(PR_LOG_ERROR, "Audio conduit failed (" << err << ") to return data @ " << played_ << > > + " (desired " << desired_time << " -> " << MediaTimeToSeconds(desired_time) << ")"); > > + samples_length = (rate_/100)*sizeof(uint16_t); // if this is not enough we'll loop and provide more > > + memset(samples_data, '\0', samples_length); > > + } > > I don't really think this is what you want to do here. GIPS will _always_ > return data here. If it doesn't, something must be seriously wrong, so we > should probably just end the stream (not sure what other mechanism we have > for reporting the error to the user). Well, reporting error is another thing, but I don't know what or why it failed here, and in NotifyPull() I don't want to fail to provide the requested data (and cause the underruns we were getting assertions on). I added this earlier while figuring out all the ways an underrun could be triggered. Note also we're not calling GIPS here, we're calling AudioConduit(), and I'd rather not embed here knowledge that it's GIPS. For realtime streams, if (for whatever reason) audio fails, I don't want to block and cause video to freeze as well, especially as freezing video will cause it to start buffering data in memory to exhaustion. > > @@ +1018,5 @@ > > + AddTrack(track_id_, USECS_PER_S, width_, height_); > > +#endif > > +} > > + > > +// Add a track starting "now" (whenever 'now' is) > > So, this code looks an _awful_ lot like the code above except with > s/Audio/Video/g. 
Especially if you convert played_ to TrackTicks use > AppendNullData() like I'm suggesting. That might make it possible to re-merge the routines, which I had planned but because of the null-data difference hadn't done. > > This is also broken in all of the ways the audio one was. > > 1) There has to be some way to share all of this. I think all you'd need to > do is pass in a pre-constructed MediaSegment of the appropriate type (or > have a factory that takes a MediaSegment::Type and returns one... that > should probably live in the MSG code). > 2) Both tracks should really be handled in the same message. Otherwise > there's a chance NotifyPull gets called between them and you get a permanent > 10 ms offset added. You can just iterate over a simple list of {TrackID, > TrackRate, MediaSegment */MediaSegment::Type, <some way to get to played_ > that's safer than what you've got>} tuples. We will need to handle arbitrary track adds once we support that elsewhere, so while we can batch some AddTracks, we can't generally, and linking them to do this may be tricky. The tracks need to support starting at a StreamTime (ok TrackTicks time) that isn't 0. For Video, this is no problem due to the mImage-swapping nature of this code*. A later add of Audio should generally be ok*. Adding two audio tracks at once: Perhaps in theory could get an offset, but I'd need to delve deeper into how NetEQ syncs multiple audio tracks to be sure.* * = All of this may become moot once we deal properly with presentation timestamps, or at least this all becomes somewhat different. > 3) You should be adding the listener in that message, too (yes, this > requires an AddListener() call that works from the MSG thread instead of > always queuing a message). Yeah... (See above, it was calling AddListener after AddTrack, which also isn't good.) Since we have separate Listeners per track, this is doable, if we change the API to add an AddListenerImpl() we can call from MSG thread. 
I'll check with ROC if he wants that API change, or if we should just reverse the order but keep it two messages.
> > 3) You should be adding the listener in that message, too (yes, this > > requires an AddListener() call that works from the MSG thread instead of > > always queuing a message). > > Yeah... (See above, it was calling AddListener after AddTrack, which also > isn't good.) Since we have separate Listeners per track, this is doable, if > we change the API to add an AddListenerImpl() we can call from MSG thread. > I'll check with ROC if he wants that API change, or if we should just > reverse the order but keep it two messages. Turns out AddListenerImpl() already exists, so this is even simpler.
(In reply to Randell Jesup [:jesup] from comment #29) > 3) AppendNullData() - didn't know it existed (roc suggested using > AppendToTrack(nullptr,...) It may make it possible to share code in the > video and audio pipeline listeners, which had been my plan going into this > revision. Non-trivial to do so, but not a huge problem. Sorry --- I *meant* AppendNullData.
Blocks: 848669
Checked in the MSG debugs (which may also help verify what thread is leaking in bug 833769) https://hg.mozilla.org/integration/mozilla-inbound/rev/be1ee54becf4
Whiteboard: [WebRTC][blocking-webrtc+] → [WebRTC][blocking-webrtc+][leave-open]
(In reply to Randell Jesup [:jesup] from comment #34) > Checked in the MSG debugs (which may also help verify what thread is leaking > in bug 833769) > https://hg.mozilla.org/integration/mozilla-inbound/rev/be1ee54becf4 Backed out for build failures: https://tbpl.mozilla.org/?tree=Mozilla-Inbound&rev=be1ee54becf4 https://hg.mozilla.org/integration/mozilla-inbound/rev/945e901d852a
Attachment #721113 - Attachment is obsolete: true
Comment on attachment 722113 [details] [diff] [review] Add debugs to MediaStreamGraph to ease investigation of issues in the future Trivial re-review - added names to MSG and MM threads (pushed to m-c without realizing I'd qref'd the change into this patch, and MediaStreamGraph + nul == 16 chars, which is apparently the limit for thread names).
Attachment #722113 - Flags: review?(roc)
Hit it during fuzzing and got a read + 1 heap overflow after the assertion - marking it s-s.
Group: core-security
Keywords: sec-high
Comment on attachment 722104 [details] [diff] [review] proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time Review of attachment 722104 [details] [diff] [review]: ----------------------------------------------------------------- ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp @@ +1017,5 @@ > +} > + > +// Add a track starting "now" (whenever 'now' is) > + > +// We really should find a way to convert incoming display timestamps to Cut-and-paste error, I'll remove the comment
Attachment #721115 - Attachment is obsolete: false
Comment on attachment 722104 [details] [diff] [review] proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time Weird: bzexport used the wrong comment and obsoleted the wrong patch.... The patch is the correct patch, and has the correct comment in my queue.
Attachment #722104 - Attachment description: remove (underrun) assertion expectations from webrtc mochitests → proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time
Attachment #721108 - Attachment is obsolete: true
Comment on attachment 722104 [details] [diff] [review] proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time Review of attachment 722104 [details] [diff] [review]: ----------------------------------------------------------------- Given the complexity in this patch, I need to ask again why this isn't just something that is handled by MSG. If MSG just provided something like: class TrackAddedCallback { public: TrackAdded(StreamTime aCurrentTime) = 0; NS_INLINE_DECL_THREADSAFE_REF_COUNTING(TrackAddedCallback). }; AddTrackAndListenerWithCurrentTime(TrackId aID, TrackRate aRate, MediaSegment *aSegment, MediaStreamListener *aListener, const RefPtr<TrackAddedCallback>& aCompleted); We could simply remove almost all of this code. This would have the additional advantage that you could use the same code in gUM and thus eliminate the (allegedly theoretical) race in gUM initialization. If for some reason we can't do this in MSG, the right thing to do here seems to me to be write this fxn here as a separate unit rather than wiring it so deep into MediaPipeline. ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp @@ +6,5 @@ > > +#ifdef MOZILLA_INTERNAL_API > +#include "MediaStreamGraph.h" > +#include "MediaStreamGraphImpl.h" > +#endif I'm not excited about this much conditional compilation around this code. Why not write the right mocks for FakeMediaStream instead? @@ +846,5 @@ > return MediaPipelineReceive::Init(); > } > > > +void GenericReceiveListener::SetupStream(MediaStream *source, Why is this a method? Instead of using the member variables, it takes a bunch of stuff that's already part of the derived class as arguments. Why not just make it a static and then you won't need to derive the inner PipelineListener classes from anything. 
@@ +857,5 @@ > + class Message : public ControlMessage { > + public: > + Message(MediaStream* stream, TrackID track, TrackRate rate, > + MediaSegment *segment, > + GenericReceiveListener *listener) Please move to previous line. Please be consistent about */& placement. This code seems to generally attach them to the type. @@ +861,5 @@ > + GenericReceiveListener *listener) > + : ControlMessage(stream), track_id_(track), track_rate_(rate), > + segment_(segment), listener_(listener) {} > + > + virtual void Run() Run() { Please fix indents throughout this fxn to conform to local style. @@ +863,5 @@ > + segment_(segment), listener_(listener) {} > + > + virtual void Run() > + { > + StreamTime current_end = mStream->GetBufferEnd(); Please thread assert here. @@ +868,5 @@ > + TrackTicks current_ticks = TimeToTicksRoundUp(track_rate_, current_end); > + > + // We need to know how much has been "inserted" because we're given absolute > + // times in NotifyPull. > + listener_->SetPlayedTime(MediaTimeToMilliseconds(current_end)); MediaTimeToMilliseconds returns uint64_t but you are passing it to a fxn that takes TrackTicks @@ +879,5 @@ > + // AdvanceKnownTracksTicksTime(HEAT_DEATH_OF_UNIVERSE) means that in > + // theory per the API, we can't add more tracks before that > + // time. However, the impl actually allows it, and it avoids a whole > + // bunch of locking that would be required (and potential blocking) > + // if we used smaller values and update them on each NotifyPull. I don't understand what this comment means. Can you rewrite so it's clearer. Minimally it seems like it belongs later. @@ +880,5 @@ > + // theory per the API, we can't add more tracks before that > + // time. However, the impl actually allows it, and it avoids a whole > + // bunch of locking that would be required (and potential blocking) > + // if we used smaller values and update them on each NotifyPull. > +#ifdef DEBUG Why is this bracketed in #ifdef DEBUG? 
This isn't something that happens a lot so it just looks like clutter. @@ +881,5 @@ > + // time. However, the impl actually allows it, and it avoids a whole > + // bunch of locking that would be required (and potential blocking) > + // if we used smaller values and update them on each NotifyPull. > +#ifdef DEBUG > + if (current_end != 0L) { if (current_end) ? @@ +883,5 @@ > + // if we used smaller values and update them on each NotifyPull. > +#ifdef DEBUG > + if (current_end != 0L) { > + MOZ_MTLOG(MP_LOG_INFO, "added track @ " << current_end << > + " -> " << MediaTimeToSeconds(current_end)); Wouldn't MediaTimeToMilliseconds be more useful here. @@ +918,4 @@ > conduit_(conduit), > + played_(0) {} > + > +void MediaPipelineReceiveAudio::PipelineListener:: All on one line, please. @@ +918,5 @@ > conduit_(conduit), > + played_(0) {} > + > +void MediaPipelineReceiveAudio::PipelineListener:: > +EndTrack(TrackID aTrack) { I don't understand what's going on here... We are getting an EndTrack from the MSG and then turning around and calling EndTrack() on the source? This seems like it needs a comment at least. @@ +920,5 @@ > + > +void MediaPipelineReceiveAudio::PipelineListener:: > +EndTrack(TrackID aTrack) { > + MOZ_MTLOG(PR_LOG_DEBUG, "EndTrack(audio)"); > +#ifdef MOZILLA_INTERNAL_API Please add a mock for EndTrack() to FakeMediaStream? One less conditional compilation. @@ +938,5 @@ > while (MillisecondsToMediaTime(played_) < desired_time) { > + // TODO(ekr@rtfm.com): Is there a way to avoid mallocating here? Or reduce the size? > + // Max size given mono is 480*2*1 = 960 (48KHz) > +#define AUDIO_SAMPLE_BUFFER_MAX 1000 > + MOZ_ASSERT((track_rate_/100)*sizeof(uint16_t) <= AUDIO_SAMPLE_BUFFER_MAX); If you're going to compute the length, why not use it? 
@@ +957,5 @@ > + if (err != kMediaConduitNoError) { > + MOZ_MTLOG(PR_LOG_ERROR, "Audio conduit failed (" << err << ") to return data @ " << played_ << > + " (desired " << desired_time << " -> " << MediaTimeToSeconds(desired_time) << ")"); > + samples_length = (track_rate_/100)*sizeof(uint16_t); // if this is not enough we'll loop and provide more > + memset(samples_data, '\0', samples_length); Why are you bothering to fill in your own comfort noise/PLC? Are there any legitimate circumstances in which GIPS would not do so? If you think this is anything other than an assertable error, please actually close the stream. @@ +967,5 @@ > nsAutoTArray<const int16_t*,1> channels; > channels.AppendElement(samples_data); > segment.AppendFrames(samples.forget(), channels, samples_length); > > + // Handle track not actually added yet or removed/finished How could we be getting NotifyPull when the track has not yet been added? For that matter, why are we getting it once it's been removed? Both of these seem like they shouldn't be happening, or if they do happen, should be handled in a cleaner fashion than this. @@ +1020,5 @@ > + > +// We really should find a way to convert incoming display timestamps to > +void MediaPipelineReceiveVideo::PipelineListener:: > +EndTrack(TrackID aTrack) { > + MOZ_MTLOG(PR_LOG_DEBUG, "EndTrack(video)"); Again, what is this for? @@ +1076,5 @@ > // goes past the current aDesiredTime Doing so means a negative > // delta and thus messes up handling of the graph > if (delta > 0) { > VideoSegment segment; > + // FIX! use Width and height of the incoming frame Why is this a fix? I believe we are supposed to get framesize updates separately. @@ +1081,4 @@ > segment.AppendFrame(image ? image.forget() : nullptr, delta, > gfxIntSize(width_, height_)); > + // Handle track not actually added yet or removed/finished > + if (source_->AppendToTrack(track_id_, &segment)) { Again, this seems confusing. 
::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h @@ +235,5 @@ > > bool IsRtp(const unsigned char *data, size_t len); > }; > > +class GenericReceiveListener : public MediaStreamListener Inheritance seems like a poor fit here. Looking at SetupStream (the only thing that's really shared here), and noticing that it doesn't make use of any member variables, it seems to me that you could just make a generic function that isn't part of MediaPipeline at all that did the same thing. @@ +415,5 @@ > { > // release conduit on mainthread. Must use forget()! > NS_DispatchToMainThread(new ConduitDeleteEvent(conduit_.forget()), NS_DISPATCH_NORMAL); > } > + virtual void Init() { SetupStream(source_, track_id_, track_rate_, new AudioSegment); } Convention in this code is new AudioSegment(). @@ +417,5 @@ > NS_DispatchToMainThread(new ConduitDeleteEvent(conduit_.forget()), NS_DISPATCH_NORMAL); > } > + virtual void Init() { SetupStream(source_, track_id_, track_rate_, new AudioSegment); } > + > + virtual void SetPlayedTime(TrackTicks time) { played_ = time; } played_ is of type uint64_t, not TrackTicks. @@ +424,5 @@ > virtual void NotifyQueuedTrackChanges(MediaStreamGraph* graph, TrackID tid, > TrackRate rate, > TrackTicks offset, > uint32_t events, > const MediaSegment& queued_media) MOZ_OVERRIDE {} If you're going to add MOZ_OVERRIDE to virtual fxns, please do it all throughout the file. @@ +427,5 @@ > uint32_t events, > const MediaSegment& queued_media) MOZ_OVERRIDE {} > virtual void NotifyPull(MediaStreamGraph* aGraph, StreamTime aDesiredTime) MOZ_OVERRIDE; > > + virtual void EndTrack(TrackID aTrack); This code does not generally use aFoo. Please also change the NotifyPull signature above. 
@@ +513,5 @@ > PipelineListener(SourceMediaStream * source, TrackID track_id); > > + virtual void Init() { > +#ifdef MOZILLA_INTERNAL_API > + SetupStream(source_, track_id_, USECS_PER_S, new VideoSegment); Convention in this code is new VideoSegment() @@ +527,5 @@ > uint32_t events, > + const MediaSegment& queued_media) MOZ_OVERRIDE {} > + virtual void NotifyPull(MediaStreamGraph* aGraph, StreamTime aDesiredTime) MOZ_OVERRIDE; > + > + virtual void EndTrack(TrackID aTrack); Please change argument names to conform to local style.
Attachment #722104 - Flags: review?(tterribe) → review-
Comment on attachment 722104 [details] [diff] [review] proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time Review of attachment 722104 [details] [diff] [review]: ----------------------------------------------------------------- Restoring r?derf
Attachment #722104 - Flags: review- → review?(tterribe)
Attachment #722104 - Flags: review-
(In reply to Eric Rescorla (:ekr) from comment #44) > Comment on attachment 722104 [details] [diff] [review] > proxy AddTrack() to MSG thread via a custom command so we can get access to > the current stream time > > Review of attachment 722104 [details] [diff] [review]: > ----------------------------------------------------------------- > > Given the complexity in this patch, I need to ask again why this > isn't just something that is handled by MSG. If MSG just provided > something like: > > class TrackAddedCallback { > public: > TrackAdded(StreamTime aCurrentTime) = 0; > > NS_INLINE_DECL_THREADSAFE_REF_COUNTING(TrackAddedCallback). > }; > > > AddTrackAndListenerWithCurrentTime(TrackId aID, TrackRate aRate, > MediaSegment *aSegment, MediaStreamListener *aListener, > const RefPtr<TrackAddedCallback>& > aCompleted); > > > We could simply remove almost all of this code. This would > have the additional advantage that you could use the same > code in gUM and thus eliminate the (allegedly theoretical) > race in gUM initialization. > > If for some reason we can't do this in MSG, the right > thing to do here seems to me to be write this fxn here > as a separate unit rather than wiring it so deep into > MediaPipeline. Generally, I've tried to work within the API of MediaStream where possible, and use the mechanisms provided there. It also avoids arguments over what the API should be, though I've done those too where needed. In this case, AddTrackAndListenerWithCurrentTime() is rather more use-specific than I think would be appropriate to ask for in the API. Also, such a command API would not remove complexity, it would simply shift it wholesale from here to there. I have argued for a more general, simpler form for AddTrack that would allow you to AddTrack() 'now' (and I think that's still useful), and hadn't gotten acceptance of that, so an even more use-specific version seemed unlikely to be acceptable. 
My original patch here was the equivalent to AddTrack('now'), but in the final revision I noticed the code was already adding the tracks before adding the listener (a big no-no and possible cause of underruns and assertions), so I merged it in. I could re-separate them, and re-argue for AddTrack('now'), but this seemed more likely to get an r+ and get in. See comment 10 through comment 12. If AddTrack('now') is implemented, we can simplify SetupStream() using it, but the complexity is encapsulated there. This is, in fact, using the exact mechanism you (and Tim) had called for, and which I had written a bug requesting, which was to expose the ability to execute code in the context of the MSG thread via the command queue. That mechanism is now available as MediaStreamGraphImpl.h has been broken out for use by WebAudio. > ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp > @@ +6,5 @@ > > > > +#ifdef MOZILLA_INTERNAL_API > > +#include "MediaStreamGraph.h" > > +#include "MediaStreamGraphImpl.h" > > +#endif > > I'm not excited about this much conditional compilation around > this code. Why not write the right mocks for FakeMediaStream > instead? This doesn't bother me as much (mostly because I'm already holding my nose at the unfortunate requirement to have FakeMediaStream at all, as we probably all are). However, it could use some cleanup: Changed to #ifndef USE_FAKE_MEDIA_STREAMS and removed MediaStreamGraph.h (as it's already in MediaPipeline.h under the same ifdef guards) and moved it down below MediaPipeline.h. > @@ +846,5 @@ > > return MediaPipelineReceive::Init(); > > } > > > > > > +void GenericReceiveListener::SetupStream(MediaStream *source, > > Why is this a method? Instead of using the member variables, it takes a bunch > of stuff that's already part of the derived class as arguments. Why not just > make it a static and then you won't need to derive the inner PipelineListener > classes from anything. It simplifies interacting with SetPlayedTime(). 
I had at one point passed a pointer to played_ in instead, and then passed the pointer into the ControlMessage subclass. This seemed cleaner than passing pointers to member vars that need updating. I can't call SetPlayedTime() unless there is some base class to do it on. I could break out SetupStream() leaving just SetPlayedTime, but that generally strikes me as a style question: six of one, half a dozen of the other. (And given the different types for played_ (see below), I think the balance tips towards some base class they can override.) > @@ +857,5 @@ > > + class Message : public ControlMessage { > > + public: > > + Message(MediaStream* stream, TrackID track, TrackRate rate, > > + MediaSegment *segment, > > + GenericReceiveListener *listener) > > Please move to previous line. > > Please be consistent about */& placement. This code seems to generally attach > them to the type. 30 years of C programming has ingrained the K&R pointer style in me, and I still strongly prefer it (though I prefer ANSI brace style). See "The Pitfalls of Ignoring K&R" in: http://www.bookofbrilliantthings.com/book/eic/the-golden-rule-of-pointers If you want a pointer type, make a typedef. But the driving requirement is to match the existing code, so no problem, but the current code isn't consistent. Should I correct all the other uses of "Type *foo" in this code? I'll note that this code is probably 80% "Type *foo" currently; most of the "Type* foo" code (not all) is in the Listeners; probably because it was copied from a template. I'd prefer to swap them to "Type *foo" if we're going to convert; and to do that in a separate patch to reduce the clutter factor. And if we're making wholesale style changes like that, we should consider converting it to mozilla style (and we can then argue over "Type *foo" vs "Type* foo"). However, given all the real bugs on our plate, I'd consider style unification work on existing code to be blocking-. 
I will make the patch internally consistent at least, using "Type* foo". > > @@ +861,5 @@ > > + GenericReceiveListener *listener) > > + : ControlMessage(stream), track_id_(track), track_rate_(rate), > > + segment_(segment), listener_(listener) {} > > + > > + virtual void Run() > > Run() { > > Please fix indents throughout this fxn to conform to local style. Ok. Those probably derive from it being modified from an existing ControlMessage command from MSG. > > @@ +863,5 @@ > > + segment_(segment), listener_(listener) {} > > + > > + virtual void Run() > > + { > > + StreamTime current_end = mStream->GetBufferEnd(); > > Please thread assert here. MSG commands don't have thread asserts, and MSG handles what thread commands run on (in a few instances they specifically will run on the calling thread, if we're in Shutdown, but only if the command requests it, which we aren't). > > @@ +868,5 @@ > > + TrackTicks current_ticks = TimeToTicksRoundUp(track_rate_, current_end); > > + > > + // We need to know how much has been "inserted" because we're given absolute > > + // times in NotifyPull. > > + listener_->SetPlayedTime(MediaTimeToMilliseconds(current_end)); > > MediaTimeToMilliseconds returns uint64_t but you are passing it to a fxn > that takes TrackTicks Ah, yes, thanks. For fun, Video and Audio keep track of the played_ value in different units. The conversion can be moved into the SetPlayedTime() override. Thank you, C++ default conversions. :-( > @@ +879,5 @@ > > + // AdvanceKnownTracksTicksTime(HEAT_DEATH_OF_UNIVERSE) means that in > > + // theory per the API, we can't add more tracks before that > > + // time. However, the impl actually allows it, and it avoids a whole > > + // bunch of locking that would be required (and potential blocking) > > + // if we used smaller values and update them on each NotifyPull. > > I don't understand what this comment means. Can you rewrite so it's clearer. > > Minimally it seems like it belongs later. Moved it. 
It made more sense where it was before I added that debug. It makes more sense if you've read the MSG include file. The comment encapsulates the reasoning behind ignoring the requirement in the MSG comments that you not add a track at time less than the last AdvanceKnownTracksTime() (and we've always done so). > @@ +880,5 @@ > > + // theory per the API, we can't add more tracks before that > > + // time. However, the impl actually allows it, and it avoids a whole > > + // bunch of locking that would be required (and potential blocking) > > + // if we used smaller values and update them on each NotifyPull. > > +#ifdef DEBUG > > Why is this bracketed in #ifdef DEBUG? This isn't something that happens a > lot so it just looks like clutter. I don't normally write if (foo) { LOG(...); } without bracketing it in ifdef. However, this does make me think that DEBUG isn't the correct ifdef, it should be #ifdef PR_LOGGING so FORCE_PR_LOG will enable it. (As I never really use FORCE_PR_LOG, I hadn't been thinking about it.) > @@ +881,5 @@ > > + // time. However, the impl actually allows it, and it avoids a whole > > + // bunch of locking that would be required (and potential blocking) > > + // if we used smaller values and update them on each NotifyPull. > > +#ifdef DEBUG > > + if (current_end != 0L) { > > if (current_end) ? For clarity I usually reserve "if (foo)" for pointers and booleans, which this is not. > @@ +883,5 @@ > > + // if we used smaller values and update them on each NotifyPull. > > +#ifdef DEBUG > > + if (current_end != 0L) { > > + MOZ_MTLOG(MP_LOG_INFO, "added track @ " << current_end << > > + " -> " << MediaTimeToSeconds(current_end)); > > Wouldn't MediaTimeToMilliseconds be more useful here. No, the other MediaStream command logs like this all use the above, which is time in (float) seconds. > @@ +918,4 @@ > > conduit_(conduit), > > + played_(0) {} > > + > > +void MediaPipelineReceiveAudio::PipelineListener:: > > All on one line, please. Fine. 
> @@ +918,5 @@ > > conduit_(conduit), > > + played_(0) {} > > + > > +void MediaPipelineReceiveAudio::PipelineListener:: > > +EndTrack(TrackID aTrack) { > > I don't understand what's going on here... We never called EndTrack() before and we need to if we're going to remove the listener. DetachMediaStream is calling into the listener (which is what holds the SourceMediaStream), and telling it to end the track. > We are getting an EndTrack from the MSG and then turning around and calling > EndTrack() on the source? > This seems like it needs a comment at least. This could be moved out of the listener (and add another AsSourceStream(), which is really just a dynamic cast), but the listener is really already more than that, it's the encapsulation of our interactions with the MediaStream itself (addtrack/removetrack/NotifyBlah). If we were to change here, I'd prefer to move the RemoveListener() calls into the current Listener and rename it. > > @@ +920,5 @@ > > + > > +void MediaPipelineReceiveAudio::PipelineListener:: > > +EndTrack(TrackID aTrack) { > > + MOZ_MTLOG(PR_LOG_DEBUG, "EndTrack(audio)"); > > +#ifdef MOZILLA_INTERNAL_API > > Please add a mock for EndTrack() to FakeMediaStream? One less conditional > compilation. Sure. > > @@ +938,5 @@ > > while (MillisecondsToMediaTime(played_) < desired_time) { > > + // TODO(ekr@rtfm.com): Is there a way to avoid mallocating here? Or reduce the size? > > + // Max size given mono is 480*2*1 = 960 (48KHz) > > +#define AUDIO_SAMPLE_BUFFER_MAX 1000 > > + MOZ_ASSERT((track_rate_/100)*sizeof(uint16_t) <= AUDIO_SAMPLE_BUFFER_MAX); > > If you're going to compute the length, why not use it? I could. This was existing code I commented and added an assert for (since there was an unexplained constant), but I could reduce the allocation to the size I believe is the max. 
Really this code should be replaced to avoid the malloc (per your todo, though I wasn't biting off that here), so other than safety checks (ASSERT) I didn't see a need to optimize the slow-but-works malloc for 40 temporary bytes here. > @@ +957,5 @@ > > + if (err != kMediaConduitNoError) { > > + MOZ_MTLOG(PR_LOG_ERROR, "Audio conduit failed (" << err << ") to return data @ " << played_ << > > + " (desired " << desired_time << " -> " << MediaTimeToSeconds(desired_time) << ")"); > > + samples_length = (track_rate_/100)*sizeof(uint16_t); // if this is not enough we'll loop and provide more > > + memset(samples_data, '\0', samples_length); > > Why are you bothering to fill in your own comfort noise/PLC? Are there any > legitimate circumstances in which GIPS would not do so? If you think this > is anything other than an assertable error, please actually close the stream. The existing code would have caused underrun errors (which this bug is about eliminating) if that call for some reason failed, so I wanted to remove it as a source of possible underruns. I hadn't analyzed how this might fail or might fail in some future implementation, and hadn't wanted to assume it was a virtual direct call to GIPS. This was safer than the existing code (which also didn't close the stream, which would require much more extensive changes for what I believe is a very-unlikely-or-less edge case). > @@ +967,5 @@ > > nsAutoTArray<const int16_t*,1> channels; > > channels.AppendElement(samples_data); > > segment.AppendFrames(samples.forget(), channels, samples_length); > > > > + // Handle track not actually added yet or removed/finished > > How could we be getting NotifyPull when the track has not > yet been added? For that matter, why are we getting it once > it's been removed? Both of these seem like they shouldn't > be happening, or if they do happen, should be handled in > a cleaner fashion than this. 
This is an existing issue with all the NotifyPulls if the AddTrack is queued - AddTrack normally works partly on the caller, and partly in the MSG thread's next iteration - and it can call NotifyPull before the track is actually added. This was the cause of other problems we resolved (for GetUserMedia) by making AppendToTrack return an error, so this mirrors that fix. > @@ +1076,5 @@ > > // goes past the current aDesiredTime Doing so means a negative > > // delta and thus messes up handling of the graph > > if (delta > 0) { > > VideoSegment segment; > > + // FIX! use Width and height of the incoming frame > > Why is this a fix? I believe we are supposed to get framesize updates > separately. Removed. It's not relevant; I added it while looking for the code that sets them and didn't remove it. > @@ +415,5 @@ > > { > > // release conduit on mainthread. Must use forget()! > > NS_DispatchToMainThread(new ConduitDeleteEvent(conduit_.forget()), NS_DISPATCH_NORMAL); > > } > > + virtual void Init() { SetupStream(source_, track_id_, track_rate_, new AudioSegment); } > > Convention in this code is new AudioSegment(). ok > > @@ +417,5 @@ > > NS_DispatchToMainThread(new ConduitDeleteEvent(conduit_.forget()), NS_DISPATCH_NORMAL); > > } > > + virtual void Init() { SetupStream(source_, track_id_, track_rate_, new AudioSegment); } > > + > > + virtual void SetPlayedTime(TrackTicks time) { played_ = time; } > > played_ is of type uint64_t, not TrackTicks. See above; addressed. > > @@ +424,5 @@ > > virtual void NotifyQueuedTrackChanges(MediaStreamGraph* graph, TrackID tid, > > TrackRate rate, > > TrackTicks offset, > > uint32_t events, > > const MediaSegment& queued_media) MOZ_OVERRIDE {} > > If you're going to add MOZ_OVERRIDE to virtual fxns, please do it all > throughout the file. It was already on the other listener; I added it here to make it consistent. 
It was added on the other listener due to review comments requesting it in an earlier change (roc has asked for those on NotifyFoo implementations), and should have been added to this one but wasn't. So unless you believe this is a big issue for the rest of the code, I'd prefer to leave as it is. > @@ +427,5 @@ > > uint32_t events, > > const MediaSegment& queued_media) MOZ_OVERRIDE {} > > virtual void NotifyPull(MediaStreamGraph* aGraph, StreamTime aDesiredTime) MOZ_OVERRIDE; > > > > + virtual void EndTrack(TrackID aTrack); > > This code does not generally use aFoo. Please also change the NotifyPull > signature above. Ok. > @@ +513,5 @@ > > PipelineListener(SourceMediaStream * source, TrackID track_id); > > > > + virtual void Init() { > > +#ifdef MOZILLA_INTERNAL_API > > + SetupStream(source_, track_id_, USECS_PER_S, new VideoSegment); > > Convention in this code is new VideoSegment() Ok > @@ +527,5 @@ > > uint32_t events, > > + const MediaSegment& queued_media) MOZ_OVERRIDE {} > > + virtual void NotifyPull(MediaStreamGraph* aGraph, StreamTime aDesiredTime) MOZ_OVERRIDE; > > + > > + virtual void EndTrack(TrackID aTrack); > > Please change argument names to conform to local style. ok.
ekr: if you want to re-review as well, please do so.
Attachment #722104 - Attachment is obsolete: true
Attachment #722104 - Flags: review?(tterribe)
Attachment #727678 - Flags: review?(tterribe)
> Generally, I've tried to work within the API of MediaStream where possible,
> and use the mechanisms provided there. It also avoids arguments over what
> the API should be, though I've done those too where needed. In this case,
> AddTrackAndListenerWithCurrentTime() is rather more use-specific than I
> think would be appropriate to ask for in the API.

Really? It seems to me like this is exactly the API call that is needed based
on the invariants of MSG.

> would not remove complexity, it would simply shift if wholesale from here to
> there.

It would move it from all the consumer classes where it can keep getting
messed up into the MSG where we can get it right once.

> This is, in fact, using the exact mechanism you (and Tim) had called for,
> and which I had written a bug requesting, which was to expose the ability to
> execute code in the context of the MSG thread via the command queue. That
> mechanism is now available as MediaStreamGraphImpl.h has been broken out for
> use by WebAudio.

I mostly remember Tim asking for this. I remember asking the question I asked
above, which is why this can't be added to MSG. I don't consider the answer
above convincing.

I disagree with a number of your comments below, but there's no point in
addressing those without resolving this basic design issue.
Blocks: 854149
Status: NEW → ASSIGNED
Comment on attachment 727678 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

Review of attachment 727678 [details] [diff] [review]:
-----------------------------------------------------------------

This is making progress.

> 1) We're using milliseconds on purpose because TrackTicks (and StreamTime) don't
> convert to milliseconds without rounding error, and this causes a drift. I plan to
> keep this unless/until StreamTime is a clean conversion to ms (and even then add some
> sort of compile-time assertion if I can).

The problem is that your conversions to/from milliseconds are not the same as
the MSG's conversions between TrackTicks and StreamTime, so you may get
different ideas about when you're done in NotifyPull(). In this case
TrackTicks _can_ be converted to milliseconds (and back) without rounding
error, but there isn't really a reason to. TrackTicks correspond to audio
samples per channel, and GIPS always outputs a fixed number of those, so
keeping your accounting in TrackTicks isn't going to have any drift or
rounding error. I don't know why you've claimed otherwise.

> Ok (hold onto your seats, this ride is bumpy):

I had done the same analysis and come to the same conclusion, including the
"this is too fragile to trust that someone won't come along and break it
without noticing" part.

> (and in fact I added an assertion to NotifyPull in the "add debugs to MSG" patch to
> complain if it fires and there are no listeners). This assertion hasn't fired in my
> tests, but theoretically it could happen.

If you actually read MediaStreamGraphImpl::ExtractPendingInput(), you'll find
it skips calling NotifyPull() if aStream->mListeners.IsEmpty() is true...
since the only things that run between that check and your assertion are
graph and stream time conversions, I'd be very surprised if your assertion
ever fired.

That's somewhat irrelevant to the issue here, however, as listeners are a
property of the stream, but the ones here only service an individual track.
That means it's perfectly possible for a stream to have _a_ listener, but not
necessarily have the one that is supposed to service the track that was just
added.

> > Generally, I've tried to work within the API of MediaStream where possible, and use
> > the mechanisms provided there. It also avoids arguments over what the API should
> > be, though I've done those too where needed. In this case,
> > AddTrackAndListenerWithCurrentTime() is rather more use-specific than I think would
> > be appropriate to ask for in the API.
> 
> Really? It seems to me like this is exactly the API call that is
> needed based on the invariants of MSG.

Well, I think the requirements of gUM will wind up being different. It's
similar in that you want to add a set of tracks and a listener as a single
atomic action, but in that case there's only one listener, NotifyPull() isn't
used for audio, and the stream source may be shared (with its own ideas of
what the stream times should be). And eventually we'll want to take into
account real timestamps. I think gUM needs to be refactored to look something
_like_ this, but I don't think it will look exactly like this. We can do this
in stages. Once we know what both things will look like, we can move the
common elements into the MSG.

> Ah, yes, thanks. For fun, Video and Audio keep track of the played_ value in
> different units. The conversion can be moved into the SetPlayedTime() override.
> Thank you, C++ default conversions. :-(

Again why I was recommending it all be done in TrackTicks.

> This was safer than the existing code (which also didn't close the stream, which
> would require much more extensive changes for what I believe is a very-unlikely-
> or-less edge case).

I agree it's less-than-unlikely. If it can't happen, just add an abort and be
done with it.

> AddTrack normally works partly on the caller, and partly in the MSG thread's next
> iteration - and it can call NotifyPull before the track is actually added. This was
> the cause of other problems we resolved (for GetUserMedia) by making AppendToTrack
> return an error, so this mirrors that fix.

AddTrack() isn't queued (though AddListener() is). Instead, AddTrack()
acquires a mutex on the stream and then takes effect immediately. The
problems in bug 843214 were caused by the fact that sources are shared, which
meant its internal state indicating whether or not the source had been
"started" did not match whether or not the track had been added to any
particular stream.

::: content/media/MediaSegment.h
@@ +35,5 @@
> {
>   return aTime*(1.0/(1 << MEDIA_TIME_FRAC_BITS));
> }
> 
> +inline uint64_t MediaTimeToMilliseconds(MediaTime aTime)

I still think this can go away (see below).

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +837,5 @@
>   return MediaPipelineReceive::Init();
> }
> 
> 
> +void GenericReceiveListener::SetupStream(MediaStream* source,

Aren't we setting up a Track, not a whole Stream?

@@ +854,5 @@
> +      track_rate_(rate),
> +      segment_(segment),
> +      listener_(listener) {}
> +
> +  virtual void Run() MOZ_OVERRIDE

@@ +924,5 @@
>     return;
>   }
> 
>   // This comparison is done in total time to avoid accumulated roundoff errors.
>   while (MillisecondsToMediaTime(played_) < desired_time) {

I maintain that this should be TicksToTimeRoundDown(track_rate_, played_),
with played_ measured in TrackTicks, in order to guarantee we actually
deliver what the MSG is requesting. Then SetPlayedTime() no longer needs to
be virtual and you get rid of even more specialized code.

@@ +943,1 @@
>                0,  // TODO(ekr@rtfm.com): better estimate of capture delay

Not your comment, I know, but this is supposed to be playback delay, not
capture delay.

@@ +957,5 @@
>     nsAutoTArray<const int16_t*,1> channels;
>     channels.AppendElement(samples_data);
>     segment.AppendFrames(samples.forget(), channels, samples_length);
> 
> +    // Handle track not actually added yet or removed/finished

Because you add the listener directly from the MSG thread at the same time
you add the track, if we're here, it must have been added.

@@ +1007,5 @@
> }
> 
> +void MediaPipelineReceiveVideo::PipelineListener::EndTrack(TrackID track_id) {
> +  MOZ_MTLOG(PR_LOG_DEBUG, "EndTrack(video)");
> +  source_->EndTrack(track_id);  // XXX is there a danger it was Destroy()ed?

Why does it matter?

@@ +1062,5 @@
>     if (delta > 0) {
>       VideoSegment segment;
>       segment.AppendFrame(image ? image.forget() : nullptr, delta,
>                           gfxIntSize(width_, height_));
> +      // Handle track not actually added yet or removed/finished

See above.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h
@@ +229,5 @@
> +{
> +public:
> +  virtual ~GenericReceiveListener() {};
> +
> +  virtual void Init() = 0;

Both implementations of this function just call SetupStream(), and the only
callers already know which derived class they're dealing with. Why can't you
just call SetupStream() directly?

@@ +231,5 @@
> +  virtual ~GenericReceiveListener() {};
> +
> +  virtual void Init() = 0;
> +
> +  virtual void SetupStream(MediaStream *source, TrackID track_id, TrackRate track_rate,

Any reason this needs to be virtual? I thought the point was to share the
implementation.

@@ +404,5 @@
>   {
>     // release conduit on mainthread. Must use forget()!
>     NS_DispatchToMainThread(new ConduitDeleteEvent(conduit_.forget()), NS_DISPATCH_NORMAL);
>   }
> +  virtual void Init() { SetupStream(source_, track_id_, track_rate_, new AudioSegment()); }

MOZ_OVERRIDE

@@ +406,5 @@
>     NS_DispatchToMainThread(new ConduitDeleteEvent(conduit_.forget()), NS_DISPATCH_NORMAL);
>   }
> +  virtual void Init() { SetupStream(source_, track_id_, track_rate_, new AudioSegment()); }
> +
> +  virtual void SetPlayedTime(StreamTime time) { played_ = MediaTimeToMilliseconds(time); }

MOZ_OVERRIDE (though, if you follow the comments above, this entire
specialization can go away)

@@ +421,3 @@
> 
>  private:
>   SourceMediaStream *source_;

It seems like source_, track_id_, track_rate_, and played_ could all be
members of GenericReceiveListener, which would, e.g., let you share
EndTrack(), etc. Also not sure why EndTrack() needs to take a TrackID if
it's in a listener that already knows the track ID it corresponds to.

@@ +426,2 @@
>   RefPtr<MediaSessionConduit> conduit_;
>   uint64_t played_;  // Amount of media played in milliseconds.

I would argue that if you need to call the method SetPlayedTime(), then the
variable should be played_time_, not just played_. Though if you follow my
advice, SetPlayedTicks()/played_ticks_ would be better.

@@ +501,4 @@
>  public:
>   PipelineListener(SourceMediaStream * source, TrackID track_id);
> 
> +  virtual void Init() {

MOZ_OVERRIDE

@@ +501,5 @@
>  public:
>   PipelineListener(SourceMediaStream * source, TrackID track_id);
> 
> +  virtual void Init() {
> +#ifdef MOZILLA_INTERNAL_API

Doesn't SetupStream() already have a MOZILLA_INTERNAL_API #ifdef around its
body?

@@ +506,5 @@
> +    SetupStream(source_, track_id_, USECS_PER_S, new VideoSegment());
> +#endif
> +  }
> +
> +  virtual void SetPlayedTime(StreamTime time) { played_ = TimeToTicksRoundUp(USECS_PER_S, time); }

MOZ_OVERRIDE (though, if you follow the comments above, this entire
specialization can go away)
Attachment #727678 - Flags: review?(tterribe) → review-
(In reply to Timothy B. Terriberry (:derf) from comment #50)
> Comment on attachment 727678 [details] [diff] [review]
> proxy AddTrack() to MSG thread via a custom command so we can get access to
> the current stream time
> 
> Review of attachment 727678 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> This is making progress.

Good, thanks. Mostly I just implemented your comments.

> > 1) We're using milliseconds on purpose because TrackTicks (and StreamTime) don't
> > convert to milliseconds without rounding error, and this causes a drift. I plan to
> > keep this unless/until StreamTime is a clean conversion to ms (and even then add some
> > sort of compile-time assertion if I can).
> 
> The problem is that your conversions to/from milliseconds is not the same as
> the MSG's conversions between TrackTicks and StreamTime, so you may get
> different ideas about when you're done in NotifyPull(). In this case
> TrackTicks _can_ be converted to milliseconds (and back) without rounding
> error, but there isn't really a reason to. TrackTicks correspond to audio
> samples per channel, and GIPS always outputs a fixed number of those, so
> keeping your accounting in TrackTicks isn't going to have any drift or
> rounding error. I don't know why you've claimed otherwise.

My apologies; I misread your comments as wanting to switch back to
StreamTime, which does have inexact conversion issues. Yes, we can use
TrackTicks as a standin for ms, and that does simplify things.

> > Ok (hold onto your seats, this ride is bumpy):
> 
> I had done the same analysis and come to the same conclusion, including the
> "this is too fragile to trust that someone won't come along and break it
> without noticing" part.

Cool; glad we got the same result.

> > This was safer than the existing code (which also didn't close the stream, which
> > would require much more extensive changes for what I believe is a very-unlikely-
> > or-less edge case).
> 
> I agree it's less-than-unlikely. If it can't happen, just add an abort and
> be done with it.

Well, there is an error value, and perhaps it could happen in the future if
something changes in Conduit or GIPS, so better safe IMHO. I'll add an
NS_ASSERTION().

> > +void GenericReceiveListener::SetupStream(MediaStream* source,
> 
> Aren't we setting up a Track, not a whole Stream?

Yup. Changed.

> @@ +924,5 @@
> >     return;
> >   }
> > 
> >   // This comparison is done in total time to avoid accumulated roundoff errors.
> >   while (MillisecondsToMediaTime(played_) < desired_time) {
> 
> I maintain that this should be TicksToTimeRoundDown(track_rate_, played_),
> with played_ measured in TrackTicks, in order to guarantee we actually
> deliver what the MSG is requesting. Then SetPlayedTime() no longer needs to
> be virtual and you get rid of even more specialized code.

Done.

> @@ +957,5 @@
> >     nsAutoTArray<const int16_t*,1> channels;
> >     channels.AppendElement(samples_data);
> >     segment.AppendFrames(samples.forget(), channels, samples_length);
> > 
> > +    // Handle track not actually added yet or removed/finished
> 
> Because you add the listener directly from the MSG thread at the same time
> you add the track, if we're here, it must have been added.

Ok (comment predates that), but in theory it could be removed/finished, so
I'd like to keep the code, and as a "good example".

> ::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h
> @@ +229,5 @@
> > +{
> > +public:
> > +  virtual ~GenericReceiveListener() {};
> > +
> > +  virtual void Init() = 0;
> 
> Both implementations of this function just call SetupStream(), and the only
> callers already know which derived class they're dealing with. Why can't you
> just call SetupStream() directly?

Ok

> @@ +421,3 @@
> > 
> >  private:
> >   SourceMediaStream *source_;
> 
> It seems like source_, track_id_, track_rate_, and played_ could all be
> members of GenericReceiveListener, which would, e.g., let you share
> EndTrack(), etc. Also not sure why EndTrack() needs to take a TrackID if
> it's in a listener that already knows the track ID it corresponds to.

Yup, definitely cleaner.

> > +  virtual void Init() {
> > +#ifdef MOZILLA_INTERNAL_API
> 
> Doesn't SetupStream() already have a MOZILLA_INTERNAL_API #ifdef around its body?

VideoSegment() is MOZILLA_INTERNAL_API only
Comment on attachment 729444 [details] [diff] [review] proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time Note: interdiffs posted as well
Attachment #729444 - Flags: review?(tterribe)
Now that you and derf seem to be converging some, I want to discuss the general architectural question I raised in comment #49. There is simply no good reason to refactor MediaPipeline when we can provide a separable fxn that will do an atomic AddTrack/AddListener (the fxn that I contend should be in MSG in any case.) Please do not land this patch until we have resolved this issue.
Comment on attachment 729444 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

Review of attachment 729444 [details] [diff] [review]:
-----------------------------------------------------------------

As discussed in the meeting today, let's abstract out SetupTrack() along the
lines that ekr suggested in comment 44 (except with TrackTicks instead of a
StreamTime in the callback). I'll take another look when that's done.

ekr, you should still go through this patch and point out the style issues
you mentioned.

> Well, there is an error value, and perhaps it could happen in the future if
> something changes in Conduit or GIPS, so better safe IMHO. I'll add an
> NS_ASSERTION().

If someone changes something to allow this in the future, having it crash
here is a pretty good way to make sure they notice. But injecting silence is
a good way for people to discover there's some kind of unknown problem
creating audio quality issues without having any way to find the underlying
cause.

> VideoSegment() is MOZILLA_INTERNAL_API only

I start to understand why there's so much nose-holding going on.

::: content/media/MediaSegment.h
@@ +35,5 @@
> {
>   return aTime*(1.0/(1 << MEDIA_TIME_FRAC_BITS));
> }
> 
> +inline uint64_t MediaTimeToMilliseconds(MediaTime aTime)

This is unused.
Attachment #729444 - Flags: review?(tterribe)
(In reply to Timothy B. Terriberry (:derf) from comment #56)
> Comment on attachment 729444 [details] [diff] [review]
> proxy AddTrack() to MSG thread via a custom command so we can get access to
> the current stream time
> 
> Review of attachment 729444 [details] [diff] [review]:
> -----------------------------------------------------------------
> 
> As discussed in the meeting today, let's abstract out SetupTrack() along the
> lines that ekr suggested in comment 44 (except with TrackTicks instead of a
> StreamTime in the callback). I'll take another look when that's done.

Per meeting, done and about to be uploaded.

> > Well, there is an error value, and perhaps it could happen in the future if
> > something changes in Conduit or GIPS, so better safe IMHO. I'll add an
> > NS_ASSERTION().
> 
> If someone changes something to allow this in the future, having it crash
> here is a pretty good way to make sure they notice. But injecting silence is
> a good way for people to discover there's some kind of unknown problem
> creating audio quality issues without having any way to find the underlying
> cause.

MOZ_ASSERT() vs NS_ASSERTION - I don't care a lot; either will get flagged
(I'd hope - ok, MOZ_ASSERT is more likely to generate a bug report from tbpl;
I can do that). I think for opt builds, we have a choice of ignoring it and
getting random underruns, or handling it and getting silence. I really do
dislike embedding knowledge of the implementations elsewhere, especially in
an imported API that might change, and MOZ_ASSERT will only catch it if it
happens in a test or a developer's debug build; I can see changes where it
fails in certain real-world conditions getting past our testing, and when it
gets hit in the field silence is better than crashing (MOZ_CRASH) or
underruns (randomish behavior). IMHO. It's an opinion. It's worth what you
paid for it. :-)

> > VideoSegment() is MOZILLA_INTERNAL_API only
> 
> I start to understand why there's so much nose-holding going on.

:-/

> ::: content/media/MediaSegment.h
> @@ +35,5 @@
> > {
> >   return aTime*(1.0/(1 << MEDIA_TIME_FRAC_BITS));
> > }
> > 
> > +inline uint64_t MediaTimeToMilliseconds(MediaTime aTime)
> 
> This is unused.

Right, thanks.
Attachment #722105 - Attachment is obsolete: true
Attachment #727679 - Attachment is obsolete: true
Attachment #729442 - Attachment is obsolete: true
Attachment #729444 - Attachment is obsolete: true
Attachment #729884 - Flags: review?(ekr)
Comment on attachment 729884 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

Review of attachment 729884 [details] [diff] [review]:
-----------------------------------------------------------------

This is architecturally a lot more like what I was hoping for, but I'd like
to see one more pass...

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +861,5 @@
> +{
> +  // This both adds the listener and the track
> +#ifdef MOZILLA_INTERNAL_API
> +  class Message : public ControlMessage {
> +    public:

public is indented one stop.

@@ +872,5 @@
> +        segment_(segment),
> +        listener_(listener),
> +        completed_(completed) {}
> +
> +    virtual void Run() MOZ_OVERRIDE

Opening brace on same line.

@@ +882,5 @@
> +
> +    // Add a track 'now' to avoid possible underrun, especially if we add
> +    // a track "later".
> +
> +#ifdef PR_LOGGING

Please remove the #define. It's just cruft. MOZ_MTLOG() is already a macro;
if we want to remove it when not debugging, the fix is to do it globally.

@@ +905,5 @@
> +      // We need to know how much has been "inserted" because we're given absolute
> +      // times in NotifyPull.
> +      completed_->TrackAdded(current_ticks);
> +    }
> +    TrackID track_id_;

Is there a reason these are not private?

@@ +953,5 @@
>     MediaConduitErrorCode err =
>         static_cast<AudioSessionConduit*>(conduit_.get())->GetAudioFrame(
>             samples_data,
> +            track_rate_,
> +            0,  // TODO(ekr@rtfm.com): better estimate of "capture" delay

Can you fix this comment per derf.

@@ +960,2 @@
> 
> +    if (err != kMediaConduitNoError) {

I would really prefer if this were removed. It just seems like cruft for a
"can't happen" condition. At least please restore the assert that you had
here before.

@@ +979,5 @@
> +      played_ticks_ += track_rate_/100;  // 10ms in TrackTicks
> +    } else {
> +      // we can't un-read the data, but that's ok since we don't want to
> +      // buffer - but don't i-loop!
> +      break;

s/break/return/

Log on failure.

But more generally, is this really the right answer? If this fails are we
going to be called again and just loop around the requested time? This seems
like the least optimal outcome....

@@ +1078,5 @@
>       segment.AppendFrame(image ? image.forget() : nullptr, delta,
>                           gfxIntSize(width_, height_));
> +      // Handle track not actually added yet or removed/finished
> +      if (source_->AppendToTrack(track_id_, &segment)) {
> +        played_ticks_ = target;

I would prefer a return on failure. Also, please log.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.h
@@ +232,5 @@
> +class MediaPipelineReceiveVideo;
> +
> +class GenericReceiveListener : public MediaStreamListener
> +{
> +public:

public/private are indented one space.

@@ +236,5 @@
> +public:
> +  GenericReceiveListener(SourceMediaStream *source, TrackID track_id,
> +                         TrackRate track_rate)
> +      : source_(source)
> +      , track_id_(track_id)

Style in this code has , at the end of the line.

@@ +242,5 @@
> +      , played_ticks_(0) {}
> +
> +  virtual ~GenericReceiveListener() {}
> +
> +  void SetPlayedTicks(TrackTicks time)

Opening brace on the same line here and below.

@@ +258,5 @@
> +  TrackID track_id_;
> +  TrackRate track_rate_;
> +  TrackTicks played_ticks_;
> +
> +  friend class MediaPipelineReceiveAudio;

Rather than making these friends, why not have an AddSelf() that is a thunk
to AddTrackAndListener.

@@ +287,5 @@
> +private:
> +  RefPtr<GenericReceiveListener> listener_;
> +};
> +
> +// Add a track and listener on the MSG thread using the MSG command queue

Why not define this as static?

@@ +291,5 @@
> +// Add a track and listener on the MSG thread using the MSG command queue
> +void AddTrackAndListener(MediaStream* source,
> +                         TrackID track_id, TrackRate track_rate,
> +                         MediaStreamListener* listener, MediaSegment* segment,
> +                         RefPtr<TrackAddedCallback>& completed);

const RefPtr<>&, please.

@@ +292,5 @@
> +void AddTrackAndListener(MediaStream* source,
> +                         TrackID track_id, TrackRate track_rate,
> +                         MediaStreamListener* listener, MediaSegment* segment,
> +                         RefPtr<TrackAddedCallback>& completed);
> +

Whitespace.
Attachment #729884 - Flags: review?(ekr) → review-
(In reply to Eric Rescorla (:ekr) from comment #60)

In general all comments should be addressed.

> > +#ifdef PR_LOGGING
> 
> Please remove the #define. It's just cruft.
> 
> MOZ_MTLOG() is already a macro; if we want to remove it when not debugging,
> the fix is to do it globally.

done. I use #ifdef when a LOG statement needs logic or lookup code around it
to avoid relying on the compiler to wipe it away (and often code like that in
opt will generate unused warnings, so that's why it's habit.)

> @@ +905,5 @@
> > +      // We need to know how much has been "inserted" because we're given absolute
> > +      // times in NotifyPull.
> > +      completed_->TrackAdded(current_ticks);
> > +    }
> > +    TrackID track_id_;
> 
> Is there a reason these are not private?

Only because none of the MSG Message subclasses bother with it (all of which
are embedded classes like this, and thus have limited visibility). Done.

> @@ +960,2 @@
> > 
> > +    if (err != kMediaConduitNoError) {
> 
> I would really prefer if this were removed. It just seems like cruft for a
> "can't happen" condition
> 
> At least please restore the assert that you had here before.

It's there; I had moved it to after the MTLOG statement and made it a
MOZ_ASSERT per derf's request.

> @@ +979,5 @@
> > +      played_ticks_ += track_rate_/100;  // 10ms in TrackTicks
> > +    } else {
> > +      // we can't un-read the data, but that's ok since we don't want to
> > +      // buffer - but don't i-loop!
> > +      break;
> 
> s/break/return/
> 
> Log on failure.

done.

> But more generally, is this really the right answer? If this fails
> are we going to be called again and just loop around the requested
> time? This seems like the least optimal outcome....

It doesn't re-call you until it gets what it wants (it calls once max per
10ms MSG iteration); I wanted to be sure if the call failed (for whatever
reason) we didn't i-loop ourselves.
Attachment #729943 - Flags: review?(ekr)
Attachment #729884 - Attachment is obsolete: true
(In reply to Randell Jesup [:jesup] from comment #61)
> > But more generally, is this really the right answer? If this fails
> > are we going to be called again and just loop around the requested
> > time? This seems like the least optimal outcome....
> 
> It doesn't re-call you until it gets what it wants (it calls once max per
> 10ms MSG iteration); I wanted to be sure if the call failed (for whatever
> reason) we didn't i-loop ourselves.

As I said when I asked for the comment to be fixed, this can only happen if
the track has ended. In that case, the stream no longer cares how much data
is in it.
Against my preferred paranoia, I'll change it to whatever you want. Just include the snippet you want and I'll check it in with that. I just need to get this one fixed. Thanks.
Comment on attachment 729943 [details] [diff] [review]
proxy AddTrack() to MSG thread via a custom command so we can get access to the current stream time

Review of attachment 729943 [details] [diff] [review]:
-----------------------------------------------------------------

lgtm with minor nits.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp
@@ +862,5 @@
> +                         const RefPtr<TrackAddedCallback>& completed) {
> +  // This both adds the listener and the track
> +#ifdef MOZILLA_INTERNAL_API
> +  class Message : public ControlMessage {
> +    public:

Sorry I wasn't clear. Indentation pattern is:

class Foo {
 public:
  Foo()...

@@ +960,5 @@
> +    if (err != kMediaConduitNoError) {
> +      // Insert silence on conduit/GIPS failure (extremely unlikely)
> +      MOZ_MTLOG(PR_LOG_ERROR, "Audio conduit failed (" << err << ") to return data @ " << played_ticks_ <<
> +                " (desired " << desired_time << " -> " << MediaTimeToSeconds(desired_time) << ")");
> +      MOZ_ASSERT(err == kMediaConduitNoError);

Wouldn't MOZ_ASSERT(false) be clearer?
Attachment #729943 - Flags: review?(ekr) → review+
https://hg.mozilla.org/integration/mozilla-inbound/rev/7c249cd86217

MOZ_ASSERT(foo) will log "foo" (the failed invariant) before crashing. (You
can also use MOZ_ASSERT(foo, "bar").) So I usually prefer to avoid
MOZ_ASSERT(false). An alternative would be MOZ_ASSERT(false, "bar"), which
would output "false ( bar )" - which still isn't great.
Target Milestone: --- → mozilla22
A scan of builds from the landing and following pushes shows no "underran" assertions, which almost always showed up in small numbers on Mac M2's and C's. I'll put together a patch to remove the assertion suppressions from the dom/media tests.
Status: ASSIGNED → RESOLVED
Closed: 12 years ago
Flags: in-testsuite+
Resolution: --- → FIXED
If the intermittent failures stay silent for a few days, then we can mark this bug verified.
Whiteboard: [WebRTC][blocking-webrtc+][leave-open] → [WebRTC][blocking-webrtc+][leave-open][qa-]
Comment on attachment 730381 [details] [diff] [review] Remove assertion supression from dom/media/test/mochitests Review of attachment 730381 [details] [diff] [review]: ----------------------------------------------------------------- lgtm
Attachment #730381 - Flags: review?(jsmith) → review+
Whiteboard: [WebRTC][blocking-webrtc+][leave-open][qa-] → [WebRTC][blocking-webrtc+][qa-]
Blocks: 855595
Whiteboard: [WebRTC][blocking-webrtc+][qa-] → [WebRTC][blocking-webrtc+][qa-][adv-main22-]
Group: core-security