Closed Bug 1400363 Opened 7 years ago Closed 7 years ago

Update muted state on tracks when negotiation happens

Categories: Core :: WebRTC: Signaling (enhancement, P2)

Tracking: RESOLVED FIXED in mozilla59
Tracking Status: firefox59 --- fixed

People: Reporter: bwc, Assigned: bwc

Attachments: 5 files

No description provided.
Rank: 15
Priority: -- → P2
Assignee: nobody → docfaraday
There are now patches on bug 1208378 that you can use to start implementing this.
Flags: needinfo?(docfaraday)
Ok, got it.
Flags: needinfo?(docfaraday)
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209564

Looks good with the two concerns below addressed.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1934
(Diff revision 2)
> +    NS_DispatchToMainThread(NewRunnableMethod(
> +          "GenericReceiveListener::OnRtpReceived_m",
> +          this,
> +          &GenericReceiveListener::OnRtpReceived_m));

Do we need to hold a temporary reference to this to avoid the object getting destroyed before that runnable gets executed?

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1934
(Diff revision 2)
>      }
>    }
>  
> +  void OnRtpReceived()
> +  {
> +    NS_DispatchToMainThread(NewRunnableMethod(

Does this mean we are going to dispatch a runnable to main thread on every single RTP packet we receive?

If so then we should probably only do this on the first received RTP packet.
Attachment #8933098 - Flags: review?(drno) → review+
Comment on attachment 8933099 [details]
Bug 1400363 - Part 3: Start webrtc receive tracks as muted.

https://reviewboard.mozilla.org/r/204068/#review209566

LGTM
Attachment #8933099 - Flags: review?(drno) → review+
Comment on attachment 8933100 [details]
Bug 1400363 - Part 4: Mute webrtc receive tracks when they are negotiated to stop receiving.

https://reviewboard.mozilla.org/r/204070/#review209576

LGTM after making that comment less confusing.

::: media/webrtc/signaling/src/peerconnection/TransceiverImpl.cpp:515
(Diff revision 2)
>        } else {
> -        currentDirection = dom::RTCRtpTransceiverDirection::Inactive;
> +        aJsTransceiver.SetCurrentDirection(
> +            dom::RTCRtpTransceiverDirection::Inactive, aRv);
>        }
> +
> +      // If negotiation stops a remote track from sending, we mark the track muted.

This comment confused me. I think it would be easier to match to the code above if it says "If negotiation stops a track from receiving, we mark the track muted."
Attachment #8933100 - Flags: review?(drno) → review+
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209564

> Do we need to hold a temporary reference to this to avoid the object getting destroyed before that runnable gets executed?

My understanding is that NewRunnableMethod does this, since it requires the second arg to have AddRef() and Release() methods. I can double-check.

> Does this mean we are going to dispatch a runnable to main thread on every single RTP packet we receive?
> 
> If so then we should probably only do this on the first received RTP packet.

Hmm. Easier said than done. This code path needs to be exercised every time an RTP packet is received while the track is muted (which can happen any number of times), but the muted state is only safe to check on main. We'd probably have to cache the muted state in an Atomic or something. I'll look into it.
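Something along these lines, maybe (just a sketch, not a final patch; `maybe_muted_` is a made-up member name, and the `MutedChanged()` call is the one mentioned above):

    // Sketch only: cache "might still be muted" in an Atomic that is cheap and
    // safe to read on the thread that receives RTP, so we only dispatch to main
    // when it could actually matter.
    mozilla::Atomic<bool> maybe_muted_{true};

    void OnRtpReceived()
    {
      // Called for every RTP packet on the receive thread; early-out once the
      // track is known to be unmuted.
      if (maybe_muted_) {
        NS_DispatchToMainThread(NewRunnableMethod(
            "GenericReceiveListener::OnRtpReceived_m",
            this,
            &GenericReceiveListener::OnRtpReceived_m));
      }
    }

    void OnRtpReceived_m()
    {
      // Main thread: safe to touch listening_ and track_ here.
      if (listening_ && track_ && track_->Muted()) {
        maybe_muted_ = false; // a negotiated re-mute would need to flip this back
        track_->MutedChanged(false);
      }
    }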
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209564

> My understanding is that NewRunnableMethod does this, since it requires the second arg to have AddRef() and Release() methods. I can double-check.

Ok, this does acquire a reference, so it should be good.
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review209688

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:82
(Diff revision 2)
> +  let onmute = track => {
> +    return new Promise(resolve => track.onmute = () => {
> +      if (track.muted) {
> +        resolve();
> +      } else {
> +        ok(false, "onmute fired, but track isn't muted!");
> +        reject();
> +      }
> +    });
> +  };
> +
> +  let onunmute = track => {
> +    return new Promise(resolve => track.onunmute = () => {
> +      if (track.muted) {
> +        ok(false, "onunmute fired, but track is still muted!");
> +        reject();
> +      } else {
> +        resolve();
> +      }
> +    });
> +  };

Since you have the full test framework here, you could use `haveEvent()` instead. Perhaps even `haveEventsButNoMore()`.

https://searchfox.org/mozilla-central/rev/be78e6ea9b10b1f5b2b3b013f01d86e1062abb2b/dom/media/tests/mochitest/head.js#678

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:824
(Diff revision 2)
> +    // Check that receive tracks start muted
> +    hasProps(pc1.getTransceivers(),
> +      [
> +        {receiver: {track: {kind: "audio", muted: true}}},
> +        {receiver: {track: {kind: "video", muted: true}}}
> +      ]);
> +
> +    hasProps(pc1.getTransceivers(),
> +      [
> +        {receiver: {track: {kind: "audio", muted: true}}},
> +        {receiver: {track: {kind: "video", muted: true}}}
> +      ]);

We should check that video is black and audio is silent when muted.

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:855
(Diff revision 2)
> +    // Check that receive tracks are unmuted when RTP starts flowing
> +    await onunmuteaudio1;
> +    await onunmutevideo1;
> +    await onunmuteaudio2;
> +    await onunmutevideo2;

Should check that audio and video are flowing when unmuted.

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:872
(Diff revision 2)
> +    await onmuteaudio1;
> +    await onmutevideo1;

Should check that audio1, video1 are silent/black, after mute. Likewise we should check that audio2, video2 are still flowing.

and so on for the remaining two cases too.
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209694

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1896
(Diff revision 3)
>  };
>  
>  class GenericReceiveListener : public MediaStreamListener
>  {
>   public:
> -  GenericReceiveListener(SourceMediaStream *source, TrackID track_id)
> +  GenericReceiveListener(DOMMediaStream *stream, TrackID track_id)

Can we replace the `DOMMediaStream*` and `TrackID` here with just a `MediaStreamTrack*`?

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1947
(Diff revision 3)
> +    }
> +  }
> +
> +  void OnRtpReceived_m()
> +  {
> +    if (listening_ && track_ && track_->Muted()) {

This check should really happen on the thread that calls `OnRtpReceived()`. Then you can just dispatch  `MutedChanged()` straight to main thread.
Comment on attachment 8933100 [details]
Bug 1400363 - Part 4: Mute webrtc receive tracks when they are negotiated to stop receiving.

https://reviewboard.mozilla.org/r/204070/#review209704

::: media/webrtc/signaling/src/peerconnection/TransceiverImpl.cpp:515
(Diff revision 3)
> +      // If negotiation stops a track from receiving (ie; m-section is
> +      // negotiated "sendonly" or "inactive"), we mark the track muted.  We do
> +      // _not_ do the reverse; we need to wait for RTP to unmute according to
> +      // the spec. That happens in MediaPipeline.

I think it would simplify things if this signaled the muted state to MediaPipeline on the thread that receives rtp packets.

The main thread muted state change should then probably be handled by MediaPipeline as well, to keep state in sync.

::: media/webrtc/signaling/src/peerconnection/TransceiverImpl.cpp:520
(Diff revision 3)
> +      // If negotiation stops a track from receiving (ie; m-section is
> +      // negotiated "sendonly" or "inactive"), we mark the track muted.  We do
> +      // _not_ do the reverse; we need to wait for RTP to unmute according to
> +      // the spec. That happens in MediaPipeline.
> +      nsTArray<RefPtr<dom::MediaStreamTrack>> tracks;
> +      mReceiveStream->GetTracks(tracks);

Ouch. I think `mReceiveTrack` would be better. I'd prefer poking a hole to `MediaStreamTrack::GetInputStream` rather than you having to keep a `DOMMediaStream` here.
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209694

> Can we replace the `DOMMediaStream*` and `TrackID` here with just a `MediaStreamTrack*`?

Is there a way to get the SourceMediaStream from the MediaStreamTrack? If so, that would be fine.

> This check should really happen on the thread that calls `OnRtpReceived()`. Then you can just dispatch  `MutedChanged()` straight to main thread.

None of these variables are safe to touch from STS. It is simpler to dispatch and then check (with an Atomic flag that tells us when we _might_ need to check) than it is to make all of these things threadsafe.
Comment on attachment 8933100 [details]
Bug 1400363 - Part 4: Mute webrtc receive tracks when they are negotiated to stop receiving.

https://reviewboard.mozilla.org/r/204070/#review209704

> I think it would simplify things if this signaled the muted state to MediaPipeline on the thread that receives rtp packets.
> 
> The main thread muted state change should then probably be handled by MediaPipeline as well, to keep state in sync.

I could live with that.

> Ouch. I think `mReceiveTrack` would be better. I'd prefer poking a hole to `MediaStreamTrack::GetInputStream` rather than you having to keep a `DOMMediaStream` here.

That would be really nice to have, and simplify quite a bit of code. Probably a large enough change to merit its own bug, honestly.
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209694

> Is there a way to get the SourceMediaStream from the MediaStreamTrack? If so, that would be fine.

We could expose [1] one way or the other. It's not exposed today because the producer of the track is supposed to know the SourceMediaStream already. However now with certain things (muted state in this case) signaled directly from the producer to the track, I think it makes sense to open this up. Doing it in another bug or here doesn't really matter, as long as it gets dealt with soonish.

[1] https://searchfox.org/mozilla-central/rev/be78e6ea9b10b1f5b2b3b013f01d86e1062abb2b/dom/media/MediaStreamTrack.h#523

> None of these variables are safe to touch from STS. It is simpler to dispatch and then check (with an Atomic flag that tells us when we _might_ need to check) than it is to make all of these things threadsafe.

Would it be enough to make `track_` a `const RefPtr<MediaStreamTrack>` to ensure it doesn't go away?
And to do message passing with mirrored muted state (from signaling) on the STS thread?

If this is the only place setting the muted state on the MediaStreamTrack there's nothing else that can modify it. So you'd never have to read `track_->Muted()` here - maintaining a local copy would be fine.
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209694

> Would it be enough to make `track_` a `const RefPtr<MediaStreamTrack>` to ensure it doesn't go away?
> And to do message passing with mirrored muted state (from signaling) on the STS thread?
> 
> If this is the only place setting the muted state on the MediaStreamTrack there's nothing else that can modify it. So you'd never have to read `track_->Muted()` here - maintaining a local copy would be fine.

track_ needs to be released on main (in EndTrack), otherwise we hit assertions due to the last PipelineListener ref going away on some webrtc.org thread. |listening_| is also updated on main, and I would prefer to avoid races on it.
Comment on attachment 8933098 [details]
Bug 1400363 - Part 2: Unmute webrtc receive tracks when RTP is received.

https://reviewboard.mozilla.org/r/204066/#review209694

> We could expose [1] one way or the other. It's not exposed today because the producer of the track is supposed to know the SourceMediaStream already. However now with certain things (muted state in this case) signaled directly from the producer to the track, I think it makes sense to open this up. Doing it in another bug or here doesn't really matter, as long as it gets dealt with soonish.
> 
> [1] https://searchfox.org/mozilla-central/rev/be78e6ea9b10b1f5b2b3b013f01d86e1062abb2b/dom/media/MediaStreamTrack.h#523

Ok, I have done this in part 0.
Comment on attachment 8933100 [details]
Bug 1400363 - Part 4: Mute webrtc receive tracks when they are negotiated to stop receiving.

https://reviewboard.mozilla.org/r/204070/#review209704

> I could live with that.

I think this is more trouble than it is worth. Dispatching this to STS in order to update a cached value, and then dispatching back to main again to perform the actual muting, seems a bit much to me.
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review209934

Lgtm, just nits and a question.

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:82
(Diff revision 2)
> +  let onmute = track => {
> +    return new Promise(resolve => track.onmute = () => {
> +      if (track.muted) {
> +        resolve();
> +      } else {
> +        ok(false, "onmute fired, but track isn't muted!");
> +        reject();
> +      }
> +    });
> +  };
> +
> +  let onunmute = track => {
> +    return new Promise(resolve => track.onunmute = () => {
> +      if (track.muted) {
> +        ok(false, "onunmute fired, but track is still muted!");
> +        reject();
> +      } else {
> +        resolve();
> +      }
> +    });
> +  };

Agree. Otherwise, please rewrite to have minimal code inside the Promise constructor executor function. E.g.

    await new Promise(resolve => track.onunmute = resolve);
    is(track.muted, false, "onunmute fired, but track is still muted!");

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:82
(Diff revision 2)
>  
>    let negotiationNeeded = pc => {
>      return new Promise(resolve => pc.onnegotiationneeded = resolve);
>    };
>  
> +  let onmute = track => {

nit: onMute and onUnmute

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:794
(Diff revision 2)
>        ]);
>  
>      trickle(pc2, pc1);
>      await pc2.setLocalDescription(answer);
>  
> +    let onunmutePc1 = onunmute(pc1.getTransceivers()[0].receiver.track);

Maybe s/onunmutePc1/haveUnmutePc1/ ?

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:800
(Diff revision 2)
> +
>      await iceConnected(pc1);
>      await iceConnected(pc2);
>  
> +    await onunmutePc1;
> +    await wait(1000);

Why this?
Attachment #8933097 - Flags: review?(jib) → review+
Comment on attachment 8933463 [details]
Bug 1400363 - Part 0: Expose the SourceMediaStream of MediaStreamTrack, and store tracks instead of the source streams.

https://reviewboard.mozilla.org/r/204384/#review210160

Great! Mostly nits to fix.

FWIW I think in the long term we'll need an equivalent of NotifyPull on MediaStreamTrackListener. That way we'll be able to get rid of the direct use of SourceMediaStream here, and the code can be simplified further.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1894
(Diff revision 1)
> -  GenericReceiveListener(SourceMediaStream *source, TrackID track_id)
> -    : source_(source),
> +  GenericReceiveListener(dom::MediaStreamTrack* track, TrackID track_id)
> +    : track_(track),
>        track_id_(track_id),

track_id == track->mInputTrackID.

Maybe we can make `MediaStreamTrack::mInputTrackID` and `MediaStreamTrack::mTrackID` public and const?
That would get rid of `track_id_` too.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1956
(Diff revision 1)
>  
> -      RefPtr<SourceMediaStream> source_;
> +      RefPtr<MediaStreamTrack> track_;
>        const TrackID track_id_;
>      };
>  
> -    source_->GraphImpl()->AppendMessage(MakeUnique<Message>(source_, track_id_));
> +    track_->GraphImpl()->AppendMessage(MakeUnique<Message>(track_, track_id_));

Grab the SourceMediaStream on main thread to avoid passing the MediaStreamTrack to the MSG.

Also, no need to keep `source_`. There's `MediaStream* ControlMessage::GetStream();`.

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1968
(Diff revision 1)
>    {
>      class Message : public ControlMessage
>      {
>      public:
>        Message(GenericReceiveListener* listener,
> -              MediaStream* stream,
> +              dom::MediaStreamTrack* track,

It's ok to pass `nullptr` to ControlMessage if you're not going to use the stream (`ControlMessage::GetStream()` like above).

::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:2142
(Diff revision 1)
>      nsCOMPtr<nsIEventTarget> main_thread,
>      nsCOMPtr<nsIEventTarget> sts_thread,
>      RefPtr<AudioSessionConduit> conduit,
> -    SourceMediaStream* aStream) :
> +    dom::MediaStreamTrack* aTrack) :
>    MediaPipelineReceive(pc, main_thread, sts_thread, conduit),
> -  listener_(aStream ? new PipelineListener(aStream, conduit_) : nullptr)
> +  listener_(aTrack ? new PipelineListener(aTrack, conduit_) : nullptr)

Is it legal to not have a track here?

::: media/webrtc/signaling/src/peerconnection/PeerConnectionImpl.cpp:2326
(Diff revision 1)
> -OwningNonNull<DOMMediaStream>
> -PeerConnectionImpl::CreateReceiveStreamWithTrack(
> +OwningNonNull<dom::MediaStreamTrack>
> +PeerConnectionImpl::CreateReceiveTrack(
>      SdpMediaSection::MediaType type) {
>  
> -  OwningNonNull<DOMMediaStream> stream = MakeMediaStream();
> +  MediaStreamGraph* graph =
> +    MediaStreamGraph::GetInstance(MediaStreamGraph::AUDIO_THREAD_DRIVER, GetWindow());

Note that AUDIO_THREAD_DRIVER is the driver that will be created if there's no existing graph.

This should then be AUDIO_THREAD_DRIVER if we're creating an audio track, or SYSTEM_THREAD_DRIVER if we're creating a video track.
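
For illustration, the selection could look roughly like this (just a sketch; double-check the enum spellings against the tree):

    // Sketch: choose the graph driver based on the kind of receive track being
    // created. The driver type only matters when this call ends up creating the
    // graph; otherwise the existing graph is reused.
    MediaStreamGraph::GraphDriverType driver =
      (type == SdpMediaSection::kAudio)
        ? MediaStreamGraph::AUDIO_THREAD_DRIVER
        : MediaStreamGraph::SYSTEM_THREAD_DRIVER;
    MediaStreamGraph* graph = MediaStreamGraph::GetInstance(driver, GetWindow());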

::: media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.h:120
(Diff revision 1)
> -      DOMMediaStream& aReceiveStream,
> +      dom::MediaStreamTrack& aReceiveTrack,
>        dom::MediaStreamTrack* aSendTrack,

Any reason for the receive and send tracks to be of different types - ref vs ptr?

::: media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.cpp:1144
(Diff revision 1)
> -    DOMMediaStream& aReceiveStream,
> +    dom::MediaStreamTrack& aReceiveTrack,
>      dom::MediaStreamTrack* aSendTrack,

Also here
Attachment #8933463 - Flags: review?(apehrson) → review+
Comment on attachment 8933463 [details]
Bug 1400363 - Part 0: Expose the SourceMediaStream of MediaStreamTrack, and store tracks instead of the source streams.

https://reviewboard.mozilla.org/r/204384/#review210160

> Is it legal to not have a track here?

The mediapipeline_unittest does this :(

> Any reason for the receive and send tracks to be of different types - ref vs ptr?

Transceivers always have a receive track, but might not have a send track. Just enforcing that invariant everywhere.
Comment on attachment 8933463 [details]
Bug 1400363 - Part 0: Expose the SourceMediaStream of MediaStreamTrack, and store tracks instead of the source streams.

https://reviewboard.mozilla.org/r/204384/#review210160

> It's ok to pass `nullptr` to ControlMessage if you're not going to use the stream (`ControlMessage::GetStream()` like above).

Is that something I can count on being true in the future?
Comment on attachment 8933463 [details]
Bug 1400363 - Part 0: Expose the SourceMediaStream of MediaStreamTrack, and store tracks instead of the source streams.

https://reviewboard.mozilla.org/r/204384/#review210160

> Is that something I can count on being true in the future?

Sure. There are already plenty of examples in the tree:
https://searchfox.org/mozilla-central/search?q=symbol:_ZN7mozilla14ControlMessageC1EPNS_11MediaStreamE&redirect=false
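
For illustration, something of this shape (just a sketch; the names are placeholders, not from your patch):

    // Sketch: a ControlMessage that doesn't operate on a particular stream can
    // pass nullptr to the base class, as long as Run() never uses GetStream().
    class Message : public ControlMessage
    {
    public:
      explicit Message(GenericReceiveListener* aListener)
        : ControlMessage(nullptr) // no stream associated with this message
        , listener_(aListener)
      {}

      void Run() override
      {
        // Do the MSG-thread work via listener_; GetStream() must not be called.
      }

    private:
      RefPtr<GenericReceiveListener> listener_;
    };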
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review209934

> nit: onMute and onUnmute

I've renamed to gotMuteEvent and gotUnmuteEvent.

> Why this?

Making sure that onunmute doesn't fire for pc2's receive track. I can add a comment.
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review209688

> We should check that video is black and audio is silent when muted.

It isn't possible to do this on a naked track as far as I know, and checking for frames feels a little out-of-scope for this test-case anyhow.
Comment on attachment 8933463 [details]
Bug 1400363 - Part 0: Expose the SourceMediaStream of MediaStreamTrack, and store tracks instead of the source streams.

https://reviewboard.mozilla.org/r/204384/#review210248


C/C++ static analysis found 1 defect in this patch.

You can run this analysis locally with: `./mach static-analysis check path/to/file.cpp`


::: media/webrtc/signaling/src/mediapipeline/MediaPipeline.cpp:1939
(Diff revision 2)
> -        : ControlMessage(stream),
> +          track_id_(track->GetInputTrackId())
> -          source_(stream),
> -          track_id_(track_id)
>        {}
>  
> +      virtual ~Message() {}

Warning: Use '= default' to define a trivial destructor [clang-tidy: modernize-use-equals-default]

      virtual ~Message() {}
              ^
                         = default;
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review209688

> It isn't possible to do this on a naked track as far as I know, and checking for frames feels a little out-of-scope for this test-case anyhow.

Testing this in other test cases now.
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review210958

IMHO these issues still need fixing before landing this.

::: dom/media/tests/mochitest/test_peerConnection_removeThenAddVideoTrack.html:64
(Diff revision 5)
>                "pcRemote should have two transceivers");
>            const track = test.pcRemote._pc.getTransceivers()[0].receiver.track;
>  
>            const vAdded = test.pcRemote.remoteMediaElements.find(
>                elem => elem.id.includes(track.id));
>            return helper.checkVideoPaused(vAdded, 10, 10, 16, 5000);

This checks that the video is paused while muted, but the spec says explicitly that it should be black [1] (there's certainly ambiguity in there; however, for a media element I interpret it as rendering black).

[1] https://w3c.github.io/mediacapture-main/getusermedia.html#life-cycle-and-media-flow

::: dom/media/tests/mochitest/test_peerConnection_removeThenAddVideoTrack.html:69
(Diff revision 5)
> +    test.chain.insertBefore("PC_REMOTE_SET_LOCAL_DESCRIPTION", [
> +      function PC_REMOTE_SETUP_ONMUTE(test) {
> +        haveMuteEvent = haveEvent(test.pcRemote._pc.getReceivers()[0].track, "mute");
> +      },
> +      function PC_REMOTE_SETUP_ONUNMUTE(test) {
> +        haveUnmuteEvent = haveEvent(test.pcRemote._pc.getReceivers()[1].track, "unmute");
> +      }

This checks that removing a track and adding another works.

A more important (as in where I could see issues happening) case would be where a transceiver goes from receiving to not receiving and back to receiving again. I.e., re-using the receiver.

The transceivers test tests muted events for the re-use case but not that media is flowing, and the other tests don't cover the re-use case at all :/

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:829
(Diff revision 5)
> +    let gotUnmuteAudio1 = gotUnmuteEvent(pc1.getTransceivers()[0].receiver.track);
> +    let gotUnmuteVideo1 = gotUnmuteEvent(pc1.getTransceivers()[1].receiver.track);
> +
> +    let gotUnmuteAudio2 = gotUnmuteEvent(pc2.getTransceivers()[0].receiver.track);
> +    let gotUnmuteVideo2 = gotUnmuteEvent(pc2.getTransceivers()[1].receiver.track);

This is fine and all but also kind of a happy-path test. Would you mind adding checks that we don't get unexpected "mute" or "unmute" events in between the expected ones?

::: dom/media/tests/mochitest/test_peerConnection_transceivers.html:879
(Diff revision 5)
> +    gotUnmuteAudio1 = gotUnmuteEvent(pc1.getTransceivers()[0].receiver.track);
> +    gotUnmuteVideo1 = gotUnmuteEvent(pc1.getTransceivers()[1].receiver.track);
> +    gotUnmuteAudio2 = gotUnmuteEvent(pc2.getTransceivers()[0].receiver.track);
> +    gotUnmuteVideo2 = gotUnmuteEvent(pc2.getTransceivers()[1].receiver.track);
> +    await pc1.setRemoteDescription(answer);
> +    await pc2.setLocalDescription(answer);

What about ordering here? It should take until finishing sRD and sLD before "unmute" can fire, no? Should enforce that by checking muted and setting up the `gotUnmuteEvent` after them?

Same question goes for the other cases in this test.
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review210958

Agree. My review was meant as "in addition to" Andreas's comments.
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review210958

> This checks that the video is paused while muted, but the spec says explicitly that it should be black [1] (there's certainly ambiguity in there; however, for a media element I interpret it as rendering black).
> 
> [1] https://w3c.github.io/mediacapture-main/getusermedia.html#life-cycle-and-media-flow

We have never done this in response to negotiation. That would be a new bug.

> This checks that removing a track and adding another works.
> 
> A more important (as in where I could see issues happening) case would be where a transceiver goes from receiving to not receiving and back to receiving again. I.e., re-using the receiver.
> 
> The transceivers test tests muted events for the re-use case but not that media is flowing, and the other tests don't cover the re-use case at all :/

Then this is a pre-existing problem, and not within scope of this bug. I can open a new one.

> What about ordering here? It should take until finishing sRD and sLD before "unmute" can fire, no? Should enforce that by checking muted and setting up the `gotUnmuteEvent` after them?
> 
> Same question goes for the other cases in this test.

Current spec has onmute firing before resolution of sRD.
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review210958

> We have never done this in response to negotiation. That would be a new bug.

We have also never implemented muted before. Now that we do, it's non-compliant.

> Then this is a pre-existing problem, and not within scope of this bug. I can open a new one.

Current lack of tests doesn't warrant future lack of tests IMHO.

It'll be much easier to write these tests now that you're doing related stuff here, than at some arbitrary time in the future when the edge cases of the API are forgotten.

> Current spec has onmute firing before resolution of sRD.

That's not how I read the spec just now.

sRD does these steps synchronously:
- 2.2.8.2.6. process the removal of a remote track [1]
- 2.2.11. Resolve p with undefined

[1] then does
- 4. If track.muted is false, update the muted state [2] of track with the value true.

[2] then does (in mediacapture-main)
"To update a track's muted state to newState, the User Agent MUST queue a task to run the following steps"


[1] http://w3c.github.io/webrtc-pc/#process-remote-track-removal
[2] http://w3c.github.io/webrtc-pc/#update-track-muted
Comment on attachment 8933097 [details]
Bug 1400363 - Part 1: Test muted/onmuted/onunmuted for webrtc cases.

https://reviewboard.mozilla.org/r/204064/#review210958

> That's not how I read the spec just now.
> 
> sRD does these steps synchronously:
> - 2.2.8.2.6. process the removal of a remote track [1]
> - 2.2.11. Resolve p with undefined
> 
> [1] then does
> - 4. If track.muted is false, update the muted state [2] of track with the value true.
> 
> [2] then does (in mediacapture-main)
> "To update a track's muted state to newState, the User Agent MUST queue a task to run the following steps"
> 
> 
> [1] http://w3c.github.io/webrtc-pc/#process-remote-track-removal
> [2] http://w3c.github.io/webrtc-pc/#update-track-muted

Scratch this. I was looking at the latest draft, which 40 minutes ago didn't include jib's latest PR. Now it does, and there's no task-queueing to fire "muted" in sRD anymore.
Pushed by bcampen@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/be10a74ec95d
Part 0: Expose the SourceMediaStream of MediaStreamTrack, and store tracks instead of the source streams. r=pehrsons
https://hg.mozilla.org/integration/autoland/rev/787ddacd432d
Part 1: Test muted/onmuted/onunmuted for webrtc cases. r=jib
https://hg.mozilla.org/integration/autoland/rev/6cf1b98c8e77
Part 2: Unmute webrtc receive tracks when RTP is received. r=drno
https://hg.mozilla.org/integration/autoland/rev/89067522fb61
Part 3: Start webrtc receive tracks as muted. r=drno
https://hg.mozilla.org/integration/autoland/rev/c8c2e98d91fe
Part 4: Mute webrtc receive tracks when they are negotiated to stop receiving. r=drno