WebRTC: SecurityError: The operation is insecure. from resource://gre/modules/media/PeerConnection.jsm
Categories
(Core :: WebRTC, defect)
People
(Reporter: guest271314, Unassigned)
Details
(Whiteboard: WebRTC: Signaling Audio/Video: MediaStreamGraph Audio/Video: Recording)
User Agent: Mozilla/5.0 (X11; Linux i686; rv:68.0) Gecko/20100101 Firefox/68.0
Steps to reproduce:
- Create 2 local WebRTC RTCPeerConnection instances
- Execute addTrack()
- Within a Promise constructor, attach a "track" event listener to the receiving RTCPeerConnection
- Execute MediaRecorder.start()
- Resolve the Promise
Actual results:
Code execution is halted at
SecurityError: The operation is insecure. sIXPljF0hXZZIEJb:227
mediaStreamTrackPromise https://run.plnkr.co/sIXPljF0hXZZIEJb/:227
dispatchEvent resource://gre/modules/media/PeerConnection.jsm:707
_processTrackAdditionsAndRemovals resource://gre/modules/media/PeerConnection.jsm:1324
onSetRemoteDescriptionSuccess resource://gre/modules/media/PeerConnection.jsm:1661
haveSetRemote resource://gre/modules/media/PeerConnection.jsm:1032
haveSetRemote resource://gre/modules/media/PeerConnection.jsm:1029
AsyncFunctionNext self-hosted:839
Expected results:
No errors.
Reporter — Comment 1 • 6 years ago
Chromium 72 outputs the expected result.
plnkr: https://plnkr.co/edit/l0RoRH?p=preview
<!DOCTYPE html>
<html>
<head>
<title>Record media fragments to single webm video using AudioContext.createMediaStreamDestination(), canvas.captureStream(), PeerConnection(), RTCRtpSender.replaceTrack(), MediaRecorder()</title>
<!--
<script src="https://cdnjs.cloudflare.com/ajax/libs/webrtc-adapter/6.4.0/adapter.min.js"></script>
-->
<!--
Without using adapter.js at Chromium if {once: true} is not used at "icecandidate" event
Uncaught (in promise) TypeError: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Candidate missing values for both sdpMid and sdpMLineIndex
at RTCPeerConnection.
Uncaught (in promise) TypeError: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Candidate missing values for both sdpMid and sdpMLineIndex
at RTCPeerConnection.
-->
<!-- Without using adapter.js at Firefox
SecurityError: The operation is insecure. debugger eval code:152
mediaStreamTrackPromise debugger eval code:152
dispatchEvent resource://gre/modules/media/PeerConnection.jsm:707
_processTrackAdditionsAndRemovals resource://gre/modules/media/PeerConnection.jsm:1324
onSetRemoteDescriptionSuccess resource://gre/modules/media/PeerConnection.jsm:1661
haveSetRemote resource://gre/modules/media/PeerConnection.jsm:1032
haveSetRemote resource://gre/modules/media/PeerConnection.jsm:1029
AsyncFunctionNext self-hosted:839
at `resolve()`
-->
<!--
With using adapter.js at Firefox even with {once: true} set at "icecandidate" event
from
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:114:21
from
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:114:21
from
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:114:21
from
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:114:21
from
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:114:21
from
icecandidate { target: RTCPeerConnection, isTrusted: true, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, composed: false, … }
Tc5OsSypnNbxzidJ:114:21
to
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:123:21
to
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:123:21
to
icecandidate { target: RTCPeerConnection, isTrusted: true, candidate: RTCIceCandidate, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, … }
Tc5OsSypnNbxzidJ:123:21
to
icecandidate { target: RTCPeerConnection, isTrusted: true, srcElement: RTCPeerConnection, currentTarget: RTCPeerConnection, eventPhase: 2, bubbles: false, cancelable: false, returnValue: true, defaultPrevented: false, composed: false, … }
Tc5OsSypnNbxzidJ:123:21
-->
</head>
<body>
<h1 id="click">click</h1>
<video id="video" src="" controls="true" autoplay="true"></video>
<video id="playlist" src="" controls="true" muted="true"></video>
<script>
const captureStream = mediaElement =>
!!mediaElement.mozCaptureStream ? mediaElement.mozCaptureStream() : mediaElement.captureStream();
const width = 320;
const height = 240;
const videoConstraints = {
frameRate: 60,
resizeMode: "crop-and-scale",
width,
height
};
const blobURLS = [];
const urls = Promise.all([{
src: "https://upload.wikimedia.org/wikipedia/commons/a/a4/Xacti-AC8EX-Sample_video-001.ogv",
from: 0,
to: 4
}, {
from: 10,
to: 20,
src: "https://mirrors.creativecommons.org/movingimages/webm/ScienceCommonsJesseDylan_240p.webm#t=10,20"
}, {
from: 55,
to: 60,
src: "https://nickdesaulniers.github.io/netfix/demo/frag_bunny.mp4"
}, {
from: 0,
to: 5,
src: "https://raw.githubusercontent.com/w3c/web-platform-tests/master/media-source/mp4/test.mp4"
}, {
from: 0,
to: 5,
src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerBlazes.mp4"
}, {
from: 0,
to: 5,
src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerJoyrides.mp4"
}, {
from: 0,
to: 6,
src: "https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/ForBiggerMeltdowns.mp4#t=0,6"
}].map(async({
from,
to,
src
}) => {
try {
const request = await fetch(src);
const blob = await request.blob();
const blobURL = URL.createObjectURL(blob);
blobURLS.push(blobURL);
return `${blobURL}#t=${from},${to}`;
} catch (e) {
throw e;
}
}));
const playlist = document.getElementById("playlist");
playlist.width = width;
playlist.height = height;
const video = document.getElementById("video");
video.width = width;
video.height = height;
const canvas = document.createElement("canvas");
const ctx = canvas.getContext("2d");
canvas.width = width;
canvas.height = height;
let recorder;
let resolveResult;
const promiseResult = new Promise(resolve => resolveResult = resolve);
document.getElementById("click")
.onclick = e =>
(async() => {
try {
// create MediaStream, audio and video MediaStreamTrack
const audioContext = new AudioContext();
const audioContextDestination = audioContext.createMediaStreamDestination();
let mediaStream = audioContextDestination.stream;
const [audioTrack] = mediaStream.getAudioTracks();
const [videoTrack] = canvas.captureStream().getVideoTracks();
videoTrack.applyConstraints(videoConstraints);
mediaStream.addTrack(videoTrack);
console.log("initial MediaStream, audio and video MediaStreamTracks", mediaStream, mediaStream.getTracks());
let tracks = 0;
const fromLocalPeerConnection = new RTCPeerConnection();
const toLocalPeerConnection = new RTCPeerConnection();
fromLocalPeerConnection.addEventListener("icecandidate", async e => {
console.log("from", e);
try {
await toLocalPeerConnection.addIceCandidate(e.candidate ? e.candidate : null);
} catch (e) {
console.error(e);
}
}, {
once: true
});
toLocalPeerConnection.addEventListener("icecandidate", async e => {
console.log("to", e);
try {
await fromLocalPeerConnection.addIceCandidate(e.candidate ? e.candidate : null);
} catch (e) {
console.error(e);
}
}, {
once: true
});
fromLocalPeerConnection.addEventListener("negotiationneeded", e => {
console.log(e);
});
toLocalPeerConnection.addEventListener("negotiationneeded", e => {
console.log(e);
});
const mediaStreamTrackPromise = new Promise(resolve => {
toLocalPeerConnection.addEventListener("track", track => {
console.log("track event", track);
const {
streams: [stream]
} = track;
console.log(tracks, stream.getTracks().length);
// Wait for both "track" events
if (typeof tracks === "number" && ++tracks === 2) {
console.log(stream);
// Reassign stream to initial MediaStream reference
//mediaStream = stream;
// set video srcObject to reassigned MediaStream
video.srcObject = stream;
tracks = void 0;
let result;
recorder = new MediaRecorder(stream, {
mimeType: "video/webm;codecs=vp8,opus",
audioBitsPerSecond: 128000,
videoBitsPerSecond: 2500000
});
recorder.addEventListener("start", e => {
console.log(e);
});
recorder.addEventListener("stop", e => {
console.log(e);
resolveResult(result);
});
recorder.addEventListener("dataavailable", e => {
console.log(e);
result = e.data;
});
recorder.start();
resolve();
}
});
});
// Add initial audio and video MediaStreamTrack to PeerConnection, pass initial MediaStream
const audioSender = fromLocalPeerConnection.addTrack(audioTrack, mediaStream);
const videoSender = fromLocalPeerConnection.addTrack(videoTrack, mediaStream);
const offer = await fromLocalPeerConnection.createOffer();
await toLocalPeerConnection.setRemoteDescription(offer);
await fromLocalPeerConnection.setLocalDescription(toLocalPeerConnection.remoteDescription);
const answer = await toLocalPeerConnection.createAnswer();
await fromLocalPeerConnection.setRemoteDescription(answer);
await toLocalPeerConnection.setLocalDescription(fromLocalPeerConnection.remoteDescription);
const media = await urls;
await mediaStreamTrackPromise;
console.log(audioSender, videoSender, mediaStream);
for (const blobURL of media) {
await new Promise(async resolve => {
playlist.addEventListener("canplay", async e => {
console.log(e);
await playlist.play();
const stream = captureStream(playlist);
const [playlistVideoTrack] = stream.getVideoTracks();
const [playlistAudioTrack] = stream.getAudioTracks();
// Apply same constraints on each video MediaStreamTrack
playlistVideoTrack.applyConstraints(videoConstraints);
// Replace audio and video MediaStreamTrack with a new media resource
await videoSender.replaceTrack(playlistVideoTrack);
await audioSender.replaceTrack(playlistAudioTrack);
console.log(recorder.state, recorder.stream.getTracks());
}, {
once: true
});
playlist.addEventListener("pause", async e => {
// await audioSender.replaceTrack(audioTrack);
// await videoSender.replaceTrack(videoTrack);
resolve();
}, {
once: true
});
playlist.src = blobURL;
});
}
recorder.stop();
blobURLS.forEach(blobURL => URL.revokeObjectURL(blobURL));
mediaStream.getTracks().forEach(track => track.stop());
[audioTrack, videoTrack].forEach(track => track.stop());
fromLocalPeerConnection.close();
toLocalPeerConnection.close();
return await promiseResult;
} catch (e) {
throw e;
}
})()
.then(blob => {
console.log(blob);
video.remove();
playlist.remove();
const videoStream = document.createElement("video");
videoStream.width = width;
videoStream.height = height;
videoStream.controls = true;
document.body.appendChild(videoStream);
videoStream.src = URL.createObjectURL(blob);
})
.catch(console.error);
</script>
</body>
</html>
Reporter — Updated • 6 years ago
Reporter — Comment 2 • 6 years ago
The issue appears to be that the MediaStreamTrack label properties are set to "remote audio" and "remote video". The error is evidently caused by executing the MediaRecorder start() method.

inactive
(2) […]
0: MediaStreamTrack { kind: "audio", id: "{60c51af0-8180-4e12-9b37-ec330da7c6ff}", label: "remote audio", … }
1: MediaStreamTrack { kind: "video", id: "{c8bc3533-3808-4f7c-89bd-4530b7e7ec24}", label: "remote video", … }
How to avoid the error?
Reporter — Comment 3 • 6 years ago
The result is described in the WebRTC specification:
5.3 RTCRtpReceiver Interface https://w3c.github.io/webrtc-pc/#rtcrtpreceiver-interface
- Initialize track.label to the result of concatenating the string "remote " with kind.
Updated • 6 years ago
Comment 4 • 6 years ago
This sounds like bug 1212237. Byron, is the analysis there still correct? How should we handle this per spec?
Comment 5 • 6 years ago
This looks like the same thing, yes. It looks like the identity spec adds an |isolated| attribute and an |onisolationchange| EventHandler to MediaStreamTrack, which would be how JS could tell whether it could do this sort of capture:
Updated • 6 years ago
Reporter — Comment 7 • 6 years ago
Has the DTLS connection event described in the linked bug been implemented? What is the current specification compliant approach or workaround?
Comment 8 • 6 years ago
In order to work around, you could wait for the remote track to unmute. The unmute event (on the track) fires when the first media packet arrives for that track, which means that DTLS is established.
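A minimal sketch of this workaround, assuming the MediaStream received from the "track" event in the testcase above; the helper name `waitForUnmutedTracks` is hypothetical, while `MediaStreamTrack.muted` and the "unmute" event are standard:

```javascript
// Resolve once every track in the stream has unmuted, i.e. once media
// packets have arrived for each track (so DTLS is established).
function waitForUnmutedTracks(stream) {
  return Promise.all(
    stream.getTracks().map(track =>
      track.muted
        ? new Promise(resolve =>
            track.addEventListener("unmute", resolve, { once: true }))
        : Promise.resolve()
    )
  );
}

// Usage sketch: gate MediaRecorder.start() on the tracks being live.
// await waitForUnmutedTracks(stream);
// recorder = new MediaRecorder(stream, options);
// recorder.start();
```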
Reporter — Comment 9 • 6 years ago
(Re comment #8) From the code that I have tried, once labeled "remote", the MediaStreamTrack does not unmute.
Reporter — Comment 10 • 6 years ago
(Re comment #8) Was able to create a version for Firefox following the suggestion to await media packets before calling MediaRecorder() and executing start(), which has several issues: the playback of the MediaStreamTracks appears to have a reduced playback rate, and only the first video track is recorded. https://bugzilla.mozilla.org/show_bug.cgi?id=1544234
Reporter — Comment 11 • 6 years ago
(In reply to Byron Campen [:bwc] from comment #8)
> In order to work around, you could wait for the remote track to unmute. The unmute event (on the track) fires when the first media packet arrives for that track, which means that DTLS is established.

The handler at

toLocalPeerConnection.addEventListener("track", track => {
  track.onunmute = e => console.log(e);
});

is not dispatched.
Was able to compose a version which outputs a similar result at Firefox 68 and Chromium 73: https://bugzilla.mozilla.org/show_bug.cgi?id=1544234#c4.

Note that to achieve recording the entire fragment at Firefox 68, relevant to "the first media packet arrives for that track", the code currently copies and plays back 0.2 seconds of the first Blob URL in the array, catches a SecurityError, then plays back the complete initial first media fragment.
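That priming step appears later in the thread as media.unshift(media[0].replace(/#.+$/, "#t=0,0.2")). Factored as a helper, it could look like this sketch; the function name and default duration are illustrative, and it assumes each entry already carries a #t=from,to media-fragment suffix as in the testcase:

```javascript
// Prepend a short copy of the first Blob URL; playing it first lets the
// first media packets arrive (and the SecurityError be caught) before
// the complete first media fragment is played and recorded.
function primeWithShortClip(media, seconds = 0.2) {
  return [media[0].replace(/#.+$/, `#t=0,${seconds}`), ...media];
}
```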
Comment 12 • 6 years ago
(In reply to guest271314 from comment #11)
> The handler at
>
> toLocalPeerConnection.addEventListener("track", track => {
>   track.onunmute = e => console.log(e);
> });
>
> is not dispatched.
Your event listener for the "track" event shouldn't take a bare track, but instead one of these:
Reporter — Comment 13 • 6 years ago
(In reply to Byron Campen [:bwc] from comment #12)
> Your event listener for the "track" event shouldn't take a bare track, but instead one of these:

Do you mean the code should be

toLocalPeerConnection.addEventListener("track", track => {
  track.track.onunmute = e => console.log(e);
});

which does dispatch the unmute event?

Using `track.track.onunmute` instead of `track.onunmute` with Promise.all() still does not record the first video.
Comment 14 • 6 years ago
(In reply to guest271314 from comment #13)
> Do you mean the code should be
>
> toLocalPeerConnection.addEventListener("track", track => {
>   track.track.onunmute = e => console.log(e);
> });

I would call your "track" param something like "event", but essentially yes.

> which does dispatch the unmute event?
> Using `track.track.onunmute` instead of `track.onunmute` with Promise.all() still does not record the first video.

Are you saying you still get a SecurityError after all the tracks have fired an "unmute" event? Or is there some other error? It might be helpful if you could give us a link to a test case that reproduces the problem (e.g. on jsfiddle), or give us access to the app you're developing.
Reporter — Comment 15 • 6 years ago
(In reply to Byron Campen [:bwc] from comment #14)
> Are you saying you still get a SecurityError after all the tracks have fired an "unmute" event? Or is there some other error? It might be helpful if you could give us a link to a test case that reproduces the problem (e.g. on jsfiddle), or give us access to the app you're developing.

Adjusted the param to "event" at the "track" event handler. Will update the param name at the GitHub repository within a day or so.

Yes, the SecurityError is still dispatched, though caught and handled. Test case plnkr: https://plnkr.co/edit/Ro1MGAxNfrs7qN160DUc?p=preview. The "unmute" event being dispatched has no effect on the expected result. Without the workaround at line 169, when the recording is completed only the first frame of the first recorded video is displayed at the resulting webm video playback; playback then immediately proceeds to the second recorded video. To reproduce what is described above, comment out line 169, media.unshift(media[0].replace(/#.+$/, "#t=0,0.2"));.

The code is accessible at https://github.com/guest271314/MediaFragmentRecorder. There are currently 5 branches where the requirement is the same for each branch. Each branch has its own issues at Chromium and Firefox, save for "canvas-webaudio", where the only issue is the requestFrame difference between Chromium and Firefox. The branch that this bug references is "webrtc-replacetrack".
Reporter — Comment 16 • 6 years ago
Found another bug when trying to create an initial audio and video MediaStream to play at the <video> element, instead of playing 0.2 seconds of the first video then playing and recording the array of media resources.

When using AudioContext.createMediaStreamDestination() with MediaRecorder to record 0.5 seconds of an OscillatorNode, and returning a Blob URL which is set as the first of more than one media resource to play in sequence at the <video> element, the audio output of the remainder of the media resources at the <video> element is muted.

The resulting recording of more than one media resource is not muted except for the last 1 second, having the same output as https://bugzilla.mozilla.org/show_bug.cgi?id=1539186. Will file a separate bug report for that issue. Awaiting the unmute event of each MediaStreamTrack does not appear to have any effect on the procedure or result.
Comment 17 • 6 years ago
I can confirm that the testcase tries to record an active stream with two unmuted tracks but still gets thrown a SecurityError in MediaRecorder::Start.
Byron, could you take a look?
Also a thought -- when updating the principal for a MediaStreamTrack on main thread we need to wait for media chunks tagged with the same principal to come through on the MSG thread before the new principal comes into effect, [1]. Do we do this before firing unmute?
Comment 18 • 6 years ago
If it takes a while for this to take effect, then yes I can see unmute firing before it is done, which would make unmute insufficient as a workaround. There's nothing in the spec that says what the ordering should be, it is just surprising that it takes that long for the new principal to take effect. I think the only thing we can do here is fix bug 1475360. I guess you could work around by repeatedly trying until it works, but that's really lame.
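The "repeatedly trying until it works" workaround mentioned here could look something like this sketch; `startWhenAllowed` and its parameters are made up for illustration, and only MediaRecorder.start() throwing a SecurityError is taken from this thread:

```javascript
// Retry workaround: keep calling start() until the SecurityError stops
// being thrown, giving up after a bounded number of attempts.
function startWhenAllowed(start, { retries = 50, delayMs = 100 } = {}) {
  return new Promise((resolve, reject) => {
    const attempt = remaining => {
      try {
        start(); // e.g. () => recorder.start()
        resolve();
      } catch (e) {
        if (e.name === "SecurityError" && remaining > 0) {
          setTimeout(() => attempt(remaining - 1), delayMs);
        } else {
          reject(e);
        }
      }
    };
    attempt(retries);
  });
}
```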
Updated • 6 years ago
Reporter — Comment 20 • 6 years ago
(In reply to Byron Campen [:bwc] from comment #18)
> If it takes a while for this to take effect, then yes I can see unmute firing before it is done, which would make unmute insufficient as a workaround. There's nothing in the spec that says what the ordering should be, it is just surprising that it takes that long for the new principal to take effect. I think the only thing we can do here is fix bug 1475360. I guess you could work around by repeatedly trying until it works, but that's really lame.

The unmute event has no observable effect on the procedure. There is nothing which appears to take effect relevant to the MediaStreamTracks being labeled "remote".

Why are the MediaStreamTracks labeled "remote" in any event where both RTCPeerConnections originate from the same origin and scope? The error appears to be generated by resource://gre/modules/media/PeerConnection.jsm. What does that JavaScript do which is essential to WebRTC working? How can that code be turned off?

What is the fix for the linked bug, which was reported 4 years ago?
Reporter — Comment 21 • 6 years ago
Why is this bug marked "FIXED"? The bug may be a duplicate of a 4 year old bug that has not been fixed. The "FIXED" label is not accurate, unless "FIXED" means something different from the plain meaning of the term at bugzilla.mozilla.org?
Comment 22 • 6 years ago
(In reply to guest271314 from comment #20)
> The unmute event has no observable effect on the procedure. There is nothing which appears to take effect relevant to the MediaStreamTracks being labeled "remote".

"unmute" means that media packets (audio or video) are being received, which can only happen after DTLS is established, which is when we know for sure it is safe to allow the track to be captured (see below). Unfortunately, due to queuing/processing delays, that knowledge (that capture is ok) takes a while to sink in; apparently long enough that the "unmute" signal can make it to JS first. This is why this workaround isn't helping you.

> Why are the MediaStreamTracks labeled "remote" in any event where both RTCPeerConnections originate from the same origin and scope? The error appears to be generated by resource://gre/modules/media/PeerConnection.jsm. What does that JavaScript do which is essential to WebRTC working? How can that code be turned off?

RTCPeerConnection is fundamentally intended to allow two different browsers (or a browser and some other type of media endpoint, like a conferencing server) to exchange media in a peer-to-peer fashion. In that context, a "remote" track is media being sent to the browser from the other endpoint (be it browser, or media server, or what-have-you). By default, a browser is not permitted to capture "remote" media; the sender of the media has to opt-out of this privacy mechanism during the DTLS handshake. This DTLS handshake occurs before media is transmitted, and in most cases the sender does opt-out (including your case), but because the remote stream object exists long before any of this handshake even starts, there needs to be some way of communicating to JS that the opt-out has happened, and that the browser won't prevent capture. That's what the isolated attribute and isolationchange event (over at bug 1475360) are for. isolated == false means it is ok to capture, isolated == true means it isn't.

> What is the fix for the linked bug, which was reported 4 years ago?

Someone needs to implement the internal logic to update that flag and fire events when it is appropriate.
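Assuming the `isolated` attribute and "isolationchange" event were implemented as described (they are specified but, per this thread and bug 1475360, not implemented in Firefox), a spec-compliant gate before capturing could look like this sketch; `whenCaptureAllowed` is a hypothetical helper, not an existing API:

```javascript
// Sketch only: resolves once track.isolated is false, i.e. once the
// browser signals that capturing this remote track is permitted.
// The isolated attribute / "isolationchange" event are from the spec
// and are not available in this Firefox version.
function whenCaptureAllowed(track) {
  if (track.isolated === false) return Promise.resolve();
  return new Promise(resolve => {
    const handler = () => {
      if (track.isolated === false) {
        track.removeEventListener("isolationchange", handler);
        resolve();
      }
    };
    track.addEventListener("isolationchange", handler);
  });
}
```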
Comment 23 • 6 years ago
(In reply to guest271314 from comment #21)
> Why is this bug marked "FIXED"? The bug may be a duplicate of a 4 year old bug that has not been fixed.
It is marked as a duplicate again; the transition through "fixed" was operator error.
Reporter — Comment 24 • 6 years ago
(In reply to Byron Campen [:bwc] from comment #22)
> RTCPeerConnection is fundamentally intended to allow two different browsers (or a browser and some other type of media endpoint, like a conferencing server) to exchange media in a peer-to-peer fashion. In that context, a "remote" track is media being sent to the browser from the other endpoint (be it browser, or media server, or what-have-you). By default, a browser is not permitted to capture "remote" media; the sender of the media has to opt-out of this privacy mechanism during the DTLS handshake. [...] That's what the isolated attribute and isolationchange event (over at bug 1475360) are for. isolated == false means it is ok to capture, isolated == true means it isn't.
>
> Someone needs to implement the internal logic to update that flag and fire events when it is appropriate.
The sole reason that RTCPeerConnection was utilized for this use case is RTCRtpSender.replaceTrack(), specifically the language in the specification, "multiple sources of media stitched together" (https://github.com/w3c/webrtc-pc/issues/2171), given that MediaRecorder does not have a means to replace MediaStream tracks (see https://github.com/w3c/mediacapture-record/issues/4, et al.). What is necessary to use the code which implements RTCRtpSender.replaceTrack() with any MediaStream/MediaStreamTrack, without explicitly or implicitly using RTCPeerConnection()?