Closed Bug 824263 Opened 7 years ago Closed 7 years ago

WebRTC use-after-free crash [@mozilla::NrIceMediaStream::~NrIceMediaStream]

Categories

(Core :: WebRTC: Networking, defect, P1, critical)

x86_64
macOS
defect

Tracking


RESOLVED FIXED
mozilla20
Tracking Status
firefox17 --- disabled
firefox18 - disabled
firefox19 - disabled
firefox20 + fixed
firefox21 --- fixed
firefox-esr10 --- unaffected
firefox-esr17 - disabled
b2g18 --- disabled

People

(Reporter: posidron, Assigned: jib)

References

(Blocks 1 open bug)

Details

(Keywords: crash, csectype-uaf, sec-critical, Whiteboard: [WebRTC], [blocking-webrtc+], [nICEr], [qa-][adv-main20-])

Attachments

(2 files, 5 obsolete files)

Attached file callstack
So far this has happened only once in 1600 tests.

alloc: ./media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.cpp:117

  // TODO(ekr@rtfm.com): This is not connected to the PCCimpl.
  // Will need to do that later.
  for (std::size_t i=0; i<mIceStreams.size(); i++) {
    mIceStreams[i]->SignalReady.connect(this, &PeerConnectionMedia::IceStreamReady);
  }


free: ./media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.h:230

  ~PeerConnectionMedia() {
    PR_DestroyLock(mLocalSourceStreamsLock);
  }


re-use: ./media/mtransport/nricemediastream.cpp:98

NrIceMediaStream::~NrIceMediaStream() {
  // We do not need to destroy anything. All major resources
  // are attached to the ice ctx.
}
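The alloc/free/re-use triple above is the classic signal/slot use-after-free: the signal keeps a raw pointer to the receiver, the receiver is destroyed without disconnecting, and the next emit dereferences freed memory. Below is a minimal stand-in sketch of that pattern (NOT the real sigslot library; all names are illustrative), showing the destructor-disconnect discipline that prevents it:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Minimal stand-in for a sigslot-style signal/slot pair. The signal
// stores raw receiver pointers, so a receiver that dies without
// disconnecting leaves a dangling pointer behind, and the next emit()
// is a use-after-free -- the pattern in this crash. Disconnecting in
// the receiver's destructor breaks the link in time.
struct Receiver;

struct Signal {
  std::vector<Receiver*> slots;
  void connect(Receiver* r) { slots.push_back(r); }
  void disconnect(Receiver* r) {
    slots.erase(std::remove(slots.begin(), slots.end(), r), slots.end());
  }
  void emit();
};

struct Receiver {
  Signal* src;
  int calls = 0;
  explicit Receiver(Signal* s) : src(s) { src->connect(this); }
  ~Receiver() { src->disconnect(this); }  // sever the link before dying
  void onReady() { ++calls; }
};

void Signal::emit() {
  for (Receiver* r : slots) r->onReady();
}
```

Without the destructor's disconnect, the second emit() after the receiver's scope closes would touch freed memory, exactly like the NrIceMediaStream re-use site above.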



SEED: 1356187787.1
Fault was detected on test 595


Tested with m-i changeset: 116877:aa42fc5b6396
Keywords: sec-critical
Can you confirm that this happened after bug 822158 landed? It looks like that patch should have fixed it.
Note: this should have the patch from bug 822158 since I landed that last night (and before this changeset)
That's correct; this happened after the patch from bug 822158 landed.
Assignee: nobody → ekr
Priority: -- → P1
Whiteboard: [WebRTC], [blocking-webrtc+]
OK, I see the problem. It's basically the same problem that bug 822158 was intended to resolve, but we missed one spot.

After rereading the documentation, I see that there is a less brittle implementation than this.

Change the .disconnect() calls from bug 822158 to .disconnect_all(). That will sever all the connections without having to track them individually.

jib: can you take this and test?
Assignee: ekr → jib
Comment on attachment 696140 [details] [diff] [review]
Now calls disconnect_all() instead of disconnect() on each signal for better coverage at Shutdown

I'm still having trouble setting up Christoph's fuzzing tool locally to reproduce this problem (something is different with my python setup)...

But here's the patch ekr recommended, at least, so someone else can give it a try if they wish, to see if it cures the reported problem.

The fix seems like an improvement regardless.
Attachment #696140 - Flags: review?(rjesup)
Attachment #696140 - Flags: review?(ekr)
Attachment #696140 - Flags: review?(rjesup) → review+
Comment on attachment 696140 [details] [diff] [review]
Now calls disconnect_all() instead of disconnect() on each signal for better coverage at Shutdown

Review of attachment 696140 [details] [diff] [review]:
-----------------------------------------------------------------

Sorry, what I want is to call disconnect_all() on the receiver object, i.e., this, not on each signal.

The point is that we don't want to receive any signals. What you are doing is the opposite: it makes each sender stop sending any.
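In sigslot terms, the distinction is severing connections one sender at a time versus making the receiver deaf to everything it ever connected to in a single call. A toy model of the two operations (illustrative names, not the real sigslot API):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Toy model: Sig::disconnect_all() silences one sender only, while
// Recv::disconnect_all() severs the receiver from every signal it is
// connected to -- no per-signal bookkeeping, so a signal added later
// cannot be forgotten at shutdown. Names are illustrative.
struct Recv;

struct Sig {
  std::vector<Recv*> slots;
  void connect(Recv* r);
  void disconnect(Recv* r) {
    slots.erase(std::remove(slots.begin(), slots.end(), r), slots.end());
  }
  void disconnect_all() { slots.clear(); }  // sender-side: this signal only
  void emit();
};

struct Recv {
  std::vector<Sig*> sources;  // every signal we are connected to
  int calls = 0;
  void onSignal() { ++calls; }
  void disconnect_all() {     // receiver-side: sever everything at once
    for (Sig* s : sources) s->disconnect(this);
    sources.clear();
  }
};

void Sig::connect(Recv* r) {
  slots.push_back(r);
  r->sources.push_back(this);
}

void Sig::emit() {
  for (Recv* r : slots) r->onSignal();
}
```

The receiver-side call is the less brittle one at shutdown, because correctness no longer depends on enumerating every signal by hand.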
Attachment #696140 - Flags: review?(ekr) → review-
You mean like this?

 void
 PeerConnectionMedia::ShutdownMediaTransport()
 {
-  mIceCtx->SignalCompleted.disconnect_all();
-  mIceCtx->SignalGatheringCompleted.disconnect_all();
+  disconnect_all();
   mTransportFlows.clear();
   mIceStreams.clear();
   mIceCtx = NULL;
 }
yes, that's what I have in mind. Does it work?
Attachment #696140 - Attachment is obsolete: true
It appears NOT to fix the problem. After just two runs, I got what I think is a comparable crash right after disconnect_all() has run, accessing a freed peerconnection instance:

ice_peer_ctx.c:

int nr_ice_peer_ctx_find_component(nr_ice_peer_ctx *pctx, nr_ice_media_stream *str, int component_id, nr_ice_component **compp)
  {
    nr_ice_media_stream *pstr;
    int r,_status;

    pstr=STAILQ_FIRST(&pctx->peer_streams);
    while(pstr){
      if(pstr->local_stream==str)   <-- Thread 8, Socket Thread: EXC_BAD_ACCESS (code=13, address=0x0)
        break;

pctx      nr_ice_peer_ctx * 0x0000000111d18f2c
  label   char *            0x5a5a5a5a5a5a5a5a
  ctx     nr_ice_ctx *      0x5a5a5a5a5a5a5a5a
  handler nr_ice_handler *  0x5a5a5a5a5a5a5a5a

#0	0x0000000104669c45 in nr_ice_peer_ctx_find_component at /Users/Jan/moz/mozilla-central/media/mtransport/third_party/nICEr/src/ice/ice_peer_ctx.c:454
#1	0x0000000104669d2b in nr_ice_peer_ctx_deliver_packet_maybe at /Users/Jan/moz/mozilla-central/media/mtransport/third_party/nICEr/src/ice/ice_peer_ctx.c:488
#2	0x0000000104664d22 in nr_ice_ctx_deliver_packet at /Users/Jan/moz/mozilla-central/media/mtransport/third_party/nICEr/src/ice/ice_ctx.c:462
#3	0x000000010466a5b1 in nr_ice_socket_readable_cb at /Users/Jan/moz/mozilla-central/media/mtransport/third_party/nICEr/src/ice/ice_socket.c:162
#4	0x000000010469fdbd in mozilla::NrSocket::fire_callback(int) at /Users/Jan/moz/mozilla-central/obj-x86_64-apple-darwin12.2.1-debug/media/mtransport/build/nr_socket_prsock.cpp:190
#5	0x000000010469fcf3 in mozilla::NrSocket::OnSocketReady(PRFileDesc*, short) at /Users/Jan/moz/mozilla-central/obj-x86_64-apple-darwin12.2.1-debug/media/mtransport/build/nr_socket_prsock.cpp:124
#6	0x00000001014a2296 in nsSocketTransportService::DoPollIteration(bool) at /Users/Jan/moz/mozilla-central/netwerk/base/src/nsSocketTransportService2.cpp:784
#7	0x00000001014a1c62 in nsSocketTransportService::Run() at /Users/Jan/moz/mozilla-central/netwerk/base/src/nsSocketTransportService2.cpp:641
#8	0x00000001014a269c in non-virtual thunk to nsSocketTransportService::Run() at /Users/Jan/moz/mozilla-central/netwerk/base/src/nsSocketTransportService2.cpp:707
#9	0x0000000103a4e0e4 in nsThread::ProcessNextEvent(bool, bool*) at /Users/Jan/moz/mozilla-central/xpcom/threads/nsThread.cpp:627
#10	0x00000001039b6aef in NS_ProcessNextEvent_P(nsIThread*, bool) at /Users/Jan/moz/mozilla-central/obj-x86_64-apple-darwin12.2.1-debug/xpcom/build/nsThreadUtils.cpp:237
#11	0x0000000103a4cbb7 in nsThread::ThreadFunc(void*) at /Users/Jan/moz/mozilla-central/xpcom/threads/nsThread.cpp:265
#12	0x00000001006386d3 in _pt_root at /Users/Jan/moz/mozilla-central/nsprpub/pr/src/pthreads/ptthread.c:156
#13	0x00007fff8d9f9742 in _pthread_start ()
#14	0x00007fff8d9e6181 in thread_start ()
FYI the main thread at this moment is here:

#0	0x0000000104b15a55 in js::UnwrapObject(JSObject*, bool, unsigned int*) at /Users/Jan/moz/mozilla-central/js/src/jswrapper.cpp:100
#1	0x0000000104b1a4aa in js::NukeCrossCompartmentWrappers(JSContext*, js::CompartmentFilter const&, js::CompartmentFilter const&, js::NukeReferencesToWindow) at /Users/Jan/moz/mozilla-central/js/src/jswrapper.cpp:1035
#2	0x0000000102476fca in WindowDestroyedEvent::Run() at /Users/Jan/moz/mozilla-central/dom/base/nsGlobalWindow.cpp:6988
#3	0x0000000103a4e0e4 in nsThread::ProcessNextEvent(bool, bool*) at /Users/Jan/moz/mozilla-central/xpcom/threads/nsThread.cpp:627
#4	0x00000001039b6aef in NS_ProcessNextEvent_P(nsIThread*, bool) at /Users/Jan/moz/mozilla-central/obj-x86_64-apple-darwin12.2.1-debug/xpcom/build/nsThreadUtils.cpp:237
#5	0x0000000103a4cf5a in nsThread::Dispatch(nsIRunnable*, unsigned int) at /Users/Jan/moz/mozilla-central/xpcom/threads/nsThread.cpp:410
#6	0x000000010149f97f in nsSocketTransportService::Dispatch(nsIRunnable*, unsigned int) at /Users/Jan/moz/mozilla-central/netwerk/base/src/nsSocketTransportService2.cpp:116
#7	0x000000010149f9f5 in non-virtual thunk to nsSocketTransportService::Dispatch(nsIRunnable*, unsigned int) at /Users/Jan/moz/mozilla-central/netwerk/base/src/nsSocketTransportService2.cpp:123
#8	0x0000000104714ac8 in RUN_ON_THREAD at /Users/Jan/moz/mozilla-central/media/webrtc/signaling//../../../media/mtransport/runnable_utils.h:46
#9	0x0000000104714817 in sipcc::PeerConnectionMedia::SelfDestruct() at /Users/Jan/moz/mozilla-central/media/webrtc/signaling/src/peerconnection/PeerConnectionMedia.cpp:221
#10	0x000000010470cf73 in mozilla::runnable_args_m_0<sipcc::PeerConnectionMedia*, void (sipcc::PeerConnectionMedia::*)()>::Run() at /Users/Jan/moz/mozilla-central/media/webrtc/signaling//../../../media/mtransport/runnable_utils_generated.h:48
#11	0x0000000104708811 in RUN_ON_THREAD at /Users/Jan/moz/mozilla-central/media/webrtc/signaling//../../../media/mtransport/runnable_utils.h:49
#12	0x000000010470a533 in sipcc::PeerConnectionImpl::ShutdownMedia(bool) at /Users/Jan/moz/mozilla-central/media/webrtc/signaling/src/peerconnection/PeerConnectionImpl.cpp:962
#13	0x0000000104706a77 in sipcc::PeerConnectionImpl::CloseInt(bool) at /Users/Jan/moz/mozilla-central/media/webrtc/signaling/src/peerconnection/PeerConnectionImpl.cpp:937
#14	0x000000010470a459 in sipcc::PeerConnectionImpl::Close(bool) at /Users/Jan/moz/mozilla-central/media/webrtc/signaling/src/peerconnection/PeerConnectionImpl.cpp:919

Is it an option to tear down the socket thread before we get here?
One thing sticks out in the main thread callstack:

> #9	0x0000000104714817 in sipcc::PeerConnectionMedia::SelfDestruct()
> #10   0x000000010470cf73 in mozilla::runnable_args_m_0<sipcc::PCM*,...>::Run()
> #11	0x0000000104708811 in RUN_ON_THREAD

This is effectively a fancy function call, and doesn't seem to match the expectations expressed in the comments in the code (i.e. I don't think any reentrancy is avoided):

PeerConnectionImpl::ShutdownMedia(bool aIsSynchronous)
{
...
  // Post back to our own thread to shutdown the media objects.
  // This avoids reentrancy issues with the garbage collector.
  // Note that no media calls may be made after this point
  // because we have removed the pointer.
  // For the aIsSynchronous case, we *know* the PeerConnection is
  // still alive, and are shutting it down on network teardown/etc, so
  // recursive GC isn't an issue. (Recursive GC should assert)

  // Forget the reference so that we can transfer it to
  // SelfDestruct().
  RUN_ON_THREAD(mThread, WrapRunnable(mMedia.forget().get(),
                                      &PeerConnectionMedia::SelfDestruct),
                aIsSynchronous ? NS_DISPATCH_SYNC : NS_DISPATCH_NORMAL);
}

And aIsSynchronous=false
Say aIsSynchronous three times (sorry, it's late).
This is a different problem. You'll notice that no signals are involved. This is
purely a problem in nICEr. ABR and I worked through a suggested fix
yesterday and he's going to try to test it today.

(In reply to Jan-Ivar Bruaroey [:jib] from comment #12)
> It appears NOT to fix the problem. After just two runs, I got what I think
> is a comparable crash right after the detach_all() has run, accessing a
> freed peerconnection instance:
> 
> [quoted callstack trimmed; see comment 12 above]
(In reply to Jan-Ivar Bruaroey [:jib] from comment #14)
> One thing sticks out in the main thread callstack:
> 
> [quoted callstack excerpt and ShutdownMedia code trimmed; see comment 14 above]

You're right that this can't be RUN_ON_THREAD because we *always* want to dispatch and
RUN_ON_THREAD tries not to when it doesn't have to. Just change it to be mThread->Dispatch()
and then it should post.

That said, I doubt it will fix the problem you are seeing above.
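The behavioral difference ekr describes can be sketched with a toy event queue (illustrative names, not the real XPCOM nsIThread API): a RUN_ON_THREAD-style call runs the task inline when the caller is already on the target thread, so nothing is deferred and no reentrancy is avoided, while a Dispatch-style call always queues.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <vector>

// Toy single-thread "event loop" (illustrative only). runOnThread()
// mimics RUN_ON_THREAD: when the caller is already on the target
// thread it runs the task inline -- effectively a fancy function call.
// dispatch() always queues, which is what the ShutdownMedia comment
// actually assumes.
struct ToyThread {
  std::queue<std::function<void()>> q;
  bool callerOnThisThread = true;  // pretend the caller is on this thread

  void runOnThread(std::function<void()> f) {
    if (callerOnThisThread) f();   // inline: no deferral at all
    else q.push(std::move(f));
  }
  void dispatch(std::function<void()> f) { q.push(std::move(f)); }
  void drain() {
    while (!q.empty()) { q.front()(); q.pop(); }
  }
};
```

With runOnThread() the task's side effects interleave with the caller immediately; with dispatch() they happen only once the event loop spins again, which is the deferral the code comment was counting on.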
(In reply to Eric Rescorla (:ekr) from comment #16)
> This is a different problem. You'll notice that no signals are involved.
> This is
> purely a problem in nICEr. ABR and I worked through a suggested fix
> yesterday and he's going to try to test it today.

See bug 825212.


> (In reply to Jan-Ivar Bruaroey [:jib] from comment #12)
> > [quoted callstack trimmed; see comment 12 above]
Attachment #696219 - Attachment is obsolete: true
Attachment #696386 - Attachment is obsolete: true
Attachment #696387 - Flags: review?(rjesup)
Attachment #696387 - Flags: review?(ekr)
Forgot one thing. Hold on.
Attachment #696387 - Attachment is obsolete: true
Attachment #696387 - Flags: review?(rjesup)
Attachment #696387 - Flags: review?(ekr)
Attachment #696402 - Flags: review?(rjesup)
Attachment #696402 - Flags: review?(ekr)
Attachment #696402 - Flags: review?(rjesup) → review+
Comment on attachment 696402 [details] [diff] [review]
Shutdown: PeerConnectionMedia disconnect_all() + SelfDestruct-dispatch fix + peer_ctx cleanup

Review of attachment 696402 [details] [diff] [review]:
-----------------------------------------------------------------

lgtm
Attachment #696402 - Flags: review?(ekr) → review+
Hmm. I'm getting a consistent segmentation fault in the signaling unit test. I suspect this patch. More to do; sorry for rushing the review.

[==========] Running 39 tests from 1 test case.
[----------] Global test environment set-up.
[----------] 39 tests from SignalingTest
[ RUN      ] SignalingTest.JustInit
Init Complete
Init Complete
Close
Segmentation fault: 11
Attachment #696402 - Attachment is obsolete: true
Comment on attachment 696424 [details] [diff] [review]
Shutdown: PeerConnectionMedia disconnect_all() + peer_ctx cleanup.

Confirmed. I've now backed out the SelfDestruct-dispatch fix, and the signaling tests pass again. This makes sense: the SelfDestruct-as-dispatch codepath had never been exercised before, so it is effectively a major new codepath, and it apparently has problems.

Since the backed-out fix was never strictly necessary to fix the reported problem, I've carried forward the r+ from rjesup and ekr.
Attachment #696424 - Flags: review+
Comment on attachment 696424 [details] [diff] [review]
Shutdown: PeerConnectionMedia disconnect_all() + peer_ctx cleanup.

[Security approval request comment]
How easily could an exploit be constructed based on the patch?
Difficult, though in theory a use-after-free can be exploited to run attacker code via heap spraying.

Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
No.

Which older supported branches are affected by this flaw?
WebRTC only, behind a pref.

If not all supported branches, which bug introduced the flaw?

Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?

How likely is this patch to cause regressions; how much testing does it need?
Not likely; this was a basic error, and the patch passed all unit tests.
Attachment #696424 - Flags: sec-approval?
Attachment #696424 - Flags: sec-approval? → sec-approval+
Duplicate of this bug: 825212
Attachment #696424 - Flags: checkin?(rjesup)
abr: this needs to be pushed upstream to resiprocate on landing.
Whiteboard: [WebRTC], [blocking-webrtc+] → [WebRTC], [blocking-webrtc+], [nICEr]
Untracking for FF18, since we have already gone to build with our final beta and we don't expect to ship WebRTC enabled in that version of Firefox. Doing the same for FF19, as the feature is disabled there as well, and we would prefer to get good bake time/testing on trunk before the uplift.

Please feel free to renominate if there is a strong reason this needs to be uplifted in FF19, in case I missed anything.
Attachment #696424 - Flags: checkin?(rjesup) → checkin+
https://hg.mozilla.org/mozilla-central/rev/908d47fb422b
Status: NEW → RESOLVED
Closed: 7 years ago
Resolution: --- → FIXED
Whiteboard: [WebRTC], [blocking-webrtc+], [nICEr] → [WebRTC], [blocking-webrtc+], [nICEr], [qa-]
Flags: in-testsuite-
Whiteboard: [WebRTC], [blocking-webrtc+], [nICEr], [qa-] → [WebRTC], [blocking-webrtc+], [nICEr], [qa-][adv-main20-]
Group: core-security