While running 8 simultaneous pc_tests (16 PeerConnections), I hit a failure in the AudioGUM runnable in AppendToTrack because *this was freed memory. (a5) Linux64, debug nightly. More details to come.
This is the AudioGUM runnable:

RUN_ON_THREAD(mThread, WrapRunnable(mSources[i], &SourceMediaStream::AppendToTrack,...

where mSources is declared as nsTArray<SourceMediaStream*> mSources. Normally the SourceMediaStream values in mSources are kept alive by the owning DOMUserMediaStream, but with the AudioGUM dispatch a raw pointer can be accessed on another thread, independent of the MediaEngineWebRTCAudioSource's lifetime. This was introduced with bug 979716, which landed in 33 and was uplifted to 32.

The solution is likely simply to make mSources an nsTArray<nsRefPtr<...>> instead (and, for good measure, the video mSources as well). Traffic on mSources is low. This is probably very hard to exploit (witness how hard I had to stress my Linux box to hit it, with a ridiculous amount of memory traffic occurring), but it is a UAF.
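As a concrete illustration, a minimal sketch of the proposed change follows. This is not the landed patch: only mSources, mThread, RUN_ON_THREAD, WrapRunnable, and SourceMediaStream::AppendToTrack come from this bug; the dispatch-site method name and the AppendToTrack argument list are illustrative, and it only compiles against Gecko-internal headers of that era.

// Sketch only -- not the actual patch.
class MediaEngineWebRTCAudioSource
{
  // Before: raw pointers. The streams were kept alive only by the owning
  // DOMUserMediaStream, which can drop them while a dispatched runnable
  // still points at them -- the UAF described above.
  // nsTArray<SourceMediaStream*> mSources;

  // After: strong references. The array itself now keeps each stream
  // alive for as long as it is listed. Traffic on mSources is low, so
  // the extra AddRef/Release churn is negligible.
  nsTArray<nsRefPtr<SourceMediaStream>> mSources;

  nsCOMPtr<nsIThread> mThread;  // illustrative declaration

  void DeliverAudio()  // illustrative dispatch site
  {
    for (uint32_t i = 0; i < mSources.Length(); ++i) {
      // WrapRunnable copies mSources[i] into the runnable as an
      // nsRefPtr<SourceMediaStream> (see the note below), so each
      // runnable also co-owns its stream once constructed.
      RUN_ON_THREAD(mThread,
                    WrapRunnable(mSources[i],
                                 &SourceMediaStream::AppendToTrack
                                 /* , track id, segment, ... */),
                    NS_DISPATCH_NORMAL);
    }
  }
};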
Note: due to how WrapRunnable constructs dynamic runnables, the first arg becomes an nsRefPtr<SourceMediaStream> member of the runnable, and so holds its own reference.
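For reference, a stripped-down sketch of why that holds. This is not the real WrapRunnable machinery from runnable_utils.h, just the one property relied on above: the target object is captured in an nsRefPtr member, so the runnable AddRefs on construction and releases when it is destroyed after Run().

// Simplified sketch, not the actual WrapRunnable implementation.
template <typename ObjType, typename Method>
class WrapRunnableSketch : public nsRunnable
{
public:
  WrapRunnableSketch(ObjType* aObj, Method aMethod)
    : mObj(aObj)        // AddRef: the runnable now co-owns the object
    , mMethod(aMethod)
  {}

  NS_IMETHOD Run() override
  {
    ((*mObj).*mMethod)();  // invoke the wrapped method on the wrapped object
    return NS_OK;          // mObj is released when the runnable dies
  }

private:
  nsRefPtr<ObjType> mObj;  // strong reference, per the note above
  Method mMethod;
};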
Attachment #8560292 - Flags: review?(roc)
Attachment #8560292 - Flags: review?(roc) → review+
Trivial rebase to Aurora/37
Comment on attachment 8560292 [details] [diff] [review]
mSources update

[Security approval request comment]

How easily could an exploit be constructed based on the patch?
Pretty hard. It only indirectly exposes that something should be refcounted.

Do comments in the patch, the check-in comment, or tests included in the patch paint a bulls-eye on the security problem?
No.

Which older supported branches are affected by this flaw?
See above: 32 and later.

If not all supported branches, which bug introduced the flaw?
Bug 979716, which landed in 33 and was uplifted to 32.

Do you have backports for the affected branches? If not, how different, hard to create, and risky will they be?
Almost no risk; trivial backport (did Aurora and put it here, but it's trivial).

How likely is this patch to cause regressions; how much testing does it need?
Very unlikely to cause regressions.
I could approve this for Trunk and Aurora, but we're about to ship 36. The question is whether this needs to wait until the middle of the next cycle (another three weeks or so), as a sec-high that is present on release branches. Randell, do you have any thoughts? It looks like the overall risk of us zero-daying ourselves is low.
I do think the overall risk from this is low, given what has to be done to provoke it. In addition, the risk of someone figuring it out from the patch is low: they can notice it's a sec bug and that it made something refptr'd, but leveraging that into an exploit would be difficult. On the other hand, the risk of leaving it until later in the cycle is also low. There's little risk to landing later given the patch (the worst case would be a leak, but that's unlikely and easy to find/fix).
Comment on attachment 8560292 [details] [diff] [review]
mSources update

Giving a sec-approval+ for landing on trunk on March 10, then.
Attachment #8560292 - Flags: sec-approval? → sec-approval+
Comment on attachment 8560292 [details] [diff] [review]
mSources update

37 is now Beta. Need this on Beta as well.
Status: NEW → RESOLVED
Last Resolved: 4 years ago
Resolution: --- → FIXED