Reported ICE negotiation errors in nightly in Hubs
Categories: Core :: WebRTC: Networking, defect, P2

Tracking | Status
---|---
firefox73 | wontfix

People: (Reporter: gfodor, Unassigned)

Attachments (1 file): 1.42 MB, text/html
Hi there, we had a team member using Hubs (hubs.mozilla.com) who ran into a series of ICE negotiation failures. Switching to stable Firefox resolved the issue. The ICE failures happened continually, and our client code is configured to reconnect. There were no novel changes to her network, so it seems possible this is a regression in the Nightly WebRTC stack.
I've attached the webrtc log from Firefox. (Sorry, it is quite long.) The most striking message in the log, however, seems to be:
(generic/EMERG) Error in sendto IP4:209.85.232.127:19302/UDP: -5980
I am not familiar with what this could mean but wanted to highlight it as a potential proximate cause. Thank you!
Comment 1•5 years ago
Hi Greg, the main change in the nightly ICE stack is the mDNS hostname obfuscation work that I've been looking at. If this is a repeatable problem for your team member, it would be great if they could retry Nightly with media.peerconnection.ice.obfuscate_host_addresses set to false and see if that fixes things. Otherwise, we'll dig into the log file. Thanks!
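(For anyone retesting: besides flipping the pref in about:config, it can also be set persistently with a user.js line in the profile directory. The pref name is the one suggested above; the user.js mechanism is standard Firefox, but treat this fragment as an illustrative sketch.)

```javascript
// Firefox user.js fragment: disable mDNS obfuscation of host ICE candidates.
// Pref name taken from the suggestion above; false restores raw host addresses.
user_pref("media.peerconnection.ice.obfuscate_host_addresses", false);
```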
Reporter
Comment 2•5 years ago
Thanks Dan -- will do. Is there any other relevant info you can share as to what might cause that EMERG error? I've never seen that one before.
Comment 3•5 years ago
I'm not sure about that one. Byron, what do you think?
Comment 4•5 years ago
So that error comes from the sendto call in the ICE stack. It seems like Firefox cannot send to that remote candidate.
However, I see a candidate pair that succeeded, but was never nominated. Since Firefox is the offerer in this case, the remote end is in charge of nomination, so maybe the remote end is not getting responses to its checks?
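(The "succeeded but never nominated" situation described above can be checked programmatically. A minimal sketch, assuming an array of records shaped like the spec's RTCIceCandidatePairStats, with `type`, `state`, and `nominated` fields; in a browser you would collect these from `RTCPeerConnection.getStats()`.)

```javascript
// Sketch: find candidate pairs that reached "succeeded" but were never
// nominated. Input is an array of stats records shaped like
// RTCIceCandidatePairStats (type, state, nominated).
function succeededButNotNominated(statsRecords) {
  return statsRecords.filter(
    (r) => r.type === "candidate-pair" && r.state === "succeeded" && !r.nominated
  );
}
```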
Updated•5 years ago
Reporter
Comment 5•5 years ago
I should mention that I believe ICE negotiation succeeds for a period and then fails, repeatedly. I didn't dig into the log to confirm, but the user was able to reconnect successfully each time ICE failed, and then would subsequently disconnect and reconnect again. (Our client reconnects to our SFU whenever an ICE error occurs.)
Comment 6•5 years ago
So media packets flow, but then stop?
Reporter
Comment 7•5 years ago
Media packets flow, then there is an "ICE Failed" in the log, and the client software reconnects. I don't know whether media packets continued to flow between the ICE failure and the reconnect sequence. However, the person could still be heard after the reconnect finished. (And the cycle repeated.)
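(The reconnect-on-failure cycle described here boils down to a small handler. A hypothetical sketch; `reconnectToSFU` is a stand-in for the client's actual reconnect path, not Hubs' real API.)

```javascript
// Sketch of the client-side cycle described above: when the peer
// connection's ICE state reports "failed", trigger a reconnect to the SFU.
// `pc` is an RTCPeerConnection (or anything exposing the same fields);
// `reconnectToSFU` is an illustrative callback.
function watchIceState(pc, reconnectToSFU) {
  pc.oniceconnectionstatechange = () => {
    if (pc.iceConnectionState === "failed") {
      reconnectToSFU();
    }
  };
}
```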
Comment 8•5 years ago
Is this something that is reproducible? If so, a wireshark capture might shed some light on this, as would running the browser with R_LOG_LEVEL=5 and R_LOG_DESTINATION=stderr as environment variables.
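(For reference, the env-var invocation suggested above would look roughly like this; the variable names are from the comment, while the binary path and log filename are just examples.)

```shell
# Launch Firefox with verbose ICE (nICEr) logging sent to stderr,
# captured to a file for later inspection.
R_LOG_LEVEL=5 R_LOG_DESTINATION=stderr ./firefox 2> ice-debug.log
```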
Reporter
Comment 9•5 years ago
Not currently reproducible I'm afraid, but I will make a note and if it happens again we can try to debug with those options.
Comment 10•5 years ago
I can reproduce this issue on Windows 10 with the latest Nightly. I tried setting media.peerconnection.ice.obfuscate_host_addresses to false, but got the same result.
Reporter
Comment 11•5 years ago
Ah yes, this looks like a further regression. Easily reproducible in Nightly: visit https://hubs.mozilla.com/Hp5R6jD/easygoing-sturdy-gala
Reporter
Comment 12•5 years ago
Apologies, wrong needinfo.
Comment 13•5 years ago
I think you're probably hitting https://bugzilla.mozilla.org/show_bug.cgi?id=1617704#c2. We recently bumped the minimum version of DTLS that we support, and it appears that hubs does not support it.
Comment 14•5 years ago
It is still working with Chrome Canary, and I thought they were removing support for older versions of DTLS at the same time as us.
Reporter
Comment 15•5 years ago
Good news: we bumped DTLS to 1.2 via this commit: https://github.com/meetecho/janus-gateway/commit/435a1e91f1661e99d7a78c7953adfeedd95b66e3
Issue resolved. (Thankfully, this didn't require a full WebRTC stack upgrade; we got lucky. :))
Updated•5 years ago