Closed Bug 1626081 Opened 7 months ago Closed 5 months ago

Granting microphone permission kills audio.

Categories

(Core :: Audio/Video: MediaStreamGraph, defect)

Unspecified
All
defect

Tracking


RESOLVED FIXED
mozilla78
Tracking Status
firefox78 --- fixed

People

(Reporter: rbarker, Assigned: padenot, NeedInfo)

References

Details

(Whiteboard: [fxr:p1][geckoview:m77])

Attachments

(5 files, 1 obsolete file)

In GVE visit https://hubs.mozilla.com/SxkhBbN/audio-test
Grant microphone permission.

Expected:
Audio continues to play

Actual:
Audio is muted.

Whiteboard: [fxr:p1]

This breaks Hubs in FxR.

Summary: Granting microphone permission kill audio. → Granting microphone permission kills audio.

Can someone take a look at this please? FxR releases are blocked until this is resolved.

Component: General → Audio/Video: Playback
Flags: needinfo?(bvandyk)
Product: GeckoView → Core
Whiteboard: [fxr:p1] → [fxr:p1][geckoview:m77]

Please provide more info about this; we don't know how to reproduce it. Is this something one of us could reproduce with a phone? What is GVE? Is audio muted, or is it not processing at all?

Every other website out there works with the microphone, as do Firefox Preview and Firefox Preview Nightly, so it must be something specific to the hardware, the way GeckoView is used, or the website. Is this a regression? If yes, do we have a range? What revision of GeckoView are you tracking in this project?

In any case, logs are a good start, MOZ_LOG=MediaManager:4,cubeb:4,MediaTrackGraph:4 would be helpful, and a logcat, while this bug is being reproduced. We're in #media on matrix if chatting real-time would be helpful (same nick).

Component: Audio/Video: Playback → Audio/Video: MediaStreamGraph
Flags: needinfo?(rbarker)
Flags: needinfo?(etoop)
Flags: needinfo?(bvandyk)
Flags: needinfo?(rbarker)

GVE = GeckoView Example, which can be built from m-c. You should be able to reproduce on any phone using the site above (I used a Pixel 2 running Android 10). When I grant microphone permission I can still hear noise coming from the phone's speaker, but it is a sort of hiss. I've attached logs for both granting and denying mic permission.

I am testing on my mobile, also Android 10, and it works. The volume is very low: I had to maximize the device volume and the element's volume (inside the room) and bring it very close to my ear in order to hear the sounds, but the sound is there with the mic permission granted. One observation is that the element does not display anything in the mobile case, compared to the desktop version, which displays the histogram of the current sound. A second observation is that one of the sounds is a hiss "noise" similar to what is described above as the problematic result. Are you sure this is not the current sound? Have you left it open for the whole duration (5 mins) to check if it jumps from sound to sound? I have also checked the logs; I don't see an obvious failure there.

The drop in audio happens as soon as the mic permission is granted in Gecko. The difference between accepting and not accepting is quite obvious. The video element not showing up is a different Android-specific platform bug.

:gfodor Maybe you can shed some light on what Hubs is doing in this case? It's a black box to me.

Flags: needinfo?(gfodor)

Unfortunately those are not the logs we need the most. Can we have answers to the questions asked in https://bugzilla.mozilla.org/show_bug.cgi?id=1626081#c4, and the right logs, to fix this quickly if there is anything to fix?

My bad, the Gecko logs were buried in other things.

After more testing I have found that this only happens when using the external device speaker or bluetooth speaker. Plugging in headphones fixes the issue for all devices that it reproduces on.

:bpeiris created this reduced sample that reproduces the issue outside of hubs: https://server.peiris.io/fxr-audio-test/

Flags: needinfo?(gfodor)

Reproduces in both Fenix Preview and Fenix Nightly. I tested this in Chrome for Android, both release and canary, and it gets into a feedback loop. Maybe this is related to noise canceling?

:padenot are there any updates on the progress of this issue?

Flags: needinfo?(etoop) → needinfo?(padenot)

Just a reminder, this issue blocks the next FxR release, which is high priority as it is part of a major launch with a commercial partner.

(In reply to Randall Barker [:rbarker] from comment #13)

After more testing I have found that this only happens when using the external device speaker or bluetooth speaker. Plugging in headphones fixes the issue for all devices that it reproduces on.

I can't seem to repro here on my Sony XZ1 Compact with Firefox Preview Nightly. I also don't have other phones to test on at home, though. It works well with Bluetooth headphones (Sony WH-1000XM3) and the built-in speakers/mic of the phone.

It "works" on my phone because the volume of the example is very low, so it's picked up at a very low volume by the mic, and probably ignored by the echo canceller/noise suppression algorithm.

I'll now reexplain how an echo canceller works; I don't know if it's necessary or if you already know, but I feel it's important for understanding this issue. An echo canceller has two input streams and one output stream.

The two input streams are:

  • the microphone input, where undesirable echo needs to be cancelled out
  • the reverse-stream, which is the complete mixed audio output that goes out of the speakers

The output stream is the microphone input, with the reverse-stream "subtracted" from the audio, if it is found in the microphone input stream. In video conferencing applications, this is fine: the reverse-stream is the mixed audio of the other participants in the call, and is subtracted, if found, from the microphone stream, so that the microphone stream can be sent to the other participants of the call without them hearing themselves back. If the machine plays a sound on the speakers, the echo canceller will try to cancel it out, but in general this is a rather hard problem, and results in residual echo, where the other participants will still hear a muted version of the sound.
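The "subtraction" above is done with an adaptive filter. As a toy illustration of the principle only, and not Gecko's actual implementation (Gecko uses the webrtc.org audio processing module, which works on frequency-domain blocks and handles delay estimation, double-talk, and more), here is a naive normalized-LMS canceller operating on single samples:

```javascript
// Toy NLMS echo canceller: estimates how the reverse-stream (speaker
// signal) shows up in the mic signal, and subtracts that estimate.
// Illustration only; real AECs are far more sophisticated.
function makeEchoCanceller(taps = 8, mu = 0.5) {
  const w = new Float64Array(taps);     // adaptive filter coefficients
  const hist = new Float64Array(taps);  // recent reverse-stream samples
  return function cancel(micSample, reverseSample) {
    // Shift the reverse-stream history and insert the newest sample.
    hist.copyWithin(1, 0);
    hist[0] = reverseSample;
    // Estimate the echo as the filtered reverse-stream.
    let estimate = 0;
    for (let i = 0; i < taps; i++) estimate += w[i] * hist[i];
    const err = micSample - estimate;   // echo-cancelled output
    // NLMS update: adapt coefficients toward a better echo estimate,
    // normalized by the energy of the reverse-stream history.
    let energy = 1e-9;
    for (let i = 0; i < taps; i++) energy += hist[i] * hist[i];
    for (let i = 0; i < taps; i++) w[i] += (mu * err * hist[i]) / energy;
    return err;
  };
}
```

Fed a mic signal that is just a scaled copy of the reverse-stream, the residual shrinks toward zero as the filter converges; a real signal that is *not* in the reverse-stream (the user's voice) passes through.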

Now, coming back to our setup here. We're playing audio content on a phone's speaker, and we also have this phone's microphone recording, and outputting this recorded audio to the speakers. If the audio output signal can be picked up by the microphone (i.e., is loud enough), bad things will happen; that's kind of a given, considering that what is being output is also the signal that we try to cancel out with the echo canceller. You have a feedback loop in Chrome because they are not doing the right thing: they are not sending the complete audio output of the browser to the echo canceller's reverse-stream. I recently had a discussion with Kevin Lee on the Hubs project where they had the opposite issue, and I made a drawing to make it a bit clearer what was going on, which might help: https://github.com/mozilla/hubs/pull/2361. In particular, the drawing shows the difference in the audio topology of Firefox and Chrome, which might be of interest.

This doesn't explain why audio is killed, however, because the echo canceller doesn't touch the audio output (it uses it as the reverse-stream only, and doesn't modify it). This made me wonder (since Alex and I can't repro) whether the Pixel phones have hardware noise suppression/echo cancellation that sits beneath Firefox, and I think the answer is yes, judging from source code and other online sources only (I don't have a Pixel and I'm locked at home). It seems it can be disabled in software.

But in any case, the bottom line of this whole story is: you're not supposed to get audio input with a mic and output it on speakers that are near the mic; this is the definition of a feedback loop and it doesn't work. Either we need to filter out parts of the audio input, or parts of the audio output. We do the former; the Pixel does the latter. I've read that Chrome on Android can disable the hardware echo canceller, hence the feedback loop.

But the real question here is: what are you trying to do with audio, at maybe a higher level? For this we can certainly help. We're in #media on chat.mozilla.org if that helps (same nicks).

Flags: needinfo?(rbarker)
Flags: needinfo?(padenot)
Flags: needinfo?(etoop)

I'll add (if that helps), that all the audio processing that is done in Gecko can be disabled from content:

navigator.mediaDevices.getUserMedia({ audio: { echoCancellation: false,
                                               noiseSuppression: false,
                                               autoGainControl: false } })
                      .then((stream) => { /* unprocessed microphone stream */ });

Also, of course, disabling the mic will help when playing back content; simply call stop on the MediaStream that is the argument of the thenable in comment 19.
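One note on calling stop: in current browsers, MediaStream itself no longer has a stop() method (it was removed from the spec), so you stop the individual tracks. A sketch, where stopMicrophone is an illustrative helper, not a real API:

```javascript
// Fully release the microphone: stop every track on the stream.
// Unlike track.enabled = false, stop() is not reversible, and it lets
// the platform release the capture device (and any voice routing).
function stopMicrophone(stream) {
  stream.getTracks().forEach((track) => track.stop());
}

// In a browser:
// navigator.mediaDevices.getUserMedia({ audio: true })
//   .then((stream) => { /* ... later ... */ stopMicrophone(stream); });
```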

Maybe Brian can expand on what hubs is trying to do.

Flags: needinfo?(rbarker) → needinfo?(brianpeiris)

I'd like to add some observations I've made regarding the behavior of Hubs in Firefox vs Chrome on Android.
Here is the test page: https://hubs.mozilla.com/4a3qpLR
In both Chrome and Firefox, when the page first loads, the system volume slider in Android only shows the "media" volume and looks like this:
https://i.gyazo.com/99ac411f09df672f24b40d16a4d191e2.png
However, once you grant mic permissions, an additional slider for the "call" volume appears in Chrome only:
https://i.gyazo.com/27c9912709c2d444488dc6fb00c8c753.png
In Firefox, no additional slider appears. However we do observe a significant change in volume (either a drop or increase) once the permission is granted.

Additionally, the behavior is quite different between the two:
In Chrome, once you grant mic permissions, volume (of Hubs) continues to be controlled by the media volume slider, despite the call volume slider appearing. In Firefox, I've observed that the volume of Hubs is controlled via the call volume slider (which you have to expand the volume controls to access). In some cases, that slider never appears though, and volume cannot be changed.
Note that the behavior for Hubs in Chrome is actually inverted from the behavior in Brian's test page that was posted above (https://server.peiris.io/fxr-audio-test/). In that test page, the behavior is identical to Firefox, in that the volume appears to only be controlled via the call volume slider. It's maybe worth noting that this test page does differ somewhat from how Hubs works, primarily in that Hubs uses THREE's Audio (https://github.com/mrdoob/three.js/blob/dev/src/audio/Audio.js) and AudioListener (https://github.com/mrdoob/three.js/blob/dev/src/audio/AudioListener.js) classes, which do some additional things like creating gain nodes that get attached to the Panner/Audio nodes and the AudioContext.

I'm not sure if this helps point to anything in particular, but perhaps GVE is similarly "switching" audio contexts from "media" to "call" but somehow getting lost in the process?

Flags: needinfo?(brianpeiris)

Hi folks - hoping to get an update on this bug? [please see email thread for details]

As an additional data point, muting the microphone in hubs has no effect.

If it helps, this is specifically where Hubs calls getUserMedia and what constraints we use: https://github.com/mozilla/hubs/blob/8379705dd22c82e55ef9ed34bad05dd55ca705f4/src/react-components/ui-root.js#L596-L639

We currently don't have a way to disable echoCancellation, noiseSuppression, and autoGainControl, but if desired, we can expose those (probably via URL arguments). I don't really see this being the issue, though; as Paul said, these options don't apply to your audio output, only your input.
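If those switches were exposed via URL arguments, the constraint-building could look roughly like this (the parameter names ec/ns/agc are hypothetical, not something Hubs ships):

```javascript
// Build getUserMedia audio constraints from hypothetical URL parameters,
// e.g. ?ec=0&ns=0&agc=0 to turn all input processing off.
// An absent parameter keeps the processing enabled (current behavior).
function audioConstraintsFromQuery(query) {
  const params = new URLSearchParams(query);
  const flag = (name) => params.get(name) !== '0';
  return {
    echoCancellation: flag('ec'),
    noiseSuppression: flag('ns'),
    autoGainControl: flag('agc'),
  };
}

// In the page:
// navigator.mediaDevices.getUserMedia({
//   audio: audioConstraintsFromQuery(location.search) });
```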

Kevin, how does Hubs mute the mic? Does it call stop? If the mic is truly muted I would expect the audio to be restored when muted, and it isn't. Which tells me either Hubs isn't actually muting the mic, or the issue is not what has been described.

Flags: needinfo?(kevin)

Hubs mutes the mic by toggling your WebRTC publisher's tracks to disabled. https://github.com/mozilla/naf-janus-adapter/blob/c00a8f3f8895c23c1015caecaa5db663715cfb2a/src/index.js#L919-L927

We do not ever call stop from the MediaDevices API. (This allows us to alert the user if we detect they are talking but are muted.)
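The distinction between disabling and stopping matters here: a disabled track keeps the capture device open (so the OS can stay in its voice-routing mode), while a stopped track releases it. A sketch of the disable-style mute described above, where setMicEnabled is an illustrative helper, not the naf-janus-adapter API:

```javascript
// "Mute" the way Hubs does: toggle enabled on each audio track.
// The device stays open, so the browser/OS still treats the session as
// an active capture; the track just produces silence while disabled.
function setMicEnabled(stream, enabled) {
  for (const track of stream.getAudioTracks()) {
    track.enabled = enabled;
  }
}
```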

Flags: needinfo?(kevin)

From my understanding, the reason for the sound drop is that Android switches the audio output device to the front receiver, instead of the speaker, when we start recording.

According to what Paul said in comment 18, the reason for routing to the receiver is to avoid the feedback caused by outputting sound from the speaker and recording sound from the front mic at the same time. And he mentioned that this use case is not proper. If you really want to do that, you can achieve that goal by canceling the input processing, as mentioned in comment 19, but of course feedback would happen, which might not be something we want to see.

So could anyone help answer Paul's question: what are you trying to do with audio, at maybe a higher level? Because from his explanation, this usage seems incorrect.

Thank you.

Flags: needinfo?(rbarker)
Flags: needinfo?(kevin)

At a high level, Hubs wants to record audio from your microphone with all input processing enabled, and transmit that over WebRTC to other clients who are co-present in a Hubs room with you. Hubs wants this transmitted audio to be heard in combination with additional sound from other media such as videos, without the echo being picked up and re-transmitted (echo cancellation) and with minimal background noise (noise suppression). This incoming audio is also spatially mixed using PannerNode from the Web Audio API. Hubs also expects it can do this on desktops, laptops, standalone VR devices, and mobile devices with or without headphones connected.

Effectively, Hubs wants to operate almost exactly like any webrtc call application (e.g. appr.tc) would work, with the addition of using the WebAudio API for the purpose of spatialization and having additional media playing at the same time (this is probably where this situation gets complicated).
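As background on the spatialization mentioned above: the distance attenuation a PannerNode applies with its default "inverse" distance model is defined by the Web Audio spec and can be computed directly. A sketch of that formula as a pure function (not Hubs' code; the real PannerNode also applies cone and directional effects):

```javascript
// Gain applied by a Web Audio PannerNode with the default "inverse"
// distance model: refDistance / (refDistance + rolloffFactor *
// (max(distance, refDistance) - refDistance)).
// Distances inside refDistance are clamped, so gain never exceeds 1.
function inverseDistanceGain(distance, refDistance = 1, rolloffFactor = 1) {
  const d = Math.max(distance, refDistance);
  return refDistance / (refDistance + rolloffFactor * (d - refDistance));
}
```

With the defaults, a source twice the reference distance away plays at half gain; in a browser, the PannerNode performs this per-source inside the AudioContext graph.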

Flags: needinfo?(kevin)

It's also worth noting that FxR is largely used on standalone VR headsets, which, while largely operating just like any Android device, have quite a different arrangement when it comes to audio (e.g. the speakers of these devices are usually attached to a headstrap a few inches away from the display, whereas the microphone is typically located much closer to the display). This is a factor that might be worth considering with how audio is routed.

Flags: needinfo?(rbarker)

(In reply to Kevin Lee from comment #29)

Effectively, Hubs wants to operate almost exactly like any webrtc call application (e.g. appr.tc) would work, with the addition of using the WebAudio API for the purpose of spatialization and having additional media playing at the same time (this is probably where this situation gets complicated).

Right, this is the problem. You can only do this with headphones. If there is a way to distinguish GVE for FxR at build time, I can try to make a few tweaks:

  • Route the audio to the normal speakers regardless of the audio input processing (this can't go to normal Firefox because it would break normal WebRTC usage)
  • Increase the suppression level of the echo canceler/noise suppressor

On your side (in the app), you might want to feed a quieter version of what the user will hear to the outbound PeerConnection, if you want to send to others what is being heard. Having it picked up by the mic because it's playing on speakers AND wanting echo cancellation are mutually exclusive.

I can't guarantee that it will work, because intentionally feeding back an output signal into a microphone that also plays this very signal is simply not something one should be doing.

I want to be clear: Hubs only sends your voice data over the PeerConnection; we don't send other audio, e.g. that from videos, in case that is the impression I made. Each Hubs client may have many media sources playing in addition to voices heard over WebRTC, but from each PeerConnection we only want to hear the single voice from that remote user (and not any media they happen to be playing locally).

  • Route the audio to the normal speakers regardless of the audio input processing (this can't go to normal Firefox because it would break normal WebRTC usage)

Randall will need to comment on this, but I believe this is exactly what we want, and this is what the Oculus Browser likely does for the Oculus Quest and Go, if I were to guess. If I were to speculate further, they've likely thrown out any concept of media vs call audio being handled differently.

  • Increase the suppression level of the echo canceler/noise suppressor

On your side (in the app), you might want to feed a quieter version of what the user will hear to the outbound PeerConnection, if you want to send to others what is being heard. Having it picked up by the mic because it's playing on speakers AND wanting echo cancellation are mutually exclusive.

I don't think I follow regarding why this is necessary: as I understand it, we have multiple audio sources being played in Hubs. As per your diagram (https://user-images.githubusercontent.com/1821537/78954923-74b42b80-7a92-11ea-8c44-c5432888e599.png), all of those get combined in the AudioMixer and get "subtracted" from the audio input stream. Afaict, this is working fine on Firefox desktop, regardless of whether headphones are used. I understand that it might be slightly different on mobile, but it seems that fundamentally they work the same way.

What I mean to say is that AEC doesn't seem to be the issue here; it's simply that audio is not being routed to the speakers correctly.

That is, unless what you're saying is that if audio is playing to the "normal" speakers it can bypass AEC entirely, but works fine if you play from the "call" speaker.

(In reply to Kevin Lee from comment #33)

I want to be clear: Hubs only sends your voice data over the PeerConnection; we don't send other audio e.g. that from videos, in case that is the impression I made. Each hubs client may have many media sources playing in addition to voices heard from over WebRTC, but from each PeerConnection we only want to hear the single voice from that remote user (and not any media they happen to playing locally).

This is what was unclear to me. You can disregard the paragraph that starts with "on your side", then. I've attached a patch that implements the two other bits, for testing, as well as more aggressive noise suppression. I'd still like an answer to this: is there a way to distinguish GVE for FxR at build time, so I can write a landable patch?

If I were to speculate further, they've likely thrown out any concept of media vs call audio being handled differently.

Plausible; it's specialized, so it's OK for them to not care about some use-cases. Not doing this would mean a weird experience for users of Bluetooth handsfree devices. On our side, we might be able to switch to what the attached patch is doing if we can detect that we're in a situation where it looks like we're doing a call, but in VR or something.

If there is a way to distinguish between GVE for FxR at build time

This is a question for Randall.

Plausible, it's specialized so it's OK for them to not care about some use-cases. Not doing this would mean a weird experience for users that use bluetooth handsfree devices.

AFAIK none of the standalone VR headsets support pairing with Bluetooth headphones, so this may be the case.

Flags: needinfo?(rbarker)

(In reply to Paul Adenot (:padenot) from comment #34)

This is what was unclear to me. You can disregard the paragraph that starts with "on your side", then. I've attached a patch that implements the two other bits, for testing, as well as more aggressive noise suppression. I'd still like an answer to this: is there a way to distinguish GVE for FxR at build time, so I can write a landable patch?

FxR uses the same GeckoView AAR that Fenix uses, so there are no compile-time flags in Gecko to distinguish them. It would need to be a runtime setting that gets plumbed through the GeckoView API.

Flags: needinfo?(rbarker)

A build of Hubs with audio input processing disabled is available here: https://dev.reticulum.io/Q2kEA9Q/accurate-noteworthy-sphere

Randall, can you test this and see if this prevents the audio from being routed incorrectly?

Flags: needinfo?(rbarker)

Kevin, I tested the build on a Neo 2 and the audio did not cut out after granting microphone permissions. I also retested with https://hubs.mozilla.com/SxkhBbN/audio-test and the audio cuts out after granting the mic permission as expected.

Flags: needinfo?(rbarker)

We did some of our own testing with a G2 and while echo is definitely being picked up with AEC disabled (the speakers and mic on the G2 are pretty much right next to each other), the audio does work (albeit possibly a little quiet due to autoGainControl being disabled) and is being routed to the correct speakers.

Despite the echo, I'm going to push an update to Hubs that will turn off all audio input processing specifically for FxR on standalone (if that's possible to determine from the user-agent). I strongly suggest finding a way to disable this audio routing in GeckoView, though, as this is not an ideal solution.

The changes would need to be at the media layer; GeckoView would only provide an API for enabling/disabling it. FxR does have a unique UA (it has "Mobile VR" in the UA string).

Hubs has been updated to default disable audio input processing on FxR. echoCancellation, noiseSuppression and autoGainControl can be manually re-enabled via the preferences menu if you need to test.

cubeb patch ready to review upstream: https://github.com/kinetiknz/cubeb/pull/586. I'll upload the gecko patches here, to be able to configure this with a pref, and tweak the echo canceler and noise reduction algorithm to be more aggressive.

Bridging with GV will happen shortly after if at all.

Assignee: nobody → padenot
Status: NEW → ASSIGNED

Depends on D74274

Blocks: 1628779
Attachment #9146529 - Attachment description: Bug 1626081 - Add a pref to change the disable the audio output stream routing on Android. r?achronop → Bug 1626081 - Add a pref to disable the audio output stream routing on Android. r?achronop
Pushed by padenot@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/2676f31077bc
Set and add a way to change the default routing mode for echo cancellation on mobile. r=achronop
https://hg.mozilla.org/integration/autoland/rev/42135f164217
Add a pref to disable the audio output stream routing on Android. r=achronop
https://hg.mozilla.org/integration/autoland/rev/82cae35b52eb
Update cubeb to 80c3d838d2929c27. r=cubeb-reviewers,kinetik

Push with failures: https://treeherder.mozilla.org/#/jobs?repo=autoland&resultStatus=testfailed%2Cbusted%2Cexception&revision=82cae35b52ebc57af979c04a96b3159cc16353cc&selectedTaskRun=SttYEFDeS0au7KrXKTnX7g-0

Failure log: https://treeherder.mozilla.org/logviewer.html#?job_id=301766218&repo=autoland

Backout link: https://hg.mozilla.org/integration/autoland/rev/61a83cc0b74b43117a9fa6d92c3d693ea03bbffc

[task 2020-05-11T18:36:44.227Z] 18:36:44     INFO -  make[4]: Entering directory '/builds/worker/workspace/obj-build/dom/media'
[task 2020-05-11T18:36:44.235Z] 18:36:44     INFO -  In file included from Unified_cpp_dom_media8.cpp:38:
[task 2020-05-11T18:36:44.236Z] 18:36:44    ERROR -  /builds/worker/checkouts/gecko/dom/media/MediaTrackGraph.cpp:3662:29: error: out-of-line definition of 'AudioInputLatency' does not match any declaration in 'mozilla::MediaTrackGraphImpl'
[task 2020-05-11T18:36:44.237Z] 18:36:44     INFO -  double MediaTrackGraphImpl::AudioInputLatency() {
[task 2020-05-11T18:36:44.238Z] 18:36:44     INFO -                              ^~~~~~~~~~~~~~~~~
[task 2020-05-11T18:36:44.239Z] 18:36:44    ERROR -  /builds/worker/checkouts/gecko/dom/media/MediaTrackGraph.cpp:3664:7: error: use of undeclared identifier 'mAudioInputLatency'
[task 2020-05-11T18:36:44.240Z] 18:36:44     INFO -    if (mAudioInputLatency != 0.0) {
[task 2020-05-11T18:36:44.240Z] 18:36:44     INFO -        ^
[task 2020-05-11T18:36:44.241Z] 18:36:44    ERROR -  /builds/worker/checkouts/gecko/dom/media/MediaTrackGraph.cpp:3665:12: error: use of undeclared identifier 'mAudioInputLatency'; did you mean 'mAudioOutputLatency'?
[task 2020-05-11T18:36:44.242Z] 18:36:44     INFO -      return mAudioInputLatency;
[task 2020-05-11T18:36:44.243Z] 18:36:44     INFO -             ^~~~~~~~~~~~~~~~~~
[task 2020-05-11T18:36:44.244Z] 18:36:44     INFO -             mAudioOutputLatency
[task 2020-05-11T18:36:44.245Z] 18:36:44     INFO -  /builds/worker/checkouts/gecko/dom/media/MediaTrackGraphImpl.h:1014:10: note: 'mAudioOutputLatency' declared here
[task 2020-05-11T18:36:44.245Z] 18:36:44     INFO -    double mAudioOutputLatency;
[task 2020-05-11T18:36:44.246Z] 18:36:44     INFO -           ^
[task 2020-05-11T18:36:44.247Z] 18:36:44     INFO -  In file included from Unified_cpp_dom_media8.cpp:38:
Flags: needinfo?(padenot)

Because this bug's Severity has not been changed from the default since it was filed, and its Priority is -- (Backlog), indicating it has not been previously triaged, the bug's Severity is being updated to -- (default, untriaged).

Severity: normal → --
Pushed by padenot@mozilla.com:
https://hg.mozilla.org/integration/autoland/rev/da9fd351df2e
Set and add a way to change the default routing mode for echo cancellation on mobile. r=achronop
https://hg.mozilla.org/integration/autoland/rev/a4007e6f12f6
Add a pref to disable the audio output stream routing on Android. r=achronop
https://hg.mozilla.org/integration/autoland/rev/c1dc82182495
Update cubeb to 80c3d838d2929c27. r=cubeb-reviewers,kinetik
Attachment #9145719 - Attachment is obsolete: true
Status: ASSIGNED → RESOLVED
Closed: 5 months ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla78
Flags: needinfo?(padenot)

I'm looking at setting the pref media.cubeb.output_voice_routing from GeckoView. Is there a reason GeckoView should not always set this to true? If not, it isn't clear to me what the pref does, so I'm not sure what to name the potential API or how to document it so embedders can understand how invoking it will affect their application. Any guidance would be appreciated.

Flags: needinfo?(padenot)

(In reply to Randall Barker [:rbarker] from comment #51)

I'm looking at setting the pref media.cubeb.output_voice_routing from GeckoView. Is there a reason GeckoView should not always set this to true? If not, it isn't clear to me what the pref does, so I'm not sure what to name the potential API or how to document it so embedders can understand how invoking it will affect their application. Any guidance would be appreciated.

When I originally did the audio input/output routing changes, it was necessary for Bluetooth earpieces to work during a call. Unfortunately my test hardware for this is in the office, which is closed. I'm trying to get it shipped to me, and I'll be able to do more testing when that is done.

The pref (with its default value of true, which is what GV did before the patches in this bug) tags all audio streams that are considered to carry voice-type content as such. When flipped to false, the type of audio is ignored and everything is routed as if it's not voice.

A stream is considered to carry voice-type content when at least one audio input processing algorithm is enabled (such as noise suppression, echo cancellation, or automatic gain control). When this is the case, the input and output devices are considered to be in use for voice content for the web page.
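Restating that rule as code (illustrative only; the real check lives in Gecko's C++ media code, not in JS):

```javascript
// A capture counts as "voice" when any input-processing algorithm is
// enabled in its settings. Mirrors the rule described above; not the
// actual Gecko implementation.
function isVoiceCapture(settings) {
  return Boolean(
    settings.echoCancellation ||
    settings.noiseSuppression ||
    settings.autoGainControl
  );
}
```

Under this rule, Hubs' default constraints (all processing on) mark the capture as voice, which is what triggers the Android voice-routing behavior discussed in this bug.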

What happens on a particular device when a stream is voice or not is down to the OEM, and looking at the results you gave me the other day, different behaviors have been chosen by different manufacturers. The current default GV behavior with the pref at true works well for a video-conferencing use-case on all devices that we've tested. I'm ordering new test devices right now because I fear the devices I'm testing on are not representative enough.

Flags: needinfo?(padenot)