Bug 1626081 Comment 29 Edit History

At a high level, Hubs wants to record audio from your microphone with all input processing enabled, and transmit it over WebRTC to other clients who are co-present in a Hubs room with you. Hubs wants this transmitted audio to be heard in combination with additional sound from other media such as videos, without echo being picked up and re-transmitted (echo cancellation) and with minimal background noise (noise suppression). The incoming audio is also spatially mixed using a PannerNode from the Web Audio API. Hubs also expects to be able to do this on desktops, laptops, standalone VR devices, and mobile devices, with or without headphones connected.
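Capturing the microphone "with all input processing enabled" corresponds to the standard `MediaTrackConstraints` processing flags passed to `getUserMedia`. A minimal sketch (the constraint names are real constraint keys; the surrounding wiring is illustrative, not Hubs' actual code):

```javascript
// Request a microphone track with the browser's input processing enabled:
// echo cancellation, noise suppression, and automatic gain control.
const micConstraints = {
  audio: {
    echoCancellation: true,
    noiseSuppression: true,
    autoGainControl: true,
  },
};

// Browser-only: acquire the processed track, then attach it to an
// RTCPeerConnection for transmission to the other clients in the room.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(micConstraints).then((stream) => {
    // e.g. peerConnection.addTrack(stream.getAudioTracks()[0], stream);
  });
}
```

The echo-cancellation complaint in this bug is precisely about whether the canceller can remove sound that is played out through Web Audio (the spatialized remote voices plus other media), not just sound played through a plain media element.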

Effectively, Hubs wants to operate almost exactly like any WebRTC call application (e.g. appr.tc), with the addition of using the Web Audio API for spatialization and of having additional media playing at the same time (which is probably where this situation gets complicated).
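The spatialization step routes each remote track through a `PannerNode` instead of playing it directly. A sketch of the per-speaker setup, assuming a hypothetical helper that builds the panner options from an avatar's room position (the option names are real `PannerOptions` keys):

```javascript
// Build PannerNode options for a remote speaker at a given room position.
// Model choices here are illustrative defaults, not Hubs' exact settings.
function pannerOptionsFor(position) {
  return {
    panningModel: "HRTF",     // head-related transfer function rendering
    distanceModel: "inverse", // volume falls off with distance
    positionX: position.x,
    positionY: position.y,
    positionZ: position.z,
  };
}

// Browser-only usage, with `remoteStream` taken from an RTCPeerConnection's
// "track" event:
//   const source = audioCtx.createMediaStreamSource(remoteStream);
//   const panner = new PannerNode(audioCtx, pannerOptionsFor({ x: 1, y: 0, z: -2 }));
//   source.connect(panner).connect(audioCtx.destination);
```

Because the remote voices reach the speakers via `audioCtx.destination` rather than a media element, the echo canceller must treat the Web Audio output as far-end reference signal for this to work.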