Closed Bug 1657203 Opened 4 years ago Closed 3 years ago

Implement virtual surround channels for Firefox Reality

Categories

(Core :: WebVR, enhancement)

enhancement

Tracking

()

RESOLVED INCOMPLETE

People

(Reporter: kip, Assigned: kip)

Details

Attachments

(1 file)

Attached patch hackweek.patchSplinter Review

When watching videos or accessing WebAudio-based surround sound content in Firefox Reality, we can take advantage of the continuously 6-DoF head-tracked headset to provide virtual speakers pinned to world coordinates. A prior "hackweek" exploration demonstrated the effect, using HRTF processing during downmixing of multi-channel audio.

I would like to implement this functionality, behind a pref, as a feature users can activate in Firefox Reality.

The original HackWeek patch is attached for reference. The final solution would be cleaned up and would take the HMD orientation into account to lerp between multiple HRTF samples.
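As a rough sketch of the lerp mentioned above: given the HRTF impulse responses (HRIRs) captured at two neighboring azimuths, the renderer could blend them per-tap by the fractional head yaw between the two sample directions. This is an illustrative simplification (the names `Hrir`, `LerpHrir`, and the tap count are hypothetical, not from the patch); production HRTF interpolation is often done per-band or in the frequency domain.

```cpp
#include <array>
#include <cstddef>

// Hypothetical HRIR representation: a fixed-length FIR filter per ear
// direction. 128 taps is an arbitrary illustrative size.
constexpr size_t kHrirTaps = 128;
using Hrir = std::array<float, kHrirTaps>;

// Linearly interpolate each filter tap between two neighboring HRTF
// samples. t in [0, 1]: 0 selects `a`, 1 selects `b`.
Hrir LerpHrir(const Hrir& a, const Hrir& b, float t) {
  Hrir out;
  for (size_t i = 0; i < kHrirTaps; ++i) {
    out[i] = a[i] + (b[i] - a[i]) * t;
  }
  return out;
}
```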

Assignee: nobody → kgilbert

Note that although it's possible to create WebAudio-based HRTF spatial sound sources and move them according to the WebXR API output, the goal of this patch is to make spatial audio accessible across the entire 2D web. Video playback with a "virtual theatre"-like experience in VR is a primary use case.

After some experimentation, it seems feasible to add the azimuth, elevation, and attenuation of each virtual speaker to the VR Service ShMem. The AudioConverter is then only responsible for selecting the correct HRTF sample and applying attenuation. This would enable Firefox Reality to provide end-user UX for adjustments based on individual preference (likely as a set of presets initially).
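A minimal sketch of what that division of labor could look like, assuming a hypothetical per-speaker record in the ShMem and HRTF samples stored at a fixed azimuth spacing (the struct and function names below are illustrative, not taken from the patch):

```cpp
#include <cmath>

// Hypothetical per-speaker state that could live in the VR Service
// ShMem: just a direction relative to the listener plus a linear gain.
struct VRSpeakerState {
  float azimuthDegrees;
  float elevationDegrees;
  float attenuation;  // linear gain, 0.0 .. 1.0
};

// Pick the nearest stored HRTF azimuth, assuming `count` samples are
// captured every `stepDegrees` around the listener. The AudioConverter
// would use this index to choose the filter, then scale by attenuation.
int NearestHrtfIndex(float azimuthDegrees, float stepDegrees, int count) {
  float wrapped = std::fmod(azimuthDegrees, 360.0f);
  if (wrapped < 0.0f) {
    wrapped += 360.0f;  // normalize into [0, 360)
  }
  int index = static_cast<int>(std::lround(wrapped / stepDegrees));
  return index % count;  // wrap 360 degrees back to index 0
}
```

Keeping only direction and gain in the ShMem keeps the cross-process contract small; all filter data stays on the audio side.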

By simplifying this to direction and attenuation, there will not be phase changes based on distance as in real life, except for the inter-aural phase difference modeled in the HRTF samples. This, in effect, allows the "listener sweet spot" to be larger than in a real-world multi-channel speaker arrangement. Essentially, we wish to physically model only to the extent that it improves the listening experience.

It is also important to ensure that any change in HRTF sample selection is de-clicked by the AudioConverter.
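One common way to de-click a filter switch, shown here as an illustrative sketch rather than the patch's actual approach: when the selected HRTF changes, render one block through both the old and the new filter and crossfade linearly across the block, so the output has no discontinuity at the switch point.

```cpp
#include <cstddef>
#include <vector>

// Crossfade from the old filter's output to the new filter's output
// over one audio block. At i == 0 the result is entirely the old
// output; at the last sample it is entirely the new output.
std::vector<float> CrossfadeBlocks(const std::vector<float>& oldOut,
                                   const std::vector<float>& newOut) {
  size_t n = oldOut.size();
  std::vector<float> mixed(n);
  for (size_t i = 0; i < n; ++i) {
    float t = n > 1 ? static_cast<float>(i) / (n - 1) : 1.0f;
    mixed[i] = oldOut[i] * (1.0f - t) + newOut[i] * t;
  }
  return mixed;
}
```

This costs one extra filter pass on blocks where the selection changes, which only happens while the head is turning across a sample boundary.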

To emphasize: a key motivation here is that, with a tracked headset and 360° videos, you expect the sound to change to reflect your viewing direction.

Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → INCOMPLETE
