Bug 1907367 Comment 14 Edit History

(In reply to Andreas Pehrson [:pehrsons] from comment #13)
>     - for the `cubeb_resampler_speex_one_way` processor `input_needed_for_output` depends on the size of two buffers that may contain some leftovers from the last resampling pass and may [return zero](https://searchfox.org/mozilla-central/rev/8b0666aff1197e1dd8017de366343de9c21ee437/media/libcubeb/src/cubeb_resampler_internal.h#297). This seems like the most likely culprit. Easiest way to trigger it should be with a low latency stream. I'll see if I can find any changes to default latency in 128.

I'm not convinced this is the culprit, fwiw, because on macOS, in the vast majority of cases, I'd expect the input and output callback buffers to be identical in size and running in lockstep in terms of timing.
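To make the quoted mechanism concrete, here is a hedged sketch (NOT the actual `cubeb_resampler_speex_one_way` code; the function name, parameters, and arithmetic are illustrative assumptions): if the two internal buffers still hold enough leftover frames from the previous pass to cover the next output request, no fresh input is needed and zero is returned.

```cpp
#include <cmath>
#include <cstdint>

// Illustrative sketch only. The idea: output already covered by leftover
// resampled frames, plus what the leftover unresampled frames will produce,
// can exceed the request, in which case zero fresh input frames are needed.
uint32_t input_needed_for_output_sketch(int32_t output_frames_needed,
                                        int32_t unresampled_frames_left,
                                        int32_t resampled_frames_left,
                                        float resampling_ratio /* in/out */) {
  float needed = (output_frames_needed - resampled_frames_left) *
                     resampling_ratio -
                 unresampled_frames_left;
  if (needed <= 0.f) {
    return 0;  // leftovers already cover the request
  }
  return static_cast<uint32_t>(std::ceil(needed));
}
```

With a low-latency (small `output_frames_needed`) stream, leftovers more easily cover an entire request, which is consistent with the quoted point that low latency is the easiest trigger.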

Gecko uses the same sample rate (the native rate of the default output device) for both input and output, and the requested latency is the same on both sides of a duplex stream (and clamped to [128, 512]).
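The clamping mentioned above can be sketched as follows (the function name is hypothetical; only the [128, 512] frame bounds come from the comment):

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper: clamp the requested duplex latency, in frames,
// into the [128, 512] range described above.
uint32_t clamp_duplex_latency(uint32_t requested_frames) {
  return std::clamp<uint32_t>(requested_frames, 128, 512);
}
```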
What could happen is that the native rate of the input device differs from that of the output device, so we end up resampling once, from input to the data callback. In the VPIO case we request the native rate of the default output device for the data callback, but the backend is forced to use the native rate of the input device, so there may be two resampling passes: from input to data callback, and from data callback to output.
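The rate relationships above can be sketched like this (struct and function names are hypothetical; each rate mismatch around the data callback costs one resampling pass):

```cpp
#include <cstdint>

// Hypothetical model of the rates on each side of a duplex stream.
struct StreamRates {
  uint32_t input_side;     // rate at which input frames arrive
  uint32_t output_side;    // rate at which output frames are consumed
  uint32_t data_callback;  // rate the data callback operates at
};

// Count how many resampling passes the mismatches imply.
int resampling_passes(const StreamRates& r) {
  int passes = 0;
  if (r.input_side != r.data_callback) ++passes;   // input -> callback
  if (r.data_callback != r.output_side) ++passes;  // callback -> output
  return passes;
}
```

Non-VPIO with matching devices, `{48000, 48000, 48000}`, needs no resampling; an input device at a different native rate, `{44100, 48000, 48000}`, needs one pass; VPIO, which forces both hardware sides to the input device's native rate while the callback keeps the requested output-native rate, `{44100, 44100, 48000}`, needs two.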

Even with an aggregate device (the common non-VPIO case) it appears that input and output callbacks are consistent on buffer size and timing.
