Open Bug 1265406 Opened 8 years ago Updated 1 year ago

Add suspend() and resume() for OfflineAudioContext

(Core :: Web Audio, defect, P3)

(Reporter: padenot, Unassigned)

(Keywords: dev-doc-needed)

This allows deterministically mutating an OfflineAudioContext graph.

Spec changes at
Priority: -- → P2
Assignee: nobody → dminor
Rank: 25
The value of this is questionable IMO.

If required, deterministic equivalents of OfflineAudioContext graph
manipulation can be achieved through gain node connections, which can be
turned on and off through AudioParams.
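
As a sketch of that gain-node alternative (node choices and timings here are illustrative, not taken from this bug): route a source through a GainNode and schedule the gain with AudioParam automation, which is deterministic on an OfflineAudioContext. The `gateTimes` helper is a hypothetical convenience, not a Web Audio API; the browser-only portion is guarded so the snippet is harmless elsewhere.

```javascript
// Hypothetical helper: convert [start, end] gate intervals in seconds
// into exact sample-frame boundaries, so gating lines up
// deterministically with the rendered buffer.
function gateTimes(segments, sampleRate) {
  return segments.map(([start, end]) => [
    Math.round(start * sampleRate),
    Math.round(end * sampleRate),
  ]);
}

// Browser-only portion (requires the Web Audio API).
if (typeof OfflineAudioContext !== "undefined") {
  const sampleRate = 48000;
  const ctx = new OfflineAudioContext(2, sampleRate * 2, sampleRate);
  const osc = new OscillatorNode(ctx);
  const gate = new GainNode(ctx, { gain: 0 });
  osc.connect(gate).connect(ctx.destination);
  // Deterministically "connect" at 0.5 s and "disconnect" at 1.5 s.
  gate.gain.setValueAtTime(1, 0.5);
  gate.gain.setValueAtTime(0, 1.5);
  osc.start();
  ctx.startRendering();
}
```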

However, the length of rendering of offline contexts is limited by the fact
that all rendering is recorded in an uncompressed buffer.  That makes it
unlikely that the rendering will be long enough to require support for graph
changes during rendering.
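
For scale, a back-of-the-envelope calculation of the uncompressed-render concern (a sketch; `renderBytes` is an illustrative helper, and the 4 bytes per sample reflect the float32 samples an offline render is held in):

```javascript
// An offline render is held as 32-bit float PCM, so its size grows
// linearly with duration, sample rate, and channel count.
function renderBytes(seconds, sampleRate, channels) {
  return seconds * sampleRate * channels * 4; // 4 bytes per float32 sample
}

// One hour of mono 44.1 kHz audio: about 635 MB, consistent with the
// ~650 MB/hour figure quoted later in this bug.
const monoHour = renderBytes(3600, 44100, 1);
```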

Offline audio contexts need to be able to render arbitrary durations, which is
a separate problem.  If that is resolved by allowing the graph to be restarted
after rendering completes, then graph manipulation could be performed between
restarts and suspend() would not be required.

I suspect this was implemented in Blink to help with testing.  Their
implementation does not permit multiple changes to the graph in a single
transaction, and so a mechanism to pause the graph is useful to effect such
changes.  Gecko already provides complex graph manipulation in a single
transaction in realtime graphs and so does not need this feature.

For Gecko, I think it is better to wait for a solution to that problem
rather than implement a different partial solution now, one which I doubt would be useful.
When I discussed this with padenot part of the interest in working on this was that Blink had used this to improve their testing. If this isn't useful for testing in Gecko, then we should drop the priority here and I should find another Web Audio bug to pick up instead. Paul, what do you think?
Flags: needinfo?(padenot)
Yeah maybe we can do something else. Not sure what though.
Flags: needinfo?(padenot)
Assignee: dminor → nobody
I find it useful to have.  It is useful for connecting new nodes that cause the graph to change its internal configuration, because the newly connected nodes have a different number of output channels.  You can't fake that with gain nodes and AudioParams.

Perhaps you could do something with a mix of channel mergers/splitters, but then your offline graph really starts to diverge from what you might want to do with an online graph, and that makes offline graphs much less useful for figuring out why your online graph isn't working.

This is primarily what I use offline context and suspend for: simulating my online graph that isn't working for some unknown reason.

Besides it's part of the spec now, for better or worse.
Hmm, I guess you can fake it with gain nodes by fiddling with channelCount, channelCountMode, and channelInterpretation.  But that just makes your testing flaky because you don't know when this will happen with respect to the offline audio context's time.
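
A hypothetical sketch of the suspend()/resume() usage under discussion (the API is in the spec but not implemented in Gecko; the graph mutation shown is illustrative). It assumes the spec's behavior of rounding a suspend time up to the next 128-frame render-quantum boundary, captured here in a small testable helper; the browser-only portion is guarded.

```javascript
// Assumption: suspend times are quantized up to render-quantum
// (128-frame) boundaries, per the Web Audio spec.
function quantizeSuspendTime(t, sampleRate) {
  const quantum = 128;
  return (Math.ceil((t * sampleRate) / quantum) * quantum) / sampleRate;
}

// Browser-only portion (requires the Web Audio API).
if (typeof OfflineAudioContext !== "undefined") {
  const sampleRate = 44100;
  const ctx = new OfflineAudioContext(1, sampleRate, sampleRate);
  const osc = new OscillatorNode(ctx);
  osc.connect(ctx.destination);
  osc.start();
  // Suspend near 0.5 s, mutate the graph while rendering is paused,
  // then resume -- e.g. reroute through a node with a different
  // channel configuration, which gain-node gating cannot emulate.
  ctx.suspend(0.5).then(() => {
    osc.disconnect();
    const merger = new ChannelMergerNode(ctx, { numberOfInputs: 2 });
    osc.connect(merger, 0, 0);
    merger.connect(ctx.destination);
    return ctx.resume();
  });
  ctx.startRendering().then((buffer) => {
    // `buffer` holds the full deterministic render.
  });
}
```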
Mass change P2->P3 to align with new Mozilla triage process.
Priority: P2 → P3
What we could do here is implement this behind a hidden pref but never ship
it, like we did for some deprecated features in the past.
That would enable Gecko to run some of Blink's tests.

I still don't want to ship this because I think a solution to the
underlying problem will meet any needs that suspend/resume supports,
and I don't want to encourage two different APIs for the same purpose,
nor a solution that is constrained by the suspend/resume API design.

Raymond is right in pointing out that gain nodes cannot provide the channel
count changes equivalent to adding/removing connections, but I expect a
solution to that problem to provide a means to add and remove connections
at arbitrary block boundaries.
My personal take is that if it's behind a pref, you might as well not waste your time doing it, and remain non-compliant.

Are there any changes? I use suspend/resume on an OfflineAudioContext to build a visualization of a whole file using an AnalyserNode. Because this is missing in Firefox, I have no choice but to state in bold that Firefox is not supported.

Can a new OfflineAudioContext for each FFT suffice?

  1. This sounds like torture for performance. By contrast, suspend/resume are asynchronous and barely affect the main thread.
  2. This may produce inconsistent results and would be hard to implement stably. Is it defined exactly which part of the resulting AudioBuffer the frequency data corresponds to?
    Interestingly enough, I use the suspend/resume method proposed on Stack Overflow by Raymond Toy, who at the same time recommended staying non-compliant above.

(In reply to Vladyslav Huba from comment #12)

  1. This may produce inconsistent results and would be hard to implement stably. Is it defined exactly which part of the resulting AudioBuffer the frequency data corresponds to?

The behavior of getFloatFrequencyData() is the same whether the context rendering is suspended or completed. The frequency data is defined in terms of the AnalyserNode's input: "The most recent fftSize frames are used in computing the frequency data."

The contents of the AudioBuffer resulting from an OfflineAudioContext depend on the graph attached to the DestinationNode.

at the same time he recommended staying non-compliant above.

There was a qualifier on that comment.

See Also: → 1573234

I'd like to add one use case that requires this feature. I have a web-based audio player that has custom visualizations. I send the AnalyserNode data to a WebGL shader. During realtime playback it is fine to just use whatever the live AudioContext has at the moment. However, it would be grand if I could reliably render a sequence of images with the proper data for any point in time (to export a video, for example). I'd like to be able to render each frame with the proper buffer data, essentially advancing the OfflineAudioContext by 1/60th of a second, getting the analyser data, rendering the WebGL frame, and repeating thousands of times until finished.

If there is a workaround let me know, thanks!

I guess a workaround could be to take the buffer of the OfflineAudioContext rendering and feed chunks into other, shorter OfflineAudioContexts containing AnalyserNodes. At the completion of rendering of each of the shorter contexts, the AnalyserNode can be used to read an analysis of the (most recent) data processed.

You'd want to set the AnalyserNode smoothing to zero and do your own smoothing, if any.
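
A sketch of that workaround with hypothetical names (`frameWindows` and `analyseWindow` are illustrative helpers, not APIs): render the file once, then for each video frame replay the most recent window of samples into a short OfflineAudioContext containing an AnalyserNode with smoothing disabled, and read the frequency data after that short render completes.

```javascript
// Hypothetical helper: for a video cadence of `fps`, compute the
// [start, end) sample window of the most recent fftSize audio frames
// ending at each video frame, clamped at the start of the buffer.
function frameWindows(totalFrames, sampleRate, fps, fftSize) {
  const windows = [];
  const step = sampleRate / fps;
  for (let end = step; end <= totalFrames; end += step) {
    const e = Math.round(end);
    windows.push([Math.max(0, e - fftSize), e]);
  }
  return windows;
}

// Browser-only portion (requires the Web Audio API): `rendered` is an
// AudioBuffer from a previous full offline render.
async function analyseWindow(rendered, [start, end]) {
  const sampleRate = rendered.sampleRate;
  const ctx = new OfflineAudioContext(
    rendered.numberOfChannels, end - start, sampleRate);
  const src = new AudioBufferSourceNode(ctx, { buffer: rendered });
  // Smoothing off, so each window's analysis is independent.
  const analyser = new AnalyserNode(ctx, { smoothingTimeConstant: 0 });
  src.connect(analyser).connect(ctx.destination);
  src.start(0, start / sampleRate, (end - start) / sampleRate);
  await ctx.startRendering();
  const data = new Float32Array(analyser.frequencyBinCount);
  analyser.getFloatFrequencyData(data); // analysis of the most recent frames
  return data;
}
```

One short context per video frame is still far cheaper than re-rendering the whole file per frame, since each context only renders `fftSize`-scale chunks.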

There's still the problem with the size of the rendered buffer. The Audio WG is aware we need a solution to provide a stream of chunks of output with back-pressure.

The size of the rendered audio buffer isn't a problem for me; 650 MB/hour is small compared to the rendered images.

Severity: normal → S3