Add suspend() and resume() for OfflineAudioContext
Categories: Core :: Web Audio, defect, P3
People: Reporter: padenot, Unassigned
Keywords: dev-doc-needed
Comment 10•5 years ago
Are there any changes? I use suspend/resume on OfflineAudioContext to build a visualization of a whole file using an AnalyserNode. Because this is missing in Firefox, I have no choice but to state in bold that Firefox is not supported.
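The pattern this comment relies on can be sketched roughly as follows. This is a minimal sketch, not a definitive implementation: it assumes a browser that implements OfflineAudioContext.suspend()/resume(), and the helper and callback names are hypothetical. Suspend times must fall on 128-frame render-quantum boundaries, so the hop is rounded to whole quanta; the scheduling math is kept in a pure helper, separate from the browser-only wiring.

```javascript
const RENDER_QUANTUM = 128;

// Pure helper: quantized suspend times (in seconds) for the given hop.
function suspendTimes(totalFrames, hopFrames, sampleRate) {
  const hop = Math.max(RENDER_QUANTUM,
      Math.round(hopFrames / RENDER_QUANTUM) * RENDER_QUANTUM);
  const times = [];
  for (let f = hop; f < totalFrames; f += hop) times.push(f / sampleRate);
  return times;
}

// Browser-only part, kept behind a function boundary so the helper
// above stays usable anywhere.
async function analyseWholeFile(audioBuffer, hopFrames, onFrame) {
  const ctx = new OfflineAudioContext(audioBuffer.numberOfChannels,
      audioBuffer.length, audioBuffer.sampleRate);
  const src = new AudioBufferSourceNode(ctx, { buffer: audioBuffer });
  const analyser = new AnalyserNode(ctx, { smoothingTimeConstant: 0 });
  src.connect(analyser).connect(ctx.destination);
  for (const t of suspendTimes(audioBuffer.length, hopFrames,
                               audioBuffer.sampleRate)) {
    // Every suspend must be scheduled before rendering starts.
    ctx.suspend(t).then(() => {
      const bins = new Float32Array(analyser.frequencyBinCount);
      analyser.getFloatFrequencyData(bins);
      onFrame(t, bins);
      ctx.resume();
    });
  }
  src.start();
  return ctx.startRendering();
}
```

Each suspend handler reads the analyser while the graph is frozen, then resumes rendering, which is why the whole file can be analysed faster than realtime.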
Comment 11•5 years ago
Can a new OfflineAudioContext for each FFT suffice?
Comment 12•5 years ago
- This sounds like torture for performance. By contrast, suspend/resume are asynchronous and barely affect the main thread.
- This may produce inconsistent results and would be hard to implement stably. Is it even defined which part of the resulting AudioBuffer the frequency data will correspond to?
Interestingly enough, I use the suspend/resume method proposed on Stack Overflow by Raymond Toy, who at the same time recommended staying non-compliant above.
Comment 13•5 years ago
(In reply to Vladyslav Huba from comment #12)
> This may produce inconsistent results and would be hard to implement stably. Is it even defined which part of the resulting AudioBuffer the frequency data will correspond to?
The behavior of getFloatFrequencyData() is similar when the context rendering is suspended or completed. The frequency data is defined in terms of AnalyserNode input: "The most recent fftSize frames are used in computing the frequency data."
The contents of the AudioBuffer resulting from an OfflineAudioContext depend on the graph attached to the DestinationNode.
> at the same time he recommended staying non-compliant above.
There was a qualifier on that comment.
Comment 14•3 years ago
I'd like to add one use case that requires this feature. I have a web-based audio player with custom visualizations: I send the AnalyserNode data to a WebGL shader. During realtime playback it is fine to just use whatever the live AudioContext has at the moment. However, it would be grand if I could reliably render a sequence of images with the proper data for any point in time (to export a video, for example). I'd like to render each frame with the proper buffer data, essentially advancing the OfflineAudioContext by 1/60th of a second, getting the analyser data, rendering the WebGL frame, and repeating thousands of times until finished.
If there is a workaround, let me know. Thanks!
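The 1/60-second stepping described above reduces to scheduling math, sketched here under assumptions (hypothetical helper name; the spec's 128-frame render quantum). At 44.1 kHz a video frame is 735 audio frames, which is not a multiple of 128, so each suspend lands on the nearest quantum boundary at or before the ideal video-frame time. Computing each boundary from i/fps independently, rather than accumulating a step, avoids rounding drift:

```javascript
const QUANTUM = 128;

// Quantized suspend times (seconds), one per video frame, for a
// rendering of `durationSec` at `sampleRate`, stepped at `fps`.
function videoFrameSuspendTimes(durationSec, fps, sampleRate) {
  const totalFrames = Math.floor(durationSec * sampleRate);
  const times = [];
  for (let i = 1; (i * sampleRate) / fps < totalFrames; i++) {
    // Nearest quantum boundary at or before the ideal frame time i/fps.
    const frame = Math.floor(((i / fps) * sampleRate) / QUANTUM) * QUANTUM;
    times.push(frame / sampleRate);
  }
  return times;
}
```

In a browser that implements suspend() on OfflineAudioContext, each returned time would be passed to ctx.suspend() before startRendering(), with the analyser read and the WebGL frame drawn in each suspend handler. (If fps were ever high enough that the step fell below one quantum, the list would need deduplicating; at 60 fps and 44.1 kHz the step is 735 frames, so it does not.)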
Comment 15•3 years ago
I guess a workaround could be to take the buffer of the OfflineAudioContext rendering and feed chunks into other, shorter OfflineAudioContexts containing AnalyserNodes. At the completion of rendering of each of the shorter contexts, the AnalyserNode can be used to read an analysis of the (most recent) data processed.
You'd want to set the AnalyserNode smoothing to zero and do your own smoothing, if any.
There's still the problem with the size of the rendered buffer. The Audio WG is aware we need a solution to provide a stream of chunks of output with back-pressure.
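A rough shape for this workaround, as a sketch under assumptions: `chunkRange` and `analyseAt` are hypothetical names, `renderedBuffer` is assumed to be the AudioBuffer from a completed full offline render, and it relies on the point made in comment 13 that getFloatFrequencyData() behaves the same once rendering has completed. Per the advice above, smoothingTimeConstant is set to 0.

```javascript
// Pure helper: the slice of the rendered buffer one analysis needs,
// i.e. the fftSize frames ending at `frameEnd`, clamped at frame 0.
function chunkRange(frameEnd, fftSize) {
  const start = Math.max(0, frameEnd - fftSize);
  return { start, length: frameEnd - start };
}

// Browser-only part: render just that slice through a short-lived
// OfflineAudioContext and read the analyser afterwards.
// fftSize must be a power of two between 32 and 32768.
async function analyseAt(renderedBuffer, frameEnd, fftSize) {
  const { start, length } = chunkRange(frameEnd, fftSize);
  const sampleRate = renderedBuffer.sampleRate;
  const ctx = new OfflineAudioContext(1, length, sampleRate);
  const src = new AudioBufferSourceNode(ctx, { buffer: renderedBuffer });
  const analyser = new AnalyserNode(ctx, { fftSize, smoothingTimeConstant: 0 });
  src.connect(analyser).connect(ctx.destination);
  // Play only the slice [start, frameEnd) into the short context.
  src.start(0, start / sampleRate, length / sampleRate);
  await ctx.startRendering();
  const bins = new Float32Array(analyser.frequencyBinCount);
  analyser.getFloatFrequencyData(bins);
  return bins;
}
```

One short context per analysis point is the performance cost comment 12 objected to, but it sidesteps the missing suspend()/resume() entirely.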
Comment 16•3 years ago
The size of the rendered audio buffer isn't a problem for me; 650 MB/hour is small compared to the rendered images.
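As a back-of-the-envelope check on that figure (the channel count and sample rate here are assumptions, not stated in the comment):

```javascript
// Size of an OfflineAudioContext render: Web Audio holds one
// 32-bit float per sample per channel.
function bufferBytes(sampleRate, channels, seconds) {
  return sampleRate * channels * 4 * seconds;
}

// One hour of mono float32 audio at 44.1 kHz:
// bufferBytes(44100, 1, 3600) === 635040000
// (~635 MB, in the ballpark of the ~650 MB/hour quoted above)
```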