Comments (8)
IMO the only way to achieve this is to create an OfflineAudioWorkletGlobalScope. I'd rather not go down that path.
from web-audio-api-v2.
AudioWG virtual F2F:
- We really want no differences between Offline and regular.
from web-audio-api-v2.
I'd rather not do this. Differences in API or behavior between the offline and real-time contexts are problematic.
Why not just compute everything in the worklet? What is the use case here? Parallel processing is not really a supported use case at the moment, but some people have thoughts about it.
In any case, this is v2 territory.
from web-audio-api-v2.
An if statement can be used as a user-defined implementation of wait, by checking whether the value at the expected index has been written yet:

```javascript
if (writeOffset < expectedIndex) {
  // Not enough data yet: do other stuff instead of blocking.
  // A user-defined "wait" could, e.g., set the outputs to 0 until
  // writeOffset === expectedIndex, or suspend the context.
  return true;
}
// Do stuff with expectedIndex set at the SharedArrayBuffer.
```
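For context, the blocking behavior under discussion can be sketched in plain JavaScript. The snippet below is runnable in Node.js, which exposes the same SharedArrayBuffer/Atomics API as a worker scope; the index and values are illustrative assumptions, not code from any linked project.

```javascript
// Sketch of Atomics.wait() semantics on a SharedArrayBuffer.
// Atomics.wait() only blocks while the value at the index still
// equals the expected value; otherwise it returns "not-equal".
const sab = new SharedArrayBuffer(4);
const view = new Int32Array(sab);

Atomics.store(view, 0, 42); // simulate a producer having written data

// Expecting 0, but the slot already holds 42, so this returns
// immediately with "not-equal" instead of blocking.
const noBlock = Atomics.wait(view, 0, 0);

// Expecting the current value (42) with a 10 ms timeout: no other
// thread will notify us, so this blocks briefly and times out.
const timedOut = Atomics.wait(view, 0, 42, 10);

console.log(noBlock, timedOut); // not-equal timed-out
```

Note that this is why the feature is contentious: the second call genuinely blocks the calling thread, which on the real-time audio thread means missed render quanta.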
What use-case is this?
Am also interested in the purpose of using Atomics.wait()?
Are you trying to achieve the following https://github.com/GoogleChromeLabs/web-audio-samples/blob/ae5c30553516814fd690bdf6184d33ea81642ae6/audio-worklet/design-pattern/shared-buffer/shared-buffer-worklet-processor.js#L144 ?
// Now we have enough frames to process. Wake up the worker.
from web-audio-api-v2.
Differences in API or behavior between the offline and real-time contexts are problematic.
I understand the concern here (hence why I also mentioned this initially), although it might be worth a mention as to how/why exactly it's problematic: not being dev-friendly? Web IDL challenges? If it's one particular problem, is there a straightforward way to get around it?
To expand somewhat on the claim that a behavior difference between the offline and real-time contexts is problematic: the way I see it, this is already the case with how process() calls are scheduled, and IMO one could argue that is a fairly fundamental difference.
What use-case is this?
In my case, it's a C++ backed audio processing system in Electron, so I understand this may not exactly be the use case that should weigh in as a factor. Still, maybe it's valuable insight.
PS: I understand there are ways around this problem, but in this particular case, nothing as generally reliable as a blocking wait IMO.
from web-audio-api-v2.
In any event, I don't want to insist on pushing the matter if it really doesn't check out. Even for very exotic use cases like the one I described above, the occasional blocking wait is perfectly usable already (not to mention there are other ways around the problem).
I'm merely interested in whether it would be conceptually reasonable to have Atomics.wait enabled, but it sounds like it may not be worth it in practice.
from web-audio-api-v2.
I do not gather how Atomics.wait() helps for the use case. If you have an input stream of audio, you can read the stream in process() by accessing indexes of the SharedArrayBuffer. This is possible in "real-time" https://github.com/guest271314/webtransport/blob/main/webTransportAudioWorkletWebAssemblyMemoryGrow.js, or by beginning the stream before the AudioWorkletGlobalScope is created, or by utilizing suspend() and resume() to effectively pause process() execution until conditions are met https://github.com/guest271314/AudioWorkletStream/blob/message-port-post-message/audioWorklet.js#L18.
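The non-blocking approach described here, where process() checks indexes and outputs silence on underrun instead of waiting, can be sketched with a hypothetical reader function (the state layout and names are assumptions for illustration, not the linked code):

```javascript
// Hypothetical shared state layout: states[0] = write offset,
// states[1] = read offset, both monotonically increasing frame counts.
const WRITE = 0, READ = 1;

function readFrames(states, ring, dest) {
  const writeOffset = Atomics.load(states, WRITE);
  const readOffset = Atomics.load(states, READ);
  if (writeOffset - readOffset < dest.length) {
    // Underrun: the user-defined "wait" is to output silence and
    // return, rather than blocking the audio thread.
    dest.fill(0);
    return false;
  }
  for (let i = 0; i < dest.length; i++) {
    dest[i] = ring[(readOffset + i) % ring.length];
  }
  Atomics.store(states, READ, readOffset + dest.length);
  return true;
}

// Demo, with a plain Float32Array standing in for the shared ring:
const states = new Int32Array(new SharedArrayBuffer(8));
const ring = new Float32Array(8).fill(0.5);
const dest = new Float32Array(4);

const first = readFrames(states, ring, dest);  // nothing written yet
Atomics.store(states, WRITE, 8);               // producer wrote 8 frames
const second = readFrames(states, ring, dest); // now succeeds
console.log(first, second, dest[0]); // false true 0.5
```

Inside a real AudioWorkletProcessor, dest would be the output channel buffer for the current render quantum.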
from web-audio-api-v2.
What capability would defining Atomics.wait() in AudioWorkletGlobalScope provide that is not achievable now?
I'm merely interested in whether it would be conceptually reasonable to have Atomics.wait enabled, but it sounds like it may not be worth it in practice.
AFAICT no tests have been performed which substantiate the claims that defining specific methods, e.g., fetch(), WebAssembly.compileStreaming(), WebAssembly.instantiateStreaming(), et al. in AudioWorkletGlobalScope would have adverse performance impacts. We should at least run tests for all of the methods defined in a Worker (module type and shared) when defined in AudioWorkletGlobalScope, to base conclusions on evidence rather than conjecture about what would occur if the methods and APIs were defined in the Worklet scope of AudioWorklet.
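Such a test could start from a simple micro-benchmark harness; the sketch below is runnable in Node.js, and the measured workload is a placeholder assumption. It only measures raw per-call overhead, not the real question of render-quantum deadlines inside an AudioWorkletGlobalScope.

```javascript
// Minimal micro-benchmark: average wall-clock time per call.
function bench(fn, iterations = 1e5) {
  fn(); // warm up once so lazy initialization is not measured
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  return (performance.now() - start) / iterations; // ms per call
}

// Placeholder workload: an Atomics.load on a shared view, standing
// in for whichever method (fetch, compileStreaming, ...) is under test.
const view = new Int32Array(new SharedArrayBuffer(4));
const msPerCall = bench(() => Atomics.load(view, 0));
console.log(`~${msPerCall.toFixed(6)} ms per call`);
```

A real comparison would run the same workload with and without the extra globals defined and compare against the audio callback budget (128 frames at the context sample rate), which this sketch does not attempt.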
from web-audio-api-v2.