resonance-audio-web-sdk's People

Contributors: bitllama, drewbitllama, jkammerl, lukasdrgon, mgorzel, mrdoob

resonance-audio-web-sdk's Issues

Playing an Ambisonic File with the Resonance Audio Web SDK

Hi,
This is my first time using the Resonance Audio Web SDK and I'm unsure how to play an ambisonic file with it. I followed the tutorial for playing a monophonic file, changed "resonanceAudioScene.output.connect(audioContext.destination)" to "resonanceAudioScene.ambisonicOutput.connect(audioContext.destination)", and then removed all the code for creating an audio input source.

So I removed:
let source = resonanceAudioScene.createSource();
audioElementSource.connect(source.input);
source.setPosition(-0.707, -0.707, 0);

And replaced it with:

audioElementSource.connect(resonanceAudioScene.ambisonicInput); //to add the Ambisonic soundfield.

Am I going about this correctly? Or am I missing something? Thanks for your help.
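For comparison, here is a minimal sketch of one plausible wiring for the ambisonic path described above (a hedged sketch, not an official answer; it assumes an existing ResonanceAudio scene and an audio element holding a first-order, 4-channel ambisonic file, and uses `scene.output` for the binaural render rather than `ambisonicOutput`):

```javascript
// Sketch: feed a B-format file into the scene's ambisonic input and take
// the binaurally decoded result out through scene.output. Names mirror the
// question above; this is an assumption to verify, not documented behavior.
function playAmbisonicFile(audioContext, scene, audioElement) {
  const elementSource = audioContext.createMediaElementSource(audioElement);
  // Bypass createSource(): the soundfield goes straight in...
  elementSource.connect(scene.ambisonicInput);
  // ...and the decoded binaural mix goes to the speakers/headphones.
  scene.output.connect(audioContext.destination);
}
```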

Not working on Chrome

Hi, I tried to run the example in Google Chrome and it's not working: no sound at all.

Does anyone experience the same problem?

module loading

Feature request:
I would like to import resonance-audio as an ES6 module so that I don't have to force users to include it with a script tag in their HTML file.

Thanks for a great library,
- lonce
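For illustration, consumption could look like this if the package shipped an ES build (the module name matches the npm package, but the named `ResonanceAudio` export is an assumption, not the current API):

```javascript
// Hypothetical ES-module usage for the feature request above; the
// named export is an assumption and does not exist in the current build.
async function createScene() {
  const { ResonanceAudio } = await import('resonance-audio');
  const audioContext = new AudioContext();
  return new ResonanceAudio(audioContext);
}
```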

Resonance Reverb does not work in Firefox

Hi All,

This is a more specific issue around reverb not working in Firefox that was brought up in issue #8.

This is the project that I am working on right now that is a WebVR+Resonance scene:
Repo: https://github.com/ianpetrarca/spatialaudio
Live Example: https://ianpetrarca.com/spatialaudio

If you view that example in Chrome+Chrome Canary you can hear the reverb. If you view that example in Firefox there is no reverb in the mix. I want to be clear that there is still spatialization and volume attenuation in Firefox so it seems like this has something to do with the Convolver. It's almost like someone turned the "Wet" reverb mix completely off.

This bug can be reproduced in this demo as well: https://demo.datavized.com/drums/

Specifically, this is the code that I am using to setup/use resonance (Audio.js in Repo):

https://github.com/ianpetrarca/spatialaudio/blob/master/src/audio.js

PannerNode vs. Treasure Hunt

Hi,
I'm new to the Resonance SDK. I want to know whether there is any difference between the PannerNode and the Treasure Hunt example in terms of ambisonic/binaural rendering, apart from the room/hall reverb effects.

CPU benchmark utility hangs

When running the CPU benchmark utility (https://cdn.rawgit.com/resonance-audio/resonance-audio-web-sdk/master/examples/benchmark.html) it hangs.

Steps to Reproduce:

  1. Open https://cdn.rawgit.com/resonance-audio/resonance-audio-web-sdk/master/examples/benchmark.html
  2. Click Begin
  3. It says 'Please Wait' for hours.

This happens in Chrome on OS X, but I believe it's platform agnostic.

Underlying issue:
Looking in the Browser Console (F12) I see the following error:
benchmark.js:119 Uncaught TypeError: audioContext.createMediaElementSource is not a function at startBenchmark (benchmark.js:119)

which points to this code:

let audioContext = new OfflineAudioContext(numberOutputChannels, durationSeconds * sampleRate, sampleRate);
let audioElementSource = audioContext.createMediaElementSource(audioElement);

It seems OfflineAudioContext no longer has the createMediaElementSource function (see WebAudio/web-audio-api#308 and https://developer.mozilla.org/en-US/docs/Web/API/BaseAudioContext). Looks like OfflineAudioContext now inherits from BaseAudioContext which does not have the necessary function.
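A possible workaround, sketched under the assumption that the benchmark only needs the file's samples rather than a live media element: decode the file into an AudioBuffer and play it through createBufferSource(), which OfflineAudioContext does support.

```javascript
// Workaround sketch: OfflineAudioContext lacks createMediaElementSource,
// but it can decode encoded audio and play raw buffers.
async function renderOffline(encodedAudio, numberOutputChannels,
                             durationSeconds, sampleRate) {
  const offlineContext = new OfflineAudioContext(
      numberOutputChannels, durationSeconds * sampleRate, sampleRate);
  // Decode the fetched file (e.g. an ArrayBuffer from fetch()) first.
  const audioBuffer = await offlineContext.decodeAudioData(encodedAudio);
  const bufferSource = offlineContext.createBufferSource();
  bufferSource.buffer = audioBuffer;
  bufferSource.connect(offlineContext.destination);
  bufferSource.start();
  // Resolves with the fully rendered AudioBuffer.
  return offlineContext.startRendering();
}
```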

Audio stuttering/popping with Three.js and First Person camera

Hello,

I'm not sure about the state of this project, but I want to share an issue I'm having with the library, hoping to get insight from someone more experienced or who has already run into this kind of problem.

I've been trying to use 3D positional audio with this library, after trying Omnitone and JSAmbisonics as well, and I always get the same issue: when moving the camera above a certain speed, the audio starts stuttering/popping. With this SDK it's really crackling a lot, especially when getting closer to the object that has an audio sample attached. I reduced the character speed and the camera rotation speed, but it's still very annoying.

Here are the repo (https://github.com/polar0/metaverse) and the demo (https://polar0.github.io/metaverse/) if you want to hear it; you just need to move around for a bit and you will notice it. I did not find anything that could help me in the few other threads from people having the same kind of issue.

In the repo, you will find the 'src/World/audio' path, in which 'main.js' initiates the main audio functions and 'positioned.js' handles the problematic sound spatialization. There is also a 'resonance-audio.js' file, but it's not relevant; I just modified it a bit to be able to access the Omnitone renderer, and the issue was already there before.

I hope we can find some way to get past this bug. Thank you if you took the time to read this and/or to go through the code.

polarzero

Confirming exactly what the "ambisonicOutput" is

Hey there,

I have been playing around with the examples and it seems the "ambisonicOutput" object isn't quite working as I thought it would.

Assuming I'm using first order, when I connect the ambisonicOutput of the resonance object to my audio context destination, I expect 4 channels of B-format to be output to my soundcard (I have a multichannel soundcard).

However, when I do this, I only get 2 channels. Could someone please confirm whether this behavior is expected?

I have tried playing a regular multichannel file from my browser and that works perfectly, so I would expect the raw B-format to reach my soundcard similarly.

Thanks!
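One thing worth checking (a hedged sketch, not a confirmed fix): the AudioContext destination defaults to 2 channels, so multichannel output usually has to be requested explicitly before connecting. Whether the SDK then delivers raw B-format this way is an assumption to verify.

```javascript
// Sketch: ask the destination for 4 discrete channels before connecting
// ambisonicOutput, instead of the 2-channel default.
function requestFourChannelOutput(audioContext, scene) {
  const destination = audioContext.destination;
  if (destination.maxChannelCount < 4) {
    throw new Error('Sound card exposes fewer than 4 output channels');
  }
  destination.channelCount = 4;
  destination.channelCountMode = 'explicit';       // don't downmix
  destination.channelInterpretation = 'discrete';  // keep channels separate
  scene.ambisonicOutput.connect(destination);
}
```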

Near-field effect

For binaurally-rendered content, support near-field modelling of sources.

require Omnitone without relative paths

The only referenced external dependency is Omnitone, which is imported via a relative path pointing to the library under node_modules as:

const Omnitone = require('../node_modules/omnitone/build/omnitone.js');

Since Omnitone is compiled into this library's build file (which is probably fine), anyone using resonance who also might want to use Omnitone would be forced to load two copies of it. One might try to reference the resonance source files, which are included in the npm package, but since Omnitone is included as above, that won't solve the problem.

Please change this to:

const Omnitone = require('omnitone');

Since you're building with webpack, this should allow builds that use the source files to use their own copy of Omnitone, without changing the build output from this repo. It should also allow dependents to use the latest version of Omnitone without having to wait for an upgrade in this repo.

Thanks

Support for surround output.

In addition to binaural and ambisonic output, allow the export of arbitrary speaker layouts, with presets for common layouts such as stereo, 5.1, and 7.1.

Only Left speaker

Hi everyone,
I'm trying the library with very simple examples, but the sound is reproduced only by the left speaker.
(I tried to change the position of the source)
What could be the reason?

thanks

ES5 compiled code to npm

Creating an optimized production build...
Failed to compile.

Failed to minify the code from this file:

./node_modules/resonance-audio/build/resonance-audio.js:544 

Read more here: http://bit.ly/2tRViJ9

error Command failed with exit code 1.

please provide ES5 compiled code to npm

Improve performance

Optimize CPU cost using browser-specific APIs to better support HOA spatial audio.

Directivity patterns, occlusion and reflections

I'm working on writing components for A-Frame that utilize the Resonance Audio Web SDK. I need clarity on these properties as the documentation is rather limited at this point. I don't see that the Web SDK explicitly allows access to occlusion and amount of reflections -- only material coefficients.

I've also played around with the directivity patterns, and it's difficult to debug without any visual cues. My approach has been to move around a visible sound source, change my angle of approach, and listen for changes in the frequency spectrum. The problem is that the subtleties are hard to discern, especially with the interference of the reflections. The forward direction of the sound does not seem to be linked to the forward direction of the object, even though I am using setFromMatrix on the object's Matrix4 object. Increasing sharpness and setting alpha to 0.5 (cardioid) does not produce what I would expect. I'm also not clear on the difference between sharpness and sourceWidth.

It would be great to get more documentation on these properties, clarification on the discrepancies I am experiencing, and if there was a way to visually represent the positioning and shape (at least in the documentation) that would tremendously help my development.

Thank you!

Coordinate System Questions

Hello,
I have been going through the Web SDK documentation and have a question about the coordinate system in Resonance. The documentation says that the room dimensions are 'width, height, and depth' and provides the ability to set the source and listener positions in (x, y, z). However, I can't find any reference (written or visual) to the coordinate system that Resonance uses.

From tinkering, it appears to be a right-handed coordinate system with +x pointing right, +y pointing up, and +z pointing toward the user. This would mean that x maps to moving along the width of the room, y along the height, and z along the depth. Is this the case?
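Under that reading (an inference from experimentation, not official documentation), the mapping can be written down directly as a sanity check:

```javascript
// Assumed right-handed mapping inferred above: +x → right (width),
// +y → up (height), +z → toward the user (depth). Returns the near
// upper-right corner of a room centered at the origin.
function roomCorner(dimensions) {
  return {
    x: dimensions.width / 2,   // half the width to the right
    y: dimensions.height / 2,  // half the height up
    z: dimensions.depth / 2,   // half the depth toward the user
  };
}
```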

Volumetric sources

Incorporate techniques for modelling volumetric sources including directional and positional filtering.

Reverb is always mono on any browser

Hello!
Reverb is mono, even with the audio demos provided in the SDK, such as the room models demo.
It's mono on Electron, Safari, Firefox: everywhere.

Any chance to get this fixed?

Unable to change the position of a room

Hi!

Love Resonance, it works great, but it lacks the ability to move a room in space (or maybe I missed something?). You can move the listener and you can move the source, but not the room itself. :(

Thanks,
Etienne

setRoomProperties throws an error

The error happens in LateReflections.prototype.setDurations.

Per the ConvolverNode spec, you can only set the buffer once. If you try to set it a second time, it throws an InvalidStateError. Firefox ignores this, and it was only recently implemented in Chrome, which is probably why the problem didn't show up earlier.

I believe you'll need to create a new ConvolverNode when you want to change the buffer.
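A sketch of that suggestion (the names are illustrative; the real fix would live inside LateReflections): tear down the old node and splice in a fresh ConvolverNode whose buffer is assigned exactly once.

```javascript
// Illustrative sketch of the suggested fix: never reassign
// convolver.buffer; create a replacement node and reconnect the graph.
function replaceConvolver(audioContext, oldConvolver, inputNode,
                          outputNode, newBuffer) {
  inputNode.disconnect(oldConvolver);
  oldConvolver.disconnect(outputNode);
  const convolver = audioContext.createConvolver();
  convolver.buffer = newBuffer;  // first and only assignment on this node
  inputNode.connect(convolver);
  convolver.connect(outputNode);
  return convolver;
}
```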

L R reverse

When I run "treasure-hunt.n.html", a version of the Treasure Hunt sample modified to load the js files from src (direct_src) directly, the L and R channels sound reversed. Why?
Repository: forked

audioContext.createMediaStreamSource instead of audioContext.createMediaElementSource?

Hi friends at resonance-audio,

Thank you so much for offering such an interesting and useful library! :-)

I've spent the past few days working on this, and I've gotten the following to work:

export function createDraggableDiv(socketId, stream, participantUsername) {
    const video = createVideo(socketId, stream, participantUsername);
    const div = getWrapper(socketId, participantUsername);
    document.getElementById('main').appendChild(div);
    div.append(video);
        
    const soundSource = scene.createSource();
    soundSources[socketId] = soundSource;
    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // TODO: Remove this temporary section about audioElement.
    const audioElement = document.createElement('audio');
    const audioSrc = audioSources.pop();
    audioElement.src = audioSrc;
    audioElement.crossOrigin = 'anonymous';
    audioElement.load();
    audioElement.play();
    audioElement.loop = true;
    div.append(audioElement);
    const mediaElementAudioSourceNode = audioContext.createMediaElementSource(audioElement);
    mediaElementAudioSourceNode.connect(soundSource.input);
    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    // TODO: Uncomment this section.
    // const mediaStreamAudioSourceNode = audioContext.createMediaStreamSource(video.srcObject);
    // mediaStreamAudioSourceNode.connect(soundSource.input);
    // ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
}

When I drag the divs around on my screen, they call soundSource.setPosition(x, y, z); (not shown here), and I hear what I expected in my headphones: the position of the dragged element on the screen determines where in 3D space I hear the sound. This is great!

However, as you can see, what I actually am trying to build instead is a video chat platform that allows spatial audio.

I don't want to be using the wav files in my audioSources array, which I was just doing temporarily to check whether Resonance was working.

What I'm really trying to do is use a mediaStreamAudioSourceNode that comes from my video.srcObject (from the RTCPeerConnection).

How can I use a MediaStream source instead?

I really appreciate your help! :-)

P.S. For context, I want to build a video chat app that:

  • is easy to start just by visiting a website, like Google Meet (I built this already)
  • lets the user drag other participants' videos into other orders/arrangements, like Zoom
  • uses spatial audio, where each user's choices of how to arrange other participants' videos does not affect others' spatial audio. E.g. On a call with 9 participants:
    • Mike (or anyone) can choose to show himself his own selfie view or hide it, and it won't affect the audio that he or anyone else hears.
    • Mike can choose to hide all but 3 of the other participants and arrange those visible 3 in one horizontal row, in which case the audio he hears will be 4 channels: front-left, front-center, front-right, and a catch-all channel for all invisible videos, which maybe I'll orient as coming from behind Mike.
    • Simultaneously, and without affecting Mike's experience or anyone else's, Carol could choose to show all 8 other participants, and she could arrange them in whatever line or grid she wants, and the audio from each will sound to her as if it's coming from the appropriate part of her screen.
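The commented-out path in the snippet above can be sketched like this (a hedged sketch: it assumes `video.srcObject` already carries a live audio track from the RTCPeerConnection, otherwise the node outputs silence):

```javascript
// Sketch of the MediaStream path: swap createMediaElementSource for
// createMediaStreamSource fed by the remote participant's stream.
function connectStreamToScene(audioContext, scene, mediaStream) {
  const streamNode = audioContext.createMediaStreamSource(mediaStream);
  const soundSource = scene.createSource();
  streamNode.connect(soundSource.input);
  return soundSource;  // call soundSource.setPosition(x, y, z) on drag
}
```

In the `createDraggableDiv` function above, this would replace the temporary audio-element section, called as `connectStreamToScene(audioContext, scene, video.srcObject)`.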

Smooth transition when using sliders to set params

When I use a slider to change a value, e.g. via setPosition, there are sometimes clicking sounds, especially when I slide fast.
And I can't use linearRamp since it's not an AudioParam.
How can I resolve this?
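One possible workaround (an assumption, not an SDK feature, precisely because setPosition is not an AudioParam): interpolate toward the slider's target each animation frame, so the position changes in many small steps instead of one jump.

```javascript
// Sketch: per-frame linear interpolation toward a target position.
// `source` is assumed to be a Resonance source with setPosition(x, y, z).
function stepToward(source, current, target, factor) {
  const lerp = (a, b, t) => a + (b - a) * t;
  const next = {
    x: lerp(current.x, target.x, factor),
    y: lerp(current.y, target.y, factor),
    z: lerp(current.z, target.z, factor),
  };
  source.setPosition(next.x, next.y, next.z);
  return next;  // feed back in as `current` on the next animation frame
}
```

Calling this from requestAnimationFrame with a small factor (e.g. 0.1) spreads a fast slider move over many updates, which should soften the clicks.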

Severe bug in ResonanceAudio.setRoomProperties()

Since the recent Chrome 69 update, the buffer property on a ConvolverNode can't be set twice. This is used in LateReflections.setDurations(), which is used by Room.setProperties(), which is used by ResonanceAudio.setRoomProperties().

The error (accompanied by the above stacktrace):

Uncaught (in promise) DOMException: Failed to set the 'buffer' property on 'ConvolverNode': Cannot set buffer to non-null after it has been already been set to a non-null buffer

The development team has several demos listed on the 'getting started' page, which link to the demo page found here. Of those demos, the two that use ResonanceAudio.setRoomProperties() give the mentioned error. The 'Hello World' demo shows the error right after clicking 'Play', while the 'Room Models' demo shows it after playing one of the sounds and then changing the room dimensions or the wall materials (both resulting in a call to ResonanceAudio.setRoomProperties()).

Resonance Audio does not work in Firefox

Google Resonance does not seem to work in Firefox. The official examples page gives this error:

The HTMLMediaElement passed to createMediaElementSource has a cross-origin resource, the node will output silence.

Link: https://cdn.rawgit.com/resonance-audio/resonance-audio-web-sdk/master/examples/index.html

Additionally, I am working on a project right now that uses Resonance with WebVR and there is no reverb output while it's running in Firefox. It runs perfectly in Chrome.

https://spatialsoundbox.firebaseapp.com/

Any thoughts why this might be happening?

Ian

Spatial audio does not work with audio streams

I'm trying to use this as part of a voice chat feature in a game that embeds CEF. However, it doesn't seem to apply any 3D effects to the voice stream. I tested a normal sound (ogg) and it worked as expected: the audio became quieter as the player moved away from the other, and the direction in the speakers changed as well. With an audio stream from a microphone, using window.URL.createObjectURL(stream) to get the source, it plays as if it were a normal sound: it doesn't matter where my player is in the game, the volume and direction stay the same.

I'm curious whether this even works with spatial audio. I tried a different library (Howler.js), but I read that audio streams are not supported by its spatial audio. Is the same true here?
