
Comments (22)

Catsvilles commented on May 25, 2024

@HadrienRivere
Just my two cents here, since I'm following Peter's project and the whole JS audio topic. These are just my experiences starting with this repo as a JS guy who previously had experience with Tone.js / Web Audio, but not much low-level DSP.

  • Tone.js is basically a wrapper around the Web Audio API that simplifies working with it and abstracts a lot of things away. It's quite good for making music on the web with the means of Web Audio: creating compositions from audio samples, scheduling sounds, using its instruments (which are mostly simple oscillators anyway). But it can only do so much. What Peter offers with AssemblyScript is much more powerful and low level, and from my understanding it is not actually bound to the browser's Web Audio API and its quirks (obviously it uses its AudioContext and AudioWorklet to play the sound), but closer to math and classical DSP.

I think when you start making music with "javascriptmusic" (we still need a good official name for this :D) you should NOT try to do things the way you did with Web Audio/Tone.js, but think in terms of classical DSP and how music instruments/plugins actually work. Yes, it involves a bit of math, which may scare regular JS guys like me. Peter clearly has more advanced experience as an audio and software developer overall, so for him the things he does in his code are quite easy and obvious, and he would probably do them the same way in any other language like C/C++, Rust, etc. I think this is important to note: even though Peter uses JS/TypeScript, the code is not really bound to the browser, and those algorithms could be used the same way in any other language/stack, while manipulating audio with Tone.js/Web Audio is limited to the way we do audio in the browser.

I hope I did not confuse anyone; I'm quite a beginner in this field too, and it took me a while to realize how DSP and music programming are actually done. It really helps to look at more low-level projects written in C/C++. You may not know those languages, but it helps you understand why things are done the way they are. It's also really fun to see how people wrote this kind of code in complex low-level languages years ago, while nowadays you can do the same thing in something like JS/TypeScript. WASM is truly AWESOME! :)

petersalomonsen commented on May 25, 2024

thanks @HadrienRivere :-)

The plan is to add more documentation and tutorials, but it's great to get questions so that I get an idea of what is unclear to others.

I don't know Tone.js very well, but after checking it quickly I think the main difference is that in Tone.js you declare the properties of your synth (envelopes, waveforms etc.), while in my project you calculate every signal value that is sent to the audio output. So it's more low level, but it also gives you much more control.

I've tried to make a minimal example below (more explanation follows after it). Try pasting in the following sources:

sequencer pane (editor on the left) - JavaScript:

// tempo beats per minute
global.bpm = 120;

// pattern size exponent (2^4 = 16)
global.pattern_size_shift = 4;

// register an instrument
addInstrument('sinelead', {type: 'note'});

playPatterns({
  // create a pattern with notes to play
  sinelead: pp(1, [c5,d5,e5,f5])
});

synth pane (editor on the right) - AssemblyScript:

import {notefreq, SineOscillator} from './globalimports';

// pattern size exponent (2^4 = 16 steps)
export const PATTERN_SIZE_SHIFT = 4;

// beats per pattern exponent (2^2 = 4 steps per beat)
export const BEATS_PER_PATTERN_SHIFT = 2;

// create a simple oscillator ( sine wave )
let osc: SineOscillator = new SineOscillator();

/**
 * callback from the sequencer, whenever there's a new note to be played
 */
export function setChannelValue(channel: usize, value: f32): void {
  // set the frequency of the oscillator based on the incoming note
  osc.frequency = notefreq(value);
}


/**
 * callback for each sample frame to be rendered
 * this is where the actual sound is generated
 */
export function mixernext(leftSampleBufferPtr: usize, rightSampleBufferPtr: usize): void {  
  // get the next value from the oscillator
  let signal = osc.next();
  // store it in the left channel
  store<f32>(leftSampleBufferPtr, signal);
  // store it in the right channel
  store<f32>(rightSampleBufferPtr, signal);    
}

The key to sound generation is the mixernext function, which is called by the audio renderer for every sample to be sent to the audio output. It expects you to store the signal at the provided addresses for the left and right samples. As you can see, I take the next signal value from the SineOscillator, which calculates its value with a math sine function, and whose frequency is set by the setChannelValue function that the sequencer calls whenever there's a new note to be played.
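
To give an idea of what happens inside such an oscillator, here's a minimal sketch of a sine oscillator in AssemblyScript. This is just an illustration, not the actual SineOscillator in this project, and the sample rate constant is an assumption:

// hypothetical sketch, not the project's actual SineOscillator
const SAMPLE_RATE: f32 = 44100; // assumed sample rate

class SimpleSineOscillator {
  frequency: f32 = 440;
  phase: f32 = 0;

  next(): f32 {
    // advance the phase by one sample and wrap it into [0, 1)
    this.phase += this.frequency / SAMPLE_RATE;
    if (this.phase >= 1) this.phase -= 1;
    // the output signal is the sine of the current phase
    return Mathf.sin(this.phase * 2 * Mathf.PI);
  }
}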

In my more advanced examples I mix more instruments, and also make them richer by mixing (adding) waveforms and applying echo and reverb. So I guess what is different from Tone.js is that in this project you set up the math and calculate every sample for the audio output, rather than just declaring the properties of your sound. Still, I think it can be done quite simply this way, while giving you much more control, and I'm also working on reducing the amount of code required.
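
As a rough illustration of that idea (not code from this project), mixing two oscillators and applying a simple feedback echo could look like this in the same mixernext style; the delay length and feedback amount are arbitrary choices:

import {SineOscillator} from './globalimports';

const DELAY_SAMPLES: i32 = 22050; // 0.5 seconds at a 44100 Hz sample rate
const FEEDBACK: f32 = 0.4;

let osc1 = new SineOscillator();
let osc2 = new SineOscillator();
let delayLine = new StaticArray<f32>(DELAY_SAMPLES);
let delayPos: i32 = 0;

export function mixernext(leftSampleBufferPtr: usize, rightSampleBufferPtr: usize): void {
  // mix ( add ) the two oscillator signals, scaled down to avoid clipping
  let signal: f32 = (osc1.next() + osc2.next()) * 0.5;
  // add back a delayed copy of the signal for a simple echo
  let echoed: f32 = signal + delayLine[delayPos] * FEEDBACK;
  delayLine[delayPos] = echoed;
  delayPos = (delayPos + 1) % DELAY_SAMPLES;
  store<f32>(leftSampleBufferPtr, echoed);
  store<f32>(rightSampleBufferPtr, echoed);
}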

Hope this helps. Let me know if I pointed you in the right/wrong direction here :-)

petersalomonsen commented on May 25, 2024

@Catsvilles thanks :) And BTW: I'm using wasm-music as a working title now (it's the name of the npm package I use for deployment to my website: https://www.npmjs.com/package/wasm-music ). Does wasm-music sound OK? - given that it's pronounced as it should be :)

petersalomonsen commented on May 25, 2024

Thanks for the nice feedback :-)

  1. The best source of time is here: https://github.com/petersalomonsen/javascriptmusic/blob/master/wasmaudioworklet/synth1/assembly/index.ts#L46
    The "tick" is used by the player to look up which pattern to play and the position within the pattern.

The sequencer timing happens inside the WebAssembly synth, based on the actual samples rendered,
unlike my experiments with Node.js, where I use setTimeout to schedule when to play notes and
send MIDI signals to the MIDI synth.

If you want to have a go at this, you'd have to expose the tick through the AudioWorkletProcessor:
send a signal through the message port to the AudioWorkletNode, and then it can be used from a frontend JS API.
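
Here's a minimal sketch of that idea (not actual code from this project; getTick is an assumed wasm export, used only for illustration):

// inside the AudioWorkletProcessor (hypothetical sketch)
class SynthProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    // ... render the wasm synth output into outputs[0] here ...
    // post the current tick to the main thread through the message port
    this.port.postMessage({ tick: this.wasmInstance.exports.getTick() });
    return true;
  }
}
registerProcessor('synth-processor', SynthProcessor);

// on the main thread: receive the tick from the AudioWorkletNode
audioWorkletNode.port.onmessage = (event) => {
  // event.data.tick can now drive a frontend JS API
  console.log('current tick:', event.data.tick);
};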

  2. Audio samples are on my todo list. I'd like to record vocals and acoustic instruments. My plan / idea
    is to have uncompressed sample data in the WebAssembly memory, so it will be easy for a "Sampler"
    instrument to play back sample data, and it can be used in combination with other effects in the synth,
    such as reverb, and even combined with synthesized instruments.

This would have to be done in AssemblyScript, and there would have to be interfaces for filling memory
with sample data. Probably also some functions in JavaScript to decode/decompress audio formats when
importing samples, and functions to store sample data when it's not in use (e.g. to IndexedDB). I'm also
thinking of this in combination with my plan to implement a Git client using the wasm-git project,
where there's also a virtual filesystem backed by IndexedDB.
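
Just to illustrate the idea (this is not code from the project, and the memory layout is an assumption), such a Sampler instrument could be as simple as:

// hypothetical sketch of a Sampler instrument in AssemblyScript
class Sampler {
  sampleData: StaticArray<f32>; // uncompressed sample data, filled from JS
  position: f64 = 0;
  playbackRate: f64 = 1; // > 1 pitches up, < 1 pitches down

  constructor(sampleData: StaticArray<f32>) {
    this.sampleData = sampleData;
  }

  next(): f32 {
    // return silence when the sample has finished playing
    if (this.position >= <f64>this.sampleData.length) return 0;
    let value = this.sampleData[<i32>this.position];
    this.position += this.playbackRate;
    return value;
  }
}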

  3. Yes, I want to simplify the instrument creation part, possibly by making simple macros that
    translate easily to AssemblyScript sources, close to what 4klang does. Here's a script for a
    4klang instrument: https://github.com/petersalomonsen/javascriptmusic/blob/master/4klang/instruments/BA_SawBass.inc

And you can see more examples of how to generate sounds here:

https://www.youtube.com/watch?v=wP__g_9FT4M
https://www.youtube.com/watch?v=1nEcbAgRPtc&t=29s (Amiga klang)

Also, yes, I want more presets. I have some instruments in the sources already, but I think it would be easier
with 4klang-style macros, and it would even be possible to import instruments from 4klang.

  4. Thanks for the link to the wasm port of Yoshimi. I did use Yoshimi (ZynAddSubFX) for my Node.js
    experiments. I actually think the 4klang sequencer is not a good fit for Yoshimi; it works better
    with a MIDI sequencer, so I'd like to port my Node.js MIDI sequencer stuff to the web. Will definitely
    have a go at this!

Naming... yes... not sure what to call it yet. Maybe something in the direction of w-awesome, like pronouncing WASM :)

petersalomonsen commented on May 25, 2024

yoshimi support is here: #8

petersalomonsen commented on May 25, 2024

Regarding note lengths, consider this example:

// SONGMODE=YOSHIMI
setBPM(80);

const lead = createTrack(
  5, // midi channel
  2  // default duration in steps per beat ( lower is longer )
);

await lead.steps(
  4, // track resolution in steps per beat
  [
    d5(1/8),,,, // the first parameter of a note is the note duration
    f5(1/4),,,,
    a5(1),,,,
    f5,,,,
  ]);
loopHere();

  • Yes, I plan to make it possible to combine the AssemblyScript synth and the wasm synth, but the way I currently see it, they will be separate AudioWorklets mixed using the Web Audio context (see the sketch below).
  • There's no downloading of a WASM in Yoshimi mode, and I'm not sure if there will be, since it would also be quite a big WASM. Also, it's based on Emscripten, which does not run in the browser. Instead I'm planning an export-wav function using OfflineAudioContext. Exporting a WASM is more a feature of the AssemblyScript synth, where it's also easier to make compact and small binaries.
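
A minimal sketch of that mixing approach (the module and processor names are made up for illustration):

// hypothetical sketch: two separate AudioWorklets mixed via the Web Audio context
const context = new AudioContext();
await context.audioWorklet.addModule('assemblyscript-synth-processor.js');
await context.audioWorklet.addModule('yoshimi-processor.js');

const asSynth = new AudioWorkletNode(context, 'assemblyscript-synth');
const yoshimiSynth = new AudioWorkletNode(context, 'yoshimi');

// connecting both nodes to the same destination mixes their outputs
asSynth.connect(context.destination);
yoshimiSynth.connect(context.destination);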

petersalomonsen commented on May 25, 2024

demo video: https://youtu.be/HH92wXnP4WU

petersalomonsen commented on May 25, 2024

a little demo:

https://petersalomonsen.github.io/javascriptmusic/wasmaudioworklet/?gist=29d0b6f8e8d3cf3267ae4b7b4ffa49bf

petersalomonsen commented on May 25, 2024

Export wav is implemented now.

Catsvilles commented on May 25, 2024

Does wasm-music sound ok?

Yeah, I think it's a suitable name for this project! :)

petersalomonsen commented on May 25, 2024

Thank you for this @Catsvilles. The padsynth was a really good tip, and I really don't know how I hadn't heard about it before. I've obviously been using it in Yoshimi/ZynAddSubFX, but there I just used the presets.

After studying the algorithm I found that I've done some of the same things when synthesizing pads myself. The concept of spreading out the harmonics is something I've done a lot, but randomizing the phase as done in padsynth improves it a lot, and that's the part I've been missing.

One downside of the padsynth algorithm as it is described is that it requires precalculating the wavetable, which results in quite a delay when changing parameters. So I started exploring an alternative approach that doesn't require any precalculation other than the periodic wave: instead of having the harmonic spread in the IFFT, I play the periodic wave at different speeds. As far as I can see this gives the same result. It costs a little more in real-time calculation, but the startup time is significantly reduced, also because this way I can use a much smaller FFT for the same result.
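
For reference, here's a rough sketch of the core padsynth idea as Paul Nasca describes it: spread each harmonic over neighbouring frequency bins, give every bin a random phase, and turn the spectrum into a wavetable. This is only an illustration; it uses a naive inverse DFT instead of an IFFT, and the bins-per-harmonic value is an arbitrary choice:

function padsynthWavetable(tableSize, harmonicAmplitudes, bandwidth) {
  const amplitude = new Float32Array(tableSize / 2);

  // spread each harmonic over nearby bins with a gaussian profile
  for (let h = 1; h <= harmonicAmplitudes.length; h++) {
    const centerBin = h * 8; // assumed bins per harmonic, illustrative only
    for (let bin = 1; bin < amplitude.length; bin++) {
      const distance = (bin - centerBin) / (bandwidth * h);
      amplitude[bin] += harmonicAmplitudes[h - 1] * Math.exp(-distance * distance);
    }
  }

  // give every bin a random phase and sum the sines (a naive inverse DFT)
  const table = new Float32Array(tableSize);
  for (let bin = 1; bin < amplitude.length; bin++) {
    if (amplitude[bin] < 1e-6) continue; // skip silent bins
    const phase = Math.random() * 2 * Math.PI;
    for (let n = 0; n < tableSize; n++) {
      table[n] += amplitude[bin] * Math.sin((2 * Math.PI * bin * n) / tableSize + phase);
    }
  }
  return table;
}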

My first attempt is a simple organ; check out the video below. I have to play a bit more with this to see if I'm on the right track, but so far it seems OK :)

https://youtu.be/YgLuv1IKMQs

Catsvilles commented on May 25, 2024

@petersalomonsen thank you for the reply! :)

Yes, I want to simplify the instrument creation part, possibly by making simple macros that
translate easily to AssemblyScript sources

My five cents here: since the project is targeting JS devs, I guess it makes sense to make it in JSON format, so users can easily write and save .json files/patches. That should be good UX :)
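
For example, a patch file could look something like this (a purely hypothetical format, just to illustrate the idea):

{
  "name": "saw-bass",
  "oscillators": [
    { "waveform": "sawtooth", "detune": 0.02 }
  ],
  "envelope": { "attack": 0.01, "decay": 0.2, "sustain": 0.6, "release": 0.3 },
  "effects": [
    { "type": "reverb", "mix": 0.25 }
  ]
}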

At the moment I can't decide which I'd like to start working on first: 1. time, duration, scroll, or 4. adding and making music with Yoshimi, asap :) I'll probably work on this and that, and we'll see where it takes me.

so I'd like to port my Node.js MIDI sequencer stuff to the web. Will definitely have a go at this!

Do you have any thoughts on the timeline for when you'll be working on this? Also, I guess it makes sense to use something like https://github.com/jazz-soft/JZZ, as they claim to support both Node.js and the browser?

petersalomonsen commented on May 25, 2024

You just put Yoshimi at the top of my list :) I had to test it yesterday. I managed to build it, and will start looking into integrating it. Those sounds are just amazing!

Catsvilles commented on May 25, 2024

@petersalomonsen Hah! I'm in the same mood, playing with it right now. These pads sound amazing, perfect for my darkish ambient things; I don't understand why it's not much more popular! It's not a problem to play it in the browser with the JZZ MIDI library, but I cannot make it work and output audio in Node.js following your example. I understand now that I could just use the provided glue code and abstract away the AudioWorklet somehow to pipe it to SoX, but meh, I cannot wrap my head around it; I've never worked with .wasm in Node.js before, so I need your help with this! Really hope you will show some examples of how to pipe it to SoX! :)

Catsvilles commented on May 25, 2024

Interestingly, I was also able to play the AudioWorklet browser version directly from the Node.js env by sending MIDI directly to the browser. The JZZ library does provide what it promises; it works the same in both the browser and Node.js! But from what I've noticed overall these days, there is still a big performance difference between playing audio from Node.js piped to SoX and the AudioWorklet: SoX loads my CPU at 20%-25%, while the AudioWorklet version is at 45%-50%.

petersalomonsen commented on May 25, 2024

I'm looking into the pure browser-based approach. I managed to control Yoshimi from JavaScript here (click the play song button):

https://petersalomonsen.github.io/yoshimi/wam/dist/

This is a song I wrote in the Node.js setup, now translated to the web, so I think this will work fine with the live coding environment. I think for the web it's probably better to export wav directly from the web page instead of using SoX.

Catsvilles commented on May 25, 2024

Cool!! I was playing around with it; it sounds very good, but it's kinda hard to grasp how the sequencer works. For example, in the provided song example I could not find a way to hold the pad notes longer than they are now. How could I hold a note for 5 or 15 seconds? I tried to adjust steps per beat, but with a bigger number the notes actually play even faster. I even set the number to -4, and only then did I get my long ambient pads; too bad they seem not to release and play infinitely :D

  • Would it be possible in the future to use WAM synths and the AssemblyScript synthesizer together in the same project? This would give the possibility to choose from a variety of instruments plus external audio samples (once the AssemblyScript synth has those). Also, this way it would be easier to have the same interface for tracking the currentTime, seeking, etc. I understand that the AssemblyScript synth currently has a totally different sequencer, but driving all the synths from one common sequencer would give much more freedom and options.

  • Does song compile actually work in Yoshimi mode? I just get the message in the console, but the .wasm file is not downloading. Well, in any case I suppose this comes later.

Catsvilles commented on May 25, 2024

@petersalomonsen Hey, it's working very well; it's already possible to compose whole songs! Thanks for your previous reply and for explaining things! I was following the whole progress, and it looks like what's left now is to implement .wav rendering and add audio samples, or merge Yoshimi with the AssemblyScript version for those.

Meanwhile, I was working on time, duration, and seeking, and have had a bit of success with the first two in AssemblyScript mode, but I guess I'll wait until you finalize the whole version before I proceed with those. I noticed that logCurrentSongTime() actually calculates the whole duration of the current song, so I guess something like this is needed for OfflineAudioContext in the Yoshimi version. I still have no idea how to proceed with seeking, but I'm really excited to get to it; it's my first experience participating in building this kind of musical software, so I'll be happy to get it done.

  • It's a little bit hard to understand and grasp what we can do in the .xml file. I tried to read more about it in the Yoshimi documentation, but it's still not fully clear whether we can edit/control the whole Yoshimi synth or just separate instruments/presets. I guess you don't have to spend your time answering this here; I just thought it would be cool to mention this and/or link to sources where users can learn more about it in the documentation/readme files in the future.

petersalomonsen commented on May 25, 2024

The xml file is for controlling the whole Yoshimi synth. If you load multiple instruments, you'll see that they all appear in the xml file.

For OfflineAudioContext in the Yoshimi context there's no worry about the timing, as the sequencer player is embedded in the audioworklet. So you just start it, and it will trigger the MIDI events from within the audioworklet processor.

petersalomonsen commented on May 25, 2024

export audio to wav, work in progress:

#15

I've already made an export that was sent to Spotify:

https://twitter.com/salomonsen_p/status/1261404943484772353?s=20

vitalspace commented on May 25, 2024

Hello Peter, excellent project.

I was wondering whether you have it in mind to write documentation for your project? I have spent the last 3 days trying to understand how your code works, and it is still difficult for me to understand how the sound is generated through AssemblyScript. I'm so used to creating synthesizers quickly in Tone.js, but I guess I'm pretty noob for your code >:'c.

Catsvilles commented on May 25, 2024

@petersalomonsen Hey Peter, as promised I started putting together a list of PadSynth implementations for an inspiration, I guess. :) I was pretty sure there are few made with Typescript but unfortunately for some reason I could find only JavaScript ones for now. Instead of creating a new issue here I decided to go with a new repo as already for a long time I wanted to put together list of cool things related to Web Audio :)
https://github.com/Catsvilles/awesome-web-audio-api-projects

Also, I found something in Rust, if this is any help:
https://github.com/nyanpasu64/padsynth

Actually, over the last few months I've been actively getting into SuperCollider (they even have VSTs now), but I miss doing things with JavaScript, so maybe someday I'll come back to exploring this project and trying to synthesize and sequence sounds with JS and AssemblyScript :)
