AMY - the Additive Music synthesizer librarY
License: MIT License
In service of #59, we would need voice parameters in a domain where linear summing naturally supports the desired control.
Specifying frequency in Hertz doesn't satisfy this. Currently, we apply controls like envelopes to frequency parameters by multiplication.
If instead we used a logarithmic frequency scale, we could add an envelope to a base frequency value to get similar control behavior (since addition in a logarithmic domain corresponds to multiplication in the linear domain).
However, we can then also scale the modulating inputs to vary their effect. (This actually corresponds to raising linear envelopes to a power.)
This arrangement mirrors the "one volt per octave" standard used in modular analog synthesizers.
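To make the idea concrete, here is a small sketch (names hypothetical, not AMY's API): on a log2 frequency scale, adding an envelope value transposes by octaves, and scaling the envelope corresponds to raising the linear envelope to a power.

```python
import math

def hz_to_log(freq_hz, ref_hz=440.0):
    """Map Hz to a log2 scale (octaves relative to ref), like volts-per-octave."""
    return math.log2(freq_hz / ref_hz)

def log_to_hz(logf, ref_hz=440.0):
    return ref_hz * (2.0 ** logf)

base = hz_to_log(220.0)  # -1.0: one octave below A440

# Adding +1.0 in the log domain is exactly one octave up (x2 in Hz):
assert abs(log_to_hz(base + 1.0) - 440.0) < 1e-9

# Scaling the modulator by 0.5 halves its effect: a half-octave up (x sqrt 2):
assert abs(log_to_hz(base + 0.5) - 220.0 * math.sqrt(2.0)) < 1e-9
```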
@dpwe is making a lot of great headway converting AMY from floating point to fixed point. The branch is currently here https://github.com/bwhitman/amy/tree/fxp and you can see his readme here https://github.com/bwhitman/amy/blob/fxp/src/amy_fixedpoint.h . I assume (and hope) he'll write a big blog post about it when it's done.
WHY: AMY is fast and efficient but really relies on an FPU to do most of its rendering. This was "fine" for our original targets -- ESP32 and desktop, but we'd like to port it to much more, like the RP2040 (#41) or other Cortex M0 types. Even on MCUs with FPUs, they're not usually as fast as using fixed point types, so we'll hopefully have more headroom on things like Alles and Tulip. We couldn't do more than a few sine waves on an RP2040 before; now we can have hundreds. There's not a lot of downside: just the complexity of doing it and perhaps it's slightly harder code to understand if you're not familiar with this.
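For readers unfamiliar with fixed point, the core trick is to store fractional values as scaled integers so all math is integer math. A minimal Q15-style sketch follows; the fxp branch defines its own formats in src/amy_fixedpoint.h, and nothing below is its actual API.

```python
# Q1.15-style fixed point: 15 fractional bits stored in an integer.
Q = 15
ONE = 1 << Q          # 1.0 in fixed point

def to_fxp(x):
    """Convert a float to the scaled-integer representation."""
    return int(round(x * ONE))

def fxp_mul(a, b):
    # Integer multiply doubles the scaling, so shift back down by Q bits.
    return (a * b) >> Q

half = to_fxp(0.5)
assert fxp_mul(half, half) == to_fxp(0.25)
```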
Here's where we're at, I'll track the merge here as it happens! thanks DAN
TODO
should support
import amy
pcm = amy.render(5) # seconds
amy.send("v0a1l1f440.f")
amy.live() # starts a libsoundio stream
Currently, the envelope generator breakpoint set:
bp0="200,1.0,500,0.3,50,0.0"
means "ramp up to 1.0 in the first 200ms after the note-on (attack), then ramp-down to 0.3 by 500ms after the note-on (decay). When the note ends, ramp down to zero over 50ms (release)."
I keep getting tripped up by the way the end of the decay is specified as ms since the beginning of the note, so it includes the attack time. I think it should be since the beginning of the decay, so the equivalent bp string under the proposed scheme would be:
bp0="200,1.0,300,0.3,50,0.0"
Note that bp strings can have up to 8 segments, and all the times are (currently) relative to the start of the note except the final one (release) which is special. This modification would make all the durations relative to the start of that segment, making the release segment different only in that it doesn't begin until note-end.
Also, we'll have to reprocess the DX7 preset envelopes which, although segment-relative natively, were processed to accumulate the times when being translated by fm.py:355.
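If the semantics change, existing cumulative-time bp strings could be translated mechanically. A hypothetical helper (not part of AMY) sketching the conversion:

```python
def cumulative_to_relative(bp):
    """Convert 'time,level,...' breakpoint times from note-relative
    (cumulative) to segment-relative. The final segment (release) is
    left alone, since it is already relative to note-off."""
    vals = bp.split(',')
    pairs = [(int(vals[i]), vals[i + 1]) for i in range(0, len(vals), 2)]
    out = []
    prev_t = 0
    for i, (t, level) in enumerate(pairs):
        if i < len(pairs) - 1:          # all segments except the release
            out.append((t - prev_t, level))
            prev_t = t
        else:                           # release: keep as-is
            out.append((t, level))
    return ','.join('%d,%s' % (t, lv) for t, lv in out)

assert cumulative_to_relative("200,1.0,500,0.3,50,0.0") == "200,1.0,300,0.3,50,0.0"
```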
On Fedora 37 (now using PulseAudio), with amy-example, there's a noticeable (~3 second) delay between when events are sent and when AMY starts generating audio.
The same behavior is not seen on macOS+coreAudio.
I'll dig into this but wanted to note, because this wasn't expected.
Running make under macOS (x86), I get:
gcc src/algorithms.o src/amy.o src/envelope.o src/delay.o src/filters.o src/oscillators.o src/pcm.o src/partials.o src/libminiaudio-audio.o src/amy-example-esp32.o src/amy-example.o -Wall -lpthread -lm -o amy-example
ld: Undefined symbols:
_AudioComponentFindNext, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioComponentInstanceDispose, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioComponentInstanceNew, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioObjectAddPropertyListener, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioObjectGetPropertyData, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioObjectGetPropertyDataSize, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioObjectRemovePropertyListener, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioObjectSetPropertyData, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioOutputUnitStart, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioOutputUnitStop, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioUnitAddPropertyListener, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioUnitGetProperty, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioUnitGetPropertyInfo, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioUnitInitialize, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioUnitRender, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_AudioUnitSetProperty, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_CFRelease, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
_CFStringGetCString, referenced from:
_ma_context_init__coreaudio in libminiaudio-audio.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [amy-example] Error 1
Probably I need to add Apple-specific frameworks, but I don't know how.
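For reference, miniaudio's Core Audio backend needs Apple frameworks at link time; something along these lines in the Makefile (the LIBS variable name is an assumption about this Makefile) should resolve these symbols:

```make
# macOS only: frameworks used by miniaudio's Core Audio backend
LIBS += -framework CoreFoundation -framework CoreAudio -framework AudioToolbox
```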
I'm using ToneJS in my web app for synths and effects, and it already establishes an AudioContext instance on its own. In order to use its effects I need to have AMY in the same context.
We can import { context } from 'tone', but it conforms to modern web standards, so there's no ScriptProcessorNode present there, not even on context.rawContext!
I can see that it plays a minor role in the whole HTML setup and should be easily replaceable with an AudioWorklet.
I'm still new to these low-level concepts, so I'm just trying to figure out the exact setup. Here's what I have as reference:
Any help with a minimal example of that to be put into the www folder?
This is on Debian bullseye / x86_64; gcc is gcc (Debian 10.2.1-6) 10.2.1 20210110
❯ git clone https://github.com/bwhitman/amy.git
Cloning into 'amy'...
remote: Enumerating objects: 624, done.
remote: Counting objects: 100% (210/210), done.
remote: Compressing objects: 100% (118/118), done.
remote: Total 624 (delta 134), reused 134 (delta 92), pack-reused 414
Receiving objects: 100% (624/624), 21.13 MiB | 19.98 MiB/s, done.
Resolving deltas: 100% (386/386), done.
❯ cd amy
❯ make
gcc -g -Wall -Wno-strict-aliasing -I. -c src/algorithms.c -o src/algorithms.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/amy.c -o src/amy.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/envelope.c -o src/envelope.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/delay.c -o src/delay.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/filters.c -o src/filters.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/oscillators.c -o src/oscillators.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/pcm.c -o src/pcm.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/partials.c -o src/partials.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/libminiaudio-audio.c -o src/libminiaudio-audio.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/amy-example-esp32.c -o src/amy-example-esp32.o
gcc -g -Wall -Wno-strict-aliasing -I. -c src/amy-example.c -o src/amy-example.o
gcc src/algorithms.o src/amy.o src/envelope.o src/delay.o src/filters.o src/oscillators.o src/pcm.o src/partials.o src/libminiaudio-audio.o src/amy-example-esp32.o src/amy-example.o -Wall -lpthread -lm -o amy-example
/usr/bin/ld: src/libminiaudio-audio.o: in function `ma_dlopen':
/home/znmeb/amy/src/miniaudio.h:17734: undefined reference to `dlopen'
/usr/bin/ld: src/libminiaudio-audio.o: in function `ma_dlclose':
/home/znmeb/amy/src/miniaudio.h:17754: undefined reference to `dlclose'
/usr/bin/ld: src/libminiaudio-audio.o: in function `ma_dlsym':
/home/znmeb/amy/src/miniaudio.h:17773: undefined reference to `dlsym'
collect2: error: ld returned 1 exit status
make: *** [Makefile:40: amy-example] Error 1
❯ gcc --version
gcc (Debian 10.2.1-6) 10.2.1 20210110
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
This Stack Overflow answer might be relevant, but I haven't tried it:
https://stackoverflow.com/questions/20369672/undefined-reference-to-dlsym
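That answer's fix likely applies here: on glibc systems, dlopen/dlclose/dlsym live in libdl, so the link line needs it (the LIBS variable name is an assumption about this Makefile):

```make
# Linux/glibc: miniaudio loads its backends with dlopen/dlsym
LIBS += -ldl
```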
Is this by design?
Should I be filtering the noise to a specific frequency I desire?
If I want to use noise as a modulator, what's the best way to achieve this?
From a math point-of-view maybe my question makes no sense.
From an old-synth-nerd point-of-view I think it does.
When I do:
alles.send(osc=0,wave=alles.PCM,vel=0.3,patch=11)
there's a click in the resulting drum sound.
The wave=RANDOM oscillator doesn't use its freq/note argument at present: It always generates full-band white noise.
Recently, when working on the Loris partials, we removed its ability to synthesize "noisy" partials, since it didn't seem that useful.
However, the general idea of an oscillator that generates narrowband noise around a center frequency seems quite useful. We can do this with the white noise source and a bandpass VCF to set the band. But that's a clumsy control interface if we want to, for instance, carry a melody on the noise.
A new oscillator mode to generate narrowband noise without a filter could be useful. We could use the freq arg to set the center frequency, then a second arg (filter_freq?) to define the bandwidth.
Inspired by the former Loris partial implementation, we could have a lookup table of band-limited noise, use the bandwidth parameter to control the lookup table playback (setting the bandwidth of the noise), then multiply it by a sinusoid at the note frequency (to shift the center frequency of the noise).
The weakness here is that periodic noise (as you'd get with a short lookup table) is typically perceived as pitched rather than noisy. We'd need a large table to avoid this.
On the other hand, we can sample from a very long implicit random sequence using a random function. However, the value of the lookup table is that it has the lowpass nature built-in (making cheap linear interpolation adequate). If we generated noise on-the-fly, we'd need to use more expensive interpolation to get good frequency behavior, but we could optimize that for a single frequency, then treat its output as if it was a large bandlimited noise lookup table.
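The table-playback-plus-ring-mod idea above can be sketched as follows. This is purely illustrative, not AMY code: the white-noise table stands in for a properly bandlimited one, and the bandwidth-to-read-rate mapping is an assumption.

```python
import math
import random

SR = 44100
random.seed(0)
# Stand-in for a bandlimited noise table; a real table would be lowpass-
# filtered so that cheap linear interpolation between entries is accurate.
TABLE = [random.uniform(-1.0, 1.0) for _ in range(4096)]

def narrowband_noise(n, center_hz, bandwidth_hz, sr=SR):
    """Read the noise table slowly (read rate sets the bandwidth), then
    ring-modulate by a sinusoid at center_hz to shift the band up to the
    note frequency."""
    step = bandwidth_hz / (sr / 2.0)   # table samples per output sample
    out = []
    phase = 0.0
    for i in range(n):
        idx = int(phase)
        frac = phase - idx
        a = TABLE[idx % len(TABLE)]
        b = TABLE[(idx + 1) % len(TABLE)]
        noise = a + frac * (b - a)                    # linear interpolation
        carrier = math.sin(2.0 * math.pi * center_hz * i / sr)
        out.append(noise * carrier)                   # shift to center_hz
        phase += step
    return out

y = narrowband_noise(1024, center_hz=440.0, bandwidth_hz=100.0)
assert len(y) == 1024 and max(abs(v) for v in y) <= 1.0
```

The periodicity concern from above applies directly: with a 4096-entry table and a slow read rate, the noise pattern repeats quickly and starts to sound pitched, which is why a long table (or an on-the-fly generator treated as one) would be needed.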
using amy-example-esp32.c
Hi,
I just found this library and it looks quite promising indeed. Since the Raspberry Pi Pico / RP2040 is my current tinkering platform of choice, I was wondering whether support for it has been considered and/or investigated yet (read: including running OS-less, with fundamentals based on the Pico SDK, perhaps including support for running on the second core, etc.)? (As I just found the library, I don't quite know where to start digging myself yet ;) )
https://www.raspberrypi.com/products/raspberry-pi-pico/
https://datasheets.raspberrypi.com/pico/raspberry-pi-pico-c-sdk.pdf
https://github.com/raspberrypi/pico-sdk
Thanks,
BR//Karl (@xoblite)
Looking at tests/ref/TestPWM.wav, there's a big discontinuity in the DC offset at t=0.325 sec.
This looks a lot like similar problems we had at one point in the SAW waveforms resulting from the per-block offset calculation.
I'd forgotten to put this in the previous PR, but when building for RPi (and apparently other 32-bit/ARM/Linux systems: raysan5/raylib#2452), libatomic is needed. Also, for reasons I didn't investigate, libdl is needed too.
The fix is to add -ldl -latomic to LIBS in the Makefile.
I lightly tested this on a RPi3 last night.
Simple fix, and I can PR this if you need.
amy.send(osc=1, wave=amy.SAW_DOWN, freq=0.5, amp=0.75)
amy.send(osc=0, wave=amy.PULSE, duty=0.5, freq=220, mod_source=1, mod_target=amy.TARGET_DUTY)
amy.send(osc=0, vel=0.5)
Track work to get Tulip AMY back in here
Via my Fender phone tuning app:
amy-message with v0w0n69l1 on Linux + miniaudio shows 440.1 Hz
amy.wasm (via a web app I've yet to generate a PR for) with v0w0n69l1 on Linux + Firefox 110.1 shows 479.0 Hz
I'll hook up a scope to get more accurate numbers, and can test on other browsers and platforms for metrics.
If this is an unavoidable consequence of using amy.wasm, might we introduce a global tuning parameter akin to the global volume setting?
Happy to keep helping, as I'm quite intrigued with AMY.
Still investigating, but this sequence is 100% repeatable:
$ ./amy-message -d 1
# # amy-message AMY playground -> https://octetta.com
# - uses AMY audio synthesizer library -> https://github.com/bwhitman/amy
# - uses miniaudio v0.11.11 audio playback library -> https://miniaud.io
# - uses bestline history and editing library -> https://github.com/jart/bestline
# OSCS=64
# SAMPLE_RATE=44100
# load history from amy-message-history.txt
# v0w7 ## set osc0 to PCM
# v0p22 ## choose SynthVz patch
# v0b1 ## enable PCM looping
# v0l1 ## trigger with velocity 1
# v0w1 ## change osc0 to pulse wave
# Segmentation fault (core dumped)
For those watching, this can be avoided by resetting osc0 via S0 before changing the oscillator's wave type.
I'll dig into how to prevent this in the codebase, but pointing it out until then.
For the purpose of keeping the 22050 Hz samples + adding 44100 Hz samples in the future.
Makefile should generate a shared library and link an example C program that generates a sine or FM wave and exits.
Adding the simple tests in test.py allows inspection of the simple waveform outputs in tests/ref/*.wav.
Looking, for instance, at TestSineOsc.wav, we see the 6ms onset ramp (1 block) we expect.
But the offset is immediate (causing a click). It should probably be a one-frame ramp-down.
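The fix could mirror the onset ramp. A sketch of a single-block linear fade at note-off (the 256-sample block length is just an assumption for illustration):

```python
def fade_out(samples, block=256):
    """Apply a linear ramp-down over the last `block` samples of a note's
    output so the offset doesn't produce a click."""
    n = min(block, len(samples))
    for i in range(n):
        samples[len(samples) - n + i] *= 1.0 - (i + 1) / n
    return samples

buf = [1.0] * 512
fade_out(buf)
assert buf[-1] == 0.0     # fully silent at the end
assert buf[255] == 1.0    # untouched before the final block
```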
Building AMY on Fedora Linux 37 yields the following errors:
gcc src/amy-example.o src/algorithms.o src/amy.o src/envelope.o src/filters.o src/oscillators.o src/pcm.o src/partials.o src/libsoundio-audio.o src/amy-example-esp32.o -Wall -lpthread -lsoundio -lm -o amy-example
/usr/bin/ld: src/libsoundio-audio.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:9: multiple definition of `amy_channel'; src/amy-example.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:9: first defined here
/usr/bin/ld: src/libsoundio-audio.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:10: multiple definition of `amy_device_id'; src/amy-example.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:10: first defined here
/usr/bin/ld: src/libsoundio-audio.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:11: multiple definition of `amy_running'; src/amy-example.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:11: first defined here
collect2: error: ld returned 1 exit status
make: *** [Makefile:38: amy-example] Error 1
I can change these three variables to extern in libsoundio-audio.h and the code compiles, but I'm worried this isn't the original intention and might lead to other problems as I work on getting this running on my Linux system.
Thoughts?
This is more of a note for me to do for mad-scientist reasons.
Please nuke this if it's too far from the goals here and I'll fork off :).
The new fixed-point AMY doesn't seem to be as loud on the Alles ESP32 speakers as it used to be. And setting volume to usually-OK amounts (3-5) now clips during alles.drums().
The web example basically crashes (though AMY still responds to debug messages) if you send an l0 to certain FM patches; #0 is one. But it only does this when running on the web from a remote server (locally hosting the JS/WASM works fine!). I suspect some memory corruption, but valgrind came back clean.
I was able to boot AMY on a Teensy 3.2, which has 64KB RAM. We say it needs "around 100KB" on the webpage, but I bet if you decrease AMY_OSCS to something like 8 or 16 you could make it much smaller. Let's find out.
AMY_IS_SET uses isnan, which requires a floating-point argument (at least on Ubuntu 20.04 with gcc). algo_source seems to be an integer, causing a failure.
I suspect this is because of the change to using NAN as a flag instead of -1?
Will tinker with this later, but wanted to point it out.
src/algorithms.c: In function ‘algo_note_off’:
src/algorithms.c:143:12: error: non-floating-point argument in call to function ‘__builtin_isnan’
143 | if(AMY_IS_SET(synth[osc].algo_source[i])) {
| ^~~~~~~~~~
src/algorithms.c: In function ‘algo_note_on’:
src/algorithms.c:232:8: error: non-floating-point argument in call to function ‘__builtin_isnan’
232 | if(AMY_IS_SET(synth[osc].patch)) {
| ^~~~~~~~~~
src/algorithms.c:236:12: error: non-floating-point argument in call to function ‘__builtin_isnan’
236 | if(AMY_IS_SET(synth[osc].algo_source[i])) {
| ^~~~~~~~~~
src/algorithms.c: In function ‘render_algo’:
src/algorithms.c:264:12: error: non-floating-point argument in call to function ‘__builtin_isnan’
264 | if(AMY_IS_SET(synth[osc].algo_source[op]) && synth[synth[osc].algo_source[op]].status == IS_ALGO_SOURCE) {
| ^~~~~~~~~~
make: *** [Makefile:39: src/algorithms.o] Error 1
In [44]: amy.reset()
In [45]: amy.send(wave=amy.SINE,ratio=0.2,amp=0.1,osc=0,bp0_target=amy.TARGET_AMP,bp0="1000,0,0,0")
...:
In [46]: amy.send(wave=amy.SINE,ratio=1,amp=1,osc=1)
...:
In [47]: amy.send(wave=amy.ALGO,algorithm=0,algo_source="-1,-1,-1,-1,1,0",osc=2)
...:
In [48]: amy.send(osc=2, note=60, vel=3)
...:
In [49]: zsh: segmentation fault ipython
Update the loris tar to a modern python3 install so that it's easier to bundle with AMY.
Hi! I'm trying out AMY as a synth for my music learning platform. I successfully fitted the wasm into my Vue setup, but a weird glitch appears: after playing somewhere more than 20 notes, the sound stops and I get this error:
Uncaught TypeError: attempting to access detached ArrayBuffer
audioCallback amy.vue:56
onaudioprocess amy.vue:89
setupAudio amy.vue:88
startAudio amy.vue:124
piano_down amy.vue:144
setup amy.vue:155
listener index.mjs:241
Tested in Firefox and Chrome. You can try it here:
https://chromatone.center/practice/experiments/amy/
The code is a slight modification of the example www/amy.js
code in a Vue 3 component. Check it here:
https://github.com/chromatone/chromatone.center/blob/master/content/practice/experiments/amy/amy.vue
I'm a JS dev and have little experience with wasm, so I just don't know how to debug this further. Any ideas? Might this be helpful?
Reports of it not working if the wasm hasn't loaded yet; also glitches during UI updates on phones, etc. Would love some help modernizing the example and adding more tests for people to try!
I don't know what's happening yet, but applying both amplitude and filter-freq envelopes leads to a floating point explosion:
amy.send(osc=0, wave=amy.SAW_DOWN, filter_type=amy.FILTER_LPF, resonance=0.7, filter_freq=4500, bp1_target=amy.TARGET_FILTER_FREQ, bp1='0,0.1,150,1.0,1000,0.4,100,0.1', bp0_target=amy.TARGET_AMP, bp0='0,0,60,1.0,500,0.5,100,0')
amy.send(osc=0, note=64, vel=1)
Weirdly, the filter without the amplitude scaling is fine (the filter is the most likely culprit when things go unstable). The amplitude envelope without the filter is fine too, of course.
The problem seems to wait until the amplitude envelope hits the sustain phase, which makes #552554a suspicious.
Just brainstorming features I'd like and can work on.
In the short-term, this is most relevant to me for the PCM wave-type.
Would it be better to generate a separate *_lutset.h file for each? The first waveforms I'm thinking about are inspired by the Korg DW8000 and Ensoniq ESQ-1.
(Korg's might be trickier, as there are different resolution waveforms per octave.)
After making the changes mentioned in #18, amy-example emits the message "No suitable device format available." regardless of the sound device I specify.
Can AMY use an output format other than SoundIoFormatS16NE?
I've added this locally and it's nice for testing at the shell.
Is there interest in this for AMY's core or should I fork?
Currently, the way that note, envelope, and lfo inputs affect pitch, envelope, and filter cutoff etc. is fairly complex and irregular.
In the spirit of the voltage-summing nodes of analog synths, I want to introduce a fully orthogonal structure, where each voice parameter is calculated as the sum of the same set of control inputs via a matrix of scale coefficients.
For example, instead of filter_freq=1000 setting the cutoff to a fixed value, followed by bp0_target=FILTER_FREQ and a bp0 envelope to get a sweep, you would write something like:
filter_freq=1000.0,1.0,0,0,0
where the vector of coefficients gives the weights for a fixed set of control inputs that are summed together.
The first coefficient is always taken as-is, providing a constant starting point; the remainder apply to inputs whose values vary, defined in some fixed order. In the example above, the second value applies to bp0, but we would also include note value (pitch), note velocity, LFO, etc.
Voice parameters include oscillator frequency, output level, filter frequency, PWM duty, and stereo pan.
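The evaluation itself is just a dot product. A sketch (all names hypothetical, and summing is shown in the parameter's native units; per the log-frequency proposal, frequency-like parameters would sum in a log domain):

```python
def eval_param(coeffs, controls):
    """coeffs[0] is a constant offset; coeffs[1:] weight the current control
    inputs (e.g. bp0 value, note pitch, velocity, lfo) in a fixed order."""
    inputs = [1.0] + list(controls)
    return sum(c * x for c, x in zip(coeffs, inputs))

# filter_freq=1000.0,1.0,0,0,0 with bp0 currently at 0.5:
# constant 1000 plus 1.0 * bp0; pitch/velocity/lfo weights are zero.
assert eval_param([1000.0, 1.0, 0, 0, 0], [0.5, 60, 0.8, 0.0]) == 1000.5
```

One appeal of this structure is that every parameter and every control input goes through the same code path, so adding a new modulation route is just setting a coefficient rather than adding a special case.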