
amy's People

Contributors

bwhitman, dpwe, erkkah, octetta


amy's Issues

Logarithmic frequency axis

In service of #59, we would need voice parameters in a domain where linear summing naturally supports the desired control.

Specifying frequency in Hertz doesn't satisfy this. Currently, we apply controls like envelopes to frequency parameters by multiplication.

If instead we used a logarithmic frequency scale, we could add an envelope to a base frequency value to get similar control behavior (since addition in a logarithmic domain corresponds to multiplication in the linear domain).

However, we can then also scale the modulating inputs to vary their effect. (This actually corresponds to raising linear envelopes to a power.)

This arrangement mirrors the "one volt per octave" standard used in modular analog synthesizers.
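To make this concrete, here's a small Python sketch (hypothetical helper names, not AMY's API) showing how addition in a log-frequency domain corresponds to multiplication in Hertz:

```python
import math

def hz_to_logfreq(hz):
    # Octaves above 1 Hz; the reference pitch is an arbitrary choice.
    return math.log2(hz)

def logfreq_to_hz(logfreq):
    return 2.0 ** logfreq

base = hz_to_logfreq(440.0)   # A4 in the log domain
semitone = 1.0 / 12.0         # one semitone = 1/12 octave

# Adding in the log domain multiplies in the linear domain:
up = logfreq_to_hz(base + semitone)            # 440 * 2**(1/12) ~= 466.16 Hz

# Scaling the modulation input scales its effect in octaves,
# which corresponds to raising the linear factor to a power:
half_up = logfreq_to_hz(base + 0.5 * semitone) # a quarter-tone up
```

This is exactly the behavior the one-volt-per-octave convention gives an analog VCO.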

Fixed-point AMY

@dpwe is making a lot of great headway converting AMY from floating point to fixed point. The branch is at https://github.com/bwhitman/amy/tree/fxp and you can see his readme at https://github.com/bwhitman/amy/blob/fxp/src/amy_fixedpoint.h. I assume (and hope) he'll write a big blog post about it when it's done.

WHY: AMY is fast and efficient but relies heavily on an FPU for most of its rendering. That was "fine" for our original targets, ESP32 and desktop, but we'd like to port it to much more, like the RP2040 (#41) or other Cortex-M0 types. Even on MCUs with FPUs, floating point isn't usually as fast as fixed-point types, so we'll hopefully have more headroom on things like Alles and Tulip. We couldn't do more than a few sine waves on an RP2040 before; now we can have hundreds. There's not much downside: just the complexity of doing it, and perhaps the code is slightly harder to understand if you're not familiar with fixed point.
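For anyone unfamiliar with the core trick, here's a toy Q15 sketch in Python (illustration only; AMY's real formats are defined in src/amy_fixedpoint.h):

```python
# Toy Q15 fixed point: values in [-1, 1) stored as integers scaled by 2**15.
Q = 15

def to_fx(x):
    # float -> fixed
    return int(round(x * (1 << Q)))

def to_float(a):
    # fixed -> float
    return a / (1 << Q)

def fx_mul(a, b):
    # Multiplication needs a double-width intermediate, then a shift back down.
    return (a * b) >> Q

print(to_float(fx_mul(to_fx(0.5), to_fx(0.25))))  # 0.125
```

All the rendering math becomes integer multiplies and shifts, which is why it runs so much faster on FPU-less cores.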

Here's where we're at; I'll track the merge here as it happens! Thanks, Dan!

TODO

  • Base oscillators
  • FM
  • Envelopes
  • Filters
  • EQ
  • Pan
  • Reverb
  • Chorus
  • Partials
  • Karplus-Strong
  • PCM
  • Make RP2040 build multi-core like ESP
  • Port back into Alles
  • Port back into Tulip

python library

should support:

```python
import amy
pcm = amy.render(5)  # seconds
amy.send("v0a1l1f440.f")
amy.live()  # starts a libsoundio stream
```

Make all envelope generator durations be relative to segment start

Currently, the envelope generator breakpoint set:
bp0="200,1.0,500,0.3,50,0.0"
means "ramp up to 1.0 in the first 200ms after the note-on (attack), then ramp down to 0.3 by 500ms after the note-on (decay). When the note ends, ramp down to zero over 50ms (release)."

I keep getting tripped up by the way the end of the decay is specified as ms since the beginning of the note, so it includes the attack time. I think it should be measured from the beginning of the decay, so the equivalent bp string under the proposed scheme would be:
bp0="200,1.0,300,0.3,50,0.0"

Note that bp strings can have up to 8 segments, and all the times are (currently) relative to the start of the note except the final one (release) which is special. This modification would make all the durations relative to the start of that segment, making the release segment different only in that it doesn't begin until note-end.

Also, we'll have to reprocess the DX7 preset envelopes which, although natively segment-relative, had their times accumulated when being translated by fm.py:355.
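To illustrate, here's a small Python sketch (not part of AMY) of the conversion the proposal implies: every time except the final release segment becomes a duration relative to the previous breakpoint:

```python
def cumulative_to_relative(bp):
    """Convert a bp string with note-relative times into segment-relative
    times. The final (release) segment is already a duration, so it is
    left alone."""
    fields = bp.split(',')
    pairs = [(int(fields[i]), fields[i + 1]) for i in range(0, len(fields), 2)]
    out = []
    prev = 0
    for i, (t, level) in enumerate(pairs):
        if i < len(pairs) - 1:      # all but the release segment
            out.append((t - prev, level))
            prev = t
        else:                       # release: already relative to note-end
            out.append((t, level))
    return ','.join(f"{t},{l}" for t, l in out)

print(cumulative_to_relative("200,1.0,500,0.3,50,0.0"))  # 200,1.0,300,0.3,50,0.0
```

The inverse of this (relative to cumulative) is roughly what the fm.py translation step does today.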

Unexpected delay between event and audio.

On Fedora 37 (now using PulseAudio), with amy-example, there's a noticeable (~3 second) delay between when events are sent and when AMY starts generating audio.

The same behavior is not seen on macOS+coreAudio.

I'll dig into this, but wanted to note it because it wasn't expected.

Undefined symbols on _AudioComponentFindNext

Running make under macOS (x86), I get

```
gcc  src/algorithms.o  src/amy.o  src/envelope.o  src/delay.o  src/filters.o  src/oscillators.o  src/pcm.o  src/partials.o  src/libminiaudio-audio.o  src/amy-example-esp32.o src/amy-example.o -Wall -lpthread  -lm  -o amy-example
ld: Undefined symbols:
  _AudioComponentFindNext, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioComponentInstanceDispose, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioComponentInstanceNew, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioObjectAddPropertyListener, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioObjectGetPropertyData, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioObjectGetPropertyDataSize, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioObjectRemovePropertyListener, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioObjectSetPropertyData, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioOutputUnitStart, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioOutputUnitStop, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioUnitAddPropertyListener, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioUnitGetProperty, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioUnitGetPropertyInfo, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioUnitInitialize, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioUnitRender, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _AudioUnitSetProperty, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _CFRelease, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
  _CFStringGetCString, referenced from:
      _ma_context_init__coreaudio in libminiaudio-audio.o
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [amy-example] Error 1
```

Probably I need to add Apple-specific frameworks, but I don't know how.
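I haven't verified this on the reporter's machine, but miniaudio's Core Audio backend generally needs the Apple frameworks on the link line, so a likely fix looks something like:

```shell
# Untested guess: add the Apple frameworks to the final link step
# (or to LIBS in the Makefile when building on macOS).
gcc src/*.o -Wall -lpthread -lm \
    -framework CoreFoundation -framework CoreAudio \
    -framework AudioUnit -framework AudioToolbox \
    -o amy-example
```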

ScriptProcessorNode is deprecated, need help with an AudioWorklet example

I'm using ToneJS in my web app for synths and effects, and it already establishes an AudioContext instance on its own. In order to use its effects I need to have AMY in the same context.

We can import { context } from 'tone', but it's fitted to modern web standards and there's no ScriptProcessorNode present there, not even via context.rawContext!

I can see that ScriptProcessorNode plays a minor role in the whole HTML setup and should be easily replaceable with an AudioWorklet.

I'm still new to these low-level concepts, so I'm still trying to figure out the exact setup. Here's what I have as reference:

Could anyone help with a minimal AudioWorklet example to put into the www folder?

`make` can't find `dlopen`, `dlclose` and `dlsym`

This is on Debian bullseye / x86_64; gcc is gcc (Debian 10.2.1-6) 10.2.1 20210110.

```
❯ git clone https://github.com/bwhitman/amy.git
Cloning into 'amy'...
remote: Enumerating objects: 624, done.
remote: Counting objects: 100% (210/210), done.
remote: Compressing objects: 100% (118/118), done.
remote: Total 624 (delta 134), reused 134 (delta 92), pack-reused 414
Receiving objects: 100% (624/624), 21.13 MiB | 19.98 MiB/s, done.
Resolving deltas: 100% (386/386), done.
❯ cd amy
❯ make
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/algorithms.c -o src/algorithms.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/amy.c -o src/amy.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/envelope.c -o src/envelope.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/delay.c -o src/delay.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/filters.c -o src/filters.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/oscillators.c -o src/oscillators.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/pcm.c -o src/pcm.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/partials.c -o src/partials.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/libminiaudio-audio.c -o src/libminiaudio-audio.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/amy-example-esp32.c -o src/amy-example-esp32.o
gcc -g -Wall -Wno-strict-aliasing  -I. -c src/amy-example.c -o src/amy-example.o
gcc  src/algorithms.o  src/amy.o  src/envelope.o  src/delay.o  src/filters.o  src/oscillators.o  src/pcm.o  src/partials.o  src/libminiaudio-audio.o  src/amy-example-esp32.o src/amy-example.o -Wall -lpthread  -lm -o amy-example
/usr/bin/ld: src/libminiaudio-audio.o: in function `ma_dlopen':
/home/znmeb/amy/src/miniaudio.h:17734: undefined reference to `dlopen'
/usr/bin/ld: src/libminiaudio-audio.o: in function `ma_dlclose':
/home/znmeb/amy/src/miniaudio.h:17754: undefined reference to `dlclose'
/usr/bin/ld: src/libminiaudio-audio.o: in function `ma_dlsym':
/home/znmeb/amy/src/miniaudio.h:17773: undefined reference to `dlsym'
collect2: error: ld returned 1 exit status
make: *** [Makefile:40: amy-example] Error 1

❯ gcc --version
gcc (Debian 10.2.1-6) 10.2.1 20210110
Copyright (C) 2020 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
```

This Stack Overflow question might be relevant, but I haven't tried it:

https://stackoverflow.com/questions/20369672/undefined-reference-to-dlsym
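The usual cause: on glibc versions before 2.34, dlopen/dlclose/dlsym live in a separate libdl library, so the link line needs -ldl. A likely (untested here) fix:

```shell
# Add -ldl to the final link (or to LIBS in the Makefile):
gcc src/*.o -Wall -lpthread -lm -ldl -o amy-example
```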

Noise waveform ignores frequency parameter

Is this by design?

Should I be filtering the noise to a specific frequency I desire?

If I want to use noise as a modulator, what's the best way to achieve this?

From a math point of view, maybe my question makes no sense.

From an old-synth-nerd point of view, I think it does.

Patch 11 has a click

when I do:

```python
alles.send(osc=0, wave=alles.PCM, vel=0.3, patch=11)
```

there's a click in the resulting drum sound.

Pitched noise oscillator

The wave=RANDOM oscillator doesn't use its freq/note argument at present: It always generates full-band white noise.

Recently, when working on the Loris partials, we removed its ability to synthesize "noisy" partials, since it didn't seem that useful.

However, the general idea of an oscillator that generates narrowband noise around a center frequency seems quite useful. We can do this with the white noise source and a bandpass VCF to set the band. But that's a clumsy control interface if we want to, for instance, carry a melody on the noise.

A new oscillator mode to generate narrowband noise without a filter could be useful. We could use the freq arg to set the center frequency, then a second arg (filter_freq?) to define the bandwidth.

Inspired by the former Loris partial implementation, we could have a lookup table of band-limited noise, use the bandwidth parameter to control the lookup table playback (setting the bandwidth of the noise), then multiply it by a sinusoid at the note frequency (to shift the center frequency of the noise).

The weakness here is that periodic noise (as you'd get with a short lookup table) is typically perceived as pitched rather than noisy. We'd need a large table to avoid this.

On the other hand, we can sample from a very long implicit random sequence using a random function. However, the value of the lookup table is that it has the lowpass nature built in (making cheap linear interpolation adequate). If we generated noise on the fly, we'd need more expensive interpolation to get good frequency behavior, but we could optimize that for a single frequency, then treat its output as if it were a large bandlimited noise lookup table.
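To make the shift-by-multiplication idea concrete, here's a toy Python sketch (invented names, not AMY code): a one-pole lowpass on white noise sets the bandwidth, and multiplying by a sinusoid shifts that baseband noise up to the center frequency:

```python
import math
import random

SR = 44100  # sample rate, Hz

def narrowband_noise(center_hz, bandwidth_hz, n_samples, seed=0):
    """Toy narrowband noise: one-pole lowpassed white noise (sets the
    bandwidth) ring-modulated by a sine (sets the center frequency)."""
    rng = random.Random(seed)
    # One-pole lowpass coefficient derived from the desired bandwidth.
    a = math.exp(-2.0 * math.pi * bandwidth_hz / SR)
    lp = 0.0
    out = []
    for i in range(n_samples):
        white = rng.uniform(-1.0, 1.0)
        lp = (1.0 - a) * white + a * lp   # baseband noise, width ~ bandwidth_hz
        carrier = math.sin(2.0 * math.pi * center_hz * i / SR)
        out.append(lp * carrier)          # multiply shifts noise up to center_hz
    return out

sig = narrowband_noise(440.0, 50.0, 1024)
```

In AMY terms, the lowpassed noise plays the role of the bandlimited lookup table and the carrier would run at the osc's freq/note.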

clean up oscillators.c

  • fix up render_lut to use new dan-terminology for variables
  • use dan LUT code for modulation saw, triangle wave
  • remove mod_sine

Raspberry Pi Pico / RP2040 support?

Hi,

I just found this library and it looks quite promising indeed. Since the Raspberry Pi Pico / RP2040 is my current tinkering platform of choice, I was wondering whether support for it (read: including running OS-less, with fundamentals based on the Pico SDK, perhaps including support for running on the second core, etc.) has been considered and/or investigated yet? (As I just found the library, I don't quite know where to start digging myself yet. ;) )

https://www.raspberrypi.com/products/raspberry-pi-pico/
https://datasheets.raspberrypi.com/pico/raspberry-pi-pico-c-sdk.pdf
https://github.com/raspberrypi/pico-sdk

Thanks,

BR//Karl (@xoblite)

Tests: Discontinuity in PWM

Looking at tests/ref/TestPWM.wav, there's a big discontinuity in the DC offset at t=0.325 sec.

This looks a lot like similar problems we had at one point in the SAW waveforms resulting from the per-block offset calculation.

Building on RPi/ARM with miniaudio results in errors

I'd forgotten to put this in the previous PR, but when building for RPi (and apparently other 32-bit/ARM/Linux systems: raysan5/raylib#2452), libatomic is needed.

Also for reasons that I didn't investigate, libdl is needed too.

The fix is to add -ldl -latomic to LIBS in the Makefile.

I lightly tested this on an RPi3 last night.

Simple fix, and I can PR this if you need.

new click on duty breakpoint

```python
amy.send(osc=1, wave=amy.SAW_DOWN, freq=0.5, amp=0.75)
amy.send(osc=0, wave=amy.PULSE, duty=0.5, freq=220, mod_source=1, mod_target=amy.TARGET_DUTY)
amy.send(osc=0, vel=0.5)
```

Linux audio and WASM audio output frequencies are out-of-tune

Via my Fender phone tuning app:

  • amy-message with v0w0n69l1 on Linux + miniaudio shows 440.1 Hz
  • amy.wasm (via a web app I've yet to generate a PR for) with v0w0n69l1 on Linux + FireFox 110.1 shows 479.0 Hz

I'll hook up a scope to get more accurate numbers, and can test on other browsers and platforms for metrics.

If this is an unavoidable consequence of using amy.wasm, might we introduce a global tuning parameter akin to the global volume setting?

Happy to keep helping, as I'm quite intrigued with AMY.

`amy-message` Segmentation fault in certain circumstances

Still investigating, but this sequence is 100% repeatable:

```
$ ./amy-message -d 1
# # amy-message AMY playground -> https://octetta.com
# - uses AMY audio synthesizer library -> https://github.com/bwhitman/amy
# - uses miniaudio v0.11.11 audio playback library -> https://miniaud.io
# - uses bestline history and editing library -> https://github.com/jart/bestline
# OSCS=64
# SAMPLE_RATE=44100
# load history from amy-message-history.txt
# v0w7  ## set osc0 to PCM
# v0p22 ## choose SynthVz patch
# v0b1  ## enable PCM looping
# v0l1  ## trigger with velocity 1
# v0w1  ## change osc0 to pulse wave
# Segmentation fault (core dumped)
```

For those watching, this can be avoided by resetting osc0 via S0 before changing the oscillator's wave type.

I'll dig into how to prevent this in the codebase, but I'm pointing it out until then.

Tests: Hard offsets

Adding the simple tests in test.py allows inspection of the simple waveform outputs in tests/ref/*.wav.

Looking, for instance, at TestSineOsc.wav, we see the 6ms onset ramp (1 block) we expect.

But the offset is immediate (causing a click). It should probably be a 1-frame ramp-down.

Don't understand why handling of globals fails while compiling under Linux

Building AMY on Fedora Linux 37 yields the following errors:

```
gcc  src/amy-example.o  src/algorithms.o  src/amy.o  src/envelope.o  src/filters.o  src/oscillators.o  src/pcm.o  src/partials.o  src/libsoundio-audio.o  src/amy-example-esp32.o -Wall -lpthread -lsoundio -lm  -o amy-example
/usr/bin/ld: src/libsoundio-audio.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:9: multiple definition of `amy_channel'; src/amy-example.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:9: first defined here
/usr/bin/ld: src/libsoundio-audio.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:10: multiple definition of `amy_device_id'; src/amy-example.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:10: first defined here
/usr/bin/ld: src/libsoundio-audio.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:11: multiple definition of `amy_running'; src/amy-example.o:/home/stewartj/pr/amy/src/libsoundio-audio.h:11: first defined here
collect2: error: ld returned 1 exit status
make: *** [Makefile:38: amy-example] Error 1
```

I can change these three variables to extern in libsoundio-audio.h and the code compiles, but I'm worried this isn't the original intention and might lead to other problems as I work on getting this running on my Linux system.

Volume difference on FXP Alles

The new fixed-point AMY doesn't seem to be as loud on the Alles ESP32 speakers as it used to be, and setting volume to usually-OK amounts (3-5) now clips during alles.drums().

check why render_algo doesn't render all ops during note_offs

The web example basically crashes (though AMY still responds to debug messages) if you send an l0 to certain FM patches; #0 is one. But it only does this when running on the web from a remote server (locally hosting the JS/WASM works fine!). I suspect some memory corruption, but valgrind came back clean.

How much RAM does AMY really need?

I was able to boot AMY on a Teensy 3.2, which has 64KB of RAM. We say it needs "around 100KB" on the webpage, but I bet if you decrease AMY_OSCS to something like 8 or 16 you could make it much smaller. Let's find out.

AMY_IS_SET usage causes compile to fail

AMY_IS_SET uses isnan, which requires a floating-point argument (at least on Ubuntu 20.04 with gcc). algo_source seems to be an integer, causing a failure.

I suspect this is a consequence of the change to using NAN as a flag instead of -1?

Will tinker with this later, but wanted to point it out.

```
src/algorithms.c: In function ‘algo_note_off’:
src/algorithms.c:143:12: error: non-floating-point argument in call to function ‘__builtin_isnan’
  143 |         if(AMY_IS_SET(synth[osc].algo_source[i])) {
      |            ^~~~~~~~~~
src/algorithms.c: In function ‘algo_note_on’:
src/algorithms.c:232:8: error: non-floating-point argument in call to function ‘__builtin_isnan’
  232 |     if(AMY_IS_SET(synth[osc].patch)) {
      |        ^~~~~~~~~~
src/algorithms.c:236:12: error: non-floating-point argument in call to function ‘__builtin_isnan’
  236 |         if(AMY_IS_SET(synth[osc].algo_source[i])) {
      |            ^~~~~~~~~~
src/algorithms.c: In function ‘render_algo’:
src/algorithms.c:264:12: error: non-floating-point argument in call to function ‘__builtin_isnan’
  264 |         if(AMY_IS_SET(synth[osc].algo_source[op]) && synth[synth[osc].algo_source[op]].status == IS_ALGO_SOURCE) {
      |            ^~~~~~~~~~
make: *** [Makefile:39: src/algorithms.o] Error 1
```

segfault on DIY algo

```
In [44]: amy.reset()

In [45]: amy.send(wave=amy.SINE,ratio=0.2,amp=0.1,osc=0,bp0_target=amy.TARGET_AMP,bp0="1000,0,0,0")

In [46]: amy.send(wave=amy.SINE,ratio=1,amp=1,osc=1)

In [47]: amy.send(wave=amy.ALGO,algorithm=0,algo_source="-1,-1,-1,-1,1,0",osc=2)

In [48]: amy.send(osc=2, note=60, vel=3)

In [49]: zsh: segmentation fault  ipython
```

Getting "attempting to access detached ArrayBuffer" after a couple dozens of notes played

Hi! I'm trying out AMY as a synth for my music-learning platform. I successfully fitted the wasm into my Vue setup, but a weird glitch appears: after playing somewhere over 20 notes, the sound stops and I get this error:

```
Uncaught TypeError: attempting to access detached ArrayBuffer
    audioCallback amy.vue:56
    onaudioprocess amy.vue:89
    setupAudio amy.vue:88
    startAudio amy.vue:124
    piano_down amy.vue:144
    setup amy.vue:155
    listener index.mjs:241
```

Tested in Firefox and Chrome. You can try it here:

https://chromatone.center/practice/experiments/amy/

The code is a slight modification of the example www/amy.js code in a Vue 3 component. Check it here:

https://github.com/chromatone/chromatone.center/blob/master/content/practice/experiments/amy/amy.vue

I'm a JS dev and have little experience with wasm, so I just don't know how to debug this further. Any ideas? May this be helpful?

clean up example JS/HTML

There are reports of it not working if the wasm hasn't loaded yet, plus glitches during UI updates on phones, etc. Would love some help modernizing the example and adding more tests for people to try!!

Interaction of filter env and amp env on fixed-point

I don't know what's happening yet, but applying both amplitude and filter-freq envelopes leads to a floating point explosion:

```python
amy.send(osc=0, wave=amy.SAW_DOWN, filter_type=amy.FILTER_LPF, resonance=0.7, filter_freq=4500, bp1_target=amy.TARGET_FILTER_FREQ, bp1='0,0.1,150,1.0,1000,0.4,100,0.1', bp0_target=amy.TARGET_AMP, bp0='0,0,60,1.0,500,0.5,100,0')
amy.send(osc=0, note=64, vel=1)
```

Weirdly, the filter without the amplitude scaling is fine (and the filter is the most likely culprit when things go unstable). The amplitude envelope without the filter is fine too, of course.

The problem seems to wait until the amplitude envelope hits the sustain phase, which makes #552554a suspicious.

What's the best way to add vintage synthesizer waveforms?

Would it be better to:

  • create an additional *_lutset.h file for each?
  • create PCM entries?

The first waveforms I'm thinking about are inspired by the Korg DW8000 and Ensoniq ESQ-1.

(Korg's might be trickier, as there are different resolution waveforms per octave.)

General linear-combination control inputs

Currently, the way that note, envelope, and LFO inputs affect pitch, envelope, and filter cutoff etc. is fairly complex and irregular.

In the spirit of the voltage-summing nodes of analog synths, I want to introduce a fully orthogonal structure, where each voice parameter is calculated as the sum of the same set of control inputs via a matrix of scale coefficients.

For example, instead of filter_freq=1000 setting the cutoff to a fixed value, followed by bp0_target=FILTER_FREQ and setting up the bp0 envelope to get a sweep, you would do something like:

filter_freq=1000.0,1.0,0,0,0

where the vector of coefficients now indicate the weights for a fixed set of control inputs that are summed together.

The first coefficient is always taken as-is, providing a constant starting point, but the remainder are applied to inputs whose values vary, defined in some fixed order. In the example above, the second value applies to bp0, but we would also include note value (pitch), note velocity, lfo, etc.

Voice parameters include oscillator frequency, output level, filter frequency, PWM duty, and stereo pan.
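A hypothetical Python sketch of the proposal (the input names and their ordering are invented for illustration; only the filter_freq example above is from the proposal itself):

```python
# Fixed, assumed ordering of control inputs; the first slot is the constant.
CONTROL_ORDER = ["const", "bp0", "bp1", "note", "velocity", "lfo"]

def combine(coeffs, inputs):
    """coeffs[0] is taken as-is; the rest scale the named control inputs,
    which are summed into the final voice-parameter value."""
    value = coeffs[0]
    for c, name in zip(coeffs[1:], CONTROL_ORDER[1:]):
        value += c * inputs.get(name, 0.0)
    return value

# filter_freq=1000.0,1.0,0,0,0 -> base of 1000 plus bp0 at full weight;
# note and velocity inputs are present but zero-weighted.
coeffs = [1000.0, 1.0, 0.0, 0.0, 0.0]
print(combine(coeffs, {"bp0": 250.0, "note": 60}))  # 1250.0
```

Combined with a logarithmic frequency axis (#59), these sums would act like the voltage-summing nodes of a modular synth.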
