
dalle2-pytorch's Issues

No run for GeForce 840M

It looks like CUDA for the GeForce 840M is not being recognized; the scripts fail:

# torch/cuda/__init__.py
hasattr(torch._C, '_cuda_getDeviceCount')  # is False
torch._C._cuda_setDevice(device)  # no such method
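
One way to sidestep this on machines where CUDA is unusable is to fall back to the CPU (a generic sketch, not specific to this repo):

import torch

# choose the device defensively instead of assuming CUDA is present
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
x = torch.randn(4, 3, 256, 256, device = device)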

Moved toy task to the end

Hi. A few hours back, you had the toy task as (almost) your next item, and now it has moved to the end of your checklist.

Are you facing any major challenges with training on the toy task? If not, please let me know -- I have beefy GPUs available for training on the toy task and will work on it in parallel with you!


Before: [screenshot of the checklist, 2022-04-18]

Now: [screenshot of the checklist, 2022-04-19]

Open replication of the generator

So it turns out people are also interested in working on the generator.
Let's use this issue to track progress on that.

What's needed:

  • A dataloader that reads e.g. a webdataset containing both the .jpg images and the text embeddings as .npy (see the sketch below)
  • A training loop working on one node
  • A first training on a small dataset (for example, use img2dataset on cc3m or a small subset of laion2B)
  • Analyse the results
  • Scale the training code up to multi-node

This will require a lot of work but should be very cool.
It will work best in conjunction with #23, but can still be built beforehand (by directly using text embeddings instead of mapped image embeddings).
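
A minimal sketch of such a dataloader using webdataset, assuming shards where each sample stores the image as .jpg next to its text embedding as .npy (the shard pattern, resolution, and batch size are illustrative):

import torch
import webdataset as wds
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
])

def make_loader(shards, batch_size = 64, num_workers = 4):
    dataset = (
        wds.WebDataset(shards)
        .decode("pil")                            # .jpg -> PIL image, .npy -> numpy array
        .to_tuple("jpg", "npy")                   # (image, text embedding) pairs
        .map_tuple(preprocess, torch.from_numpy)  # tensors on the way out
    )
    return torch.utils.data.DataLoader(dataset, batch_size = batch_size, num_workers = num_workers)

# e.g. make_loader("cc3m-{00000..00331}.tar") for shards produced by img2dataset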

Your X-CLIP or OpenAI's vanilla CLIP?

First, it's amazing that you are trying to recreate this.
My question is: do you plan to use your X-CLIP implementation, or just the vanilla OpenAI CLIP? From what I gathered from the official presentation, most of the power of DALL-E 2 seems to lie in the CLIP embeddings, and obviously in the move away from autoregression to diffusion, and therefore also the diffusion prior over the CLIP embeddings. But OpenAI only releases their rather weak CLIP models, so the model's capabilities will be limited.

Pretrained weights

Hello everyone!

First of all, thank you so much for this cool repo and all the work that went into it. I'm really interested in trying this out for myself; however, with my current hardware it would take me half a year to train :). I was wondering if anyone would be able to share their pretrained weights with me. It would be greatly appreciated.

Have a nice day

Pretrained model

Thank you for sharing the code. Could you please also share a pretrained model? The model requires a lot of resources and is really hard to train. Thank you!

Build a fair evaluation of the prior

We're starting to have our first prior now (PR of the training script coming soon).

Time to evaluate
Ideas:

  • MSE on a test set
  • Zero-shot eval on ImageNet: class text -> text embedding -> prior -> image embedding -> ranking (LAION-AI/project-menu#13); see the sketch below
  • CLIP-guided generation with the prior
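
For the zero-shot idea, a rough sketch of the eval (assuming prior.sample maps tokenized class prompts to sampled image embeddings, and that eval-set CLIP image embeddings and labels are precomputed):

import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_top1(prior, image_embeds, labels, class_tokens):
    # map every class prompt ("a photo of a {class}") to a predicted image embedding
    pred = F.normalize(prior.sample(class_tokens), dim = -1)   # (num_classes, dim)
    image_embeds = F.normalize(image_embeds, dim = -1)         # (n, dim)
    # rank classes by similarity of their predicted embedding to the true image embedding
    top1 = (image_embeds @ pred.t()).argmax(dim = -1)          # (n,)
    return (top1 == labels).float().mean().item()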

If you have more ideas, please share; I may be missing some obvious things.

We have more volunteers who want to help, so I'll point some here :)

Minimal GPU requirements for training

Hello guys. First of all, thanks for the amazing work.
The question is: what are the minimal GPU requirements for training your implementation, and are there any configurable architecture blocks we could choose from to change the number of trained parameters if needed?
I have a 3090 and am wondering whether it would be possible to use it for training.

AMP Training

There is an AMP flag, but has this feature been tested at all? For me it does not work.

Add doc for the prior training

hey @krish240574, great training script for the prior!

it would be nice if you could add a few lines in a new section of the readme explaining how to run it (the command line along with a few explanations of the main parameters, and maybe one example wandb run), so everyone can run it

NameError: name 'text_mask' is not defined when running default example (m1, cpu)

So basically I took the example code and modified it to use the CPU instead of CUDA (M1 Mac):

import torch
from dalle2_pytorch import DALLE2, DiffusionPriorNetwork, DiffusionPrior, Unet, Decoder, CLIP

clip = CLIP(
    dim_text=512,
    dim_image=512,
    dim_latent=512,
    num_text_tokens=49408,
    text_enc_depth=6,
    text_seq_len=256,
    text_heads=8,
    visual_enc_depth=6,
    visual_image_size=256,
    visual_patch_size=32,
    visual_heads=8
).cpu()

# mock data

text = torch.randint(0, 49408, (4, 256)).cpu()
images = torch.randn(4, 3, 256, 256).cpu()

# train

loss = clip(
    text,
    images,
    return_loss=True
)

loss.backward()

# do above for many steps ...

# prior networks (with transformer)

prior_network = DiffusionPriorNetwork(
    dim=512,
    depth=6,
    dim_head=64,
    heads=8
).cpu()

diffusion_prior = DiffusionPrior(
    net=prior_network,
    clip=clip,
    timesteps=100,
    cond_drop_prob=0.2
).cpu()

loss = diffusion_prior(text, images)
loss.backward()

# do above for many steps ...

# decoder (with unet)

unet1 = Unet(
    dim=128,
    image_embed_dim=512,
    cond_dim=128,
    channels=3,
    dim_mults=(1, 2, 4, 8)
).cpu()

unet2 = Unet(
    dim=16,
    image_embed_dim=512,
    cond_dim=128,
    channels=3,
    dim_mults=(1, 2, 4, 8, 16)
).cpu()

decoder = Decoder(
    unet=(unet1, unet2),
    image_sizes=(128, 256),
    clip=clip,
    timesteps=100,
    cond_drop_prob=0.2,
    condition_on_text_encodings=False  # set this to True if you wish to condition on text during training and sampling
).cpu()

for unet_number in (1, 2):
    loss = decoder(images,
                   unet_number=unet_number)  # this can optionally be decoder(images, text) if you wish to condition on the text encodings as well, though it was hinted in the paper it didn't do much
    loss.backward()

# do above for many steps

dalle2 = DALLE2(
    prior=diffusion_prior,
    decoder=decoder
)

images = dalle2(
    ['cute puppy chasing after a squirrel'],
    cond_scale=2.)

# save your image (in this example, of size 256x256)

I expected it to run, but there is an error:

File "dalle2_pytorch.py", line 746, in sample
    text_cond = {**text_cond, 'text_encodings': text_encodings, 'mask': text_mask}
NameError: name 'text_mask' is not defined

EMA Bug

Hi Phil,

This morning I tried to run the decoder training part. I decided to use DecoderTrainer, but found an issue with the EMA update.

After using decoder_trainer to do sampling, the next training forward pass throws a RuntimeError:

Traceback (most recent call last):
  File "/home/caohe/DPMs/dalle2/train_decoder.py", line 321, in <module>    main()
  File "/home/caohe/DPMs/dalle2/train_decoder.py", line 318, in main
    train(decoder_trainer, train_dl, val_dl, train_config, device)
  File "/home/caohe/DPMs/dalle2/train_decoder.py", line 195, in train
    trainer.update(unet_number)
  File "/home/caohe/DPMs/dalle2/dalle2_pytorch/train.py", line 288, in update
    self.ema_unets[index].update()
  File "/home/caohe/DPMs/dalle2/dalle2_pytorch/train.py", line 119, in update
    self.update_moving_average(self.ema_model, self.online_model)
  File "/home/caohe/DPMs/dalle2/dalle2_pytorch/train.py", line 129, in update_moving_average
    ema_param.data = calculate_ema(self.beta, old_weight, up_weight)
  File "/home/caohe/DPMs/dalle2/dalle2_pytorch/train.py", line 125, in calculate_ema
    return old * beta + new * (1 - beta)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and CPU!

def update(self):
    self.step += 1
    if self.step <= self.update_after_step or (self.step % self.update_every) != 0:
        return
    if not self.initted:
        self.ema_model.state_dict(self.online_model.state_dict())
        self.initted.data.copy_(torch.Tensor([True]))
    self.update_moving_average(self.ema_model, self.online_model)

And I checked up_weight.device (the online model) and old_weight.device (the EMA model), and found that the online model is on cuda:0 but the EMA model is on the CPU. It's really weird; I debugged for a long time and I think it might be caused by the DecoderTrainer.sample() process.
When swapping between the EMA and online models, there seems to be some problem related to the device.

@torch.no_grad()
def sample(self, *args, **kwargs):
    if self.use_ema:
        trainable_unets = self.decoder.unets
        self.decoder.unets = self.unets # swap in exponential moving averaged unets for sampling

    output = self.decoder.sample(*args, **kwargs)

    if self.use_ema:
        self.decoder.unets = trainable_unets # restore original training unets

    return output

The way I fixed it was just to add self.ema_model = self.ema_model.to(next(self.online_model.parameters()).device) before the call to self.update_moving_average(self.ema_model, self.online_model) (pretty naive, haha).
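
Concretely, the patched update looks roughly like this (a naive sketch; it also uses load_state_dict for the initial copy, which seems to be what the state_dict(...) call above intended):

def update(self):
    self.step += 1
    if self.step <= self.update_after_step or (self.step % self.update_every) != 0:
        return
    if not self.initted:
        # copy the online weights into the EMA model on the first real update
        self.ema_model.load_state_dict(self.online_model.state_dict())
        self.initted.data.copy_(torch.Tensor([True]))
    # the naive fix: make sure the EMA model sits on the online model's device
    self.ema_model = self.ema_model.to(next(self.online_model.parameters()).device)
    self.update_moving_average(self.ema_model, self.online_model)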

Hope to hear your solution

Enjoy!

How can I change Cuda to CPU in dalle2 on Mac?

AssertionError: Torch not compiled with CUDA enabled

# do above for many steps

dalle2 = DALLE2(
    prior = diffusion_prior,
    decoder = decoder
)

images = dalle2(
    ['cute puppy chasing after a squirrel'],
    cond_scale = 2. # classifier free guidance strength (> 1 would strengthen the condition)
)

The error is raised at the dalle2(...) call.
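
A minimal workaround sketch, along the lines of the M1 issue above: build everything without .cuda() so the modules and tensors stay on the CPU (reusing the prior and decoder from the preceding steps):

dalle2 = DALLE2(
    prior = diffusion_prior.cpu(),
    decoder = decoder.cpu()
)

images = dalle2(
    ['cute puppy chasing after a squirrel'],
    cond_scale = 2.
)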

Better normalize input CLIP embeddings when training prior

Right now it looks like the CLIP embeddings (the text embedding to condition on and the clean image embedding the prior is trained to predict) are being normalized to norm 1 when training the prior. This is very badly scaled for Gaussian diffusion, and I suspect it is a reason why LAION's attempt at training a prior is going badly. With a 512-dim CLIP embedding, normalizing to norm 1 means the values will be around the range -0.06 to 0.06 or so. I suggest either finding the elementwise mean and std of the CLIP image embeddings in the training set and normalizing them to mean 0, std 1; or using PCA; or (what I am doing with my prior) simply normalizing the embeddings and then multiplying them by sqrt(embed_dim), to make them better scaled for Gaussian diffusion.

This scaling makes the embeddings match the expected norm of the Gaussian noise. When we combine the clean embedding and the noise, we do so according to alpha^2 + sigma^2 = 1, where alpha and sigma are the scaling factors for the clean embedding and the noise at that timestep. Since the noise is nearly always nearly orthogonal to the embedding, the forward-process paths stay near the manifold (the hypersphere with radius sqrt(embed_dim)). I did this because it was easier than figuring out diffusion on Riemannian manifolds.
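
A sketch of that last option; the scaling uses the fact that standard Gaussian noise in embed_dim dimensions has expected norm close to sqrt(embed_dim):

import math
import torch.nn.functional as F

def scale_embed_for_diffusion(embed):          # embed: (batch, embed_dim)
    embed = F.normalize(embed, dim = -1)       # l2-normalize to norm 1
    return embed * math.sqrt(embed.shape[-1])  # rescale to norm sqrt(embed_dim)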

Typo in VQ-VAE

Hi Phil,
great to see your big contributions.
I think I've found a typo here. I believe it should be last_dec_layer = self.enc_dec.decoders[-1].weight instead of

last_dec_layer = self.decoders[-1].weight

Besides, to match the naming in ResEncDec and ConvNextEncDec, I think it's better to rename ViTEncDec's encoder and decoder to self.encoders and self.decoders.

Enjoy!

Should the first Unet have cond_on_image_embeds=True?

With reference to your example usage for decoder training: the Unet code seems to have the argument cond_on_image_embeds set to False by default, which is alright since the Unet is used at various stages of the decoding process. However, shouldn't the first Unet have it set to True? Otherwise the decoder is conditioning only on time and does not respond to the CLIP image embedding input. (See the sketch after the example below.)

import torch
from dalle2_pytorch import Unet, Decoder, CLIP

# trained clip from step 1

clip = CLIP(
    dim_text = 512,
    dim_image = 512,
    dim_latent = 512,
    num_text_tokens = 49408,
    text_enc_depth = 6,
    text_seq_len = 256,
    text_heads = 8,
    visual_enc_depth = 6,
    visual_image_size = 256,
    visual_patch_size = 32,
    visual_heads = 8
).cuda()

# 2 unets for the decoder (a la cascading DDPM)

unet1 = Unet(
    dim = 32,
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults = (1, 2, 4, 8)
).cuda()

unet2 = Unet(
    dim = 32,
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults = (1, 2, 4, 8, 16)
).cuda()

# decoder, which contains the unet(s) and clip

decoder = Decoder(
    clip = clip,
    unet = (unet1, unet2),            # insert both unets in order of low resolution to highest resolution (you can have as many stages as you want here)
    image_sizes = (256, 512),         # resolutions, 256 for first unet, 512 for second. these must be unique and in ascending order (matches with the unets passed in)
    timesteps = 1000,
    cond_drop_prob = 0.2
).cuda()

# mock images (get a lot of this)

images = torch.randn(4, 3, 512, 512).cuda()

# feed images into decoder, specifying which unet you want to train
# each unet can be trained separately, which is one of the benefits of the cascading DDPM scheme

loss = decoder(images, unet_number = 1)
loss.backward()

loss = decoder(images, unet_number = 2)
loss.backward()

# do the above for many steps for both unets
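
For reference, the change this issue is suggesting would look like the following for the first unet (cond_on_image_embeds is an existing Unet argument; whether it should be True here is the question):

unet1 = Unet(
    dim = 32,
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults = (1, 2, 4, 8),
    cond_on_image_embeds = True   # proposed: let the first unet attend to the CLIP image embedding
).cuda()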

I'm just getting noise

Firstly, I'm not a Python programmer or interested in AI; I just want to make funny pictures. I ran the script below the text
"Let's see the whole script below" in the README, but I didn't know how to display tensor images, so I googled it and wrote:

import matplotlib.pyplot as plt
plt.imshow(images[0].permute(1, 2, 0))

then I got this:

[image: output that is just random noise]

Probably I made a dumb mistake somewhere. Please help me.
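
For reference, a slightly more robust version of the display snippet (assuming the tensor may live on the GPU and that values should be clipped to [0, 1] for display):

import matplotlib.pyplot as plt

plt.imshow(images[0].detach().cpu().permute(1, 2, 0).clamp(0, 1))  # (C, H, W) -> (H, W, C)
plt.axis('off')
plt.show()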

Need help with decoder training

I'm training on CC3M. Is there anything wrong with my training? The loss seems to be going down way too fast, and despite the low training loss values, sampling doesn't seem to show that it is working. I sample during training by just calling decoder.sample(), giving it the CLIP image embeddings of the minibatch training images. Since I'm training a decoder with two Unets and only training the first Unet for now, I break out after sampling from the first Unet.

[chart: decoder training loss]

These are the samples at the 0k, 5k, 13k, 16k, and 17k training steps:

[sample images at 0k, 5k, 13k, 16k, and 17k steps]

Regarding learned image embedding and text embedding in Unet

According to the paper Section 2.1 Decoder, it says

We enable classifier-free guidance by randomly setting CLIP embeddings to zero (or a learned embedding) 10% of the time, and randomly dropping the text caption 50% of the time during training.

It seems that we are replacing the embeddings after turning them into condition sequences.

https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py#L1216-L1222
https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py#L1229-L1234

And from the following, it seems that the null text embeddings can vary according to their sequence position. For the image embeddings I feel this is fine, but what about the text encodings?

https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py#L1104

Also, it seems we may need two separate cond_drop_prob values, one for the image embedding and one for the text encodings.
If we do that, how do we modify forward_with_cond_scale()? (A sketch follows the link below.)

https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py#L1166-L1178
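
One hypothetical way to keep it working with two separate probabilities (image_cond_drop_prob and text_cond_drop_prob here are assumed new forward arguments) is to force both to 1 for the unconditional branch:

def forward_with_cond_scale(self, *args, cond_scale = 1., **kwargs):
    logits = self.forward(*args, **kwargs)

    if cond_scale == 1:
        return logits

    # unconditional branch: drop both the image embedding and the text encodings
    null_logits = self.forward(*args, image_cond_drop_prob = 1., text_cond_drop_prob = 1., **kwargs)
    return null_logits + (logits - null_logits) * cond_scale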

Diffusion prior training trial run

All l2norm-related settings in DiffusionPrior() are set to False as in the defaults, and image_embed_scale is set to 1.0 to disable any scaling. The CLIP image embeddings are from OpenAI's ViT-B/32 without any l2norm; the dataset is cc3m. @lucidrains What do you think of the results? Does it seem to be working?

[chart: L1 training loss]

[chart: L2 loss between training-batch CLIP image embeds and sampled CLIP image embeds (from training-batch text)]

[chart: softmax accuracy between training-batch CLIP text embeds and sampled CLIP image embeds (from training-batch text)]

[chart: cosine similarity between training-batch CLIP text embeds and sampled CLIP image embeds (from training-batch text)]

Question regarding clamping of x_recon in DiffusionPrior and Decoder

DiffusionPrior is configured by default to predict_x_start. As a result, x_recon is not clamped to [-1, 1], which I think is good because we don't know the output range of the CLIP image embeddings.

https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py#L770-L778

Decoder is configured by default not to predict_x_start. As a result, x_recon is clamped to [-1, 1], which I think is also good because we arrange the dataset or dataloader so that the model predicts images in the range [-1, 1]. However, I don't understand why clamping is conditioned on not predicting x_start. Whether we predict the noise or x_start, x_recon is going to be x_start either way.

https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.py#L1440-L1446
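
Paraphrasing the pattern in question (a simplified sketch of the linked lines):

if self.predict_x_start:
    x_recon = model_output                  # used directly, never clamped
else:
    x_recon = self.predict_start_from_noise(x, t = t, noise = model_output)

if clip_denoised and not self.predict_x_start:
    x_recon.clamp_(-1., 1.)                 # clamped only in the noise-prediction case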

Please enlighten me.

Unexpected keyword argument - l2norm_output

File "/DALLE2-pytorch-main/dalle2_pytorch/dalle2_pytorch.py", line 660, in __init__
    self.causal_transformer = CausalTransformer(dim = dim, **kwargs)

TypeError: __init__() got an unexpected keyword argument 'l2norm_output'

I've pulled the latest code from this repo. Did some fix break the code?

Q: Why causal transformer in diffusion prior?

I know we can, but I don't understand why the paper uses a causal transformer, since we are not doing sequence prediction where the target is part of the context. Here we are trying to predict the image embedding, which is not part of the context anywhere. What is the causal mask for? Please enlighten me.

Epochs for Laion Dataset

How many epochs will it take to train on the LAION dataset? I am trying to train it. Also, it's taking 25 ms for one full iteration; is that expected?

Hardware requirements

Hi,

Let's say the project completes -- what hardware specs will be required to run the model?

Best,
Rakesh

Provide a CLIP-guided generation script using the prior

alstro is reporting increased diversity when doing that.

Example script https://gist.github.com/crowsonkb/a6aef1031a2712241d0c21426f9c2897 (it would need adapting).

This could be an interesting way to evaluate the prior.

Example of diversity thanks to the diffusion sampling process: https://twitter.com/jd_pressman/status/1508868273474920452

open replication of the prior

hey,

thanks @lucidrains for building this awesome replication of the model, as usual!
dalle2 paper: https://arxiv.org/pdf/2204.06125.pdf

with a few people from laion, we're working on a replication of the prior at scale
we're gathering notes in https://docs.google.com/document/d/1BKIQPzZS7pVL2JgL74W0dUIlUcfld8jptA6nIne_cNo/edit and here
big plan:

Anybody interested in this project, feel free to discuss here or in #dalle2-prior on the laion server (https://discord.gg/xBPBXfcFHd)

Hyperparameters from the dalle2 paper: [table image]

As a first step, we're trying to get an end-to-end version running at small scale. Once it works, we'll scale it up.

We intend to send PRs to this repo for any improvement that seems worthwhile (e.g. support for precomputed embeddings, distributed training, ...).

Question about the concatenated tokens (where is the `noised image token`?)

Hi Phil,
when reading the DiffusionPriorNetwork forward pass, I noticed the concatenated tokens fed into the CausalTransformer are composed like below:

tokens = torch.cat((
    text_encodings,
    text_embed,
    time_embed,
    learned_queries
), dim = -2)

But referring to the original paper, Section 2.2, it reads:
"...consisting of encoded text, the CLIP text embedding, an embedding for the diffusion timestep, the noised CLIP image embedding, and a final embedding whose output from the Transformer is used to predict the unnoised CLIP image embedding."
I just wonder which part corresponds to the noised CLIP image embedding (maybe learned_queries?). It just confuses me.
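
For comparison, the sequence as the paper describes it would include a separate slot for the noised image embedding (sketch; image_embed here denotes the noised CLIP image embedding, which is absent from the snippet above):

tokens = torch.cat((
    text_encodings,    # encoded text
    text_embed,        # CLIP text embedding
    time_embed,        # diffusion timestep embedding
    image_embed,       # noised CLIP image embedding
    learned_queries    # final embedding whose output predicts the unnoised image embedding
), dim = -2)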

Enjoy!

[QUESTION] [BEGINNER] How to save an image from a 4D tensor? Generating plain noise.

Hi, I am running the following code:

import torch
from dalle2_pytorch import DALLE2, DiffusionPriorNetwork, DiffusionPrior, Unet, Decoder, OpenAIClipAdapter

# openai pretrained clip - defaults to ViT/B-32

clip = OpenAIClipAdapter()

# mock data

text = torch.randint(0, 49408, (4, 256)).cuda()
images = torch.randn(4, 3, 256, 256).cuda()

# prior networks (with transformer)

prior_network = DiffusionPriorNetwork(
    dim = 512,
    depth = 6,
    dim_head = 64,
    heads = 8
).cuda()

diffusion_prior = DiffusionPrior(
    net = prior_network,
    clip = clip,
    timesteps = 100,
    cond_drop_prob = 0.2
).cuda()

loss = diffusion_prior(text, images)
loss.backward()

# do above for many steps ...

# decoder (with unet)

unet1 = Unet(
    dim = 128,
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults=(1, 2, 4, 8)
).cuda()

unet2 = Unet(
    dim = 16,
    image_embed_dim = 512,
    cond_dim = 128,
    channels = 3,
    dim_mults = (1, 2, 4, 8, 16)
).cuda()

decoder = Decoder(
    unet = (unet1, unet2),
    image_sizes = (128, 256),
    clip = clip,
    timesteps = 100,
    image_cond_drop_prob = 0.1,
    text_cond_drop_prob = 0.5,
    condition_on_text_encodings = False  # set this to True if you wish to condition on text during training and sampling
).cuda()

for unet_number in (1, 2):
    loss = decoder(images, unet_number = unet_number) # this can optionally be decoder(images, text) if you wish to condition on the text encodings as well, though it was hinted in the paper it didn't do much
    loss.backward()

# do above for many steps

dalle2 = DALLE2(
    prior = diffusion_prior,
    decoder = decoder
)

generating images:

images = dalle2(
    ['a butterfly trying to escape a tornado'],
    cond_scale = 2. # classifier free guidance strength (> 1 would strengthen the condition)
)

and trying to save:

from torchvision.utils import save_image
save_image(images[0], 'img.png')

but img.png is just plain noise... What am I missing here? Can anyone please tell me? I just want to try out the code; I am new to ML.

NameError: name 'image' is not defined when running dream

When I run dream 'sharing a sunset at the summit of mount everest with my dog', I get the error NameError: name 'image' is not defined.

Details:

Traceback (most recent call last):
  File "/home/am/.local/bin/dream", line 8, in <module>
    sys.exit(dream())
  File "/home/am/.local/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/am/.local/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/am/.local/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/am/.local/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/am/.local/lib/python3.6/site-packages/dalle2_pytorch/cli.py", line 9, in dream
    return image
NameError: name 'image' is not defined

Wrong order of prediction and target arguments in loss functions.

Add ability to train decoder using embedding-image pairs

I am implementing a single-node training script for the decoder, and it seems @lucidrains has implemented a wrapper for this purpose that is already feature-full. Currently, the forward pass is implemented as follows:

def forward(
    self,
    x,
    *,
    unet_number,
    divisor = 1,
    **kwargs
):
    with autocast(enabled = self.amp):
        loss = self.decoder(x, unet_number = unet_number, **kwargs)
    return self.scale(loss / divisor, unet_number = unet_number)

This lacks the ability to substitute our own image embeddings when we have precomputed embedding-image pairs. The functionality is already mostly supported by the Decoder network, where image_embed can be passed to the forward method, so this could be implemented by simply adding an image_embed parameter as a pass-through to decoder.forward. However, it would also be convenient to make the clip model optional in the Decoder constructor. I already started on this a week ago in this branch by adding the ability to set clip_image_size and channels separately from a clip model.

There are only a few small changes necessary to implement this feature, so I could put together a pull request to do this.
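
A sketch of the proposed change (hypothetical; it simply passes image_embed through to Decoder.forward, which already accepts it):

def forward(
    self,
    x,
    *,
    unet_number,
    image_embed = None,   # precomputed CLIP image embedding, if available
    divisor = 1,
    **kwargs
):
    with autocast(enabled = self.amp):
        loss = self.decoder(x, image_embed = image_embed, unet_number = unet_number, **kwargs)
    return self.scale(loss / divisor, unet_number = unet_number)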

hi, can you make it trainable?

Dude, I have followed you for a long time. You are always chasing whatever is hot and cutting-edge, but often just create a git repo that cannot train or run inference, with only some READMEs and citations.

Can you just focus on something and make it trainable? I really want to see some deeper work, and I really appreciate your open-source work.

Typo for text_encodings?

Hi, me again (lol).

Just curious why the initial text_encodings is given length 0:

if not exists(text_encodings):
    text_encodings = torch.empty((batch, 0, dim), device = device, dtype = dtype)

I tried the test code in the README, but it throws an error:

the concatenated tokens' shape is (b, 4, d): [text_encodings (b, 0, d), text_embed (b, 1, d), time_embed (b, 1, d), image_embed (b, 1, d), learned_queries (b, 1, d)],
while the mask's shape is (b, 5).

diffusion_prior = DiffusionPrior(
    net = prior_network,
    clip = clip,
    timesteps = 100,
    cond_drop_prob = 0.2,
    condition_on_text_encodings = False  # this probably should be true, but just to get Laion started
).cuda()

But when I write text_encodings = torch.empty((batch, 1, dim), device = device, dtype = dtype), the error goes away.

Please have a check. Enjoy!
