
assem-vc's Introduction

Assem-VC — Official PyTorch Implementation

Assem-VC: Realistic Voice Conversion by Assembling Modern Speech Synthesis Techniques

Kang-wook Kim, Seung-won Park, Junhyeok Lee, Myun-chul Joe @ MINDsLab Inc., SNU

Accepted to IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022

Paper: https://arxiv.org/abs/2104.00931
Audio Samples: https://mindslab-ai.github.io/assem-vc/

Update: Enjoy our pre-trained model with the Google Colab notebook!

Abstract: In this paper, we pose the current state-of-the-art voice conversion (VC) systems as two-encoder-one-decoder models. After comparing these models, we combine the best features and propose Assem-VC, a new state-of-the-art any-to-many non-parallel VC system. This paper also introduces the GTA finetuning in VC, which significantly improves the quality and the speaker similarity of the outputs. Assem-VC outperforms the previous state-of-the-art approaches in both the naturalness and the speaker similarity on the VCTK dataset. As an objective result, the degree of speaker disentanglement of features such as phonetic posteriorgrams (PPG) is also explored. Our investigation indicates that many-to-many VC results are no longer distinct from human speech and similar quality can be achieved with any-to-many models.


Controllable and Interpretable Singing Voice Decomposition via Assem-VC

Kang-wook Kim, Junhyeok Lee @ MINDsLab Inc., SNU

Accepted to NeurIPS Workshop on ML for Creativity and Design 2021 (Oral)

Paper: https://arxiv.org/abs/2110.12676
Audio Samples: https://mindslab-ai.github.io/assem-vc/singer/

Abstract: We propose a singing decomposition system that encodes time-aligned linguistic content, pitch, and source speaker identity via Assem-VC. With decomposed speaker-independent information and the target speaker's embedding, we could synthesize the singing voice of the target speaker. In conclusion, we made a perfectly synced duet with the user's singing voice and the target singer's converted singing voice.

Requirements

This repository was tested with the following environment:

Clone our Repository

git clone --recursive https://github.com/mindslab-ai/assem-vc
cd assem-vc

Datasets

Preparing Data

  • To reproduce the results from our paper, you need to download the LibriTTS train-clean-100 split and the VCTK corpus.
  • Unzip each file and place the contents in datasets/.
  • Resample them to 22.05 kHz using datasets/resample.py.
    python datasets/resample.py
    Note that datasets/resample.py is hard-coded to remove the original wav files in datasets/ and replace them with resampled wav files; each filename *.wav is renamed to *-22k.wav. (A rough sketch of this behavior is shown after this list.)
  • You can use datasets/resample_delete.sh instead of datasets/resample.py; it serves the same purpose.
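
For reference, the resampling step roughly amounts to the sketch below (assuming librosa and soundfile are installed; the actual datasets/resample.py may differ in details such as parallelization or the resampling backend).

# Rough sketch of what the resampling step does; the actual datasets/resample.py
# may differ in details (e.g., parallelization or the resampling backend).
import glob
import os

import librosa
import soundfile as sf

TARGET_SR = 22050

for path in glob.glob("datasets/**/*.wav", recursive=True):
    if path.endswith("-22k.wav"):
        continue  # already resampled
    audio, _ = librosa.load(path, sr=TARGET_SR)  # load and resample to 22.05 kHz
    sf.write(path.replace(".wav", "-22k.wav"), audio, TARGET_SR)
    os.remove(path)  # the original wav is removed, as noted above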

Preparing Metadata

Following the format from NVIDIA/tacotron2, the metadata should be formatted as follows:

path_to_wav|transcription|speaker_id
path_to_wav|transcription|speaker_id
...

If you want to train and run inference with phonemes, the transcription should contain only unstressed ARPABET.

Metadata containing ARPABET for the LibriTTS train-clean-100 split and the VCTK corpus is already prepared at datasets/metadata. If you wish to use custom data, you need to make metadata as shown above.
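
For custom data, writing metadata in this format is straightforward; here is a minimal sketch (all paths, transcriptions, and speaker ids below are hypothetical examples):

# Minimal sketch of writing metadata in the path_to_wav|transcription|speaker_id format.
# All paths, transcriptions, and speaker ids here are hypothetical examples.
entries = [
    ("my_corpus/spk1/utt001-22k.wav", "Please call Stella.", "spk1"),
    ("my_corpus/spk2/utt042-22k.wav", "Good morning.", "spk2"),
]

with open("datasets/metadata/my_corpus_train.txt", "w", encoding="utf-8") as f:
    for wav_path, transcription, speaker_id in entries:
        f.write(f"{wav_path}|{transcription}|{speaker_id}\n")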

To convert the transcriptions in the metadata into ARPABET, you can use datasets/g2p.py:

python datasets/g2p.py -i <input_metadata_filename_with_graphemes> -o <output_filename>
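
Under the hood, this converts each transcription to unstressed ARPABET. A minimal sketch using the open-source g2p_en package (datasets/g2p.py may use a different backend) looks like this:

# Minimal sketch of grapheme-to-phoneme conversion to unstressed ARPABET.
# Assumes the open-source g2p_en package; datasets/g2p.py may use a different backend.
from g2p_en import G2p

g2p = G2p()
phones = g2p("Please call Stella.")             # ARPABET with stress digits, plus spaces/punctuation
unstressed = [p.rstrip("012") for p in phones]  # strip stress markers -> unstressed ARPABET
print(unstressed)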

Preparing Configuration Files

Training our VC system consists of two steps: (1) training Cotatron, and (2) training the VC decoder on top of Cotatron.

There are three yaml files in the config folder, which are configuration templates for each model. They must be edited to match your training requirements (dataset, metadata, etc.).

cp config/global/default.yaml config/global/config.yaml
cp config/cota/default.yaml config/cota/config.yaml
cp config/vc/default.yaml config/vc/config.yaml

Here, all files with names other than default.yaml will be ignored by git (see .gitignore).

  • config/global: Global configs used for training both Cotatron and the VC decoder.
    • Fill in the blanks for: speakers, train_dir, train_meta, val_dir, val_meta, f0s_list_path (see the sanity-check sketch after this list).
    • An example speaker id list is shown in datasets/metadata/libritts_vctk_speaker_list.txt.
    • When replicating the two-stage training process from our paper (training with LibriTTS and then LibriTTS+VCTK), put the speaker ids of both LibriTTS and VCTK in the global config.
    • f0s_list_path is set to f0s.txt by default.
  • config/cota: Configs for training Cotatron.
    • You may want to change batch_size for GPUs other than a 32GB V100, or change chkpt_dir to save checkpoints on a different disk.
    • You can also toggle use_attn_loss, which controls whether the guided attention loss is used.
  • config/vc: Configs for training the VC decoder.
    • Fill in the blank for: cotatron_path.
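
Since the configs are plain yaml and the project already uses OmegaConf (see inference.ipynb), a quick sanity check of the global fields listed above could look like the sketch below. It assumes the fields live under data.*, as hp.data.f0s_list_path suggests; adjust the keys if your config nests them differently.

# Sanity-check sketch for the global config fields listed above.
# Assumes OmegaConf and that the fields live under data.*; adjust if needed.
from omegaconf import OmegaConf

hp = OmegaConf.load("config/global/config.yaml")

required = ["speakers", "train_dir", "train_meta", "val_dir", "val_meta", "f0s_list_path"]
for key in required:
    value = OmegaConf.select(hp, f"data.{key}")
    print(f"data.{key}: {'MISSING' if value in (None, '') else value}")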

Extracting Pitch Range of Speakers

Before you train the VC decoder, you should extract the pitch range of each speaker:

python preprocess.py -c <path_to_global_config_yaml>

The result will be saved at f0s.txt.
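
Conceptually, this step estimates f0 over every utterance and aggregates a pitch range per speaker. A rough, simplified sketch using librosa's pYIN is shown below; the actual preprocess.py may use a different f0 estimator and different statistics.

# Rough sketch of per-speaker pitch-range extraction; the actual preprocess.py
# may use a different f0 estimator and different statistics.
from collections import defaultdict

import librosa
import numpy as np

def speaker_f0_ranges(metadata_path, wav_root="datasets"):
    f0s = defaultdict(list)
    with open(metadata_path, encoding="utf-8") as f:
        for line in f:
            wav_path, _, speaker_id = line.strip().split("|")
            y, sr = librosa.load(f"{wav_root}/{wav_path}", sr=22050)
            f0, voiced, _ = librosa.pyin(y, fmin=65.0, fmax=1000.0, sr=sr)
            f0s[speaker_id].extend(f0[voiced].tolist())
    return {spk: (float(np.min(v)), float(np.max(v))) for spk, v in f0s.items()}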

Training

Currently, training with a multi-GPU setup may be slow due to a version issue with PyTorch Lightning. If you want to train faster, see this issue.

1. Training Cotatron

To train the Cotatron, run this command:

python cotatron_trainer.py -c <path_to_global_config_yaml> <path_to_cotatron_config_yaml> \
                           -g <gpus> -n <run_name>

Here are some example commands that might help you understand the arguments:

# train from scratch with name "my_runname"
python cotatron_trainer.py -c config/global/config.yaml config/cota/config.yaml \
                           -g 0 -n my_runname

Optionally, you can resume training from a previously saved checkpoint by adding the -p <checkpoint_path> argument.

2. Training VC decoder

After the Cotatron is sufficiently trained (i.e., producing stable alignment + converged loss), the VC decoder can be trained on top of it.

python synthesizer_trainer.py -c <path_to_global_config_yaml> <path_to_vc_config_yaml> \
                              -g <gpus> -n <run_name>

The optional checkpoint argument is also available for VC decoder.

3. GTA finetuning HiFi-GAN

Once the VC decoder is trained, finetune HiFi-GAN with GTA finetuning. First, you should extract GTA mel-spectrograms from the VC decoder:

python gta_extractor.py -c <path_to_global_config_yaml> <path_to_vc_config_yaml> \
                        -p <checkpoint_path>

The GTA mel-spectrograms calculated from the audio files will be saved as *.wav.gta on the first run, and loaded from disk afterwards.

Train/validation metadata for the GTA mels will be saved at datasets/gta_metadata/gta_<original_metadata_name>.txt. You should use this metadata when finetuning HiFi-GAN.
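
For intuition: GTA (ground-truth-aligned) mels are, in this context, the outputs of the trained VC decoder run with teacher forcing on the training data, so they stay time-aligned with the original waveforms; HiFi-GAN is then finetuned on these (GTA mel, original waveform) pairs instead of on mels computed directly from audio. The sketch below is purely conceptual and uses a hypothetical method name; the real gta_extractor.py and hifi-gan training code differ in structure.

# Conceptual sketch of GTA data preparation; the real gta_extractor.py differs in structure.
import torch

@torch.no_grad()
def extract_gta_mel(vc_model, text, gt_mel, speaker, save_path):
    # Run the trained VC decoder with teacher forcing on the ground-truth mel,
    # so the generated mel stays time-aligned with the original audio.
    gta_mel = vc_model.teacher_forced_inference(text, gt_mel, speaker)  # hypothetical method name
    torch.save(gta_mel.cpu(), save_path)  # e.g. saved next to the wav as *.wav.gta

# HiFi-GAN is then finetuned on (GTA mel, original waveform) pairs.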

After extracting the GTA mels, go into hifi-gan and follow the instructions in hifi-gan/README.md:

cd hifi-gan

Monitoring via Tensorboard

The progress of training, including loss values and validation outputs, can be monitored with TensorBoard. By default, the logs are stored at logs/cota or logs/vc, which can be changed by editing the log.log_dir parameter in the config yaml file.

tensorboard --logdir logs/cota --bind_all # Cotatron - Scalars, Images, Hparams, Projector will be shown.
tensorboard --logdir logs/vc --bind_all # VC decoder - Scalars, Images, Hparams will be shown.

Pre-trained Weight

We provide a pretrained Assem-VC model and GTA-finetuned HiFi-GAN generator weights. Assem-VC was trained on VCTK and LibriTTS, and HiFi-GAN was finetuned on VCTK.

  1. Download our published models and configurations.
  2. Place global/config.yaml at config/global/config.yaml, and vc/config.yaml at config/vc/config.yaml.
  3. Download f0s.txt and write its relative path at hp.data.f0s_list_path (the default path is f0s.txt).
  4. Write the paths of the pretrained Assem-VC and HiFi-GAN models in inference.ipynb.

Inference

After the VC decoder and HiFi-GAN are trained, you can use an arbitrary speaker's speech as the source and convert it to any speaker contained in the training set, i.e., any-to-many voice conversion.

  1. Add your source audio (.wav) to datasets/inference_source.
  2. Add the following line to datasets/inference_source/metadata_origin.txt:
    your_audio.wav|transcription|speaker_id
    
    Note that speaker_id has no effect here, whether or not it is in the training set.
  3. Convert datasets/inference_source/metadata_origin.txt into ARPABET:
    python datasets/g2p.py -i datasets/inference_source/metadata_origin.txt \
                            -o datasets/inference_source/metadata_g2p.txt
  4. Run inference.ipynb.

We provide three source audio samples, including a single TTS sample from the VITS demo page.

Note that the source speech should be clean and its volume should not be too low.

Results

Disclaimer: We used an open-source g2p system in this repository, which is different from the proprietary g2p mentioned in the paper. Hence, the quality of the result may differ from the paper.

Implementation details

Here are some noteworthy implementation details that could not be included in our paper due to lack of space:

  • Guided attention loss
    We applied the guided attention loss proposed in DC-TTS. It stabilized Cotatron's alignment learning and sped up convergence. See modules/alignment_loss.py.
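
For reference, the guided attention loss penalizes attention weights that are far from the diagonal of the (text, mel) alignment matrix. The sketch below is minimal; modules/alignment_loss.py may differ in masking, scheduling, and reduction details.

# Minimal sketch of a DC-TTS-style guided attention loss;
# modules/alignment_loss.py may differ in masking and reduction details.
import torch

def guided_attention_loss(attn, g=0.2):
    """attn: attention weights of shape (batch, text_len, mel_len)."""
    _, n_text, n_mel = attn.shape
    n = torch.arange(n_text, device=attn.device).float() / max(n_text - 1, 1)
    t = torch.arange(n_mel, device=attn.device).float() / max(n_mel - 1, 1)
    # Penalty grows as attention moves away from the diagonal.
    w = 1.0 - torch.exp(-((n[:, None] - t[None, :]) ** 2) / (2.0 * g * g))
    return (attn * w.unsqueeze(0)).mean()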

License

BSD 3-Clause License.

Citation & Contact

@INPROCEEDINGS{kim2021assem, 
  title={ASSEM-VC: Realistic Voice Conversion by Assembling Modern Speech Synthesis Techniques},   
  author={Kim, Kang-Wook and Park, Seung-Won and Lee, Junhyeok and Joe, Myun-Chul},  
  booktitle={ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},   
  year={2022},  
  volume={}, 
  number={},  
  pages={6997-7001},  
  doi={10.1109/ICASSP43922.2022.9746139}}

@article{kim2021controllable,
  title={Controllable and Interpretable Singing Voice Decomposition via Assem-VC},
  author={Kim, Kang-wook and Lee, Junhyeok},
  journal={NeurIPS 2021 Workshop on Machine Learning for Creativity and Design},
  year={2021}
}

If you have a question or any kind of inquiries, please contact Kang-wook Kim at [email protected]

Repository structure

.
├── LICENSE
├── README.md
├── cotatron.py
├── cotatron_trainer.py         # Trainer file for Cotatron
├── gta_extractor.py            # GTA mel spectrogram extractor
├── inference.ipynb
├── preprocess.py               # Extracting speakers' pitch range
├── requirements.txt
├── synthesizer.py
├── synthesizer_trainer.py      # Trainer file for VC decoder (named as "synthesizer")
├── config
│   ├── cota
│   │   └── default.yaml        # configuration template for Cotatron
│   ├── global
│   │   └── default.yaml        # configuration template for both Cotatron and VC decoder
│   └── vc
│        └── default.yaml       # configuration template for VC decoder
├── datasets                    # TextMelDataset and text preprocessor
│   ├── __init__.py         
│   ├── g2p.py                  # Using G2P to convert metadata's transcription into ARPABET
│   ├── resample.py             # Python file for audio resampling
│   ├── text_mel_dataset.py
│   ├── inference_source
│   │    (omitted)              # custom source speeches and transcriptions for inference.ipynb
│   ├── inference_target
│   │    (omitted)              # target speeches and transcriptions of VCTK for inference.ipynb
│   ├── metadata
│   │    (omitted)              # Refer to README.md within the folder.
│   └── text
│        ├── __init__.py
│        ├── cleaners.py
│        ├── cmudict.py
│        ├── numbers.py
│        └── symbols.py
├── docs                        # Audio samples and code for https://mindslab-ai.github.io/assem-vc/
│   (omitted)
├── hifi-gan                    # Modified HiFi-GAN vocoder (https://github.com/wookladin/hifi-gan)
│   (omitted)
├── modules                     # All modules that compose model, including mel.py
│   ├── __init__.py
│   ├── alignment_loss.py       # Guided attention loss
│   ├── attention.py            # Implementation of DCA (https://arxiv.org/abs/1910.10288)
│   ├── classifier.py
│   ├── cond_bn.py
│   ├── encoder.py
│   ├── f0_encoder.py
│   ├── mel.py                  # Code for calculating mel-spectrogram from raw audio
│   ├── tts_decoder.py
│   ├── vc_decoder.py
│   └── zoneout.py              # Zoneout LSTM
└── utils                       # Misc. code snippets, usually for logging
    ├── loggers.py
    ├── plotting.py
    └── utils.py

References

This implementation uses code from the following repositories:

This README was inspired by:

The audio samples on the demo page of Assem-VC and the demo page of Assem-Singer are partially derived from:

  • LibriTTS: Dataset for multispeaker TTS, derived from LibriSpeech.
  • VCTK: 46 hours of English speech from 108 speakers.
  • KSS: Korean Single Speaker Speech Dataset.
  • CSD: Children's Song Dataset for Singing Voice Research.
  • NUS-48E: NUS-48E Sung and Spoken Lyrics Corpus.

assem-vc's People

Contributors

wookladin


assem-vc's Issues

Trouble importing AttrDict

Hi there - I'm having trouble running the inference code. Can't seem to import AttrDict from env.

ImportError Traceback (most recent call last)
<ipython-input-1-7f2d8b5f7f6d> in <module>
15
16 from omegaconf import OmegaConf
---> 17 from env import AttrDict
18
19 from synthesizer import Synthesizer

ImportError: cannot import name 'AttrDict'

Things I've tried:

  • !pip install env
  • Installed AttrDict=2.0.0=py36_0 via conda
  • Made sure Python=3.6.8

Any help would be appreciated :)

Slurred output when training on Korean data

Hello, I'm a developer who has been testing various things with the code you open-sourced.
I've been testing on Korean data such as KSS and National Institute of Korean Language data,
but the conversion output sounds slurred, as if spoken with a lisp. Why might this be?
I did properly change the text settings to kor and korean_cleaners before training.
Could it be that the Cotatron or the Synthesizer side is simply undertrained?

Question about many-to-many and any-to-any

Hello! I'm a student studying speech synthesis.

I really enjoyed the paper! Listening to the audio samples, I was amazed by the performance.

This is the first VC paper I've read, so a few questions came up.

  1. I don't understand what many-to-many and any-to-many mean.
    My understanding is that many-to-many uses speakers that were seen during training at inference time,
    while any-to-many uses speakers that were not seen during training at inference time. Is that correct?

  2. I looked up GTA finetuning but couldn't find much material on it. From reading the paper, my understanding
    is that the already-trained model is additionally finetuned on mels that have passed through Assem-VC. Is that correct?

Thank you.

Pre-trained model

Could you please tell us when you are planning to release the pre-trained model?
Is it possible for you to provide some kind of loss graph, or just the number of training steps needed for each module to converge on the LibriTTS+VCTK dataset? That way we could estimate whether it is possible for mere mortals to train the model without multiple advanced GPUs...
Could you elaborate on the audio normalization mentioned in your paper? Is it implemented somewhere in your project, or should we process the audio files by some other means?
Thank you!

How to split singing voices

Hi, I am trying to reproduce the results presented in the paper "Controllable and Interpretable Singing Voice Decomposition via Assem-VC" with CSD, NUS-48E, and also with custom datasets. The paper says that "all singing voices are split between 1-12 seconds and used for training with corresponding lyrics". I understand that the original .wav files of the datasets need to be split into shorter .wav files before building the metadata files with the format "path_to_wav|transcription|speaker_id". However, I can't find any code in the repository for doing this. How is this splitting process done? Is it done manually for all the datasets?

Thanks!

Speaker encoder

Hello. The paper says that a speaker encoder was used to obtain the speaker representation instead of the commonly used lookup table/embedding, but there is no further explanation of this part. Could you share the detailed architecture or the paper you referenced? Also, similar approaches such as Attentron and DeepSinger support zero-shot/any-to-any conversion, where the target does not have to be in the training set and a single reference is enough. I'm curious whether you experimented with that as well.

One-to-Many

Can this be used as a "one-to-many" conversion model? I have a few unpaired datasets, about 1 hour each, 1k+ sentences, SNR > 40 dB. I want to know whether I can use this project as a "one-to-many" (i.e., one-vs-rest) model.

Reason to use speaker encoder over speaker embeddings?

What was the reason you switched from speaker embeddings (Cotatron) to a speaker encoder (this repo)? Was it because it worked better, or was it to support any-to-any voice conversion? I'm curious because I am currently trying to deploy my own architecture and can't really decide between the two.

Possible bottleneck?

I got the following warning:

/usr/local/lib/python3.8/dist-packages/pytorch_lightning/utilities/distributed.py:45: UserWarning: Dataloader(num_workers>0) and ddp_spawn do not mix well! Your performance might suffer dramatically. Please consider setting distributed_backend=ddp to use num_workers > 0 (this is a bottleneck of Python .spawn() and PyTorch

Is this OK?

Code release?

When can we expect a code release? Any timetable for that?

other models

I see that there is a Korean model in the samples; how does this step work in that case?
python datasets/g2p.py -i <input_metadata_filename_with_graphemes> -o <output_filename>

I see something like this in the dataset. How should Korean and English be handled?
1/1_0000.wav|그는 괜찮은 척하려고 애쓰는 것 같았다.|그는 괜찮은 척하려고 애쓰는 것 같았다.|그는 괜찮은 척하려고 애쓰는 것 같았다.|3.5|He seemed to be pretending to be okay.

Regarding teacher forcing to calculate alignment

Hi,
The alignment for the i-th step does not use the mel frame of the i-th step. Even if teacher forcing is used, we are essentially predicting the alignment at every step using previous mel frames and not utilizing the actual ground-truth mel frame.

We could do one of these instead:

  • Since M_i depends on A_i and since we know the ground-truth M_i, we can freeze the network weights and find A_i using autograd.

  • We could also use a phoneme aligner like the Montreal Forced Aligner (MFA) to get the alignment matrix directly.

Build custom non-English dataset with ARPABET

Hi, thanks for open-sourcing this project. I'm a newbie in VC and I am trying to add a new speaker to Assem-VC.
In the Preparing Metadata section, @wookladin uses python datasets/g2p.py to convert transcriptions into ARPABET.
For a custom dataset in a language other than English, e.g. Mandarin Chinese, how should the metadata be built?
I searched for g2p and found https://github.com/kakaobrain/g2pM, a grapheme-to-phoneme conversion tool for Chinese, but its output is Pinyin, not ARPABET. This really confuses me. Could we use Pinyin for Chinese to build the metadata?

Extending to n+1 target speakers using pretrained Cotatron

Hello,

How would I extend this model to n+1 target speakers to perform any/many-to-many conversion? When I increase the number of speakers to include the speakers in LibriTTS + our dataset and use the pretrained Cotatron weights, I get an embedding mismatch error when attempting to train the decoder, because the embedding dimensions are derived from the speakers_list in the global config.yaml. Do I simply keep the speakers_list the same, i.e. not include our dataset's speaker names (include only LibriTTS + VCTK), but train the decoder/synthesizer on the combined data, which includes LibriTTS + our dataset?

Thanks

Question about GTA mel-spectrograms

I'm trying to understand the GTA part of your paper, which seems to have a huge influence, and I'm unsure whether I understood it correctly. This much I understood: you have two networks, one which maps the source and target speaker mel-spectrograms and the transcription to a transformed spectrogram, and the vocoder, which maps the transformed spectrogram to a waveform.

You first train the first network. Then, instead of transforming the waveform into a spectrogram and using that as input to train the vocoder, you pass the audio through your proposed network and use the output as input to train the vocoder. Is that correct?

teacher-forcing

Have you tried setting the teacher forcing rate to 1.0 while training Cotatron?

Cross-lingual supported?

Hey, thank you for sharing your great work!
I'm a student currently working on cross-lingual VC. Your Assem-VC framework gives excellent results for VC within a single language. I'm wondering whether this framework also works for cross-lingual VC. Is there any way to adapt the Cotatron model to support multiple languages?

Training HIFI-GAN faster

Hi @wookladin ,

I was trying to fine-tune HiFi-GAN for a single-speaker dataset (20 minutes of audio), and the training time per epoch was around 35 seconds. This seems too long. Any ideas on how to make it faster? I'm using a single T4 GPU machine; I could use a bigger machine with V100 GPUs based on your suggestion.

Changing the sampling rate

Hi, I have a voice dataset that is sampled at 16 kHz. I saw that inside the config file there is a specific instruction not to change the audio part of the config, including the sampling rate. Is there a way for me to adapt the code to a different sampling rate?

Speech+Transcript conditioned phoneme recognition as an alternative to G2P

Hi @wookladin ,
While creating the training data, G2P gives phonemes based on how a particular word is supposed to be pronounced, but the audio might have a slightly different pronunciation due to various accents. I understand that you used a proprietary G2P for better results, but g2p models only utilize transcript information.

  1. A speech+transcript-conditioned phoneme recognizer would give better results, wouldn't it?
  2. Phoneme error rates are still high in the latest ASR acoustic models. Usually, ASR acoustic models predict not-so-accurate phonemes and ASR language models predict the transcript from the phonemes. But here we want to improve the accuracy of the phonemes given both the audio and the transcript. I couldn't find any literature on that. Any leads/ideas?

Best way to extend the model to a new speaker

Hi,
I have a 15-minute recording of a new speaker. I'd like to train Assem-VC to perform any-to-one voice conversion. Based on my previous experience, the best and fastest way to do it would be to create a single-speaker dataset and further fine-tune both the pre-trained VC decoder and the pre-trained HiFi-GAN vocoder.

  1. Am I correct or is there a better way to do it?
  2. How would the training steps change to fine-tune the given pre-trained models?

Audio samples sound great! I have a few questions.

Hello!

Very excited for the code release this June, I'm interested to see how my datasets hold up when trained with your model. I've got a few questions so that I can adequately prep my datasets:

  1. Do I only need audio to train a speaker, or do I need audio + text transcriptions?
  2. What specific format and specifications should the audio be in?
  3. Do the speakers need to have the exact same utterances to train, or can they be different utterances?
  4. Do you think this model would be able to perform well doing Any to Many or Many to Many real-time? For example, if the input speaker was a microphone feed.

Thanks for reading, good luck w/ the code release!

How can I train the model on Korean phonemes?

I'm an undergraduate who recently became interested in speech synthesis and started studying it. This project is really impressive.

I heard the Korean speech synthesis on the demo page, but there isn't one on the 'singer' demo, so I'd like to build the model myself.

I changed the config to kor and korean_cleaners.
Below is part of the metadata.txt I have put together so far:

  • wavs\나타내고 싶지 않다고_ 생각하고 있습니다..wav|나타내고 싶지 않다고? 생각하고 있습니다.|001
  • wavs\나타내고 싶지 않다고_ 생각하고 있습니다..wav|나타네고 십찌 안타고? 셍가카고 읻씀니다.|001
  • wavs\나타내고 싶지 않다고_ 생각하고 있습니다..wav|{NA TA NAE GO} {SIP JI} {AN TA GO} {SAENG GA KA GO}? {IT SEUP NI DA}.|001

Is this approach correct?
Any advice would be appreciated.
