
AnimeGAN

A TensorFlow implementation of AnimeGAN for fast photo animation.     日本語
This is the open-source code for the paper 「AnimeGAN: a novel lightweight GAN for photo animation」, which uses the GAN framework to transform real-world photos into anime-style images. The paper can be accessed here or on the website.

If you like what I'm doing, you can tip me on Patreon.

Colab for photos

Colab for videos



Some suggestions:

  1. Since the real photos in the training set are all landscape photos, if you want to stylize photos with people as the main subject, add at least 3000 photos of people to the training set and retrain to obtain a new model.
  2. To obtain a better face animation effect, when using paired images for training, the faces in the photos and the faces in the anime style data should be as consistent as possible in terms of gender.
  3. The generated stylized images are affected by the overall brightness and tone of the style data, so try not to select nighttime anime images as style data, and apply exposure compensation to the whole style dataset to keep its brightness consistent (see the sketch after this list).
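The repo does not ship an exposure-compensation script, so here is a minimal sketch of what point 3 could look like: nudge each style image's mean luminance toward the dataset-wide mean. The folder path and the simple gain-based correction are assumptions, not the authors' actual procedure.

import cv2
import numpy as np
from glob import glob

paths = glob('dataset/Hayao/style/*.jpg')  # assumed style-data location
# Pass 1: dataset-wide mean luminance.
target = np.mean([cv2.imread(p, cv2.IMREAD_GRAYSCALE).mean() for p in paths])
# Pass 2: scale each image so its mean luminance matches the target.
for p in paths:
    img = cv2.imread(p).astype(np.float32)
    gain = target / max(cv2.imread(p, cv2.IMREAD_GRAYSCALE).mean(), 1e-6)
    cv2.imwrite(p, np.clip(img * gain, 0, 255).astype(np.uint8))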

News:
      AnimeGANv2 has been released and can be accessed here.

The improvements in AnimeGANv2 mainly cover the following four points:

   1. Solves the problem of high-frequency artifacts in the generated images.
   2. It is easier to train and directly achieves the effects in the paper.
   3. Further reduces the number of parameters in the generator network.
   4. Uses new high-quality style data, taken from BD (Blu-ray) movies as much as possible.


Requirements

  • python 3.7
  • tensorflow-gpu 1.15.0 (ubuntu, GPU 2080Ti, cuda 10.0.130, cudnn 7.6.0)
  • opencv
  • tqdm
  • numpy
  • glob (Python standard library; no install needed)
  • argparse (Python standard library; no install needed)

Usage

1. Inference

e.g. python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/real --style_name H

2. Convert video to anime

e.g. python video2anime.py --video video/input/お花見.mp4 --checkpoint_dir ./checkpoint/generator_Hayao_weight

3. Train

1. Download vgg19 or Pretrained model

vgg19.npy

Pretrained model

2. Download dataset

Link

3. Do edge_smooth

e.g. python edge_smooth.py --dataset Hayao --img_size 256

4. Train

e.g. python train.py --dataset Hayao --epoch 101 --init_epoch 5

5. Extract the weights of the generator

e.g. python get_generator_ckpt.py --checkpoint_dir ../checkpoint/AnimeGAN_Hayao_lsgan_300_300_1_1_10 --style_name Hayao


Results

😊 Pictures from the paper - AnimeGAN: a novel lightweight GAN for photo animation




😍 Photo to Hayao Style












License

This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGAN provided that you agree to my license terms. For commercial use, please contact us via email so we can help you obtain an authorization letter.

Author

Xin Chen, Gang Liu, Jie Chen

Acknowledgment

This code is based on CartoonGAN-Tensorflow and Anime-Sketch-Coloring-with-Swish-Gated-Residual-UNet. Thanks to the contributors of those projects.


Issues

Confuse with ConvBlock(K1,S1) in DSConv.

When I read your paper, I found a ConvBlock(K1,S1) inside the DSConv block.
[figure from the paper]
But I couldn't find ConvBlock(K1,S1) in your network code.

def Separable_conv2d(inputs, filters, kernel_size=3, strides=1, padding='VALID', Use_bias=None):
    # Reflection-pad manually so that 'VALID' convolution preserves the spatial
    # size at stride 1, or exactly halves it at stride 2.
    if kernel_size == 3 and strides == 1:
        inputs = tf.pad(inputs, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="REFLECT")
    if strides == 2:
        inputs = tf.pad(inputs, [[0, 0], [0, 1], [0, 1], [0, 0]], mode="REFLECT")
    # Depthwise conv (depth_multiplier=1) followed by a pointwise conv,
    # then instance norm and LReLU -- i.e. the DSConv block from the paper.
    return tf.contrib.layers.separable_conv2d(
        inputs,
        num_outputs=filters,      # channel count of the built-in pointwise conv
        kernel_size=kernel_size,
        depth_multiplier=1,
        stride=strides,
        biases_initializer=Use_bias,
        normalizer_fn=tf.contrib.layers.instance_norm,
        activation_fn=lrelu,
        padding=padding)

I thought Separable_conv2d corresponds to the DSConv block; am I right?
How should I think about this?

Thanks again!
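A hedged note: in tf.contrib.layers.separable_conv2d the pointwise 1x1 convolution is built in (num_outputs is its channel count), so the ConvBlock(K1,S1) from the paper's DSConv diagram is arguably already inside that single call. If you wanted it as an explicit, separate stage, a sketch (assuming the repo's lrelu helper) could be:

def ConvBlock_k1s1(inputs, filters):
    # Explicit pointwise stage: 1x1 conv -> instance norm -> LReLU.
    return tf.contrib.layers.conv2d(
        inputs, num_outputs=filters, kernel_size=1, stride=1,
        normalizer_fn=tf.contrib.layers.instance_norm,
        activation_fn=lrelu, padding='VALID')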

A JavaScript implementation is available

Hi @TachibanaYoshino ,

Wonderful work! AnimeGAN is really amazing to me, and the images on the README look really good. However, for people who do not have much knowledge of deep learning or even Python but just want to use this work to create an animated version of their photos, installing TensorFlow on a platform such as Windows/Linux can be challenging, which can limit people's enthusiasm. Therefore, starting from the pre-trained model you released, I ported AnimeGAN to TensorFlow.js, allowing anyone who can reach the Internet through a browser to do photo animation.

The project is here: https://github.com/TonyLianLong/AnimeGAN.js.
You can have a try here: https://animegan.js.org/.

The implementation uses WebGL or, if not supported, other backends to accelerate computing, but does not require the user to install anything.

Writing this message, I wonder whether you could add a link to AnimeGAN.js in AnimeGAN's README as a demo. That would benefit this project, since people could upload their own photos and try AnimeGAN out in their browser in addition to viewing the examples you give, which leaves a deeper impression than a collection of example images. By a demo link, I mean in the way this project lists other implementations, but suggestions are always welcome. If you'd like, we could discuss further steps and work together.

Dear author, I have some questions about training

  1. Do edge_smooth
    e.g. python edge_smooth.py --dataset Hayao --img_size 256

  2. Calculate the three-channel (BGR) color difference
    e.g. python data_mean.py --dataset Hayao
    1. Are these two steps required for training? I have searched the whole web and still don't understand what they are for; maybe I'm just too inexperienced. Do I also need to run these two programs at inference time?
    2. I tried to stylize some dark scenes and explosion scenes and got very bad results. Is that because such images are missing from the training set? If I collected some explosion shots from action anime, would that improve things?
    3. The training-set images are all quite small. Can I customize the input image shape, e.g. feed 1080p directly?
    Thanks, author; I hope to receive your guidance.
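For context, a hedged sketch of what data_mean.py presumably computes: the per-channel BGR mean of the style dataset relative to its gray mean, which the data loader can later add back to compensate the color tone. The path and exact formula are assumptions, not verified against the repo:

import cv2
import numpy as np
from glob import glob

paths = glob('dataset/Hayao/style/*.jpg')  # assumed style-data location
bgr = np.zeros(3, dtype=np.float64)
gray = 0.0
for p in paths:
    img = cv2.imread(p)
    bgr += img.reshape(-1, 3).mean(axis=0)     # per-channel B, G, R means
    gray += cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).mean()
bgr, gray = bgr / len(paths), gray / len(paths)
print('data_mean (B, G, R offsets):', bgr - gray)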

Inference time so slow?

Hi,
I am not sure where I went wrong, but when I run inference with test.py as described in the README, each image needs more than 2 seconds to build the fake image. In the paper, inference takes just 50 ms per image. What am I doing wrong? Does image size have a big impact on inference speed?
Thank you.

Installation

Is there any tutorial on how to install on Windows 10?

I cannot load the checkpoint

My TensorFlow version is 1.13.1, because tf==1.8 is too old and I cannot install it.
I cannot load the AnimeGAN.model-60 weights with tf.train.get_checkpoint_state.

What I did:
downloaded Haoyao-style.zip
and unzipped it into "checkpoint/",
so the checkpoint folder contains:

checkpoint
AnimeGAN.model-60.index
AnimeGAN.model-60.meta
AnimeGAN.model-60.data-00000-of-00001

and I used the test script:
python test.py --checkpoint_dir checkpoint/AnimeGAN.model-60 --test_dir dataset/test/real --style_name H

but it failed:
ckpt is None from tf.train.get_checkpoint_state(checkpoint/AnimeGAN.model-60)
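A hedged note: tf.train.get_checkpoint_state expects the directory that holds the "checkpoint" bookkeeping file, not the AnimeGAN.model-60 prefix itself, which would explain ckpt being None here. A minimal sketch, with the directory name taken from the listing above:

import tensorflow as tf

ckpt = tf.train.get_checkpoint_state('checkpoint')  # pass the folder, not the model prefix
if ckpt and ckpt.model_checkpoint_path:
    saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta')
    with tf.Session() as sess:
        saver.restore(sess, ckpt.model_checkpoint_path)  # e.g. checkpoint/AnimeGAN.model-60
else:
    print('No checkpoint found -- check that the "checkpoint" bookkeeping file exists')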

Train error

Hi, I tried training but an error occurred. How do I fix it?

[*] Reading checkpoints...
[*] Failed to find a checkpoint
[!] Load failed...
2020-12-08 21:29:36.529360: W T:\src\github\tensorflow\tensorflow\core\framework\op_kernel.cc:1306] Invalid argument: TypeError: Cannot cast ufunc add output from dtype('<U32') to dtype('float32') with casting rule 'same_kind'
Traceback (most recent call last):

File "C:\Users\VGLAB\Anaconda3\lib\site-packages\tensorflow\python\ops\script_ops.py", line 157, in call
ret = func(*args)

File "C:\Users\VGLAB\Desktop\GAN\AnimeGAN-master\AnimeGAN-master\tools\data_loader.py", line 64, in load_image
image1, image2 = self.read_image(img1)

File "C:\Users\VGLAB\Desktop\GAN\AnimeGAN-master\AnimeGAN-master\tools\data_loader.py", line 45, in read_image
image1[:,:,0] += self.data_mean[2]

TypeError: Cannot cast ufunc add output from dtype('<U32') to dtype('float32') with casting rule 'same_kind'
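A hedged diagnosis: dtype('<U32') means data_mean arrived as strings, typically because the BGR mean values were passed on the command line or read from text and never cast to float. A minimal, self-contained sketch of the fix (the values are illustrative):

import numpy as np

# data_mean read as text -> numpy unicode array; cast before the loader uses it.
data_mean = np.asarray(['13.1', '-8.7', '-4.4'], dtype=np.float32)  # illustrative values
image1 = np.zeros((4, 4, 3), dtype=np.float32)
image1[:, :, 0] += data_mean[2]   # now float += float, no casting error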

PyTorch implementation

Hi @TachibanaYoshino,

Great work! Your project is really interesting, so I decided to implement the algorithm in PyTorch based on your TensorFlow code.

I tried to make the architectures of G and D the same as yours and to train with the same config (loss weights), but the result was a little different, so I tuned the loss weights to get the best result.

Currently, my results look like anime, but some generated images still have artifacts. Maybe I need to train longer.

Could you add this implementation to your README.md? I think it would help people looking for a PyTorch version 😄

Thank you very much.

my repo: https://github.com/ptran1203/pytorch-animeGAN

Some of my results:

[result images: 1 (46)_anime, 1 (36)_anime]

Question about style

Hi,

I ran python test.py --checkpoint_dir checkpoint/generator_Hayao_weight --test_dir dataset/test/real --style_name H and got nice results, as you showed!

There are three styles: Hayao, Paprika, and Shinkai. I notice that you used style_name=H. Does this mean you use capital letters for the other ones too, S for Shinkai and P for Paprika?

I tried setting --style_name S and --style_name Shinkai, but both still give the same result as --style_name H.

GPU RAM and batch size

I use a 1070 with 8 GB.
Batch size 2: OK.
Batch size 3: OK.
Batch size 4: out of memory.

Is the author's video memory 8 GB, or higher (11 GB)? With batch = 4, does the author also get an error, or is my graphics card configured incorrectly?

Tips on training on a new dataset

Hi,

Love the work! I have managed to train the model with the existing dataset. I want to train it to generate better images with people. As you suggested, I've added about 2000 photos of various people to train_photo. However, it looks like the discriminator loss converges to 0 after a few epochs and the generator loss gets stuck at a high number when I retrain the model.

Does this mean the discriminator is learning too fast? Is there a parameter I should tune, or is my choice of data not good enough?
Am I supposed to replace the photos in the train_photo folder or just add new ones?

Excuse me if this sounds really amateurish; I'm quite new at this.

Baidu and vgg19.npy links are dead

Hi,
What great work! The results look really amazing.

It seems the Baidu and vgg19.npy links are dead; I can't download them.
Can you reupload them?

Thanks.

Weird results

When I run

python3 main.py --phase test --dataset Hayao --checkpoint_dir checkpoint/AnimeGAN.model-60

I get strange results like these:

[two example output images]

What might be causing this?

No module named 'Brightness_tool'

With all due respect, I believe the Brightness_tool directory needs to be outside the AnimeGAN+ directory. I tried to resolve the issue by introducing an __init__.py file, but that did not work; I am not really good at Python. But after the commit 4 hours ago, the code does not run unless I put the Brightness_tool directory outside of AnimeGAN+.

How to use? I want to convert a picture to an anime picture

If I want to convert a picture to an anime picture, what command should I use?

I installed the environment using a Docker image, put my pictures in the "samples" folder, and ran
docker run -it xxxx python main.py

I expected AnimeGAN to process my pictures, but it started training on the pictures in the dataset folder, step by step, slowly; I stopped it with Ctrl+C once it had run 100+ steps.

I just want to convert my pictures to anime. What am I doing wrong, and what should I do instead?

The loss of the discriminator keeps decreasing

Hello, can you help me?

I rewrote AnimeGAN in TensorFlow 2.0. All hyperparameters are the same as in your code.

But I found that during training, the generator at first produced correct images, but later the discriminator's loss gradually decreased, and the generated images lost their anime style.

loss values

[loss curve: animegan-1]

generated image

[anime]

My thoughts:

At first I suspected the loss function was wrong, so I tried DRAGAN, but the effect was not as good as LSGAN.

I'm currently trying to increase the update ratio to Generator : Discriminator = 3 : 1 (see the sketch below).
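For reference, one common way to express that 3 : 1 schedule in TF2 is to step the discriminator only every third iteration. This is a minimal sketch with stand-in models and dummy data; G, D, the LSGAN losses, and the data are placeholders, not the repo's code:

import tensorflow as tf

G = tf.keras.Sequential([tf.keras.layers.Conv2D(3, 3, padding='same')])  # stand-in generator
D = tf.keras.Sequential([tf.keras.layers.Conv2D(1, 3, padding='same')])  # stand-in discriminator
g_opt, d_opt = tf.keras.optimizers.Adam(2e-4), tf.keras.optimizers.Adam(2e-4)

def g_step(photo):
    with tf.GradientTape() as tape:
        g_loss = tf.reduce_mean(tf.square(D(G(photo)) - 1.0))            # LSGAN generator loss
    g_opt.apply_gradients(zip(tape.gradient(g_loss, G.trainable_variables),
                              G.trainable_variables))

def d_step(photo, anime):
    with tf.GradientTape() as tape:
        d_loss = 0.5 * (tf.reduce_mean(tf.square(D(anime) - 1.0)) +      # real -> 1
                        tf.reduce_mean(tf.square(D(G(photo)))))          # fake -> 0
    d_opt.apply_gradients(zip(tape.gradient(d_loss, D.trainable_variables),
                              D.trainable_variables))

photo, anime = tf.zeros([1, 64, 64, 3]), tf.zeros([1, 64, 64, 3])        # dummy pair
for step in range(9):
    g_step(photo)
    if step % 3 == 0:                                                    # D updated once per 3 G updates
        d_step(photo, anime)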

g_loss is very high

Dear author,
Thanks for sharing this great repo; it's helpful!
While reproducing your experiment, I noticed the d & g losses look like this:

Epoch: 108 Step:   456  time: 0.595382 s d_loss: 11.78779125, g_loss: 1069.37170410 -- mean_d_loss: 15.22266197, mean_g_loss: 1128.97387695
Epoch: 108 Step:   457  time: 0.598991 s d_loss: 21.59355354, g_loss: 1145.24707031 -- mean_d_loss: 15.33250523, mean_g_loss: 1129.25439453
Epoch: 108 Step:   458  time: 0.592521 s d_loss: 6.78862953, g_loss: 1130.46630859 -- mean_d_loss: 15.18769360, mean_g_loss: 1129.27490234
Epoch: 108 Step:   459  time: 0.594440 s d_loss: 1.89800203, g_loss: 1081.30371094 -- mean_d_loss: 14.96619892, mean_g_loss: 1128.47534180
Epoch: 108 Step:   460  time: 0.595921 s d_loss: 65.07347870, g_loss: 1352.42736816 -- mean_d_loss: 15.78763008, mean_g_loss: 1132.14672852
Epoch: 108 Step:   461  time: 0.595723 s d_loss: 60.10780716, g_loss: 1378.10986328 -- mean_d_loss: 16.50247002, mean_g_loss: 1136.11389160
Epoch: 108 Step:   462  time: 0.596035 s d_loss: 2.82108736, g_loss: 1654.43518066 -- mean_d_loss: 16.28530502, mean_g_loss: 1144.34130859
Epoch: 108 Step:   463  time: 0.595019 s d_loss: 2.04391837, g_loss: 857.95996094 -- mean_d_loss: 16.06278419, mean_g_loss: 1139.86657715
Epoch: 108 Step:   464  time: 0.596555 s d_loss: 10.40408993, g_loss: 1090.06201172 -- mean_d_loss: 15.97572708, mean_g_loss: 1139.10034180
Epoch: 108 Step:   465  time: 0.595400 s d_loss: 7.73724365, g_loss: 965.87994385 -- mean_d_loss: 15.85090065, mean_g_loss: 1136.47583008

The g_loss is relatively large and does not decrease steadily. Is this normal? What does your training process look like? Thank you.

What license is used?

Super cool! Since you have released the project as open source, can you clarify which license the project uses? Thank you.

Baidu link is dead, or what?

Hi, I am very interested to see how far AnimeGAN can beat CartoonGAN, but the Baidu link (dataset) is dead.
Can you put it on Google Drive or MEGA for everyone?
I think this is a problem for everyone interested in your project~

Error: buffer_size must be greater than zero

How can I fix this?
By the way, you did great work! It's very fun, thanks!

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "main.py", line 112, in
main()
File "main.py", line 104, in main
gan.train()
File "/home/tom/workspace/AnimeGAN/AnimeGAN.py", line 232, in train
anime, anime_smooth, real = self.sess.run([anime_img_op, anime_smooth_op, real_img_op])
File "/root/anaconda3/envs/testa/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/root/anaconda3/envs/testa/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/root/anaconda3/envs/testa/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/root/anaconda3/envs/testa/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: buffer_size must be greater than zero.
[[Node: ShuffleDataset = ShuffleDataset[output_shapes=[[]], output_types=[DT_FLOAT], reshuffle_each_iteration=true](RepeatDataset, ShuffleDataset/buffer_size, ShuffleDataset/seed, ShuffleDataset/seed2)]]
[[Node: OneShotIterator = OneShotIteratorcontainer="", dataset_factory=_make_dataset_25WVjNoEiQY[], output_shapes=[, ], output_types=[DT_FLOAT, DT_FLOAT], shared_name="", _device="/job:localhost/replica:0/task:0/device:CPU:0"]]
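A hedged note: the tf.data shuffle buffer here is presumably sized from the number of images found, so an empty (or mis-pathed) dataset folder yields buffer_size = 0 and exactly this error. A quick pre-flight check, with folder names assumed from the README's Hayao example:

import os
from glob import glob

for folder in ('dataset/Hayao/style', 'dataset/Hayao/smooth', 'dataset/train_photo'):
    n = len(glob(os.path.join(folder, '*')))
    print(folder, '->', n, 'files')
    assert n > 0, folder + ' is empty; the shuffle buffer_size would be 0'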

Train Error

Hi,
I downloaded the project and opened it with PyCharm. I configured the environment as the README requires (1080Ti); running edge_smooth.py and data_mean.py was no problem. However, when I ran main.py, something went wrong:
[*] Reading checkpoints...
[*] Failed to find a checkpoint
[!] Load failed...
2021-04-12 13:15:23.718658: W tensorflow/core/framework/op_kernel.cc:1306] Invalid argument: UFuncTypeError: Cannot cast ufunc 'add' output from dtype('<U32') to dtype('float32') with casting rule 'same_kind'
Traceback (most recent call last):
File "****/anaconda3/envs/anime/lib/python3.6/site-packages/tensorflow/python/ops/script_ops.py", line 157, in __call__ ret = func(*args)
File "*****/AnimeGAN-master/tools/data_loader.py", line 70, in load_image image1, image2 = self.read_image(img1)
File "*****/AnimeGAN-master/tools/data_loader.py", line 45, in read_image image1[:,:,0] += self.data_mean[2]
numpy.core._exceptions._UFuncOutputCastingError: Cannot cast ufunc 'add' output from dtype('<U32') to dtype('float32') with casting rule 'same_kind'
Could you please help solve this problem or give some advice?
Best wishes!

sieg!

Your instructions don't work; look at my screenshot. What should I do? How do I transform my photo into anime? nya?
I did everything by the instructions.

[Screenshot 2020-05-06 at 00:53:31]

Upgrade to a newer TensorFlow version

Hi @TachibanaYoshino, your AnimeGAN is amazing; thanks for your great work!
Could you consider upgrading this repo to a newer TensorFlow version, e.g. 1.14 or 1.15? TensorFlow 1.8 only supports CUDA up to 9.x, but CUDA 9.x does not support newer NVIDIA RTX graphics cards like the RTX 2080 Ti. I ran into this problem and it blocked me for a few days. Here are two references:

https://stackoverflow.com/questions/50622525/which-tensorflow-and-cuda-version-combinations-are-compatible

tensorflow/tensorflow#23341

By the way, here is the error output when training AnimeGAN on my 2080 Ti + CUDA 9.0 machine:


# ignore...

2020-04-20 11:45:56.252939: E tensorflow/stream_executor/cuda/cuda_blas.cc:654] failed to run cuBLAS routine cublasSgemm_v2: CUBLAS_STATUS_EXECUTION_FAILED
Traceback (most recent call last):
  File "/home/bigdata/anaconda3/envs/animegan/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
    return fn(*args)
  File "/home/bigdata/anaconda3/envs/animegan/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File "/home/bigdata/anaconda3/envs/animegan/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.InternalError: Blas SGEMM launch failed : m=65536, n=128, k=64

# ignore

Please add a function to output a .pb file

Please add a function to output a .pb file.

I want to do this (convert a checkpoint to a .pb file), but I don't know how.

Could you post a message about the conversion, or add an output-pb function?

Example message from other software (checkpoint-to-pb conversion):

input = tf.placeholder(tf.float32, shape=[None, 36], name="Input")
output = tf.layers.dense(inputs=self.layer2, units=1, activation=tf.nn.relu, name="Prediction")


0: op name = import/Input, op type = ( Placeholder ), inputs = , outputs = import/Input:0
@input shapes:
@output shapes:
name = import/Input:0 : (?, 36)

19: op name = import/Prediction, op type = ( Relu ), inputs = import/add_5:0, outputs = import/Prediction:0
@input shapes:
name = import/add_5:0 : (?, 1)
@output shapes:
name = import/Prediction:0 : (?, 1)
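For reference, a hedged sketch of freezing a TF1 checkpoint into a .pb. The checkpoint directory matches the README's inference example, but the output node name is an assumption; inspect your graph (e.g. print op names) to find the generator's real output tensor:

import tensorflow as tf

ckpt = tf.train.get_checkpoint_state('checkpoint/generator_Hayao_weight')
with tf.Session() as sess:
    saver = tf.train.import_meta_graph(ckpt.model_checkpoint_path + '.meta')
    saver.restore(sess, ckpt.model_checkpoint_path)
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, tf.get_default_graph().as_graph_def(),
        ['generator/G_MODEL/Tanh'])  # hypothetical output node name
    with tf.gfile.GFile('AnimeGAN.pb', 'wb') as f:
        f.write(frozen.SerializeToString())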

Question while reproducing results

Great job!
I ran the code with Python 3.7.4 and TF 1.15.0 to reproduce the results, and I found it hard to train the Paprika style well with the default hyperparameter settings. It seems the GAN just transforms the image close to grayscale, and there are also some vertical lines.

[example outputs: 006_a, 006_b]

Is this caused by the Paprika data?
Note that I trained for 130 epochs. The results after the 100th epoch look better, but the green region still goes grayscale.

[example output: 006_b (1)]

Does that mean I just need to select a proper checkpoint during training? If so, how?

Test/validation RAM usage is very high

I tried to move the model to iPhone and tested it; RAM usage is very high and the system crashed.
I only used the inference part, but looking at the structure it seems to call the original training program. Could that be the reason for the large memory requirement?

I tried trimming the model to make it smaller, but I failed. Do you have a plan to reduce memory use?
For example:
self.test_generated = self.generator(self.test_real, reuse=False)
Of course I failed; the system reported an error.

In addition, I have run other GAN projects whose memory requirements are not this large. Why does this project need so much memory even outside of training, in the test/application part? (See the sketch below.)
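A hedged suggestion: building an inference-only graph, with no discriminator, VGG, or optimizer ops, usually shrinks memory substantially. A minimal pattern, with the generator passed in as a function since the repo's exact module layout is not shown here:

import tensorflow as tf

def build_inference_graph(generator_fn):
    # Only the generator subgraph; no training ops are ever constructed.
    tf.reset_default_graph()
    test_real = tf.placeholder(tf.float32, [1, None, None, 3], name='test_real')
    with tf.variable_scope('generator'):
        test_generated = generator_fn(test_real)  # generator_fn: the repo's generator builder
    return test_real, test_generated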

I got this error

File "D:\develop\workspace\AnimeGAN\AnimeGAN.py", line 232, in train
anime, anime_smooth, real = self.sess.run([anime_img_op, anime_smooth_op, real_img_op])

tensorflow.python.framework.errors_impl.InvalidArgumentError: [Derived]buffer_size must be greater than zero.
[[{{node OptimizeDataset/ShuffleDataset}}]]
[[OneShotIterator]]

ImportError: cannot import name 'trace'

I used a whole new conda environment, but when I run the example video-transfer command, this error occurs:

    from tensorflow_estimator.python.estimator import estimator
  File "C:\Users\XU\.conda\envs\anime\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 36, in <module>
    from tensorflow.python.profiler import trace
ImportError: cannot import name 'trace'

(anime) E:\AI_art\AnimeGANv2-master>

How to solve it please? Thanks

Why can't 'tools' be imported?

When I run edge_smooth.py (usage step 3), the terminal returns 'No module named tools'. How can I fix this?
Traceback (most recent call last): File "edge_smooth.py", line 2, in <module> from tools.utils import check_folder ModuleNotFoundError: No module named 'tools'
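A hedged note: this usually means the script was launched from inside the tools/ directory, so Python cannot see the repo root that contains the tools package. Either run it from the repository root (python tools/edge_smooth.py ...) or make the root importable; the directory layout is an assumption:

import os
import sys

# Prepend the repository root (the parent of this script's folder) to sys.path
# so that 'from tools.utils import check_folder' resolves.
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from tools.utils import check_folder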

Hello, is the FLOPs figure reported in the paper miscalculated?

When TensorFlow computes FLOPs, the input size must be fixed for the output to be correct.
With the size fixed at 256, FLOPs: 62650916141.
Without fixing it, FLOPs: 7937325.
In short, neither matches the number given in the paper; the paper is closer to the unfixed case.
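For reproducibility, a hedged sketch of the TF1 FLOPs count described above. The input must have a fully static shape, and the single conv layer is a stand-in for the repo's generator:

import tensorflow as tf

with tf.Graph().as_default() as g:
    x = tf.placeholder(tf.float32, [1, 256, 256, 3])  # fixed 256x256 input
    _ = tf.layers.conv2d(x, 64, 3)                     # stand-in for the generator network
    opts = tf.profiler.ProfileOptionBuilder.float_operation()
    flops = tf.profiler.profile(g, options=opts)
    print('FLOPs:', flops.total_float_ops)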

There are artifacts in the generated images

I used more Shinkai data and real images to train AnimeGAN, and got artifacts in my generated images, as follows:

[two example images]

I trained the model with the default parameters in your main.py.

I simply removed instance norm, following StyleGAN2, but that did not work.

I was wondering whether you had encountered this and what the solution was? Thanks.

The Baidu Netdisk link has expired

Hello, your model's results look great, and I'm very curious what the data looks like.
But the Baidu link you provided has expired; could you post a new one? Thanks!
