
zennas's Introduction


Zen-NAS: A Zero-Shot NAS for High-Performance Deep Image Recognition

Zen-NAS is a lightning-fast, training-free Neural Architecture Search (NAS) algorithm for automatically designing deep neural networks with high prediction accuracy and high inference speed on GPUs and mobile devices.

This repository contains pre-trained models, a mini framework for zero-shot NAS search, and scripts to reproduce our results. You can even customize your own search space and develop a new zero-shot NAS proxy using our pipeline. Contributions are welcome.

The arXiv version of our paper is available here. To appear in ICCV 2021 (bibtex below).

How Fast

Searching for 12 hours on a single GPU, Zen-NAS designs networks with ImageNet top-1 accuracy comparable to EfficientNet-B5 (~83.6%) while running 4.9x faster on NVIDIA V100, 10x faster on NVIDIA T4, and 1.6x faster on Google Pixel2.

Inference Speed

Compare to Other Zero-Shot NAS Proxies on CIFAR-10/100

We use a ResNet-like search space and search for models within a 1M-parameter budget. All models are found by the same evolutionary strategy and trained on CIFAR-10/100 for 1440 epochs with auto-augmentation, cosine learning rate decay, and weight decay 5e-4. We report the top-1 accuracies in the following table:

| proxy | CIFAR-10 | CIFAR-100 |
|-----------|----------|-----------|
| Zen-NAS | 96.2% | 80.1% |
| FLOPs | 93.1% | 64.7% |
| grad-norm | 92.8% | 65.4% |
| synflow | 95.1% | 75.9% |
| TE-NAS | 96.1% | 77.2% |
| NASWOT | 96.0% | 77.5% |
| Random | 93.5% | 71.1% |

Please check our paper for more details.

Pre-trained Models

We provide pre-trained models on ImageNet and CIFAR-10/CIFAR-100.

ImageNet Models

| model | resolution | # params | FLOPs | Top-1 Acc | V100 | T4 | Pixel2 |
|-------|------------|----------|-------|-----------|------|----|--------|
| zennet_imagenet1k_flops400M_SE_res224 | 224 | 5.7M | 410M | 78.0% | 0.25 | 0.39 | 87.9 |
| zennet_imagenet1k_flops600M_SE_res224 | 224 | 7.1M | 611M | 79.1% | 0.36 | 0.52 | 128.6 |
| zennet_imagenet1k_flops900M_SE_res224 | 224 | 19.4M | 934M | 80.8% | 0.55 | 0.55 | 215.7 |
| zennet_imagenet1k_latency01ms_res224 | 224 | 30.1M | 1.7B | 77.8% | 0.1 | 0.08 | 181.7 |
| zennet_imagenet1k_latency02ms_res224 | 224 | 49.7M | 3.4B | 80.8% | 0.2 | 0.15 | 357.4 |
| zennet_imagenet1k_latency03ms_res224 | 224 | 85.4M | 4.8B | 81.5% | 0.3 | 0.20 | 517.0 |
| zennet_imagenet1k_latency05ms_res224 | 224 | 118M | 8.3B | 82.7% | 0.5 | 0.30 | 798.7 |
| zennet_imagenet1k_latency08ms_res224 | 224 | 183M | 13.9B | 83.0% | 0.8 | 0.57 | 1365 |
| zennet_imagenet1k_latency12ms_res224 | 224 | 180M | 22.0B | 83.6% | 1.2 | 0.85 | 2051 |
| EfficientNet-B3 | 300 | 12.0M | 1.8B | 81.1% | 1.12 | 1.86 | 569.3 |
| EfficientNet-B5 | 456 | 30.0M | 9.9B | 83.3% | 4.5 | 7.0 | 2580 |
| EfficientNet-B6 | 528 | 43M | 19.0B | 84.0% | 7.64 | 12.3 | 4288 |
  • 'V100' is the inference latency on NVIDIA V100 in milliseconds, benchmarked at batch size 64, float16.
  • 'T4' is the inference latency on NVIDIA T4 in milliseconds, benchmarked at batch size 64, TensorRT INT8.
  • 'Pixel2' is the inference latency on Google Pixel2 in milliseconds, benchmarked on a single image.
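For a rough idea of how such numbers can be measured, below is a minimal FP16 latency-measurement sketch in plain PyTorch (batch size 64). This is an illustration only; the repository's benchmark_network_latency.py is the script actually used, and its exact methodology may differ.

```python
import time
import torch

def measure_latency(model, batch_size=64, resolution=224, repeat=30):
    """Rough FP16 GPU latency in ms/image, averaged over `repeat` batches."""
    model = model.cuda().half().eval()
    x = torch.randn(batch_size, 3, resolution, resolution,
                    device='cuda', dtype=torch.half)
    with torch.no_grad():
        for _ in range(5):            # warm-up iterations
            model(x)
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(repeat):
            model(x)
        torch.cuda.synchronize()
    return (time.time() - start) / (repeat * batch_size) * 1000.0
```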

CIFAR-10/CIFAR-100 Models

| model | resolution | # params | FLOPs | Top-1 Acc |
|-------|------------|----------|-------|-----------|
| zennet_cifar10_model_size05M_res32 | 32 | 0.5M | 140M | 96.2% |
| zennet_cifar10_model_size1M_res32 | 32 | 1.0M | 162M | 96.2% |
| zennet_cifar10_model_size2M_res32 | 32 | 2.0M | 487M | 97.5% |
| zennet_cifar100_model_size05M_res32 | 32 | 0.5M | 140M | 79.9% |
| zennet_cifar100_model_size1M_res32 | 32 | 1.0M | 162M | 80.1% |
| zennet_cifar100_model_size2M_res32 | 32 | 2.0M | 487M | 84.4% |

Reproduce Paper Experiments

System Requirements

  • PyTorch >= 1.5, Python >= 3.7
  • By default, the ImageNet dataset is stored under ~/data/imagenet; CIFAR-10/CIFAR-100 are stored under ~/data/pytorch_cifar10 or ~/data/pytorch_cifar100
  • Pre-trained parameters are cached under ~/.cache/pytorch/checkpoints/zennet_pretrained

Evaluate pre-trained models on ImageNet and CIFAR-10/100

To evaluate the pre-trained model on ImageNet using GPU 0:

python val.py --fp16 --gpu 0 --arch ${zennet_model_name}

where ${zennet_model_name} should be replaced with a valid ZenNet model name. The complete list of model names can be found in the 'Pre-trained Models' section.

To evaluate the pre-trained model on CIFAR-10 or CIFAR-100 using GPU 0:

python val_cifar.py --dataset cifar10 --gpu 0 --arch ${zennet_model_name}

To create a ZenNet in your Python code:

import torch
import ZenNet  # from this repository

gpu = 0
# pick any model name from the 'Pre-trained Models' tables above
model = ZenNet.get_ZenNet('zennet_imagenet1k_flops400M_SE_res224', pretrained=True)
torch.cuda.set_device(gpu)
torch.backends.cudnn.benchmark = True
model = model.cuda(gpu)
model = model.half()  # FP16 inference, matching the reported latencies
model.eval()
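As a quick sanity check, the loaded model can be run on a dummy FP16 input. The snippet below is a minimal usage sketch, assuming an ImageNet model at 224x224 resolution:

```python
import torch

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224, device='cuda', dtype=torch.half)
    logits = model(x)             # (1, 1000) for ImageNet models
    print(logits.argmax(dim=1))   # predicted class index
```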

Searching on CIFAR-10/100

Searching for CIFAR-10/100 models with a parameter budget of less than 1M, using different zero-shot proxies:

scripts/Flops_NAS_cifar_params1M.sh
scripts/GradNorm_NAS_cifar_params1M.sh
scripts/NASWOT_NAS_cifar_params1M.sh
scripts/Params_NAS_cifar_params1M.sh
scripts/Random_NAS_cifar_params1M.sh
scripts/Syncflow_NAS_cifar_params1M.sh
scripts/TE_NAS_cifar_params1M.sh
scripts/Zen_NAS_cifar_params1M.sh

Searching on ImageNet

Searching for ImageNet models, with latency budget on NVIDIA V100 from 0.1 ms/image to 1.2 ms/image at batch size 64 FP16:

scripts/Zen_NAS_ImageNet_latency0.1ms.sh
scripts/Zen_NAS_ImageNet_latency0.2ms.sh
scripts/Zen_NAS_ImageNet_latency0.3ms.sh
scripts/Zen_NAS_ImageNet_latency0.5ms.sh
scripts/Zen_NAS_ImageNet_latency0.8ms.sh
scripts/Zen_NAS_ImageNet_latency1.2ms.sh

Searching for ImageNet models, with FLOPs budget from 400M to 800M:

scripts/Zen_NAS_ImageNet_flops400M.sh
scripts/Zen_NAS_ImageNet_flops600M.sh
scripts/Zen_NAS_ImageNet_flops800M.sh

Customize Your Own Search Space and Zero-Shot Proxy

The masternet definition is stored in "Masternet.py". The masternet takes a structure string and parses it into a PyTorch nn.Module object. The structure string defines the layer structure, which is implemented in the "PlainNet/*.py" files. For example, in "PlainNet/SuperResK1KXK1.py" we define the SuperResK1K3K1 block, which consists of multiple layers of ResNet blocks. To define your own block, e.g. ABC_Block, first implement "PlainNet/ABC_Block.py". Then, in "PlainNet/__init__.py", append the following lines after the last line to register the new block definition:

from PlainNet import ABC_Block
_all_netblocks_dict_ = ABC_Block.register_netblocks_dict(_all_netblocks_dict_)

After the above registration call, the PlainNet module is able to parse your customized block from the structure string.
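For illustration, here is a hedged sketch of what "PlainNet/ABC_Block.py" might look like. The base class and constructor signature below are assumptions made to keep the example self-contained; a real block should follow the interface of the existing blocks in "PlainNet/*.py":

```python
# PlainNet/ABC_Block.py -- hypothetical custom block (sketch only)
import torch.nn as nn

class ABC_Block(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3,
                              stride=stride, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

def register_netblocks_dict(netblocks_dict):
    # register this block under the name used in structure strings
    netblocks_dict['ABC_Block'] = ABC_Block
    return netblocks_dict
```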

The search space definitions are stored in SearchSpace/*.py. The important function is

gen_search_space(block_list, block_id)

block_list is a list of super-blocks parsed by the masternet. block_id is the index of the block in block_list that will later be replaced by a mutated block. This function must return a list of mutated blocks.
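A hedged sketch of the expected shape of such a function is shown below. It is illustrative only: each block is represented as a plain dict here, whereas the real implementations in SearchSpace/*.py operate on PlainNet super-block objects and enumerate variations of kernel size, width, and depth:

```python
# sketch: return candidate replacements for block_list[block_id]
def gen_search_space(block_list, block_id):
    original = block_list[block_id]   # assumed to be a dict in this sketch
    mutated_blocks = []
    for width_mult in (0.5, 1.0, 1.5, 2.0):   # vary output width
        for depth in (1, 2, 3):               # vary number of sub-layers
            candidate = dict(original)
            candidate['out_channels'] = max(8, int(original['out_channels'] * width_mult))
            candidate['sub_layers'] = depth
            mutated_blocks.append(candidate)
    return mutated_blocks
```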

The zero-shot proxies are implemented in "ZeroShotProxy/*.py". The evolutionary algorithm is implemented in "evolution_search.py". "analyze_model.py" prints the FLOPs and model size of a given network. "benchmark_network_latency.py" measures the network inference latency. "train_image_classification.py" implements standard SGD training and "ts_train_image_classification.py" implements teacher-student distillation.

FAQ

Q: Why is searching with latency constraints so slow?
A: Most of the time is spent benchmarking network latency. In our paper we use a latency predictor, which is not released.

Major Contributors

How to Cite This Work

Ming Lin, Pichao Wang, Zhenhong Sun, Hesen Chen, Xiuyu Sun, Qi Qian, Hao Li, Rong Jin. Zen-NAS: A Zero-Shot NAS for High-Performance Deep Image Recognition. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021).

@inproceedings{ming_zennas_iccv2021,
  author    = {Ming Lin and Pichao Wang and Zhenhong Sun and Hesen Chen and Xiuyu Sun and Qi Qian and Hao Li and Rong Jin},
  title     = {Zen-NAS: A Zero-Shot NAS for High-Performance Deep Image Recognition},
  booktitle = {2021 IEEE/CVF International Conference on Computer Vision, {ICCV} 2021},  
  year      = {2021},
}

Open Source

A few files in this repository are modified from the following open-source implementations:

https://github.com/DeepVoltaire/AutoAugment/blob/master/autoaugment.py
https://github.com/VITA-Group/TENAS
https://github.com/SamsungLabs/zero-cost-nas
https://github.com/BayesWatch/nas-without-training
https://github.com/rwightman/gen-efficientnet-pytorch
https://pytorch.org/vision/0.8/_modules/torchvision/models/resnet.html

Copyright

Copyright (C) 2010-2021 Alibaba Group Holding Limited.


zennas's Issues

Some simple questions

Dear researcher, hello.
I ran into a problem I could not solve while trying to evaluate on CIFAR-10:
(python val_cifar.py --dataset cifar10 --gpu 0 --arch zennet_cifar10_model_size05M_res32)

RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.

The versions I use are torch 1.7.1, torchvision 0.8.2, and cudatoolkit 10.2.89. Is this a problem with my environment, or do I need to change the code? Thanks a million, dear researcher.
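For readers who hit the same error: it generally means .view() was called on a non-contiguous tensor. A generic illustration (not specific to this repository) of the error and the usual fixes:

```python
import torch

x = torch.randn(4, 5, 6).transpose(1, 2)   # transpose makes x non-contiguous
# x.view(4, -1)                  # would raise the RuntimeError quoted above
y1 = x.reshape(4, -1)            # fix 1: reshape handles non-contiguous tensors
y2 = x.contiguous().view(4, -1)  # fix 2: make the tensor contiguous, then view
```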

A training question about GENet

Hi, everyone. I'm currently trying to reproduce your previous work "Neural Architecture Design for GPU-Efficient Networks" (here is the repo link). After determining the model structure, such as "GENet_large" or "GENet_small", can we use the training script (train_image_classification.py) in this repo to train the model and obtain results consistent with the paper?

Some simple questions about val

Hello, @MingLin-home! I'm studying your Zen-NAS, but I encountered some problems while trying to reproduce the results. My results are as follows:

python val.py --fp16 --gpu 0 --arch zennet_imagenet1k_flops400M_SE_res224
Evaluate zennet_imagenet1k_flops400M_SE_res224 at 224x224 resolution.
/home/jet/anaconda3/lib/python3.9/site-packages/torchvision/transforms/transforms.py:287: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
warnings.warn(
---debug use_se in SuperResIDWE1K7(16,40,2,40,1)
---debug use_se in SuperResIDWE1K7(40,64,2,64,1)
---debug use_se in SuperResIDWE4K7(64,96,2,96,5)
---debug use_se in SuperResIDWE2K7(96,224,2,224,5)
loading pretrained parameters...
Using GPU 0.
/home/jet/anaconda3/lib/python3.9/site-packages/torch/nn/functional.py:3631: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
warnings.warn(
mini_batch 0, top-1 acc= 0%, top-5 acc= 0%, number of evaluated images=128
mini_batch 100, top-1 acc= 0%, top-5 acc= 0%, number of evaluated images=12928
mini_batch 200, top-1 acc= 0%, top-5 acc= 0%, number of evaluated images=25728
mini_batch 300, top-1 acc= 0%, top-5 acc=1.5625%, number of evaluated images=38528
*** arch=zennet_imagenet1k_flops400M_SE_res224, validation top-1 acc=0.08999999612569809%, top-5 acc=0.343999981880188%, number of evaluated images=50000

  1. I hope you can help me explain the meaning of these results.
  2. I want to know: if I want to run a NAS search myself, what procedure should I follow?
    Thank you very much for your help!

Some questions about MAE-DET

Which file under lightweight-neural-architecture-search/tinynas/scores corresponds to the code for computing the variance and the score? Is the feature-map variance obtained by forward-propagating data through the model, or is it converted into a channel-based computation, similar to DeepMAD?

How does Zen-NAS perform on detection tasks

Hi Lin, I am interested in your group's work. Have you done any experiments on object detection? I would like to do some NAS work on object detection; Zen-NAS is attractive for its speed. Do you have any existing work or ideas about this?

Pretrained Zennets and Evolution Search Algorithm

Hi, interesting work!

I had a couple of clarifying questions regarding the evolution search and the models provided.

For the pretrained models, were those trained on a V100, if I understood the paper correctly?

Are the inference speed numbers for the T4 and Pixel 2 obtained only by exporting to TensorRT, rather than by actually training on the T4 and Pixel 2?

Hi MingLin

Your proposed Zen-NAS is a very efficient way to search for neural network structures. I read your article and GitHub code carefully and ran my own searches with your code. One thing I found is that in the searched structures, the deeper blocks are almost always repeated many times, while the first few blocks are repeated only once. For example, I used your code to search an MNas-like structure (the search space was changed according to MNas). The MNas0.35 optimal structure is:

SuperConvK3BNRELU(3,16,2,1)SuperResMnasV1K3(16,8,1,16,1)SuperResMnasV3K3(8,8,2,8,3)SuperResMnasV3K5(8,16,2,8,3) SuperResMnasV6K5(16,32,2,16,3)SuperResMnasV6K3(32,32,1,32,2)SuperResMnasV6K5(32,64,2,32,4)SuperResMnasV6K3(64,112,1,64,1) SuperConvK1BNRELU(112,1280,1,1)

but the structure I found with your framework is as follows:

SuperConvK3BNRELU(3,8,2,1)SuperResMnasV1K3(8,8,1,8,1)SuperResMnasV3K5(8,16,2,8,1)SuperResMnasV3K5(16,24,2,8,1)SuperResMnasV3K5(24,64,2,40,1)SuperResMnasV3K5(64,24,1,48,1)SuperResMnasV3K5(24,64,2,176,4)SuperResMnasV3K5(64,48,1,256,5)SuperConvK1BNRELU(48,2048,1, 1)

The searched structure is not as good as the original one, and it still has the problem mentioned above: the shallow blocks are repeated only once and only the deeper blocks have repetition. I also tested your code at FLOPs 400M, 600M, and 900M and found the same problem. Why is this?

The setting for num param=0.5m

Hi. I am interested in the parameter settings used for the 0.5M-parameter search:

budget_model_size=5e5
max_layers=18
population_size=512
evolution_max_iter=480000  # we suggest evolution_max_iter=480000 for

Is it like this?

multi-gpu training

Training for 480 epochs on a single GPU takes quite a lot of time, so I wanted to check whether multi-GPU training is possible with the current code. I tried --dist_mode with mpi and auto, but they are not supported since global_utils.AutoGPU() is not defined. It looks like --dist_mode=horovod is supported, though. Can it be used for multi-GPU training?
Is it possible to share a sample command for multi-GPU training?

By the way, it took around 2.83 hours to complete 1 epoch on my GTX 1080 Ti; does this sound reasonable or too high? Typically, on my machine (4 x GTX 1080), I see around 15 minutes per epoch for EfficientNet-ES using Ross Wightman's timm repo.

Reproduce Nas

Hello, can I run NAS on my own dataset based on your source code?

How to do `entropy_forward` for CSP network?

My block looks like this:

BottleneckCSP(
  (cv1): Conv(
    (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (act): Mish()
  )
  (cv2): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (cv3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
  (cv4): Conv(
    (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (act): Mish()
  )
  (bn): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (act): Mish()
  (m): Sequential(
    (0): Bottleneck(
      (cv1): Conv(
        (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): Mish()
      )
      (cv2): Conv(
        (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): Mish()
      )
    )
    (1): Bottleneck(
      (cv1): Conv(
        (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): Mish()
      )
      (cv2): Conv(
        (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (act): Mish()
      )
    )
  )
)

forward(self, x)

def forward(self, x):
        d = self.m(self.cv1(x))
        y1 = self.cv3(d)
        y2 = self.cv2(x)
        return self.cv4(self.act(self.bn(torch.cat((y1, y2), dim=1))))

evolution search speed

The paper says the search cost is 0.5 GPU day with an NVIDIA V100 GPU, half precision (FP16), and batch size 64.
We used the supplied script to search, set evolution_max_iter=50000, and ran on a V100 with FP16 and batch size 64, but the whole process takes much longer.
For example, using ./Zen_NAS_ImageNet_latency0.8ms.sh to search for the 0.8 ms model took us 29 hours:
loop_count=47000/50000, max_score=308.6, min_score=293.333, time=28.213h
loop_count=48000/50000, max_score=308.6, min_score=294.01, time=28.78h
loop_count=49000/50000, max_score=308.6, min_score=294.837, time=29.3295h

In the paper, the number of evolutionary iterations is 96000, but even with 50000 iterations we cannot finish the search within 0.5 GPU day.

I wonder whether there is a problem in the code or somewhere else.

search problem

Hi, @MingLin-home
I searched for the Zen-NAS 1.0M model on CIFAR-10 with your default scripts. The result is that the parameter count is less than 1M, but the FLOPs are 360M, which is about twice as much as in the paper (160M).
Is this normal? How can I reproduce the numbers in the paper? Thank you. 😄

Number of samples to approximate the expectation

Hello,
When computing the Zen-Score it involves expectations over input x and parameters theta.
In ZeroShotProxy/compute_zen_score.py, the approximation of such expectations is controlled by argument repeat in compute_nas_score, which is the number of samples from (x, theta).
Although the default value of repeat is 32 in ZeroShotProxy/compute_zen_score.py, it is set to 1 (hard-coded) when compute_nas_score is called in evolution_search.py.

To reproduce your results, should I keep using repeat=1? I am not sure that approximating the expectation with only one sample works, but using a larger sample size increases the search budget.
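For illustration, averaging the proxy over several random draws would look roughly like the sketch below; build_random_model and compute_score_once are hypothetical placeholders, not the repository's actual API:

```python
import numpy as np

def averaged_score(build_random_model, compute_score_once, repeat=32):
    """Average a zero-shot score over `repeat` random draws of
    (network parameters, input batch) to reduce estimator variance."""
    scores = []
    for _ in range(repeat):
        model = build_random_model()              # fresh random initialization
        scores.append(compute_score_once(model))  # one forward-based score
    return float(np.mean(scores))
```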

GENet how to use soft-labels

I followed the paper "Neural Architecture Design for GPU-Efficient Networks" and saw on page 8: "We also use ResNet-152 as teacher network to get soft-labels. The soft-label loss and the true-label loss are weighted 1:1."
How are the soft labels used to teach GENet, like in knowledge distillation? Is there any code for this in this repo?

Search in RegNet Space

I was experimenting with a RegNet-like search space but with a slightly different group size. My hand-designed network (model_hand) reaches 74.05% top-1 accuracy at latency L. My modest objective is to find a model (model_zennas) with the same latency L that gives at least the same accuracy as the hand-designed network. I kept the training schedule exactly the same between model_hand and model_zennas, but model_zennas came out 1.4% lower.

When I look at the Zen-Score, model_hand scores 112 whereas model_zennas scores 136.0. So it does not look like model_hand is simply outside the search space; somehow the Zen-Score assigns a lower score to the higher-accuracy model.

My question is whether there is some tweak I can make to the way the Zen-Score is defined so that model_hand gets a similar score to model_zennas. If that happens, I can expect model_zennas to reach similar accuracy to model_hand.

I like your approach very much because it does not need training, so I am exploring it for the objective above. Do you think it is appropriate for this?

Deeper and wider network has higher accuracy?

From Figure 2 in the paper, it can be seen that deeper and wider networks have higher Zen-Scores, and the Zen-Score correlates positively with model accuracy. So deeper and wider networks have higher accuracy, which is a well-known principle. Then what is the meaning of the Zen-Score?

Same search space used in search_space_IDW_fixfc.py as well as in search_space_XXBL.py

First of all, thanks a lot for releasing the code for such nice work. I have a few doubts:

  1. The seach_space_block_type_list_list in both files, search_space_IDW_fixfc.py and search_space_XXBL.py, is the same. Is this intended?

  2. The scripts Zen_NAS_ImageNet_flops400M.sh and Zen_NAS_ImageNet_latency0.2ms.sh use the same search space, but the paper mentions that Zen_NAS_ImageNet_flops400M.sh uses MB blocks while Zen_NAS_ImageNet_latency0.2ms.sh uses bottleneck blocks like ResNet-50.

Thanks again.

The training top-1 accuracy is only 80% in cifar100. Did we make a mistake or miss something?

Train ZenNet-2.0M on CIFAR-100

Test model: zennet_cifar100_model_size2M_res32

Data augmentation:

  • subtracting the channel mean and dividing by the channel standard deviation
  • mixup
  • label-smoothing
  • random erasing
  • random crop/resize/flip/lighting
  • Auto Augment

Train optimizer:

  • SGD optimizer with momentum 0.9
  • weight decay 5e-4 for CIFAR10/100
  • Learning rate 0.1 with batch size 256
  • Cosine learning rate decay
  • 1440 epochs in CIFAR10/100

Hi, your work is exciting and inspires us a lot, so we tried to reproduce it. We trained the test model according to the above configuration, but the training top-1 accuracy is only 80% on CIFAR-100. Did we make a mistake or miss something?

File module missing

Dear researcher,
A file (analyze_model.py) was missing from the code when I finished training the model. I feel this module is very important. Could you help me find out what the problem is? Thank you very much.

NAS-Bench-201

Hello,

As I remember from the paper, your method works on vanilla CNNs. However, in Algorithm 1 you only mention that residual connections are deleted.

This confused me a little: I do not know whether your method can be applied to any CNN once the residual connections are removed, or whether it only applies to vanilla CNNs.

Can I use your code on benchmarks such as NAS-Bench-201?

Custom Dataset Question

I want to use Zen-NAS on a medical image dataset, but I am confused when you say Zen-NAS is "data-free" and "a data independent method." I understand how it is "training-free", and that computing the Zen-Score only takes a few forward passes through a randomly initialized network, but does the zero-shot Zen-Score depend on my data at all?

Use of NTK Condition Number in Combination with Zen-Score

Hello MingLin,

I found your NAS approach to be very interesting.

Did you ever try to combine the Zen-Score with the NTK condition number as the TE-NAS paper did? The TE-NAS paper suggests that combining the NTK score with an expressivity measure would improve overall performance.

Thanks,
SXK
