
Pytorch Implementation of PointNet and PointNet++

This repo is a PyTorch implementation of PointNet and PointNet++.

Update

2021/03/27:

(1) Released pre-trained models for semantic segmentation, where PointNet++ achieves 53.5% mIoU.

(2) Released pre-trained models for classification and part segmentation in log/.

2021/03/20: Updated the classification code, including:

(1) Added code for training on the ModelNet10 dataset; use --num_category 10.

(2) Added code for running on CPU only; use --use_cpu.

(3) Added code for offline data preprocessing to accelerate training; use --process_data.

(4) Added code for training with uniform sampling; use --use_uniform_sample.

2019/11/26:

(1) Fixed some errors in the previous code and added data augmentation tricks. Classification with only 1024 points can now reach 92.8%!

(2) Added testing code for classification, part segmentation, and semantic segmentation with visualization.

(3) Organized all models into the ./models directory for easy use.

Install

The latest code is tested on Ubuntu 16.04, CUDA 10.1, PyTorch 1.6, and Python 3.7:

conda install pytorch==1.6.0 cudatoolkit=10.1 -c pytorch
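
A quick way to confirm the environment is usable (a minimal sanity check, not part of the original instructions):

import torch

print(torch.__version__)          # expect 1.6.0
print(torch.cuda.is_available())  # expect True with a working CUDA 10.1 setup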

Classification (ModelNet10/40)

Data Preparation

Download the aligned ModelNet dataset here and save it in data/modelnet40_normal_resampled/.

Run

You can run different modes with the following commands.

  • If you want to use offline data processing, use --process_data on the first run. Alternatively, you can download the pre-processed data here and save it in data/modelnet40_normal_resampled/.
  • If you want to train on ModelNet10, use --num_category 10.
# ModelNet40
## Select different models in ./models 

## e.g., pointnet2_ssg without normal features
python train_classification.py --model pointnet2_cls_ssg --log_dir pointnet2_cls_ssg
python test_classification.py --log_dir pointnet2_cls_ssg

## e.g., pointnet2_ssg with normal features
python train_classification.py --model pointnet2_cls_ssg --use_normals --log_dir pointnet2_cls_ssg_normal
python test_classification.py --use_normals --log_dir pointnet2_cls_ssg_normal

## e.g., pointnet2_ssg with uniform sampling
python train_classification.py --model pointnet2_cls_ssg --use_uniform_sample --log_dir pointnet2_cls_ssg_fps
python test_classification.py --use_uniform_sample --log_dir pointnet2_cls_ssg_fps

# ModelNet10
## Same settings as ModelNet40, just add --num_category 10

## e.g., pointnet2_ssg without normal features
python train_classification.py --model pointnet2_cls_ssg --log_dir pointnet2_cls_ssg --num_category 10
python test_classification.py --log_dir pointnet2_cls_ssg --num_category 10

Performance

| Model | Accuracy (%) |
| --- | --- |
| PointNet (Official) | 89.2 |
| PointNet2 (Official) | 91.9 |
| PointNet (Pytorch without normal) | 90.6 |
| PointNet (Pytorch with normal) | 91.4 |
| PointNet2_SSG (Pytorch without normal) | 92.2 |
| PointNet2_SSG (Pytorch with normal) | 92.4 |
| PointNet2_MSG (Pytorch with normal) | 92.8 |

Part Segmentation (ShapeNet)

Data Preparation

Download the aligned ShapeNet dataset here and save it in data/shapenetcore_partanno_segmentation_benchmark_v0_normal/.

Run

## Check model in ./models 
## e.g., pointnet2_msg
python train_partseg.py --model pointnet2_part_seg_msg --normal --log_dir pointnet2_part_seg_msg
python test_partseg.py --normal --log_dir pointnet2_part_seg_msg

Performance

| Model | Instance avg IoU | Class avg IoU |
| --- | --- | --- |
| PointNet (Official) | 83.7 | 80.4 |
| PointNet2 (Official) | 85.1 | 81.9 |
| PointNet (Pytorch) | 84.3 | 81.1 |
| PointNet2_SSG (Pytorch) | 84.9 | 81.8 |
| PointNet2_MSG (Pytorch) | 85.4 | 82.5 |

Semantic Segmentation (S3DIS)

Data Preparation

Download the 3D indoor parsing dataset (S3DIS) here and save it in data/s3dis/Stanford3dDataset_v1.2_Aligned_Version/.

cd data_utils
python collect_indoor3d_data.py

The processed data will be saved in data/stanford_indoor3d/.

Run

## Check model in ./models 
## e.g., pointnet2_ssg
python train_semseg.py --model pointnet2_sem_seg --test_area 5 --log_dir pointnet2_sem_seg
python test_semseg.py --log_dir pointnet2_sem_seg --test_area 5 --visual

Visualization results will be saved in log/sem_seg/pointnet2_sem_seg/visual/, and you can view these .obj files with MeshLab.

Performance

| Model | Overall Acc | Class avg IoU | Checkpoint |
| --- | --- | --- | --- |
| PointNet (Pytorch) | 78.9 | 43.7 | 40.7MB |
| PointNet2_ssg (Pytorch) | 83.0 | 53.5 | 11.2MB |

Visualization

Using show3d_balls.py

## build C++ code for visualization
cd visualizer
bash build.sh 
## run one example 
python show3d_balls.py

Using MeshLab

References

halimacc/pointnet3
fxia22/pointnet.pytorch
charlesq34/PointNet
charlesq34/PointNet++

Citation

If you find this repo useful in your research, please consider citing it and our other works:

@article{Pytorch_Pointnet_Pointnet2,
      Author = {Xu Yan},
      Title = {Pointnet/Pointnet++ Pytorch},
      Journal = {https://github.com/yanx27/Pointnet_Pointnet2_pytorch},
      Year = {2019}
}
@InProceedings{yan2020pointasnl,
  title={PointASNL: Robust Point Clouds Processing using Nonlocal Neural Networks with Adaptive Sampling},
  author={Yan, Xu and Zheng, Chaoda and Li, Zhen and Wang, Sheng and Cui, Shuguang},
  journal={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}
@InProceedings{yan2021sparse,
  title={Sparse Single Sweep LiDAR Point Cloud Segmentation via Learning Contextual Shape Priors from Scene Completion},
  author={Yan, Xu and Gao, Jiantao and Li, Jie and Zhang, Ruimao and Li, Zhen and Huang, Rui and Cui, Shuguang},
  journal={AAAI Conference on Artificial Intelligence ({AAAI})},
  year={2021}
}
@InProceedings{yan20222dpass,
      title={2DPASS: 2D Priors Assisted Semantic Segmentation on LiDAR Point Clouds}, 
      author={Xu Yan and Jiantao Gao and Chaoda Zheng and Chao Zheng and Ruimao Zhang and Shuguang Cui and Zhen Li},
      year={2022},
      journal={ECCV}
}

Selected Projects using This Codebase

Contributors

lly007, tarmas99, yanx27, yushiangw


Issues

Feature meaning of the S3DIS dataset

Hi,
I noticed that the input to the semantic segmentation network has 9-dimensional features. Apart from the xyz coordinates and normals, what is the meaning of the remaining 3 dimensions? Is it RGB?
Thank you
Cheers
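
For reference, one plausible 9-channel layout is xyz, rgb, and room-normalized xyz. The toy sketch below is an assumption about data_utils/S3DISDataLoader.py on my part, not a confirmed answer; verify the layout against the loader before relying on it.

import numpy as np

# Hypothetical illustration of a 9-channel S3DIS point feature.
points_xyzrgb = np.random.rand(4096, 6).astype(np.float32)  # placeholder cloud: xyz + rgb
room_max = points_xyzrgb[:, :3].max(axis=0)                 # room extent used for normalization

features = np.zeros((points_xyzrgb.shape[0], 9), dtype=np.float32)
features[:, 0:3] = points_xyzrgb[:, 0:3]             # raw xyz coordinates
features[:, 3:6] = points_xyzrgb[:, 3:6]             # rgb (already scaled to [0, 1] here)
features[:, 6:9] = points_xyzrgb[:, 0:3] / room_max  # xyz normalized by room size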

Question in ModelNetDataLoader.py

In the classification task, the function farthest_point_sample() is defined to uniformly resample the point clouds, but why is the flag 'uniform' set to False?
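
For context, here is a minimal sketch of iterative farthest point sampling (my own illustration of the general algorithm; the repo's farthest_point_sample() may differ in details):

import torch

def farthest_point_sample_sketch(xyz, npoint):
    """Iteratively pick the point farthest from the already-chosen set.

    xyz: [B, N, 3] coordinates; returns [B, npoint] sampled indices.
    """
    B, N, _ = xyz.shape
    centroids = torch.zeros(B, npoint, dtype=torch.long, device=xyz.device)
    distance = torch.full((B, N), 1e10, device=xyz.device)   # distance to nearest chosen point so far
    farthest = torch.randint(0, N, (B,), device=xyz.device)  # random seed point per batch element
    batch_indices = torch.arange(B, device=xyz.device)
    for i in range(npoint):
        centroids[:, i] = farthest
        centroid = xyz[batch_indices, farthest, :].view(B, 1, 3)
        dist = torch.sum((xyz - centroid) ** 2, dim=-1)  # squared distance to the newest pick
        distance = torch.min(distance, dist)             # running distance to the chosen set
        farthest = torch.max(distance, dim=-1)[1]        # next pick: the farthest remaining point
    return centroids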

Solved - Question about how to run semantic segmentation on the S3DIS dataset

Dear author, thank you for sharing the code.
When I run train_semseg.py --model pointnet2_sem_seg --test_area 5 --log_dir pointnet2_sem_seg,
there is an error: Given groups=1, weight of size 32 9 1 1, expected input[16, 12, 32, 1024] to have 9 channels, but got 12 channels instead.
I used the Stanford3dDataset_v1.2_Aligned_Version dataset.
I don't know how to solve the problem; can you kindly tell me how?
Thank you.
Update: I have since solved the problem. Thank you.

ImportError: No module named data_utils.ModelNetDataLoader

Thank you for your good work!
But when training, I get this error:
Traceback (most recent call last):
  File "train_clf.py", line 14, in <module>
    from data_utils.ModelNetDataLoader import ModelNetDataLoader, load_data
ImportError: No module named data_utils.ModelNetDataLoader
I don't know what the problem is; can you kindly tell me how to solve it?

The configuration to get 0.52 mIoU on the S3DIS dataset

Thank you for your good work!
I am interested in the configuration used to get 0.52 mIoU on the S3DIS dataset, such as the number of GPUs and multi-scale vs. single-scale grouping, in addition to batch size and learning rate. Also, would it be convenient for you to provide the checkpoint.pth for the 0.52 mIoU model?

pc.utils ----- ImportError

Thank you for your good work!
I want to view the ShapeNet dataset, but when I run it I get:
from data_utils.ShapeNetDataLoader import load_data — ImportError: cannot import name 'load_data'

Question about the evaluation of part seg

Hi, thanks for your code, it's really nice.
I have one question about the evaluation method of part segmentation. I noticed that at line 111 of test_partseg.py, you calculate the accuracy with the following code:

cur_pred_val[i, :] = np.argmax(logits[:, seg_classes[cat]], 1) + seg_classes[cat][0]

This assumes that during testing we already know which parts each category contains. I think this may reduce the difficulty of the test. Is this implementation the same as the official one?
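
To illustrate what that line does, here is a toy example (the part-label range for 'Lamp' is a hypothetical value; see seg_classes in the test script for the real mapping):

import numpy as np

seg_classes = {'Lamp': [24, 25, 26, 27]}   # hypothetical: category -> its valid part labels
logits = np.random.rand(2048, 50)          # per-point scores over all 50 part classes

cat = 'Lamp'
# Take the argmax only over the columns belonging to this category's parts,
# then shift back into the global label space by adding the first part id.
pred = np.argmax(logits[:, seg_classes[cat]], 1) + seg_classes[cat][0]
assert set(np.unique(pred)) <= {24, 25, 26, 27}  # predictions can never leave the category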

about visualization

Two questions about the visualization part.

  1. pc_utils.py cannot run properly. Based on the code, the reason is that the load_data function is not defined in ShapeNetDataLoader.py.

  2. What is the functionality of the show3d_balls.py file? According to the code, is it correct that the showpoints function shows the point cloud in different colors according to the points' gt and pred labels? Could you give an illustrated example with gt and pred labels?

Thanks!

can not reach 89.4 on pointnet classification

I ran the code on ModelNet40 with Ubuntu 16.04, PyTorch 1.1 and a 1070, but I can't reach the score you report; I only get 88.5. Is it a dataset problem? I used the dataset automatically downloaded by the official PointNet code (416M) instead of the dataset you provide (1.9G). Or is it the effect of the input transform or feature transform? By the way, my batch size is 24. I really don't know where the problem is.

about reproducible results on the s3dis (semantic segmentation) dataset

Hi, @yanx27 ,

I followed the steps for training on the s3dis (semantic segmentation) dataset and performed testing. However, the test only gives 0.329263 mIoU (using the pointnet2_sem_seg model), which is much lower than the reported 0.532 mIoU. I also found that the trained model segments the ceiling, floor, and wall classes well, while it achieves almost zero accuracy on the beam, column, and window classes. Is there something wrong with the dataset preparation?
Here are the logs of my training and testing: pointnet2_sem_seg.txt, eval.txt

THX!

Error while copying batch data to cuda tensor in test_semseg.py

@yanx27 Thanks for this very useful repo.
I'm trying to use the test code for semantic segmentation in test_semseg.py.
For the following block (shown as a screenshot in the original issue):

I am getting the error below:
Traceback (most recent call last):
  File "test_semseg.py", line 204, in <module>
    main(args)
  File "test_semseg.py", line 133, in main
    batch_data[0:real_batch_size, ...] = scene_data[start_idx:end_idx, ...]
TypeError: can't assign a numpy.ndarray to a Variable[CUDAType]

I can see that within the for loop, once the variable batch_data has been converted to a cuda tensor in the very first iteration, the same cuda tensor is then assigned a new block of data via scene_data[start_idx:end_idx, ...], which is still a numpy array. Could you please confirm whether my understanding of the error is correct? If so, I can rewrite the loop accordingly.
TIA
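
If that diagnosis is right, one possible rewrite is sketched below, under the assumption that batch_data can stay a numpy staging buffer and be moved to the GPU only after it is filled; the shapes are placeholders, not the script's real values.

import numpy as np
import torch

BATCH_SIZE, NUM_POINT, NUM_FEAT = 16, 4096, 9  # hypothetical shapes
scene_data = np.random.rand(10, NUM_POINT, NUM_FEAT).astype(np.float32)
start_idx, end_idx = 0, 10
real_batch_size = end_idx - start_idx

batch_data = np.zeros((BATCH_SIZE, NUM_POINT, NUM_FEAT), dtype=np.float32)
batch_data[0:real_batch_size, ...] = scene_data[start_idx:end_idx, ...]  # numpy-to-numpy copy
torch_data = torch.from_numpy(batch_data).float().cuda()                 # convert only after filling (needs a GPU)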

About vote option in test and noted data augmentation

Hello.

I have some questions, as mentioned in the title.

  1. How are the 'num_vote' options used for reporting your results?

  2. Is the implementation of 'num_vote' appropriate? Is the randomness inside the 'classifier' instance? (cf. the original authors' implementation of PointNet++ includes randomness when loading test data.)

  3. What is the additional data augmentation trick mentioned in README.md?

Answers would also be helpful to others.

thank you.

What is the version of PyTorch?

Hello, I appreciate this repository very much. I noticed that you use 'Variable' in your code. Which version of PyTorch do you use? Thank you very much!

is there a script only to test the model?

I have trained PointNet++ using the command below:
python train_partseg.py --model_name pointnet2

I see it also reports the test accuracy after each epoch and the final best test accuracy.

Now, if I want to test the accuracy again without retraining, how can I do it?

early stopping

Hi,

I found that no validation data is used in your code. I wonder whether it is fair to compare the accuracy with the original paper without using validation data?

Thanks!

How to visualize the segmentation network performance output?

Hi @yanx27, first of all thank you for this helpful work. I first followed the official TensorFlow repository by the original authors, but it didn't work well for me. I then moved to your repository (the PyTorch implementation) and was able to run the training process without issues. I just have one question about visualization. I assumed that show3d_balls.py is supposed to help us visualize the segmentation output the way you show in README.md, but I find that the visualization code is not complete yet. Please correct me if I am wrong. What did you do to visualize the training results shown in README.md? Did you use any external tools?

Please clarify which version of the S3DIS dataset should be used

Hi, @yanx27 ,

Which version of the S3DIS dataset should be used to test your package (Pointnet_Pointnet2_pytorch):
Stanford3dDataset_v1.2.zip or Stanford3dDataset_v1.2_Aligned_Version.zip?

According to indoor3d_util.py, Stanford3dDataset_v1.2_Aligned_Version is expected. However, the guide mentions Stanford3dDataset_v1.2.

Please clarify which version of the S3DIS dataset should be used.

THX!

Bug when Training on multi-GPU

When I train train_partseg.py with the command:
python train_partseg.py --multi_gpu="1, 2" --model_name='pointnet2' --batchsize=16 --epoch=130 --step_size=30 --optimizer='Adam'
the program stops at the first iteration of the progress bar, and I can't even kill the process.

When I train on a single GPU with:
python train_partseg.py --gpu="2" --model_name='pointnet2' --batchsize=16 --epoch=130 --step_size=30 --optimizer='Adam'
it runs successfully.

I don't know what the problem is; can you kindly tell me how to solve it?

CUDNN_STATUS_NOT_SUPPORTED on classification

I get the following error when trying to train classification. I tried reducing the batch size and the number of points, but I always hit the issue.

Traceback (most recent call last):
  File "train_cls.py", line 209, in <module>
    main(args)
  File "train_cls.py", line 171, in main
    loss.backward()
  File "...site-packages\torch\tensor.py", line 195, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "...site-packages\torch\autograd\__init__.py", line 99, in backward
    allow_unreachable=True) # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.

My conda environment is:

Name Version Build Channel

blas 1.0 mkl
ca-certificates 2019.11.27 0 anaconda
certifi 2019.11.28 py37_0 anaconda
cffi 1.13.2 py37h7a1dbc1_0
cudatoolkit 10.1.243 h74a9793_0
cudnn 7.6.5 cuda10.1_0 anaconda
freetype 2.9.1 ha9979f8_1
icc_rt 2019.0.0 h0cc432a_1
intel-openmp 2019.4 245
jpeg 9b vc14h4d7706e_1 [vc14] anaconda
libpng 1.6.37 h2a8f88b_0
libtiff 4.1.0 h56a325e_0
mkl 2019.4 245
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.0.15 py37h14836fe_0
mkl_random 1.1.0 py37h675688f_0
ninja 1.9.0 py37h74a9793_0
numpy 1.18.1 py37h93ca92e_0
numpy-base 1.18.1 py37hc3f5095_1
olefile 0.46 py37_0
openssl 1.1.1 he774522_0 anaconda
pillow 5.2.0 py37h08bbbbd_0
pip 20.0.2 py37_0
pycparser 2.19 py37_0
python 3.7.6 h60c2a47_2
pytorch 1.4.0 py3.7_cuda101_cudnn7_0 pytorch
setuptools 45.1.0 py37_0
six 1.14.0 py37_0
sqlite 3.30.1 he774522_0
tk 8.6.7 vc14hb68737d_1 [vc14] anaconda
torchvision 0.5.0 py37_cu101 pytorch
tqdm 4.42.0 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_1
wheel 0.33.6 py37_0
wincertstore 0.2 py37_0
xz 5.2.4 h2fa13f4_4
zlib 1.2.11 vc14h1cdd9ab_1 [vc14] anaconda
zstd 1.3.7 h508b16e_0

Any idea?
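
As a general note (my suggestion, not a confirmed fix for this environment): the message hints at non-contiguous input, and transpose() returns a non-contiguous view, so it may be worth checking tensors with .is_contiguous() and inserting .contiguous() before the conv layers:

import torch

x = torch.randn(8, 1024, 6)   # hypothetical batch: [B, N, C]
x = x.transpose(2, 1)         # [B, C, N] view over the same storage
print(x.is_contiguous())      # False: cuDNN kernels may reject this layout
x = x.contiguous()            # materialize a contiguous copy
print(x.is_contiguous())      # True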

test Pointnet2 network

I want to use your code to test the PointNet++ network. I input data as a point cloud file with format [1,3,2048], and this error appears: "\model\pointnet_util.py", line 79, in farthest_point_sample, centroid = xyz[batch_indices, farthest, :].view(B, 1, 3) — RuntimeError: shape '[1, 1, 3]' is invalid for input of size 2047.

Data Path for Semantic Segmentation

Hi,

I noticed that the data path for semantic segmentation is expected to be
data/s3dis/Stanford3dDataset_v1.2_Aligned_Version/
(DATA_PATH = os.path.join(ROOT_DIR, 'data','s3dis', 'Stanford3dDataset_v1.2_Aligned_Version'))

unlike the one mentioned in the Readme, which is
data/Stanford3dDataset_v1.2_Aligned_Version/

Best regards,
Philipp

typos on the shape

Hi, @yanx27 ,

Thanks for updating your code. It has become clearer and more concise.
One place I found not updated is here; the shape should be:
new_points: sampled points data, [B, npoint, nsample, C+D]

How to create the h5 dataset file?

There is no "ply_data_train0.h5" file in data/MobileNet/.
I think I need some tool to convert the dataset to the h5 format.
Can you tell me how to do it?
Thanks a lot!
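
For what it's worth, the classic PointNet h5 files are just a point array plus a label array. Below is a minimal writer sketch with h5py; the dataset names 'data' and 'label' follow the common convention, so verify them against the loader you use.

import h5py
import numpy as np

# Hypothetical arrays: 100 clouds of 2048 xyz points with one class label each.
points = np.random.rand(100, 2048, 3).astype(np.float32)
labels = np.random.randint(0, 40, size=(100,)).astype(np.int64)

with h5py.File('ply_data_train0.h5', 'w') as f:
    f.create_dataset('data', data=points)    # per-point coordinates
    f.create_dataset('label', data=labels)   # per-cloud class ids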

device-side assert from cuda

When I tried the pointnet2_sem_seg model on another dataset, this error occurred:
device-side assert from cuda
This problem is probably caused by an index out of bounds, judging from my search. After I ran the model in CPU mode, it clearly pointed out the line where the error occurs:

==========index_points=========
points.shape: torch.Size([8, 512, 3])
idx.shape: torch.Size([8, 1024])
view_shape: [8, 1]
repeat_shape: [1, 1024]
==========index_points=========
points.shape: torch.Size([8, 512, 3])
idx.shape: torch.Size([8, 1024, 32])
view_shape: [8, 1, 1]
repeat_shape: [1, 1024, 32]
Traceback (most recent call last):
  File "train_semseg.py", line 277, in <module>
    main(args)
  File "train_semseg.py", line 180, in main
    seg_pred, trans_feat = classifier(points)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/idriver/work/wt/nuRadarScenes/models/pointnet2_sem_seg.py", line 26, in forward
    l1_xyz, l1_points = self.sa1(l0_xyz, l0_points)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/idriver/work/wt/nuRadarScenes/models/pointnet_util.py", line 202, in forward
    new_xyz, new_points = sample_and_group(self.npoint, self.radius, self.nsample, xyz, points)
  File "/home/idriver/work/wt/nuRadarScenes/models/pointnet_util.py", line 135, in sample_and_group
    grouped_xyz = index_points(xyz, idx) # [B, npoint, nsample, C]
  File "/home/idriver/work/wt/nuRadarScenes/models/pointnet_util.py", line 64, in index_points
    new_points = points[batch_indices, idx, :]
RuntimeError: index 512 is out of bounds for dim with size 512

So idx is out of bounds in the function index_points,
where idx is produced by the function below:

def query_ball_point(radius, nsample, xyz, new_xyz):
    """
    Input:
        radius: local region radius
        nsample: max sample number in local region
        xyz: all points, [B, N, 3]
        new_xyz: query points, [B, S, 3]
    Return:
        group_idx: grouped points index, [B, S, nsample]
    """
    device = xyz.device
    B, N, C = xyz.shape
    _, S, _ = new_xyz.shape
    group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1])  # candidate indices [B, S, N]
    sqrdists = square_distance(new_xyz, xyz)  # pairwise squared distances [B, S, N]
    group_idx[sqrdists > radius ** 2] = N  # mark points outside the ball with the sentinel value N
    group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample]  # keep the nsample smallest indices
    group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample])  # first valid neighbor per query
    mask = group_idx == N  # sentinel entries: fewer than nsample points fell inside the ball
    group_idx[mask] = group_first[mask]  # pad them with the first valid index
    return group_idx

When I change group_idx[sqrdists > radius ** 2] = N to N - 1 and mask = group_idx == N to N - 1, it works!

So is that right?

Pretrained Model

Hello,

Thanks a lot for the accessible code.
Can you please share with us the pretrained model for the classification test on ModelNet?

Thanks a lot

Semantic Segmentation ran out of input

Hi,
running train_semseg.py as instructed in the readme gives me the following errors:

python train_semseg.py --model pointnet2_sem_seg --test_area 5 --log_dir pointnet2_sem_seg
PARAMETER ...
Namespace(batch_size=16, decay_rate=0.0001, epoch=128, gpu='0', learning_rate=0.001, log_dir='pointnet2_sem_seg', lr_decay=0.7, model='pointnet2_sem_seg', npoint=4096, optimizer='Adam', step_size=10, test_area=5)
start loading training data ...
[1.124833 1.1816078 1. 2.2412012 2.340336 2.343587 1.7070498
2.0335796 1.8852289 3.8252103 1.7948895 2.7857335 1.3452303]
Totally 47623 samples in train set.
start loading test data ...
[1.1381457 1.2059734 1. 9.996554 2.5299199 2.0086675 2.1162353
1.9657742 2.4815738 4.727607 1.4018297 2.8840992 1.4809785]
Totally 18923 samples in test set.
The number of training data is: 47623
The number of test data is: 18923
No existing model, starting training from scratch...
**** Epoch 1 (1/128) ****
Learning rate:0.001000
BN momentum updated to: 0.100000
Traceback (most recent call last):
  File "train_semseg.py", line 274, in <module>
    main(args)
  File "train_semseg.py", line 168, in main
    for i, data in tqdm(enumerate(trainDataLoader), total=len(trainDataLoader), smoothing=0.9):
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\data\dataloader.py", line 278, in __iter__
    return _MultiProcessingDataLoaderIter(self)
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\utils\data\dataloader.py", line 682, in __init__
    w.start()
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'main.<locals>.<lambda>'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\API\AppData\Local\Programs\Python\Python36\lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

Any ideas? Thank you.
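
One pattern worth checking (my suggestion; "Can't pickle local object" is the classic Windows multiprocessing pitfall, not a confirmed fix from this thread): under Windows' spawn start method, DataLoader worker arguments must be picklable, so a lambda worker_init_fn defined inside main() fails. Setting num_workers=0 sidesteps it, or the function can be moved to module level, as in this sketch:

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

def worker_init_fn(worker_id):
    # Module-level functions are picklable under Windows' spawn start method,
    # unlike a lambda defined inside main().
    np.random.seed(worker_id)

if __name__ == '__main__':  # required on Windows when num_workers > 0
    dataset = TensorDataset(torch.randn(64, 9), torch.zeros(64, dtype=torch.long))
    loader = DataLoader(dataset, batch_size=16, shuffle=True,
                        num_workers=2, worker_init_fn=worker_init_fn)
    for points, labels in loader:
        pass  # num_workers=0 is the simplest fallback if spawning still fails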
