
Person Re-id in the 3D Space

Python 3.6 | License: MIT

[Pdf] [Code] [Chinese Intro (中文解读)]

Thanks for your attention. In this repo, we provide the code for the paper Parameter-Efficient Person Re-identification in the 3D Space (https://arxiv.org/abs/2006.04569), published in IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022.

News

  • 9 Mar 2023. Market-1501 is now available in 3D. Please check our single-image 2D-to-3D reconstruction work: https://github.com/layumi/3D-Magic-Mirror.

  • 29 Sep 2022. I updated the Circle loss, the parameter counts, and the latest snapshots trained on 4 datasets (Market, Duke, CUHK and MSMT) in /snapshots. You can test them directly after preparing the datasets.

  • 31 Jul 2021. Circle loss is added. For a fair comparison with the Circle loss, I re-trained almost all models with a larger batch size. The results are updated in the latest arXiv version.

  • 30 Oct 2020. I made three simple changes to the code to further improve performance:

  1. More training epochs help, since the model is trained from scratch;

  2. I replaced the dgl KNN with a more efficient matrix-multiplication-based implementation to accelerate training (DGL does not optimize KNN well, and matrix multiplication runs faster; see the sketch after the directory tree below);

  3. For MSMT-17 and Duke, some classes contain too many images while others are under-represented. I apply stratified sampling (--balance), which draws training samples from each class with equal probability; a minimal sketch appears at the end of the Training section.

  • You may directly download my generated 3D data of the Market-1501 dataset at [OneDrive] or [GoogleDrive] and skip the data preparation part. Just put the datasets in the same folder as the code:
├── 2DMarket\
│   ├── query/  
│   ├── train_all/
│   ├── ...
├── 3DMarket+bg\
│   ├── query/  
│   ├── train_all/
│   ├── ...
├── train.py
├── test.py 
├── ...
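The KNN change mentioned in the News is easy to reproduce. Below is a minimal sketch of brute-force k-nearest-neighbors via one batched matrix multiplication; it illustrates the idea, not the exact KNNGraphE code in this repo:

import torch

def knn_matmul(x, k):
    """Brute-force k-NN indices for a batch of point clouds.

    x: (B, N, 3) coordinates; returns (B, N, k) neighbor indices.
    Expands ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, so the only O(N^2)
    operation is a batched matmul, which GPUs run much faster than a
    generic KNN kernel. Note each point appears among its own neighbors.
    """
    inner = torch.bmm(x, x.transpose(1, 2))        # (B, N, N) dot products
    sq = (x * x).sum(dim=-1, keepdim=True)         # (B, N, 1) squared norms
    dist2 = sq - 2 * inner + sq.transpose(1, 2)    # pairwise squared distances
    return dist2.topk(k, dim=-1, largest=False).indices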

Prerequisites

  • Python 3.6 or 3.7
  • GPU Memory >= 4G (e.g., GTX1080)
  • PyTorch 1.4.0 (not the latest; newer versions are incompatible, since they changed the C++ extension interfaces)
  • dgl

Install

Here I use CUDA 10.1 by default.

conda create --name OG python=3.7
conda activate OG
conda install pytorch=1.4.0 torchvision=0.5.0 cudatoolkit=10.1 -c pytorch
conda install matplotlib requests
conda install -c dglteam dgl-cuda10.1=0.4.3
pip install -r requirements.txt

If you face any error, first try re-installing open3d; it often helps. Also make sure your gcc version is at least 5.4.0. If you do not have sudo permission, you can install gcc via conda as follows:

conda install -c brown-data-science gcc          (which is gcc-5.4.0)
gcc -v                                          (to see whether installation is successful)
ln libstdc++.so.6.0.26 libstdc++.so.6            (update lib in /anaconda3/env/OG/lib)
conda install gxx_linux-64
conda install gcc_linux-64
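After installation, a quick sanity check (assuming the environment above is active) confirms the pinned versions:

import torch, torchvision, dgl

print('torch:', torch.__version__)                  # expect 1.4.0
print('torchvision:', torchvision.__version__)      # expect 0.5.0
print('dgl:', dgl.__version__)                      # expect 0.4.3
print('CUDA available:', torch.cuda.is_available())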

Prepare Data

I removed all 3D faces and kept only the 3D point positions and RGB values to reduce the storage and loading burden. You can open my generated obj files with any text editor (such as vim).
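As an illustration of that format, here is a minimal loader for such files. It assumes the common 'v x y z r g b' vertex-color convention suggested by the description above; treat it as a sketch, not the repo's own reader:

import numpy as np

def load_colored_obj(path):
    """Read an .obj that stores only colored vertices (no faces).

    Each vertex line is assumed to be 'v x y z r g b'; returns
    (N, 3) xyz and (N, 3) rgb float32 arrays.
    """
    xyz, rgb = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 7 and parts[0] == 'v':
                vals = [float(p) for p in parts[1:]]
                xyz.append(vals[:3])
                rgb.append(vals[3:])
    return np.asarray(xyz, np.float32), np.asarray(rgb, np.float32)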

  • 2D Part: Download Market-1501, DukeMTMC-reID or MSMT17 and unzip it in ../

Split the dataset and arrange the images into per-ID folders with the following code:

python prepare_market.py # You may need to change the download path. 
python prepare_duke.py
python prepare_MSMT.py

Link the 2D dataset into this directory:

ln -s ../Your_Market/pytorch  ./2DMarket
ln -s ../Your_Duke/pytorch  ./2DDuke
ln -s ../Your_MSMT/pytorch  ./2DMSMT

Training

Results below are reported as Rank@1 accuracy with mAP in parentheses.

    1. Market-1501

OG-Net 86.82 (69.02)

python train_M.py --batch-size 36 --name Efficient_ALL_Dense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1500_wa0.9_GeM_bn2_class3_amsgrad --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1500  --feature_dims 64,128,256,512   --efficient  --wa --wa_start 0.9 --gem --norm_layer bn2   --amsgrad --class 3

OG-Net + Circle 87.80 (70.56)

python train_M.py --batch-size 36 --name Efficient_ALL_Dense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1500_wa0.9_GeM_bn2_balance_circle_amsgrad_gamma64 --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1500  --feature_dims 64,128,256,512   --efficient  --wa --wa_start 0.9 --gem --norm_layer bn2 --balance  --circle --amsgrad --gamma 64

OG-Net-Small 86.79 (67.92)

python train_M.py --batch-size 36 --name Efficient_ALL_SDense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_wa0.9_GeM_bn2_balance_amsgrad --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,192,384   --efficient  --wa --wa_start 0.9 --gem --norm_layer bn2 --balance  --amsgrad 

OG-Net-Small + Circle 87.38 (70.48)

python train_M.py --batch-size 36 --name Efficient_ALL_SDense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1500_wa0.9_GeM_bn2_balance_circle_amsgrad_gamma64 --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1500  --feature_dims 48,96,192,384   --efficient  --wa --wa_start 0.9 --gem --norm_layer bn2 --balance  --circle --amsgrad --gamma 64

OG-Net-Deep + Circle 88.81 (72.91)

python train_M.py --batch-size 30 --name Market_Efficient_ALL_2SDDense_b30_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_id2_bn_k9_conv2_balance  --id_skip 2 --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,96,192,192,384,384  --efficient --k 9  --num_conv 2  --dataset 2DMarket --balance --gem --norm_layer bn2 --circle --amsgrad --gamma 64

    2. DukeMTMC-reID

OG-Net-Small 77.33 (57.74)

python train_M.py --batch-size 36 --name reEfficient_Duke_ALL_SDense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_class_GeM_bn2_amsgrad --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,192,384   --efficient --dataset 2DDuke --class --wa --wa_start 0.9 --gem --norm_layer bn2  --amsgrad 

OG-Net-Small + Circle 77.15 (58.51)

python train_M.py --batch-size 36 --name reEfficient_Duke_ALL_SDense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_balance_GeM_bn2_circle_amsgrad --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,192,384   --efficient --dataset 2DDuke --balance --wa --wa_start 0.9 --gem --norm_layer bn2 --circle --amsgrad

OG-Net 76.53 (57.92)

python train_M.py --batch-size 36 --name reEfficient_Duke_ALL_Dense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_class1_GeM_bn2_amsgrad --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 64,128,256,512   --efficient --dataset 2DDuke --class 1 --wa --wa_start 0.9 --gem --norm_layer bn2 --amsgrad  

OG-Net + Circle 78.37 (60.07)

python train_M.py --batch-size 36 --name reEfficient_Duke_ALL_Dense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_balance_GeM_bn2_circle_amsgrad_gamma64 --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 64,128,256,512   --efficient --dataset 2DDuke --balance --wa --wa_start 0.9 --gem --norm_layer bn2 --circle --amsgrad --gamma 64

OG-Net-Deep 76.97 (59.23)

python train_M.py --batch-size 36 --name Duke_Efficient_ALL_2SDDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_id2_bn_k9_conv2_balance_noCircle  --id_skip 2 --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,96,192,192,384,384  --efficient --k 9  --num_conv 2  --dataset 2DDuke --balance --gem --norm_layer bn2 --amsgrad 

OG-Net-Deep + Circle 78.50 (60.7)

python train_M.py --batch-size 36 --name Duke_Efficient_ALL_2SDDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_id2_bn_k9_conv2_balance  --id_skip 2 --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,96,192,192,384,384  --efficient --k 9  --num_conv 2  --dataset 2DDuke --balance --gem --norm_layer bn2 --circle --amsgrad --gamma 64

    3. CUHK-NP

OG-Net 44.00 (39.28)

python train_M.py --batch-size 36 --name Efficient_CUHK_ALL_Dense_b36_lr8_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_class1_gem_bn2_amsgrad_wd1e-3 --slim 0.5 --flip --scale  --lrRate 8e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1 --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 64,128,256,512    --efficient --dataset 2DCUHK --class 1  --gem --norm_layer bn2  --amsgrad  --wd 1e-3 

OG-Net + Circle 48.29 (43.73)

python train_M.py --batch-size 36 --name Efficient_CUHK_ALL_Dense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_class3_gem_bn2_circle_amsgrad_wd1e-3_gamma96 --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 64,128,256,512    --efficient --dataset 2DCUHK --class 3 --gem --norm_layer bn2 --circle --amsgrad --wd 1e-3 --gamma 96

OG-Net-Small 43.07 (38.06)

python train_M.py --batch-size 36 --name Efficient_CUHK_ALL_SDense_b36_lr10_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_gem_bn2_amsgrad_wd1e-3_class1 --slim 0.5 --flip --scale  --lrRate 10e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam   --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,192,384    --efficient --dataset 2DCUHK --gem --norm_layer bn2  --amsgrad --wd 1e-3  --class 1

OG-Net-Small + Circle 46.43 (41.79)

python train_M.py --batch-size 36 --name Efficient_CUHK_ALL_SDense_b36_lr8_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_balance_gem_bn2_circle_amsgrad_wd1e-3_gamma64 --slim 0.5 --flip --scale  --lrRate 8e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam   --init 768  --cluster xyzrgb  --train_all   --num-epoch 1000  --feature_dims 48,96,192,384    --efficient --dataset 2DCUHK --balance --gem --norm_layer bn2 --circle --amsgrad --wd 1e-3 --gamma 64

OG-Net-Deep 45.71 (41.15)

python train_M.py --batch-size 36 --name CUHK_Efficient_ALL_2SDDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1500_id2_bn_k9_conv2_class3_Nocircle  --id_skip 2 --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1500  --feature_dims 48,96,96,192,192,384,384  --efficient --k 9  --num_conv 2  --dataset 2DCUHK --class 3 --gem --norm_layer bn2 --amsgrad 

OG-Net-Deep + Circle 49.43 (45.71)

python train_M.py --batch-size 36 --name CUHK_Efficient_ALL_2SDDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1500_id2_bn_k9_conv2_balance  --id_skip 2 --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 1500  --feature_dims 48,96,96,192,192,384,384  --efficient --k 9  --num_conv 2  --dataset 2DCUHK --balance --gem --norm_layer bn2 --circle --amsgrad --gamma 64

    4. MSMT-17

OG-Net 44.27 (21.57)

python train_M.py --batch-size 36 --name reEfficient_MSMT_ALL_Dense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e600_balance_GeM_bn2_circle_amsgrad_gamma64_Nocircle --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 600  --feature_dims 64,128,256,512   --efficient --dataset 2DMSMT --balance --wa --wa_start 0.9 --gem --norm_layer bn2  --amsgrad 

OG-Net + Circle 45.28 (22.81)

python train_M.py --batch-size 36 --name reEfficient_MSMT_ALL_Dense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e600_balance_GeM_bn2_circle_amsgrad_gamma64 --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 600  --feature_dims 64,128,256,512   --efficient --dataset 2DMSMT --balance --wa --wa_start 0.9 --gem --norm_layer bn2 --circle --amsgrad --gamma 64

OG-Net-Small 43.84 (21.79)

python train_M.py --batch-size 36 --name reEfficient_MSMT_ALL_SDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e600_balance_GeM_bn2_circle_amsgrad_gamma64 --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 600  --feature_dims 48,96,192,384   --efficient --dataset 2DMSMT --balance --wa --wa_start 0.9 --gem --norm_layer bn2 --circle --amsgrad --gamma 64

OG-Net-Small + Circle 42.44 (20.31)

python train_M.py --batch-size 36 --name reEfficient_MSMT_ALL_SDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e600_class_GeM_bn2_amsgrad --slim 0.5 --flip --scale  --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 600  --feature_dims 48,96,192,384   --efficient --dataset 2DMSMT --class 1  --wa --wa_start 0.9 --gem --norm_layer bn2  --amsgrad 

OG-Net-Deep 44.56 (21.41)

python train_M.py --batch-size 30 --name MSMT_Efficient_ALL_2SDDense_b30_lr4_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e600_id2_bn_k9_conv2_balance_nocircle  --id_skip 2 --slim 0.5 --flip --scale  --lrRate 4e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 600  --feature_dims 48,96,96,192,192,384,384  --efficient --k 9  --num_conv 2  --dataset 2DMSMT --balance --gem --norm_layer bn2 --amsgrad 

OG-Net-Deep + Circle 47.32 (24.07)

python train_M.py --batch-size 30 --name MSMT_Efficient_ALL_2SDDense_b30_lr4_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e600_id2_bn_k9_conv2_balance  --id_skip 2 --slim 0.5 --flip --scale  --lrRate 4e-4 --gpu_ids 0 --warm_epoch 10  --erase 0  --droprate 0.7   --use_dense  --bg 1  --adam  --init 768  --cluster xyzrgb  --train_all   --num-epoch 600  --feature_dims 48,96,96,192,192,384,384  --efficient --k 9  --num_conv 2  --dataset 2DMSMT --balance --gem --norm_layer bn2 --circle --amsgrad --gamma 64
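The --balance flag used in many commands above implements the stratified sampling described in the News. A minimal sketch of the idea with PyTorch's WeightedRandomSampler (the dataset.labels attribute below is hypothetical; adapt it to however your dataset exposes identity labels):

import torch
from collections import Counter
from torch.utils.data import WeightedRandomSampler

def make_balanced_sampler(labels):
    """Draw every identity class with equal probability.

    Each sample is weighted by 1 / (size of its class), so identities with
    many images (e.g., in MSMT-17 or Duke) no longer dominate an epoch.
    """
    counts = Counter(labels)
    weights = [1.0 / counts[y] for y in labels]
    return WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)

# e.g. DataLoader(dataset, batch_size=36, sampler=make_balanced_sampler(dataset.labels))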
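Likewise, --circle enables the Circle loss (Sun et al., CVPR 2020) with the --gamma scale seen above. Below is a self-contained sketch of the pair-wise formulation; it may differ in detail from this repo's implementation:

import torch
import torch.nn.functional as F

def circle_loss(feat, labels, m=0.25, gamma=64.0):
    """Pair-wise Circle loss over a mini-batch of embeddings.

    feat: (B, d) features; labels: (B,) identity ids. gamma matches the
    --gamma 64 used above; m is the relaxation margin.
    """
    feat = F.normalize(feat, dim=1)
    sim = feat @ feat.t()                             # (B, B) cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=feat.device)
    pos_mask, neg_mask = same & ~eye, ~same

    alpha_p = torch.clamp_min(1 + m - sim, 0)         # harder positives weigh more
    alpha_n = torch.clamp_min(sim + m, 0)             # harder negatives weigh more
    logit_p = -gamma * alpha_p * (sim - (1 - m))
    logit_n = gamma * alpha_n * (sim - m)

    # log-sum-exp over valid pairs only; -inf removes the masked entries
    lse_p = torch.logsumexp(logit_p.masked_fill(~pos_mask, float('-inf')), dim=1)
    lse_n = torch.logsumexp(logit_n.masked_fill(~neg_mask, float('-inf')), dim=1)
    return F.softplus(lse_p + lse_n).mean()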

Evaluation

  • Market-1501
python test_M.py  --name  Market_Efficient_ALL_2SDDense_b30_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_id2_bn_k9_conv2_balance
  • DukeMTMC-reID
python test_M.py  --data 2DDuke --name Duke_Efficient_ALL_2SDDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_id2_bn_k9_conv2_balance
  • CUHK
python test_M.py  --data 2DCUHK --name CUHK_Efficient_ALL_2SDDense_b36_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1500_id2_bn_k9_conv2_balance
  • MSMT-17
python test_MSMT.py  --name MSMT_Efficient_ALL_2SDDense_b30_lr4_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e600_id2_bn_k9_conv2_balance
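The numbers reported in the Training section are what these test scripts compute. For intuition only, here is a toy Rank@1 / mAP evaluation over cosine similarity; it ignores the camera-id filtering that the real Market-1501 protocol applies and assumes every query has at least one gallery match:

import numpy as np

def rank1_and_map(qf, ql, gf, gl):
    """qf/gf: (Q, d)/(G, d) L2-normalized features; ql/gl: identity labels."""
    sim = qf @ gf.T                                  # (Q, G) cosine similarity
    r1, aps = 0.0, []
    for i in range(len(qf)):
        order = np.argsort(-sim[i])                  # gallery sorted by similarity
        match = gl[order] == ql[i]
        r1 += match[0]                               # Rank@1: top match is correct
        hits = np.nonzero(match)[0]
        aps.append(np.mean((np.arange(len(hits)) + 1) / (hits + 1)))  # average precision
    return r1 / len(qf), float(np.mean(aps))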

Pre-trained Models

Since OG-Net is really small, I have included the trained models in this GitHub repo under ./snapshot.

If a model was trained on CUHK, Duke or MSMT, the dataset name is included in the model name; otherwise, the model was trained on Market.

[ModelNet Performance]

I added the OG-Net code to https://github.com/layumi/dgcnn.
Results on ModelNet are 93.3 Top-1 accuracy / 90.5 mean-class Top-1 accuracy.

Citation

If you find this work useful, please cite it in your paper. Thanks a lot.

@article{zheng2022person,
  title={Parameter-Efficient Person Re-identification in the 3D Space},
  author={Zheng, Zhedong and Wang, Xiaohan and Zheng, Nenggan and Yang, Yi},
  journal={IEEE Transactions on Neural Networks and Learning Systems (TNNLS)},
  doi={10.1109/TNNLS.2022.3214834},
  note={\mbox{doi}:\url{10.1109/TNNLS.2022.3214834}},
  year={2022}
}

Related Work

We thank the great works of hmr, DGL, DGCNN and PointNet++, on whose code this repo builds; the baseline models used in the paper are modified from their implementations.

Acknowledgement

I would like to thank Yaxiong Wang, Yuhang Ding, Qian Liu, Chuchu Han, Tianqi Tang, Zonghan Wu and Qipeng Guo for their helpful comments and suggestions.


Issues

3DMarket+bg obj file error?

[screenshot]

Hello. I downloaded the data that you provided on Google Drive, but when I opened the 3DMarket+bg obj file, the result came out like the screenshot above. I opened it with MeshLab on Windows 10.

I am not sure what the problem is, since I downloaded it as described in the Prepare Data section. Could you give me some help? Thanks in advance.

how to set 8192 points for each human body?

Hello, Dr. Zheng. Thank you very much for your excellent work. I would like to know how to set 8192 points for each human body. Where can this part of the code be found in the HMR project? Thank you very much for your reply.

Failed to build pointnet2-ops

output log:

gcc -pthread -B /home/ubuntu/miniconda3/envs/prid/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -Ipointnet2_ops/_ext-src/include -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include/TH -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-9.2/include -I/home/ubuntu/miniconda3/envs/prid/include/python3.6m -c pointnet2_ops/_ext-src/src/bindings.cpp -o build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/bindings.o -O3 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/usr/local/cuda-9.2/bin/nvcc -Ipointnet2_ops/_ext-src/include -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include/TH -I/home/ubuntu/miniconda3/envs/prid/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-9.2/include -I/home/ubuntu/miniconda3/envs/prid/include/python3.6m -c pointnet2_ops/_ext-src/src/sampling_gpu.cu -o build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/sampling_gpu.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -O3 -Xfatbin -compress-all -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_ext -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_37,code=compute_37 -gencode=arch=compute_61,code=sm_61 -gencode=arch=compute_37,code=sm_37 -gencode=arch=compute_62,code=sm_62 -std=c++11
nvcc fatal : Unsupported gpu architecture 'compute_75'
error: command '/usr/local/cuda-9.2/bin/nvcc' failed with exit status 1

ERROR: Failed building wheel for pointnet2-ops
Running setup.py clean for pointnet2-ops
Failed to build pointnet2-ops
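nvcc from CUDA 9.2 cannot emit code for compute_75 (Turing), which is what the error above complains about. A common workaround, assuming the local GPU is an older architecture that CUDA 9.2 does support, is to restrict the build targets via the TORCH_CUDA_ARCH_LIST environment variable before rebuilding (the install path below is an assumption; adjust it to where the extension lives):

export TORCH_CUDA_ARCH_LIST="6.0;6.1;7.0"
pip install pointnet2_ops_lib/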

Problem with swa_utils

Hi. Can you help, please? I cannot figure out this error. I use pytorch=1.4.0, torchvision=0.5.0, cudatoolkit=10.1.
[screenshot of the error]

Can PyTorch 1.8 be used?

As in the title: I only have a 30-series GPU, and the 30 series seems to require CUDA 11.1 or above, but CUDA 11.1 is only supported from PyTorch 1.7.1 onwards. The home page says pytorch=1.4, so I would like to ask whether this code can also run with PyTorch 1.8. Many thanks.

Is the point order scrambled?

Hi, I tried adding the face information back, but the point order does not match the face information, so the resulting obj file has the wrong shape.
Is this a mistake in my code? Is there any way to obtain the correct point order?
[screenshots]

A small question about the person-reid-3d code

Hi, sorry to bother you. Is pointnet2_ops_lib a package of the environment? I can install it on Ubuntu, but the installation fails on Windows. Do I need to download something else on Windows? Is there a Windows version of this package? Thanks for your answer.

Using

Hi, I did everything according to the instructions, trained the model, and got the results. Everything is OK. Could you please tell me: I want to test my own pictures, or pictures of people taken from video. Can I do this, and how?

TypeError: get_model_complexity_info()

Hello, I met a problem when I tried to run train_M.py without any modification to the code, as below:

Using backend: pytorch
[full ModelE_dense structure printout elided]
torch.Size([1, 4096, 6])
Traceback (most recent call last):
[runpy/debugpy frames elided]
File "/home/uisee/yongtao/proj/person-reid-3d/train_M.py", line 362, in <module>
macs, params = get_model_complexity_info(model.cuda(), batch0.cuda(), ((round(6890*opt.slim), 3) ), as_strings=True, print_per_layer_stat=False, verbose=True)
TypeError: get_model_complexity_info() got multiple values for argument 'print_per_layer_stat'

I think there is something wrong with the input to get_model_complexity_info(). Do you know how to fix it?

A problem in the 2D-to-3D image code

cv2.error: OpenCV(4.5.4-dev) :-1: error: (-5:Bad argument) in function 'circle'

Overload resolution failed:

  • Scalar value for argument 'color' is not numeric
  • Scalar value for argument 'color' is not numeric

I get this error. Could you please tell me how to solve it? Many thanks for your help.

Issue with the process of generating the 3D dataset

Thanks for your excellent work. When I generate the 3D dataset with your code, I get the following output and then the program stops. Have you ever encountered this case?

Restoring checkpoint /home/yinjunhui/per-id/3d/hmr/src/models/model.ckpt-667589..
WARNING:tensorflow:From /home/yinjunhui/anaconda3/envs/hmr/lib/python2.7/site-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.

Lack of ./snapshot

Hi, I have been reading your paper recently and would like to understand it with the help of the demo. According to your readme.md there should be a trained model, but I cannot find it in the repo?

RuntimeError: invalid argument 5: k not in range for dimension

Hi,

Thank you for sharing your work.

I ran into an issue running train_M.sh on the supplied generated 3D data of the Market-1501 dataset:

Number of training parameters: 2.34 M
Epoch #0 Validating
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py:3335: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/_methods.py:154: RuntimeWarning: invalid value encountered in true_divide
ret, rcount, out=ret, casting='unsafe', subok=False)
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/fromnumeric.py:3335: RuntimeWarning: Mean of empty slice.
out=out, **kwargs)
/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/numpy/core/_methods.py:154: RuntimeWarning: invalid value encountered in true_divide
ret, rcount, out=ret, casting='unsafe', subok=False)

0%| | 0/1617 [00:00<?, ?it/s]
Traceback (most recent call last):
File "train_M.py", line 298, in <module>
train(model, optimizer, scheduler, train_loader, dev, epoch)
File "train_M.py", line 129, in train
logits = model(xyz.detach(), rgb.detach(), istrain=True)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
return self.module(*inputs[0], **kwargs[0])
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/ichec/work/iecom001b/person-reid-3d/model.py", line 171, in forward
g = self.nng(xyz, istrain=istrain and self.graph_jitter)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/ichec/work/iecom001b/person-reid-3d/KNNGraphE.py", line 102, in forward
return knn_graphE(x, self.k, istrain)
File "/ichec/work/iecom001b/person-reid-3d/KNNGraphE.py", line 51, in knn_graphE
k_indices = F.argtopk(dist, k, 2, descending=False)
File "/ichec/home/users/niallomahony/.conda/envs/tfgpu/lib/python3.6/site-packages/dgl/backend/pytorch/tensor.py", line 132, in argtopk
return th.topk(input, k, dim, largest=descending)[1]
RuntimeError: invalid argument 5: k not in range for dimension at /opt/conda/conda-bld/pytorch_1579027003190/work/aten/src/THC/generic/THCTensorTopK.cu:23

I followed all the installation steps but had to use CUDA 10.0 (with cudatoolkit 10.0 and dgl-cu100), as that is what is available on the HPC.

Hello, I installed pytorch 1.4 as described, but training fails with the error below. It seems the version is too old, and switching to 1.12 or above raises other errors. What could be the reason?

(OG) francisjiang@francisjiang:~/desktop/person-reid-3d$ python train_M.py --batch-size 30 --name Market_Efficient_ALL_2SDDense_b30_lr6_flip_slim0.5_warm10_scale_e0_d7+bg_adam_init768_clusterXYZRGB_e1000_id2_bn_k9_conv2_balance --id_skip 2 --slim 0.5 --flip --scale --lrRate 6e-4 --gpu_ids 0 --warm_epoch 10 --erase 0 --droprate 0.7 --use_dense --bg 1 --adam --init 768 --cluster xyzrgb --train_all --num-epoch 1000 --feature_dims 48,96,96,192,192,384,384 --efficient --k 9 --num_conv 2 --dataset 2DMarket --balance --gem --norm_layer bn2 --circle --amsgrad --gamma 64
/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libtorch_cpu.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
Traceback (most recent call last):
File "train_M.py", line 6, in <module>
from market3d import Market3D
File "/home/francisjiang/desktop/person-reid-3d/market3d.py", line 1, in <module>
from torchvision import datasets
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/__init__.py", line 7, in <module>
from torchvision import models
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/models/__init__.py", line 2, in <module>
from .convnext import *
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/models/convnext.py", line 8, in <module>
from ..ops.misc import Conv2dNormActivation, Permute
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/ops/__init__.py", line 2, in <module>
from .boxes import (
File "/home/francisjiang/anaconda3/envs/OG/lib/python3.7/site-packages/torchvision/ops/boxes.py", line 78, in <module>
@torch.jit._script_if_tracing
AttributeError: module 'torch.jit' has no attribute '_script_if_tracing'
