
point-bert's Introduction

Point-BERT: Pre-Training 3D Point Cloud Transformers with Masked Point Modeling

Created by Xumin Yu*, Lulu Tang*, Yongming Rao*, Tiejun Huang, Jie Zhou, Jiwen Lu

[arXiv] [Project Page] [Models]

This repository contains the PyTorch implementation for Point-BERT: Pre-Training 3D Point Cloud Transformers with Masked Point Modeling (CVPR 2022).

Point-BERT is a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds. Inspired by BERT, we devise a Masked Point Modeling (MPM) task to pre-train point cloud Transformers. Specifically, we first divide a point cloud into several local patches, and a point cloud Tokenizer is devised via a discrete Variational AutoEncoder (dVAE) to generate discrete point tokens containing meaningful local information. Then, we randomly mask some patches of the input point cloud and feed them into the backbone Transformer. The pre-training objective is to recover the original point tokens at the masked locations under the supervision of the point tokens produced by the Tokenizer.
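
In pseudocode, one pre-training step looks roughly like this (a conceptual sketch only; farthest_point_sample, knn_group, sample_block_mask and the call signatures are illustrative placeholders, not this repo's exact API):

import torch
import torch.nn.functional as F

def mpm_pretrain_step(points, dvae_tokenizer, transformer):
    centers = farthest_point_sample(points)        # placeholder: choose patch centers
    patches = knn_group(points, centers)           # placeholder: local patches around centers
    with torch.no_grad():
        gt_tokens = dvae_tokenizer(patches)        # discrete tokens from the frozen dVAE
    mask = sample_block_mask(patches)              # placeholder: mask a ratio of the patches
    logits = transformer(patches, centers, mask)   # predict a token for every patch
    # recover the Tokenizer's tokens at the masked positions
    return F.cross_entropy(logits[mask], gt_tokens[mask])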

Pretrained Models

model | dataset | config | url
----- | ------- | ------ | ---
dVAE | ShapeNet | config | Tsinghua Cloud / BaiDuYun (code: 26d3)
Point-BERT | ShapeNet | config | Tsinghua Cloud / BaiDuYun (code: jvtg)

model | dataset | Acc. | Acc. (vote) | config | url
----- | ------- | ---- | ----------- | ------ | ---
Transformer | ModelNet | 92.67 | 93.24 | config | Tsinghua Cloud / BaiDuYun (code: tqow)
Transformer | ModelNet | 92.91 | 93.48 | config | Tsinghua Cloud / BaiDuYun (code: tcin)
Transformer | ModelNet | 93.19 | 93.76 | config | Tsinghua Cloud / BaiDuYun (code: k343)
Transformer | ScanObjectNN | 88.12 | -- | config | Tsinghua Cloud / BaiDuYun (code: f0km)
Transformer | ScanObjectNN | 87.43 | -- | config | Tsinghua Cloud / BaiDuYun (code: k3cb)
Transformer | ScanObjectNN | 83.07 | -- | config | Tsinghua Cloud / BaiDuYun (code: rxsw)

Usage

Requirements

  • PyTorch >= 1.7.0
  • python == 3.7
  • CUDA >= 10.2
  • GCC >= 4.9
  • torchvision
  • timm
  • open3d
  • tensorboardX
pip install -r requirements.txt

Building PyTorch Extensions for Chamfer Distance, PointNet++ and kNN

NOTE: PyTorch >= 1.7 and GCC >= 4.9 are required.

# Chamfer Distance
bash install.sh
# PointNet++
pip install "git+git://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops&subdirectory=pointnet2_ops_lib"
# GPU kNN
pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
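
Several of the issues below trace back to half-built extensions, so a quick import-and-run check after building can save debugging time. A minimal sanity script, assuming the builds above succeeded and a CUDA GPU is available (chamfer is the module compiled by install.sh):

import torch
import chamfer                                         # compiled by install.sh (extensions/chamfer_dist)
from pointnet2_ops import pointnet2_utils
from knn_cuda import KNN

xyz = torch.rand(2, 1024, 3).cuda()
idx = pointnet2_utils.furthest_point_sample(xyz, 64)   # FPS: 64 center indices per cloud
dist1, dist2, _, _ = chamfer.forward(xyz, xyz)         # Chamfer distance to itself, should be ~0
_, nbrs = KNN(k=32, transpose_mode=True)(xyz, xyz)     # 32 nearest neighbors per point
print(idx.shape, float(dist1.max()), nbrs.shape)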

Dataset

We use ShapeNet for training the dVAE and pre-training the Point-BERT models, and we fine-tune the Point-BERT models on ModelNet, ScanObjectNN, and ShapeNetPart. Details of the datasets can be found in DATASET.md.

dVAE

To train a dVAE by yourself, simply run:

bash scripts/train.sh <GPU_IDS> \
    --config cfgs/ShapeNet55_models/dvae.yaml \
    --exp_name <name>

To visualize the reconstruction results of a pre-trained dVAE, run (results are saved to ./vis by default):

bash ./scripts/test.sh <GPU_IDS> \
    --ckpts <path> \
    --config cfgs/ShapeNet55_models/dvae.yaml \
    --exp_name <name>

Point-BERT pre-training

To pre-train the Point-BERT models on ShapeNet, simply run (fill in the dVAE ckpt path in cfgs/Mixup_models/Point-BERT.yaml first):

bash ./scripts/dist_train_BERT.sh <NUM_GPU> <port> \
    --config cfgs/Mixup_models/Point-BERT.yaml \
    --exp_name pointBERT_pretrain
    [--val_freq 10]

val_freq controls how frequently the Transformer is evaluated on ModelNet40 with a Linear SVM.
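
For reference, the Linear SVM protocol amounts to fitting a linear classifier on frozen features; a minimal sketch with scikit-learn (train_feats/test_feats stand for features extracted by the frozen Transformer, and the C value is a hypothetical choice, not necessarily this repo's setting):

import numpy as np
from sklearn.svm import LinearSVC

def linear_svm_acc(train_feats, train_labels, test_feats, test_labels):
    clf = LinearSVC(C=0.01)              # hypothetical regularization strength
    clf.fit(train_feats, train_labels)   # fit on frozen pre-trained features
    return np.mean(clf.predict(test_feats) == test_labels)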

Fine-tuning on downstream tasks

We finetune our Point-BERT on 4 downstream tasks: Classification on ModelNet40, Few-shot learning on ModelNet40, Transfer learning on ScanObjectNN, and Part segmentation on ShapeNetPart.

ModelNet40

To finetune a pre-trained Point-BERT model on ModelNet40, simply run:

# 1024 points
bash ./scripts/train_BERT.sh <GPU_IDS> \
    --config cfgs/ModelNet_models/PointTransformer.yaml \
    --finetune_model \
    --ckpts <path> \
    --exp_name <name>
# 4096 points
bash ./scripts/train_BERT.sh <GPU_IDS> \
    --config cfgs/ModelNet_models/PointTransformer_4096point.yaml \
    --finetune_model \
    --ckpts <path> \
    --exp_name <name>
# 8192 points
bash ./scripts/train_BERT.sh <GPU_IDS> \
    --config cfgs/ModelNet_models/PointTransformer_8192point.yaml \
    --finetune_model \
    --ckpts <path> \
    --exp_name <name>

To evaluate a model finetuned on ModelNet40, simply run:

bash ./scripts/test_BERT.sh <GPU_IDS> \
    --config cfgs/ModelNet_models/PointTransformer.yaml \
    --ckpts <path> \
    --exp_name <name>

Few-shot Learning on ModelNet40

We follow the few-shot setting of previous work.

First, generate your own few-shot learning split or use the same split as us (see DATASET.md).

# generate few-shot learning split
cd datasets/
python generate_few_shot_data.py
# train and evaluate the Point-BERT
bash ./scripts/train_BERT.sh <GPU_IDS> \
    --config cfgs/Fewshot_models/PointTransformer.yaml \
    --finetune_model \
    --ckpts <path> \
    --exp_name <name> \
    --way <int> \
    --shot <int> \
    --fold <int>
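
For example, a 5-way 10-shot run on fold 0 could look like this (the checkpoint path and experiment name are placeholders):

bash ./scripts/train_BERT.sh 0 \
    --config cfgs/Fewshot_models/PointTransformer.yaml \
    --finetune_model \
    --ckpts ./Point-BERT.pth \
    --exp_name fewshot_5way_10shot_fold0 \
    --way 5 --shot 10 --fold 0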

ScanObjectNN

To finetune a pre-trained Point-BERT model on ScanObjectNN, simply run:

bash ./scripts/train_BERT.sh <GPU_IDS>  \
    --config cfgs/ScanObjectNN_models/PointTransformer_hardest.yaml \
    --finetune_model \
    --ckpts <path> \
    --exp_name <name>

To evaluate a model on ScanObjectNN, simply run:

bash ./scripts/test_BERT.sh <GPU_IDS> \
    --config cfgs/ScanObjectNN_models/PointTransformer_hardest.yaml \
    --ckpts <path> \
    --exp_name <name>

Part Segmentation

To finetune a pre-trained Point-BERT model on ShapeNetPart, simply run:

cd segmentation
python train_partseg.py \
    --model PointTransformer \
    --gpu <GPU_IDS> \
    --pretrain_weight <path> \
    --log_dir <name> 

To evaluate a model on ShapeNetPart, simply run:

python test_partseg.py \
    --gpu <GPU_IDS> \
    --log_dir <name> 

Visualization

Masked point cloud reconstruction results from our Point-BERT model trained on ShapeNet:

License

MIT License

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{yu2021pointbert,
  title={Point-BERT: Pre-Training 3D Point Cloud Transformers with Masked Point Modeling},
  author={Yu, Xumin and Tang, Lulu and Rao, Yongming and Huang, Tiejun and Zhou, Jie and Lu, Jiwen},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

point-bert's People

Contributors

henryzhengr · julie-tang00 · raoyongming · satyajitghana · yuxumin


point-bert's Issues

failed to build pointnet2_ops_lib

When running the command to build pointnet2_ops_lib, some errors occur:
......

1.10.0.git.kitware.jobserver-1
g++ -pthread -shared -B /mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.4/compiler_compat -L/mnt/cache/share/spring/conda_envs/miniconda3/envs/s0.3.4/lib -Wl,-rpath=/mnt/cache/share/spring/conda_env
s/miniconda3/envs/s0.3.4/lib -Wl,--no-as-needed -Wl,--sysroot=/ /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/group_points.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86
_64-3.6/pointnet2_ops/_ext-src/src/bindings.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/ball_query.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_op
s/_ext-src/src/sampling.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/interpolate.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/samp
ling_gpu.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/group_points_gpu.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/ball_query_gpu
.o /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/interpolate_gpu.o -L/mnt/cache/liyanjie/.local/lib/python3.6/site-packages/torch/lib -L/mnt/cache/share/cuda-10.2/lib64 -lc1
0 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.6/pointnet2_ops/_ext.cpython-36m-x86_64-linux-gnu.so
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/group_points.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/bindings.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/ball_query.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/sampling.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/interpolate.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/sampling_gpu.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/group_points_gpu.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/ball_query_gpu.o: No such file or directory
g++: error: /tmp/pip-req-build-cn253nrh/build/temp.linux-x86_64-3.6/pointnet2_ops/_ext-src/src/interpolate_gpu.o: No such file or directory

Pre-training Point-BERT and dVAE on other datasets

Hey,
In the runner and runner_BERT_pretrain scripts there is a check for the dataset name, and if it's not ShapeNet (or ModelNet for runner_BERT_pretrain), a NotImplementedError is thrown.
What do I need to change/add/remove in order to run the pre-training and the dVAE on other datasets (ScanObjectNN or ScanNet, for example)?

Thanks,
Eliahu

Details about DGCNN

In your code, it seems that the graph is not recomputed in the feature space at each layer. So a static graph is used?
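
For context, the dynamic-vs-static distinction being asked about, as a sketch (knn and get_graph_feature are hypothetical helper names standing in for the usual DGCNN utilities):

def edgeconv_stack(x, layers, k):
    for layer in layers:
        idx = knn(x, k)                        # dynamic DGCNN: recompute kNN on current features
        x = layer(get_graph_feature(x, idx))   # a static graph would reuse the input-space idx instead
    return x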

Linear SVM result drops

Hi,

Thanks for your great work! I have one quick question here.

I am trying to pre-train Point-BERT by myself, and I notice that the Linear SVM evaluation result drops as pre-training progresses. Is this normal? I have attached my training curve below.

[training curve screenshot]

ninja: build stopped: subcommand failed.

Hi, thanks for your selfless code sharing. I encountered an error when I ran
"bash ./scripts/train_BERT.sh 0 --config cfgs/ModelNet_models/PointTransformer.yaml --finetune_model --ckpts checkpoints --exp_name modelnet40_1024_test": the build stops with "ninja: build stopped: subcommand failed." I tried some solutions but still can't resolve it; could you give some advice? I would appreciate it if you could reply. Thank you very much.

segmentation fault while evaluating

When validation starts during pre-training, a segmentation fault occurs:

......
2021-12-27 11:32:16,067 - Point-BERT - INFO - config.model.transformer_config.return_all_tokens : False
2021-12-27 11:32:16,067 - Point-BERT - INFO - config.model.dvae_config = edict()
2021-12-27 11:32:16,067 - Point-BERT - INFO - config.model.dvae_config.group_size : 32
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.model.dvae_config.num_group : 64
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.model.dvae_config.encoder_dims : 256
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.model.dvae_config.num_tokens : 8192
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.model.dvae_config.tokens_dims : 256
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.model.dvae_config.decoder_dims : 256
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.model.dvae_config.ckpt : pretrain/dVAE.pth
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.total_bs : 128
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.step_per_update : 1
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.max_epoch : 300
2021-12-27 11:32:16,068 - Point-BERT - INFO - config.consider_metric : CDL1
2021-12-27 11:32:16,068 - Point-BERT - INFO - Distributed training: False
2021-12-27 11:32:16,068 - Point-BERT - INFO - Set random seed to 0, deterministic: False
2021-12-27 11:32:16,073 - ShapeNet-55 - INFO - [DATASET] sample out 1024 points
2021-12-27 11:32:16,074 - ShapeNet-55 - INFO - [DATASET] Open file /mnt/cache/liyanjie/data/pointcloud/ShapeNet55-34/ShapeNet-55/train.txt
2021-12-27 11:32:16,101 - ShapeNet-55 - INFO - [DATASET] Open file /mnt/cache/liyanjie/data/pointcloud/ShapeNet55-34/ShapeNet-55/test.txt
2021-12-27 11:32:16,174 - ShapeNet-55 - INFO - [DATASET] 52470 instances were loaded
2021-12-27 11:32:16,191 - ModelNet - INFO - The size of test data is 2468
2021-12-27 11:32:16,192 - ModelNet - INFO - Load processed data from /mnt/cache/liyanjie/data/pointcloud/ModelNet/modelnet40_normal_resampled/modelnet40_test_8192pts_fps.dat...
2021-12-27 11:32:16,993 - ModelNet - INFO - The size of train data is 9843
2021-12-27 11:32:16,994 - ModelNet - INFO - Load processed data from /mnt/cache/liyanjie/data/pointcloud/ModelNet/modelnet40_normal_resampled/modelnet40_train_8192pts_fps.dat...
2021-12-27 11:32:19,511 - Point_BERT - INFO - [Point_BERT] build dVAE_BERT ...
2021-12-27 11:32:19,511 - Point_BERT - INFO - [Point_BERT] Point_BERT [NOT] calc the loss for all token ...
2021-12-27 11:32:19,511 - dVAE BERT - INFO - [Transformer args] {'mask_ratio': [0.25, 0.45], 'trans_dim': 384, 'depth': 12, 'drop_path_rate': 0.1, 'cls_dim': 512, 'replace_pob': 0.0, 'num_heads': 6, 'moco_loss': False, 'dvae_loss': True, 'cutmix_loss': True, 'return_all_tokens': False}
2021-12-27 11:32:22,489 - dVAE BERT - INFO - [Encoder] Successful Loading the ckpt for encoder from pretrain/dVAE.pth
2021-12-27 11:32:22,521 - dVAE BERT - INFO - [Transformer args] {'mask_ratio': [0.25, 0.45], 'trans_dim': 384, 'depth': 12, 'drop_path_rate': 0.1, 'cls_dim': 512, 'replace_pob': 0.0, 'num_heads': 6, 'moco_loss': False, 'dvae_loss': True, 'cutmix_loss': True, 'return_all_tokens': False}
2021-12-27 11:32:23,612 - Point_BERT - INFO - [dVAE] Successful Loading the ckpt for dvae from pretrain/dVAE.pth
2021-12-27 11:32:23,638 - Point_BERT - INFO - [Point_BERT Group] cutmix_BERT divide point cloud into G64 x S32 points ...
2021-12-27 11:32:34,314 - Point-BERT - INFO - [RESUME INFO] Loading model weights from ./experiments/Point-BERT/Mixup_models/pointBERT_pretrain/ckpt-last.pth...
2021-12-27 11:32:38,565 - Point-BERT - INFO - [RESUME INFO] resume ckpts @ 9 epoch (best_metrics = {'acc': 0.0})
2021-12-27 11:32:38,566 - Point-BERT - INFO - Using Data parallel ...
2021-12-27 11:32:38,591 - Point-BERT - INFO - [RESUME INFO] Loading optimizer from ./experiments/Point-BERT/Mixup_models/pointBERT_pretrain/ckpt-last.pth...
2021-12-27 11:32:39,640 - Point-BERT - INFO - [VALIDATION] Start validating epoch 10
error: Segmentation fault

Any suggestions would be deeply appreciated~

Number of GPUs

Hi,

Thanks for your great work.

Would you mind sharing how many GPUs you used for training the dVAE, for pre-training, and for fine-tuning?

The name of PCT

Congratulations on publishing such a great paper.

I am the author of PCT: Point Cloud Transformer. Thanks for citing our work. However, I noticed that you use a wrong name for our PCT: you call it PTC in Table 1. Could you correct it?

Best,

Menghao

Reproducing results with 1024 points on ModelNet40

Hi, thank you first for your great work on Point-BERT! I am trying to run your code to reproduce the results with 1024 points on ModelNet40 (93.2% acc w/ voting).

I fine-tuned Point-BERT on ModelNet40 (initialized with the provided weights in your repo, pre-trained on ShapeNetPart). I am getting 92.26% accuracy with voting. I also evaluated with your model weights on ModelNet40 and can get 93.2% accuracy. Could you please advise if there is anything I did wrong in training (fine-tuning) on ModelNet40? Thank you!

Confusion about Visualizing Point Cloud Reconstruction Results in Figure 2

One point I am confused about is the visualization of the point cloud reconstruction results in Figure 2 of the paper.

I want to know how the input masked point cloud and the final reconstructed point cloud are visualized. Since the mask operation is on tokens, and the MaskTransformer outputs a cls-head and logits, what should I do to visualize the reconstructed point cloud of Figure 2 in the paper?

Finally, I also want to know whether, after pre-training, Point-BERT can predict the complete point cloud of a model from a single-viewpoint point cloud input.

Pre-training and Fine-tuning times

Hey,
Thanks for publishing the code, these are indeed very impressive results!
Could you please elaborate on how long the pre-training and fine-tuning took, and on what type of hardware?

Thanks!

Questions about the number of input points

Hi, I see that the default numbers of input points are 1024, 2048, 4096 and 8192. When the original data has more points, is it possible to use more input points, such as 16384 or 32768? Should the corresponding num_group and group_size also be increased?

What is the difference between point cloud completion task and point cloud masked reconstruction?

First of all, thank you for your outstanding work!

The pretext task used in your pre-training phase is point cloud masked reconstruction.

I wonder what the difference is between the point cloud completion task and masked point cloud reconstruction. For example, Point-BERT (pre-training phase) vs. PoinTr. Although the implementation details may differ, what are the essential differences between the tasks?

Question about the position embeddings

Thanks for the great work! I have a question about the usage of position embeddings in the transformer encoder.
[screenshot of the encoder forward pass]
As shown in the screenshot, the position embeddings are added at every layer. I am curious why the position embeddings are not added only in the first layer. Thanks.
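
The two variants being contrasted, as a minimal sketch (x, pos and blocks stand for the token features, position embeddings and encoder blocks):

# position embeddings re-added before every block, as in the screenshot:
for blk in blocks:
    x = blk(x + pos)

# versus adding them once at the input:
x = x + pos
for blk in blocks:
    x = blk(x)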

Zhimin Chen

About the ShapeNet55/34 Dataset

Hi, thank you for the great work!

I downloaded the ShapeNet55/34 dataset from the BaiduCloud link, but I did not find these files (ShapeNet-34 train.txt / test.txt) when I unzipped it. Is there a problem?

How to install chamfer?

"ModuleNotFoundError: No module named 'chamfer'." I tried to install chamfer with pip install chamfer, but pip fails (screenshot omitted).
So, how should I install the 'chamfer' module?
Thank you very much!
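
Note that chamfer is a local CUDA extension bundled with this repo rather than a PyPI package, so it has to be built from source; roughly what install.sh does is:

cd extensions/chamfer_dist
python setup.py install --user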

pip can't install open3d==0.9 due to Python version

Collecting h5py
Downloading h5py-3.6.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (4.5 MB)
|████████████████████████████████| 4.5 MB 67.1 MB/s
Collecting matplotlib
Downloading matplotlib-3.5.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl (11.3 MB)
|████████████████████████████████| 11.3 MB 34.4 MB/s
Collecting numpy
Downloading numpy-1.22.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.8 MB)
|████████████████████████████████| 16.8 MB 131.3 MB/s
ERROR: Could not find a version that satisfies the requirement open3d==0.9 (from versions: 0.10.0.0, 0.11.0, 0.11.1, 0.11.2, 0.12.0, 0.13.0, 0.14.1)

ERROR: No matching distribution found for open3d==0.9

I used conda to create a virtual environment based on Python 3.8, but it could not install open3d==0.9.
Then I changed the Python version of the virtual environment to 3.7, and the installation succeeded.

Training questions about dVAE

What do the three metrics under "Metrics" mean when training the dVAE? How can I judge whether the trained dVAE is good or bad according to these three indicators?
[training metrics screenshot]

A problem about installing extensions

Hello! When installing the extensions, an error occurs.
Versions: gcc=5.4, cuda=11.1, python=3.7, pytorch=1.8.0, setuptools=62.1.0
I can't import the chamfer package. Was it installed incompletely? I don't know where the problem comes from.
Thanks for your help.
1. chamfer install logs

running install
running bdist_egg
running egg_info
creating chamfer.egg-info
writing chamfer.egg-info/PKG-INFO
writing dependency_links to chamfer.egg-info/dependency_links.txt
writing top-level names to chamfer.egg-info/top_level.txt
writing manifest file 'chamfer.egg-info/SOURCES.txt'
reading manifest file 'chamfer.egg-info/SOURCES.txt'
writing manifest file 'chamfer.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'chamfer' extension
creating /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build
creating /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37
Emitting ninja build file /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  setuptools.SetuptoolsDeprecationWarning,
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/setuptools/command/easy_install.py:147: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  EasyInstallDeprecationWarning,
[1/2] c++ -MMD -MF /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37/chamfer_cuda.o.d -pthread -B /HOME/scz1454/.conda/envs/hjc_pytorch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/TH -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/THC -I/data/apps/cuda/11.1/include -I/HOME/scz1454/.conda/envs/hjc_pytorch/include/python3.7m -c -c /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/chamfer_cuda.cpp -o /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37/chamfer_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/Parallel.h:140:0,
                 from /HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
                 from /HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
                 from /HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
                 from /HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
                 from /HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/torch/extension.h:4,
                 from /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/chamfer_cuda.cpp:9:
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/ParallelOpenMP.h:83:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
 #pragma omp parallel for if ((end - begin) >= grain_size)
 ^
[2/2] /data/apps/cuda/11.1/bin/nvcc --generate-dependencies-with-compile --dependency-output /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37/chamfer.o.d -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/TH -I/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/THC -I/data/apps/cuda/11.1/include -I/HOME/scz1454/.conda/envs/hjc_pytorch/include/python3.7m -c -c /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/chamfer.cu -o /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37/chamfer.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
creating build/lib.linux-x86_64-cpython-37
g++ -pthread -B /HOME/scz1454/.conda/envs/hjc_pytorch/compiler_compat -Wl,--sysroot=/ -pthread -shared -B /HOME/scz1454/.conda/envs/hjc_pytorch/compiler_compat -L/HOME/scz1454/.conda/envs/hjc_pytorch/lib -Wl,-rpath=/HOME/scz1454/.conda/envs/hjc_pytorch/lib -Wl,--no-as-needed -Wl,--sysroot=/ /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37/chamfer.o /data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/chamfer_dist/build/temp.linux-x86_64-cpython-37/chamfer_cuda.o -L/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/lib -L/data/apps/cuda/11.1/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda_cu -ltorch_cuda_cpp -o build/lib.linux-x86_64-cpython-37/chamfer.cpython-37m-x86_64-linux-gnu.so
creating build/bdist.linux-x86_64
creating build/bdist.linux-x86_64/egg
copying build/lib.linux-x86_64-cpython-37/chamfer.cpython-37m-x86_64-linux-gnu.so -> build/bdist.linux-x86_64/egg
creating stub loader for chamfer.cpython-37m-x86_64-linux-gnu.so
byte-compiling build/bdist.linux-x86_64/egg/chamfer.py to chamfer.cpython-37.pyc
creating build/bdist.linux-x86_64/egg/EGG-INFO
copying chamfer.egg-info/PKG-INFO -> build/bdist.linux-x86_64/egg/EGG-INFO
copying chamfer.egg-info/SOURCES.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying chamfer.egg-info/dependency_links.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
copying chamfer.egg-info/top_level.txt -> build/bdist.linux-x86_64/egg/EGG-INFO
writing build/bdist.linux-x86_64/egg/EGG-INFO/native_libs.txt
zip_safe flag not set; analyzing archive contents...
__pycache__.chamfer.cpython-37: module references __file__
creating dist
creating 'dist/chamfer-2.0.0-py3.7-linux-x86_64.egg' and adding 'build/bdist.linux-x86_64/egg' to it
removing 'build/bdist.linux-x86_64/egg' (and everything under it)
Processing chamfer-2.0.0-py3.7-linux-x86_64.egg
creating /data/run01/scz1454/hongjiacheng/Point-BERT-master/.local/lib/python3.7/site-packages/chamfer-2.0.0-py3.7-linux-x86_64.egg
Extracting chamfer-2.0.0-py3.7-linux-x86_64.egg to /data/run01/scz1454/hongjiacheng/Point-BERT-master/.local/lib/python3.7/site-packages
Adding chamfer 2.0.0 to easy-install.pth file

Installed /data/run01/scz1454/hongjiacheng/Point-BERT-master/.local/lib/python3.7/site-packages/chamfer-2.0.0-py3.7-linux-x86_64.egg
Processing dependencies for chamfer==2.0.0
Finished processing dependencies for chamfer==2.0.0

2. emd install logs

linux-x86_64-cpython-37/cuda/emd_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -O2 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=emd_cuda -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -std=c++14
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu: In function ‘at::Tensor ApproxMatchForward(at::Tensor, at::Tensor)’:
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:181:16: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   CHECK_INPUT(xyz1);
                ^
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:303:1: note: declared here
   DeprecatedTypeProperties & type() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:182:16: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   CHECK_INPUT(xyz2);
                ^
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:303:1: note: declared here
   DeprecatedTypeProperties & type() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:184:45: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   auto match = at::zeros({b, m, n}, xyz1.type());
                                             ^
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:303:1: note: declared here
   DeprecatedTypeProperties & type() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:185:53: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:303:1: note: declared here
   DeprecatedTypeProperties & type() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu: In lambda function:
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:187:885: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:395:1: note: declared here
   T * data() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:187:910: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:395:1: note: declared here
   T * data() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:187:936: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:395:1: note: declared here
   T * data() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:187:961: warning: ‘T* at::Tensor::data() const [with T = double]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:395:1: note: declared here
   T * data() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu: In lambda function:
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:187:1730: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:395:1: note: declared here
   T * data() const {
 ^
/data/run01/scz1454/hongjiacheng/Point-BERT-master/extensions/emd/cuda/emd_kernel.cu:187:1754: warning: ‘T* at::Tensor::data() const [with T = float]’ is deprecated: Tensor.data<T>() is deprecated. Please use Tensor.data_ptr<T>() instead. [-Wdeprecated-declarations]
/HOME/scz1454/.conda/envs/hjc_pytorch/lib/python3.7/site-packages/torch/include/ATen/core/TensorBody.h:395:1: note: declared here
   T * data() const {
 ^

Confusion about the Point Tokenizer

@yuxumin @lulutang0608 @raoyongming Thanks for sharing the paper and code.

One point I am confused with is the Point Tokenizer in the framework.

According to the paper and my understanding, farthest point sampling (FPS) produces g centers, after which kNN is used to form g groups; you then adopt the mini-PointNet to extract features of these g groups, and the output can be treated as an input sequence for a standard Transformer.

Immediately after that, however, you give a small section on the Point Tokenizer, which is actually a DGCNN. My question is: what is the utility of the Point Tokenizer, and why do the embeddings need to be tokenized, given that the input sequences have already been created and the inner embeddings are inherently separated by FPS according to the previous Point Embeddings?

Is Point Tokenizer necessary in the framework?

error when bash install

Hi all,
I meet a problem when running python setup.py install --user.
It turns out:

running install
/media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/anaconda3$/envs/MaskTop/lib/python3.7/site-packages/setuptools/command/install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
/media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/anaconda3$/envs/MaskTop/lib/python3.7/site-packages/setuptools/command/easy_install.py:147: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
EasyInstallDeprecationWarning,
running bdist_egg
running egg_info
creating emd_ext.egg-info
writing emd_ext.egg-info/PKG-INFO
writing dependency_links to emd_ext.egg-info/dependency_links.txt
writing top-level names to emd_ext.egg-info/top_level.txt
writing manifest file 'emd_ext.egg-info/SOURCES.txt'
reading manifest file 'emd_ext.egg-info/SOURCES.txt'
writing manifest file 'emd_ext.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_ext
building 'emd_cuda' extension
creating /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build
creating /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build/temp.linux-x86_64-3.7
creating /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build/temp.linux-x86_64-3.7/cuda
Emitting ninja build file /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build/temp.linux-x86_64-3.7/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
1.10.2
creating build/lib.linux-x86_64-3.7
g++ -pthread -B /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/anaconda3$/envs/MaskTop/compiler_compat -Wl,--sysroot=/ -pthread -shared -B /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/anaconda3$/envs/MaskTop/compiler_compat -L/media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/anaconda3$/envs/MaskTop/lib -Wl,-rpath=/media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/anaconda3$/envs/MaskTop/lib -Wl,--no-as-needed -Wl,--sysroot=/ /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build/temp.linux-x86_64-3.7/cuda/emd.o /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build/temp.linux-x86_64-3.7/cuda/emd_kernel.o -L/media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/anaconda3$/envs/MaskTop/lib/python3.7/site-packages/torch/lib -L/usr/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda -o build/lib.linux-x86_64-3.7/emd_cuda.cpython-37m-x86_64-linux-gnu.so
g++: error: /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build/temp.linux-x86_64-3.7/cuda/emd.o: No such file or directory
g++: error: /media/ly/1dc08886-9b59-45c8-809c-468f28b98ce1/Point-BERT/extensions/emd/build/temp.linux-x86_64-3.7/cuda/emd_kernel.o: No such file or directory
error: command '/usr/bin/g++' failed with exit code 1

I want to know if it works for my situation:
NVCC V10.1.243
gcc 9.4.0
pytorch 1.7.1 with cuda 10.1

A question regarding the voting code

Hi,

Thank you for releasing the code.

I referred to the RS-CNN voting code and have a question regarding your voting code:

for time in range(1, 10):
    this_acc = test_vote(base_model, test_dataloader, 1, None, args, config, logger=logger, times=time)
    if acc < this_acc:
        acc = this_acc
From my understanding, for time in range(1, 10): means repeating the voting. Hence, time should not be passed into test_vote(base_model, test_dataloader, 1, None, args, config, logger=logger, times=time). Am I correct, or did I miss something?

Looking forward to your reply. Best regards!

Questions about Point Patch Mixing

Dear authors, thank you for the great work! I have some questions about Point Patch Mixing (PPM) introduced in your paper.

(1) Is PPM implemented with _mixup_pc(self, neighborhood, center, dvae_label) in the code?

(2) In the specific implementation, is the "virtual sample" in the paper generated by flipping the batch, i.e., the i-th sample is mixed with the (n-i)-th sample, where n is the batch size?

Thank you!
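
For concreteness, the flip pairing asked about in (2) can be illustrated in two lines (purely illustrative, assuming neighborhood and center are batched tensors with the batch as the first dimension):

flipped_neighborhood = neighborhood.flip(0)  # pairs sample i with sample n-1-i
flipped_center = center.flip(0)              # centers flipped the same way for mixing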

Could not reproduce results on ModelNet40

Dear author,
Thanks for sharing! I am trying to reproduce the results with the given checkpoints (Tsinghua Cloud). Here are my results:

Dataset | Best Acc | Best Acc (vote)
------- | -------- | ---------------
ModelNet40 (1k points) | 92.38 / 92.26 | 92.71 / 92.66
ModelNet40 (4k points) | 92.99 | 93.11
ModelNet40 (8k points) | 93.31 | 93.59

For ModelNet40 (1k points), I have run it twice and both results are lower than the numbers you report (Acc=92.67, Vote Acc=93.24). Here are my questions:

  1. I am using PyTorch 1.7 with CUDA 11.0 on a single A100 GPU. For other packages, I followed your instructions and installed them accordingly. I could obtain similar or slightly higher performance on ScanObjectNN. Could you successfully reproduce the performance on ModelNet40 (1k) and ModelNet40 (4k)?
  2. I find that a checkpoint with the best accuracy doesn't necessarily have the best vote accuracy. Are these two metrics (Acc & Vote Acc) selected from different checkpoints?

Thanks and hoping for your reply!

Pretrain the Point-BERT model on ModelNet

Hello,
I want to pre-train the Point-BERT model on ModelNet40. I trained ModelNet's dVAE model and configured the .pkl weights in Point-BERT.yaml. When I run it, I get the following error; is this a problem with the weights file or with the distributed training? I am training with a single GPU on a machine with pytorch 1.7.1, python 3.7, cuda 10.2.89 and cudnn 7.6.5.

(pointbert) ws666@ws666-OMEN-by-HP-Laptop-15-dc1xxx:~/Point-BERT-master$ bash ./scripts/dist_train_BERT.sh 1 0 --config cfgs/Mixup_models/Point-BERT.yaml --exp_name pointBERT_pretrain

  • NGPUS=1
  • PORT=0
  • PY_ARGS='--config cfgs/Mixup_models/Point-BERT.yaml --exp_name pointBERT_pretrain'
  • python -m torch.distributed.launch --master_port=0 --nproc_per_node=1 main_BERT.py --launcher pytorch --sync_bn --config cfgs/Mixup_models/Point-BERT.yaml --exp_name pointBERT_pretrain
    init distributed in rank 0
    2022-05-08 13:49:09,932 - Point-BERT - INFO - Copy the Config file from cfgs/Mixup_models/Point-BERT.yaml to ./experiments/Point-BERT/Mixup_models/pointBERT_pretrain/config.yaml
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.config : cfgs/Mixup_models/Point-BERT.yaml
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.launcher : pytorch
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.local_rank : 0
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.num_workers : 4
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.seed : 0
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.deterministic : False
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.sync_bn : True
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.exp_name : pointBERT_pretrain
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.start_ckpts : None
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.ckpts : None
    2022-05-08 13:49:09,933 - Point-BERT - INFO - args.val_freq : 1
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.resume : False
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.test : False
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.finetune_model : False
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.scratch_model : False
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.label_smoothing : False
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.mode : None
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.way : -1
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.shot : -1
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.fold : -1
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.experiment_path : ./experiments/Point-BERT/Mixup_models/pointBERT_pretrain
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.tfboard_path : ./experiments/Point-BERT/Mixup_models/TFBoard/pointBERT_pretrain
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.log_name : Point-BERT
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.use_gpu : True
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.distributed : True
    2022-05-08 13:49:09,934 - Point-BERT - INFO - args.world_size : 1
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.optimizer = edict()
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.optimizer.type : AdamW
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.optimizer.kwargs = edict()
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.optimizer.kwargs.lr : 0.0005
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.optimizer.kwargs.weight_decay : 0.05
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.scheduler = edict()
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.scheduler.type : CosLR
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.scheduler.kwargs = edict()
    2022-05-08 13:49:09,934 - Point-BERT - INFO - config.scheduler.kwargs.epochs : 300
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.scheduler.kwargs.initial_epochs : 3
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset = edict()
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train = edict()
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.base = edict()
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.base.NAME : ModelNet
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.base.DATA_PATH : data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.base.N_POINTS : 1024
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.base.NUM_CATEGORY : 2
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.base.USE_NORMALS : False
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.others = edict()
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.others.subset : train
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.others.npoints : 1024
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.others.whole : True
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.train.others.bs : 128
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val = edict()
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.base = edict()
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.base.NAME : ModelNet
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.base.DATA_PATH : data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.base.N_POINTS : 1024
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.base.NUM_CATEGORY : 2
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.base.USE_NORMALS : False
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.others = edict()
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.others.subset : test
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.val.others.bs : 256
    2022-05-08 13:49:09,935 - Point-BERT - INFO - config.dataset.extra_train = edict()
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.base = edict()
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.base.NAME : ModelNet
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.base.DATA_PATH : data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.base.N_POINTS : 1024
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.base.NUM_CATEGORY : 2
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.base.USE_NORMALS : False
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.others = edict()
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.others.subset : train
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.dataset.extra_train.others.bs : 256
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model = edict()
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.NAME : Point_BERT
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.m : 0.999
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.T : 0.07
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.K : 16384
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config = edict()
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.mask_ratio : [0.25, 0.45]
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.trans_dim : 384
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.depth : 12
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.drop_path_rate : 0.1
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.cls_dim : 512
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.replace_pob : 0.0
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.num_heads : 6
    2022-05-08 13:49:09,936 - Point-BERT - INFO - config.model.transformer_config.moco_loss : False
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.transformer_config.dvae_loss : True
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.transformer_config.cutmix_loss : True
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.transformer_config.return_all_tokens : False
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config = edict()
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config.group_size : 32
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config.num_group : 64
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config.encoder_dims : 256
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config.num_tokens : 8192
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config.tokens_dims : 256
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config.decoder_dims : 256
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.model.dvae_config.ckpt : /home/ws666/Point-BERT-master/data/pretrainmodel/mydata/ckpt-best/archive/data.pkl
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.total_bs : 128
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.step_per_update : 1
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.max_epoch : 300
    2022-05-08 13:49:09,937 - Point-BERT - INFO - config.consider_metric : CDL1
    2022-05-08 13:49:09,937 - Point-BERT - INFO - Distributed training: True
    2022-05-08 13:49:09,937 - Point-BERT - INFO - Set random seed to 0, deterministic: False
    2022-05-08 13:49:09,938 - ModelNet - INFO - The size of train data is 187
    2022-05-08 13:49:09,938 - ModelNet - INFO - Load processed data from data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled/modelnet2_train_1024pts_fps.dat...
    2022-05-08 13:49:09,941 - ModelNet - INFO - The size of test data is 21
    2022-05-08 13:49:09,941 - ModelNet - INFO - Load processed data from data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled/modelnet2_test_1024pts_fps.dat...
    2022-05-08 13:49:09,942 - ModelNet - INFO - The size of train data is 187
    2022-05-08 13:49:09,942 - ModelNet - INFO - Load processed data from data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled/modelnet2_train_1024pts_fps.dat...
    2022-05-08 13:49:09,944 - Point_BERT - INFO - [Point_BERT] build dVAE_BERT ...
    2022-05-08 13:49:09,944 - Point_BERT - INFO - [Point_BERT] Point_BERT [NOT] calc the loss for all token ...
    2022-05-08 13:49:09,944 - dVAE BERT - INFO - [Transformer args] {'mask_ratio': [0.25, 0.45], 'trans_dim': 384, 'depth': 12, 'drop_path_rate': 0.1, 'cls_dim': 512, 'replace_pob': 0.0, 'num_heads': 6, 'moco_loss': False, 'dvae_loss': True, 'cutmix_loss': True, 'return_all_tokens': False}
Traceback (most recent call last):
  File "/home/ws666/Point-BERT-master/utils/registry.py", line 285, in build_from_cfg
    return obj_cls(cfg)
  File "/home/ws666/Point-BERT-master/models/Point_BERT.py", line 438, in __init__
    self.transformer_q._prepare_encoder(self.config.dvae_config.ckpt)
  File "/home/ws666/Point-BERT-master/models/Point_BERT.py", line 302, in _prepare_encoder
    ckpt = torch.load(dvae_ckpt, map_location='cpu')
  File "/home/ws666/anaconda3/envs/pointbert/lib/python3.7/site-packages/torch/serialization.py", line 595, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/ws666/anaconda3/envs/pointbert/lib/python3.7/site-packages/torch/serialization.py", line 764, in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: A load persistent id instruction was encountered,
but no persistent_load function was specified.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main_BERT.py", line 90, in <module>
    main()
  File "main_BERT.py", line 86, in main
    pretrain(args, config, train_writer, val_writer)
  File "/home/ws666/Point-BERT-master/tools/runner_BERT_pretrain.py", line 57, in run_net
    base_model = builder.model_builder(config.model)
  File "/home/ws666/Point-BERT-master/tools/builder.py", line 34, in model_builder
    model = build_model_from_cfg(config)
  File "/home/ws666/Point-BERT-master/models/build.py", line 15, in build_model_from_cfg
    return MODELS.build(cfg, **kwargs)
  File "/home/ws666/Point-BERT-master/utils/registry.py", line 147, in build
    return self.build_func(*args, **kwargs, registry=self)
  File "/home/ws666/Point-BERT-master/utils/registry.py", line 288, in build_from_cfg
    raise type(e)(f'{obj_cls.__name__}: {e}')
_pickle.UnpicklingError: Point_BERT: A load persistent id instruction was encountered,
but no persistent_load function was specified.
Traceback (most recent call last):
  File "/home/ws666/anaconda3/envs/pointbert/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/ws666/anaconda3/envs/pointbert/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/ws666/anaconda3/envs/pointbert/lib/python3.7/site-packages/torch/distributed/launch.py", line 260, in <module>
    main()
  File "/home/ws666/anaconda3/envs/pointbert/lib/python3.7/site-packages/torch/distributed/launch.py", line 256, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ws666/anaconda3/envs/pointbert/bin/python', '-u', 'main_BERT.py', '--local_rank=0', '--launcher', 'pytorch', '--sync_bn', '--config', 'cfgs/Mixup_models/Point-BERT.yaml', '--exp_name', 'pointBERT_pretrain']' returned non-zero exit status 1.

pretrain dvae chamfer problem

Thanks a lot for your help!
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test = edict()
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test.base = edict()
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test.base.NAME : ShapeNet
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test.base.DATA_PATH : data/ShapeNet55-34/ShapeNet-55
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test.base.N_POINTS : 8192
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test.base.PC_PATH : data/ShapeNet55-34/shapenet_pc
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test.others = edict()
2022-06-21 13:48:35,473 - dvae - INFO - config.dataset.test.others.subset : test
2022-06-21 13:48:35,474 - dvae - INFO - config.dataset.test.others.npoints : 1024
2022-06-21 13:48:35,474 - dvae - INFO - config.dataset.test.others.bs : 1
2022-06-21 13:48:35,474 - dvae - INFO - config.model = edict()
2022-06-21 13:48:35,474 - dvae - INFO - config.model.NAME : DiscreteVAE
2022-06-21 13:48:35,474 - dvae - INFO - config.model.group_size : 32
2022-06-21 13:48:35,474 - dvae - INFO - config.model.num_group : 64
2022-06-21 13:48:35,474 - dvae - INFO - config.model.encoder_dims : 256
2022-06-21 13:48:35,474 - dvae - INFO - config.model.num_tokens : 8192
2022-06-21 13:48:35,474 - dvae - INFO - config.model.tokens_dims : 256
2022-06-21 13:48:35,474 - dvae - INFO - config.model.decoder_dims : 256
2022-06-21 13:48:35,474 - dvae - INFO - config.total_bs : 64
2022-06-21 13:48:35,475 - dvae - INFO - config.step_per_update : 1
2022-06-21 13:48:35,475 - dvae - INFO - config.max_epoch : 300
2022-06-21 13:48:35,475 - dvae - INFO - config.consider_metric : CDL1
2022-06-21 13:48:35,475 - dvae - INFO - Distributed training: False
2022-06-21 13:48:35,475 - dvae - INFO - Set random seed to 0, deterministic: False
2022-06-21 13:48:35,476 - ShapeNet-55 - INFO - [DATASET] sample out 1024 points
2022-06-21 13:48:35,476 - ShapeNet-55 - INFO - [DATASET] Open file data/ShapeNet55-34/ShapeNet-55/train.txt
2022-06-21 13:48:35,514 - ShapeNet-55 - INFO - [DATASET] 41952 instances were loaded
2022-06-21 13:48:35,515 - ShapeNet-55 - INFO - [DATASET] sample out 1024 points
2022-06-21 13:48:35,515 - ShapeNet-55 - INFO - [DATASET] Open file data/ShapeNet55-34/ShapeNet-55/test.txt
2022-06-21 13:48:35,522 - ShapeNet-55 - INFO - [DATASET] 10518 instances were loaded
2022-06-21 13:48:38,979 - dvae - INFO - Using Data parallel ...
Traceback (most recent call last):
  File "main.py", line 72, in <module>
    main()
  File "main.py", line 68, in main
    run_net(args, config, train_writer, val_writer)
  File "/home/ubuntu/E21201078/Point-BERT-master/tools/runner.py", line 132, in run_net
    loss_1, loss_2 = base_model.module.get_loss(ret, points)
  File "/home/ubuntu/E21201078/Point-BERT-master/models/dvae.py", line 320, in get_loss
    loss_recon = self.recon_loss(ret, gt)
  File "/home/ubuntu/E21201078/Point-BERT-master/models/dvae.py", line 310, in recon_loss
    loss_coarse_block = self.loss_func_cdl1(coarse, group_gt)
  File "/home/ubuntu/E21201078/common/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/ubuntu/E21201078/Point-BERT-master/extensions/chamfer_dist/__init__.py", line 79, in forward
    dist1, dist2 = ChamferFunction.apply(xyz1, xyz2)
  File "/home/ubuntu/E21201078/Point-BERT-master/extensions/chamfer_dist/__init__.py", line 16, in forward
    dist1, dist2, idx1, idx2 = chamfer.forward(xyz1, xyz2)
AttributeError: module 'chamfer' has no attribute 'forward'
(common) (base) ubuntu@ahu:~/E21201078/Point-BERT-master$
```
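A hedged suggestion: the missing `forward` symbol usually indicates a stale or failed build of the chamfer extension, for example one compiled against a different PyTorch version. A minimal sketch that clears the old artifacts, rebuilds via the repo's install.sh, and re-checks the symbol; it assumes it is run from the repository root:

```python
# Rebuild the Chamfer Distance extension and verify the expected symbol exists.
import shutil
import subprocess

shutil.rmtree("extensions/chamfer_dist/build", ignore_errors=True)  # drop stale objects
subprocess.run(["bash", "install.sh"], check=True)                  # rebuild the extension

import chamfer  # noqa: E402  (imported only after the rebuild)
print(hasattr(chamfer, "forward"))  # should print True after a successful build
```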

Thanks a lot for your help!

About test acc

I noticed that you used farthest point sampling in the testing phase, while others simply take the first 1024 points or pick them at random. I trained a DGCNN classification model with farthest point sampling and achieved 82.6 on the ScanObjectNN (PB-T50-RS) test set, which is much higher than the 78.1 obtained with random sampling. So which method should I use to compute the accuracy? (A sketch of the three strategies follows.)
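For reference, the three test-time sampling strategies under comparison, as a hedged sketch; it assumes `furthest_point_sample` from the pointnet2_ops package installed above and a (B, N, 3) CUDA tensor:

```python
import torch
from pointnet2_ops.pointnet2_utils import furthest_point_sample

pts = torch.randn(1, 10000, 3).cuda()                   # one cloud with 10000 points

first_1024 = pts[:, :1024]                              # strategy 1: first 1024 points
rand_idx = torch.randperm(pts.shape[1], device=pts.device)[:1024]
random_1024 = pts[:, rand_idx]                          # strategy 2: random 1024 points
fps_idx = furthest_point_sample(pts, 1024).long()       # strategy 3: FPS indices, (1, 1024)
fps_1024 = torch.gather(pts, 1, fps_idx.unsqueeze(-1).expand(-1, -1, 3))
```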

`validation` vs. `vote validation`

Hi,

Would you please explain the difference between validation acc and vote validation acc?
And which one is used in the paper's tables?
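For context, "vote validation" usually refers to test-time augmentation voting: each test cloud is classified several times under random augmentations and the logits are averaged. A minimal sketch, assuming random global scaling as the augmentation (the repo's exact scheme may differ; `model` and `points` are placeholders):

```python
import torch

def vote_predict(model, points, num_votes=10, scale_range=(0.8, 1.25)):
    """Average logits over num_votes randomly scaled copies of each cloud."""
    model.eval()
    logits_sum = 0.0
    with torch.no_grad():
        for _ in range(num_votes):
            scale = torch.empty(1, device=points.device).uniform_(*scale_range)
            logits_sum = logits_sum + model(points * scale)
    return (logits_sum / num_votes).argmax(dim=-1)
```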

An error when installing extensions

Hello, when installing chamfer_dist, an unexpected error appears.
The versions are: pytorch=1.11.0, cuda=10.2, gcc=5.5.0.

```
/usr/local/cuda/bin/nvcc -I/export2/CX2/anaconda3/envs/cross/lib/python3.7/site-packages/torch/include -I/export2/CX2/anaconda3/envs/cross/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/export2/CX2/anaconda3/envs/cross/lib/python3.7/site-packages/torch/include/TH -I/export2/CX2/anaconda3/envs/cross/lib/python3.7/site-packages/torch/include/THC -I/usr/local/cuda/include -I/export2/CX2/anaconda3/envs/cross/include/python3.7m -c chamfer.cu -o build/temp.linux-x86_64-3.7/chamfer.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_61,code=compute_61 -gencode=arch=compute_61,code=sm_61 -std=c++14
/export2/CX2/anaconda3/envs/cross/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h: In instantiation of ‘std::shared_ptr<torch::nn::Module> torch::nn::Cloneable<Derived>::clone(const c10::optional<c10::Device>&) const [with Derived = torch::nn::CrossMapLRN2dImpl]’:
/tmp/tmpxft_00009393_00000000-5_chamfer.cudafe1.stub.c:4:27:   required from here
/export2/CX2/anaconda3/envs/cross/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:57:59: error: invalid static_cast from type ‘const torch::OrderedDict<std::basic_string<char>, at::Tensor>’ to type ‘torch::OrderedDict<std::basic_string<char>, at::Tensor>&’
```
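A hedged observation: this invalid static_cast inside cloneable.h usually signals a compiler/header mismatch; the prebuilt PyTorch 1.11 wheels are compiled with a much newer GCC than 5.5, so building the extension with GCC >= 7 often resolves it. To compare the local compiler against the one PyTorch was built with:

```python
# Print the toolchain PyTorch itself was built with (look for the GCC line),
# then compare against the local `gcc --version`.
import torch

print(torch.__version__)
print(torch.__config__.show())
```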

ShapeNet-55

Why not use the original ShapeNetCore dataset directly?
And how did you generate ShapeNet-55 from ShapeNetCore?

What does _random_replace do?

Hi, thank you for your great work.
I just find it difficult to understand what _random_replace() in MaskTransformer does. Can you give me some hints? Thank you!

An issue when installing chamfer

```
/usr/bin/nvcc -I/home/ruizhe/miniconda3/envs/crzz/lib/python3.7/site-packages/torch/include -I/home/ruizhe/miniconda3/envs/crzz/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/home/ruizhe/miniconda3/envs/crzz/lib/python3.7/site-packages/torch/include/TH -I/home/ruizhe/miniconda3/envs/crzz/lib/python3.7/site-packages/torch/include/THC -I/home/ruizhe/miniconda3/envs/crzz/include/python3.7m -c chamfer.cu -o build/temp.linux-x86_64-3.7/chamfer.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=chamfer -D_GLIBCXX_USE_CXX11_ABI=0 -gencode=arch=compute_86,code=sm_86 -std=c++14
nvcc fatal : Unsupported gpu architecture 'compute_86'
error: command '/usr/bin/nvcc' failed with exit status 1
```
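A hedged note: `Unsupported gpu architecture 'compute_86'` means the system CUDA toolkit predates Ampere GPUs. Besides installing CUDA >= 11.1, one workaround is to build for an older architecture whose PTX the newer GPU can JIT-compile, using the TORCH_CUDA_ARCH_LIST variable honored by torch.utils.cpp_extension; a sketch, assuming it is run from the extension's source directory:

```python
# Rebuild the extension targeting compute_75 PTX, which an sm_86 GPU can JIT.
import os
import subprocess

env = dict(os.environ, TORCH_CUDA_ARCH_LIST="7.5+PTX")
subprocess.run(["python", "setup.py", "install", "--user"], env=env, check=True)
```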

Thanks a lot for your help!

About "vocabulary" in paper

Hello, thanks for your great work.
I have some questions about the "vocabulary" mentioned in the paper: how is it acquired? Is it learned during the dVAE-based point cloud reconstruction?
Looking forward to your answer, thanks.
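For what it's worth, in DALL-E-style dVAEs the "vocabulary" is a learnable codebook trained jointly with the reconstruction objective, which would match the second guess above. A minimal sketch under that assumption, with sizes taken from the default config (num_tokens=8192, tokens_dims=256):

```python
import torch
import torch.nn.functional as F

num_tokens, tokens_dims, num_patches = 8192, 256, 64
codebook = torch.nn.Embedding(num_tokens, tokens_dims)    # the learnable "vocabulary"

logits = torch.randn(num_patches, num_tokens)             # per-patch logits from the encoder
soft_one_hot = F.gumbel_softmax(logits, tau=1.0, dim=-1)  # differentiable token selection
patch_tokens = soft_one_hot @ codebook.weight             # (64, 256), fed to the dVAE decoder
token_ids = soft_one_hot.argmax(dim=-1)                   # discrete ids used as MPM targets
```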

Details of Part Segmentation

Hi Lulu Tang,

Thanks for your wonderful contribution. I'm a little confused about the details of part segmentation. The following line in test_partseg is used to initialize the PointTransformer, but I cannot find the corresponding parameters in the models. How can I use this line to test the model?

```
classifier = MODEL.get_model(num_part, normal_channel=args.normal).cuda()
```

When I follow your config to train the Point Transformer, I only get about 70+ mIoU on the test set. How can I improve the performance to match your results in the paper? Thanks a lot.

ninja: build stopped: subcommand failed.

```
+ GPUS='0'
+ PY_ARGS='--config cfgs/ModelNet_models/PointTransformer.yaml\ --finetune_model\ --ckpts checkpoints\ --exp_name modelnet40_1024_train'
+ CUDA_VISIBLE_DEVICES='0'
+ python main_BERT.py --config 'cfgs/ModelNet_models/PointTransformer.yaml' '--finetune_model' --ckpts 'checkpoints' --exp_name modelnet40_1024_train
Traceback (most recent call last):
  File "C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\torch\utils\cpp_extension.py", line 1723, in _run_ninja_build
    env=env)
  File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "main_BERT.py", line 1, in <module>
    from tools import BERT_pretrain_run_net as pretrain
  File "E:\E21201078\Point-BERT-master\tools\__init__.py", line 1, in <module>
    from .runner import run_net
  File "E:\E21201078\Point-BERT-master\tools\runner.py", line 5, in <module>
    from tools import builder
  File "E:\E21201078\Point-BERT-master\tools\builder.py", line 8, in <module>
    from models import build_model_from_cfg
  File "E:\E21201078\Point-BERT-master\models\__init__.py", line 2, in <module>
    import models.dvae
  File "E:\E21201078\Point-BERT-master\models\dvae.py", line 4, in <module>
    from knn_cuda import KNN
  File "C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\knn_cuda\__init__.py", line 38, in <module>
    knn = load_cpp_ext("knn")
  File "C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\knn_cuda\__init__.py", line 33, in load_cpp_ext
    with_cuda=True
  File "C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\torch\utils\cpp_extension.py", line 1136, in load
    keep_intermediates=keep_intermediates)
  File "C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\torch\utils\cpp_extension.py", line 1347, in _jit_compile
    is_standalone=is_standalone)
  File "C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\torch\utils\cpp_extension.py", line 1452, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\torch\utils\cpp_extension.py", line 1733, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'knn': [1/1] "D:\VisualStudio\Buildtools\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64/link.exe" knn.o knn.cuda.o /nologo /DLL c10.lib c10_cuda.lib torch_cpu.lib torch_cuda_cu.lib -INCLUDE:?searchsorted_cuda@native@at@@YA?AVTensor@2@AEBV32@0_N1@Z torch_cuda_cpp.lib -INCLUDE:?warp_size@cuda@at@@YAHXZ torch.lib /LIBPATH:C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\torch\lib torch_python.lib /LIBPATH:C:\Users\Administrator\Desktop\SegTHOR\Scripts\libs "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\lib/x64" cudart.lib /out:knn.pyd
FAILED: knn.pyd
"D:\VisualStudio\Buildtools\VC\Tools\MSVC\14.29.30133\bin\Hostx64\x64/link.exe" knn.o knn.cuda.o /nologo /DLL c10.lib c10_cuda.lib torch_cpu.lib torch_cuda_cu.lib -INCLUDE:?searchsorted_cuda@native@at@@YA?AVTensor@2@AEBV32@0_N1@Z torch_cuda_cpp.lib -INCLUDE:?warp_size@cuda@at@@YAHXZ torch.lib /LIBPATH:C:\Users\Administrator\Desktop\SegTHOR\lib\site-packages\torch\lib torch_python.lib /LIBPATH:C:\Users\Administrator\Desktop\SegTHOR\Scripts\libs "/LIBPATH:C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.3\lib/x64" cudart.lib /out:knn.pyd
LINK : fatal error LNK1104: cannot open file "python37.lib"
ninja: build stopped: subcommand failed.
```
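A hedged guess at the final link failure (LNK1104, translated above as "cannot open file python37.lib"): the linker searches the environment's Scripts\libs folder, which may not contain the Python import library. Copying it from the base CPython installation is a known workaround on Windows:

```python
# Copy python37.lib from the base CPython install into the path the linker searches.
# Paths are derived from the running interpreter; adjust if your layout differs.
import os
import shutil
import sys

src = os.path.join(sys.base_prefix, "libs", "python37.lib")  # base CPython install
dst = os.path.join(sys.prefix, "Scripts", "libs")            # path seen in the link command
os.makedirs(dst, exist_ok=True)
shutil.copy(src, dst)
print("copied", src, "->", dst)
```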

The size of the learnable vocabulary

When training the dVAE, the default is num_tokens=8192. Does the size of the vocabulary affect the dVAE's training results? Each sample in ModelNet40 contains 10000 points; if I want to train on a dataset containing more points, say 40000 or more, should num_tokens become correspondingly larger?

Confusion about 1024 points, but there are 64 patches and each one contains 32 points

@yuxumin @lulutang0608 Thanks for sharing the paper and code.

One point I am confused about: in section 4.1 Pre-training setups, you claim "We sample 1024 points from each 3D model and divide them into 64 point patches (sub-clouds). Each sub-cloud contains 32 points".

It is confusing in 3 ways:

  1. 64 * 32 > 1024, which deviates from the description in the paper, so I have a second understanding as follows.
  2. For 1024 points in 64 patches, there are overlapping points across different patches; if not, I have a third understanding as follows.
  3. In ShapeNet, each object is a CAD model that can be sampled to produce a holistic point set P, which contains many points (>1024 at least).
    Then FPS is applied to P to get 1024 points, denoted as Q.
    Among Q, you apply FPS again to get 64 point centers, denoted as C.
    After that, in the search space P, you search the k (= 32) nearest neighbors of each center point in C.

Am I right? If the third way is right, why not FPS 64 centers in P directly and then find their k nearest neighbors to produce 64 local patches? (A sketch follows below.)
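For reference, a hedged sketch of the grouping most point-Transformer codebases use: centers are chosen by FPS from the same 1024-point cloud Q, and kNN then gathers neighbors within Q itself, so patches are expected to overlap and 64 * 32 > 1024 is not a contradiction. Random selection stands in for FPS here for brevity:

```python
import torch

def group_points(q, num_group=64, group_size=32):
    """q: (N, 3) cloud already downsampled to N = 1024 points."""
    centers = q[torch.randperm(q.shape[0])[:num_group]]  # stand-in for FPS centers
    dist = torch.cdist(centers, q)                       # (num_group, N) pairwise distances
    idx = dist.topk(group_size, largest=False).indices   # kNN indices within q itself
    return q[idx]                                        # (num_group, group_size, 3)

patches = group_points(torch.randn(1024, 3))
print(patches.shape)  # torch.Size([64, 32, 3]); a point can appear in several patches
```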

Pre-training dVAE on ModelNet40

Hello,
I tried to use ModelNet40 for the pre-training of dVAE. I changed the dataset_name in runner.py to ModelNet40 and modified the code as shown in the screenshot below. In addition, I also configured the dvae.yaml file for ModelNet40. But when I start training I get "KeyError: 'ModelNet'". I don't know why this problem occurs. Is it related to the json file? Or do you have any good suggestions? Thanks.

(Screenshot: 2022-05-07 18-57-38)

```
(pointbert) ws666@ws666-OMEN-by-HP-Laptop-15-dc1xxx:~/Point-BERT-master$ bash scripts/train.sh 0 --config cfgs/ModelNet_models/dvae.yaml --exp_name modelnetDvae

+ GPUS=0
+ PY_ARGS='--config cfgs/ModelNet_models/dvae.yaml --exp_name modelnetDvae'
+ CUDA_VISIBLE_DEVICES=0
+ python main.py --config cfgs/ModelNet_models/dvae.yaml --exp_name modelnetDvae
    2022-05-07 18:53:55,248 - dvae - INFO - Copy the Config file from cfgs/ModelNet_models/dvae.yaml to ./experiments/dvae/ModelNet_models/modelnetDvae/config.yaml
    2022-05-07 18:53:55,248 - dvae - INFO - args.config : cfgs/ModelNet_models/dvae.yaml
    2022-05-07 18:53:55,248 - dvae - INFO - args.launcher : none
    2022-05-07 18:53:55,248 - dvae - INFO - args.local_rank : 0
    2022-05-07 18:53:55,249 - dvae - INFO - args.num_workers : 4
    2022-05-07 18:53:55,249 - dvae - INFO - args.seed : 0
    2022-05-07 18:53:55,249 - dvae - INFO - args.deterministic : False
    2022-05-07 18:53:55,249 - dvae - INFO - args.sync_bn : False
    2022-05-07 18:53:55,249 - dvae - INFO - args.exp_name : modelnetDvae
    2022-05-07 18:53:55,249 - dvae - INFO - args.start_ckpts : None
    2022-05-07 18:53:55,249 - dvae - INFO - args.ckpts : None
    2022-05-07 18:53:55,249 - dvae - INFO - args.val_freq : 1
    2022-05-07 18:53:55,249 - dvae - INFO - args.resume : False
    2022-05-07 18:53:55,249 - dvae - INFO - args.test : False
    2022-05-07 18:53:55,249 - dvae - INFO - args.finetune_model : False
    2022-05-07 18:53:55,249 - dvae - INFO - args.scratch_model : False
    2022-05-07 18:53:55,249 - dvae - INFO - args.label_smoothing : False
    2022-05-07 18:53:55,249 - dvae - INFO - args.mode : None
    2022-05-07 18:53:55,249 - dvae - INFO - args.way : -1
    2022-05-07 18:53:55,249 - dvae - INFO - args.shot : -1
    2022-05-07 18:53:55,249 - dvae - INFO - args.fold : -1
    2022-05-07 18:53:55,249 - dvae - INFO - args.experiment_path : ./experiments/dvae/ModelNet_models/modelnetDvae
    2022-05-07 18:53:55,249 - dvae - INFO - args.tfboard_path : ./experiments/dvae/ModelNet_models/TFBoard/modelnetDvae
    2022-05-07 18:53:55,249 - dvae - INFO - args.log_name : dvae
    2022-05-07 18:53:55,249 - dvae - INFO - args.use_gpu : True
    2022-05-07 18:53:55,249 - dvae - INFO - args.distributed : False
    2022-05-07 18:53:55,249 - dvae - INFO - config.optimizer = edict()
    2022-05-07 18:53:55,249 - dvae - INFO - config.optimizer.type : AdamW
    2022-05-07 18:53:55,249 - dvae - INFO - config.optimizer.kwargs = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.optimizer.kwargs.lr : 0.0005
    2022-05-07 18:53:55,250 - dvae - INFO - config.optimizer.kwargs.weight_decay : 0.0005
    2022-05-07 18:53:55,250 - dvae - INFO - config.scheduler = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.scheduler.type : CosLR
    2022-05-07 18:53:55,250 - dvae - INFO - config.scheduler.kwargs = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.scheduler.kwargs.epochs : 300
    2022-05-07 18:53:55,250 - dvae - INFO - config.scheduler.kwargs.initial_epochs : 10
    2022-05-07 18:53:55,250 - dvae - INFO - config.scheduler.kwargs.warming_up_init_lr : 5e-05
    2022-05-07 18:53:55,250 - dvae - INFO - config.temp = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.temp.start : 1
    2022-05-07 18:53:55,250 - dvae - INFO - config.temp.target : 0.0625
    2022-05-07 18:53:55,250 - dvae - INFO - config.temp.ntime : 100000
    2022-05-07 18:53:55,250 - dvae - INFO - config.kldweight = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.kldweight.start : 0
    2022-05-07 18:53:55,250 - dvae - INFO - config.kldweight.target : 0.1
    2022-05-07 18:53:55,250 - dvae - INFO - config.kldweight.ntime : 100000
    2022-05-07 18:53:55,250 - dvae - INFO - config.dataset = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.dataset.train = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.dataset.train.base = edict()
    2022-05-07 18:53:55,250 - dvae - INFO - config.dataset.train.base.NAME : ModelNet
    2022-05-07 18:53:55,250 - dvae - INFO - config.dataset.train.base.DATA_PATH : data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled
    2022-05-07 18:53:55,250 - dvae - INFO - config.dataset.train.base.N_POINTS : 1024
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.train.base.NUM_CATEGORY : 2
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.train.base.USE_NORMALS : False
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.train.others = edict()
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.train.others.subset : train
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.train.others.npoints : 1024
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.train.others.bs : 2
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val = edict()
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.base = edict()
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.base.NAME : ModelNet
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.base.DATA_PATH : data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.base.N_POINTS : 1024
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.base.NUM_CATEGORY : 2
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.base.USE_NORMALS : False
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.others = edict()
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.others.subset : test
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.others.npoints : 1024
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.val.others.bs : 1
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.test = edict()
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.test.base = edict()
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.test.base.NAME : ModelNet
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.test.base.DATA_PATH : data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.test.base.N_POINTS : 1024
    2022-05-07 18:53:55,251 - dvae - INFO - config.dataset.test.base.NUM_CATEGORY : 2
    2022-05-07 18:53:55,252 - dvae - INFO - config.dataset.test.base.USE_NORMALS : False
    2022-05-07 18:53:55,252 - dvae - INFO - config.dataset.test.others = edict()
    2022-05-07 18:53:55,252 - dvae - INFO - config.dataset.test.others.subset : test
    2022-05-07 18:53:55,252 - dvae - INFO - config.dataset.test.others.npoints : 1024
    2022-05-07 18:53:55,252 - dvae - INFO - config.dataset.test.others.bs : 1
    2022-05-07 18:53:55,252 - dvae - INFO - config.model = edict()
    2022-05-07 18:53:55,252 - dvae - INFO - config.model.NAME : DiscreteVAE
    2022-05-07 18:53:55,252 - dvae - INFO - config.model.group_size : 32
    2022-05-07 18:53:55,252 - dvae - INFO - config.model.num_group : 64
    2022-05-07 18:53:55,252 - dvae - INFO - config.model.encoder_dims : 256
    2022-05-07 18:53:55,252 - dvae - INFO - config.model.num_tokens : 8192
    2022-05-07 18:53:55,252 - dvae - INFO - config.model.tokens_dims : 256
    2022-05-07 18:53:55,252 - dvae - INFO - config.model.decoder_dims : 256
    2022-05-07 18:53:55,252 - dvae - INFO - config.total_bs : 2
    2022-05-07 18:53:55,252 - dvae - INFO - config.step_per_update : 1
    2022-05-07 18:53:55,252 - dvae - INFO - config.max_epoch : 300
    2022-05-07 18:53:55,252 - dvae - INFO - config.consider_metric : CDL1
    2022-05-07 18:53:55,252 - dvae - INFO - Distributed training: False
    2022-05-07 18:53:55,252 - dvae - INFO - Set random seed to 0, deterministic: False
    2022-05-07 18:53:55,253 - ModelNet - INFO - The size of train data is 187
    2022-05-07 18:53:55,253 - ModelNet - INFO - Load processed data from data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled/modelnet2_train_1024pts_fps.dat...
    2022-05-07 18:53:55,256 - ModelNet - INFO - The size of test data is 21
    2022-05-07 18:53:55,256 - ModelNet - INFO - Load processed data from data/ModelNet/modelnet40_normal_resampled/modelnet40_normal_resampled/modelnet2_test_1024pts_fps.dat...
    2022-05-07 18:53:57,049 - dvae - INFO - Using Data parallel ...
    2022-05-07 18:53:58,312 - dvae - INFO - [Epoch 0/300][Batch 1/93] BatchTime = 1.260 (s) DataTime = 0.051 (s) Losses = ['656.8937', '214.9319'] lr = 0.000001
    2022-05-07 18:53:59,702 - dvae - INFO - [Epoch 0/300][Batch 21/93] BatchTime = 0.069 (s) DataTime = 0.002 (s) Losses = ['604.8466', '216.9617'] lr = 0.000001
    2022-05-07 18:54:01,093 - dvae - INFO - [Epoch 0/300][Batch 41/93] BatchTime = 0.070 (s) DataTime = 0.001 (s) Losses = ['578.2596', '215.6334'] lr = 0.000001
    2022-05-07 18:54:02,482 - dvae - INFO - [Epoch 0/300][Batch 61/93] BatchTime = 0.069 (s) DataTime = 0.002 (s) Losses = ['596.0265', '214.1100'] lr = 0.000001
    2022-05-07 18:54:03,868 - dvae - INFO - [Epoch 0/300][Batch 81/93] BatchTime = 0.069 (s) DataTime = 0.002 (s) Losses = ['548.0435', '219.2712'] lr = 0.000001
    2022-05-07 18:54:04,724 - dvae - INFO - [Training] EPOCH: 0 EpochTime = 7.672 (s) Losses = ['580.5861', '216.5118']
    2022-05-07 18:54:07,406 - dvae - INFO - Save checkpoint at ./experiments/dvae/ModelNet_models/modelnetDvae/ckpt-last.pth
    2022-05-07 18:54:07,562 - dvae - INFO - [Epoch 1/300][Batch 1/93] BatchTime = 0.154 (s) DataTime = 0.079 (s) Losses = ['549.7282', '215.7364'] lr = 0.000001
    2022-05-07 18:54:08,985 - dvae - INFO - [Epoch 1/300][Batch 21/93] BatchTime = 0.071 (s) DataTime = 0.001 (s) Losses = ['511.4938', '217.7582'] lr = 0.000001
    2022-05-07 18:54:10,391 - dvae - INFO - [Epoch 1/300][Batch 41/93] BatchTime = 0.069 (s) DataTime = 0.002 (s) Losses = ['518.8726', '219.2345'] lr = 0.000001
    2022-05-07 18:54:11,785 - dvae - INFO - [Epoch 1/300][Batch 61/93] BatchTime = 0.069 (s) DataTime = 0.001 (s) Losses = ['482.4848', '217.2698'] lr = 0.000001
    2022-05-07 18:54:13,179 - dvae - INFO - [Epoch 1/300][Batch 81/93] BatchTime = 0.070 (s) DataTime = 0.001 (s) Losses = ['478.4417', '216.4505'] lr = 0.000001
    2022-05-07 18:54:14,038 - dvae - INFO - [Training] EPOCH: 1 EpochTime = 6.631 (s) Losses = ['504.2244', '216.6016']
    2022-05-07 18:54:14,038 - dvae - INFO - [VALIDATION] Start validating epoch 1
    2022-05-07 18:54:15,204 - dvae - INFO - [Validation] EPOCH: 1 Metrics = ['0.0011', '202.4537', '132.7796']
    2022-05-07 18:54:15,204 - dvae - INFO - ============================ TEST RESULTS ============================
    2022-05-07 18:54:15,204 - dvae - INFO - Taxonomy #Sample F-Score CDL1 CDL2 #ModelName
Traceback (most recent call last):
  File "main.py", line 72, in <module>
    main()
  File "main.py", line 68, in main
    run_net(args, config, train_writer, val_writer)
  File "/home/ws666/Point-BERT-master/tools/runner.py", line 191, in run_net
    metrics = validate(base_model, test_dataloader, epoch, ChamferDisL1, ChamferDisL2, val_writer, args, config, logger=logger)
  File "/home/ws666/Point-BERT-master/tools/runner.py", line 295, in validate
    msg += shapenet_dict[taxonomy_id] + '\t'
KeyError: 'ModelNet'
```
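A hedged workaround: `shapenet_dict` in tools/runner.py appears to map ShapeNet taxonomy ids to category names, so ModelNet taxonomy ids are simply absent from it. Falling back to the raw id keeps validation running; an illustrative stand-in (the dict entry here is hypothetical):

```python
# Illustrative stand-in for the lookup at tools/runner.py line 295.
shapenet_dict = {"02691156": "airplane"}  # hypothetical subset of the real mapping
taxonomy_id = "ModelNet"

msg = shapenet_dict.get(taxonomy_id, taxonomy_id) + "\t"  # instead of shapenet_dict[taxonomy_id]
print(repr(msg))  # prints 'ModelNet\t' without raising KeyError
```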

About vote validation acc.

Hi. Thank you for your great work.

Did other works (e.g. Point Transformer) use the same voting strategy? It seems that the results reported in the paper are vote validation acc.

Part segmentation details

Hi,

Thanks for your great work.

May I ask when do you plan to release part segmentation codes?

We are trying to reproduce the part segmentation results and have met some issues due to the lack of details in the paper and appendix. Regarding the segmentation head, could you please provide more details about how the upsampled features H4^ and H8^, together with H12, are propagated through 1 MLP and 8 DGCNN blocks?

Looking forward to your kind reply.

Best regards,
Yatian
