
tanet's People

Contributors

happinesslz


tanet's Issues

Evaluation results of pretrained model are almost zero

Hi, I tried to evaluate the pretrained model provided for PointPillars_with_tanet, but the results look like this:

```
avg forward time per example: 0.006
avg postprocess time per example: 0.007
Before Refine:
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:0.01, 0.01, 0.01
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.01, 0.01, 0.01
Cyclist AP@0.25, 0.25, 0.25:
bbox AP:0.01, 0.01, 0.01
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.01, 0.01, 0.01
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:0.01, 0.14, 0.14
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.00, 0.01, 0.01
Pedestrian AP@0.25, 0.25, 0.25:
bbox AP:0.01, 0.14, 0.14
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.00, 0.01, 0.01

After Refine:
Cyclist AP@0.50, 0.50, 0.50:
bbox AP:0.00, 0.00, 0.00
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.00, 0.00, 0.00
Cyclist AP@0.25, 0.25, 0.25:
bbox AP:0.00, 0.00, 0.00
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.00, 0.00, 0.00
Pedestrian AP@0.50, 0.50, 0.50:
bbox AP:0.00, 0.00, 0.03
bev AP:0.00, 0.00, 0.03
3d AP:0.00, 0.00, 0.00
aos AP:0.00, 0.00, 0.01
Pedestrian AP@0.25, 0.25, 0.25:
bbox AP:0.00, 0.00, 0.03
bev AP:0.00, 0.07, 0.07
3d AP:0.00, 0.00, 0.05
aos AP:0.00, 0.00, 0.01

Cyclist coco AP@0.25:0.05:0.70:
bbox AP:0.01, 0.01, 0.01
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.00, 0.00, 0.00
Pedestrian coco AP@0.25:0.05:0.70:
bbox AP:0.04, 0.04, 0.06
bev AP:0.00, 0.01, 0.03
3d AP:0.00, 0.00, 0.02
aos AP:0.03, 0.02, 0.04

Cyclist coco AP@0.25:0.05:0.70:
bbox AP:0.01, 0.01, 0.01
bev AP:0.00, 0.00, 0.00
3d AP:0.00, 0.00, 0.00
aos AP:0.00, 0.00, 0.00
Pedestrian coco AP@0.25:0.05:0.70:
bbox AP:0.04, 0.04, 0.06
bev AP:0.00, 0.01, 0.03
3d AP:0.00, 0.00, 0.02
aos AP:0.03, 0.02, 0.04
```

Is this abnormal? What could be the reason for this?

cannot import name 'box_ops_cc' from 'second.core'

```
Traceback (most recent call last):
  File "/home/olap/model_test/python/TANet/pointpillars_with_TANet/second/core/box_np_ops.py", line 10, in <module>
    from second.core import box_ops_cc
ImportError: cannot import name 'box_ops_cc' from 'second.core'
```
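For what it's worth, `box_ops_cc` is a compiled extension, so this ImportError usually means the native ops were never built. A generic guarded-import sketch of treating such an extension as optional (my own pattern, not the repository's actual code; `HAS_BOX_OPS_CC` is a hypothetical flag name):

```python
# Guarded import: treat the compiled extension as optional and record
# whether it loaded. HAS_BOX_OPS_CC is a hypothetical flag name.
try:
    from second.core import box_ops_cc  # compiled C++/CUDA extension
    HAS_BOX_OPS_CC = True
except ImportError:  # also covers ModuleNotFoundError
    box_ops_cc = None
    HAS_BOX_OPS_CC = False
```

Callers can then check the flag and fall back to a pure-NumPy path when the extension is unavailable.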

Question about total_dir_loss

Hi, I read your source code. In voxelnet.py, the code that computes total_dir_loss and total_loss is as follows:

```python
if self._use_direction_classifier:
    refine_dir_logits = preds_dict["Refine_dir_preds"].view(batch_size_dev, -1, 2)
    ### compute refine dir loss
    refine_dir_loss = self._dir_loss_ftor(refine_dir_logits, dir_targets, weights=weights)
    refine_dir_loss = refine_dir_loss.sum() / batch_size_dev

    ### self._direction_loss_weight = 0.2
    total_dir_loss = dir_loss + refine_dir_loss * self._direction_loss_weight

    ### compute total loss
    refine_loss += total_dir_loss

total_loss = coarse_loss + refine_loss
```

Here `dir_loss` is added to both `coarse_loss` and `refine_loss`, so `dir_loss` is counted twice in `total_loss`. I don't understand why it is computed this way. Is this intentional, or is it a mistake in the code?
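A minimal numeric sketch of the concern above, using hypothetical scalar loss values (not taken from the repository) and assuming `coarse_loss` already includes a weighted `dir_loss` term:

```python
# All values below are hypothetical, for illustration only.
dir_loss = 0.4                 # coarse direction loss
refine_dir_loss = 0.3          # refined direction loss
direction_loss_weight = 0.2    # mirrors self._direction_loss_weight

# Assumption: coarse_loss already contains dir_loss * direction_loss_weight.
coarse_loss = 1.0 + dir_loss * direction_loss_weight

# The snippet in question then adds the *unweighted* dir_loss into refine_loss:
total_dir_loss = dir_loss + refine_dir_loss * direction_loss_weight
refine_loss = 0.8 + total_dir_loss

total_loss = coarse_loss + refine_loss
# dir_loss ends up with an effective weight of (1 + direction_loss_weight)
# in total_loss, which is the double counting the question points out.
```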

google.protobuf.text_format.ParseError: 8:5 : Message type "second.protos.VoxelNet" has no field named "num_class".

Hi, I ran into the following problem during training. Could you help me resolve it?
```
Traceback (most recent call last):
  File "./pytorch/train.py", line 765, in <module>
    fire.Fire()
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/fire/core.py", line 480, in _Fire
    target=component.__name__)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "./pytorch/train.py", line 112, in train
    text_format.Merge(proto_str, config)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 728, in Merge
    allow_unknown_field=allow_unknown_field)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 796, in MergeLines
    return parser.MergeLines(lines, message)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 821, in MergeLines
    self._ParseOrMerge(lines, message)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 840, in _ParseOrMerge
    self._MergeField(tokenizer, message)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 970, in _MergeField
    merger(tokenizer, message, field)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 1045, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 970, in _MergeField
    merger(tokenizer, message, field)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 1045, in _MergeMessageField
    self._MergeField(tokenizer, sub_message)
  File "/anaconda3/envs/tanet/lib/python3.7/site-packages/google/protobuf/text_format.py", line 937, in _MergeField
    (message_descriptor.full_name, name))
google.protobuf.text_format.ParseError: 8:5 : Message type "second.protos.VoxelNet" has no field named "num_class".
```

Training on all classes of KITTI

@happinesslz thanks for open-sourcing the code base. I have a query on the trained models:

  1. We have two models, one for vehicles and one for persons. Can we train TANet on the KITTI dataset for all the classes with one model?
  2. Does the accuracy drop when trained on all the classes of the KITTI data?

Please share your thoughts, and thanks in advance.

Visualization of the inference results in the pointcloud

Hello @happinesslz, and thanks for the great work. I was wondering how I can reproduce the visualization of the 3D bounding boxes on the point cloud itself, like in the image (on the right).
[image]

Is there code for this in the repo, or is it external? Moreover, can the visualization be done with the KittiViewer web tool?

Best,
Aldi

License

Hi,
Could you add a license for this code or state your restriction on using or changing this code?

Thanks.

Train with my own dataset

Hi,

Thanks for the nice work! Do you have any suggestions for how I should modify my own dataset to use TANet?

How to load a trained model

Does anyone know how to load a trained model? I am a beginner and have been confused for days about loading a trained model and then running it on my own data. It is hard for me to do this. Could you explain in detail?

Some questions about the attention module

Thanks for sharing this great work. I have some questions about the attention module.
First, the channel-wise attention operation, I think, tends to select the maximum of [x, y, z, intensity].
Second, the point-wise attention operation tends to select the farthest point in each voxel.
So these operations do not seem to make much intuitive sense to me.
If I have misunderstood the idea of the attention, please tell me.
Thanks again.
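As a reading aid for the question above, here is a rough NumPy sketch of what channel-wise vs. point-wise attention over one voxel's points could look like. This is my own illustrative construction, not the authors' implementation; the shapes, pooling choices, and names are all assumptions:

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical voxel: N=8 points with C=4 features [x, y, z, intensity].
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))                # (N, C)

# Channel-wise attention: max-pool over points, then score each channel.
channel_scores = feats.max(axis=0)             # (C,)
channel_attn = softmax(channel_scores, axis=0) # one weight per channel

# Point-wise attention: max-pool over channels, then score each point.
point_scores = feats.max(axis=1)               # (N,)
point_attn = softmax(point_scores, axis=0)     # one weight per point

# Re-weight the voxel's features by both attention maps.
attended = feats * channel_attn[None, :] * point_attn[:, None]  # (N, C)
```

With max-pooling as the scoring step, the softmax weights do concentrate on the dominant channel/point, which is essentially the behavior the question is asking about.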

Evaluation on TANet for PointPillars

Hi @happinesslz,

How can I run the evaluation of TANet for PointPillars using the trained models?

I downloaded your trained models from Google Drive and I just want to run the evaluation.
Is the following command correct?

```
python ./pytorch/train.py train --config_path=./configs/car.config --model_dir=./models/car_trained
```

Thank you for your attention.

nvcc fatal : Unknown option 'MMD'

I'm having issues starting the training with `bash run_car_tanet.sh`, and I get the error:

```
nvcc fatal : Unknown option 'MMD'
```

My setup is:

  • Ubuntu 18.04
  • CUDA version 11.6
  • NVIDIA-SMI 510.54
  • CUDA compilation tools release 10.1.243
  • cumm version: 0.2.8
  • spconv version: 2.1.14 (the voxelgeneratorv2 class is still available; it was removed in versions after 2.1.19)
  • Graphics card: Turing RTX 2060

The error output starts here:

```
[1/577] [NVCC][c++]/home/wm/Downloads/spconv/spconv/build/src/cumm/gemm/main/Simt_f16f16f16f32f32ntt_m32n32k32m32n32k8_2_SAB10/GemmKernel/GemmKernel_gemm_kernel.cu.o
FAILED: /home/wm/Downloads/spconv/spconv/build/src/cumm/gemm/main/Simt_f16f16f16f32f32ntt_m32n32k32m32n32k8_2_SAB10/GemmKernel/GemmKernel_gemm_kernel.cu.o
nvcc -MMD -MT /home/wm/Downloads/spconv/spconv/build/src/cumm/gemm/main/Simt_f16f16f16f32f32ntt_m32n32k32m32n32k8_2_SAB10/GemmKernel/GemmKernel_gemm_kernel.cu.o -MF /home/wm/Downloads/spconv/spconv/build/src/cumm/gemm/main/Simt_f16f16f16f32f32ntt_m32n32k32m32n32k8_2_SAB10/GemmKernel/GemmKernel_gemm_kernel.cu.o.d -I "/home/wm/Downloads/spconv/spconv/build/include" -I "/usr/local/cuda-10.1/targets/x86_64-linux/include" -I "/home/wm/Downloads/cumm/include" -I "/home/wm/anaconda3/envs/TANet_happinesslz/lib/python3.7/site-packages/pybind11/include" -I "/home/wm/anaconda3/envs/TANet_happinesslz/include/python3.7m" -I "/home/wm/anaconda3/envs/TANet_happinesslz/include/python3.7m" -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -Xcudafe "--diag_suppress=implicit_return_from_non_void_function" -Xcompiler -fno-gnu-unique -std=c++14 -O3 -DTV_CUDA -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -Xcompiler="-fPIC" -Xcompiler='-O3' -c /home/wm/Downloads/spconv/spconv/build/src/cumm/gemm/main/Simt_f16f16f16f32f32ntt_m32n32k32m32n32k8_2_SAB10/GemmKernel/GemmKernel_gemm_kernel.cu -o /home/wm/Downloads/spconv/spconv/build/src/cumm/gemm/main/Simt_f16f16f16f32f32ntt_m32n32k32m32n32k8_2_SAB10/GemmKernel/GemmKernel_gemm_kernel.cu.o
nvcc fatal : Unknown option 'MMD'
```

Does anyone have any advice to help resolve this issue? Thanks!

Question about the accuracy of the Ped&Cyc model

Hi @happinesslz ,
Thanks for your contribution!
I tried to reproduce the results on Car and Ped&Cyc with your default configurations, only to find that there is a gap in the accuracy of Ped&Cyc, while the accuracy of Car is fine.
Could you explain why?

| Config  | Claimed AP (3d, easy/mod/hard) | Reproduced AP (3d, easy/mod/hard) |
|---------|--------------------------------|-----------------------------------|
| Car     | Car: 88.17, 77.75, 75.31 | Car: 88.41, 77.84, 75.84 |
| Ped&Cyc | Ped: 71.04, 64.20, 59.11; Cyc: 85.21, 65.29, 61.57 | Ped: 68.73, 62.96, 56.60; Cyc: 80.11, 60.74, 56.18 |

Dependencies

Hi,
It looks like the NUMBAPRO_* environment variables are deprecated and ignored, so there is no need to set them.
Also, I noticed that you need to install shapely, easydict, and pybind11 (along with `apt-get install libboost-dev`) to run pointpillars_with_TANet.

Could you provide visualization codes?

Thanks for your useful project. How can we visualize results such as the comparison image between TANet and PointPillars in the Visualization section? Could you provide the visualization code?
