dvlab-research / DSGN
DSGN: Deep Stereo Geometry Network for 3D Object Detection (CVPR 2020)
License: MIT License
Hello, thanks for your great work! I'd like to know when the code will be available.
Hi, Jia,
I have cloned the repo and downloaded your first pre-trained model (DSGN_car_pretrained.zip). I then ran the code on the KITTI dataset, but the results only contain cyclists and pedestrians.
I also tried another model (dsgn_12g_b). That one produced results for 'car', but the positions, dimensions, and orientations are almost all wrong.
My environment follows your requirements (python==3.7.0, pytorch==1.1.0, torchvision==2.2.0).
Could you please give me some tips?
Thanks.
Hello, I'd like to ask: does the 12G model not support multi-GPU training?
I ran the command python3 tools/train_net.py --cfg ./configs/config_car_12g.py --savemodel ./outputs/MODEL_dsgn_v1 -btrain 8 -d 0-7, and still got:
RuntimeError: CUDA out of memory. Tried to allocate 674.00 MiB (GPU 0; 11.78 GiB total capacity; 9.14 GiB already allocated; 665.69 MiB free; 892.32 MiB cached)
My GPUs are TITAN Vs.
Looking forward to your reply, thank you!
First, I want to congratulate you on the work done.
I would like to know if you plan to make a version that runs with TensorFlow / TensorFlow Lite.
I was very impressed by your model. Thank you for sharing!
I want to run your model in my setup, but there is a performance issue.
I think it is caused by the PyTorch version, as you mentioned in Troubleshooting.
My environment is as follows:
cuda 11.8
python 3.7
torch 1.8.0
torchvision 0.9.0
4 RTX 3090(24G) for training.
My train & test commands are as follows:
python3 tools/train_net.py --cfg ./configs/default/config_car.py --savemodel ./outputs/dsgn_origin_4 -btrain 4 -d 0-3 --multiprocessing-distributed
python3 tools/test_net.py --loadmodel ./outputs/dsgn_origin_4/finetune_53.tar -btest 8 -d 0-3
Thank you for reading my issue. Is there a problem with my setup?
Hello, chenyilun95!
Thanks for your great work on stereo 3D object detection. After watching your demo, I am confused about the bottom-right BEV point cloud: is it the original Velodyne point cloud, or a pseudo-LiDAR cloud generated from your network's depth estimates?
What is the supervision of the point cloud?
I can think of two ways of understanding it.
When I was testing the model, a TypeError appeared.
DSGN/dsgn/eval/kitti-object-eval-python# bash eval.sh /root/autodl-tmp/DSGN/tools/.././outputs/MODEL_DSGN_12g/kitti_output
/root/autodl-tmp/DSGN/tools/.././outputs/MODEL_DSGN_12g/kitti_output
0
Eval 3769 images
Traceback (most recent call last):
File "evaluate.py", line 32, in <module>
fire.Fire()
File "/root/.local/lib/python3.7/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/root/.local/lib/python3.7/site-packages/fire/core.py", line 471, in _Fire
target=component.name)
File "/root/.local/lib/python3.7/site-packages/fire/core.py", line 681, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "evaluate.py", line 28, in evaluate
print(get_official_eval_result(gt_annos, dt_annos, current_class))
File "/root/autodl-tmp/DSGN/dsgn/eval/kitti-object-eval-python/eval.py", line 773, in get_official_eval_result
z_center=z_center)
File "/root/autodl-tmp/DSGN/dsgn/eval/kitti-object-eval-python/eval.py", line 677, in do_eval_v3
z_center=z_center)
File "/root/autodl-tmp/DSGN/dsgn/eval/kitti-object-eval-python/eval.py", line 517, in eval_class
z_center=z_center)
File "/root/autodl-tmp/DSGN/dsgn/eval/kitti-object-eval-python/eval.py", line 395, in calculate_iou_partly
overlap_part = image_box_overlap(gt_boxes, dt_boxes)
TypeError: expected dtype object, got 'numpy.dtype[float64]'
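For what it's worth, this particular TypeError from kitti-object-eval-python is a known symptom of an old numba release running against NumPy >= 1.20, whose dtype internals changed; the usual fix is pinning NumPy below 1.20 or upgrading numba. A minimal sketch of the version rule (the exact cutoffs below are my assumption, not an exhaustive compatibility table):

```python
# Sketch: numba releases before 0.53 predate NumPy 1.20's dtype changes,
# which is one known trigger for
# "TypeError: expected dtype object, got 'numpy.dtype[float64]'".
def _parse(version):
    # keep only the (major, minor) part of a version string
    return tuple(int(x) for x in version.split(".")[:2])

def numba_numpy_compatible(numba_version, numpy_version):
    # Assumed rule of thumb: NumPy >= 1.20 needs numba >= 0.53.
    if _parse(numpy_version) >= (1, 20) and _parse(numba_version) < (0, 53):
        return False
    return True

print(numba_numpy_compatible("0.48.0", "1.21.2"))  # -> False (incompatible pair)
```

If the check above flags your pair, `pip install "numpy<1.20"` or upgrading numba in the eval environment is worth trying before digging into the eval code itself.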
Hi, I have a question about the depth-estimation error. Did you compute this error at different distances? For example, the mean and median error in several distance bins, like 0-10 m, 10-20 m, 20-30 m, 30-40 m.
Hello, could you please share the details about the inference speed of DSGN? Thanks.
Hi,
Could you please let me know what train/val/test split is used here?
Is it the same as https://xiaozhichen.github.io/files/mv3d/imagesets.tar.gz?
Thanks in advance.
Hi, I used the provided config configs/default/config_car.py
to train DSGN on the KITTI trainval set and submitted to the leaderboard, but the results I get seem lower than those reported in the paper.
I am using PyTorch 1.2 with Torchvision 0.4 for training.
If the configuration used to get the leaderboard results differs from the provided one, could it be made available?
Thanks!
I followed the same instructions as mentioned in the README file, but I am getting the error below while running test_net.py.
command : python3 tools/test_net.py --loadmodel ./outputs/DSGN_car_pretrained/ -btest 4 -d 2-3
Error: RuntimeError: cuda runtime error (209) : no kernel image is available for execution on the device
Environment Details:
Ubuntu 18.04
torch 1.3.0
torchvision 0.4.1
4x GeForce RTX 2080 (11 GB)
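A note for anyone hitting the same thing: CUDA runtime error 209 ("no kernel image is available") usually means the binaries were not compiled for the GPU's compute capability, and an RTX 2080 is sm_75. A hedged sketch of a rebuild, where the actual build entry point of this repo is a guess and therefore left commented out:

```shell
# Hypothetical fix for "no kernel image is available for execution":
# target the RTX 2080's compute capability when rebuilding the CUDA ops.
export TORCH_CUDA_ARCH_LIST="7.5"
echo "building for sm_${TORCH_CUDA_ARCH_LIST/./}"
# python setup.py build_ext --inplace   # assumed build command for this repo
```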
I can't find the split files. Where are the split files?
Hi! I wonder whether camera parameters need to be introduced when building the Plane Sweep Volume. What is the difference between your Plane Sweep Volume and the cost volume in stereo matching?
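A rough illustration of the distinction (my own sketch, not DSGN's implementation): a classical stereo cost volume enumerates candidate disparities by shifting the right feature map, so no camera parameters are needed, whereas a plane-sweep volume enumerates candidate depth planes, which requires the focal length and stereo baseline to map each depth to a disparity via d = f * b / z.

```python
import numpy as np

def stereo_cost_volume(left, right, max_disp):
    """Concatenation cost volume over candidate disparities.

    left, right: (C, H, W) feature maps -> volume of shape (2C, max_disp, H, W).
    No camera parameters are involved: disparity itself is the search variable.
    """
    C, H, W = left.shape
    vol = np.zeros((2 * C, max_disp, H, W), dtype=left.dtype)
    for d in range(max_disp):
        vol[:C, d] = left
        if d == 0:
            vol[C:, d] = right
        else:
            vol[C:, d, :, d:] = right[:, :, :-d]  # shift right features by d pixels
    return vol

def depth_to_disparity(z, focal, baseline):
    # The conversion a depth-based plane sweep relies on, so the
    # intrinsics (focal length) and baseline must be known.
    return focal * baseline / z
```

With KITTI-like numbers (focal ~721.5 px, baseline ~0.54 m), a point at 10 m maps to roughly a 39-pixel disparity, which is why sweeping uniform depth planes samples the image very differently from sweeping uniform disparities.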
Thanks for your work~ I have a question: if we use the 3D Geometry Volume to predict depth, i.e., IMG -> PSCV -> 3DV -> Depth, would the depth results be better? Thanks.
Hi, thanks for your work.
I set batch size 1 for a single GPU, but it still runs out of memory (caused by the 3D convolutions)?
python3 ./tools/train_net.py \
    --cfg ./configs/default/config_car.py \
    --savemodel ./outputs/dsgn_car \
    --start_epoch 1 \
    --lr_scale 50 \
    --epochs 60 \
    -btrain 1 \
    -d 0
What should I do to solve this problem?
Thanks for your help.
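For intuition, a back-of-envelope estimate (the sizes below are illustrative assumptions, not DSGN's actual tensor shapes) shows why even batch size 1 is heavy: a single dense fp32 3D feature volume can approach a gigabyte, and training keeps many such intermediate volumes alive for backprop.

```python
def volume_gib(channels, depth, height, width, bytes_per_el=4):
    # Size of one dense fp32 feature volume in GiB.
    return channels * depth * height * width * bytes_per_el / 2**30

# e.g. 64 channels over 192 depth planes at quarter image resolution (assumed):
one_volume = volume_gib(64, 192, 288 // 4, 1248 // 4)
print(f"{one_volume:.2f} GiB per volume")  # -> 1.03 GiB per volume
```

Generic mitigations, independent of this repo: lower the input or volume resolution in the config, use fewer depth planes, or wrap the 3D convolution stack in torch.utils.checkpoint to trade recomputation for memory.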
How long does it take to process one frame?
Have you benchmarked this on the Waymo Open Dataset? It would be really useful to compare on that dataset, like RetinaTrack (https://arxiv.org/abs/2003.13870).
Have you tried training depth and 3D object detection jointly on monocular images in the DSGN manner? OFT (https://github.com/tom-roddick/oft) does not work very well. Does the 3D volume transformation really matter?
Is there a model that has been trained?