Comments (22)
Hi, I am willing to help. Could you provide some samples, such as the disparity map or point cloud?
from pseudo_lidar.
Hi,
Thanks for replying!
Since the files are too large to attach here, I have forwarded them to your mail ([email protected]). Sorry for the inconvenience.
Thanks,
Hari
Thanks for the fast response! I have already sent the files to your mail; I hope you received them.
Hi, can you please post any analysis of the above, or the root cause? It would be very helpful.
Hi. @hari1106 can you explain how you are able to convert a mono depth map to a point cloud? Depth requires a baseline, but there is no baseline for a mono image. So, how can you convert it to a point cloud?
@hari1106 so if I understand you correctly, we need a stereo camera regardless? That defeats the purpose of using mono depth. I was under the impression that we could use images obtained from a single camera, obtain the mono depth map for each image, and then use the calibration values to get the point cloud coordinates. But without the baseline value, that is impossible; in other words, it is impossible to convert a monodepth map to a point cloud without knowing the baseline. Please tell me if I understand you correctly here.
Monodepth uses the concept of image regeneration to predict disparity. If you train with stereo images at a specified baseline, then at test time, given the left image alone, its encoder-decoder architecture can regenerate the right image and produce a disparity map. From that perspective it requires only a single left image, but since depth is recovered with the formula D = f*b/d, it doesn't count as a "monocular" approach in the sense you mentioned. There are, however, other monocular approaches/models that predict depth directly from a monocular image and do not depend on a baseline. I hope I answered your question.
Thanks,
Hari
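To make the D = f*b/d relationship above concrete, here is a minimal sketch. The focal length and baseline are illustrative KITTI-like values, not numbers from this thread; substitute your own calibration.

```python
# Illustrative values only: a KITTI-like focal length (pixels) and
# stereo baseline (metres). These are assumptions, not thread data.
FOCAL_PX = 721.5
BASELINE_M = 0.54

def disparity_to_depth(disparity_px, focal_px=FOCAL_PX, baseline_m=BASELINE_M):
    """Convert a disparity value (pixels) to metric depth via D = f * b / d."""
    return focal_px * baseline_m / disparity_px

# A 30-pixel disparity corresponds to roughly 13 m of depth with these values.
```

This is why disparity-predicting models are tied to the baseline: without b, the division above cannot produce metric depth.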
@mileyan I would appreciate it if you could post the analysis of the issue I raised.
@hari1106 thank you for this clarification. I guess that means it is simply impossible to obtain the point cloud from a mono depth map without a baseline? Because that is what I was trying to do and I immediately got stumped when it came to generating the point cloud since the original images were not taken from a stereo camera.
I think you understood it wrongly. I gave a brief idea of how monodepth works, and I clearly mentioned that in that case the output is a disparity map, not a depth map. In your case, however, you have a depth map as output, so you can generate the point cloud using the same code provided by mileyan, with the arguments set for a depth image.
Thanks,
Hari
I use DenseDepth (https://github.com/ialhashim/DenseDepth) to obtain a depth map for an image. What do I use as the baseline when converting this to a point cloud? The original images are not from a stereo camera.
Please refer to #15 if you have a depth image as the output of your model.
I have tried this. Using it requires the baseline, which I don't have, since the image was not obtained from a stereo camera.
@sarimmehdi Please add --is_depth in the command as clearly mentioned in #15
@hari1106 well, of course, I did that. Please read my comment, where I clearly mentioned that I have no baseline value since my images were not obtained from a stereo camera. As I originally stated, it is impossible to get a point cloud if you don't have the baseline.
@sarimmehdi please check the code and point out exactly where the baseline is needed when you pass the --is_depth argument. If your model outputs a depth map, you don't require the baseline value, though I agree you do have to provide the transformation matrix from your lidar to your monocular camera coordinates if you are using a custom dataset.
Thanks,
Hari
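For a model that outputs metric depth directly, the point cloud comes from plain pinhole back-projection, and no baseline appears anywhere. A minimal sketch, assuming the camera intrinsics fx, fy, cx, cy are known from calibration (the function name is illustrative, not the script's):

```python
import numpy as np

def depth_map_to_points(depth, fx, fy, cx, cy):
    """Back-project a metric depth map (H x W) to an N x 3 point cloud
    in camera coordinates using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

To express the points in lidar coordinates, as pseudo_lidar does, you would additionally apply the camera-to-lidar transform from your calibration, which is the matrix mentioned above.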
@hari1106 When we set is_depth to true we arrive here: https://github.com/mileyan/pseudo_lidar/blob/master/preprocessing/generate_lidar.py#L70
This leads us to this function:
Here you can clearly see the baseline being used to obtain the point cloud. Hence, it is impossible to obtain the 3D point cloud coordinates without the baseline value (which I do not have, since my image was not obtained from a stereo camera).
@sarimmehdi if you are setting is_depth to true, can you please double-check whether it goes to line 70 or to this line:
https://github.com/mileyan/pseudo_lidar/blob/master/preprocessing/generate_lidar.py#L72
which leads to the function "project_depth_to_points"?
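In other words, the script chooses between two paths, and only the disparity path ever touches the baseline. A rough, self-contained sketch of that branching, with simplified illustrative names (the real generate_lidar.py works with KITTI calibration objects):

```python
import numpy as np

def disp_to_depth(disp, focal_px, baseline_m):
    # Disparity path: the baseline enters here, via D = f * b / d.
    # np.maximum guards against division by zero-disparity pixels.
    return focal_px * baseline_m / np.maximum(disp, 1e-6)

def to_point_cloud(pred, is_depth, focal_px, baseline_m, cx, cy):
    if is_depth:
        depth = pred  # already metric depth: the baseline is never used
    else:
        depth = disp_to_depth(pred, focal_px, baseline_m)
    # Shared back-projection step (pinhole model, square pixels assumed).
    rows, cols = depth.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    x = (u - cx) * depth / focal_px
    y = (v - cy) * depth / focal_px
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

With is_depth set, the baseline argument is dead weight, which matches the point being made in this exchange.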
Hi, thank you. So, if I understand correctly, there is no need for a baseline, since the depth is already predicted by the monodepth-estimating neural net.
@sarimmehdi yes, true, there is no need for a baseline with depth-predicting models. But for models like monodepth, which predict the disparity between left and right images, depth follows an inverse relationship with disparity (D = f*b/d), where the baseline is required. Hope your doubt is clarified. Happy learning!
Thanks,
Hari
@mileyan got better results. Closing this issue.
Related Issues (20)
- Evaluation with pre-trained frustum pointnet
- AVOD pre-trained weights for mono pseudo-lidar
- have trouble in Train the stereo model
- pseudo lidar
- pseudo lidar gives wrong converted point cloud
- Can not install pytorch with Python 2.7
- Training for my own custom data
- If you need PSMNet but only want to go with python 3, check this repo then
- Why pseudo point cloud results in this proposed method seem too far different from LIDAR?
- question about dataloader
- About the calibration problem between the true location and the Pseudo-LiDAR
- [Feature requested] Python3 support
- How did you handle calib matrices in mono+depth setting
- Why does the nan value appear in loss when training the stereo model?
- Confusion about paper table 5
- visualization code cannot get point cloud output
- Visualising Pseudo and Real LiDAR
- Training Custom dataset
- Nuscenes dataset application
- can you share the dorn project