Comments (27)
@swdev1202
I am using pyqtgraph
to visualize point clouds.
You can install it with pip.
```python
import numpy as np
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph.opengl as gl

class plot3d(object):
    def __init__(self, title='null'):
        # OpenGL view widget that holds the point cloud
        self.glview = gl.GLViewWidget()
        coord = gl.GLAxisItem()
        coord.setSize(1, 1, 1)
        #self.glview.addItem(coord)
        self.glview.setMinimumSize(QtCore.QSize(600, 500))
        self.glview.pan(1, 0, 0)
        self.glview.setCameraPosition(azimuth=180)
        self.glview.setCameraPosition(elevation=0)
        self.glview.setCameraPosition(distance=5)
        self.items = []
        # top-level widget: a snapshot button above the GL view
        self.view = QtGui.QWidget()
        self.view.window().setWindowTitle(title)
        hlayout = QtGui.QHBoxLayout()
        snap_btn = QtGui.QPushButton('&Snap')
        def take_snap():
            qimg = self.glview.readQImage()
            qimg.save('1.jpg')
        snap_btn.clicked.connect(take_snap)
        hlayout.addWidget(snap_btn)
        hlayout.addStretch()
        layout = QtGui.QVBoxLayout()
        layout.addLayout(hlayout)
        layout.addWidget(self.glview)
        self.view.setLayout(layout)

    def add_item(self, item):
        self.glview.addItem(item)
        self.items.append(item)

    def clear(self):
        for it in self.items:
            self.glview.removeItem(it)
        del self.items[:]  # list.clear() does not exist on Python 2

    def add_points(self, points, colors):
        points_item = gl.GLScatterPlotItem(pos=points, size=1.5, color=colors)
        self.add_item(points_item)

    def add_line(self, p1, p2, color, width=3):
        lines = np.array([[p1[0], p1[1], p1[2]],
                          [p2[0], p2[1], p2[2]]])
        lines_item = gl.GLLinePlotItem(pos=lines, mode='lines',
                                       color=color, width=width, antialias=True)
        self.add_item(lines_item)

    def plot_bbox_mesh(self, gt_boxes3d, color=(0, 1, 0, 1)):
        # draw the 12 edges of an (8, 3) array of box corners
        b = gt_boxes3d
        for k in range(4):
            i, j = k, (k + 1) % 4
            self.add_line([b[i, 0], b[i, 1], b[i, 2]], [b[j, 0], b[j, 1], b[j, 2]], color)
            i, j = k + 4, (k + 1) % 4 + 4
            self.add_line([b[i, 0], b[i, 1], b[i, 2]], [b[j, 0], b[j, 1], b[j, 2]], color)
            i, j = k, k + 4
            self.add_line([b[i, 0], b[i, 1], b[i, 2]], [b[j, 0], b[j, 1], b[j, 2]], color)

def value_to_rgb(pc_inte):
    # map intensity to RGB: low intensity -> red, high -> green
    minimum, maximum = np.min(pc_inte), np.max(pc_inte)
    ratio = (pc_inte - minimum + 0.1) / (maximum - minimum + 0.1)
    r = np.maximum(1 - ratio, 0)
    b = np.maximum(ratio - 1, 0)
    g = 1 - b - r
    return np.stack([r, g, b]).transpose()

def view_points_cloud(pc=None):
    app = QtGui.QApplication([])
    glview = plot3d()
    if pc is None:
        pc = np.random.rand(1024, 3)
    pc_color = np.ones([pc.shape[0], 4])
    glview.add_points(pc, pc_color)
    glview.view.show()
    return app.exec_()  # app.exec() is a syntax error on Python 2
```
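If you want intensity-colored points instead of the all-white pc_color above: value_to_rgb returns (N, 3) RGB values, while GLScatterPlotItem's color argument expects RGBA. One way to bridge the two (a sketch of mine, not code from the repo; intensity_to_rgba is a hypothetical helper name):

```python
import numpy as np

def value_to_rgb(pc_inte):
    # same mapping as above: low intensity -> red, high -> green
    minimum, maximum = np.min(pc_inte), np.max(pc_inte)
    ratio = (pc_inte - minimum + 0.1) / (maximum - minimum + 0.1)
    r = np.maximum(1 - ratio, 0)
    b = np.maximum(ratio - 1, 0)
    g = 1 - b - r
    return np.stack([r, g, b]).transpose()

def intensity_to_rgba(pc_inte, alpha=1.0):
    # append an alpha column so the result can be passed as the
    # `color` argument of gl.GLScatterPlotItem
    rgb = value_to_rgb(pc_inte)
    a = np.full((rgb.shape[0], 1), alpha)
    return np.hstack([rgb, a])

rgba = intensity_to_rgba(np.random.rand(1024))
```

You would then call plot3d.add_points(points, rgba) instead of passing the all-ones color array.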
from pseudo_lidar.
Can you please check the PSMNet environment?
Python 2.7
PyTorch (0.4.0+)
torchvision 0.2.0 (a higher version may cause issues)
Hi,
I am using Python 3 and PyTorch 1.1.0.
Should I retrain PSMNet?
That's weird. Is there a simpler solution?
Thank you
The simpler solution is to install Python 2.7, PyTorch (0.4.0+), and torchvision 0.2. You can easily use Anaconda to create a new environment.
Thanks.
Hi, is this how it is supposed to look now?
But I notice the generated pseudo point cloud does not look similar to the ones in your paper.
It should not look like that. How did you generate this point cloud? Which code did you use?
I followed your README to generate the point cloud.
First, generate the disparity maps:

```
python ./psmnet/submission.py \
    --loadmodel ./finetune_300.tar \
    --datapath /mine/KITTI_DAT/training/ \
    --save_path ./training_predict_disparity/
```

Then convert disparity to point clouds:

```
python ./preprocessing/generate_lidar.py \
    --calib_path /mine/KITTI_DAT/training/calib \
    --save_path ./training_predict_velodyne \
    --disparity_dir ./training_predict_disparity/ \
    --max_high 1
```

I noticed the --disp_path command-line option is incorrect, so I used --disparity_dir instead.
This is the 000003 file in the KITTI training set.
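For reference, the conversion generate_lidar.py performs boils down to depth = focal_length * baseline / disparity, followed by back-projecting every pixel through the pinhole model. The repo reads the real KITTI calib files; the sketch below uses made-up intrinsics (fx, cx, cy, baseline) just to show the core math:

```python
import numpy as np

def disparity_to_points(disp, fx, cx, cy, baseline):
    # depth from stereo: z = fx * baseline / disparity, then
    # back-project each pixel (u, v) through the pinhole model
    rows, cols = disp.shape
    u, v = np.meshgrid(np.arange(cols), np.arange(rows))
    valid = disp > 0
    z = fx * baseline / disp[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fx
    return np.stack([x, y, z], axis=1)

# synthetic flat disparity map; the intrinsics are made-up values,
# not KITTI calibration
disp = np.full((4, 4), 10.0)
pts = disparity_to_points(disp, fx=700.0, cx=2.0, cy=2.0, baseline=0.54)
```

Note that the real script also filters points by height (the --max_high flag) and reflects KITTI's camera-to-velodyne coordinate change, which this sketch omits.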
I also found that the script generate_lidar.py
does not work properly!
Using frustum-pointnets, I first fed the ground-truth velodyne to it for testing, and it worked well. Then I used generate_lidar.py
to generate predicted_velodyne from the ground-truth disparity, fed predicted_velodyne in for testing, and the results were very bad. So I think there is some problem in generate_lidar.py.
Hi godspeed1989,
I used the code in this repo to generate the point cloud, and it looks correct, so I am not sure why your result is wrong. Can you send me the disparity file and the bin file of 000003? My email is [email protected] . Thanks.
Hi zklgame,
Did you use the FPointNet trained on velodyne or on pseudo-lidar? Could you try this checkpoint, which is trained on pseudo-lidar: https://drive.google.com/file/d/1qhCxw6uHqQ4SAkxIuBi-QCKqLmTGiNhP/view?usp=sharing
Hi, mileyan.
I added a --save_float
option to python ./psmnet/submission.py
to store the disparity maps as .npy.
Meanwhile, I modified several lines in ./preprocessing/generate_lidar.py
to read .npy instead of .png.
The result looks better.
training/000000
training/000002
training/000003
But I am not sure whether it is correct now.
I notice the estimated depth is not so good at a distance.
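The improvement from --save_float is consistent with a quantization effect: a PNG stores rounded disparities, and because depth is proportional to 1/disparity, small rounding errors at low disparity (i.e. far range) become large depth errors. A toy illustration with made-up numbers, not the repo's actual I/O code:

```python
import numpy as np

fx_times_baseline = 380.0                  # hypothetical fx * baseline
disp_true = np.array([40.2, 4.3, 0.8])     # near, mid, far pixels

# simulate saving the map as an integer PNG: disparities get rounded
disp_rounded = np.round(disp_true)
disp_rounded[disp_rounded < 1] = 1.0       # avoid division by zero

depth_true = fx_times_baseline / disp_true
depth_rounded = fx_times_baseline / disp_rounded

# the rounding error in depth grows sharply with distance
err = np.abs(depth_rounded - depth_true)
```

The near pixel barely moves, while the far pixel's depth is off by tens of metres, which matches godspeed1989's observation that the estimated depth degrades at a distance.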
Hi godspeed1989,
It looks great! I will make save_float
the default setting. Thanks for your feedback.
@godspeed1989
May I ask what visualization tool you use for the point cloud (.bin)?
Thank you!
@godspeed1989
Hi, how do I use this code, and where do I need to put it?
How can I use this code to visualize my .bin files?
@Tantoyy @DavidDiosdado you can call view_points_cloud()
to display your (N,3) points array
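To get an (N, 3) array out of a .bin file: KITTI velodyne scans are stored as flat float32 (x, y, z, intensity) records, so a small loader sketch looks like this (load_kitti_bin is my own helper name, not part of the repo, and the filename is just an example):

```python
import numpy as np

def load_kitti_bin(path):
    # KITTI velodyne .bin files are flat float32 arrays of
    # (x, y, z, intensity) records
    scan = np.fromfile(path, dtype=np.float32).reshape(-1, 4)
    return scan[:, :3], scan[:, 3]

# demo with a synthetic file standing in for a real scan
demo = np.random.rand(1024, 4).astype(np.float32)
demo.tofile('demo.bin')
points, intensity = load_kitti_bin('demo.bin')
# points can now be passed to view_points_cloud(points)
```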
Hi, for view_points_cloud(), how do I turn the depth .bin file into the parameter "pc"?
Hi @lilingge , I have added a Jupyter script for visualization in the folder ./visualization. Hope it helps.
Hi, author, the code is very useful! But I still have a question: how do I save the cloud as a PNG?
I usually take a screenshot. Or you can use PPTK library https://github.com/heremaps/pptk.
Emmm, I am having trouble showing the cloud. I can't get the expected interface as in your screenshots, and reopening my web browser makes no difference. What should I do?
Hi, have you installed the requirements? pip install pythreejs pyntcloud pandas numpy
The question is solved! Thank you very much! It was something else, not the requirements.
Hi, I don't find "save_float" in submission.py. Do you mean "save_figure" instead?
Hello, I also cannot visualize the point cloud in the interface. How can I solve this? Thanks!
Hello, I encountered the same problem as you did. The pseudo-point cloud I generated is also sliced. Could you please tell me how you solved this problem? Thank you.