Comments (27)

godspeed1989 commented on July 1, 2024

@swdev1202
I am using pyqtgraph to visualize the point cloud.
You can install it with pip.

import numpy as np
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph.opengl as gl

class plot3d(object):
    def __init__(self, title='null'):
        #
        self.glview = gl.GLViewWidget()
        coord = gl.GLAxisItem()
        coord.setSize(1,1,1)
        #self.glview.addItem(coord)
        self.glview.setMinimumSize(QtCore.QSize(600, 500))
        self.glview.pan(1, 0, 0)
        self.glview.setCameraPosition(azimuth=180)
        self.glview.setCameraPosition(elevation=0)
        self.glview.setCameraPosition(distance=5)
        self.items = []
        #
        self.view = QtGui.QWidget()
        self.view.window().setWindowTitle(title)
        hlayout = QtGui.QHBoxLayout()
        snap_btn = QtGui.QPushButton('&Snap')
        def take_snap():
            qimg = self.glview.readQImage()
            qimg.save('1.jpg')
        snap_btn.clicked.connect(take_snap)
        hlayout.addWidget(snap_btn)
        hlayout.addStretch()
        layout = QtGui.QVBoxLayout()
        #
        layout.addLayout(hlayout)
        layout.addWidget(self.glview)
        self.view.setLayout(layout)
    def add_item(self, item):
        self.glview.addItem(item)
        self.items.append(item)
    def clear(self):
        for it in self.items:
            self.glview.removeItem(it)
        self.items.clear()
    def add_points(self, points, colors):
        points_item = gl.GLScatterPlotItem(pos=points, size=1.5, color=colors)
        self.add_item(points_item)
    def add_line(self, p1, p2, color, width=3):
        lines = np.array([[p1[0], p1[1], p1[2]],
                          [p2[0], p2[1], p2[2]]])
        lines_item = gl.GLLinePlotItem(pos=lines, mode='lines',
                                       color=color, width=width, antialias=True)
        self.add_item(lines_item)
    def plot_bbox_mesh(self, gt_boxes3d, color=(0,1,0,1)):
        b = gt_boxes3d
        for k in range(0,4):
            i,j=k,(k+1)%4
            self.add_line([b[i,0],b[i,1],b[i,2]], [b[j,0],b[j,1],b[j,2]], color)
            i,j=k+4,(k+1)%4 + 4
            self.add_line([b[i,0],b[i,1],b[i,2]], [b[j,0],b[j,1],b[j,2]], color)
            i,j=k,k+4
            self.add_line([b[i,0],b[i,1],b[i,2]], [b[j,0],b[j,1],b[j,2]], color)

def value_to_rgb(pc_inte):
    minimum, maximum = np.min(pc_inte), np.max(pc_inte)
    ratio = (pc_inte - minimum + 0.1) / (maximum - minimum + 0.1)
    r = (np.maximum((1 - ratio), 0))
    b = (np.maximum((ratio - 1), 0))
    g = 1 - b - r
    return np.stack([r, g, b]).transpose()

def view_points_cloud(pc=None):
    app = QtGui.QApplication([])
    glview = plot3d()
    if pc is None:
        pc = np.random.rand(1024, 3)
    pc_color = np.ones([pc.shape[0], 4])
    glview.add_points(pc, pc_color)
    glview.view.show()
    return app.exec()
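
For example, to display a KITTI-format .bin file (float32 records of x, y, z, intensity; the path below is just a placeholder), a minimal usage sketch would be:

import numpy as np

# KITTI velodyne-style .bin files store float32 records of (x, y, z, intensity).
scan = np.fromfile('training_predict_velodyne/000003.bin', dtype=np.float32).reshape(-1, 4)
view_points_cloud(scan[:, :3])  # pass only the (N, 3) xyz columns

To color by intensity instead of plain white, you could build an RGBA array, e.g. np.hstack([value_to_rgb(scan[:, 3]), np.ones((scan.shape[0], 1))]), and pass it to plot3d.add_points.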

mileyan commented on July 1, 2024

Can you please check the PSMNet environment?
Python 2.7
PyTorch 0.4.0+
torchvision 0.2.0 (higher versions may cause issues)

godspeed1989 commented on July 1, 2024

Hi,
I am using Python 3 and PyTorch 1.1.0.
Should I retrain PSMNet?
That's weird; is there any simpler solution?
Thank you

mileyan commented on July 1, 2024

The simpler solution is to install Python 2.7, PyTorch 0.4.0+, and torchvision 0.2. You can easily use Anaconda to create a new environment.

godspeed1989 commented on July 1, 2024

Thanks.

godspeed1989 commented on July 1, 2024

Hi, is this how it is supposed to look now?
[screenshots of the generated pseudo point cloud]
But I notice the generated pseudo point cloud does not look similar to the ones in your paper.

mileyan commented on July 1, 2024

It should not look like this. How did you generate this point cloud? Which code did you use?

godspeed1989 commented on July 1, 2024

I followed your README to generate the point cloud.
First, I generated the disparity map:

$ python ./psmnet/submission.py \
--loadmodel ./finetune_300.tar \
--datapath /mine/KITTI_DAT/training/ \
--save_path ./training_predict_disparity/

Then I converted the disparity to a point cloud:

python ./preprocessing/generate_lidar.py  \
    --calib_path /mine/KITTI_DAT/training/calib \
    --save_path ./training_predict_velodyne \
    --disparity_dir ./training_predict_disparity/ \
    --max_high 1

I noticed the --disp_path command-line option is incorrect, so I used --disparity_dir instead.
This is the 000003 file in the KITTI training set.
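
For context, the conversion that generate_lidar.py performs is conceptually the standard stereo back-projection (disparity -> depth -> 3D points). A rough, illustrative sketch, with the calibration values passed in as parameters (not the repo's exact code):

import numpy as np

def disparity_to_points(disparity, f_u, f_v, c_u, c_v, baseline):
    # Standard stereo relation: depth = horizontal focal length * baseline / disparity.
    depth = f_u * baseline / np.clip(disparity, 1e-6, None)
    rows, cols = disparity.shape
    v, u = np.meshgrid(np.arange(rows), np.arange(cols), indexing='ij')
    # Back-project every pixel into the rectified camera frame.
    x = (u - c_u) * depth / f_u
    y = (v - c_v) * depth / f_v
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    valid = disparity.reshape(-1) > 0
    # generate_lidar.py additionally maps these points into the velodyne frame
    # and drops points higher than --max_high.
    return points[valid]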

zklgame commented on July 1, 2024

I also found that the script generate_lidar.py does not work properly!
I used frustum-pointnets: I first fed the ground-truth velodyne to it for testing, and it worked well. Then I used generate_lidar.py to generate predicted_velodyne from the ground-truth disparity and fed that in for testing; the results were very bad. So I think there may be some problem in generate_lidar.py.

mileyan commented on July 1, 2024

Hi godspeed1989,

I used the code in this repo to generate the point cloud. It looks correct. I'm not sure why your result is not right. Can you send me the disparity file and .bin file of 000003? My email is [email protected]. Thanks.

mileyan commented on July 1, 2024

Hi zklgame,

Did you use the FPointNet model trained on velodyne or on pseudo-lidar? Could you try this checkpoint https://drive.google.com/file/d/1qhCxw6uHqQ4SAkxIuBi-QCKqLmTGiNhP/view?usp=sharing, which is trained on pseudo-lidar?

godspeed1989 commented on July 1, 2024

Hi, mileyan.
I added the --save_float option when running python ./psmnet/submission.py to generate disparity maps stored as .npy.
In the meantime, I modified several lines in ./preprocessing/generate_lidar.py to read .npy instead of .png.
The result looks better.
training/000000, training/000002, training/000003: [screenshots of the generated point clouds]

But I am not sure whether it is correct now.
I notice the estimated depth is not so good at a distance.
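
For anyone comparing the two routes, a possible loader sketch for both disparity formats, assuming the PNG route follows the usual KITTI uint16 convention (true disparity = pixel value / 256); check this against how submission.py actually writes its PNGs:

import numpy as np
from skimage import io

def load_disparity(path):
    if path.endswith('.npy'):
        # Float disparity written with --save_float: use as-is.
        return np.load(path).astype(np.float32)
    # 16-bit PNG disparity (KITTI convention): true disparity = pixel value / 256.
    return io.imread(path).astype(np.float32) / 256.0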

mileyan commented on July 1, 2024

Hi godspeed1989,

It looks great! I will make save_float the default setting. Thanks for your feedback.

swdev1202 commented on July 1, 2024

@godspeed1989
May I ask you what visualization tool you use for the point cloud (.bin)?

Thank you!

Tantoyy commented on July 1, 2024

@godspeed1989
Hi, how do I use this code, and where do I need to put it?

DavidDiosdado commented on July 1, 2024

@swdev1202
I am using pyqtgraph to visualize the point cloud. You can install it with pip.
[pyqtgraph visualization code quoted from godspeed1989's comment above]

How can I use this code to visualize my bin files?

godspeed1989 commented on July 1, 2024

@Tantoyy @DavidDiosdado you can call view_points_cloud() to display your (N,3) points array

lilingge commented on July 1, 2024

@Tantoyy @DavidDiosdado you can call view_points_cloud() to display your (N,3) points array

Hi, in view_points_cloud(), how do I load the depth .bin file into the parameter "pc"?

mileyan commented on July 1, 2024

Hi @lilingge, I have added a Jupyter script for visualization in the ./visualization folder. Hope it helps.

lilingge commented on July 1, 2024

Hi @lilingge, I have added a Jupyter script for visualization in the ./visualization folder. Hope it helps.

Hi author, the code is very useful! But I still have a question: how do I save the point cloud as a PNG?

mileyan commented on July 1, 2024

I usually take a screenshot. Or you can use the PPTK library: https://github.com/heremaps/pptk.
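
For instance, assuming the viewer/capture API from the pptk documentation (the file names are placeholders):

import numpy as np
import pptk

points = np.fromfile('training_predict_velodyne/000003.bin', dtype=np.float32).reshape(-1, 4)[:, :3]
v = pptk.viewer(points)   # opens an interactive 3D viewer
v.set(point_size=0.01)    # adjust to taste
v.capture('cloud.png')    # writes the current view to a PNG file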

lilingge commented on July 1, 2024

I usually take a screenshot. Or you can use the PPTK library: https://github.com/heremaps/pptk.

Hmm, I'm having trouble showing the cloud. I can't get the expected interface as you show, and reopening my web browser makes no difference. What should I do?

mileyan commented on July 1, 2024

Hi, have you installed the requirements? pip install pythreejs pyntcloud pandas numpy

lilingge commented on July 1, 2024

The question is solved! Thank you very much! It's something else, not the requirements.

rsj007 commented on July 1, 2024

Hi, mileyan. I added the --save_float option when running python ./psmnet/submission.py to generate disparity maps stored as .npy. In the meantime, I modified several lines in ./preprocessing/generate_lidar.py to read .npy instead of .png. The result looks better. training/000000, training/000002, training/000003: [screenshots]

But I am not sure whether it is correct now. I notice the estimated depth is not so good at a distance.

Hi, I can't find "save_float" in submission.py. Do you mean "save_figure" instead?

rsj007 commented on July 1, 2024

The question is solved! Thank you very much! It's something else, not the requirements.

Hello, I cannot get the point cloud to show in the interface. How can I solve this? Thanks!

a-free-a commented on July 1, 2024

Hi, is this how it is supposed to look now? [screenshots] But I notice the generated pseudo point cloud does not look similar to the ones in your paper.

Hello, I encountered the same problem as you did. The pseudo-point cloud I generated is also sliced. Could you please tell me how you solved this problem? Thank you.
