
velodyne_decoder's Introduction

velodyne_decoder


Python package and C++ library for Velodyne packet decoding. Point cloud extraction from PCAP and ROS bag files is supported out of the box.

All non-solid-state Velodyne lidar models are fully supported. The model type and RPM are detected automatically from the data and no configuration is necessary to start using the library.

Notably, the library also includes support for dual-return data and decoding of telemetry packets. Precise timing info is available for all models.

The decoded point clouds have been validated to match the official VeloView ground truth data for all models.

In Python, the decoded point clouds are provided either as a structured NumPy array:

array([(2.6912806, 1.1651788 , -0.47706223,  9., -0.10085896,   0, 16, 1),
       (2.3603256, 1.021404  , -1.428755  , 85., -0.10085782,   0,  1, 1),
       (2.6994078, 1.1675802 , -0.4092741 ,  3., -0.10085666,   0, 17, 1),
       ...,
       (2.8952641, 0.80728334,  0.48905915,  2.,  0.00054029, 923, 30, 1),
       (2.8683424, 0.79923725, -0.5555609 ,  2.,  0.00054144, 923, 15, 1),
       (2.908243 , 0.80980825,  0.56333727,  1.,  0.00054259, 923, 31, 1)],
      dtype={'names': ['x', 'y', 'z', 'intensity', 'time', 'column', 'ring', 'return_type'], 
             'formats': ['<f4', '<f4', '<f4', '<f4', '<f4', '<u2', 'u1', 'u1'], 
             'offsets': [0, 4, 8, 12, 16, 20, 22, 23], 'itemsize': 32})

or as a contiguous array of floats (default):

array([[2.691281, 1.165179, -0.477062,  9., -0.100859,   0., 16., 1.],
       [2.360326, 1.021404, -1.428755, 85., -0.100858,   0.,  1., 1.],
       [2.699408, 1.16758 , -0.409274,  3., -0.100857,   0., 17., 1.],
       ...,
       [2.895264, 0.807283,  0.489059,  2.,  0.00054 , 923., 30., 1.],
       [2.868342, 0.799237, -0.555561,  2.,  0.000541, 923., 15., 1.],
       [2.908243, 0.809808,  0.563337,  1.,  0.000543, 923., 31., 1.]], dtype=float32)

The decoded point cloud follows ROS conventions for its coordinate axes: x – forward, y – left, z – up.
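A minimal sketch of accessing the decoded points in either form (the PCAP path is a placeholder; read_pcap and as_pcl_structs are described under Usage below):

import numpy as np
import velodyne_decoder as vd

# Default contiguous float32 output: columns follow the dtype shown above,
# i.e. x, y, z, intensity, time, column, ring, return_type.
stamp, points = next(vd.read_pcap('example.pcap'))
xyz = points[:, :3]
rings = points[:, 6].astype(np.uint8)

# Structured output: the same data, accessed by field name instead.
stamp, points = next(vd.read_pcap('example.pcap', as_pcl_structs=True))
xyz = np.stack([points['x'], points['y'], points['z']], axis=-1)
rings = points['ring']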

Installation

Wheels are available from PyPI for Linux, macOS, and Windows. Python versions 3.7+ are supported.

pip install velodyne-decoder

Alternatively, you can build and install the development version from source.

sudo apt-get install cmake build-essential python3-dev
pip install git+https://github.com/valgur/velodyne_decoder.git

Usage

Decoding Velodyne data from a ROS bag

import velodyne_decoder as vd

bagfile = 'xyz.bag'
lidar_topics = ['/velodyne_packets']
cloud_arrays = []
for stamp, points, topic in vd.read_bag(bagfile, topics=lidar_topics):
    cloud_arrays.append(points)

The rosbag library must be installed. If needed, you can install it without setting up the entire ROS stack with

pip install rosbag --extra-index-url https://rospypi.github.io/simple/

To extract all VelodyneScan messages in the bag, you can leave the list of topics unspecified.

The header timestamp from the scan messages is returned by default. To use the message arrival time instead, set use_header_time=False.

To return arrays of structs instead of the default contiguous arrays, set as_pcl_structs=True.
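Combining these options, a minimal sketch (the bag path is a placeholder; the keyword names follow the descriptions above):

import velodyne_decoder as vd

bagfile = 'xyz.bag'
cloud_arrays = []
# Leaving topics unspecified extracts all VelodyneScan messages in the bag.
# use_header_time=False switches to the message arrival time, and
# as_pcl_structs=True returns structured arrays instead of contiguous float32 ones.
for stamp, points, topic in vd.read_bag(bagfile, use_header_time=False, as_pcl_structs=True):
    cloud_arrays.append(points)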

Decoding Velodyne data from a PCAP file

import velodyne_decoder as vd

pcap_file = 'vlp16.pcap'
cloud_arrays = []
for stamp, points in vd.read_pcap(pcap_file):
    cloud_arrays.append(points)

To return arrays of structs instead of the default contiguous arrays, set as_pcl_structs=True.

Configuration

You can pass a velodyne_decoder.Config object to all decoder functions. The following options are available (a usage sketch follows the list):

  • min_range and max_range – only return points between these range values.
  • min_angle and max_angle – only return points between these azimuth angles.
  • timestamp_first_packet – whether the scan timestamps are set based on the first or last packet in the scan.
  • cut_angle – when working with a raw packet stream, if unset (the default), the stream is split into a "scan" every time at least 360 degrees have been covered. If set, the splitting always occurs at the specified azimuth angle instead. Note that the scan might cover less than 360 degrees in this case.
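A sketch of constructing a Config with the options above and passing it to a decoder function (the values and the PCAP path are illustrative only):

import velodyne_decoder as vd

config = vd.Config(
    min_range=1.0,     # drop points closer than 1 m
    max_range=80.0,    # drop points farther than 80 m
    min_angle=0.0,     # keep only azimuths between 0 and 180 degrees
    max_angle=180.0,
    cut_angle=0.0,     # split scans at a fixed azimuth instead of every ~360 degrees covered
)
cloud_arrays = [points for stamp, points in vd.read_pcap('example.pcap', config)]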

Only required for data from HDL-64E sensors (a sketch follows the list):

  • model – the sensor model ID. See velodyne_decoder.Model.__entries for the possible values.
  • calibration_file – beam calibration parameters in a YAML format. You can either extract the calibration info from a PCAP file with packets using extract-hdl64e-calibration <pcap_file> or convert a db.xml provided with the sensor using gen_calibration.py from the ROS driver.
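A hedged sketch for HDL-64E data; the model ID and calibration path below are placeholders, since the actual values depend on your sensor (see velodyne_decoder.Model.__entries) and on the calibration file you extracted or converted:

import velodyne_decoder as vd

config = vd.Config(
    model=vd.Model.HDL64E_S3,             # placeholder entry; check Model.__entries
    calibration_file='hdl64e-calib.yml',  # placeholder path to the extracted/converted YAML
)
cloud_arrays = [points for stamp, points in vd.read_pcap('hdl64e.pcap', config)]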

Authors

The core functionality has been adapted from the ROS velodyne driver.

License

BSD 3-Clause License

velodyne_decoder's People

Contributors

flopie2009, valgur


velodyne_decoder's Issues

decoded data arrangement

I used the snippet you provided for Decoding Velodyne data from a PCAP file, however the data comes out as Y, negative X, Z instead of X, Y, Z. I compared it to a CSV exported from VeloView and it indeed came out as Points_m_XYZ:1, -Points_m_XYZ:0, Points_m_XYZ:2.

I know I can just multiply by -1 and rearrange, but am I doing something wrong?

Sensor type: VLP-16
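For reference, a small illustrative sketch of converting the decoder's ROS-convention axes into VeloView's axes, based on the mapping reported above (points is an N-by-8 array as returned by the decoder):

import numpy as np

def to_veloview_axes(points):
    # The decoder output follows ROS conventions (x forward, y left, z up), which per
    # the comparison above corresponds to VeloView's (y, -x, z). Undo that mapping:
    out = points.copy()
    out[:, 0] = -points[:, 1]  # VeloView x = -decoder y
    out[:, 1] = points[:, 0]   # VeloView y =  decoder x
    return out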

Unexpected Attribute Error: 'bytes' object has no attribute 'data'

Hi Martin,
Thanks for creating this decoder and Happy New Year!
I'm using velodyne_decoder to decode a 5-minute-long pcap file from an Alpha Prime, which can be opened and played in VeloView from the first frame to the last.
However, when using velodyne_decoder to open the same file, it exits at frame 234/3000 and throws this error when calling next() on the read_pcap generator in the while loop:

File ~\anaconda3\envs\LiDAR\lib\site-packages\velodyne_decoder\__init__.py:42 in read_pcap
data = dpkt.ethernet.Ethernet(buf).data.data.data
AttributeError: 'bytes' object has no attribute 'data'

I suspect that VeloView has some exception-handling procedure to locate the next correct frame, because it feels like there is some disturbance in the pcap file (perhaps introduced during Ethernet transmission) that interrupts the decoding. Do you have any suggestions for this?

Thanks,
Yi

RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)

Environment:

  • ROS version: melodic
  • Python: 2.7
  • Velodyne type: VLP16

Issue
The default example in the README causes the following error:

    cloud_arrays.append(decoder.decode_message(scan_msg))
RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details)

After installing the package in debug mode, I got:

    cloud_arrays.append(decoder.decode_message(scan_msg))
RuntimeError: Unable to cast Python instance of type <type 'str'> to C++ type 'std::array<unsigned char, 1206ul>'

Workaround
To make the demo work (without changing the package), I had to convert each packet's data from a str to a bytearray:

for packet in scan_msg.packets:
    # On Python 2, packet.data is a str; pybind11 cannot cast it to the expected
    # std::array<unsigned char, 1206>, so convert it to a bytearray first.
    packet.data = bytearray(packet.data)

By the way, thanks for sharing this package! Very useful :)

Feature Request: Reading Raw Azimuth Angle, Laser ID, Distance, and Timestamp

Hi Martin,
Recently we have seen increasing research with brilliant ideas that use the raw spherical-coordinate data directly, without converting it to Cartesian coordinates. I'm wondering if it would be possible to pass a parameter to the vd.read_pcap function, e.g. vd.read_pcap(pcap_file_name, config, spherical_coordinate_system=True), to directly return the spherical coordinates: azimuth angle, laser ID, and distance? This would save us a great deal of effort in reproducing the methodologies proposed in those papers.

Thanks in advance,
Crear

Broken VLS-128 config

While testing the VLS-128 integration, reading the rosbag failed with:

File "*****/python3.6/site-packages/velodyne_decoder/__init__.py", line 94, in read_bag decoder = ScanDecoder(config) RuntimeError: Unable to open calibration file: ******/python3.6/site-packages/velodyne_decoder/calibrations/VLS-128.yml

This is because the calibration file within the repo is broken and some of the laser IDs have missing data. Is it possible to fix this issue?

v3.0.0 release

With the significantly simpler configuration, dual-return mode support, reworked scan batching, and full precise-timing support for all model versions, I am taking this opportunity to create a backwards-incompatible v3.0.0 release soon.

These changes reside in the develop branch in the meantime.

Some ideas and loose ends I would still like to implement before the release:

  • autodetection of HDL-64E from input file in Python
  • gps_time → use_device_time
  • Move __init__.py contents into separate Python files.
  • StreamDecoder → ScanBatcher
  • Return both arrival and device timestamps for scans, drop use_device_time param
  • Always use packet stamp for within-scan timestamps
  • Add getters for model ID and return mode to PacketDecoder
  • ros-drivers/velodyne#518
  • Make return_type a separate field
  • Add column field
  • Fix __version__ info
  • Unit tests for PCAP decoding by model
  • PCAP → rosbag conversion
  • Support for rosbags library input (#6)
  • Support for re-batching of ROS VelodyneScan packets.
  • HDL-64E calib extraction from rosbag
  • HDL-64E autodetection for rosbag
  • Decode by ROS topic
  • TelemetryPacket output support for PCAP
  • VLS-128 timings validation
  • Update README
  • Update ChangeLog
  • Sphinx documentation

Time

How do I get the proper timestamp using this package?
By default, the time values come out as negative float32 numbers. Can they be converted to long?
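A hedged sketch of one possible approach, assuming that stamp is a plain float of seconds and that the per-point time values are float32 offsets in seconds relative to the scan stamp returned by the decoder (which the time column in the README's example output suggests):

import numpy as np
import velodyne_decoder as vd

for stamp, points in vd.read_pcap('example.pcap', as_pcl_structs=True):  # placeholder path
    # Add the scan stamp in float64 to get absolute timestamps; cast to int64
    # nanoseconds if an integer ("long") representation is needed.
    abs_times = np.asarray(stamp, dtype=np.float64) + points['time'].astype(np.float64)
    abs_times_ns = (abs_times * 1e9).astype(np.int64)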

error on import in develop branch

After installing the package from the develop branch, I am getting the following error on import:


---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>
----> 1 import velodyne_decoder as vd

/opt/conda/lib/python3.8/site-packages/velodyne_decoder/__init__.py in <module>
      2 from collections import namedtuple
      3
----> 4 from velodyne_decoder.velodyne_decoder_pylib import *
      5 from velodyne_decoder.velodyne_decoder_pylib import __version__ as _pylib_version
      6

ModuleNotFoundError: No module named 'velodyne_decoder.velodyne_decoder_pylib'

How about a direct connection to the LiDAR and reading the packet stream from a static IP address without using ROS?

We have successfully connected our Jetson Orin to the LiDAR device and we can use tcpdump to generate .pcap files without involving ROS (the .pcap files are just the captured network traffic packets). However, to test real-time processing, we would like to analyze the LiDAR stream directly instead of saving and then analyzing the pcap files. May I ask how to use the IP address and perhaps the port number directly as the LiDAR input?
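One possible approach, as a sketch only: receive the raw UDP packets from the sensor's data port directly and feed them to the decoder. This assumes a StreamDecoder-style class (the name appears in the v3.0.0 checklist above) with a decode(stamp, packet_data) method that returns a completed scan once enough packets have been batched; the exact class and method names may differ between versions.

import socket
import time

import velodyne_decoder as vd

DATA_PORT = 2368    # default Velodyne data port
PACKET_SIZE = 1206  # size of a Velodyne data packet in bytes

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('', DATA_PORT))  # listen for the packets the sensor broadcasts

config = vd.Config()
decoder = vd.StreamDecoder(config)  # assumed API, see note above

while True:
    data, addr = sock.recvfrom(2048)
    if len(data) != PACKET_SIZE:
        continue  # ignore telemetry/position packets and anything malformed
    result = decoder.decode(time.time(), data)  # assumed signature
    if result is not None:
        stamp, points = result
        # process the completed scan here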

Request for Apple Silicon Support

Hi valgur,
Thank you for providing such an amazing tool. I moved my workflow from an Intel MacBook Pro to an Apple Silicon environment, and the installation no longer works. I tried both pip install velodyne-decoder (fails to install) and pip install git+https://github.com/valgur/velodyne_decoder.git (reports a successful install, but the vd.Config(model="Alpha Prime", rpm=600) object cannot be created).

Error Message:
    config = vd.Config(model="Alpha Prime", rpm=600)
TypeError: __init__(): incompatible constructor arguments. The following argument types are supported:
    1. velodyne_decoder.velodyne_decoder_pylib.Config(*, model: Optional[velodyne_decoder::ModelId] = None, calibration: Optional[velodyne_decoder::Calibration] = None, single_return_mode_info: bool = False, min_range: float = 0.1, max_range: float = 200, min_angle: float = 0, max_angle: float = 360, cut_angle: Optional[float] = None, timestamp_first_packet: bool = False, use_device_time: bool = False)

Invoked with: kwargs: model='Alpha Prime', rpm=600

Thanks again,
Crear

Inconsistent frame sizes

When processing a VLP-16 .PCAP file, the output frames are inconsistent in size. One frame will contain ~32,000 points and the next ~8,000 points. There is a consistent pattern of large, then small frames.

Using VeloView on the same data file and exporting CSV files produces ~19,000 points in every frame.

Decoding Velodyne data from a PCAP file yields the wrong number of PCD frames for VLP-32C

I'm using our own PCAP file on Linux and macOS, 20 seconds in duration, created by a Velodyne VLP-32C at 600 RPM (10 Hz).
In VeloView I get the expected 200 PCD frames with this configuration, but velodyne_decoder produces 400 frames.

Testing velodyne_decoder on a Velodyne HDL-32E at 600 RPM (10 Hz) with the following dataset, I get 843 PCD frames, while VeloView gives 2013:
Kitware dataset: 2014-11-10-10-36-54_Velodyne-VLP_10Hz-County Fair.pcap, from https://data.kitware.com/api/v1/item/5b7fff608d777f06857cb53a/download

Is it possible that the calculation of frames per rotation and second is not right?
