scaleapi / pandaset-devkit Goto Github PK
License: Other
Dear Developers,
Thank you for providing your devkit to access PandaSet data easily.
I was wondering if it would be possible for you to provide a pip package for an easier install, so that the devkit can be added as a requirement in other projects.
Currently using
-e git://github.com/scaleapi/pandaset-devkit.git#egg=pandaset-devkit
in requirements.txt fails due to the non-standard file structure of this repository.
Such a package would make it much easier to deploy scripts that use this devkit.
Thank you in advance!
Daniel
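As a possible workaround until a pip package exists, pip's VCS URL syntax supports a `subdirectory` fragment for packages whose setup.py is not at the repository root. Assuming the installable package sits in a subdirectory of the repository (the `python/` path and the `pandaset` egg name here are assumptions, not confirmed), a requirements.txt entry along these lines may work:

```
git+https://github.com/scaleapi/pandaset-devkit.git#egg=pandaset&subdirectory=python
```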
Use lidar.pose to convert each point cloud into ego coordinate system instead of world coordinate system.
Is the origin at the middle of the front bumper, at the center of the vehicle, or somewhere else?
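A minimal sketch of how the world-to-ego conversion might look, assuming poses are dicts with a `heading` quaternion (keys `w`, `x`, `y`, `z`) and a `position` dict (keys `x`, `y`, `z`) as stored by the devkit; the function names here are hypothetical, not the devkit's API:

```python
import numpy as np

def pose_to_matrix(heading, position):
    """Build a 4x4 sensor-to-world transform from a pose dict.

    heading: quaternion dict with keys w, x, y, z
    position: dict with keys x, y, z
    """
    w, x, y, z = heading["w"], heading["x"], heading["y"], heading["z"]
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = [position["x"], position["y"], position["z"]]
    return T

def world_to_sensor(points_xyz, pose):
    """Transform Nx3 world-frame points into the sensor/ego frame."""
    T_inv = np.linalg.inv(pose_to_matrix(pose["heading"], pose["position"]))
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (homo @ T_inv.T)[:, :3]
```

With an identity heading, this simply subtracts the pose position from the world-frame points.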
Minimum requirement: LiDAR filterable by sensor ID
Optimum requirement: chain filters with lambda function
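Both requirements can be sketched with pandas, assuming LiDAR points come as a DataFrame with a sensor-id column `d` and an intensity column `i` (the column names are an assumption drawn from the devkit's data layout, and the data below is synthetic):

```python
import pandas as pd

# Hypothetical point cloud frame: x, y, z, intensity 'i', sensor id 'd'
df = pd.DataFrame({
    "x": [1.0, 2.0, 3.0, 4.0],
    "y": [0.0, 1.0, 0.5, 2.0],
    "z": [0.2, 0.1, 0.3, 0.4],
    "i": [5.0, 20.0, 40.0, 15.0],
    "d": [0, 0, 1, 1],   # 0 = mechanical 360° LiDAR, 1 = front-facing LiDAR
})

# Minimum requirement: filter by sensor ID
pandar64_only = df[df["d"] == 0]

# Optimum requirement: chain filters with lambda functions
bright_pandar64 = df.loc[lambda f: f["d"] == 0].loc[lambda f: f["i"] > 10]
```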
Hello,
I would like to know the total number of cars/buses/pedestrians/etc. that are labeled in the dataset.
Are these numbers available?
I am a postgraduate student at a university, and I followed the download instructions on the website and submitted my educational e-mail. However, I have not received any download link, even after trying several times. Could you please give me a download link? Thanks a lot!
I am a PhD student at a university, and I followed the download instructions on the website to submit my educational e-mail.
But I did not receive any download link, even after trying several times.
Hello,
can you provide a mechanism to restore the original sensor-view image (cylindrical projection of the point cloud)? This is necessary for many semantic segmentation methods. It is possible to compute the azimuth and elevation angles from the point list to reconstruct such an image; however, there are a few issues:
FYI @nisseknudsen
For my algorithms, I preferably need the point cloud data in its sensor's frame. This helps detect which objects are occluded and which are not.
Would it be possible to also provide the raw data of the Pandar GT as it already has been done with the rotating lidar? @xpchuan-95 @nisseknudsen
In particular, explain which columns in the data frames have which meaning.
For the overlap area between the mechanical 360° LiDAR and the front-facing LiDAR, moving objects received two cuboids to compensate for synchronization differences between the two sensors. If a cuboid is in this overlapping area and moving, this value is either 0 (mechanical 360° LiDAR) or 1 (front-facing LiDAR). All other cuboids have the value -1.
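With these semantics, avoiding double-counting of objects in the overlap area comes down to a simple filter. A sketch with a synthetic cuboids frame (the column names follow the description above; the data is made up):

```python
import pandas as pd

# Hypothetical cuboids frame with the sensor_id semantics described above:
# -1 = single cuboid (no overlap duplication), 0 = mechanical 360° LiDAR,
#  1 = front-facing LiDAR
cuboids = pd.DataFrame({
    "label": ["Car", "Car", "Pedestrian", "Bus"],
    "sensor_id": [-1, 0, 1, -1],
})

# Keep only cuboids annotated for the mechanical LiDAR plus the
# non-duplicated ones, so each physical object appears once.
mechanical_view = cuboids[cuboids["sensor_id"].isin([-1, 0])]
```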
This is from the git documentation on sensor_id. I extracted the labels with sensor_id of 1, and found that many targets were missing in the visualization. How can I get all the labels for the PandarGT?
Exemplary application of a point cloud registration algorithm (e.g. ICP), visualizing differences in pose estimations.
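As a building block for such a tutorial, the core rigid-alignment step used inside ICP (given point correspondences) can be sketched with the SVD-based Kabsch algorithm; this is a sketch, not the devkit's API:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t such that R @ src_i + t ≈ dst_i.

    src, dst: Nx3 arrays of corresponding points (Kabsch algorithm).
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

A full ICP loop would alternate nearest-neighbour matching with this alignment step until convergence.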
I have submitted the download form many times, but I have never received a reply with download links.
I have some problems projecting the Pandar64 3D point cloud into a spherical image. Here is a little snippet:
import math
import numpy as np
import matplotlib.pyplot as plt
import pandaset
from pandaset import geometry
# load dataset
dataset = pandaset.DataSet("/path/to/dataset")
seq001 = dataset["001"]
seq001.load()
np.set_printoptions(precision=4, suppress=True)
# generate projected points
seq_idx = 0
lidar = seq001.lidar
# useless pose ?
pose = lidar.poses[seq_idx]
pose_homo_transformation = geometry._heading_position_to_mat(pose['heading'], pose['position'])
print(pose_homo_transformation)
data = lidar.data[seq_idx]
# this retrieves both PandarGT and Pandar64 points
both_lidar_clouds = lidar.data[seq_idx].to_numpy()
# get only points belonging to pandar 64 mechanical lidar
idx_pandar64 = np.where(both_lidar_clouds[:, 5] == 0)[0]
points3d_lidar_xyzi = both_lidar_clouds[idx_pandar64][:, :4]
print("number of points of mechanical lidar Pandar64:", len(idx_pandar64))
print("number of points of lidar PandarGT:", len(data)-len(idx_pandar64))
num_rows = 64 # the number of laser beams
num_columns = int(360 / 0.2) # horizontal field of view / horizontal angular resolution
# vertical fov of pandar64, 40 deg
fov_up = math.radians(15)
fov_down = math.radians(-25)
# init empty images
intensity_img = np.full((num_rows, num_columns), fill_value=-1, dtype=np.float32)
range_img = np.full((num_rows, num_columns), fill_value=-1, dtype=np.float32)
# get abs full vertical fov
fov = np.abs(fov_down) + np.abs(fov_up)
# transform points
# R = pose_homo_transformation[0:3, 0:3]
# t = pose_homo_transformation[0:3, 3]
# # print(R)
# # print(t)
# points3d_lidar_xyzi[:, :3] = points3d_lidar_xyzi[:, :3] @ np.transpose(R)
# get depth of all points
depth = np.linalg.norm(points3d_lidar_xyzi[:, :3], 2, axis=1)
# get scan components
scan_x = points3d_lidar_xyzi[:, 0]
scan_y = points3d_lidar_xyzi[:, 1]
scan_z = points3d_lidar_xyzi[:, 2]
intensity = points3d_lidar_xyzi[:, 3]
# get angles of all points
yaw = -np.arctan2(scan_y, scan_x)
pitch = np.arcsin(scan_z / depth)
# get projections in image coords
proj_x = 0.5 * (yaw / np.pi + 1.0) # in [0.0, 1.0]
proj_y = 1.0 - (pitch + abs(fov_down)) / fov # in [0.0, 1.0]
# scale to image size using angular resolution
proj_x *= num_columns # in [0.0, width]
proj_y *= num_rows # in [0.0, height]
# round and clamp for use as index
proj_x = np.floor(proj_x)
out_x_projections = proj_x[np.logical_or(proj_x > num_columns, proj_x < 0)] # just to check how many points out of image
proj_x = np.minimum(num_columns - 1, proj_x)
proj_x = np.maximum(0, proj_x).astype(np.int32) # in [0,W-1]
proj_y = np.floor(proj_y)
out_y_projections = proj_y[np.logical_or(proj_y > num_rows, proj_y < 0)] # just to check how many points out of image
proj_y = np.minimum(num_rows - 1, proj_y)
proj_y = np.maximum(0, proj_y).astype(np.int32) # in [0,H-1]
print("projections out of image: ", len(out_x_projections), len(out_y_projections))
print("percentage of points out of image bound: ", len(out_x_projections)/len(idx_pandar64)*100, len(out_y_projections)/len(idx_pandar64)*100)
# order in decreasing depth
indices = np.arange(depth.shape[0])
order = np.argsort(depth)[::-1]
depth = depth[order]
intensity = intensity[order]
indices = indices[order]
proj_y = proj_y[order]
proj_x = proj_x[order]
# assign to images
range_img[proj_y, proj_x] = depth
intensity_img[proj_y, proj_x] = intensity
plt.figure(figsize=(20, 4), dpi=300)
plt.imshow(intensity_img, cmap='gray', vmin=0.5, vmax=50)
plt.show()
plt.figure(figsize=(20, 4), dpi=300)
plt.imshow(range_img, vmin=0.5, vmax=80)
plt.show()
This projection produces an image cut in half, in which the lower part is completely empty.
I have also tried to project the raw data into a spherical depth/intensity image (as in the raw_depth_projection tutorial) and I get completely different results in terms of quality and resolution.
I don't understand what kind of problem I am having: whether it is related to the cloud's reference frame, to some Pandar64 internal parameters I am getting wrong, or to something else. I would really appreciate some help. Thank you in advance.
@nisseknudsen
Hi Nisse,
While trying to draw boxes in a scene, I found that the meaning of position, dimensions, and yaw is not very clear in the README. How can they be converted to the corners of a box, and what is the relationship between the car's heading and yaw?
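A sketch of one common convention for this conversion; the assumptions (position is the cuboid center, dimensions are full extents, yaw rotates about the world z axis) are not confirmed by the README and may differ from the devkit's:

```python
import numpy as np

def cuboid_corners(position, dimensions, yaw):
    """Return the 8 corners of a cuboid as an 8x3 array.

    Assumptions (hypothetical, check against the devkit): position is the
    cuboid center (x, y, z); dimensions are full extents along the cuboid's
    own axes; yaw rotates the cuboid about the world z axis.
    """
    dx, dy, dz = dimensions
    # corners in the cuboid's local frame, centered at the origin
    local = np.array([
        [sx * dx / 2, sy * dy / 2, sz * dz / 2]
        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)
    ])
    c, s = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return local @ Rz.T + np.asarray(position)
```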
Create a tutorial which takes a cuboid and a point cloud and returns only the part of the point cloud that is inside the cuboid.
The goal is to obtain, for example, "typical car shapes".
Can you give one tutorial?
What is the relation between camera_used and the front camera, front left camera, ..., back camera?
Hi!
I am a PhD student working as a researcher at the Technical University of Cluj-Napoca. Although I submit my university email, no email with a link to download the dataset is sent to me.
I can successfully download the KITTI, Waymo, Lyft and even Audi datasets using this email, but not yours. Why is this? Do only company emails work? Or has the dataset become unavailable?
Hello,
According to the information on the site, the dataset contains 16,000+ LiDAR sweeps in total. But even after combining all three parts, we could only find around 8,500 frames. If any additional data is available apart from those three parts, could you share it? Also, some sequences, such as 49, 60 and 61, are missing. Is that expected? Please let us know.
Hi there,
thanks for open-sourcing this great dataset; as far as I know, there is currently no other public dataset that includes MEMS LiDAR data. I am wondering if there is any plan to hold a competition on Kaggle or at an academic conference workshop, or to maintain a benchmark. I think this would be great for promoting the development of dense point cloud detection methods, and also for Scale AI & Hesai.
Take the camera sensor(s) and convert the single images to a video file.
There is the file docs/static_extrinsic_calibration.yaml, which holds the mounting position (extrinsic calibration) of the sensors.
If I look, for example, at the main_pandar64 sensor, its mounting position is the identity transformation, which means that the mounting-position coordinate system equals the main_pandar64 sensor coordinate system, i.e. has its origin at this sensor. What, however, is the transformation from the mounting-position coordinate system to the ego coordinate system, which has its origin at the middle of the rear axle?
Hi
Thanks for the comprehensive dataset. I wanted to make sure that I understand the data provided.
In the LiDAR data there are 'i' and 'd' values. I was thinking 'i' is the identifier of the object, but I am not sure whether 'd' is a measure of distance, since it has values from 0 to 1.
Also, is there a way to limit the LiDAR data to only what the front and back cameras see?
Thanks
Amine
The image data seems to be of very good quality. Could you provide a purchase channel for the camera? Thank you very much!
Running simple code such as
from pandaset import DataSet
seq_num = 0
dataset = DataSet('...')
for sequence in dataset.sequences():
    print("Sequence {}, {} of {}".format(sequence, seq_num, len(dataset.sequences())))
    seq = dataset[sequence]
    seq.load()
    del seq
    seq_num += 1
quickly leads to a SIGKILL due to lack of memory. Why? Because loaded sequences are also stored in the DataSet object, such that after deleting seq you can still access the loaded data from dataset[sequence] without calling .load() again.
So is there any practical way of iterating through the data? The DataSet class does not support item deletion, and the Sequence class does not support copying. The only way I have found is to delete the DataSet object on every iteration, which slows things down unnecessarily.
It seems an .unload() method would be simple enough. Thank you.
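The memory behaviour above can be reproduced with a minimal cache sketch (the class names mimic the devkit, but the internals here are hypothetical): deleting the local reference does not free the data as long as the container keeps its own reference, while an unload()-style method that drops the container's reference would:

```python
class Sequence:
    """Hypothetical stand-in for a devkit sequence."""
    def __init__(self):
        self.data = None

    def load(self):
        self.data = list(range(100_000))  # placeholder for point cloud data
        return self

class DataSet:
    """Hypothetical stand-in: keeps loaded sequences in an internal dict."""
    def __init__(self, names):
        self._sequences = {name: Sequence() for name in names}

    def sequences(self):
        return list(self._sequences)

    def __getitem__(self, name):
        return self._sequences[name]

    def unload(self, name):
        # Replace the cached sequence so its loaded data can be collected.
        self._sequences[name] = Sequence()

dataset = DataSet(["001", "002"])
seq = dataset["001"].load()
del seq                      # dataset["001"].data is still loaded
dataset.unload("001")        # now the loaded data is actually released
```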
@nisseknudsen
Hi, Nisse. I find that cuboids.sensor_id and camera_used are given as -1 for some cuboids. I think sensor_id is the LiDAR id and camera_used is the camera id, but what does "-1" mean?
Hey guys,
Is there a way to map lidar points to the lidar channel that produced them? Specifically those channels reported here: https://hesaiweb2019.blob.core.chinacloudapi.cn/uploads/Pandar64_User's_Manual.pdf
I've tried something simple along the lines of:
theta = np.arctan2(points[..., 2], points[..., 1])
but the visualisations don't look quite right when I colour points red where theta > 3 * np.pi / 180
and yellow where theta < 3 * np.pi / 180.
According to the Hesai data sheet I was expecting to see 4 distinct bands, but that didn't work out :) Attached is what I see (I also included a green rectangle whose corner is at (0, 0, 0) and which is 100 units long).
My impression is that the provided lidar pose is in fact the base link on the vehicle, as opposed to the center of the lidar, which in turn makes my "find the lidar channel" logic work incorrectly. I think this because, if I add around 2-3 m to the "lidar_to_ego"-corrected points, I get the following:
But of course "about 3 m" isn't quite the whole story, because the lidar unit has a little tilt as well :) I guess what I need is the transform from the base link to the lidar sensor?
I've also attached the notebook I used to produce the images
view.tar.gz
Any pointers?
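One note on the attempt above: for a rotating LiDAR, the per-channel quantity is the elevation above the sensor's horizontal plane, i.e. arctan2(z, hypot(x, y)) rather than arctan2(z, y), and it only gives clean bands when the points are expressed in the sensor frame. A sketch (the per-ring table lookup is an assumption about how the data-sheet angles would be used):

```python
import numpy as np

def elevation_angles(points_xyz):
    """Per-point elevation above the sensor's horizontal plane, in degrees.

    Assumes points are already in the LiDAR sensor frame (not world or
    base-link frame); otherwise the bands will be skewed.
    """
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    return np.degrees(np.arctan2(z, np.hypot(x, y)))

def assign_channel(elev_deg, ring_elevations_deg):
    """Map each elevation to the nearest entry of a per-ring elevation table.

    ring_elevations_deg is a hypothetical per-channel table taken from the
    sensor's data sheet.
    """
    table = np.asarray(ring_elevations_deg)
    return np.abs(elev_deg[:, None] - table[None, :]).argmin(axis=1)
```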
I want to use this for personal use, but I can't sign up because I don't have a job and the form doesn't accept Gmail addresses. I just want to explore point clouds of street intersections without graduating.
I have tried to download the PandaSet data from Scale AI, but after clicking the download button all that opens is an empty white floating box with the header 'Download Dataset' that doesn't contain any links or redirect. This happens even after I sign in.
It looks like the website is broken, I'm not sure if anyone else is getting this issue.
Are there any other ways to access this data, for example direct download links, URLs or links to where it is hosted on ScaleAI? Thanks.
First off, thanks for the great dataset!
I would like to use the front camera images to train a network for regular object detection on camera images with 2D bounding boxes. You provide the cuboids and their projection into the camera image, but would it also be possible to obtain 2D bounding boxes for the images?
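In the meantime, a simple approximation is to take the 2D pixel coordinates of the projected cuboid corners and fit an axis-aligned box around them. A sketch assuming the corner projection has already been done (e.g. with the devkit's geometry helpers; the function here is hypothetical):

```python
import numpy as np

def corners_to_2d_bbox(projected_corners, img_w, img_h):
    """Turn projected cuboid corners (Nx2 pixel coords) into a 2D box.

    Returns (x_min, y_min, x_max, y_max) clipped to the image, or None
    if the cuboid does not overlap the image at all.
    """
    x_min, y_min = projected_corners.min(axis=0)
    x_max, y_max = projected_corners.max(axis=0)
    x_min, x_max = max(0.0, x_min), min(float(img_w - 1), x_max)
    y_min, y_max = max(0.0, y_min), min(float(img_h - 1), y_max)
    if x_min >= x_max or y_min >= y_max:
        return None  # box entirely off-screen
    return (x_min, y_min, x_max, y_max)
```

Note this is only an approximation: the axis-aligned hull of the 8 corners can be looser than a box fitted to the object's visible silhouette.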
How to visualise point cloud data with bounding boxes
Take the point-to-image projections, including the semseg class index, and create a mesh on the index, which yields a poor man's 2D semseg. Probably best done on the forward-facing front camera.
PandarSet matches the name of LiDAR products.
@xpchuan-95: Hey Peter, I tried sending you emails, but for some reason all three bounced...
To make sure you got the information, here is my email to you:
Hi Peter,
apologies for that! It is because I hadn't finished implementing the intrinsics change.
Could you do a master merge or rebase into your branch, and apply the patch file to geometry.py? Possibly I have overlooked something else, but this should help for a first start.
Best,
Nisse
and this is the email attachment:
Show users how to create a "RGB" point cloud by using photo-to-point projections.
1.) Colorize point cloud by height
2.) Colorize point cloud by distance
3.) Colorize point cloud by intensity
4.) Colorize point cloud by semseg class index
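The first item on the list above, colorizing by height, can be sketched in a few lines; this is a minimal sketch (a real tutorial would likely use a matplotlib colormap instead of this hand-rolled blue-to-red ramp):

```python
import numpy as np

def colorize_by_height(points_xyz):
    """Map each point's z value to an RGB colour in [0, 1].

    Normalizes height to [0, 1] and uses it as a blue-to-red ramp:
    the lowest point becomes blue, the highest becomes red.
    """
    z = points_xyz[:, 2]
    t = (z - z.min()) / max(z.max() - z.min(), 1e-9)
    return np.stack([t, np.zeros_like(t), 1.0 - t], axis=1)
```

Colorizing by distance or intensity follows the same pattern with a different scalar; the semseg variant would index a class-to-colour table instead.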
Dear,
I am going through the dataset details as much as I can. Although it is not mentioned, I just wanted to confirm whether PandaSet provides tracking information, i.e. a static id unique to one object's bounding box across all frames of a sequence?
I parsed and searched the whole dataset carefully, and found that there is no annotation for 'Trolley' in PandaSet.
Hi! I wish to compute a rigid transformation between the additionally provided raw sweep LiDAR and the existing sweep LiDAR. However, no matter whether I use the Moore-Penrose pseudoinverse or an ICP algorithm, I cannot acquire a feasible transformation between the two.
In the visualization I find that they are in correspondence, but there is distortion in the existing sweep LiDAR. May I know what additional operation you apply to transform the raw sweep LiDAR into the current sweep LiDAR points?