
avdata's People

Contributors

agarwalsid, ankitvora7


avdata's Issues

Velodyne data to pcd2 conversion

Hello everyone,

Can we convert the Ford LiDAR data, which is available in the raw Velodyne format, to the pcd2 format?
If yes, how can we do that?

Thanks in advance
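
For reference, one possible route: the raw Velodyne packets can be converted to sensor_msgs/PointCloud2 with the provided multi_lidar_convert.launch, and the converted clouds can then be dumped to .pcd files. A minimal sketch, assuming ROS 1 with the rosbag Python API and open3d installed, and assuming the converted PointCloud2 topic (here /lidar_blue_pointcloud, an example name) has been recorded into a bag:

# Minimal sketch: dump PointCloud2 messages from a bag to .pcd files.
# The topic name is an assumption; list topics with `rosbag info` first.
import numpy as np
import open3d as o3d
import rosbag
import sensor_msgs.point_cloud2 as pc2

bag = rosbag.Bag('Sample-Data.bag')
for i, (topic, msg, t) in enumerate(
        bag.read_messages(topics=['/lidar_blue_pointcloud'])):
    pts = np.array(list(pc2.read_points(
        msg, field_names=('x', 'y', 'z'), skip_nans=True)))
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(pts)
    o3d.io.write_point_cloud('scan_%06d.pcd' % i, cloud)
bag.close()

Alternatively, the pcl_ros package ships a command-line tool that performs the same conversion: rosrun pcl_ros bag_to_pcd <input.bag> <topic> <output_dir>.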

Relative coordinate system definition

Hi everyone,

The topic "pose_ground_truth" and "pose_raw" for each LOG does not start from (0, 0, 0), so what is the definition of the Origin of each sequence?
How is that related to definite location (BLH) of the "gps" topic ?
Thanks for the response.

Rosbag Timezone Information

Hi,

In general, how do you handle timezone information in the Rosbag file format?

Do you store timezone information anywhere in the rosbag files, or are all timestamps in local time, depending on where the trip started or even on the car's current position?

Thanks!
Kevin
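
For reference: rosbag/ROS timestamps carry no timezone at all. A ros::Time is just seconds and nanoseconds since the Unix epoch, so the stamps are effectively UTC, and any local-time interpretation is up to the consumer. A minimal sketch; the America/Detroit zone is only an assumption based on where the logs were collected:

# Interpret a bag timestamp (seconds since the Unix epoch, no timezone)
# in UTC and in an assumed local zone.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

stamp = 1501822123.041313792  # e.g. msg.header.stamp.to_sec()
print(datetime.fromtimestamp(stamp, tz=timezone.utc))
print(datetime.fromtimestamp(stamp, tz=ZoneInfo('America/Detroit')))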

Python 3

Hey there!

Why not make your code Python 3 compatible?
On my machine, catkin and ROS built automatically against Python 3.
I just needed to add parentheses to the print statements to make it work.

Question about the IMU time interval

Hi,
I extracted the GPS and IMU CSV data from the same .bag using the provided script, bag_to_csv.py.
However, I find that the time interval between consecutive IMU or GPS samples is not constant.

For example, here is the GPS data in the CSV format that I get:
timestamp, latitude, longitude, altitude ,position_covariance
1501822123.041313792, 42.30597,-83.24483,156.02139,0.00000
1501822123.049295872, 42.30597,-83.24483,156.02137,0.00000
1501822123.051310080, 42.30597,-83.24483,156.02135,0.00000
1501822123.059303168, 42.30597,-83.24483,156.02133,0.00000
1501822123.061313024, 42.30597,-83.24483,156.02131,0.00000
1501822123.069293056, 42.30597,-83.24483,156.02129,0.00000
1501822123.071315968, 42.30597,-83.24483,156.02128,0.00000
...
The time interval should be 0.005 s, because the rate is 200 Hz.
But the interval between the first and second timestamps is about 0.008 s, and between the second and third about 0.002 s; the average is 0.005 s ((0.008 + 0.002) / 2). This pattern repeats through all the later lines.
I also tried another rosbag; its average interval is likewise 0.005 s ((0.003 + 0.007) / 2).

Do the GPS and IMU actually behave like this?
Thank you so much; it is really very kind of you. I hope you can give me a reply.
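
A quick way to reproduce this observation from the exported CSV, as a minimal sketch (assuming the column header shown above):

# Check the spacing of the exported timestamps: the diffs alternate
# around 0.008 s / 0.002 s but average to 0.005 s (200 Hz).
import pandas as pd

df = pd.read_csv('gps.csv', skipinitialspace=True)
dt = df['timestamp'].diff().dropna()
print(dt.head(10))
print('mean interval: %.6f s' % dt.mean())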

About the linear acceleration

Hi, dear authors,
I have read your paper and the website about this AVData. I have a question about the linear acceleration: I understand that gravity has been removed and that it is expressed in the orientation of the vehicle's body frame.
The imu.csv was produced by your provided scripts, but the linear acceleration values are very large, and I can't understand why.
[screenshot: linear acceleration plot from V2 Log5 imu.csv]

I really hope you can give me a reply.
Thank you so much.

Some question on the data

May I know the frequency of all the image and IMU data? I checked the ROS topics and saw that the rates of /imu and /image_front_left are around 14 Hz and 170 Hz. Are these correct?

Besides, may I know the IMU intrinsic calibration parameters?

Thank you

Regarding static transform from gps to body frame

Hello, I was trying to view the dataset in rviz as well as mapviz. Inside the Sample-Data/Calibration-V2 folder, there is no YAML file giving the static transform from the gps frame to the body frame. Because of this missing transform, I can't see the GPS map in mapviz. Is it available, and if so, where?
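
If the transform really is missing from the calibration files, one stopgap is to publish a placeholder static transform yourself so the tf tree resolves. A minimal sketch; the frame names and the identity transform below are assumptions, not the real calibration:

#!/usr/bin/env python
# Publish a placeholder gps -> body static transform for rviz/mapviz.
# Translation/rotation are dummies; substitute the real lever arm.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('gps_to_body_static_tf')
t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'body'
t.child_frame_id = 'gps'
t.transform.translation.x = 0.0
t.transform.translation.y = 0.0
t.transform.translation.z = 0.0
t.transform.rotation.w = 1.0  # identity rotation
broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()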

Direct access to rosbags on S3

Is it possible to put the Rosbags in a non-zipped form on S3, just the .bag files?

We have a set of analytics tools that can parse and read rosbags directly on S3 without copying or converting. This makes sense when you process petabytes of bags per day, exploring the data and searching for edge cases.

Many thanks for your release of public AV data! That helps a lot.

Convert lidar csv files to pcds

Hi,

Thank you very much for the datasets. Can you please provide detailed code (a script) showing how to convert the lidar CSV files to PCDs?

I have tried my best, but due to my poor ROS knowledge, I failed.

Thank you!

Best,
liu
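
In the meantime, if the CSVs contain per-point x/y/z columns, here is a minimal sketch of the conversion with pandas and open3d. The column names are an assumption; check the header written by bag_to_csv.py and adjust as needed:

# Convert one lidar CSV into a .pcd file.
import open3d as o3d
import pandas as pd

df = pd.read_csv('lidar_scan.csv', skipinitialspace=True)
cloud = o3d.geometry.PointCloud()
cloud.points = o3d.utility.Vector3dVector(df[['x', 'y', 'z']].to_numpy())
o3d.io.write_point_cloud('lidar_scan.pcd', cloud)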

Online LiDAR scans in sensor frame or body frame?

Hello. I extracted LiDAR scans from the ROS bag with the help of multi_lidar_convert.launch. However, I am not sure which coordinate frame the LiDAR point clouds are in when they are published by multi_lidar_convert.launch. Are the published point clouds in the respective LiDAR frames or in the body frame?

Slow download speed

I'm glad to see the data you have shared, and I want to use it to explore autonomous driving.
However, the download speed in China is very slow.
Is there any other way to download the data?

Pose topic

Hi, is it possible to have a brief explanation of the differences between the poses published on the three topics (/pose_raw, /pose_localized and /pose_ground_truth)?
Thanks!

Branch compatible with the latest velodyne_pointcloud (ros-noetic)

Hi all,

Since I'm using ros-noetic, I'm wondering if there is a branch compatible with the latest version of velodyne_pointcloud.

As far as I know, CloudNodelet was removed from velodyne_pointcloud in velodyne-1.6.0, since the transform node includes all the features of the cloud node. As a result, multi_lidar_convert.launch does not work, and all nodes that load velodyne_pointcloud/CloudNodelet need to be changed.

I've checked the velodyne-model-fix branch, but it does not include such changes.

Thanks in advance

Inaccurate ground-truth trajectories

I mapped the poses from the /pose_ground_truth topic of the first scene (2017-08-04-V2-Log1) onto satellite images of the area, using the information given in issues #12 and #29: the map origin is at lat 42.294319, lon -83.223275, and coordinates are given in North, East, Down order in meters. The result is accurate in some parts but off by several meters in others (see e.g. the highway exit at the bottom left of the image), while the paper reports centimeter-level accuracy. The error is visible with both Bing Maps and Google Maps, so I assume it is not due to an error in the satellite imagery. Do you know where this error comes from?

Bing Maps:
[screenshot: trajectory overlaid on Bing Maps]

Google Maps:
[screenshot: trajectory overlaid on Google Maps]
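
For anyone reproducing this overlay, a minimal sketch of the conversion involved: North/East offsets in meters are turned back into lat/lon around the stated origin with a flat-earth approximation, which is itself a possible source of small errors over long distances:

# Convert North/East offsets (m) from /pose_ground_truth back to
# lat/lon around the map origin, for plotting on a web map.
import math

LAT0, LON0 = 42.294319, -83.223275  # origin from issues #12 / #29
R = 6378137.0  # WGS-84 equatorial radius, m

def ned_to_latlon(north, east):
    lat = LAT0 + math.degrees(north / R)
    lon = LON0 + math.degrees(east / (R * math.cos(math.radians(LAT0))))
    return lat, lon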

Camera-to-world transform matrix

Hi,
I want to use NerfStudio on the Ford dataset to generate street views, but I need the camera-to-world transform matrix for each frame. I have checked the calibration folder but did not find the relevant information. Do you have any ideas on how to obtain the extrinsic parameters?
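
One hedged suggestion: if a per-frame body pose (from one of the pose topics) and a body-to-camera extrinsic (from the calibration files) are both available as 4x4 homogeneous matrices, the camera-to-world matrix is just their composition. A minimal sketch with placeholder matrices:

# Compose camera-to-world from a body pose and a body->camera extrinsic:
# x_world = T_world_body @ T_body_cam @ x_cam
import numpy as np

T_world_body = np.eye(4)  # placeholder: build from pose quaternion + translation
T_body_cam = np.eye(4)    # placeholder: from the camera's calibration data
T_world_cam = T_world_body @ T_body_cam
print(T_world_cam)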

Project lidar point clouds onto images

Hi,

I'm doing some dataset parsing work, i.e., projecting lidar point clouds onto images.

Specifically, following my last question (#26), I obtained PCDs for lidar-blue from the provided sample dataset, and now I want to project them into camera FL.

Below are some results:

First, I show some successful examples.

[screenshots: successful projection overlays]

At this stage, we can see that the projected lidar points align well with the image.

(Some points project onto the sky; is this due to calibration error between the lidar and the camera, or something else?)

Then, I show some failure cases:

[screenshots: failure cases]

We can see discontinuities in each scan line. Is this correct?

In the last failure case, the scans on the right part of the image are missing.

I tried to find the reason and found that the point cloud for this case is small and does not cover the right part of the image. Below are the visible 3D points of the last failure case:
[screenshot: sparse point cloud of the last failure case]

Can you please help me to confirm whether I'm doing the right job? thank you very much!

PCDS: https://drive.google.com/file/d/1yIJRg_HFHQbb1LTPMW8wxqkIY8dKUdOd/view?usp=sharing

I think we should start here and check whether the PCDs are correct.

Code: first rename the files below to .py, then run project_lidar_to_camera.py.
transformations.txt
project_lidar_to_camera.txt

You need to install open3d, opencv, and other common libraries (see project_lidar_to_camera.py).
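
For readers reproducing this, the core of such a projection is a plain pinhole model. A minimal sketch, assuming K (3x3 intrinsics) and T_cam_lidar (4x4 extrinsic) come from the calibration files; note the depth filter, since points behind the camera project to spurious pixels (often "in the sky") if they are not dropped, which may explain some of the stray points above:

# Project lidar points (N, 3, lidar frame) into pixel coordinates.
import numpy as np

def project(pts, K, T_cam_lidar):
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]   # points in the camera frame
    cam = cam[:, cam[2] > 0.1]          # keep only points in front
    uv = K @ cam
    return (uv[:2] / uv[2]).T           # (M, 2) pixel coordinates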

Dropped 100.00% of messages so far. Please turn the [ros.velodyne_pointcloud.message_filter

I tried to run roslaunch ford_demo multi_lidar_convert.launch to view the live lidar point cloud, but nothing appears in rviz, and there is a warning: [ WARN] [1628749506.660728638]: MessageFilter [target=odom ]: Dropped 100.00% of messages so far. Please turn the [ros.velodyne_pointcloud.message_filter] rosconsole logger to DEBUG for more information.
With rostopic echo /lidar_red_pointcloud, there is also no point cloud data.
How should I solve this?

Future plans

Hi All,
Thank you for sharing this dataset. Do you intend to make any trained AI models for automated driving available to the public?

multi_lidar_convert fails with the Sample-Data

Hi all,

I used the sample data with the following commands:
$ roslaunch ford_demo demo.launch map_dir:=/tmp/ford/Map calibration_dir:=/tmp/ford/Calibration-V2/
$ rosbag play /tmp/ford/Sample-Data.bag
$ roslaunch ford_demo multi_lidar_convert.launch

The first two commands executed successfully, but I got the following error from the third:

[ERROR] [1586980154.104065718]: Failed to load nodelet [/velodyne_yellow_convert] of type [velodyne_pointcloud/CloudNodelet] even after refreshing the cache: According to the loaded plugin descriptions the class velodyne_pointcloud/CloudNodelet with base class type nodelet::Nodelet does not exist. Declared types are depth_image_proc/convert_metric depth_image_proc/crop_foremost depth_image_proc/disparity depth_image_proc/point_cloud_xyz depth_image_proc/point_cloud_xyz_radial depth_image_proc/point_cloud_xyzi depth_image_proc/point_cloud_xyzi_radial depth_image_proc/point_cloud_xyzrgb depth_image_proc/register image_proc/crop_decimate image_proc/crop_nonZero image_proc/crop_non_zero image_proc/debayer image_proc/rectify image_proc/resize image_publisher/image_publisher image_rotate/image_rotate image_view/disparity image_view/image nodelet_tutorial_math/Plus pcl/BAGReader pcl/BoundaryEstimation pcl/ConvexHull2D pcl/CropBox pcl/EuclideanClusterExtraction pcl/ExtractIndices pcl/ExtractPolygonalPrismData pcl/FPFHEstimation pcl/FPFHEstimationOMP pcl/MomentInvariantsEstimation pcl/MovingLeastSquares pcl/NodeletDEMUX pcl/NodeletMUX pcl/NormalEstimation pcl/NormalEstimationOMP pcl/NormalEstimationTBB pcl/PCDReader pcl/PCDWriter pcl/PFHEstimation pcl/PassThrough pcl/PointCloudConcatenateDataSynchronizer pcl/PointCloudConcatenateFieldsSynchronizer pcl/PrincipalCurvaturesEstimation pcl/ProjectInliers pcl/RadiusOutlierRemoval pcl/SACSegmentation pcl/SACSegmentationFromNormals pcl/SHOTEstimation pcl/SHOTEstimationOMP pcl/SegmentDifferences pcl/StatisticalOutlierRemoval pcl/VFHEstimation pcl/VoxelGrid stereo_image_proc/disparity stereo_image_proc/point_cloud2
[ERROR] [1586980154.104222810]: The error before refreshing the cache was: According to the loaded plugin descriptions the class velodyne_pointcloud/CloudNodelet with base class type nodelet::Nodelet does not exist. Declared types are (same list as above).

Any idea how to fix this?

Question about IMU data

Hi authors, thanks for sharing the Ford AV data. I have some questions about the data:

  1. IMU
    I extracted imu.csv from the .bag; its linear acceleration looks like the following:
    [screenshot: linear acceleration plot from imu.csv]

It seems too small, far from the gravity magnitude of 9.8 m/s^2.
I've noticed some discussion about the transformation applied to imu.csv, but I cannot find where the script is.

  2. GPS
    According to the FordAV home page, "The GPS data provides the latitude, longitude and altitude of the body frame with respect to the WGS84 frame".
    My question is: where is the origin of the GPS data, the body frame (rear axle center) or the IMU center?

Could anyone help me, please?

Downloading part of the dataset

Hello,
The dataset on your website is a single file, 2017-08-04 (1.6 TB). Can I download only a part of it? 1.6 TB is too big for me to download.
Thank you so much.

convert gps coordinates to local reference

Hi,

I want to transform the WGS-84 geodetic points (lat, lon, h) into local (X, Y, Z) coordinates using the provided reference (42.294319, -83.223275).

After the transformation, I expect to get results very similar to pose_raw.csv.

Here is what I have done:

Generally speaking, I first transform each WGS-84 geodetic point (lat, lon, h) to Earth-Centered Earth-Fixed (ECEF) coordinates (x, y, z), and then transform the ECEF coordinates to East-North-Up (ENU) coordinates, as sketched below.
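
A minimal sketch of that pipeline (the reference altitude h0 is an assumption here, which is exactly question 1 below):

# WGS-84 geodetic -> ECEF -> ENU around a reference point.
import numpy as np

A, E2 = 6378137.0, 6.69437999014e-3  # WGS-84 semi-major axis, e^2

def geodetic_to_ecef(lat, lon, h):
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)
    return np.array([(n + h) * np.cos(lat) * np.cos(lon),
                     (n + h) * np.cos(lat) * np.sin(lon),
                     (n * (1.0 - E2) + h) * np.sin(lat)])

def ecef_to_enu(p, lat0, lon0, h0):
    d = p - geodetic_to_ecef(lat0, lon0, h0)
    la, lo = np.radians(lat0), np.radians(lon0)
    rot = np.array([[-np.sin(lo), np.cos(lo), 0.0],
                    [-np.sin(la) * np.cos(lo), -np.sin(la) * np.sin(lo), np.cos(la)],
                    [np.cos(la) * np.cos(lo), np.cos(la) * np.sin(lo), np.sin(la)]])
    return rot @ d  # East, North, Up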

However, I have the following questions:

  1. The provided reference (42.294319, -83.223275) does not include the altitude of the reference point, which prevents me from obtaining the Z coordinate. Can you please share the altitude of the reference?

  2. The provided reference (42.294319, -83.223275) has six decimal places, while the values recorded in gps.csv have ten. Can you please explain this choice?

  3. After parsing gps.csv, I got slightly different results compared with the entries in pose_raw.csv.

For example:

GT: x : -1644.91390701, y : 1335.59171013; My: x : -1645.054286939695, y : 1335.1813690268768; Github: x : -1645.327115755862, y : 1334.9546098449466
GT: x : -1644.87310037, y : 1335.60461139; My: x : -1645.0135018018027, y : 1335.194133655902; Github: x : -1645.2863271411193, y : 1334.9673839304598
GT: x : -1644.75078146, y : 1335.64325695; My: x : -1644.8910969154442, y : 1335.2324164154254; Github: x : -1645.163911817825, y : 1335.0056950780595
GT: x : -1644.70994124, y : 1335.65615531; My: x : -1644.8502787941275, y : 1335.2451810321118; Github: x : -1645.1230902180873, y : 1335.018469162866
GT: x : -1644.66908808, y : 1335.66905943; My: x : -1644.809444182155, y : 1335.2579345378645; Github: x : -1645.0822521232294, y : 1335.0312321408526
GT: x : -1644.62845006, y : 1335.68189438; My: x : -1644.7685930739356, y : 1335.2707102548661; Github: x : -1645.0413975343001, y : 1335.0440173338925
GT: x : -1644.58757031, y : 1335.69479944; My: x : -1644.72773372329, y : 1335.283474862213; Github: x : -1645.0005346978107, y : 1335.0567914186993
GT: x : -1644.54668006, y : 1335.70770073; My: x : -1644.6868578851947, y : 1335.2962394682925; Github: x : -1644.9596553693486, y : 1335.0695655042125
GT: x : -1644.50578347, y : 1335.72059332; My: x : -1644.6459820471835, y : 1335.3090040756938; Github: x : -1644.9187760398372, y : 1335.0823395897257
GT: x : -1644.46509403, y : 1335.73341287; My: x : -1644.605106213379, y : 1335.3217575747874; Github: x : -1644.877896711375, y : 1335.095102566299
GT: x : -1644.42418363, y : 1335.74630271; My: x : -1644.5642138884473, y : 1335.334511072365; Github: x : -1644.8370008877923, y : 1335.1078655442852
GT: x : -1644.38326264, y : 1335.75919394; My: x : -1644.5233133211952, y : 1335.34726456738; Github: x : -1644.7960968187479, y : 1335.1206285208584
GT: x : -1644.34233305, y : 1335.77208839; My: x : -1644.482404510576, y : 1335.3600180637834; Github: x : -1644.7551845031924, y : 1335.1333914988447
GT: x : -1644.30139827, y : 1335.7849923; My: x : -1644.4414874538413, y : 1335.3727826680674; Github: x : -1644.7142639411259, y : 1335.146165584358
GT: x : -1644.26045842, y : 1335.79789444; My: x : -1644.4005621522492, y : 1335.385547271062; Github: x : -1644.6733351314992, y : 1335.1589396691645
GT: x : -1644.21951287, y : 1335.81079218; My: x : -1644.3596368544556, y : 1335.3983007670995; Github: x : -1644.6324063208235, y : 1335.1717026464444
GT: x : -1644.17856419, y : 1335.82368318; My: x : -1644.3187033152187, y : 1335.4110542645226; Github: x : -1644.5914692646857, y : 1335.1844656244307
GT: x : -1644.13779667, y : 1335.83651219; My: x : -1644.2777615337031, y : 1335.4238077616092; Github: x : -1644.550523960988, y : 1335.1972286010039

For each line:

GT denotes the values provided in pose_raw.csv;

My denotes the results of my method;

Github denotes the method from issue #12, i.e. section 7.5 of this document (https://portal.opengeospatial.org/files/16-011r4).

You can see that the results of different methods differ.

Can you please explain the reasons?

I guess the reason may be the six-decimal precision of the reference.

I attached my code and related files below.

Please rename check_gps.txt to check_gps.py.
check_gps.txt

pose_raw.csv
gps.csv

Thank you very much!

Subscribing to image data 8UC3

I am playing one of the rosbag files, which has an image topic /image_front_left of type 8UC3. In my subscriber callback I am attempting to convert it to bgr8 format with:
img = self.bridge.imgmsg_to_cv2(msg, 'bgr8')
I think this makes sense, but I get an error:
[8UC3] is not a color format. but [bgr8] is. The conversion does not make sense
Now I don't think that's right, as 8UC3 is a color format.

Am I doing something wrong in subscribing this way? Do you have example code subscribing to the image topic?
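
For reference, two common workarounds, sketched below. The 8UC3 encoding only declares "8-bit, 3 channels" without a channel order, which is why cv_bridge refuses the color conversion. You can either take the raw bytes with passthrough, or relabel the message as bgr8, but only if you have verified the data really is BGR (an assumption to check against the sensor docs):

# Minimal subscriber sketch; the BGR assumption must be verified.
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def callback(msg):
    # Option 1: take the raw 3-channel bytes as-is.
    img = bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
    # Option 2: relabel, then convert (only if the data is really BGR):
    # msg.encoding = 'bgr8'
    # img = bridge.imgmsg_to_cv2(msg, 'bgr8')

rospy.init_node('image_listener')
rospy.Subscriber('/image_front_left', Image, callback)
rospy.spin()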

Labeled Data for Seasonal Variation or Scenarios?

Hi, does the Ford AV dataset contain any labels for seasonal variation (e.g. sunny, cloudy, snow) or driving scenarios (e.g. freeway, overpass, residential)? If so, where would I be able to find this information in the Sample Data?

About linear velocity

The linear velocity values are stored in the geometry_msgs_Vector3Stamped file. What are the units of linear velocity: km/h or m/s?

IMU to GPS antenna lever arm

Hi

Thank you for sharing the data and your work! I was wondering if you could also share the lever arm calibration between the GPS antenna and the IMU frame, or with respect to the vehicle body frame?
thank you!

Compile on Ubuntu 20.04

Hi, thanks for this work. I tried to compile the project on Ubuntu 20.04 and added the following lines to /map_loader/CMakeLists.txt after here. It works on my desktop; just in case someone needs it.

ADD_COMPILE_OPTIONS(-std=c++11 )
ADD_COMPILE_OPTIONS(-std=c++14 )

[Question] Python / C++ API

Hello,
Thanks for publishing this data.
Will you publish some sort of Python or C++ API, like other datasets do?
Also, perhaps some demo code for basic functionality with this dataset?

Reflectivity for surroundings

Hi!
Is there reflectivity data for the surrounding point cloud? If there is, is it possible to download it somehow?
Thanks in advance!

[Question] integrate images and laser scans.

Hey, do you have a smart method for integrating lidar scans and images? As far as I can see, the images are not explicitly synchronized with the rosbag time steps, so some approximation has to be done. Was this the plan from the beginning of the dataset, or did you address the issue somehow? I am asking because I would like to avoid taking a time step from the rosbag and searching for the image whose timestamp is merely "close".

Best
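
One standard ROS-side answer to this is message_filters.ApproximateTimeSynchronizer, which pairs messages whose stamps fall within a tolerance. A minimal sketch; the topic names are examples, and the slop tolerance (maximum stamp difference in seconds) needs tuning per sensor pair:

# Pair images with lidar clouds by approximate timestamp.
import message_filters
import rospy
from sensor_msgs.msg import Image, PointCloud2

def paired(image_msg, cloud_msg):
    gap = abs((image_msg.header.stamp - cloud_msg.header.stamp).to_sec())
    rospy.loginfo('paired image/scan, stamp gap %.3f s', gap)

rospy.init_node('image_lidar_pairing')
image_sub = message_filters.Subscriber('/image_front_left', Image)
cloud_sub = message_filters.Subscriber('/lidar_blue_pointcloud', PointCloud2)
sync = message_filters.ApproximateTimeSynchronizer(
    [image_sub, cloud_sub], queue_size=20, slop=0.05)
sync.registerCallback(paired)
rospy.spin()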
