m3ed's People

Contributors

fcladera, k-chaney

m3ed's Issues

Missing IMU Calibration?

While there is calibration data for the different cameras, I cannot locate the IMU calibration data mentioned on the data overview page (https://m3ed.io/overview/). Guidance as to where to find this data would be appreciated. Additionally, there doesn't seem to be a separate time map between the IMU and the left event camera (or any time mapping for the IMU at all).

As a lower priority note, it appears that the data in the HDF5 files are inconsistently compressed; some datasets have had LZF applied, while others are uncompressed. For instance, the IMU accelerometer/gyroscope data is uncompressed, while the IMU timestamps are compressed. While LZF may be the default for h5py, it 1) isn't the most space-efficient compression (which makes downloads from AWS slower) and 2) isn't available in all distributions of the HDF5 libraries.
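For reference, a quick way to check which datasets carry which filter (a minimal sketch; the filename below is a placeholder, not a verified path):

import h5py

# Print the compression filter used by every dataset in an M3ED HDF5 file.
def report(name, obj):
    if isinstance(obj, h5py.Dataset):
        print(f"{name}: compression={obj.compression}")

with h5py.File('car_urban_night_rittenhouse_data.h5', 'r') as f:  # placeholder name
    f.visititems(report)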

Distortion Model is radtan but only 4 parameters?

From what I can tell, the radtan (aka plumb bob) distortion model requires 5 parameters, but only 4 parameters are provided under 'distortion_coeffs' in the HDF5 files.

Is a parameter missing? Where could it be located?

Was a different distortion model used? The fisheye distortion model uses 4 parameters, but given the narrow FOV of the cameras (as also stated in the accompanying paper), it is unlikely a fisheye lens was used, so that would probably be the wrong model to apply.
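For what it's worth, if the four stored values really are [k1, k2, p1, p2] of the radtan model with k3 simply dropped, they can be used with OpenCV directly, since OpenCV accepts a 4-element distortion vector and implicitly sets k3 = 0. A minimal sketch with placeholder numbers (none of these values come from the dataset):

import numpy as np
import cv2

# Assumes the four coefficients are [k1, k2, p1, p2] and k3 is zero.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])                # placeholder intrinsics
dist = np.array([-0.1, 0.05, 0.001, -0.001])   # placeholder [k1, k2, p1, p2]
img = np.zeros((720, 1280), dtype=np.uint8)    # stand-in image
undistorted = cv2.undistort(img, K, dist)      # k3 is treated as 0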

I hope that the missing coefficient can be located, as I suspect that recovering the correct calibration after the data has been collected may be impossible if anything has changed in the camera system. I hope this doesn't render the dataset unusable (i.e. I hope the calibration images are saved somewhere).

The ratio of inf pixels in the depth maps is large.

Hi authors,
I notice that the ratio of invalid depth pixels is very large.
I used 'car_urban_night_rittenhouse_gt' and 'car_urban_night_city_hall_gt' as examples.

I used the code below to count the depth maps that are essentially all invalid (more than 99% inf pixels).

import h5py
import numpy as np
from tqdm import tqdm

gt_f = h5py.File('car_urban_night_rittenhouse_gt.h5', 'r')  # filename assumed from the sequence name above
invalid_depth_count = 0
for i in tqdm(range(len(gt_f['depth/prophesee/left']))):
    depth = gt_f['depth/prophesee/left'][i]  # one depth map, shape (720, 1280)
    if np.sum(depth == np.inf) / (720 * 1280) > 0.99:
        invalid_depth_count += 1
print(invalid_depth_count)

For both sequences, the count equals the sequence length, i.e. every depth map is more than 99% inf.

Call for disparity ground truth

Thank you for this next step in event camera datasets.

I want to convert the depth gt to disparity gt. However, based on my understanding, this conversion requires the projection matrix, which appears to be missing from the dataset. (MVSEC, your previous dataset, provided the projection matrices, so the conversion between depth and disparity was easy.)
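For concreteness, for rectified stereo the conversion itself is just disparity = fx * baseline / depth; the missing pieces are the focal length in pixels and the stereo baseline, which the projection matrix would normally bundle together. A rough sketch with placeholder values (none taken from the dataset):

import numpy as np

fx = 1000.0        # focal length in pixels (placeholder)
baseline = 0.12    # stereo baseline in meters (placeholder)
depth = np.full((720, 1280), np.inf)           # stand-in depth map
with np.errstate(divide='ignore'):
    disparity = fx * baseline / depth
disparity[~np.isfinite(depth)] = 0.0           # treat inf/NaN depth as zero disparity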

Are there any plans to publish additional information regarding the projection matrix for this dataset?
Alternatively, is it possible to perform this conversion using only the information currently provided in the dataset?
Any guidance or additional information would be greatly appreciated.

Thank you for your support.

Inquiry Regarding LIDAR Data Processing in the M3ED Dataset

Dear Authors,

I have recently been exploring the application of event cameras in urban autonomous driving scenarios and was thrilled to find that your dataset perfectly aligns with my research needs. I am particularly interested in obtaining the trajectories of other dynamic objects in urban driving scenes, which I believe your dataset could facilitate.

However, I encountered a challenge while processing the "depth_gt.h5" file, specifically the “/depth/prophesee_left” section, where the depth values of dynamic objects appear to be omitted. I attempted to derive the 3D point clouds in the LIDAR coordinate system from the Ouster LIDAR data saved under the “/ouster/data” and “/ouster/metadata” topics of the “.h5” file. Unfortunately, the information about the LEGACY data format (https://static.ouster.dev/sensor-docs/image_route1/image_route2/sensor_data/sensor-data.html#legacy-data-packet-format) referenced on the M3ED website focuses on the composition of the LEGACY data packets sent by the LIDAR, and I am struggling to interpret the 128x12609 NumPy matrix stored in “/ouster/data”.

Given these challenges, I am writing to kindly request any available code or instructions on how to calculate the 3D point clouds in the LIDAR coordinate system or the Prophesee left camera coordinate system based on “/ouster/data” and “/ouster/metadata” in the “.h5” file. Your guidance in this matter would be invaluable and greatly enhance my understanding and utilization of your dataset.

Please rest assured that any resources you provide will be used strictly for research purposes, and I will ensure that all due credit is given to your groundbreaking work. If there are any conditions or agreements necessary for the use of the code or data, I am more than willing to comply with them.

Thank you very much for considering my request. I greatly appreciate your time and assistance, and I am looking forward to potentially discussing your work further.

Best regards,
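For reference, one possible starting point is the ouster-sdk Python package. Below is a rough sketch (not the authors' pipeline) under the assumption that each row of “/ouster/data” is one raw LEGACY UDP packet and that “/ouster/metadata” holds the sensor's JSON metadata; the file name, the dataset handling, and the ouster-sdk module layout are assumptions and version-dependent:

import h5py
import numpy as np
from ouster import client  # ouster-sdk; module layout differs between versions

# Rebuild one LidarScan from raw LEGACY packets and project it to XYZ.
with h5py.File('car_urban_night_rittenhouse_data.h5', 'r') as f:  # placeholder name
    raw_meta = f['/ouster/metadata'][()]
    metadata = raw_meta.decode() if isinstance(raw_meta, bytes) else str(raw_meta)
    packets = f['/ouster/data'][:]             # 128 x 12609; assumed one packet per row

info = client.SensorInfo(metadata)
scan = client.LidarScan(info.format.pixels_per_column,
                        info.format.columns_per_frame)
batcher = client.ScanBatcher(info)
for row in np.asarray(packets):
    batcher(client.LidarPacket(row.tobytes(), info), scan)

xyz = client.XYZLut(info)(scan)                # (H, W, 3) points in the sensor frame, meters
points = xyz.reshape(-1, 3)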

Benchmark compression and compress all datasets

Current scenario: only some of the datasets in the data.h5 files are compressed, using LZF.

For homogeneity, it would be good to compress all the datasets using the same compression. We should benchmark all possible compression filters (compression ratio and compression speed) and pick the filter with the best results.
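A rough starting point for such a benchmark (a sketch only, using random stand-in data rather than real M3ED arrays):

import os
import time
import h5py
import numpy as np

# Write the same array under different HDF5 filters and compare size and write time.
data = np.random.randn(2000, 720).astype(np.float32)
for comp, opts in [(None, None), ('lzf', None), ('gzip', 1), ('gzip', 4), ('gzip', 9)]:
    fname = f'bench_{comp}_{opts}.h5'
    t0 = time.time()
    with h5py.File(fname, 'w') as f:
        f.create_dataset('data', data=data, compression=comp, compression_opts=opts)
    print(f'{comp} {opts}: {os.path.getsize(fname) / 1e6:.2f} MB '
          f'in {time.time() - t0:.2f} s')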

Bad Camera Calibrations in Yaml.

It looks like the transformation values from the Kalibr camera chain were copied into the yaml file as transformations from the current camera to the first camera (i.e. labeled T_cn_cnm1, with cnm1 interpreted as cam0, the left event camera). However, in the camera chain T_cn_cnm1 relates each camera to the previous one: camera 1 to 0, camera 2 to 1, and so on.

As a result, the transformations in the yaml file are the correct values for the wrong transformations; anybody trusting those values as camera X to camera 0 will be disappointed.
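For anyone affected in the meantime, the chain transforms can be composed to recover each camera-to-cam0 extrinsic; a minimal sketch with placeholder 4x4 homogeneous matrices (identities standing in for the real values):

import numpy as np

T_c1_c0 = np.eye(4)   # placeholder: camera 1 <- camera 0
T_c2_c1 = np.eye(4)   # placeholder: camera 2 <- camera 1
T_c3_c2 = np.eye(4)   # placeholder: camera 3 <- camera 2

T_c2_c0 = T_c2_c1 @ T_c1_c0            # camera 2 <- camera 0
T_c3_c0 = T_c3_c2 @ T_c2_c1 @ T_c1_c0  # camera 3 <- camera 0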

some confusion about depth and calib

Thanks for the great work!

  1. I notice that the depth gt is not aligned with the ovc images, right? The ovc runs at 25 Hz (40 ms), but the depth timestamps seem irregular.
  2. I notice that you offer many calibration datasets in the h5 files, e.g., 'Cn_T_C0' and 'Ln_T_L0' in depth_gt.h5, plus the calib of the RGB data and the left/right grayscale camera data. Would it be possible to provide a simple chart explaining them?
