
milliEgo's People

Contributors

christopherlu, li-peize


milliEgo's Issues

A question about the overall results in Table 1

Hello,
Thank you for providing the code for this interesting project.
We would like to know whether the results in Table 1 correspond to an evaluation of one exemplar testing trajectory (one specific testing sequence) or to the average over all eight testing sequences in the dataset. In our case, we gathered the ATE-2D and ATE-3D metrics of the provided model (i.e., '140') and found that neither the results on any individual testing sequence nor their average precisely match the results in Table 1. Attached is a comparison between our summary of the ATE data and the overall results in Table 1.
I will be grateful for any help you can provide!

Issue running the quantitative evaluation

Hello,
Thank you for providing the code for this interesting project.
I have an issue when running "os_evaluate_robot.py", the script for quantitative evaluation. It seems that some CSV files (e.g., true_delta_gmapping.csv) and some detailed configuration entries in "config.yaml" (e.g., cfg['dataset_creation']['dataroot']) are missing from the repository. Or have I missed some critical information in the README file?
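For reference, this is how I read the configuration; the cfg['dataset_creation']['dataroot'] key path is taken from the script, while the YAML layout and the path value are placeholders I made up:

```python
# Minimal sketch of loading config.yaml (PyYAML); only the key path
# below comes from os_evaluate_robot.py, everything else is a guess.
import yaml

with open('config.yaml') as f:
    cfg = yaml.safe_load(f)

# For this access to succeed, the YAML presumably needs a section like:
#   dataset_creation:
#     dataroot: /path/to/milliego/dataset
dataroot = cfg['dataset_creation']['dataroot']
```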
I would appreciate it very much if you could check them out and make "os_evaluate_robot.py" runnable.

mmWave Middle Data Query

Hello,

Thank you for providing this repository; it's a great source of information for a very interesting project.

I have a question regarding the mmWave middle data provided in your training/testing dataset that I'm hoping you can help me understand. I assume this data is a 2D projection of the 3D point cloud. However, can you explain the scaling? Values that I would expect to be 0, at points with no 3D data, are -0.3398 across your entire dataset. Surely some function initializes the shape of the image, like img = np.zeros([64,256]), but then those data points would be 0.

Can you perhaps provide the functions you use to create your pointcloud projection data?
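For context, here is a minimal sketch of the kind of projection I assumed when inspecting the data; the field-of-view limits and the binning scheme are my guesses, not code from this repository:

```python
# Hypothetical spherical projection of an (N, 3) mmWave point cloud
# onto a 64 x 256 depth image; the FOV values below are assumptions.
import numpy as np

def project_pointcloud(points, h=64, w=256,
                       az_fov=(-np.pi / 2, np.pi / 2),
                       el_fov=(-np.pi / 8, np.pi / 8)):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)           # range to each point
    az = np.arctan2(y, x)                     # azimuth angle
    el = np.arcsin(z / np.maximum(r, 1e-6))   # elevation angle

    # Bin the angles into pixel coordinates.
    u = ((az - az_fov[0]) / (az_fov[1] - az_fov[0]) * (w - 1)).astype(int)
    v = ((el - el_fov[0]) / (el_fov[1] - el_fov[0]) * (h - 1)).astype(int)

    img = np.zeros([h, w])                    # empty pixels stay 0 here
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    img[v[valid], u[valid]] = r[valid]        # store range as pixel value
    return img
```

With a construction like this, empty pixels stay exactly 0, which is why the constant -0.3398 background makes me suspect a later normalization step (for instance, standardizing by a dataset mean and standard deviation would map a 0 background to a fixed negative value).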

Thanks in advance! :)

Dataset on Dropbox has been deleted

I'm currently trying to train milliEgo with your code, but it seems that the dataset you published on Dropbox has been deleted. Is there any other way I can download the dataset? Thanks a lot! Best wishes!

Questions about data and pre-trained model downloads and test results

Thank you for generously sharing your code. We would like to study your work and run tests using the code you've kindly provided. However, we encountered an issue when trying to access the data through the link in your repository, as it appears to have been removed. As a result, we obtained the model and data from the MAPS-Lab/OdomBeyondVision repository (https://github.com/MAPS-Lab/OdomBeyondVision) and conducted testing.
However, the results seem to be inconsistent with our expectations. Could you please advise whether there are specific parameters that require attention or modification during the testing process?
Alternatively, are the data and pre-trained model we downloaded consistent with those provided in this repository?
Looking forward to your reply.

How the depth image was generated and how the normalization works

I'm attracted by your fascinating work, and thanks for sharing the code. I'm trying to train the model with my own data, but I found a few confusing points in the code and paper.
[Image: Equation 3 from the paper]

1. I noticed that Equation 3 in your paper describes how to calculate the pixel position of a point on a depth image: the computed angle is divided by the angular resolution of the mmWave radar to obtain the pixel coordinate. By running the training code, I found that the input depth image has shape 64x256. As far as I know, mmWave radars usually have an angular resolution of over 10 degrees, so they cannot provide such a precise depth image. Could you tell me whether alternative methods were applied, or did I misunderstand your code or paper? (A concrete version of this concern is worked out after question 2 below.)

2. Another question: you mention normalizing the depth image data to the range [0, 255], and I'm wondering how this normalization is applied. Is the radar position set to 255, with values decreasing linearly with distance? Is the point closest to the radar set to 255, with values decreasing linearly from there? Or is it done some other way? (Both candidate schemes are sketched below.)
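To make question 1 concrete (the field-of-view value here is my own assumption, not a number from the paper): if the pixel column is the azimuth angle divided by an angular step, then 256 columns covering a 120° azimuth FOV imply a step of 120°/256 ≈ 0.47° per pixel, whereas a radar with ~10° angular resolution can only distinguish about 120°/10° = 12 independent directions across that FOV.

For question 2, these are the two normalization schemes I am asking about; neither is taken from this repository, and max_range is a placeholder value:

```python
# Two hypothetical [0, 255] depth normalizations; nothing here is
# recovered from the milliEgo code.
import numpy as np

def normalize_fixed_range(depth, max_range=10.0):
    """Scheme A: 255 at the radar (depth 0), decreasing linearly to 0
    at a fixed maximum range."""
    d = np.clip(depth, 0.0, max_range)
    return (255.0 * (1.0 - d / max_range)).astype(np.uint8)

def normalize_per_frame(depth):
    """Scheme B: the closest measured point gets 255 and the farthest
    gets 0, rescaled independently for every frame."""
    d_min, d_max = depth.min(), depth.max()
    if d_max == d_min:
        return np.zeros_like(depth, dtype=np.uint8)
    return (255.0 * (d_max - depth) / (d_max - d_min)).astype(np.uint8)
```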

Dropbox link unavailable

Hi, thanks for your nice work.
Your Dropbox links for the pre-trained model and config files are no longer available. Could you please update these links?

Many thanks.

milliEgo dataset

Hello, I downloaded the dataset from the link to my computer, but I am getting a .CRDOWNLOAD file, and the download status shows "Failed - Forbidden" after the download completes. Is there any alternative way to download the dataset?

Missing function: build_model_plus_imu

Hello!

Thanks for all your previous project support. I was wondering if you might be able to provide the source code for training the network with the RGB and depth camera inputs?

In the test_double.py file, this is denoted by the function build_model_plus_imu, but this function is not part of the networks.py file. (Below is my own rough guess at the shape of such a builder, in case it helps frame the question.)
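Purely as a guess at what I expected to find, based on the paper's description of fusing visual features with IMU; none of this is recovered from the repository:

```python
# Hypothetical sketch of a build_model_plus_imu-style builder (Keras).
# Layer sizes, input shapes, and structure are all my assumptions.
from tensorflow.keras import layers, Model

def build_model_plus_imu_sketch(img_shape=(64, 256, 2), imu_shape=(20, 6)):
    img_in = layers.Input(shape=img_shape)   # stacked image pair
    imu_in = layers.Input(shape=imu_shape)   # window of 6-axis IMU samples

    # Visual feature extractor.
    x = layers.Conv2D(64, 7, strides=2, activation='relu')(img_in)
    x = layers.Conv2D(128, 5, strides=2, activation='relu')(x)
    x = layers.Flatten()(x)

    # IMU feature extractor.
    m = layers.LSTM(128)(imu_in)

    # Naive fusion and 6-DoF relative pose regression.
    f = layers.concatenate([x, m])
    f = layers.Dense(256, activation='relu')(f)
    pose = layers.Dense(6)(f)
    return Model(inputs=[img_in, imu_in], outputs=pose)
```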

Regards

Rachel

A question about the pre-trained models

Hello,
Thank you for providing the code for this interesting project.
I have a question about the performance difference when training the model with and without the pre-trained models.
In practice, we trained the official code under the default configuration both with the pre-trained CNN model and from scratch (without pre-trained models). We held out nine training sequences as a validation set, since no validation sequences are provided in the dataset. We found that the model with the pre-trained CNN performs better than the model trained from scratch, in terms of both visualization and quantitative results, but both are generally worse than the provided model, i.e., '140'. Therefore, we would like to know how the pre-trained models (i.e., 'cnn.h5' and '140') were trained and generated. (Our initialization setup is sketched below.)
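For reference, this is how we initialize the two runs; the file name 'cnn.h5' comes from this repository, while build_model() stands in for the network constructor and everything else is our own scaffolding:

```python
# Sketch of our two training setups using the standard Keras API.
def make_training_model(build_model, pretrained=True):
    model = build_model()
    if pretrained:
        # by_name=True copies weights only into layers whose names match
        # those saved in cnn.h5; all other layers keep their random init.
        model.load_weights('cnn.h5', by_name=True)
    return model
```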
I will be grateful for any help you can provide!
