
calibnet's Issues

Is it a bug?

Is it a bug that the small_transform passed to _simple_transformer in config_res.py is the same as the one in dataset_build_color.py (i.e., not inverted)?

Doubt on reproducibility

I tried to reproduce your work from your codebase and have the following doubts:

  • You are using IMG_HT = 375, IMG_WDT = 1242 for training, which is a different size from the testing datasets. How do you handle that? It makes no sense to me.

  • Where is the network graph JSON? Could you at least share that? Otherwise, the network architecture is invisible to us.

  • The photometric loss is very suspicious: you are using an L2 loss, which I assume will have a numerical value on the order of at least 10k, while you are using a 5e-4 learning rate (see the sketch after this list). Would you share your training curves from TensorBoard?
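
To make the scale concern concrete, here is a purely illustrative numpy sketch (my own arithmetic, not the repo's loss code) comparing sum- versus mean-reduced L2 loss at the training resolution:

```python
import numpy as np

IMG_HT, IMG_WDT = 375, 1242  # training resolution from config

# Hypothetical per-pixel error of 0.5 (in units of the photometric target).
err = 0.5 * np.ones((IMG_HT, IMG_WDT), dtype=np.float32)

sum_l2 = np.sum(err ** 2)    # ~1.16e5: sum reduction grows with image size
mean_l2 = np.mean(err ** 2)  # 0.25: mean reduction stays O(1)
print(sum_l2, mean_l2)
```

Whether the 10k-scale estimate holds depends on which reduction the training code actually uses.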

Duplicate mapping

Amazing work on CalibNet!

I have one question after reading through the paper and code:

Even though you avoid duplication in the bilinear sampling layer, duplicates could still happen in this step, correct?

I would be interested in your feedback.

Extreme miscalibration

Hi, great work @epiception and team!
I had a small query: I have the following image with the LiDAR points projected onto it. I was wondering if I can use CalibNet to align the LiDAR cloud properly on the image in such a case. I ask because this is a case of extreme miscalibration with a very sparse LiDAR cloud.
[Image: lidar_not_alligned]

3D Grid Generator and differentiability

Greetings,

I was going through your paper. I would like to know about the 3D Grid Generator. You mention in the paper that the 3D point cloud projection to 2D space is carried out differentiably.

Could someone point out where I can look for this portion in the code?

Thank you
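
For context, here is a minimal sketch of what a differentiable pinhole projection from 3D points to 2D looks like in TensorFlow. This is an illustration only; the function name and the intrinsics fx, fy, cx, cy are my assumptions, not the repo's actual grid generator:

```python
import tensorflow as tf

def project_points(points_xyz, fx, fy, cx, cy):
    """Project an (N, 3) point cloud to pixel coordinates with a pinhole
    model. Every op used (unstack, divide, multiply, add) has a registered
    gradient, so the projection is differentiable end to end."""
    x, y, z = tf.unstack(points_xyz, axis=1)
    z = tf.maximum(z, 1e-6)          # guard against division by zero
    u = fx * x / z + cx              # horizontal pixel coordinate
    v = fy * y / z + cy              # vertical pixel coordinate
    return tf.stack([u, v], axis=1)  # (N, 2)
```

The non-trivial part in practice is the subsequent bilinear sampling at (u, v), which is what keeps gradients flowing back to the predicted transform.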

Questions on the camera matrix you use for KITTI

  1. It looks like you are using image_02, so why is the R_rect_xx taken from 00? Shouldn't you use R_rect_02 instead?

  2. Since you are using the synced data, why apply R_rect_xx again? Shouldn't you instead, as the KITTI dev kit suggests, use P_rect_xx to project camera coordinates onto the image plane (see the sketch after this list)? However, I didn't see you even parse this matrix.
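
For reference, the projection chain the KITTI raw dev kit describes is x = P_rect_xx · R_rect_00 · Tr_velo_to_cam · X, with the reference camera's R_rect_00 (expanded to 4x4) used for every camera. A numpy sketch (matrix loading omitted; shapes assumed as in the dev kit):

```python
import numpy as np

def velo_to_image(X_velo, P_rect_xx, R_rect_00_4x4, Tr_velo_to_cam):
    """Project one velodyne point into camera xx, per the KITTI dev kit.
    P_rect_xx: (3, 4); R_rect_00_4x4: (4, 4) rectifying rotation of the
    reference camera padded to homogeneous form; Tr_velo_to_cam: (4, 4)."""
    X_h = np.append(X_velo, 1.0)  # homogeneous (4,)
    x = P_rect_xx.dot(R_rect_00_4x4).dot(Tr_velo_to_cam).dot(X_h)  # (3,)
    return x[:2] / x[2]           # pixel (u, v)
```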

Training this model

I'm trying to recreate your results but cannot find the parameters.json file for the resnet_params_path in config_res.py.

I noticed that you provide a script to generate this file in resnet_json_parameters.zip, but I can't seem to generate it without errors. Would it be possible to provide the parameters.json file directly?

Best regards,
Shreyas

Some questions about your paper

Hi,
I just read your IROS2018 paper "CalibNet: Self-Supervised Extrinsic Calibration using 3D Spatial Transformer Networks" and have a specific question:

  1. Considering the loss function design (Eq. (2) and Eq. (5)), how do you get the ground truth ($D_{gt}$, $X_{exp}$) to train your network?

Thanks!

How soon is soon?

Would love to try your code after reading your paper. Do you have an ETA on when v1 of the code will be uploaded?

Where do you implement the inverse of Tran_random?

Hi,
X2 is the depth_map_transformed image. It is passed to _simple_transformer -> _3D_meshgrid_batchwise_diff, which aims to convert the 2D image back to the original 3D points in the R_rect_00 coordinate system.
Why is the K matrix inverted ("matrix_inversed") while transformation_matrix and small_transform are not (see all_transformer.py, lines 67-72)? Btw, the small_transform in config_res.py is the same as in dataset_build_color.py; it is not inverted either.
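
For reference, inverting such a 4x4 rigid-body transform has a closed form; a minimal numpy sketch (illustrative, not the repo's code):

```python
import numpy as np

def invert_se3(T):
    """Invert a 4x4 SE(3) transform: inv([R t; 0 1]) = [R^T -R^T t; 0 1],
    cheaper and more stable than a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4, dtype=T.dtype)
    T_inv[:3, :3] = R.T
    T_inv[:3, 3] = -R.T.dot(t)
    return T_inv
```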

Error in data_loader.py while running training

[Screenshot: Screenshot from 2019-11-12 12-57-20]

Screenshot attached above. When I run the training, data_loader.py throws an error. It seems that the projected depth maps are being stored as 4-channel PNG files. Is that a bug in the dataset builder? The only change I made in the dataset builder was to use plt.imsave instead of misc.imsave, since misc.imsave is not available in the latest version of scipy.
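
A likely cause: for a 2D array, matplotlib's plt.imsave applies a colormap and writes an RGBA (4-channel) PNG, unlike the old scipy.misc.imsave. A sketch of one workaround using OpenCV instead (the file name and 16-bit scaling here are my assumptions):

```python
import numpy as np
import cv2

depth = np.random.rand(375, 1242).astype(np.float32)  # stand-in depth map

# Write a single-channel 16-bit PNG; plt.imsave would colormap this 2D
# array into 4-channel RGBA output, which breaks data_loader.py.
cv2.imwrite("depth.png", (depth * 65535).astype(np.uint16))

loaded = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)
assert loaded.ndim == 2  # one channel, as the loader expects
```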

Why is ground-truth information needed?

First, thank you for the creative work.
After reading the paper, I wonder: during training, the correct SE(3) transform between camera and LiDAR is needed, but if we have this information, why bother using this network to predict it?

No OpKernel was registered to support Op 'AuctionMatch'

I'm trying to train your network using the script mentioned in the README, and I'm getting the following error:

Caused by op u'AuctionMatch_1', defined at:
  File "train_model_combined.py", line 87, in <module>
    cloud_loss_validation = model_utils.get_emd_loss(cloud_pred, cloud_exp)
  File "/data/Docker_Codebase/CalibNet/code/model_utils.py", line 42, in get_emd_loss
    matchl_out, matchr_out = tf_auctionmatch.auction_match(pred, gt)
  File "/data/Docker_Codebase/CalibNet/code/tf_ops/emd/tf_auctionmatch.py", line 20, in auction_match
    return auctionmatch_module.auction_match(xyz1,xyz2)
  File "<string>", line 37, in auction_match
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2630, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1204, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op AuctionMatch with these attrs.  Registered devices: [CPU], Registered kernels:
  device=GPU
         [[Node: AuctionMatch_1 = AuctionMatch[](map_1/TensorArrayStack_1/TensorArrayGatherV3, map_2/TensorArrayStack_1/TensorArrayGatherV3)]]

I'm using CUDA 8.0, TensorFlow 1.3.0, and Python 2.7, as mentioned in the README.
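
That traceback says the AuctionMatch op only has a GPU kernel (device=GPU) while TensorFlow only registered a CPU device, which usually means the CPU-only TensorFlow build is installed or CUDA isn't visible. A quick check using the standard TF 1.x device listing:

```python
from tensorflow.python.client import device_lib

# If no device of type 'GPU' appears here, TensorFlow cannot place the
# GPU-only AuctionMatch kernel and raises the InvalidArgumentError above.
print([d.name for d in device_lib.list_local_devices()])
```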

How to normalize the sparse depth map

In this paper, the sparse depth map needs to be normalized to the range [-1, 1]. Here are my questions:

  1. A normal distribution doesn't have a boundary, so what does the range [-1, 1] mean?
  2. Given that the sparse depth map has many NaN pixels with no projected point cloud point, how do you normalize them (see the sketch after this list)?
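
One plausible scheme, offered as an assumption rather than the paper's confirmed method: zero out the empty pixels and min-max scale the valid depths into [-1, 1]:

```python
import numpy as np

def normalize_sparse_depth(depth):
    """Map valid depths into [-1, 1]; pixels with no LiDAR return (NaN)
    are set to 0. One plausible convention, not necessarily the paper's."""
    valid = ~np.isnan(depth)
    d_min, d_max = depth[valid].min(), depth[valid].max()
    out = np.zeros_like(depth)
    out[valid] = 2.0 * (depth[valid] - d_min) / (d_max - d_min) - 1.0
    return out
```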

Scaled intrinsics in train_model_combined.py

At lines 43-44:
fx_scaled = 2*(fx)/np.float32(IMG_WDT) # focal length x scaled for -1 to 1 range
fy_scaled = 2*(fy)/np.float32(IMG_HT) # focal length y scaled for -1 to 1 range

I think that if fx = fy = 7.215377e+02, IMG_HT = 375, and IMG_WDT = 1242, then
fx_scaled = 1.1619
fy_scaled = 3.8482

so the range of the focal length is not -1 to 1. Can you help me?

Also, at lines 45-46:
cx_scaled = -1 + 2*(cx - 1.0)/np.float32(IMG_WDT) # optical center x scaled for -1 to 1 range
cy_scaled = -1 + 2*(cy - 1.0)/np.float32(IMG_HT) # optical center y scaled for -1 to 1 range

Why are you subtracting 1 in (cx - 1.0) and (cy - 1.0)?
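
My reading, offered as an assumption: the scaling rewrites the intrinsics for a normalized image coordinate system in which pixel positions, not focal lengths, span [-1, 1], and the (cx - 1.0) term looks like a shift from 1-indexed to 0-indexed pixel coordinates. A numeric sketch with KITTI-like values:

```python
import numpy as np

fx = fy = 7.215377e+02
cx, cy = 6.095593e+02, 1.728540e+02  # KITTI-like optical center (assumed)
IMG_HT, IMG_WDT = 375, 1242

fx_scaled = 2 * fx / np.float32(IMG_WDT)               # 1.1619, and that's fine
cx_scaled = -1 + 2 * (cx - 1.0) / np.float32(IMG_WDT)

# Pixel columns u in [0, IMG_WDT) map into [-1, 1); it is the coordinates
# that are normalized, so the scaled focal length may exceed 1.
u = 1241.0                                  # rightmost column
x_ndc = -1 + 2 * u / np.float32(IMG_WDT)    # ~0.998, inside [-1, 1]
print(fx_scaled, cx_scaled, x_ndc)
```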

parsed_set.txt is empty

I sincerely look forward to your reply!
Following your advice, I found that "parsed_set.txt" is empty. The README says:

Dataset Preparation:
To prepare the dataset, run /dataset_files/dataset_builder_parallel.sh in the directory where you wish to store it. This will also create a parser file, parsed_set.txt, for the dataset, which contains the file names used for training.

Is the velo_to_cam matrix necessary?

In dataset_build_color.py:
velo_to_cam = np.vstack((np.hstack((velo_to_cam_R, velo_to_cam_T)), np.array([[0,0,0,1]])))

Is the velo_to_cam matrix the calibration result? Is it necessary?

Thanks!
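
For context, the velo_to_cam R and T come from KITTI's ground-truth calibration file calib_velo_to_cam.txt. A sketch of how that 4x4 extrinsic is typically assembled (the parsing here is my own, not the repo's):

```python
import numpy as np

def load_velo_to_cam(path="calib_velo_to_cam.txt"):
    """Build the 4x4 LiDAR->camera extrinsic from KITTI's ground-truth
    calibration file, which stores 'R: r11 ... r33' and 'T: t1 t2 t3'."""
    vals = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key.strip() in ("R", "T"):
                vals[key.strip()] = np.array(rest.split(), dtype=np.float64)
    R = vals["R"].reshape(3, 3)
    T = vals["T"].reshape(3, 1)
    return np.vstack((np.hstack((R, T)), np.array([[0, 0, 0, 1]])))
```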

Where can I get the evaluation code?

I'm trying to build some programs using CalibNet, so I'm looking for its evaluation code.
Where can I get the code for the direct evaluation/testing pipeline for point cloud calibration?

The dataset seems to be unavailable

I tried to train your net, but I'm having trouble downloading your dataset. It seems that the dataset is not available now. Could you please check the link?
