pose-warping's Introduction

Pose-Warping for View Synthesis / DIBR

Code to warp a frame from a source view to a target view, given the depth map of the source view, the camera poses of both views, and the camera intrinsic matrix.

Features:

  1. The warping is implemented in Python using the numpy/pytorch packages and is fully vectorized.
  2. It uses inverse bilinear interpolation (which can be considered a trivial form of splatting).
  3. The pytorch implementation is differentiable.
  4. It also contains splatting/interpolation code given flow and/or depth.
  5. It supports warping of masked frames/flows, i.e., cases where the frame/flow is unknown at certain pixels.
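Conceptually, the inverse bilinear interpolation in feature 2 distributes each source pixel's color to the four integer neighbours of its warped location, weighted by proximity, and then normalizes by the accumulated weights. A minimal numpy sketch of that idea (a simplified illustration without depth weighting; function and variable names are mine, not this repo's API):

```python
import numpy as np

def splat_bilinear(frame, flow):
    """Forward-warp `frame` with `flow` via bilinear splatting: each
    source pixel spreads its color to the 4 integer neighbours of its
    target location, weighted by proximity, then weights are normalized."""
    h, w, c = frame.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    tx = xs + flow[..., 0]                       # target x per source pixel
    ty = ys + flow[..., 1]                       # target y per source pixel
    x0, y0 = np.floor(tx).astype(int), np.floor(ty).astype(int)
    acc = np.zeros((h, w, c))                    # accumulated weighted color
    wacc = np.zeros((h, w, 1))                   # accumulated weights
    for dx, dy in [(0, 0), (1, 0), (0, 1), (1, 1)]:
        xn, yn = x0 + dx, y0 + dy                # one of the 4 neighbours
        wgt = (1 - np.abs(tx - xn)) * (1 - np.abs(ty - yn))
        valid = (xn >= 0) & (xn < w) & (yn >= 0) & (yn < h) & (wgt > 0)
        # unbuffered accumulation: multiple sources may hit the same target
        np.add.at(acc, (yn[valid], xn[valid]), wgt[valid][:, None] * frame[valid])
        np.add.at(wacc, (yn[valid], xn[valid]), wgt[valid][:, None])
    mask = wacc[..., 0] > 0                      # pixels that received anything
    out = np.zeros_like(acc)
    out[mask] = acc[mask] / wacc[mask]           # normalize
    return out, mask
```

With zero flow this reproduces the input frame exactly, which is a handy sanity check before plugging in real flow or depth-derived flow.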

Other implementations:

  1. Reference View Synthesizer (RVS)
  2. Open MVS
  3. A splatting-based differentiable warping has been implemented in PyTorch3D and is used in SynSin
  4. Softmax Splatting

Citation

If you find this work useful, please cite it as

@misc{somraj2020posewarping,
    title = {Pose-Warping for View Synthesis / {DIBR}},
    author = {Somraj, Nagabhushan},
    year = {2020},
    url = {https://github.com/NagabhushanSN95/Pose-Warping}
}

pose-warping's People

Contributors

nagabhushansn95


pose-warping's Issues

Why is prox_weight_nw divided by depth_weights?

Thanks for sharing your code. I traced the code and found something I can't figure out.
How do you normalize depth_weights? What does 50 mean?

sat_depth1 = torch.clamp(depth1, min=0, max=1000)
log_depth1 = torch.log(1 + sat_depth1)  # (b, 2, h, w)
depth_weights = torch.exp(log_depth1 / log_depth1.max() * 50)

And why is every weight divided by depth_weights?

weight_nw = torch.moveaxis(prox_weight_nw * mask1 * flow12_mask / depth_weights, [0, 1, 2, 3], [0, 3, 1, 2])
weight_sw = torch.moveaxis(prox_weight_sw * mask1 * flow12_mask / depth_weights, [0, 1, 2, 3], [0, 3, 1, 2])
weight_ne = torch.moveaxis(prox_weight_ne * mask1 * flow12_mask / depth_weights, [0, 1, 2, 3], [0, 3, 1, 2])
weight_se = torch.moveaxis(prox_weight_se * mask1 * flow12_mask / depth_weights, [0, 1, 2, 3], [0, 3, 1, 2])
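One way to read this (my interpretation, not an official answer from the author): the exponential turns the division into a soft z-buffer. When several source pixels splat onto the same target pixel, dividing their splat weights by depth_weights makes nearer pixels dominate after normalization, and the constant 50 only controls how sharply near beats far. A small numpy demonstration of the effect:

```python
import numpy as np

# Two source pixels splat onto the same target pixel: one near (depth 2,
# white) and one far (depth 10, black). Mirroring the torch snippet's
# math in numpy: dividing splat weights by depth_weights acts as a soft
# z-buffer, so the near pixel dominates the normalized blend.
depth = np.array([2.0, 10.0])
color = np.array([1.0, 0.0])              # near pixel white, far pixel black
log_depth = np.log(1 + depth)
depth_weights = np.exp(log_depth / log_depth.max() * 50)
w = 1.0 / depth_weights                   # per-pixel splat weight
blended = (w * color).sum() / w.sum()     # normalized accumulation
print(blended)                            # close to 1.0: the near pixel wins
```

Without the division, both pixels would contribute roughly equally and the occluded far surface would bleed through.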

Good job! I have a question about depth map bits

Hello!
Could you please tell me what the unit of the depth map is? I want to use the depth map of my own picture, but it seems the depth image's units do not match, so the warping results are different.

Camera parameters and depth map

Hi, thanks for the work. I am new to this field and tried to run this on some RGBD datasets. I am confused about the camera parameters and their relation to the depth. Is the transformation matrix used in the code a camera_to_world matrix or a world_to_camera matrix? And as mentioned in #3, the unit of the depth map is supposed to be the same as the unit of the translation vector. However, I am wondering whether it matters which world coordinate convention is used to define the translation vector.
Thanks in advance!
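For reference, the convention most DIBR code assumes (an assumption about the usual setup, not a statement about this repo): with a world_to_camera extrinsic [R | t], a world point X_w maps to camera coordinates X_c = R X_w + t, and the depth map stores the z-component of X_c, so depth inherits the unit of t. The choice of world coordinate convention cancels out as long as poses and depth are mutually consistent. A minimal sketch (names are mine):

```python
import numpy as np

def world_to_camera(T, x_world):
    """Apply a 4x4 world-to-camera extrinsic to a 3D world point."""
    x_h = np.append(x_world, 1.0)         # homogeneous coordinates
    return (T @ x_h)[:3]

# Camera shifted so that its optical center sits 2 units along +z in the
# world: a world point at z=5 then has depth 3 in the camera frame.
T = np.eye(4)
T[2, 3] = -2.0
x = np.array([0.0, 0.0, 5.0])
print(world_to_camera(T, x))              # [0. 0. 3.]
```

If your dataset provides camera_to_world matrices instead, invert them (np.linalg.inv) before use.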

Warping affected while handling Invalid gradients during training

Thanks for the repo!
I am using the torch version of your code in a training setup, to warp frame 1 into frame 2 and use it as supervision.
During gradient computation, I run into an error at line 197 due to a division by zero, I think at cropped_weights:

warped_frame2 = torch.where(mask, cropped_warped_frame / cropped_weights, zero_tensor)

When I add a small residue like 1e-10, the warping using ground truth, which was fine all along, gets affected: the warped frame is black. If I remove the residue and run the warping, I get a proper warped frame.
So adding this residue for training purposes affects the warping itself.
How do I handle this?
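A common fix for this class of problem (a sketch, not this repo's code): torch.where computes gradients through both branches, so frame / weights still yields NaN gradients at zero weights even though the mask discards those forward values. Sanitizing the denominator before the division keeps the forward result identical on valid pixels while keeping gradients finite everywhere:

```python
import torch

# Hypothetical 1-D stand-in for cropped_warped_frame / cropped_weights.
weights = torch.tensor([0.0, 0.5, 2.0], requires_grad=True)
frame = torch.tensor([1.0, 1.0, 1.0])
mask = weights > 0

# Replace zero denominators by 1 *before* dividing, so 0/0 never occurs
# in the forward pass and no inf/NaN leaks into the backward pass.
safe_weights = torch.where(mask, weights, torch.ones_like(weights))
out = torch.where(mask, frame / safe_weights, torch.zeros_like(frame))
out.sum().backward()
print(torch.isfinite(weights.grad).all())  # tensor(True)
```

This avoids the residue entirely, so the warped frame is unchanged where the mask is valid.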

depth file

Hi, I've tried the script on the data you provided and it works fine. But when I use a depth map inferred by MiDaS, the result goes wrong. How did you get depth1.npy? From a depth sensor or an algorithm? Thanks a lot.
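A likely cause worth noting here: MiDaS predicts relative inverse depth, defined only up to an unknown scale and shift, while warping code of this kind expects metric depth in the same unit as the camera translation. One common workaround (an assumption on my part, not part of this repo) is to fit the scale and shift from a few pixels with known metric depth, then invert the whole map:

```python
import numpy as np

def inverse_depth_to_depth(midas_out, known_inv, known_pred):
    """Fit pred ~ s * (1/depth) + t by least squares from pixels with
    known metric depth, then convert the map: depth = s / (pred - t)."""
    A = np.stack([known_inv, np.ones_like(known_inv)], axis=1)
    s, t = np.linalg.lstsq(A, known_pred, rcond=None)[0]
    return s / (midas_out - t)
```

Without such an alignment step, the predicted values disagree with the pose translation unit and the warp lands in the wrong place.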
