
Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation (ECCV 2018)


License

Copyright (c) 2018 NVIDIA Corp. All Rights Reserved. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Project page

NVIDIA Research project page: http://research.nvidia.com/publication/2018-09_Learning-Rigidity-in

Paper

Project members

Summary

This repository contains the implementation of our full inference algorithm, including the rigidity network, the flow network, and the refinement optimization.
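To make the idea concrete, here is a minimal NumPy sketch of how a rigidity mask, optical flow, and depth can be fused into a 3D motion field. This is an illustration of the concept only, not the repository's implementation; all function and variable names are ours.

```python
import numpy as np

def scene_flow_from_rigidity(depth0, depth1, flow, rigidity, K, T):
    """Sketch: fuse optical flow, two depth maps, and a rigidity mask into
    a per-pixel 3D motion field. Rigid pixels move only by the camera
    transform T; non-rigid pixels keep the full 3D displacement implied by
    flow + depth. Names are illustrative, not the repository's API."""
    H, W = depth0.shape
    fx, fy, cx, cy = K
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # back-project frame-0 pixels to 3D camera coordinates
    X0 = np.stack([(u - cx) * depth0 / fx,
                   (v - cy) * depth0 / fy,
                   depth0], axis=-1)
    # 3D points of the flow-displaced pixels in frame 1 (nearest-pixel lookup)
    u1 = np.clip(np.round(u + flow[..., 0]).astype(int), 0, W - 1)
    v1 = np.clip(np.round(v + flow[..., 1]).astype(int), 0, H - 1)
    d1 = depth1[v1, u1]
    X1 = np.stack([(u1 - cx) * d1 / fx, (v1 - cy) * d1 / fy, d1], axis=-1)
    # motion a perfectly rigid point undergoes under the camera transform
    R, t = T[:3, :3], T[:3, 3]
    X1_rigid = X0 @ R.T + t
    full = X1 - X0       # total observed 3D displacement
    ego = X1_rigid - X0  # displacement explained by camera motion alone
    # rigid pixels: ego-motion only; non-rigid pixels: full motion
    m = rigidity[..., None].astype(float)
    return m * ego + (1 - m) * full
```

With a fully rigid scene and zero flow, the returned motion field reduces to the camera translation at every pixel, which is the intuition behind using rigidity to regularize the motion estimate.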

The data creation toolkit is maintained in the independent RefRESH repository:

git clone https://github.com/lvzhaoyang/RefRESH

If you use this code, our generated data, or the dataset creation tool, please cite the following paper:

Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation,
Zhaoyang Lv, Kihwan Kim, Alejandro Troccoli, Deqing Sun, James M. Rehg, Jan Kautz, European Conference on Computer Vision (ECCV) 2018

@inproceedings{Lv18eccv,  
  title     = {Learning Rigidity in Dynamic Scenes with a Moving Camera for 3D Motion Field Estimation},  
  author    = {Lv, Zhaoyang and Kim, Kihwan and Troccoli, Alejandro and Sun, Deqing and Rehg, James and Kautz, Jan},  
  booktitle = {ECCV},  
  year      = {2018}  
}

Usage

We provide both a Docker-based setup for containerization and a generic setup with conda.

An inference example is currently available. We are working on a release version of the training code, which will be available soon.

1. Running with Docker

Prerequisites

  • Install nvidia-docker.
  • Clone gtsam into the 'external_packages/' folder (do not build it there yet; Docker will build it inside the container later).

Pretrained model for default inference

Download the following models and put them in the 'weights/' directory.

Build with Docker

docker build . --tag rigidity:1.0

Now go to the root folder of this project and run the run_container.sh script. This step is important because the script mounts the project folder inside the Docker container.

sudo ./run_container.sh

Run the inference code

Run the example inference code with default weights with the refinement step and visualization flags:

/rigidity# python run_inference.py --post_refine --visualize

Then the output of the testing example (using an example pair from Sintel's market_5) will pop up.

You can also check the 'results/' folder to see all the saved images.
Optionally, you can specify the output folder with --output_path. See more options with --help.

The estimated pose from the refinement will be shown as:

Estimated two-view transform
[[[ 0.99974869  0.01963792 -0.01081263 -0.04639582]
  [-0.01937608  0.9995287   0.02381045 -0.05028503]
  [ 0.01127512 -0.02359496  0.99965802  0.44513745]
  [ 0.          0.          0.          1.        ]]]
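The printed output is a 4x4 homogeneous two-view transform packing a 3x3 rotation and a translation vector. As a repository-independent sanity check, the following NumPy snippet verifies that such an output is a valid SE(3) pose and forms its closed-form inverse:

```python
import numpy as np

# the two-view transform printed above: rotation R, translation t,
# homogeneous last row [0 0 0 1]
T = np.array([[ 0.99974869,  0.01963792, -0.01081263, -0.04639582],
              [-0.01937608,  0.9995287 ,  0.02381045, -0.05028503],
              [ 0.01127512, -0.02359496,  0.99965802,  0.44513745],
              [ 0.        ,  0.        ,  0.        ,  1.        ]])

R, t = T[:3, :3], T[:3, 3]
# a valid rotation is orthonormal with determinant +1
assert np.allclose(R @ R.T, np.eye(3), atol=1e-4)
assert np.isclose(np.linalg.det(R), 1.0, atol=1e-4)
# the inverse pose in closed form, without np.linalg.inv
T_inv = np.eye(4)
T_inv[:3, :3] = R.T
T_inv[:3, 3] = -R.T @ t
assert np.allclose(T @ T_inv, np.eye(4), atol=1e-4)
```

The small translation in x/y and the larger z component match a mostly forward-moving camera in this example.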

You can also manually specify the network weights if you have other models:

/rigidity$ python run_inference.py --pwc_weights weights/pwc_net_chairs.pth.tar --rigidity_weights weights/rigidity_net.pth.tar --post_refine --visualize

# to check the help options
/rigidity$ python run_inference.py --help

2. Running with conda

The code was developed with Python 2.7, PyTorch 0.3.1, and CUDA 8.0. We provide an Anaconda environment with all required dependencies. To use it, ensure you have the Anaconda Python 2.7 version installed and that conda is on your environment path, then run the following script to set up the environment automatically:

# create the anaconda environment
conda env create -f setup/rigidity.yml

# activate the environment
conda activate rigidity

sh setup/install_for_network.sh
# If you don't need the refinement stage, you can skip the next script
# and set the post_refine flag to false.
sh setup/install_for_refinement.sh

Run the inference code

Download the following pre-trained models and put them in the 'weights/' directory:

Please refer to the issue list of PWC-Net/PyTorch if you run into any issues with the flow models.

Run the example inference code with the default weights, the refinement step, and visualization:

python run_inference.py --post_refine --visualize

Or you can manually specify the network weights:

python run_inference.py --pwc_weights weights/pwc_net_chairs.pth.tar --rigidity_weights weights/rigidity_net.pth.tar --post_refine --visualize

# to check the help functions
python run_inference.py --help

To run inference on your own inputs, you can use the simple default loader provided in the run_inference.py example. You need to set the color image directory, the depth image directory (only the '.dpt' and '.png' formats are supported), and the pinhole camera intrinsic parameters ('fx,fy,cx,cy'). For example:

python run_inference.py --color_dir data/market_5/clean --depth_dir data/market_5/depth --intrinsic 1120,1120,511.5,217.5 --post_refine --visualize
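The 'fx,fy,cx,cy' string defines a pinhole camera model. As a small, repository-independent illustration of how a depth pixel is back-projected with these parameters (the Sintel values below are taken from the example command; the function name is ours):

```python
import numpy as np

def backproject(u, v, depth, intrinsic="1120,1120,511.5,217.5"):
    """Back-project pixel (u, v) with a given depth value to a 3D point in
    camera coordinates, using pinhole intrinsics given as 'fx,fy,cx,cy'."""
    fx, fy, cx, cy = map(float, intrinsic.split(","))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# the principal point back-projects onto the optical axis
print(backproject(511.5, 217.5, 2.0))  # -> [0. 0. 2.]
```

If your own camera's intrinsics are wrong, the recovered 3D geometry (and hence the refinement) will be distorted, so these four values should come from your sensor's calibration.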

Dataset

learningrigidity's People

Contributors

kihwan23, lvzhaoyang


learningrigidity's Issues

Docker with conda env

I am trying to use Docker with Jupyter like this:
nvidia-docker run -it --net=host -p 8087:8087 -v ~/learningrigidity/:/rigidity learningrigidity:1.10 bash -ci ". /opt/conda/etc/profile.d/conda.sh && conda activate rigidity && jupyter lab --port 8087 --ip=0.0.0.0 --no-browser --allow-root"
The problem: in the terminal I get a valid Python 2.7.15 with Anaconda, but when I try to import torch in a notebook it fails.
I also added the following to the Dockerfile for Jupyter:

RUN DEBIAN_FRONTEND=noninteractive && apt-get update -y && apt-get install -y --no-install-recommends \
    build-essential \
    git \
    vim \
    wget
RUN . /opt/conda/etc/profile.d/conda.sh && \
    conda install ipykernel && \
    conda install nb_conda && \
    conda install -y jupyter jupyterlab
RUN rm -rf /rigidity

ENV PATH /opt/conda/envs/env/bin:$PATH
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
    python get-pip.py && \
    rm get-pip.py
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y python-pip
RUN DEBIAN_FRONTEND=noninteractive python -m pip install --upgrade pip
RUN DEBIAN_FRONTEND=noninteractive pip install -U jupyter && pip install jupyterlab
RUN DEBIAN_FRONTEND=noninteractive python -m ipykernel install --user --name mykernel
ENV PATH /opt/conda/envs/rigidity/:$PATH

Training

Good morning, thank you for your work. Do you have any code for training the rigidity network?

Import error

Hello, I want to compare my point cloud aligner with yours, but I have some trouble importing pyFlow2Pose. gtsam and flow2pose were built without problems.

ImportError Traceback (most recent call last)
      1 sys.path.append('external_packages/flow2pose/build')
----> 2 from pyFlow2Pose import pyFlow2Pose

ImportError: external_packages/flow2pose/build/pyFlow2Pose.so: undefined symbol: _ZN5pbcvt16fromNDArrayToMatEP7_object

ResolvePackageNotFound

Hi, when I try to create a conda environment with rigidity.yml, I get this error:
ResolvePackageNotFound:

  • cudatoolkit==8.0=3
  • pytorch==0.3.1=py27_cuda8.0.61_cudnn7.1.2_3

I am using anaconda3. Any solutions?
Thanks

Undeclared transform_to(pose, pt2_)

When I try to build this repo from the docker image, or from conda, I get an error with the flow2pose library. Here is my build error:
[ 44%] Built target pyboostcv_bridge
/rigidity/external_packages/flow2pose/flow2pose_lib/flow2pose.cpp: In member function 'gtsam::Pose3 Flow2Pose::solve(const std::vector<gtsam::Point3>&, const std::vector<gtsam::Point3>&, const gtsam::Pose3&, const std::vector&)':
/rigidity/external_packages/flow2pose/flow2pose_lib/flow2pose.cpp:192:49: error: 'transform_to' was not declared in this scope
Point3_ pt2_1_ = transform_to(pose, pt2_);
                 ^
CMakeFiles/Flow2pose.dir/build.make:62: recipe for target 'CMakeFiles/Flow2pose.dir/flow2pose_lib/flow2pose.cpp.o' failed
make[2]: *** [CMakeFiles/Flow2pose.dir/flow2pose_lib/flow2pose.cpp.o] Error 1
make[1]: *** [CMakeFiles/Flow2pose.dir/all] Error 2
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/Flow2pose.dir/all' failed
make: *** [all] Error 2
Makefile:127: recipe for target 'all' failed

Any tips on modifying the script to fix that? I tried including some of the gtsam header files that had similar functions, but none appeared to take those types.

Releasing the training code

Hi, thanks for providing the inference code and dataset! It is really helpful.
As mentioned in README,

An inference example is currently available. We are working on a release version of the training code, which will be available soon.

do you plan to release the training code soon?

Thank you in advance!

Best regards
