
cmr's Introduction

Learning Category-Specific Mesh Reconstruction from Image Collections

Angjoo Kanazawa*, Shubham Tulsiani*, Alexei A. Efros, Jitendra Malik

University of California, Berkeley
In ECCV, 2018

This code is no longer actively maintained. For PyTorch 1.x, Python 3, and PyTorch NMR support, please see this implementation from chenyuntc.

Project Page
[Teaser image]

Requirements

  • Python 2.7
  • PyTorch tested on version 0.3.0.post4

Installation

Setup virtualenv

virtualenv venv_cmr
source venv_cmr/bin/activate
pip install -U pip
# re-activate so the shell picks up the upgraded pip
deactivate
source venv_cmr/bin/activate
pip install -r requirements.txt

Install Neural Mesh Renderer and Perceptual loss

cd external;
bash install_external.sh

Demo

  1. From the cmr directory, download the trained model:
wget https://people.eecs.berkeley.edu/~kanazawa/cachedir/cmr/model.tar.gz && tar -vzxf model.tar.gz

You should see cmr/cachedir/snapshots/bird_net/

  2. Run the demo:
python -m cmr.demo --name bird_net --num_train_epoch 500 --img_path cmr/demo_data/img1.jpg
python -m cmr.demo --name bird_net --num_train_epoch 500 --img_path cmr/demo_data/birdie.jpg

Training

Please see doc/train.md

Citation

If you use this code for your research, please consider citing:

@inProceedings{cmrKanazawa18,
  title={Learning Category-Specific Mesh Reconstruction from Image Collections},
  author={Angjoo Kanazawa and Shubham Tulsiani and Alexei A. Efros and Jitendra Malik},
  booktitle={ECCV},
  year={2018}
}

cmr's People

Contributors

akanazawa, shubhtuls


cmr's Issues

Simple question about your implementation

I have a simple question. In mesh_net.py, lines 289-290, you set img_H and img_W as:

  • img_H = int(2**np.floor(np.log2(np.sqrt(num_faces) * opts.tex_size)))
  • img_W = 2 * img_H

Could you explain the reason?
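
For a concrete sense of what that expression produces, here is a small numeric illustration (the values of num_faces and tex_size below are made up for illustration, not cmr's defaults):

import numpy as np

num_faces, tex_size = 1000, 6  # hypothetical values, not the repository defaults
img_H = int(2 ** np.floor(np.log2(np.sqrt(num_faces) * tex_size)))
# sqrt(1000) * 6 ~= 189.7, log2(189.7) ~= 7.57, floor -> 7, so img_H = 2**7 = 128
img_W = 2 * img_H  # 256

In other words, the height is rounded down to the nearest power of two, and the UV image is given a 2:1 aspect ratio.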

Update to PyTorch 1.x and Python 3, remove chainer/cupy dependency

Nice work! I've been working on a similar task; your code is clean and intuitive, a good starting point for the project.

But I struggled to set up the environment, so I made some updates:

  • Upgrade support to Python 3
    Python 2 support ends today.

  • Upgrade to PyTorch 1.3
    PyTorch 0.3 is about two years old.

  • Remove the chainer/cupy dependency
    Setting up cupy is complicated, and there is now a PyTorch version of the neural renderer.

The repo is in chenyuntc/cmr.
I can make a pull request if you would like.

demo.py plot_trisurf

Hi Angjoo,
I'm still trying to find a way to plot the bird demo as an interactive 3D visualization, similar to projects like this.
Have you considered using matplotlib's plot_trisurf function in demo.py to do something similar?
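
For what it's worth, here is a minimal sketch of viewing a mesh interactively with plot_trisurf. The verts/faces arrays below are stand-ins; in practice they would come from the predictor's outputs (the exact output keys in cmr may differ):

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the '3d' projection

# Stand-in mesh (a single triangle) so the snippet runs on its own.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
faces = np.array([[0, 1, 2]])

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2], triangles=faces)
plt.show()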

cupy.cuda.compiler.CompileException: nvrtc: error: failed to load builtins

When running the demo:
python -m cmr.demo --name bird_net --num_train_epoch 500 --img_path cmr/demo_data/img1.jpg

I am getting this error:

/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
Setting up model..
loading /home/ubuntu/akshay/cmr/nnutils/../cachedir/snapshots/bird_net/pred_net_500.pth..
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/ubuntu/akshay/cmr/demo.py", line 119, in <module>
app.run(main)
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/absl/app.py", line 274, in run
_run_main(main, argv)
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/absl/app.py", line 238, in _run_main
sys.exit(main(argv))
File "/home/ubuntu/akshay/cmr/demo.py", line 108, in main
outputs = predictor.predict(batch)
File "cmr/nnutils/predictor.py", line 110, in predict
self.forward()
File "cmr/nnutils/predictor.py", line 149, in forward
self.cam_pred)
File "cmr/nnutils/nmr.py", line 183, in forward
return Render(self.renderer)(verts, faces)
File "cmr/nnutils/nmr.py", line 114, in forward
masks = self.renderer.forward_mask(vs, fs)
File "cmr/nnutils/nmr.py", line 50, in forward_mask
self.masks = self.renderer.render_silhouettes(self.vertices, self.faces)
File "build/bdist.linux-x86_64/egg/neural_renderer/renderer.py", line 38, in render_silhouettes
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/chainer/functions/array/concat.py", line 90, in concat
y, = Concat(axis).apply(xs)
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/chainer/function_node.py", line 245, in apply
outputs = self.forward(in_data)
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/chainer/functions/array/concat.py", line 44, in forward
return xp.concatenate(xs, self.axis),
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/manipulation/join.py", line 49, in concatenate
return core.concatenate_method(tup, axis)
File "cupy/core/core.pyx", line 2439, in cupy.core.core.concatenate_method
File "cupy/core/core.pyx", line 2482, in cupy.core.core.concatenate_method
File "cupy/core/core.pyx", line 2533, in cupy.core.core.concatenate
File "cupy/core/core.pyx", line 1630, in cupy.core.core.ndarray.__setitem__
File "cupy/core/core.pyx", line 3101, in cupy.core.core._scatter_op
File "cupy/core/elementwise.pxi", line 823, in cupy.core.core.ufunc.__call__
File "cupy/util.pyx", line 39, in cupy.util.memoize.decorator.ret
File "cupy/core/elementwise.pxi", line 622, in cupy.core.core._get_ufunc_kernel
File "cupy/core/elementwise.pxi", line 33, in cupy.core.core._get_simple_elementwise_kernel
File "cupy/core/carray.pxi", line 170, in cupy.core.core.compile_with_cache
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 123, in compile_with_cache
base = _preprocess('', options, arch)
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 86, in _preprocess
result = prog.compile(options)
File "/home/ubuntu/akshay/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 233, in compile
raise CompileException(log, self.src, self.name, options)
cupy.cuda.compiler.CompileException: nvrtc: error: failed to load builtins

I tried the solution mentioned here: https://groups.google.com/forum/#!topic/chainer-jp/GKNe5KY_fm0, but it is not working.
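
For what it's worth, "nvrtc: error: failed to load builtins" usually means nvrtc could not load its builtins library, which often comes down to a mismatch between the installed cupy build and the CUDA toolkit, or the toolkit's lib64 directory missing from LD_LIBRARY_PATH. A quick sanity check (sketch):

# Confirm the cupy build sees the CUDA toolkit you expect.
import cupy
print(cupy.__version__)
print(cupy.cuda.runtime.runtimeGetVersion())  # e.g. 9000 corresponds to CUDA 9.0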

404 error while downloading cachedir.tar.gz

When I tried to download the CUB annotation file by running:
wget https://people.eecs.berkeley.edu/~shubhtuls/cachedir/cmr/cachedir.tar.gz
I got a 404 error.
Is this file currently available for download?

textured human reconstruction

Professor Angjoo Kanazawa:
I want to try to reconstruct a textured human body shape from a single image with your model. Is this feasible? Could you give me some suggestions? Thanks!
@akanazawa @shubhtuls

Dataset!

Hello, does anyone know where to download the dataset?

Did you do the training on Pascal3D+ car and aeroplane together or separately?

Did you train CMR on the PASCAL3D+ car and aeroplane categories together or separately? I noticed that in DRC you mention training a category-agnostic CNN, which means you trained DRC on aeroplane and car together, and you report DRC's 3D IoU on the aeroplane and car test data as 0.42 and 0.67 respectively. However, in this repo the p3d dataloader only loads 'car'.

Simple Question about Data Preprocessing

Hello,
I would like to train a model on ShapeNet, which has neither foreground masks nor semantic annotations. What should I do, and in which format should I save the data after annotation?

ERROR: Could not find a version that satisfies the requirement h5py

conda create -n cmr python=3
conda activate cmr
conda install pytorch torchvision -c pytorch
pip install -r requirements.txt

But I get errors for almost all of the dependencies in requirements.txt, starting with h5py:

(cmr) [moldach@cedar5 cmr]$ pip install -r requirements.txt
Looking in links: /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/avx2, /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/nix/generic, /cvmfs/soft.computecanada.ca/custom/python/wheelhouse/generic
Collecting absl-py
  Downloading absl_py-0.12.0-py3-none-any.whl (129 kB)
     |████████████████████████████████| 129 kB 10.4 MB/s 
Collecting Cython
  Downloading Cython-0.29.22-py2.py3-none-any.whl (980 kB)
     |████████████████████████████████| 980 kB 18.1 MB/s 
ERROR: Could not find a version that satisfies the requirement h5py
ERROR: No matching distribution found for h5py

Where are the results saved?

Hello,
I ran the project and it dropped into an "ipdb" prompt. I typed "c" to continue, but it just returned to the original terminal and I have not seen any output results. What should I do next? Did I run the script successfully? Thanks.

/root/Documents/cmr/demo.py(98)visualize()
97 import ipdb
---> 98 ipdb.set_trace()
99

ipdb>
@shubhtuls @akanazawa
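
As written, the demo stops at an ipdb breakpoint inside visualize() and appears to show its results interactively rather than saving them. One possible workaround, as a sketch, assuming the plots are drawn with matplotlib (the exact figures depend on demo.py):

import matplotlib.pyplot as plt

# Save every currently open matplotlib figure, e.g. just before the
# ipdb.set_trace() call in demo.py; figure contents depend on the demo code.
for num in plt.get_fignums():
    plt.figure(num).savefig('demo_output_%d.png' % num)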

Getting an AssertionError when trying to run the CMR demo code

Hi, I'm trying to run the demo code for CMR but my team and I are getting this error across all devices. We'd really appreciate it if you could help us with this. Thank you.

Setting up model..
Traceback (most recent call last):
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/content/cmr/demo.py", line 119, in <module>
app.run(main)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "/content/cmr/demo.py", line 107, in main
predictor = pred_util.MeshPredictor(opts)
File "/content/cmr/nnutils/predictor.py", line 39, in __init__
self.model = mesh_net.MeshNet(img_size, opts, nz_feat=opts.nz_feat)
File "/content/cmr/nnutils/mesh_net.py", line 236, in __init__
verts, faces, num_indept, num_sym, num_indept_faces, num_sym_faces = mesh.make_symmetric(verts, faces)
File "/content/cmr/utils/mesh.py", line 44, in make_symmetric
assert(prop_left_inds.shape[0] == num_sym)
AssertionError

Experiment on PASCAL3D+ dataset

Hi, I'm a beginner in 3D reconstruction. This is really impressive work!
I'm now trying to apply it to the PASCAL3D+ dataset and am facing some problems with the camera pose.
I find that PASCAL3D+ provides cam_pose in an azimuth/elevation system for each picture, along with perspective projection code. I just wonder how you processed the dataset to match cmr?

  1. Simply run SfM again, ignoring the ground-truth cam_pose provided by PASCAL3D+?
  2. Calculate the scale/translation/rotation for PASCAL corresponding to the CUB setup? But I don't know if that works, because PASCAL uses perspective projection while CUB uses weak-perspective projection (see the sketch below).
  3. Rewrite the Python code to project PASCAL with a perspective camera?
    If I do this, I guess the main problem would be how to adjust the neural renderer to match the projection.

I'd really appreciate it if you could offer any ideas/materials/codes in this. Thanks a lot!
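
Regarding option 2, for reference: a weak-perspective (scaled-orthographic) projection can be written as a rotation, dropping the depth coordinate, then a scale and a 2D translation. A minimal numpy sketch with generic symbols (s, R, t are illustrative names, not cmr's variables):

import numpy as np

def weak_perspective_project(X, s, R, t):
    # X: (N, 3) vertices, s: scalar scale, R: (3, 3) rotation, t: (2,) translation
    X_cam = X.dot(R.T)           # rotate into the camera frame
    return s * X_cam[:, :2] + t  # drop depth, then scale and translate in 2D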

Getting errors when trying to run the demo

Hello, I tried to run the demo (python -m cmr.demo --name bird_net --num_train_epoch 500 --img_path cmr/demo_data/img1.jpg) but got two errors:

/tmp/tmpsIvbDf/3a3498fcc1b5eb4a0d21f0025a35669d_2.cubin.cu(13): error: a value of type "const float *" cannot be used to initialize an entity of type "float *"

/tmp/tmpsIvbDf/3a3498fcc1b5eb4a0d21f0025a35669d_2.cubin.cu(14): error: a value of type "const float *" cannot be used to initialize an entity of type "float *"

Could you please tell me how to solve this problem? Looking forward to your reply. Thanks.

Failed to load builtins

Hi, thanks for sharing the code! I encountered the following error and am wondering how to solve it:

venv_cmr/local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 233, in compile
    raise CompileException(log, self.src, self.name, options)
cupy.cuda.compiler.CompileException: nvrtc: error: failed to load builtins

Annotation mat file of PASCAL

I find that the link to the annotation .mat files is down. I found a temporary link for CUB in issue #7, but PASCAL's annotations are still missing. Is there another link where I can find them?
Thank you so much!

Undefined name 'convert2np' in ./benchmark/evaluate.py

There is a missing import in ./benchmark/evaluate.py, so convert2np() is an undefined name.

convert2np() is defined at https://github.com/akanazawa/cmr/blob/master/utils/bird_vis.py#L177

flake8 testing of https://github.com/akanazawa/cmr on Python 3.6.3

$ flake8 . --count --select=E901,E999,F821,F822,F823 --show-source --statistics

./benchmark/evaluate.py:102:28: F821 undefined name 'convert2np'
        img = np.transpose(convert2np(batch['img'][0]), (1, 2, 0))
                           ^
1     F821 undefined name 'convert2np'
1
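
A possible fix, as a sketch, is to import the helper where it is used; the exact import path below is a guess and should be adapted to how benchmark/evaluate.py already imports other cmr modules:

# Sketch: add near the top of benchmark/evaluate.py (the relative path is an assumption)
from ..utils.bird_vis import convert2np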

Snapshot generated after training not generating same results as provided snapshot

Hi,
When I tested the model using the snapshot provided, I got the below result
[screenshot: saved_image]

But when I trained the model myself, following the provided instructions, the corresponding snapshot does not generate similar output:
[screenshot: saved_image2_500]

I checked snapshots from later in training as well, with similar output to the above.

The only thing I have changed is that I ported the code to Torch 1.0, which required one small change:
replacing things like self.total_loss.data[0] with self.total_loss.item(). I am sure this is not the reason, but I mention it for the sake of completeness.

Could you suggest what I might be doing wrong?

EDIT:
The losses are much higher than those in the loss_log that came with the downloaded snapshot.
[screenshot: losses]
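
For reference, a version-tolerant way to express the .data[0] to .item() change mentioned above (a sketch, not part of the cmr codebase):

# .item() exists from PyTorch 0.4 onward; indexing a 0-dim tensor via .data[0]
# errors out on 1.x, so fall back only when .item() is unavailable.
loss_val = self.total_loss.item() if hasattr(self.total_loss, 'item') else self.total_loss.data[0]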

What version of CUDA is required for installation?

As in the title.
I have CUDA version 9.0, and this is causing a lot of problems installing cupy. I installed it separately using "pip install cupy-cuda90", removed it from requirements.txt, and then installed everything else.
When I try to install the external repos, i.e., Perceptual Loss and Neural Renderer, I get this traceback:
Traceback (most recent call last):
File "setup.py", line 3, in <module>
import neural_renderer
File "/home/nsajjan/WORK/Capstone/cmr/external/neural_renderer/neural_renderer/__init__.py", line 1, in <module>
from cross import cross
File "/home/nsajjan/WORK/Capstone/cmr/external/neural_renderer/neural_renderer/cross.py", line 2, in <module>
import cupy as cp
File "/home/nsajjan/WORK/Capstone/cmr/venv_cmr/lib/python2.7/site-packages/cupy/__init__.py", line 7, in <module>
from cupy import _version
ImportError: cannot import name _version

Any suggestions?
