
neural_renderer's Introduction

Neural 3D Mesh Renderer (CVPR 2018)

This is code for the paper Neural 3D Mesh Renderer by Hiroharu Kato, Yoshitaka Ushiku, and Tatsuya Harada.

For more details, please visit the project page.

This repository only contains the core component and simple examples. Related repositories are:

For PyTorch users

This code is written in Chainer. For PyTorch users, there are two options.

I'm grateful to these researchers for writing and releasing their code.

Installation

sudo python setup.py install

Running examples

python ./examples/example1.py
python ./examples/example2.py
python ./examples/example3.py
python ./examples/example4.py

Example 1: Drawing an object from multiple viewpoints
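
A minimal sketch of what example1.py does, assuming a GPU is available (the renderer has no CPU implementation) and using the bundled teapot mesh and the camera settings from the example:

import chainer
import numpy as np
import neural_renderer as nr

# Load the mesh and add a batch dimension.
vertices, faces = nr.load_obj('./examples/data/teapot.obj')
vertices = chainer.cuda.to_gpu(vertices[None, :, :])  # [1, num_vertices, 3]
faces = chainer.cuda.to_gpu(faces[None, :, :])        # [1, num_faces, 3]

# A constant white texture: one small colour cube per face.
textures = chainer.cuda.to_gpu(
    np.ones((1, faces.shape[1], 2, 2, 2, 3), 'float32'))

renderer = nr.Renderer()
for azimuth in range(0, 360, 4):
    # Move the camera around the object and render one frame per viewpoint.
    renderer.eye = nr.get_points_from_angles(2.732, 30, azimuth)
    images = renderer.render(vertices, faces, textures)  # [1, RGB, size, size]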

Example 2: Optimizing vertices

Transforming the silhouette of a teapot into a rectangle. The loss function is the difference between the rendered image and the reference image.

Reference image, optimization, and the result.
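
A hedged sketch of this optimization, assuming ref is the reference silhouette as a [256, 256] float32 array already on the GPU (plain gradient descent stands in for the example's optimizer):

import chainer
import chainer.functions as cf
import neural_renderer as nr

vertices, faces = nr.load_obj('./examples/data/teapot.obj')
vertices = chainer.Parameter(chainer.cuda.to_gpu(vertices[None, :, :]))
faces = chainer.cuda.to_gpu(faces[None, :, :])

renderer = nr.Renderer()
for i in range(300):
    vertices.cleargrad()
    image = renderer.render_silhouettes(vertices, faces)  # [1, size, size]
    loss = cf.sum(cf.square(image - ref[None, :, :]))     # image difference
    loss.backward()
    vertices.data -= 0.01 * vertices.grad                 # gradient step on vertices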

Example 3: Optimizing textures

Matching the color of a teapot with a reference image.

Reference image and the result.

Example 4: Finding camera parameters

The derivative of images with respect to camera pose can be computed through this renderer. In this example the position of the camera is optimized by gradient descent.

From left to right: reference image, initial state, and optimization process.
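
A hedged sketch of the same idea, with the camera position as the only parameter (ref is again an assumed reference silhouette on the GPU; the starting angles are illustrative):

import chainer
import chainer.functions as cf
import numpy as np
import neural_renderer as nr

vertices, faces = nr.load_obj('./examples/data/teapot.obj')
vertices = chainer.cuda.to_gpu(vertices[None, :, :])
faces = chainer.cuda.to_gpu(faces[None, :, :])

# Start from a deliberately wrong viewpoint and descend on the camera position.
eye = chainer.Parameter(chainer.cuda.to_gpu(
    np.array(nr.get_points_from_angles(2.732, 40, 60), 'float32')))
renderer = nr.Renderer()
for i in range(300):
    eye.cleargrad()
    renderer.eye = eye
    image = renderer.render_silhouettes(vertices, faces)
    loss = cf.sum(cf.square(image - ref[None, :, :]))
    loss.backward()              # gradients flow through the renderer to the camera
    eye.data -= 0.05 * eye.grad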

FAQ

CPU implementation?

Currently, this code has no CPU implementation. Since a CPU implementation would probably be too slow for practical use, we do not plan to support CPU.

Python3 support?

Code in this repository is only for Python 2.x. The PyTorch port by Nikos Kolotouros supports Python 3.x.

If you want to use the neural renderer with Python 3, please add ./neural_renderer to $PYTHONPATH temporarily, as mentioned in issue #6. However, since we have not tested our code with Python 3, it might not work well.
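
For illustration, a hedged Python equivalent of that workaround ('/path/to/neural_renderer' is a placeholder for your clone of this repository):

import sys
sys.path.insert(0, '/path/to/neural_renderer')  # placeholder repo location

import neural_renderer  # resolved via the path added above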

Citation

@InProceedings{kato2018renderer,
    title={Neural 3D Mesh Renderer},
    author={Kato, Hiroharu and Ushiku, Yoshitaka and Harada, Tatsuya},
    booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2018}
}

neural_renderer's People

Contributors

hiroharu-kato, nitish11

neural_renderer's Issues

running setup.py causes bug in Python 3

Running setup.py in the root of this repo shows:

Traceback (most recent call last):
  File "setup.py", line 3, in <module>
    import neural_renderer
  File "/new_disk_1/antonio/neural_renderer/neural_renderer/__init__.py", line 1, in <module>
    from cross import cross
ModuleNotFoundError: No module named 'cross'

I changed all absolute imports in the file in question to relative ones and that did the trick; now ./neural_renderer/__init__.py looks like:

from .cross import cross
from .get_points_from_angles import get_points_from_angles
from .lighting import lighting
from .load_obj import load_obj
from .look import look
from .look_at import look_at
from .mesh import Mesh
from .optimizers import Adam
from .perspective import perspective
from .rasterize import (
    rasterize_rgbad, rasterize, rasterize_silhouettes, rasterize_depth, use_unsafe_rasterizer, Rasterize)
from .renderer import Renderer
from .save_obj import save_obj
from .vertices_to_faces import vertices_to_faces

__version__ = '1.1.2'

Adding ./neural_renderer to $PYTHONPATH temporarily should do the same thing. I just feel this is worth a note in the README.

Using style transfer for 3D models

I was just exploring the project and wanted to try out the style transfer example for 3D models.
Could you please guide me on how to run it?

CVPR2019

Sorry to bother you. I have finished reading your team's paper Learning View Priors for Single-view 3D Reconstruction (CVPR 2019). I hope to reproduce the results of the paper, but I have not found its code. Could you please share a code address or something else? Looking forward to your reply, thank you.

When I run example4, I get the error "Actual type: <class 'torch.Tensor'>."

(x) xxx@xxxxxxx:~/projects/neural_renderer$ python ./examples/example4.py
faces type <class 'torch.Tensor'>
type self.faces: <class 'torch.Tensor'>
Traceback (most recent call last):
  File "./examples/example4.py", line 117, in <module>
    run()
  File "./examples/example4.py", line 97, in run
    model.to_gpu()
  File "./examples/example4.py", line 53, in to_gpu
    self.faces = chainer.cuda.to_gpu(self.faces, device)
  File "/my/path/.conda/envs/envname/lib/python3.8/site-packages/chainer/backends/cuda.py", line 418, in to_gpu
    return _backend._convert_arrays(
  File "/my/path/.conda/envs/envname/lib/python3.8/site-packages/chainer/_backend.py", line 19, in _convert_arrays
    return func(array)
  File "/my/path/.conda/envs/envname/lib/python3.8/site-packages/chainer/backends/cuda.py", line 419, in <lambda>
    array, lambda arr: _array_to_gpu(arr, device, stream))
  File "/my/path/.conda/envs/envname/lib/python3.8/site-packages/chainer/backends/cuda.py", line 446, in _array_to_gpu
    raise TypeError(
TypeError: The array sent to gpu must be an array or a NumPy scalar.
Actual type: <class 'torch.Tensor'>.

How to set the coordinate of the renderer?

I have some .obj files with different coordinate conventions, for example with the z-axis descending from bottom to top. I have no idea where I can make this change.
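
A hedged workaround sketch, since no axis-convention setting is documented: transform the vertices after loading so they match the renderer's convention (the z-flip here is illustrative):

import neural_renderer as nr

vertices, faces = nr.load_obj('my_model.obj')  # hypothetical file
vertices[:, 2] *= -1  # e.g. flip the z-axis before rendering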

2D Projected Vertex positions?

I've just come across this library; my background is in OpenDR.

I am interested in retrieving the 2D projected positions of the mesh vertices. OpenDR allows this through the camera.ProjectPoints function, which returns a 2D coordinate for each input vertex.

Is similar functionality available here?

Many thanks,

Ben
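
A hedged sketch, not an official API: the package exports the transforms the renderer applies internally (see look_at and perspective in the __init__.py listing above), so projected 2D positions can be computed by chaining them; the final pixel-coordinate convention is an assumption:

import chainer
import neural_renderer as nr

vertices, faces = nr.load_obj('./examples/data/teapot.obj')
vertices = chainer.cuda.to_gpu(vertices[None, :, :])  # [1, num_vertices, 3]

eye = nr.get_points_from_angles(2.732, 30, 0)
v = nr.look_at(vertices, eye)    # world -> camera coordinates
v = nr.perspective(v, angle=30)  # perspective divide; x, y roughly in [-1, 1]

image_size = 256
x_pix = (v[:, :, 0] + 1) * 0.5 * image_size        # assumed mapping to pixels
y_pix = (1 - (v[:, :, 1] + 1) * 0.5) * image_size  # assumed y flip (origin top-left)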

Problem encountered when using save_obj

Hi there,

I am using your save_obj function, expecting to save the .obj as well as its texture generated from your model.

All I added is a single, straightforward line in example3.py:

neural_renderer.save_obj('result.obj', model.vertices[0], model.faces[0], cf.tanh(model.textures[0]).array)

Since I noticed you initialized the textures, vertices, and faces with a first dimension of 1 as a batch size, I just got rid of this first dimension by indexing with 0.

The problem is that I didn't get a reasonable .png for the texture, which results in an oddly textured .obj when viewed in my MeshLab software, and it differs a lot from the GIF result. I have provided what I got below in a zip file. Looking forward to any suggestions.

Thanks a lot!

example3_result
save_obj_result.zip

2 errors detected in the compilation of "/tmp/tmpgX1d3K/kern.cu"

Hi, I was trying to run "python ./examples/example1.py" but got the following error:

Traceback (most recent call last):
  File "./examples/example1.py", line 66, in <module>
    run()
  File "./examples/example1.py", line 52, in run
    images = renderer.render(vertices, faces, textures)  # [batch_size, RGB, image_size, image_size]
  File "build/bdist.linux-x86_64/egg/neural_renderer/renderer.py", line 90, in render
  File "build/bdist.linux-x86_64/egg/neural_renderer/lighting.py", line 40, in lighting
  File "build/bdist.linux-x86_64/egg/neural_renderer/cross.py", line 59, in cross
  File "cmr/venv_cmr/lib/python2.7/site-packages/chainer/function.py", line 235, in __call__
    ret = node.apply(inputs)
  File "cmr/venv_cmr/lib/python2.7/site-packages/chainer/function_node.py", line 263, in apply
    outputs = self.forward(in_data)
  File "cmr/venv_cmr/lib/python2.7/site-packages/chainer/function.py", line 135, in forward
    return self._function.forward(inputs)
  File "cmr/venv_cmr/lib/python2.7/site-packages/chainer/function.py", line 342, in forward
    return self.forward_gpu(inputs)
  File "build/bdist.linux-x86_64/egg/neural_renderer/cross.py", line 39, in forward_gpu
  File "cupy/core/_kernel.pyx", line 558, in cupy.core._kernel.ElementwiseKernel.__call__
  File "cupy/core/_kernel.pyx", line 579, in cupy.core._kernel.ElementwiseKernel._get_elementwise_kernel
  File "cupy/core/_kernel.pyx", line 392, in cupy.core._kernel._get_elementwise_kernel
  File "cupy/core/_kernel.pyx", line 26, in cupy.core._kernel._get_simple_elementwise_kernel
  File "cupy/core/_kernel.pyx", line 46, in cupy.core._kernel._get_simple_elementwise_kernel
  File "cupy/core/carray.pxi", line 148, in cupy.core.core.compile_with_cache
  File "cmr/venv_cmr/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 164, in compile_with_cache
    ptx = compile_using_nvrtc(source, options, arch)
  File "cmr/venv_cmr/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 82, in compile_using_nvrtc
    ptx = prog.compile(options)
  File "cmr/venv_cmr/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 245, in compile
    raise CompileException(log, self.src, self.name, options)
cupy.cuda.compiler.CompileException: cmr/venv_cmr/lib/python2.7/site-packages/cupy/core/include/cupy/carray.cuh(281): warning: statement is unreachable
detected during instantiation of "void CIndexer<_ndim>::set(ptrdiff_t) [with _ndim=1]"
/tmp/tmpgX1d3K/kern.cu(10): here

/tmp/tmpgX1d3K/kern.cu(13): error: a value of type "const float *" cannot be used to initialize an entity of type "float *"

/tmp/tmpgX1d3K/kern.cu(14): error: a value of type "const float *" cannot be used to initialize an entity of type "float *"

2 errors detected in the compilation of "/tmp/tmpgX1d3K/kern.cu".

Ghost texture problem

I tested this wonderful renderer with a coffee mug model and it looks great! I built the model myself with the Blender software.

However, when I compared it with the result below from an OpenGL renderer (the Preview software on Mac), I noticed some "ghost" textures on the surface of the object in the result from the neural mesh renderer (the result above):

To determine the cause of the ghost textures, I also called the save_obj function to save the loaded mesh (coffee mug). Here is how the saved model looks in OpenGL (Preview on Mac):
figure_5

The result looks correct, except for (a) the blurry texture (I think because texture_size_out=16 is not as large as the size of the textures tensor, texture_size x texture_size x texture_size = 4 x 4 x 4 = 64), and (b) the minor flipping issue. But we do not see any ghost problem here.

Here is the Chainer code I used to call save_obj:

import chainer
import numpy as np
import scipy.misc
import neural_renderer

renderer = neural_renderer.Renderer()

vertices, faces, textures = neural_renderer.load_obj( '/home/chengfei/projects/test_coffe_mug/straight.obj', load_texture=True, texture_size=16)

neural_renderer.save_obj('/home/chengfei/projects/test_coffe_mug/chainer_save.obj', vertices, faces, textures)

To further investigate the problem, I also rendered the same model with varying texture_size in the neural mesh renderer:

texture_size = 4
iterm2 04 out3
texture_size = 8
iterm2 08 out3
texture_size = 16
iterm2 16 out3

You may notice that the larger texture_size is, the sharper the textures look, but also the more obvious the ghosts become. I hope this gives you some insight into this bug.

I have also observed the same issue with at least 5 other models (which have image textures) downloaded online (e.g. from 3D Warehouse). I did not see this issue in models with solid-color textures (i.e. textures defined only by .mtl files, without any images).

@hiroharu-kato Do you have any thoughts? I would greatly appreciate any pointers/ideas that you have. Thanks!

FYI. The same problem appears with the PyTorch version of this renderer. Issue link.

RuntimeError: CUDA error: an illegal memory access was encountered

(venv) H:\workSpace\pythonCode\Texformer-master>python demo.py --img_path demo_imgs/img.png --seg_path demo_imgs/seg.png
H:\workSpace\pythonCode\Texformer-master\venv\lib\site-packages\torch\nn\modules\container.py:597: UserWarning: Setting attributes on ParameterDict is not supported.
warnings.warn("Setting attributes on ParameterDict is not supported.")
H:\workSpace\pythonCode\Texformer-master\transformers\net_utils.py:83: UserWarning: floordiv is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats)
H:\workSpace\pythonCode\Texformer-master\venv\lib\site-packages\torch\nn\functional.py:4004: UserWarning: Default grid_sample and affine_grid behavior has changed to align_corners=False since 1.3.0. Please specify align_corners=True if the old behavior is desired. See the documentation of grid_sample for details.
"Default grid_sample and affine_grid behavior has changed "
H:\workSpace\pythonCode\Texformer-master\venv\lib\site-packages\torch\functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
Error in create_texture_image: an illegal memory access was encountered
Error in create_texture_image_boundary: an illegal memory access was encountered
Traceback (most recent call last):
  File "demo.py", line 182, in <module>
    demo.run_demo(args)
  File "H:\workSpace\pythonCode\Texformer-master\venv\lib\site-packages\torch\autograd\grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "demo.py", line 146, in run_demo
    rendered_img, _, _ = self.renderer.render(self.pred_vertices, self.pred_cam_t, uvmap)
  File "H:\workSpace\pythonCode\Texformer-master\NMR\neural_render_test.py", line 128, in render
    nr.save_obj("myTest" + str(self.count), tempVerts, tempFaces, textures=tex_tensor)
  File "H:\workSpace\pythonCode\Texformer-master\venv\lib\site-packages\neural_renderer_pytorch-1.1.3-py3.7-win-amd64.egg\neural_renderer\save_obj.py", line 48, in save_obj
    texture_image, vertices_textures = create_texture_image(textures)
  File "H:\workSpace\pythonCode\Texformer-master\venv\lib\site-packages\neural_renderer_pytorch-1.1.3-py3.7-win-amd64.egg\neural_renderer\save_obj.py", line 30, in create_texture_image
    vertices[:, :, 0] /= (image.shape[1] - 1)
RuntimeError: CUDA error: an illegal memory access was encountered

Face Colour Support

Hello, thanks for the great work!
I would like to see if I can use face colours when I do not have a texture to start with.
I looked into the code, and it seems that internally it is using face colours. An issue in the PyTorch repo confirms the idea: daniilidis-group/neural_renderer#13.
Will there be official face colour support?
If not, would you please recommend a way to implement it?
Thank you.
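
In the meantime, a hedged workaround sketch: since the texture tensor stores a small colour cube per face (as in the examples), filling each face's block with a constant gives per-face colours. The face count and random colours here are hypothetical:

import numpy as np

num_faces = 1000   # hypothetical; in practice faces.shape[1]
texture_size = 2
textures = np.ones(
    (1, num_faces, texture_size, texture_size, texture_size, 3), 'float32')
face_colors = np.random.uniform(0, 1, (num_faces, 3)).astype('float32')
textures = textures * face_colors[None, :, None, None, None, :]  # one colour per face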

How to get the same visual look of object in another rendering software (Blender)?

What render settings/materials (let's say in Blender) do I need to use to get the same render as in the neural renderer? Here is the result of finding camera parameters (as you can see, the loss became 0 in the end and the camera just moved into the object):
example4_result

Here is my reference image, rendered in Blender:
teapot_ref_new

Here is also a comparison between the original reference image and the image rendered in Blender (with close camera parameters):

Original:
example4_ref

Blender:
teapot_ref_like_original

Thanks.

Strange gradients.

Hi. My colleague (@czw0078) and I have been using your neural renderer, and we've noticed some pretty strange behavior during optimization. A minimal working example demonstrating the odd behavior can be found below.

import chainer
import neural_renderer as nr
import numpy as np
import scipy.misc

from chainer import Chain


class Model(Chain):
    def __init__(self, input_obj, initial_z):
        super(Model, self).__init__()

        # Load object.
        (vertices, faces) = nr.load_obj(input_obj)

        vertices = vertices[None, :, :]
        faces = faces[None, :, :]
        texture_size = 2
        textures = np.ones((1, faces.shape[1], texture_size, texture_size, texture_size, 3), "float32")

        chainer.cuda.get_device_from_id(0).use()
        self.vertices = chainer.cuda.to_gpu(vertices)
        self.faces = chainer.cuda.to_gpu(faces)
        self.textures = chainer.cuda.to_gpu(textures)

        self.x = self.vertices[0][:, 0]
        self.y = self.vertices[0][:, 1]
        self.z = self.vertices[0][:, 2]

        # Camera parameters.
        camera_distance = 2.732
        elevation = 0
        azimuth = 0
        self.camera_position = np.array(nr.get_points_from_angles(camera_distance, elevation, azimuth), dtype=np.float32)

        # Adjust renderer.
        renderer = nr.Renderer()
        renderer.camera_direction = np.array(renderer.camera_direction, dtype=np.float32)
        renderer.camera_direction = chainer.cuda.to_gpu(renderer.camera_direction)
        renderer.camera_mode = "look"
        renderer.eye = self.camera_position
        renderer.eye = chainer.cuda.to_gpu(renderer.eye)
        renderer.viewing_angle = 8.123
        self.renderer = renderer

        # Optimization direction.
        with self.init_scope():
            self.z_delta = chainer.Parameter(np.array([initial_z], dtype=np.float32))

    def __call__(self):
        new_z = chainer.functions.add(self.z, chainer.functions.repeat(self.z_delta, len(self.z)))

        vertices = chainer.functions.stack((self.x, self.y, new_z), axis=1)[None, :, :]
        image = self.renderer.render(vertices, self.faces, self.textures)

        return (new_z, image)


def main():
    input_obj = "./examples/data/teapot.obj"
    model = Model(input_obj, 4)
    model.to_gpu(0)

    gen_teapot = False

    for sign in [1, -1]:
        model.cleargrads()
        (new_z, images) = model()

        if gen_teapot and sign == 1:
            image = images.data.get()[0].transpose((1, 2, 0))
            scipy.misc.toimage(image, cmin=0, cmax=1).save("my_teapot.png")

        print("Current z_delta: {0}".format(model.z_delta.data.get()[0]))
        assert model.z_delta.grad is None

        loss = sign * chainer.functions.batch_l2_norm_squared(images)[0]
        loss.backward(retain_grad=True)
        print("Loss: {0}".format(loss.data.get().item()))

        print("z_delta derivative after .backward(): {0}".format(model.z_delta.grad.get()[0]))


if __name__ == "__main__":
    main()

The example code produces the following output:

Current z_delta: 4.0
Loss: 21549.5859375
z_delta derivative after .backward(): -68343.7109375
Current z_delta: 4.0
Loss: -21549.5859375
z_delta derivative after .backward(): -13344.5039062

This output was produced when using the image below as a starting point.
my_teapot
As you can see, the gradients have the same sign despite using opposite loss functions. Any insight you could provide on this behavior would be greatly appreciated. Thank you.

The following Dockerfile was used to generate the environment used for the code above.

FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
         wget \
         build-essential \
         cmake \
         nano \
         less \
         git \
         curl \
         libjpeg-dev \
         libpng-dev \
         imagemagick \
         python \
         python-dev \
         python-setuptools \
         python-pip \
         python-wheel
RUN pip install chainer cupy scikit-image tqdm imageio ipython==5.8.0

How to save as .obj

Hi,

I used this code to save a mesh as .obj:

import chainer
import numpy as np
import scipy.misc
import neural_renderer

renderer = neural_renderer.Renderer()

vertices, faces, textures = neural_renderer.load_obj( '/home/user/jay/straight.obj', load_texture=True, texture_size=16)

neural_renderer.save_obj('/home/user/jay/chainer_save.obj', vertices, faces, textures)

But I was facing an issue in renderer.py, so I changed the code there like this:
faces = torch.cat((faces, faces[:, list(reversed(range(faces.shape[-1])))]), dim=1).detach()

Then I was getting an error in vertices_to_faces.py, so I changed the vertices and faces assertions below to 2:
assert (vertices.ndimension() == 2)
assert (faces.ndimension() == 2)

But I am getting an error at
assert (vertices.shape[0] == faces.shape[0]) because
vertices.shape[0] = 34817 and
faces.shape[0] = 69630.

Where am I going wrong? Could you please help me fix this?

Question about different image size

Thanks for sharing your good work.
Can I ask a simple question about image size: must the width and height be the same?

Is it possible to set up rendering with a different height and width? Your examples use only one image-size parameter, which makes the height and width equal.
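
The Renderer here takes a single image_size, so the output is square; a hedged workaround sketch is to render a square image at the larger dimension and crop (the 256x192 target is illustrative):

import chainer
import numpy as np
import neural_renderer as nr

vertices, faces = nr.load_obj('./examples/data/teapot.obj')
vertices = chainer.cuda.to_gpu(vertices[None, :, :])
faces = chainer.cuda.to_gpu(faces[None, :, :])
textures = chainer.cuda.to_gpu(
    np.ones((1, faces.shape[1], 2, 2, 2, 3), 'float32'))

renderer = nr.Renderer()
renderer.image_size = 256                            # one size for both dimensions
images = renderer.render(vertices, faces, textures)  # [1, RGB, 256, 256]
images = images[:, :, 32:224, :]                     # centre-crop height to 192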

blender version

Hi, I wonder which version of Blender you used to generate the dataset.

How to train?

As the paper says, the models were trained on ShapeNet. How do I do that using this repo?

Why to use ReLU in lighting module?

Hi!
I don't understand this line in lighting.py. Why do you use ReLU after calculating the inner product between the face normals and the light direction?
cos = cf.relu(cf.sum(normals * direction, axis=2))

Hoping for an answer,
Thanks!
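
For context, this looks like the standard Lambertian diffuse clamp: the dot product is negative for faces pointing away from the light, and the ReLU zeroes those contributions so back-facing surfaces receive no diffuse light. A minimal NumPy illustration:

import numpy as np

normals = np.array([[0., 0., 1.], [0., 0., -1.]])  # facing toward / away from the light
direction = np.array([0., 0., 1.])
cos = np.maximum(0., (normals * direction).sum(axis=1))  # the ReLU clamp
print(cos)  # [1. 0.] -> the back-facing face gets no diffuse contribution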

pytorch version problem

In your README, the supported PyTorch version is 0.4. However, after pip install neural_renderer_pytorch, this error occurred:
ImportError: /home/dsg/anaconda3/lib/python3.6/site-packages/neural_renderer/cuda/load_textures.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at19UndefinedTensorImpl10_singletonE
But if the PyTorch version is 1.0, this error does not occur.
Could you give us a neural_renderer version supporting PyTorch 0.4, to match our other modules?
Thank you very much!

cupy and chainer version?

I am working with cupy 5.0.0 and chainer 5.0.0. They work fine for my other projects, but when I try to install the renderer, it always says it cannot find the chainer module.
Python: 2.7
Ubuntu: 16.04
CUDA: 9.2
Any suggestions?

Error during execution

I have chainer 3.3.0 and cupy 2.3.0. When I try to run example1.py, I get the following error:

/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
  from ._conv import register_converters as _register_converters
Drawing: 0%| | 0/90 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "examples/example1.py", line 66, in <module>
    run()
  File "examples/example1.py", line 52, in run
    images = renderer.render(vertices, faces, textures)  # [batch_size, RGB, image_size, image_size]
  File "build/bdist.linux-x86_64/egg/neural_renderer/renderer.py", line 78, in render
  File "/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/chainer/functions/array/concat.py", line 90, in concat
    y, = Concat(axis).apply(xs)
  File "/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/chainer/function_node.py", line 245, in apply
    outputs = self.forward(in_data)
  File "/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/chainer/functions/array/concat.py", line 44, in forward
    return xp.concatenate(xs, self.axis),
  File "/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/manipulation/join.py", line 49, in concatenate
    return core.concatenate_method(tup, axis)
  File "cupy/core/core.pyx", line 2439, in cupy.core.core.concatenate_method
  File "cupy/core/core.pyx", line 2482, in cupy.core.core.concatenate_method
  File "cupy/core/core.pyx", line 2533, in cupy.core.core.concatenate
  File "cupy/core/core.pyx", line 1630, in cupy.core.core.ndarray.__setitem__
  File "cupy/core/core.pyx", line 3101, in cupy.core.core._scatter_op
  File "cupy/core/elementwise.pxi", line 823, in cupy.core.core.ufunc.__call__
  File "cupy/util.pyx", line 39, in cupy.util.memoize.decorator.ret
  File "cupy/core/elementwise.pxi", line 622, in cupy.core.core._get_ufunc_kernel
  File "cupy/core/elementwise.pxi", line 33, in cupy.core.core._get_simple_elementwise_kernel
  File "cupy/core/carray.pxi", line 170, in cupy.core.core.compile_with_cache
  File "/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 123, in compile_with_cache
    base = _preprocess('', options, arch)
  File "/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 86, in _preprocess
    result = prog.compile(options)
  File "/home/ubuntu/mnt/Work/cmr/venv_cmr/local/lib/python2.7/site-packages/cupy/cuda/compiler.py", line 233, in compile
    raise CompileException(log, self.src, self.name, options)
cupy.cuda.compiler.CompileException: nvrtc: error: invalid value for --gpu-architecture (-arch)

I have a Tesla V100 GPU and I reckon there is some compatibility problem with it. Could you help me with what I need to change or do to get it working?

Thank you for sharing. Why use 2 viewpoints during training?

Thank you for sharing this great project. I have one question about the code and paper. As you mention in section 5.1.1, "In each minibatch, we included silhouettes from two viewpoints per input image." Why use 2 viewpoints during training? Looking forward to your reply!
