
bachili / redner


Differentiable rendering without approximation.

Home Page: https://people.csail.mit.edu/tzumao/diffrt/

License: MIT License

CMake 0.90% C++ 24.66% C 1.79% Python 25.53% Dockerfile 0.18% Shell 0.06% Batchfile 0.08% NASL 46.79%
computer-graphics computer-vision differentiable-rendering monte-carlo-ray-tracing pytorch rendering tensorflow

redner's People

Contributors

abdallahdib, aferrall, awcrr, bachili, budmonde, francoisruty, jpchen, mworchel, oeway, rodrigodzf, sgrabli-ilm, supershinyeyes, tetterl, tstullich, ybh1998, yihang99


redner's Issues

Tutorial 02 results: teapot not aligned

I am trying to run the tutorials. I tried the second tutorial without changing anything in the code, but I cannot get results as satisfying as the ones presented. I tried different learning rates but never got the teapot correctly aligned in the end. Can anyone confirm that, with the current version of the code, this tutorial runs correctly and outputs the expected result?
Thanks

Camera parameters cannot be updated, and differentiating in batch

Hello! I'd like to report one bug and ask one question.
The bug: following the code structure of the tutorials, when I update the camera each iteration like:
cam.position = torch.tensor([0.0, 0.0, 2.5])
The change is not reflected in the rendered image.
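For context, the tutorials rebuild the serialized scene arguments every iteration, so an in-place camera edit like the one above only reaches the renderer once the scene is re-serialized before the next render. A minimal sketch of that per-iteration pattern, assuming the tutorial-style pyredner API (scene and target here are taken to be the objects from the tutorial setup):

import torch
import pyredner

render = pyredner.RenderFunction.apply
cam_position = torch.tensor([0.0, 0.0, 2.5], requires_grad=True)
optimizer = torch.optim.Adam([cam_position], lr=2e-2)

for it in range(100):
    optimizer.zero_grad()
    scene.camera.position = cam_position
    # Re-serialize so the renderer sees the updated camera parameters.
    scene_args = pyredner.RenderFunction.serialize_scene(
        scene=scene, num_samples=4, max_bounces=1)
    img = render(it, *scene_args)
    loss = (img - target).pow(2).sum()
    loss.backward()
    optimizer.step()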
My question: I know batch rendering is not supported; however, is it possible to differentiate over a set of rendered images? I tried to render a set in a loop and differentiate over them, and got:

CUDA Runtime Error: device-side assert triggered at /home/hleon/Documents/redner/buffer.h:86
/home/hleon/Documents/redner/pathtracer.cpp:927: void path_contribs_accumulator::operator()(int): block: [38,0,0], thread: [33,0,0] Assertion isfinite(nee_contrib) failed.
/home/hleon/Documents/redner/pathtracer.cpp:927: void path_contribs_accumulator::operator()(int): block: [38,0,0], thread: [35,0,0] Assertion isfinite(nee_contrib) failed.
(the same isfinite(nee_contrib) assertion is repeated for many more blocks and threads)

Thank you for your help!

Per vertex color

Hello,

Do you support per-vertex color rendering? If not, are you thinking of introducing it?

Thank you!

Segmentation Fault

I am encountering a segfault when running the tutorial scripts:

(gdb) run 02_pose_estimation.py 
Starting program: /home/jlafleche/miniconda3/envs/redner/bin/python 02_pose_estimation.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fff9f728700 (LWP 1680)]
[New Thread 0x7fff9415d700 (LWP 1687)]
[New Thread 0x7fff9395c700 (LWP 1688)]

Thread 1 "python" received signal SIGSEGV, Segmentation fault.
thrust::system::detail::sequential::reduce<thrust::system::cpp::detail::par_t, TVector3<float> const*, TVector3<float>, vector3f_min> (init=..., end=0x7fff39466cd0, begin=0x7fff39453e00, binary_op=...)
    at /usr/local/cuda/include/thrust/system/detail/sequential/reduce.h:61
61	    result = wrapped_binary_op(result, *begin);

Any ideas what the problem might be? Thanks!

test_single_triangle_camera_fisheye.py outputs "odd" result after 05/13/2019 update.

The 02/02/2019 and 05/13/2019 versions output different results for test_single_triangle_camera_fisheye.py.

Here's the video (left: old, right: new): https://www.dropbox.com/s/f7rwn3m7w5q2pbn/test_single_triangle_camera_fisheye.mp4?dl=0

The old version converges close to the target (target and final_diff images attached). The new version does not converge, and its final_diff is very large (image attached).

Reproduce:

cd tests/
python test_single_triangle_camera_fisheye.py

I'm using the CPU in both cases.

Optimizing a square

To understand the code, I tried changing the vertices to render a square instead of a triangle in the scene. Here's how I modified test_single_triangle.py:

@@ -26,16 +26,16 @@ mat_grey = pyredner.Material(\
     diffuse_reflectance = torch.tensor([0.5, 0.5, 0.5],
     device = pyredner.get_device()))
 materials = [mat_grey]
-vertices = torch.tensor([[-1.7,1.0,0.0], [1.0,1.0,0.0], [-0.5,-1.0,0.0]],
+vertices = torch.tensor([[-1.0,2.0,0.0], [2.0,2.0,0.0], [-1.0,-1.0,0.0], [2.0,-1.0,0.0]],
                         device = pyredner.get_device())
-indices = torch.tensor([[0, 1, 2]], dtype = torch.int32,
+indices = torch.tensor([[0, 1, 2, 3]], dtype = torch.int32,
                        device = pyredner.get_device())
...
...
...
 # Perturb the scene, this is our initial guess
-shape_triangle.vertices = torch.tensor(\
-    [[-2.0,1.5,0.3], [0.9,1.2,-0.3], [-0.4,-1.4,0.2]],
+shape_square.vertices = torch.tensor(\
+    [[-1.0,1.0,0.3], [1.0,2.0,0.3], [0.0,-1.0,0.3], [2.0,0.0,0.3]],

But I'm still seeing a triangle in init.png and target.png. Do you know what I might be doing wrong?

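For reference, redner meshes are triangle meshes, so the indices tensor needs one row of three vertex ids per triangle; a quad therefore has to be split into two triangles rather than listed as a single row of four indices. A minimal sketch of the target square under that assumption (winding kept consistent with the original triangle; the rest of the script stays as in the diff above):

import torch
import pyredner

# Four corners of the square, lifted to z = 0.
vertices = torch.tensor([[-1.0,  2.0, 0.0],
                         [ 2.0,  2.0, 0.0],
                         [-1.0, -1.0, 0.0],
                         [ 2.0, -1.0, 0.0]],
                        device=pyredner.get_device())
# Two triangles covering the quad.
indices = torch.tensor([[0, 1, 2],
                        [1, 3, 2]], dtype=torch.int32,
                       device=pyredner.get_device())

The perturbed initial guess would likewise need four vertices and the same two-triangle index buffer.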

How to debug the code

Hi, I just want to know how you debug pyredner: since it is written in C++ and called from Python, how can I step into the .cpp files of pyredner?

Thrust Allocation Failure

I'm running into assert(false); @ thrust_utils.h:84. The forward pass goes through, loss gets calculated, but it fails when running the backwards pass.

python 01_optimize_single_triangle.py 
Scene construction, time: 0.05752 s
Forward pass, time: 0.04237 s
/home/jlafleche/miniconda3/envs/redner/lib/python3.6/site-packages/skimage/util/dtype.py:122: UserWarning: Possible precision loss when converting from float32 to uint8
  .format(dtypeobj_in, dtypeobj_out))
Scene construction, time: 0.01139 s
Forward pass, time: 0.04475 s
iteration: 0
Scene construction, time: 0.01407 s
Forward pass, time: 0.02298 s
loss: 758.4088134765625
python: /home/jlafleche/Projects/redner/thrust_utils.h:84: char* ThrustCachedAllocator::allocate(std::ptrdiff_t): Assertion `false' failed.

I solved the error by increasing the number of buffers for the GPU case to 2, but I wonder if you might have more insights into why the error comes up in the first place. Thanks!

My configuration:
Pytorch: 1.1.0
Embree: 3.5.2
Optix: 5.1.1
Cuda: 10.0
compiler: gcc-7

GPUs:
TitanXP
RTX 2080Ti

device free failed: unspecified launch failure

Hi, I successfully compiled your project with CUDA 9.0 and OptiX 5.1, and I can import pyredner.
When I run 01_optimize_single_triangle.py, I get this error:

terminate called after throwing an instance of 'thrust::system::system_error' what(): device free failed: unspecified launch failure Aborted (core dumped)

It seems something is wrong with CUDA?

Any suggestions are welcome.

'pdf_nee' of envmap

Hi,
I find that pdf_nee does not seem to consider the light_pmf of the envmap when sampling the envmap.

auto pdf_nee = envmap_pdf(*scene.envmap, wo);

Perhaps we should multiply by the light_pmf of the envmap, just like you did for area lights:

auto light_pmf = scene.light_pmf[light_shape.light_id];
auto light_area = scene.light_areas[light_shape.light_id];
auto pdf_nee = light_pmf / light_area;

i.e., something like:

auto envmap_id = scene.num_lights - 1;
auto light_pmf = scene.light_pmf[envmap_id];
auto pdf_nee = envmap_pdf(*scene.envmap, wo) * light_pmf;

How to optimize an SVBRDF (texture)

The pyredner module generates mipmaps automatically, and I am not quite sure how I can optimize the texture texels:

import numpy as np
import torch
import pyredner

# (256, 256, 3) is just an example resolution for the texels.
text = torch.tensor(np.ones((256, 256, 3), dtype=np.float32) * 0.5,
                    requires_grad=True, device=pyredner.get_device())
material.diffuse_reflectance = pyredner.Texture(text)
...
optimizer = torch.optim.Adam([text])

for ...:
    # I have to add retain_graph=True here to get it to work.
    render()
    loss.backward(retain_graph=True)
    # text updates correctly, but the rendered image does not change...
    # then text gradually becomes broken...
    optimizer.step()
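For what it's worth, here is a sketch of how such a loop can be structured so the texture (and the serialized scene) is rebuilt from the updated texels every iteration; with a fresh graph per iteration, retain_graph should not be needed. This assumes the tutorial-style pyredner API, and scene, material, target and the 256x256 resolution are placeholders rather than anything specific to this issue:

import torch
import pyredner

render = pyredner.RenderFunction.apply
texels = torch.ones(256, 256, 3, device=pyredner.get_device()) * 0.5
texels.requires_grad_()
optimizer = torch.optim.Adam([texels], lr=1e-2)

for it in range(200):
    optimizer.zero_grad()
    # Rebuild the texture so its mipmap is regenerated from the current texels.
    material.diffuse_reflectance = pyredner.Texture(texels)
    scene_args = pyredner.RenderFunction.serialize_scene(
        scene=scene, num_samples=4, max_bounces=1)
    img = render(it, *scene_args)
    loss = (img - target).pow(2).sum()
    loss.backward()
    optimizer.step()
    # Optionally clamp texels to [0, 1] here under torch.no_grad().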

Build issue with mitsuba.pyc

Hi,

I am getting the following error when trying to build redner on my Linux machine running OpenSUSE Tumbleweed:

byte-compiling /usr/lib/python2.7/site-packages/pyredner/load_mitsuba.py to load_mitsuba.pyc
File "/usr/lib/python2.7/site-packages/pyredner/load_mitsuba.py", line 18
ret = value @ ret
^
SyntaxError: invalid syntax

byte-compiling /usr/lib/python2.7/site-packages/pyredner/transform.py to transform.pyc
File "/usr/lib/python2.7/site-packages/pyredner/transform.py", line 80
return rot_z @ (rot_y @ rot_x)

My guess is that my version of python is set incorrectly in the build files somewhere, but I checked my CMake configurations with ccmake and have not found anything thus far. Would you know how I can solve this problem?

how to perform optimization for arbitrary params

Hi, I would like to thank you for sharing this code. Great work btw!!

Is it possible to use redner to optimize external parameters? Let me give an example to be more clear:

Suppose I want to optimize spherical harmonics coefficients to estimate the illumination at a certain point of the scene. I have my target image of a white sphere (used as a probe) and my sphere 3D model.

In this case, the vertex colors are updated at each iteration using the normals and the SH coefficients, then we render the new vertices and compute the loss against the target image. In this situation, we don't want to set up any lighting in the redner scene, nor any materials. So my question: is it possible for your physically based renderer to handle this case (with some tweaks)?

Thank you
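Not an authoritative answer, but in general any quantity redner takes as input can itself be computed from external parameters with ordinary PyTorch ops, and autograd then chains through the renderer back into those parameters. A rough sketch of the shape of such a loop, where light, scene and target are placeholders for a tutorial-style setup, and the (9, 3) SH coefficient tensor and the exp mapping are arbitrary stand-ins for the actual shading computation:

import torch
import pyredner

render = pyredner.RenderFunction.apply
sh_coeffs = torch.zeros(9, 3, requires_grad=True)   # external parameters
optimizer = torch.optim.Adam([sh_coeffs], lr=1e-2)

for it in range(200):
    optimizer.zero_grad()
    # Derive a renderer input from the external parameters; here the area
    # light intensity stands in for whatever you compute from SH + normals.
    light.intensity = torch.exp(sh_coeffs[0])
    scene_args = pyredner.RenderFunction.serialize_scene(
        scene=scene, num_samples=4, max_bounces=1)
    img = render(it, *scene_args)
    loss = (img - target).pow(2).sum()
    loss.backward()   # gradients flow through the renderer into sh_coeffs
    optimizer.step()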

UV texture map appears to be wrong

Hello,

I have been trying to load a model with a non-trivial texture map (Kd map), and I think there is a problem with the texture mapping. To check, I altered the chessboard texture of the teapot to make it asymmetric and rendered it with redner; I also opened the same model with the modified texture in MeshLab, and the mappings were really different. I've tried flipping the uvs_pool (1 - x) when loading the obj file, but I couldn't get the same mapping as in MeshLab, so it appears to be more complex than that.

How can we visualize the grad image?

How do we get the gradient images (Fig. 1 (c)-(e), Figs. 5, 7, 8, 9)? I guess an image difference, with a color value like RGB(100, 200, 200) added to each pixel? And how many images should we average to smooth the output?
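Not the method used for the paper figures, but one simple way to visualize a derivative image with respect to a single scalar parameter is central finite differences of the rendered image, mapped to a signed color scale. A sketch, where render_with(theta) is a hypothetical helper that rebuilds the scene with the given parameter value and returns an HxWx3 image tensor:

import torch

def gradient_image(render_with, theta, eps=1e-2):
    # Approximate d(image)/d(theta) by central differences.
    with torch.no_grad():
        img_plus = render_with(theta + eps)
        img_minus = render_with(theta - eps)
    grad = (img_plus - img_minus) / (2 * eps)
    # Average over color channels and normalize to [-1, 1] for display,
    # e.g. with a diverging colormap (negative blue, positive red).
    grad = grad.mean(dim=-1)
    return grad / grad.abs().max().clamp(min=1e-8)

Because each render is a Monte Carlo estimate, averaging several renders (or using many samples per pixel) is needed before the figure looks smooth, which is presumably what the averaging in the question refers to.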

Questions about evaluating derivatives

Hi, Thanks for your awesome work!

I'm new to automatic differentiation and have some trouble understanding how the derivatives are evaluated.
I see in your paper that "we use an operator overloading approach for automatic differentiation", but in this version you improve on it with hand-written derivatives.

So, what was the problem with the original automatic differentiation approach in the paper? Is there any reference material on how to do it concretely?
Is there an existing tool that can transform a renderer's C++ code into a differentiable C++ implementation directly (if we don't consider the visibility problem discussed in your paper)?
I saw "source-to-source automatic differentiation" in your project's wish list; what is the advantage of that approach compared to the current hand-written derivatives and the previous automatic differentiation approach?

Looking forward to your reply! Thanks a lot!

OpenEXR undefined symbol: _ZTIN7Iex_2_27BaseExcE

Hi, I've run into a problem when installing the newest version of redner (updated on 06/25/2019). I successfully installed the previous version, but now this problem appears.

When trying tutorial 1, I get the following error:

Python 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
Type "copyright", "credits" or "license" for more information.

IPython 7.5.0 -- An enhanced Interactive Python.

runfile('/media/yuanxun/E/code/redner/tutorials/01_optimize_single_triangle.py', wdir='/media/yuanxun/E/code/redner/tutorials')
Traceback (most recent call last):

File "", line 1, in
runfile('/media/yuanxun/E/code/redner/tutorials/01_optimize_single_triangle.py', wdir='/media/yuanxun/E/code/redner/tutorials')

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile
execfile(filename, namespace)

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "/media/yuanxun/E/code/redner/tutorials/01_optimize_single_triangle.py", line 2, in
import pyredner

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/pyredner/init.py", line 10, in
from .image import *

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/pyredner/image.py", line 2, in
import OpenEXR

ImportError: /home/yuanxun/anaconda3/lib/python3.6/site-packages/OpenEXR.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZTIN7Iex_2_27BaseExcE

I think there's a problem with the OpenEXR Python bindings: this error happens when I import pyredner, and it still exists after trying to change the OpenEXR version from 2.3 to 2.2.

I'm using Anaconda with Python 3.6. Hoping for an answer!

Usage of redner for training inside another network

Thanks for this wonderful work!
I ran into trouble when I tried to use redner as a differentiable rendering module inside my larger network. Given an image I, I train a network X to predict the vertices corresponding to it. Network X includes an encoder that learns features of I to construct the vertices.
So I get

Pred_vertices = X(I)

Next, I want to use redner to learn textures & lighting by rendering the vertices to the 2D image plane.
My original assumption was that redner obtains gradients for vertices, textures & lighting from the loss between images, and that the gradients for the vertices can be backpropagated to my encoder X. However, I got an error like:

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile
execfile(filename, namespace)

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)

File "/media/yuanxun/E/My Experiment/train.py", line 177, in
train_loss.backward()

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)

File "/home/yuanxun/anaconda3/lib/python3.6/site-packages/torch/autograd/init.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag

RuntimeError: Function RenderFunctionBackward returned an invalid gradient at index 5 - got [1, 3] but expected shape compatible with [1, 53215, 3]

I originally thought I could use redner as just another part of my network, but it turned out not to be that simple.
I guess the problem is in the gradient backpropagation between redner and X. I thought I would need to write a torch.autograd.Function wrapper to get the vertex gradients from RenderFunction.backward() in render_pytorch.py and return them to my network X, but I ran into difficulties and really don't know how to obtain redner's gradients. Could you tell me how to get the gradients computed by redner?
Thanks!
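For what it's worth, redner's RenderFunction is already a torch.autograd.Function, so an extra wrapper should not be needed as long as the tensors fed into the scene are outputs of the network; the gradients then flow from the image loss back into X's weights. A rough sketch of the intended wiring, where X, I, shape, scene, target and optimizer are the placeholders from the description above and the tutorial-style API is assumed:

import torch
import pyredner

render = pyredner.RenderFunction.apply

pred_vertices = X(I).view(-1, 3)        # (num_vertices, 3), a non-leaf tensor
shape.vertices = pred_vertices          # feed the prediction into the scene
scene_args = pyredner.RenderFunction.serialize_scene(
    scene=scene, num_samples=4, max_bounces=1)
img = render(0, *scene_args)            # the first argument is the random seed

train_loss = (img - target).pow(2).mean()
train_loss.backward()                   # gradients reach X's weights via the vertices
optimizer.step()                        # optimizer built over X.parameters()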

Importing Pyredner

Hi,

I attempted to run the first tutorial, yet despite pyredner being on my computer, Python is unable to find it and reports "No module named pyredner". Any thoughts? If it matters, I configured and generated the necessary dependencies through CMake but did not run make install from the terminal.

How to use redner to render 2D vertex coordinates into a binary mask?

How can I use this library to render a 2D image from coordinates (a polygon)? I have predictions in the form of vertex coordinates (batch, num_points, x, y) and I want to render them into a 2D binary mask. Is this possible? I want to achieve something similar to the triangle-optimization tutorial, but I don't have information about material, lighting, or texture because I just want a binary mask. Any insight would be really helpful!
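Not an official answer, but one way to feed 2D polygon predictions into redner is to lift them to z = 0 and fan-triangulate them; rendering that mesh with a flat white material (or making it an emitter) then yields an image that can act as a soft, differentiable mask. A sketch of the mesh-construction part only (pure PyTorch; assumes a convex polygon with its vertices in order):

import torch

def polygon_to_mesh(coords_2d):
    # coords_2d: (num_points, 2) tensor of predicted vertex positions.
    # Lift the polygon to z = 0 and fan-triangulate it.
    n = coords_2d.shape[0]
    vertices = torch.cat([coords_2d, coords_2d.new_zeros(n, 1)], dim=1)
    indices = torch.tensor([[0, i, i + 1] for i in range(1, n - 1)],
                           dtype=torch.int32)
    return vertices, indices

Because the vertices are built with differentiable torch ops, gradients from the rendered mask flow back into coords_2d.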

Turn on/off the edge sampling technique.

Edge sampling is a great technique for making differentiable (path-traced!) visibility work!

"It also depends on your material model: if you have discontinuities in your procedural shader, then you need some kind of edge sampling or prefiltering. Index of refraction of dielectric materials might also cause discontinuities."

To my knowledge, some rendering parameters do not need it, such as continuous parameters like diffuseTex for a continuous surface in a static scene, and edge sampling is heavy. This repo is very active (a good thing!).

As you mentioned in a previous issue answer:

"To turn off edge sampling, I would add "return;" at these two lines:
https://github.com/BachiLi/redner/blob/master/edge.cpp#L235
https://github.com/BachiLi/redner/blob/master/edge.cpp#L644
I might add a flag to do this in the future."

The code lines you pointed to in that answer are no longer accurate.
Perhaps make it an option -- a bool parameter in "render.options" (Python)?

It would be very convenient for testing or developing new algorithms based on redner.

Originally posted by @BachiLi in #27 (comment)

Question about supported light type

Hi,
Thanks for your awesome work! I have some questions about the light types in this renderer.
I noticed the renderer does not support pure point lights, which are mentioned in Section 5.2 of the paper, and there is only an area-light type in this implementation. I don't really understand why this renderer is limited in light type. Are other light types (e.g. spot lights, directional lights) not supported by this method?
Could you give a brief explanation of the problem or difficulty of using point lights?

Thanks a lot!

Optimization of shading normals : SVBRDF

Hello,
In the test code for SVBRDF optimization, you are providing diffuse, roughness and specular maps.

However, I was wondering whether we can optimize normal maps the same way. Does the material model itself not use normal maps?

Thanks.

Enabling multispectral rendering

Does redner currently support spectral rendering (where the output is generated with many spectral/color channels)? This feature is readily available in renderers like Mitsuba when it is compiled accordingly.

Is there a straightforward way to do that for Redner?

Thanks for your help!

CUDA Runtime Error

Hello,

First, thanks for the great work! I am only able to run the tutorials in CPU mode. When I enable the GPU, the scripts fail with:

CUDA Runtime Error: unspecified launch failure at buffer.h:86

buffer.h line 86:
checkCuda(cudaFree(data));

My configuration:
Ubuntu 16.04
CUDA 10.0
Nvidia Driver 4.18
OptiX 6.0
Pytorch 1.1.0
gcc 5.4

I have a Titan XP and a Quadro P6000.

Do you have any idea how I could resolve this issue?
Thanks

Bus error

I'm hitting a bus error that looks related to issue #3 but isn't related to assertions.

First, running an example line by line, the crash happens at img = render(0, *args) .

So I followed the same steps as described in #3 . Running via gdb gives:

(gdb) run test_shadow_light.py
Starting program: /home/ubuntu/miniconda3/bin/python test_shadow_light.py
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7fffa2ae7700 (LWP 19335)]
[New Thread 0x7fff894f9700 (LWP 19336)]
Scene construction, time: 0.07067 s
[New Thread 0x7fff836e7700 (LWP 19337)]
[New Thread 0x7fff8236b700 (LWP 19338)]
[New Thread 0x7fff81b6a700 (LWP 19339)]

Thread 1 "python" received signal SIGBUS, Bus error.
ChannelInfo::ChannelInfo (this=0x7fffffffb760, channels=..., use_gpu=<optimized out>) at /home/ubuntu/src/redner/channels.cpp:25
25              this->channels[i] = channels[i];
(gdb) p channels
$1 = (const std::vector<Channels, std::allocator<Channels> > &) @0x5555a8b80520: {<std::_Vector_base<Channels, std::allocator<Channels> >> = {
    _M_impl = {<std::allocator<Channels>> = {<__gnu_cxx::new_allocator<Channels>> = {<No data fields>}, <No data fields>}, 
      _M_start = 0x555557e38d60, _M_finish = 0x555557e38d64, _M_end_of_storage = 0x555557e38d64}}, <No data fields>}
(gdb) p i
$2 = 1
(gdb) p *this
$3 = {channels = 0xb02729000, num_channels = 1, num_total_dimensions = 3, radiance_dimension = 0, use_gpu = true}

My setup:

  • AWS EC2 g3s.xlarge (Tesla M60 / 5.2)
  • Driver Version: 418.40.04
  • CUDA Version: 10.1

Might it be another synchronization issue? Thanks in advance!

Installation problem

Hello!
Probably I am doing something wrong when installing redner. Here is the error I get when I run tutorials/01_optimize_single_triangle.py:

Traceback (most recent call last):
  File "tutorials/01_optimize_single_triangle.py", line 1, in <module>
    import pyredner
  File "/home/hleon/anaconda3/envs/render/lib/python3.6/site-packages/pyredner/__init__.py", line 7, in <module>
    from .render_pytorch import *
  File "/home/hleon/anaconda3/envs/render/lib/python3.6/site-packages/pyredner/render_pytorch.py", line 4, in <module>
    import redner
ImportError: libembree3.so.3: cannot open shared object file: No such file or directory

It seems I am not importing Embree correctly. Here are my CMake variables:

set(EMBREE_LIBRARY "/usr/lib64/libembree3.so.3")
set(EMBREE_INCLUDE_PATH "/usr/include/embree3")

Do you see something wrong? Any ideas why this could be happening? Thank you for your help!

Problem setting PATHs; Cannot find CUDA

Hi, I'm trying to install redner by following the directions in the README. After (painstakingly) installing all the dependencies, I tried running:

build git:(master) ✗ CUDA_INCLUDE_DIRS=/usr/local/cuda-10.0/include cmake ..

-- Could NOT find CUDA (missing: CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) (found suitable version "10.0", minimum required is "10")
CMake Warning (dev) at embree/common/cmake/test.cmake:31 (SET):
  implicitly converting 'INT' to 'STRING' type.
Call Stack (most recent call first):
  embree/CMakeLists.txt:93 (INCLUDE)
This warning is for project developers.  Use -Wno-dev to suppress it.

CMake Warning (dev) at embree/common/cmake/ispc.cmake:71 (SET):
  implicitly converting 'INT' to 'STRING' type.
Call Stack (most recent call first):
  embree/CMakeLists.txt:396 (INCLUDE)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- OpenImageIO not found in your environment. You can 1) install
                              via your OS package manager, or 2) install it
                              somewhere on your machine and point OPENIMAGEIO_ROOT to it. (missing: OPENIMAGEIO_INCLUDE_DIR OPENIMAGEIO_LIBRARY)
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
EMBREE_INCLUDE_PATH
   used as include directory in directory /home/jpchen/3d-scene-gen/path_tracer/redner
   ...
EMBREE_LIBRARY
    linked by target "redner" in directory /home/jpchen/3d-scene-gen/path_tracer/redner
THRUST_INCLUDE_DIR
   used as include directory in directory /home/jpchen/3d-scene-gen/path_tracer/redner

And my build is failing (presumably due to other reasons). From the error logs:

Error log
Determining if the pthread_kill exist failed with the following output:
Change Dir: /home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_429d1/fast
/usr/bin/make -f CMakeFiles/cmTC_429d1.dir/build.make CMakeFiles/cmTC_429d1.dir/build
make[1]: Entering directory '/home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_429d1.dir/CheckSymbolExists.c.o
/usr/bin/cc   -fPIC    -o CMakeFiles/cmTC_429d1.dir/CheckSymbolExists.c.o   -c /home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c
/home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c: In function ‘main’:
/home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:8:19: error: ‘pthread_kill’ undeclared (first use in this function)
 return ((int*)(&pthread_kill))[argc];
                 ^
/home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:8:19: note: each undeclared identifier is reported only once for each function it appears in
CMakeFiles/cmTC_429d1.dir/build.make:65: recipe for target 'CMakeFiles/cmTC_429d1.dir/CheckSymbolExists.c.o' failed
make[1]: *** [CMakeFiles/cmTC_429d1.dir/CheckSymbolExists.c.o] Error 1
make[1]: Leaving directory '/home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp'
Makefile:121: recipe for target 'cmTC_429d1/fast' failed
make: *** [cmTC_429d1/fast] Error 2

File /home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp/CheckSymbolExists.c:
/* */
#include <pthread.h>

int main(int argc, char** argv)
{
(void)argv;
#ifndef pthread_kill
return ((int*)(&pthread_kill))[argc];
#else
(void)argc;
return 0;
#endif
}

Determining if the function pthread_create exists in the pthreads failed with the following output:
Change Dir: /home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp

Run Build Command(s):/usr/bin/make cmTC_31319/fast
/usr/bin/make -f CMakeFiles/cmTC_31319.dir/build.make CMakeFiles/cmTC_31319.dir/build
make[1]: Entering directory '/home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp'
Building C object CMakeFiles/cmTC_31319.dir/CheckFunctionExists.c.o
/usr/bin/cc   -fPIC -DCHECK_FUNCTION_EXISTS=pthread_create   -o CMakeFiles/cmTC_31319.dir/CheckFunctionExists.c.o   -c /usr/local/share/cmake-3.14/Modules/CheckFunctionExists.c
Linking C executable cmTC_31319
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/cmTC_31319.dir/link.txt --verbose=1
/usr/bin/cc    -fPIC -DCHECK_FUNCTION_EXISTS=pthread_create    -rdynamic CMakeFiles/cmTC_31319.dir/CheckFunctionExists.c.o  -o cmTC_31319 -lpthreads
/usr/bin/ld: cannot find -lpthreads
collect2: error: ld returned 1 exit status
CMakeFiles/cmTC_31319.dir/build.make:86: recipe for target 'cmTC_31319' failed
make[1]: *** [cmTC_31319] Error 1
make[1]: Leaving directory '/home/jpchen/3d-scene-gen/path_tracer/redner/build/CMakeFiles/CMakeTmp'
Makefile:121: recipe for target 'cmTC_31319/fast' failed
make: *** [cmTC_31319/fast] Error 2

specs:

build git:(master) ✗ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130

build git:(master) ✗ gcc --version
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.11) 5.4.0 20160609

and I am using conda with Python 3.

When I set paths such as EMBREE_INCLUDE_PATH, etc., I still get the error that they are not found. Am I setting the paths correctly? Could you give an example path, or say which directories the paths should point to?

gcc9 w/o cuda: `isfinite` undefined for `Real&`s

Compiling without CUDA, gcc and clang complain in the .cpp files about isfinite being defined only for vectors, not for reals.

  • Option 1: include <cmath> in the .cpp files
  • Option 2: add using std::isfinite in vector.h instead (vector.h is included in all places that reference isfinite; this worked for me)

Build on Windows machine

The build script currently targets gcc and Linux.
Is there any plan to port to Windows and MSVC?

Error: copy-list-initialization cannot use a constructor marked "explicit"

When I build the project with make, the following error occurs:

error: copy-list-initialization cannot use a constructor marked "explicit"

That error happens when compiling "material.h"

inline std::tuple<int, int, int> get_diffuse_size() const { return {diffuse_reflectance.width, diffuse_reflectance.height, diffuse_reflectance.num_levels}; }

I also tested int w = diffuse_reflectance.width; the same error occurs.

Could you please give me some suggestions?

Installing Redner without CUDA

I am trying to build and install redner without CUDA (which was mentioned as an optional requirement). I have all other dependencies installed. When I run ccmake .. and try to configure, I get the following error:

CUDA_TOOLKIT_ROOT_DIR not found or specified

Even when I use a CMake gui and configure, I get this error:

CUDA_TOOLKIT_ROOT_DIR not found or specified
Could NOT find CUDA (missing: CUDA_TOOLKIT_ROOT_DIR CUDA_NVCC_EXECUTABLE CUDA_INCLUDE_DIRS CUDA_CUDART_LIBRARY) (Required is at least version "10")
Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
Configuring done

Is there a quick fix for this problem? In general, how do I get redner up and running without CUDA support?

Gaussian pyramid loss

Hi,

Thanks for the awesome work! I was curious about your implementation of the Gaussian pyramid loss referenced in Section 5.1 of your paper. I couldn't seem to find it in the code base. Is there a particular file I could look at that has this implementation? It seems like you only used it for one of the figures, so I wasn't sure whether you have code for it.

Thanks!
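For reference, the idea is to compute the image loss at several blurred/downsampled resolutions so that the coarse levels still provide gradients when the fine-level overlap is poor. A minimal sketch of such a pyramid loss (average pooling stands in for a Gaussian blur here; this is not the paper's exact implementation):

import torch
import torch.nn.functional as F

def pyramid_loss(img, target, levels=4):
    # img, target: HxWx3 image tensors as returned by the renderer.
    x = img.permute(2, 0, 1).unsqueeze(0)      # 1x3xHxW
    y = target.permute(2, 0, 1).unsqueeze(0)
    loss = (x - y).pow(2).mean()
    for _ in range(levels - 1):
        x = F.avg_pool2d(x, kernel_size=2)     # go one level coarser
        y = F.avg_pool2d(y, kernel_size=2)
        loss = loss + (x - y).pow(2).mean()
    return loss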

Python OpenEXR Old Boost Dependency

I have tried running the two_d_mesh.py demo and ran into an error regarding a missing Boost library (screenshot attached).

Upon further inspection, it seems that the Python OpenEXR extension is trying to use a libboost_system shared object below version 1.60. On my system I am unable to install any Boost libraries below version 1.69, so it is not straightforward for me to install the necessary dependencies.

No errors appear during the build phase either, so finding this error after the fact is a bit frustrating. Do you have any suggestions on how to go about fixing this?

About the living-room scenes

I tried to reproduce your optimization process on the room scenes (as presented in your paper and your recent paper, https://niessnerlab.org/projects/azinovic2019inverse.html )

I downloaded the original scene from
https://benedikt-bitterli.me/resources/mitsuba/living-room-3.zip

To make it readable for redner:

  1. I changed "sphere" light to "area light" defined in an obj file.
  2. I changed "Glass" material to "diffuse"
  3. I debugged the "toworld" ("lookat") vectors (to replace the matrix representation) in Mitsuba GUI.
    ...
    the modified XML can be loaded by Mitsuba.

However, it still fails to load in redner, at

https://github.com/BachiLi/redner/blob/master/buffer.h#L86

Would you like to share the living-room-3 scene you used in your paper (maybe the room scene in your newest CVPR paper as well) in this repo?

I would love to send you an email about this if you feel more comfortable.

BTW: I've been reading your code throughout. Really solid work, thumbs up!

NaNs popping in complicated tests

I am trying to use redner for a differentiable rendering task, and I ran into issues with more "complex" scenes. At first I thought my usage was wrong, but then I discovered I could reproduce the problem with the provided tests.
The simple tests (e.g. the single triangle) work as expected, but the teapot tests almost always produce NaNs in the gradients, sometimes very quickly:

$ python test_teapot_specular.py 
iteration: 0
loss: 24416.279296875
translation.grad: tensor([ 3.0038, -4.5837,  1.4213])
translation: tensor([19.5000,  0.5000,  1.5000], requires_grad=True)
iteration: 1
loss: 24524.2109375
translation.grad: tensor([nan, nan, nan])
translation: tensor([nan, nan, nan], requires_grad=True)
iteration: 2

I use the latest redner version (commit 1083be5) with Python 3.7, PyTorch 1.0, with and without CUDA (tried both), and Embree 3.4, on Anaconda on Linux.

I tried reducing the learning rate and tweaking the Adam parameters, but the NaNs still pop up. Sometimes it does converge to a reasonable result, sometimes not.

I experience the same problem in my own setup. I understand how tricky hyper-parameter tuning can be and that NaNs are not a problem exclusive to redner, but would you happen to have some tricks/fixes to avoid those pesky NaNs, or at least make their appearance a bit more predictable?

Thanks for the great work and the efforts you make to make it available!

Back-face culling

Is there a way to disable back-face culling currently?

This feature would be greatly appreciated!

"invalid context" when running redner.Scene()

When I run any of the tutorials I get a RuntimeError:

Traceback (most recent call last):
  File "tutorials/02_pose_estimation.py", line 84, in <module>
    img = render(0, *scene_args)
  File "/home/jpchen/3d-scene-gen/path_tracer/redner/pyredner/render_pytorch.py", line 276, in forward
    pyredner.get_use_gpu())
RuntimeError: Invalid context

This seems to be coming from the call scene = redner.Scene(camera, shapes, materials, area_lights, envmap, pyredner.get_use_gpu()), apparently because envmap is None. The pointer arithmetic is a bit convoluted, so I'm not sure how to go about debugging this.

I'm using Python 3 on Ubuntu 16.04 with CUDA 10.

Demo test_teapot_reflectance.py is not working correctly

I find that 'test_teapot_reflectance.py' cannot produce a correct optimization result using the latest code, but I remember getting a correct result with an earlier release.

So, does anyone know how to make 'test_teapot_reflectance.py' work again?

gpu.dockerfile err at the last RUN

I just modified the Dockerfile as below.

Added one line:
RUN sed -i 's|http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64|https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64|g' /etc/apt/sources.list.d/cuda.list

Modified one line:
ARG OPTIX_VERSION=7.0.0 (from 5.1.0)

The error occurs:
Step 21/21 : RUN if [ -d "build" ]; then rm -rf build; fi && mkdir build && cd build && cmake .. && make install -j 8 && cd / && rm -rf /app/build/
---> Running in f57283dc9ef4
-- The C compiler identification is GNU 7.4.0
-- The CXX compiler identification is GNU 7.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA 10.0.130
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Found Python: /opt/conda/lib/libpython3.6m.so (found suitable version "3.6.9", minimum required is "3.6") found components: Development
-- Looking for pthread.h
-- Looking for pthread.h - found
-- Looking for pthread_create
-- Looking for pthread_create - not found
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE
-- Found CUDA: /usr/local/cuda (found suitable version "10.0", minimum required is "10")
CMake Error: File /app/embree/kernels/rtcore_version.h.in does not exist.
CMake Error at embree/CMakeLists.txt:64 (CONFIGURE_FILE):
CONFIGURE_FILE Problem configuring file

CMake Error: File /app/embree/kernels/hash.h.in does not exist.
CMake Error at embree/CMakeLists.txt:68 (CONFIGURE_FILE):
CONFIGURE_FILE Problem configuring file

CMake Error at embree/CMakeLists.txt:93 (INCLUDE):
INCLUDE could not find load file:

test

CMake Error: File /app/embree/kernels/config.h.in does not exist.
CMake Error at embree/CMakeLists.txt:170 (CONFIGURE_FILE):
CONFIGURE_FILE Problem configuring file

CMake Error at embree/CMakeLists.txt:209 (MESSAGE):
Unsupported compiler: GNU

-- Configuring incomplete, errors occurred!
See also "/app/build/CMakeFiles/CMakeOutput.log".
See also "/app/build/CMakeFiles/CMakeError.log".
The command '/bin/sh -c if [ -d "build" ]; then rm -rf build; fi && mkdir build && cd build && cmake .. && make install -j 8 && cd / && rm -rf /app/build/' returned a non-zero code: 1

Error running (all) tests

$ python3 unit_tests.py
Traceback (most recent call last):
File "unit_tests.py", line 1, in
import pyredner
File "/home/jay/redner/pyredner/init.py", line 7, in
from .render_pytorch import *
File "/home/jay/redner/pyredner/render_pytorch.py", line 4, in
import redner
ImportError: /usr/lib/python3/dist-packages/redner.so: undefined symbol: rtcNewScene

Asserting even if CUDA is installed

$ python3 test_shadow_light.py        
python3: /home/jay/redner/scene.cpp:79: Scene::Scene(const Camera&, const std::vector<const Shape*>&, const std::vector<const Material*>&, const std::vector<const Light*>&, bool): Assertion `false' failed.
[1]    9201 abort (core dumped)  python3 test_shadow_light.py

I'm getting this assert for all tests. I have CUDA installed and enabled during CMake.

#ifdef __NVCC__
    ...
#else 
    assert(false)
#endif

Here's the snippet from scene.cpp. I'm guessing nvcc is not being used to compile scene.cpp

CUDA illegal memory access when running redner.Scene(...)

I built redner with the following setup:

  • Ubuntu 16.04
  • gcc 7.4.0
  • NVIDIA Driver Version: 410.78
  • CUDA 10.0
  • Optix 5.1.1
  • Embree 3.2.3
  • Python 3.7.3
  • PyTorch 1.1 (tried both stable and nightly builds)

However, for each of the tests the same CUDA illegal memory access error occurs:

Starting program: /home/roman/miniconda3/bin/python test_single_triangle_camera.py  
[Thread debugging using libthread_db enabled]  
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[Detaching after fork from child process 11862]
[New Thread 0x7fffa26a8700 (LWP 11864)]
[New Thread 0x7fff884f4700 (LWP 11865)]
[New Thread 0x7fff87cf3700 (LWP 11866)]
CUDA Runtime Error: an illegal memory access was encountered at /home/roman/libs/redner/buffer.h:86
[Thread 0x7fff87cf3700 (LWP 11866) exited]
[Thread 0x7fff884f4700 (LWP 11865) exited]
[Thread 0x7ffff7fb4700 (LWP 11858) exited]
[Inferior 1 (process 11858) exited with code 01]

After further inspection it turned out the error occurs in render_pytorch.py lines 305-311 in the call scene = redner.Scene(camera, shapes, materials, area_lights, envmap, pyredner.get_use_gpu(), pyredner.get_device().index if pyredner.get_device().index is not None else -1)

Before this, I also tried building with OptiX 6.0.0; running the tests then resulted in an 'invalid context' error, as described in this issue.

More NaN Issues

redner/envmap.h, line 265 (commit e46647a):

return envmap.pdf_norm * fabs(lum_fy * sin_theta_fy + lum_cy * sin_theta_cy) / sin_theta;

Similar to #21, there is a rare case in the above code where lum_fy, lum_cy, and sin_theta are all exactly 0. In that situation, envmap_pdf returns -nan.

I'm not sure how this should be fixed. Any ideas?
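Not sure either, but one possible guard is to treat the 0/0 case as a zero-probability sample and return 0 before dividing; sketched here in Python as a hypothetical stand-alone version of the expression above (whether a zero pdf interacts correctly with the callers would still need checking):

def envmap_pdf_term(pdf_norm, lum_fy, sin_theta_fy, lum_cy, sin_theta_cy, sin_theta):
    # Mirrors: pdf_norm * fabs(lum_fy * sin_theta_fy + lum_cy * sin_theta_cy) / sin_theta
    numerator = abs(lum_fy * sin_theta_fy + lum_cy * sin_theta_cy)
    if sin_theta <= 0.0 or numerator == 0.0:
        # e.g. lum_fy == lum_cy == sin_theta == 0: no contribution, avoid -nan.
        return 0.0
    return pdf_norm * numerator / sin_theta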

running error

Hi,

When I run your tutorial/01_optimize_single_triangle.py, the code hits an assertion error when it reaches line 146, loss.backward():

/h/wenzheng/pydr2/lib/python3.7/site-packages/skimage/util/dtype.py:141: UserWarning: Possible precision loss when converting from float32 to uint8
.format(dtypeobj_in, dtypeobj_out))
iteration: 0
loss: 758.4083251953125
grad: tensor([[-131.9577, 582.0557, -426.4650],
[ 183.6868, 882.2390, -330.1663],
[ 58.8177, -528.9055, -201.2860]])
vertices: tensor([[-1.9500, 1.4500, 0.3500],
[ 0.8500, 1.1500, -0.2500],
[-0.4500, -1.3500, 0.2500]], requires_grad=True)
iteration: 1
loss: 609.8726196289062
python: /h/wenzheng/project/redner/material.h:208: Vector3 bsdf(const Material&, const SurfacePoint&, const Vector3&, const Vector3&, Real): Assertion `roughness > 0.f' failed.
python: /h/wenzheng/project/redner/material.h:208: Vector3 bsdf(const Material&, const SurfacePoint&, const Vector3&, const Vector3&, Real): Assertion `roughness > 0.f' failed.
python: /h/wenzheng/project/redner/material.h:208: Vector3 bsdf(const Material&, const SurfacePoint&, const Vector3&, const Vector3&, Real): Assertion `roughness > 0.f' failed.
Aborted (core dumped)

Could you please tell me how to fix this?

Thank you!

Importing Pyredner outside build folder

Hi,

I noticed that the import pyredner command only works when I am inside redner's build folder. It does not work in a subfolder of the build folder either (e.g. if I am under redner/build/test, I cannot run import pyredner from the test folder). Is there a reason for this? Thank you!

Here is the error message below:

import pyredner
Traceback (most recent call last):
File "", line 1, in
File "/home/abhinav/anaconda3/lib/python3.7/site-packages/pyredner/init.py", line 9, in
from .render_pytorch import *
File "/home/abhinav/anaconda3/lib/python3.7/site-packages/pyredner/render_pytorch.py", line 4, in
import redner
ImportError: libembree3.so.3: cannot open shared object file: No such file or directory

Square Root NaN Bug

There's a bug in the envmap code where it's possible to get a NaN evaluated due to a sqrt function call at the following callsite:

redner/envmap.h, line 262 (commit 7a6074d):

auto sin_theta = sqrt(1 - square(local_dir.y));

It seems that if local_dir.y equals 1, then 1 - square(local_dir.y) can evaluate to a tiny negative number (printed as -0.0000), and taking its square root then gives a NaN.
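The usual fix for this pattern is to clamp the argument of the square root at zero, so that values that are negative only because of floating-point rounding map to 0 instead of NaN. Illustrated in Python (the actual code is C++, so this is the idea rather than a patch):

import math

def sin_theta_from_y(y):
    # 1 - y*y can round to a tiny negative number when |y| is numerically 1;
    # clamping at zero keeps the square root well defined.
    return math.sqrt(max(0.0, 1.0 - y * y))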
