
neural_renderer's People

Contributors

0xsuu, czw0078, hiroharu-kato, lotayou, nitish11, nkolot, shunsukesaito


neural_renderer's Issues

pip install failed with an Anaconda env?

ERROR: Complete output from command /home/gaofei/anaconda3/envs/dynamic1/bin/python -u -c 'import setuptools, tokenize;file='"'"'/tmp/pip-install-azhbhou2/neural-renderer-pytorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-6ohe3bci --python-tag cp35:
ERROR: running bdist_wheel
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/load_obj.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/rasterize.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/save_obj.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/perspective.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/look_at.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/renderer.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/get_points_from_angles.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/mesh.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/__init__.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/look.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/projection.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/lighting.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/vertices_to_faces.py -> build/lib.linux-x86_64-3.5/neural_renderer
creating build/lib.linux-x86_64-3.5/neural_renderer/cuda
copying neural_renderer/cuda/__init__.py -> build/lib.linux-x86_64-3.5/neural_renderer/cuda
running build_ext
building 'neural_renderer.cuda.load_textures' extension
creating build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/neural_renderer
creating build/temp.linux-x86_64-3.5/neural_renderer/cuda
gcc -pthread -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include -I/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/TH -I/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/THC -I:/usr/local/cuda-9.0/include -I/home/gaofei/anaconda3/envs/dynamic1/include/python3.5m -c neural_renderer/cuda/load_textures_cuda.cpp -o build/temp.linux-x86_64-3.5/neural_renderer/cuda/load_textures_cuda.o -DTORCH_EXTENSION_NAME=neural_renderer.cuda.load_textures -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
neural_renderer/cuda/load_textures_cuda.cpp: In function ‘at::Tensor load_textures(at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int)’:
neural_renderer/cuda/load_textures_cuda.cpp:15:79: error: ‘AT_CHECK’ was not declared in this scope
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
^
neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
^
neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(image);
^
In file included from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/pytypes.h:12:0,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/cast.h:13,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/attr.h:13,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/pybind11.h:43,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/torch/torch.h:6,
from neural_renderer/cuda/load_textures_cuda.cpp:1:
neural_renderer/cuda/load_textures_cuda.cpp: At global scope:
:0:37: error: expected initializer before ‘.’ token
/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/detail/common.h:212:47: note: in definition of macro ‘PYBIND11_CONCAT’
#define PYBIND11_CONCAT(first, second) first##second
^
neural_renderer/cuda/load_textures_cuda.cpp:37:1: note: in expansion of macro ‘PYBIND11_MODULE’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
neural_renderer/cuda/load_textures_cuda.cpp:37:17: note: in expansion of macro ‘TORCH_EXTENSION_NAME’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
:0:37: error: expected initializer before ‘.’ token
/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/detail/common.h:171:51: note: in definition of macro ‘PYBIND11_PLUGIN_IMPL’
extern "C" PYBIND11_EXPORT PyObject *PyInit_##name()
^
neural_renderer/cuda/load_textures_cuda.cpp:37:1: note: in expansion of macro ‘PYBIND11_MODULE’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
neural_renderer/cuda/load_textures_cuda.cpp:37:17: note: in expansion of macro ‘TORCH_EXTENSION_NAME’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
error: command 'gcc' failed with exit status 1

ERROR: Failed building wheel for neural-renderer-pytorch
Running setup.py clean for neural-renderer-pytorch
Failed to build neural-renderer-pytorch
Installing collected packages: neural-renderer-pytorch
Running setup.py install for neural-renderer-pytorch ... error
ERROR: Complete output from command /home/gaofei/anaconda3/envs/dynamic1/bin/python -u -c 'import setuptools, tokenize;file='"'"'/tmp/pip-install-azhbhou2/neural-renderer-pytorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-h5tpulv2/install-record.txt --single-version-externally-managed --compile:
ERROR: running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.5
creating build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/load_obj.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/rasterize.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/save_obj.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/perspective.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/look_at.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/renderer.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/get_points_from_angles.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/mesh.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/__init__.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/look.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/projection.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/lighting.py -> build/lib.linux-x86_64-3.5/neural_renderer
copying neural_renderer/vertices_to_faces.py -> build/lib.linux-x86_64-3.5/neural_renderer
creating build/lib.linux-x86_64-3.5/neural_renderer/cuda
copying neural_renderer/cuda/__init__.py -> build/lib.linux-x86_64-3.5/neural_renderer/cuda
running build_ext
building 'neural_renderer.cuda.load_textures' extension
creating build/temp.linux-x86_64-3.5
creating build/temp.linux-x86_64-3.5/neural_renderer
creating build/temp.linux-x86_64-3.5/neural_renderer/cuda
gcc -pthread -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include -I/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/TH -I/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/THC -I:/usr/local/cuda-9.0/include -I/home/gaofei/anaconda3/envs/dynamic1/include/python3.5m -c neural_renderer/cuda/load_textures_cuda.cpp -o build/temp.linux-x86_64-3.5/neural_renderer/cuda/load_textures_cuda.o -DTORCH_EXTENSION_NAME=neural_renderer.cuda.load_textures -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
neural_renderer/cuda/load_textures_cuda.cpp: In function ‘at::Tensor load_textures(at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, int)’:
neural_renderer/cuda/load_textures_cuda.cpp:15:79: error: ‘AT_CHECK’ was not declared in this scope
#define CHECK_CUDA(x) AT_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
^
neural_renderer/cuda/load_textures_cuda.cpp:17:24: note: in expansion of macro ‘CHECK_CUDA’
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
^
neural_renderer/cuda/load_textures_cuda.cpp:28:5: note: in expansion of macro ‘CHECK_INPUT’
CHECK_INPUT(image);
^
In file included from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/pytypes.h:12:0,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/cast.h:13,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/attr.h:13,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/pybind11.h:43,
from /home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/torch/torch.h:6,
from neural_renderer/cuda/load_textures_cuda.cpp:1:
neural_renderer/cuda/load_textures_cuda.cpp: At global scope:
:0:37: error: expected initializer before ‘.’ token
/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/detail/common.h:212:47: note: in definition of macro ‘PYBIND11_CONCAT’
#define PYBIND11_CONCAT(first, second) first##second
^
neural_renderer/cuda/load_textures_cuda.cpp:37:1: note: in expansion of macro ‘PYBIND11_MODULE’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
neural_renderer/cuda/load_textures_cuda.cpp:37:17: note: in expansion of macro ‘TORCH_EXTENSION_NAME’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
:0:37: error: expected initializer before ‘.’ token
/home/gaofei/anaconda3/envs/dynamic1/lib/python3.5/site-packages/torch/lib/include/pybind11/detail/common.h:171:51: note: in definition of macro ‘PYBIND11_PLUGIN_IMPL’
extern "C" PYBIND11_EXPORT PyObject *PyInit_##name()
^
neural_renderer/cuda/load_textures_cuda.cpp:37:1: note: in expansion of macro ‘PYBIND11_MODULE’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
neural_renderer/cuda/load_textures_cuda.cpp:37:17: note: in expansion of macro ‘TORCH_EXTENSION_NAME’
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
^
error: command 'gcc' failed with exit status 1
----------------------------------------
ERROR: Command "/home/gaofei/anaconda3/envs/dynamic1/bin/python -u -c 'import setuptools, tokenize;file='"'"'/tmp/pip-install-azhbhou2/neural-renderer-pytorch/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' install --record /tmp/pip-record-h5tpulv2/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-azhbhou2/neural-renderer-pytorch/

Windows Anaconda support?

I'm trying to use this package but running into compilation errors. Has anyone tried to make it run under Anaconda on Windows?

The errors for me are:
Anaconda3\lib\site-packages\torch\lib\include\torch\csrc\api\include\torch/torch.h(7): fatal error C1021: invalid preprocessor command 'warning'

How to save as .obj?

Hi,

I used this code to save an .obj file:


import chainer
import numpy as np
import scipy.misc
import neural_renderer

renderer = neural_renderer.Renderer()

vertices, faces, textures = neural_renderer.load_obj( '/home/user/jay/straight.obj', load_texture=True, texture_size=16)

neural_renderer.save_obj('/home/user/jay/chainer_save.obj', vertices, faces, textures)

But I got an error at this assert, so I changed this line to

self.register_buffer('vertices', vertices[:, :])
self.register_buffer('faces', faces[:, :])

But then I ran into an issue in renderer.py, so I changed the code to the line below.

faces = torch.cat((faces, faces[:, list(reversed(range(faces.shape[-1])))]), dim=1).detach()

Then I got an error in vertices_to_faces.py, so I changed the vertices and faces asserts below to 2.

assert (vertices.ndimension() == 2)
assert (faces.ndimension() == 2)

But I am getting an error at this line:
assert (vertices.shape[0] == faces.shape[0])
because vertices.shape[0] = 34817 and faces.shape[0] = 69630.

Where am I going wrong? Could you please help me fix this?
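
For reference, a hedged sketch of the pattern used in the bundled examples (not a change to the library): keep the tensors un-batched for load_obj and save_obj, and only add the leading batch dimension when passing them to the renderer, instead of editing the asserts inside the package.

import neural_renderer as nr

# Un-batched I/O: load_obj returns vertices [V, 3], faces [F, 3],
# textures [F, 16, 16, 16, 3]; save_obj expects the same shapes.
vertices, faces, textures = nr.load_obj('/home/user/jay/straight.obj',
                                        load_texture=True, texture_size=16)
nr.save_obj('/home/user/jay/chainer_save.obj', vertices, faces, textures)

# Only when rendering, add the batch dimension the renderer expects:
vertices_batched = vertices[None, :, :]   # [1, V, 3]
faces_batched = faces[None, :, :]         # [1, F, 3]
textures_batched = textures[None, ...]    # [1, F, 16, 16, 16, 3]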

Custom Background Images

Is there a feature to add custom background images to replace the black background that is provided currently?
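
If not, a minimal compositing sketch (plain NumPy, assuming a foreground mask for the mesh is available, e.g. from a silhouette rendering of the same scene) would be:

import numpy as np

# Blend a rendered RGB image over an arbitrary background of the same size.
# rgb: (H, W, 3) in [0, 1]; alpha: (H, W) in [0, 1]; background: (H, W, 3).
def composite_over_background(rgb, alpha, background):
    alpha = alpha[..., None]                       # (H, W, 1) for broadcasting
    return alpha * rgb + (1.0 - alpha) * background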

How to render 2D vertex coordinates into a mask?

How can I use this library to render a 2D image from coordinates (a polygon)? I have predictions in the form of vertex coordinates (batch, num_points, x, y) and I want to render them into a 2D binary mask. Is it possible? Any insight would be really helpful!
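
One hedged route, assuming the polygons are convex (or at least star-shaped) and using an illustrative helper that is not part of the library: fan-triangulate each polygon, lift it to z = 0, and rasterize its silhouette as the binary mask.

import numpy as np

# Fan-triangulate a polygon given as (num_points, 2) coordinates and lift it
# to z = 0, producing (vertices, faces) that a mesh renderer's silhouette
# mode can rasterize into a binary mask.
def polygon_to_mesh(points_2d):
    num_points = points_2d.shape[0]
    vertices = np.concatenate([points_2d, np.zeros((num_points, 1))], axis=1)  # (N, 3)
    faces = np.stack([
        np.zeros(num_points - 2, dtype=np.int64),   # fan apex: vertex 0
        np.arange(1, num_points - 1),               # current vertex
        np.arange(2, num_points),                   # next vertex
    ], axis=1)                                      # (N - 2, 3)
    return vertices, faces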

Only square images?

Hi, nice work with this! Am I correct in saying that this only works for square images? Are you planning to extend it to generic image sizes?

Better Compatibility for __CUDA_ARCH__ for RTX GPUs

at rasterize_cuda_kernel.cu
you need to change
#if __CUDA_ARCH__ < 600 and defined(__CUDA_ARCH__)
//blablabla
#endif
to

#if !defined(__CUDA_ARCH__) || __CUDA_ARCH__ >= 600
#else
static __inline__ __device__ double atomicAdd(double* address, double val) {
    unsigned long long int* address_as_ull = (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                __double_as_longlong(val + __longlong_as_double(assumed)));
    // Note: uses integer comparison to avoid hang in case of NaN (since NaN != NaN)
    } while (assumed != old);
    return __longlong_as_double(old);
}
#endif

In my case this resolved the nvcc errors raised during setup.py.

Running on several GPUs

Hello everyone!
I am working on a project that makes use of your neural renderer implementation. We would like to run our code in parallel on several GPUs. Running the code on a GPU other than 0 gave us an error, so we tried to pass the device explicitly (modifying the source code of the renderer) to force it to work on gpu:1. In that case the renderer returns a tensor of zeros. What could be the problem? Is it mandatory to run the renderer on gpu:0?

Thanks so much in advance!
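
A minimal sketch of the plain PyTorch side, assuming the renderer only needs its inputs and the default CUDA device to agree (whether this package's kernels actually honour a non-default device is exactly what the issue is asking):

import torch

# Select GPU 1 explicitly and keep all renderer inputs on that device.
device = torch.device('cuda:1')
torch.cuda.set_device(device)                      # default device for kernel launches
vertices = torch.rand(1, 100, 3, device=device)    # dummy data for illustration
faces = torch.randint(0, 100, (1, 200, 3), device=device)
textures = torch.ones(1, 200, 4, 4, 4, 3, device=device)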

Could not load obj correctly.

Awesome work! I am trying to load some fancy .obj files without image textures, but there is a difference between the result from Preview (the Mac app) and this renderer. The geometry seems to load correctly, but some of the texture is lost, and it seems to be a general problem since it happened with every model I tried. Below is a simple comparison: the left side is the rendering result and the right side is the Preview result.
[comparison screenshot: camaro_comp]
The 3D model was downloaded from 3D Warehouse and exported to .obj from the .skp file. The .obj (1.4 MB) and .mtl (1.8 KB) files are attached.
Could you help me check this problem? What do you think?
Thanks a lot!

The name shadowing issue for the package 'neural_renderer'

Hi, I tried the code and could not execute python3 ./examples/example1.py successfully. It raised the error

Traceback (most recent call last):
  File "examples/example1.py", line 12, in <module>
    import neural_renderer as nr
  File "/foo/bar/neural_renderer/neural_renderer/__init__.py", line 3, in <module>
    from .load_obj import load_obj
  File "/foo/bar/neural_renderer/neural_renderer/load_obj.py", line 8, in <module>
    import neural_renderer.cuda.load_textures as load_textures_cuda
ImportError: No module named 'neural_renderer.cuda.load_textures'

It seems to be a "name shadowing" problem: the local package directory conflicts with the installed package. After renaming the inner (second) "neural_renderer" directory to something else, it worked as expected. Would it be better to rename the local directory, or at least mention this in the README.md? I am not sure whether this is a common issue. I am using Python 3. Thanks.

PS: Referenced from The name shadowing trap

A simple case demonstrating "name shadowing" (assuming you have numpy installed):

  1. mkdir numpy
  2. touch numpy/__init__.py
  3. python3 -c "import numpy.random" and you will get "No module named..." error
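
A quick diagnostic for the same problem (plain Python, nothing specific to this package):

# If the printed path points into the source checkout rather than
# site-packages, the local directory is shadowing the installed package.
import neural_renderer
print(neural_renderer.__file__)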

[GCC version issue]

Hi daniilidis, thanks for your recent update!
I've just found out that compiling with GCC 6.4.0 causes an error, as also mentioned in this issue, while compiling with GCC 5.x works fine (both 5.4.0 and 5.5.0 were tested and passed).

I think it would be better to specify in the README the GCC version required for successful compilation, wouldn't it? Thanks for your consideration.

How to train?

The paper trains the network on ShapeNet. How do I go about doing that?

Thanks

Installation error on Ubuntu 18.04 with PyTorch 0.4.1, CUDA 9.1, GCC 7.3.0

I have PyTorch installed already.

python --version: Python 3.7.0
nvcc --version: release 9.1, V9.1.85
gcc --version: 7.3.0

There are pages and pages of errors, but here is the beginning of the output:

running install
running bdist_egg
running egg_info
writing neural_renderer.egg-info/PKG-INFO
writing dependency_links to neural_renderer.egg-info/dependency_links.txt
writing requirements to neural_renderer.egg-info/requires.txt
writing top-level names to neural_renderer.egg-info/top_level.txt
reading manifest file 'neural_renderer.egg-info/SOURCES.txt'
writing manifest file 'neural_renderer.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'neural_renderer.cuda.load_textures' extension
gcc -pthread -B /home/jim/anaconda3/envs/pytorch/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include -I/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/TH -I/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/THC -I/usr/include -I/home/jim/anaconda3/envs/pytorch/include/python3.7m -c neural_renderer/cuda/load_textures_cuda.cpp -o build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda.o -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/pybind11/cast.h:16:0,
                 from /home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/pybind11/attr.h:13,
                 from /home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/pybind11/pybind11.h:43,
                 from /home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/torch/torch.h:5,
                 from neural_renderer/cuda/load_textures_cuda.cpp:1:
/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/pybind11/detail/internals.h:82:34: warning: ‘int PyThread_create_key()’ is deprecated [-Wdeprecated-declarations]
     decltype(PyThread_create_key()) tstate = 0; // Usually an int but a long on Cygwin64 with Python 3.x

There are a bunch of warnings that int PyThread_create_key() is deprecated, then...

/usr/bin/nvcc -I/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include -I/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/TH -I/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/THC -I/usr/include -I/home/jim/anaconda3/envs/pytorch/include/python3.7m -c neural_renderer/cuda/load_textures_cuda_kernel.cu -o build/temp.linux-x86_64-3.7/neural_renderer/cuda/load_textures_cuda_kernel.o -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options '-fPIC' -std=c++11
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/ATen/TensorMethods.h:646:36:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/jim/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/lib/include/ATen/TensorMethods.h:646:36:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~

There are many pages of similar errors, which I've omitted here for length.

Does anyone know how to fix these errors? I assume it's some versioning problem with CUDA, GCC, etc.

PyTorch version 0.4.0 and CUDA runtime version

Hi! It looks like you've replaced AT_ASSERT with AT_CHECK in the PyPI version, so I was not able to install through pip with pytorch==0.4.0 on my machine. If that is the case, could you please update the README?

When I switched to pytorch==0.4.1, I could install the package but was not able to run the examples. Here are the errors I got:

Error in forward_face_index_map_1: CUDA driver version is insufficient for CUDA runtime version
Error in forward_face_index_map_2: CUDA driver version is insufficient for CUDA runtime version
Error in forward_texture_sampling: CUDA driver version is insufficient for CUDA runtime version

I'm sure that my PyTorch installation itself is working. Any suggestions are highly appreciated!
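
A quick diagnostic sketch: "CUDA driver version is insufficient for CUDA runtime version" usually means the NVIDIA driver is older than the CUDA runtime that PyTorch (and this extension) were built against, so it helps to compare the values below with the driver version reported by nvidia-smi.

import torch

print(torch.__version__)          # installed PyTorch version, e.g. 0.4.1
print(torch.version.cuda)         # CUDA runtime version PyTorch was built with
print(torch.cuda.is_available())  # False here also points at a driver/runtime mismatch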

Questions about texture

@nkolot
Hi, in example3.py the size of the texture tensor is torch.Size([1, 13776, 4, 4, 4, 3]); what is the meaning of this? If I have RGB information for each vertex, how can I transform it into the torch.Size([1, 13776, 4, 4, 4, 3]) tensor used in the code?
Thanks
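
For reference, the six dimensions correspond to batch size, number of faces, and a texture_size x texture_size x texture_size RGB sample cube per face (texture_size = 4 here). A hedged sketch (illustrative helper, not a library function) for turning per-vertex RGB into that layout by giving each face the mean colour of its three vertices:

import torch

# vertex_colors: [num_vertices, 3] in [0, 1]; faces: [num_faces, 3] (long).
# Returns a [1, num_faces, ts, ts, ts, 3] tensor usable as `textures`.
def vertex_colors_to_textures(vertex_colors, faces, texture_size=4):
    face_colors = vertex_colors[faces].mean(dim=1)          # [num_faces, 3]
    textures = face_colors[None, :, None, None, None, :]    # [1, F, 1, 1, 1, 3]
    return textures.expand(-1, -1, texture_size, texture_size,
                           texture_size, -1).contiguous()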

rendering glitches

I'm working on a project where I need to find optimal camera parameters, but I am getting some strange rendering artifacts when I try to use this renderer from some viewpoints.

Good: [screenshots: good_1, good_2]

Bad: [screenshots: bad_0 through bad_5]

Here is a zip of the files needed to reproduce this issue: test_nr.zip

I've been working on this for a few days but can't figure out what the problem is. Do you have any ideas about what could be causing this?

PyTorch 1.0 support?

Hi guys!
Does your project support PyTorch 1.0? I've tried to install it with PyTorch 1.0 but got the following error message:

(.venv) ➜  neural_renderer git:(master) python3 setup.py install             
running install
running bdist_egg
running egg_info
writing neural_renderer.egg-info/PKG-INFO
writing dependency_links to neural_renderer.egg-info/dependency_links.txt
writing requirements to neural_renderer.egg-info/requires.txt
writing top-level names to neural_renderer.egg-info/top_level.txt
reading manifest file 'neural_renderer.egg-info/SOURCES.txt'
writing manifest file 'neural_renderer.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
building 'neural_renderer.cuda.load_textures' extension
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fdebug-prefix-map=/build/python3.6-sXpGnM/python3.6-3.6.3=. -specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/TH -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/daiver/coding/neural_renderer/.venv/include -I/usr/include/python3.6m -c neural_renderer/cuda/load_textures_cuda.cpp -o build/temp.linux-x86_64-3.6/neural_renderer/cuda/load_textures_cuda.o -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from neural_renderer/cuda/load_textures_cuda.cpp:1:0:
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include/torch/torch.h:7:2: warning: #warning "Including torch/torch.h for C++ extensions is deprecated. Please include torch/extension.h" [-Wcpp]
 #warning \
  ^~~~~~~
/usr/local/cuda/bin/nvcc -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/torch/csrc/api/include -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/TH -I/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/daiver/coding/neural_renderer/.venv/include -I/usr/include/python3.6m -c neural_renderer/cuda/load_textures_cuda_kernel.cu -o build/temp.linux-x86_64-3.6/neural_renderer/cuda/load_textures_cuda_kernel.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --compiler-options '-fPIC' -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/core/TensorMethods.h:1117:48:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:2558:85:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor&, at::Tensor&, at::Tensor&>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (3ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:2558:85:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>}; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:2558:85:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor&, at::Tensor&, at::Tensor&>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor&, at::Tensor&, at::Tensor&>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:2558:85:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (4, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor&, at::Tensor&, at::Tensor&>&&; bool <anonymous> = true; _Elements = {at::Tensor&, at::Tensor&, at::Tensor&}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3623:197:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (5ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3623:197:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3623:197:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (6, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3623:197:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (6, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:3626:267:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> > >&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, std::vector<at::Tensor, std::allocator<at::Tensor> >}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:248:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107:   required from here
/usr/include/c++/6/tuple:483:67: error: mismatched argument pack lengths while expanding ‘std::is_constructible<_Elements, _UElements&&>’
       return __and_<is_constructible<_Elements, _UElements&&>...>::value;
                                                                   ^~~~~
/usr/include/c++/6/tuple:484:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_MoveConstructibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:626:362:   required by substitution of ‘template<class ... _UElements, typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; typename std::enable_if<(((std::_TC<(sizeof... (_UElements) == 1), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NotSameTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>()) && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && (4ul >= 1)), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107:   required from here
/usr/include/c++/6/tuple:489:65: error: mismatched argument pack lengths while expanding ‘std::is_convertible<_UElements&&, _Elements>’
       return __and_<is_convertible<_UElements&&, _Elements>...>::value;
                                                                 ^~~~~
/usr/include/c++/6/tuple:490:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_ImplicitlyMoveConvertibleTuple() [with _UElements = {std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>}; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:662:419:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(const std::tuple<_Args1 ...>&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<const tuple<_Elements ...>&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = const std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
/usr/include/c++/6/tuple: In instantiation of ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’:
/usr/include/c++/6/tuple:686:422:   required by substitution of ‘template<class ... _UElements, class _Dummy, typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> > constexpr std::tuple< <template-parameter-1-1> >::tuple(std::tuple<_Args1 ...>&&) [with _UElements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}; _Dummy = void; typename std::enable_if<((std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_MoveConstructibleTuple<_UElements ...>() && std::_TC<(1ul == sizeof... (_UElements)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_ImplicitlyMoveConvertibleTuple<_UElements ...>()) && std::_TC<(std::is_same<_Dummy, void>::value && (1ul == 1)), at::Tensor, at::Tensor, at::Tensor, at::Tensor>::_NonNestedTuple<tuple<_Elements ...>&&>()), bool>::type <anonymous> = <missing>]’
/home/daiver/coding/neural_renderer/.venv/lib/python3.6/site-packages/torch/lib/include/ATen/Functions.h:4119:107:   required from here
/usr/include/c++/6/tuple:495:244: error: wrong number of template arguments (5, should be 2)
       return  __and_<__not_<is_same<tuple<_Elements...>,
                                                                                                                                                                                                                                                    ^    
/usr/include/c++/6/type_traits:1558:8: note: provided for ‘template<class _From, class _To> struct std::is_convertible’
     struct is_convertible
        ^~~~~~~~~~~~~~
/usr/include/c++/6/tuple:502:1: error: body of constexpr function ‘static constexpr bool std::_TC<<anonymous>, _Elements>::_NonNestedTuple() [with _SrcTuple = std::tuple<at::Tensor, at::Tensor, at::Tensor, at::Tensor>&&; bool <anonymous> = true; _Elements = {at::Tensor, at::Tensor, at::Tensor, at::Tensor}]’ not a return-statement
     }
 ^
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 1
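
In case it helps with diagnosing: the failing instantiations are all inside /usr/include/c++/6/tuple and the bundled ATen headers, so this may be a host g++ / nvcc / PyTorch header mismatch rather than anything in neural_renderer itself. A minimal sketch (assuming gcc and nvcc are on PATH and torch imports in the same environment) that I can run to report the versions involved:

# Sketch: print the host compiler, CUDA toolkit and PyTorch versions in this env.
# Assumes `gcc` and `nvcc` are on PATH; adjust the commands for other setups.
import subprocess
import torch

def first_line(cmd):
    """Return the first line of a command's output, or a short error note."""
    try:
        return subprocess.check_output(cmd, universal_newlines=True).splitlines()[0]
    except (OSError, subprocess.CalledProcessError) as exc:
        return 'not available ({})'.format(exc)

print('gcc   :', first_line(['gcc', '--version']))
print('nvcc  :', first_line(['nvcc', '--version']))
print('torch :', torch.__version__, '- built for CUDA', torch.version.cuda)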

Load Textures With UV Coordinates Out Of Range 0.0~1.0

I am very grateful that you ported the neural mesh renderer by Hiroharu Kato from Chainer to PyTorch. Thanks a lot for your great work!

I tried tests_load_obj.py and it works well, so I took it as a reference and wrote texture_load_texture1.py to load textured models from the Pix3D dataset for my research project. However, when I load some models and run the script several times (for example, the IKEA model IKEA_LEIRVIK), it often raises an error:

ValueError: Images of type float must be between -1 and 1.

By debugging I found that the output NumPy array contains NaN values, so I tried to work around the issue by adding sanitizing code to test_load_textured2.py (roughly the sketch shown after the images below); it produces this result:

(attached images: model_pkr_out2.png and enlarger.png)
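
For reference, a minimal sketch of the kind of sanitizing code I added (the `image` variable is just a stand-in for the float array my script gets back from the renderer; the exact code in test_load_textured2.py differs slightly):

# Sketch of the workaround: sanitize the rendered image before saving it.
# `image` stands for the float (H, W, 3) NumPy array produced earlier in the
# script; scikit-image's imsave rejects float images with values outside
# [-1, 1], which is what the stray NaN pixels trigger.
import numpy as np
from skimage.io import imsave

image = np.nan_to_num(image)      # replace NaN / inf with finite numbers
image = np.clip(image, 0.0, 1.0)  # clamp to the range imsave accepts
imsave('model_pkr_out2.png', image)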

You can see from the enlarged image that the values of some pixels along the edge of the object are off. I also tried the same model with the original Chainer version and got this error message:

cupy.cuda.runtime.CUDARuntimeError: cudaErrorIllegalAddress: an illegal memory access was encountered

Do you have any idea about this issue? What do you think causes the problem (the quality of the .obj model, or a bug in the source code)? Would it be possible to work together to fix it?

Best regards,

Chenfei Wang

render_depth is not returning the expected image

My example code is not working, and the documentation is scarce. Can you help?
Running the code below does not produce the expected image.
I am also not sure where the camera intrinsics should enter in your formulation; the way I currently build P0 is sketched after the snippet below.

import numpy as np
import torch
import matplotlib.pyplot as plt

import neural_renderer as nr

# P0 is the 3x4 projection matrix from the world to the camera (computed elsewhere);
# add a batch dimension and move it to the GPU.
P = torch.from_numpy(np.expand_dims(P0, 0)).cuda()

# Load the mesh and render a depth map with the projection camera mode.
vertices, faces = nr.load_obj(str_objfile)
renderer = nr.Renderer(camera_mode='projection', P=P, image_size=480)
im = renderer.render_depth(vertices[None, :, :], faces[None, :, :])

# Show the single depth image in the batch.
plt.imshow(im.data.cpu().numpy()[0])
plt.show()
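
In case it matters, this is how I currently assemble P0 before the snippet above; a minimal sketch of the standard pinhole composition P0 = K [R | t], where fx, fy, cx, cy, R and t are placeholders for my actual calibration:

# Sketch: build the 3x4 world-to-image projection matrix of a pinhole camera.
# fx, fy, cx, cy (intrinsics), R (3x3 rotation) and t (3-vector translation)
# are placeholders for the real calibration values.
import numpy as np

K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]], dtype=np.float32)  # intrinsic matrix
Rt = np.concatenate([R, t.reshape(3, 1)], axis=1)  # 3x4 extrinsics [R | t]
P0 = (K @ Rt).astype(np.float32)                   # world -> image projection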

Can it be run on Windows? When I run "python setup.py install", it shows this:

running install
running bdist_egg
running egg_info
writing neural_renderer.egg-info\PKG-INFO
writing dependency_links to neural_renderer.egg-info\dependency_links.txt
writing requirements to neural_renderer.egg-info\requires.txt
writing top-level names to neural_renderer.egg-info\top_level.txt
reading manifest file 'neural_renderer.egg-info\SOURCES.txt'
writing manifest file 'neural_renderer.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running build_ext
D:\Anaconda3\lib\site-packages\torch\utils\cpp_extension.py:184: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified.
warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
building 'neural_renderer.cuda.load_textures' extension
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -ID:\Anaconda3\lib\site-packages\torch\lib\include -ID:\Anaconda3\lib\site-packages\torch\lib\include\torch\csrc\api\include -ID:\Anaconda3\lib\site-packages\torch\lib\include\TH -ID:\Anaconda3\lib\site-packages\torch\lib\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -ID:\Anaconda3\include -ID:\Anaconda3\include -ID:\VisualStudio\VC\Tools\MSVC\14.16.27023\ATLMFC\include -ID:\VisualStudio\VC\Tools\MSVC\14.16.27023\include "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-ID:\Windows Kits\10\include\10.0.17763.0\ucrt" "-ID:\Windows Kits\10\include\10.0.17763.0\shared" "-ID:\Windows Kits\10\include\10.0.17763.0\um" "-ID:\Windows Kits\10\include\10.0.17763.0\winrt" "-ID:\Windows Kits\10\include\10.0.17763.0\cppwinrt" /EHsc /Tpneural_renderer/cuda/load_textures_cuda.cpp /Fobuild\temp.win-amd64-3.6\Release\neural_renderer/cuda/load_textures_cuda.obj -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0
load_textures_cuda.cpp
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/Exception.h(27): warning C4275: 非 dll 接口 class“std::exception”用作 dll 接口 class“c10::Error”的基
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\vcruntime_exception.h(44): note: 参见“std::exception”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/Exception.h(27): note: 参见“c10::Error”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/Exception.h(28): warning C4251: “c10::Error::msg_stack_”: class“std::vector<std::string,std::allocator<Ty>>”需要有 dll 接口由 class“c10::Error”的客户端使用
with
[
Ty=std::string
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/Exception.h(28): note: 参见“std::vector<std::string,std::allocator<Ty>>”的声明
with
[
Ty=std::string
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/Exception.h(29): warning C4251: “c10::Error::backtrace
”: class“std::basic_string<char,std::char_traits,std::allocator>”需要有 dll 接口由 class“c10::Error”的客户端 使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xstring(4373): note: 参见“std::basic_string<char,std::char_traits,std::allocator>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/Exception.h(34): warning C4251: “c10::Error::msg
”: class“std::basic_string<char,std::char_traits,std::allocator>”需要有 dll 接口由 class“c10::Error”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xstring(4373): note: 参见“std::basic_string<char,std::char_traits,std::allocator>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/Exception.h(35): warning C4251: “c10::Error::msg_without_backtrace
”: class“std::basic_string<char,std::char_traits,std::allocator>”需要有 dll 接口由 class“c10::Error”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xstring(4373): note: 参见“std::basic_string<char,std::char_traits,std::allocator>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/Allocator.h(126): warning C4251: “c10::InefficientStdFunctionContext::ptr”: class“std::unique_ptr<void,std::function<void (void *)>>”需要有 dll 接口由 struct“c10::InefficientStdFunctionContext”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/Allocator.h(126): note: 参见“std::unique_ptr<void,std::function<void (void *)>>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/TensorTypeIdRegistration.h(32): warning C4251: “c10::TensorTypeIdCreator::last_id_”: struct“std::atomic”需要有 dll 接口由 class“c10::TensorTypeIdCreator”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xxatomic(162): note: 参见“std::atomic”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/TensorTypeIdRegistration.h(45): warning C4251: “c10::TensorTypeIdRegistry::registeredTypeIds_”: class“std::unordered_set<c10::TensorTypeId,std::hashc10::TensorTypeId,std::equal_to<Kty>,std::allocator<Kty>>”需要有 dll 接口由 class“c10::TensorTypeIdRegistry”的客户端使用
with
[
Kty=c10::TensorTypeId
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/TensorTypeIdRegistration.h(45): note: 参见“std::unordered_set<c10::TensorTypeId,std::hashc10::TensorTypeId,std::equal_to<Kty>,std::allocator<Kty>>”的声明
with
[
Kty=c10::TensorTypeId
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/TensorTypeIdRegistration.h(46): warning C4251: “c10::TensorTypeIdRegistry::mutex
”: class“std::mutex”需要有 dll 接口由 class“c10::TensorTypeIdRegistry”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\mutex(82): note: 参见“std::mutex”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(168): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(171): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(174): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(177): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(181): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(184): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(187): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(190): warning C4244: “参数”: 从“int”转换到“float” ,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(196): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(199): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(202): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(205): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(209): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(212): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(215): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Half-inl.h(218): warning C4244: “参数”: 从“int64_t”转换到“float”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/intrusive_ptr.h(58): warning C4251: “c10::intrusive_ptr_target::refcount
”: struct“std::atomic”需要有 dll 接口由 class“c10::intrusive_ptr_target”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xxatomic(162): note: 参见“std::atomic”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/intrusive_ptr.h(59): warning C4251: “c10::intrusive_ptr_target::weakcount
”: struct“std::atomic”需要有 dll 接口由 class“c10::intrusive_ptr_target”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xxatomic(162): note: 参见“std::atomic”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/intrusive_ptr.h(708): warning C4267: “return”: 从“size_t” 转换到“uint32_t”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/intrusive_ptr.h(742): warning C4267: “return”: 从“size_t” 转换到“uint32_t”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/StorageImpl.h(215): warning C4251: “c10::StorageImpl::data_ptr”: class“c10::DataPtr”需要有 dll 接口由 struct“c10::StorageImpl”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/Allocator.h(19): note: 参见“c10::DataPtr”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/Storage.h(184): warning C4251: “c10::Storage::storage_impl”: class“c10::intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type>”需要有 dll 接口 由 struct“c10::Storage”的客户端使用
with
[
TTarget=c10::StorageImpl
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/Storage.h(10): note: 参见“c10::intrusive_ptr<c10::StorageImpl,c10::detail::intrusive_target_default_null_type>”的声明
with
[
TTarget=c10::StorageImpl
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/logging_is_not_google_glog.h(47): warning C4251: “c10::MessageLogger::stream”: class“std::basic_stringstream<char,std::char_traits,std::allocator>”需要有 dll 接口由 class“c10::MessageLogger”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\iosfwd(623): note: 参见“std::basic_stringstream<char,std::char_traits,std::allocator>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(115): warning C4251: “at::PlacementDeleteContext::data_ptr_”: class“c10::DataPtr”需要有 dll 接口由 struct“at::PlacementDeleteContext”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/core/Allocator.h(19): note: 参见“c10::DataPtr”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(1434): warning C4251: “at::TensorImpl::sizes_”: class“c10::SmallVector<int64_t,5>”需要有 dll 接口由 struct“at::TensorImpl”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(1434): note: 参见“c10::SmallVector<int64_t,5>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(1435): warning C4251: “at::TensorImpl::strides_”: class“c10::SmallVector<int64_t,5>”需要有 dll 接口由 struct“at::TensorImpl”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(1434): note: 参见“c10::SmallVector<int64_t,5>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(454): warning C4244: “参数”: 从“int64_t”转换到“c10::DeviceIndex”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(1006): warning C4244: “参数”: 从“float”转换 到“const Ty”,可能丢失数据
with
[
Ty=size_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/TensorImpl.h(1382): warning C4244: “初始化”: 从“int64_t” 转换到“int”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/LegacyTypeDispatch.h(138): warning C4251: “at::LegacyTypeDispatch::type_registry”: class“std::unique_ptrat::Type,at::LegacyTypeDeleter”需要有 dll 接口由 class“at::LegacyTypeDispatch”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/LegacyTypeDispatch.h(51): note: 参见“std::unique_ptrat::Type,at::LegacyTypeDeleter”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/Tensor.h(692): warning C4251: “at::Tensor::impl
”: class“c10::intrusive_ptrat::TensorImpl,at::UndefinedTensorImpl”需要有 dll 接口由 class“at::Tensor”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/Tensor.h(44): note: 参见“c10::intrusive_ptrat::TensorImpl,at::UndefinedTensorImpl”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/Tensor.h(693): warning C4522: “at::Tensor”: 指定了多个赋值 运算符
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/Tensor.h(720): warning C4251: “at::WeakTensor::weak_impl
”: class“c10::weak_intrusive_ptr<TTarget,NullType>”需要有 dll 接口由 struct“at::WeakTensor”的客户端使用
with
[
TTarget=at::TensorImpl,
NullType=at::UndefinedTensorImpl
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/intrusive_ptr.h(172): note: 参见“c10::weak_intrusive_ptr<TTarget,NullType>”的声明
with
[
TTarget=at::TensorImpl,
NullType=at::UndefinedTensorImpl
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/LegacyTHDispatch.h(86): warning C4251: “at::LegacyTHDispatch::dispatcher_registry”: class“std::unique_ptrat::LegacyTHDispatcher,at::LegacyTHDispatcherDeleter”需要有 dll 接口由 class“at::LegacyTHDispatch”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/LegacyTHDispatch.h(61): note: 参见“std::unique_ptrat::LegacyTHDispatcher,at::LegacyTHDispatcherDeleter”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(128): warning C4251: “at::Context::generator_registry”: class“std::unique_ptr<at::Generator,std::default_delete<_Ty>>”需要有 dll 接口由 class“at::Context”的客户端使用
with
[
_Ty=at::Generator
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/Type.h(104): note: 参见“std::unique_ptr<at::Generator,std::default_delete<_Ty>>”的声明
with
[
Ty=at::Generator
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(145): warning C4251: “at::Context::thc_init”: struct“std::once_flag”需要有 dll 接口由 class“at::Context”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xcall_once.h(18): note: 参见“std::once_flag”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(146): warning C4251: “at::Context::thh_init”: struct“std::once_flag”需要有 dll 接口由 class“at::Context”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xcall_once.h(18): note: 参见“std::once_flag”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(147): warning C4251: “at::Context::complex_init
”: struct“std::once_flag”需要有 dll 接口由 class“at::Context”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xcall_once.h(18): note: 参见“std::once_flag”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(151): warning C4251: “at::Context::next_id”: struct“std::atomic”需要有 dll 接口由 class“at::Context”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xxatomic(162): note: 参见“std::atomic”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(152): warning C4251: “at::Context::thc_state”: class“std::unique_ptr<THCState,void (__cdecl *)(THCState *)>”需要有 dll 接口由 class“at::Context”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/detail/CUDAHooksInterface.h(57): note: 参见“std::unique_ptr<THCState,void (__cdecl *)(THCState *)>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(153): warning C4251: “at::Context::thh_state”: class“std::unique_ptr<THHState,void (__cdecl *)(THHState *)>”需要有 dll 接口由 class“at::Context”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/detail/HIPHooksInterface.h(33): note: 参见“std::unique_ptr<THHState,void (__cdecl *)(THHState *)>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(161): warning C4996: 'getenv': This function or variable may be unsafe. Consider using _dupenv_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
D:\Windows Kits\10\include\10.0.17763.0\ucrt\stdlib.h(1191): note: 参见“getenv”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/Context.h(164): warning C4996: 'getenv': This function or variable may be unsafe. Consider using _dupenv_s instead. To disable deprecation, use _CRT_SECURE_NO_WARNINGS. See online help for details.
D:\Windows Kits\10\include\10.0.17763.0\ucrt\stdlib.h(1191): note: 参见“getenv”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Stream.h(102): warning C4244: “参数”: 从“unsigned __int64”转换 到“c10::DeviceIndex”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/Stream.h(102): warning C4244: “参数”: 从“unsigned _int64”转换 到“c10::StreamId”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/TensorGeometry.h(56): warning C4251: “at::TensorGeometry::sizes
”: class“std::vector<int64_t,std::allocator<_Ty>>”需要有 dll 接口由 struct“at::TensorGeometry”的客户端使用
with
[
_Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/typeid.h(584): note: 参见“std::vector<int64_t,std::allocator<_Ty>>”的声明
with
[
Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/TensorGeometry.h(57): warning C4251: “at::TensorGeometry::strides
”: class“std::vector<int64_t,std::allocator<Ty>>”需要有 dll 接口由 struct“at::TensorGeometry”的客户端使用
with
[
Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/typeid.h(584): note: 参见“std::vector<int64_t,std::allocator<Ty>>”的声明
with
[
Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(351): warning C4251: “torch::autograd::Variable::Impl::name”: class“std::basic_string<char,std::char_traits,std::allocator>”需要有 dll 接口由 struct“torch::autograd::Variable::Impl”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xstring(4373): note: 参见“std::basic_string<char,std::char_traits,std::allocator>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(355): warning C4251: “torch::autograd::Variable::Impl::grad_fn
”: class“std::shared_ptrtorch::autograd::Function”需要有 dll 接口由 struct“torch::autograd::Variable::Impl”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/edge.h(17): note: 参见“std::shared_ptrtorch::autograd::Function”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(356): warning C4251: “torch::autograd::Variable::Impl::grad_accumulator
”: class“std::weak_ptrtorch::autograd::Function”需要有 dll 接口由 struct“torch::autograd::Variable::Impl”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(153): note: 参见“std::weak_ptrtorch::autograd::Function”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(358): warning C4251: “torch::autograd::Variable::Impl::version_counter
”: struct“torch::autograd::VariableVersion”需要有 dll 接口由 struct“torch::autograd::Variable::Impl”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable_version.h(19): note: 参见“torch::autograd::VariableVersion”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(359): warning C4251: “torch::autograd::Variable::Impl::hooks
”: class“std::vector<std::shared_ptrtorch::autograd::FunctionPreHook,std::allocator<_Ty>>”需 要有 dll 接口由 struct“torch::autograd::Variable::Impl”的客户端使用
with
[
_Ty=std::shared_ptrtorch::autograd::FunctionPreHook
]
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(245): note: 参见“std::vector<std::shared_ptrtorch::autograd::FunctionPreHook,std::allocator<Ty>>”的声明
with
[
Ty=std::shared_ptrtorch::autograd::FunctionPreHook
]
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/autograd/variable.h(376): warning C4251: “torch::autograd::Variable::Impl::mutex
”: class“std::mutex”需要有 dll 接口由 struct“torch::autograd::Variable::Impl”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\mutex(82): note: 参见“std::mutex”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/thread_pool.h(51): warning C4251: “c10::ThreadPool::tasks
”: class“std::queue<c10::ThreadPool::task_element_t,std::deque<_Ty,std::allocator<_Ty>>>”需要有 dll 接口由 class“c10::ThreadPool”的客户端使用
with
[
_Ty=c10::ThreadPool::task_element_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/thread_pool.h(51): note: 参见“std::queue<c10::ThreadPool::task_element_t,std::deque<_Ty,std::allocator<Ty>>>”的声明
with
[
Ty=c10::ThreadPool::task_element_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/thread_pool.h(52): warning C4251: “c10::ThreadPool::threads
”: class“std::vector<std::thread,std::allocator<Ty>>”需要有 dll 接口由 class“c10::ThreadPool”的客户端使用
with
[
Ty=std::thread
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/thread_pool.h(52): note: 参见“std::vector<std::thread,std::allocator<Ty>>”的声明
with
[
Ty=std::thread
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/thread_pool.h(53): warning C4251: “c10::ThreadPool::mutex
”: class“std::mutex”需要有 dll 接口由 class“c10::ThreadPool”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\mutex(82): note: 参见“std::mutex”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/thread_pool.h(54): warning C4251: “c10::ThreadPool::condition
”: class“std::condition_variable”需要有 dll 接口由 class“c10::ThreadPool”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\mutex(682): note: 参见“std::condition_variable”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/thread_pool.h(55): warning C4251: “c10::ThreadPool::completed
”: class“std::condition_variable”需要有 dll 接口由 class“c10::ThreadPool”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\mutex(682): note: 参见“std::condition_variable”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(24): warning C4251: “c10::ivalue::ConstantString::str
”: class“std::basic_string<char,std::char_traits,std::allocator>”需要有 dll 接口由 struct“c10::ivalue::ConstantString”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xstring(4373): note: 参见“std::basic_string<char,std::char_traits,std::allocator>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(43): warning C4251: “c10::ivalue::List::elements”: class“std::vector<T,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::ivalue::List”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/ArrayRef.h(214): note: 参见“std::vector<T,std::allocator<Ty>>”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(65): note: 参见对正在编译的 类 模板 实例化 "c10::ivalue::List" 的引用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(43): warning C4251: “c10::ivalue::Listc10::IValue::elements
”: class“std::vector<Elem,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::ivalue::Listc10::IValue” 的客户端使用
with
[
Elem=c10::IValue,
_Ty=c10::IValue
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(43): note: 参见“std::vector<Elem,std::allocator<Ty>>”的声明
with
[
Elem=c10::IValue,
Ty=c10::IValue
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(69): note: 参见对正在编译的 类 模板 实例化 "c10::ivalue::Listc10::IValue" 的引用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(570): warning C4251: “c10::ivalue::Future::mutex
”: class“std::mutex”需要有 dll 接口由 struct“c10::ivalue::Future”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\mutex(82): note: 参见“std::mutex”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(572): warning C4251: “c10::ivalue::Future::completed
”: struct“std::atomic”需要有 dll 接口由 struct“c10::ivalue::Future”的客户端使用
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\xxatomic(162): note: 参见“std::atomic”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(573): warning C4251: “c10::ivalue::Future::callbacks”: class“std::vector<std::function<void (void)>,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::ivalue::Future”的客户端使用
with
[
_Ty=std::function<void (void)>
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(573): note: 参见“std::vector<std::function<void (void)>,std::allocator<_Ty>>”的声明
with
[
Ty=std::function<void (void)>
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(43): warning C4251: “c10::ivalue::List<int64_t>::elements
”: class“std::vector<int64_t,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::ivalue::List<int64_t>”的客户端使用
with
[
_Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/typeid.h(584): note: 参见“std::vector<int64_t,std::allocator<_Ty>>”的声明
with
[
Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(622): note: 参见对正在编译的 类 模板 实例化 "c10::ivalue::List<int64_t>" 的引用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(43): warning C4251: “c10::ivalue::List::elements
”: class“std::vector<T,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::ivalue::List”的客户端使用
with
[
T=double,
_Ty=double
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/ArrayRef.h(214): note: 参见“std::vector<T,std::allocator<_Ty>>”的声明
with
[
T=double,
Ty=double
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(636): note: 参见对正在编译的 类 模板 实例化 "c10::ivalue::List" 的引用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(43): warning C4251: “c10::ivalue::List::elements
”: class“std::vector<bool,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::ivalue::List”的客户端使用
with
[
_Ty=bool
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(296): note: 参见“std::vector<bool,std::allocator<_Ty>>”的声明
with
[
Ty=bool
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(643): note: 参见对正在编译的 类 模板 实例化 "c10::ivalue::List" 的引用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(43): warning C4251: “c10::ivalue::Listat::Tensor::elements
”: class“std::vector<at::Tensor,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::ivalue::Listat::Tensor”的客户端使用
with
[
_Ty=at::Tensor
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/Type.h(224): note: 参见“std::vector<at::Tensor,std::allocator<_Ty>>”的声明
with
[
Ty=at::Tensor
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/ivalue.h(650): note: 参见对正在编译的 类 模板 实例化 "c10::ivalue::Listat::Tensor" 的引用
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/jit/scope.h(24): warning C4251: “torch::jit::Scope::parent
”: class“c10::intrusive_ptr<torch::jit::Scope,c10::detail::intrusive_target_default_null_type>”需要有 dll 接口由 struct“torch::jit::Scope”的客户端使用
with
[
TTarget=torch::jit::Scope
]
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/jit/scope.h(20): note: 参见“c10::intrusive_ptr<torch::jit::Scope,c10::detail::intrusive_target_default_null_type>”的声明
with
[
TTarget=torch::jit::Scope
]
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/jit/constants.h(16): warning C4275: 非 dll 接口 class“std::runtime_error”用作 dll 接口 struct“torch::jit::constant_not_supported_error”的基
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\stdexcept(157): note: 参见“std::runtime_error”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\torch/csrc/jit/constants.h(16): note: 参见“torch::jit::constant_not_supported_error”的声明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(52): warning C4251: “std::enable_shared_from_thisc10::Type::_Wptr”: class“std::weak_ptr<_Ty>”需要有 dll 接口由 class“std::enable_shared_from_thisc10::Type”的客 户端使用
with
[
_Ty=c10::Type
]
D:\VisualStudio\VC\Tools\MSVC\14.16.27023\include\memory(2029): note: 参见“std::weak_ptr<_Ty>”的声明
with
[
Ty=c10::Type
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(213): warning C4251: “c10::SingleElementTypec10::TypeKind::OptionalType,c10::OptionalType::elem”: class“std::shared_ptrc10::Type”需要有 dll 接口由 struct“c10::SingleElementTypec10::TypeKind::OptionalType,c10::OptionalType”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(50): note: 参见“std::shared_ptrc10::Type”的声 明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(359): warning C4244: “参数”: 从“int64_t”转换到“int”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(476): warning C4251: “c10::CompleteTensorType::sizes
”: class“std::vector<int64_t,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::CompleteTensorType”的客户端使用
with
[
_Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/typeid.h(584): note: 参见“std::vector<int64_t,std::allocator<_Ty>>”的声明
with
[
Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(477): warning C4251: “c10::CompleteTensorType::strides
”: class“std::vector<int64_t,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::CompleteTensorType”的客户端使用
with
[
_Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\c10/util/typeid.h(584): note: 参见“std::vector<int64_t,std::allocator<_Ty>>”的声明
with
[
Ty=int64_t
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(463): warning C4267: “参数”: 从“size_t”转换到 “int”,可能丢失数据
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(482): warning C4251: “c10::SingleElementTypec10::TypeKind::ListType,c10::ListType::elem”: class“std::shared_ptrc10::Type”需要有 dll 接口由 struct“c10::SingleElementTypec10::TypeKind::ListType,c10::ListType”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(50): note: 参见“std::shared_ptrc10::Type”的声 明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(516): warning C4251: “c10::SingleElementTypec10::TypeKind::FutureType,c10::FutureType::elem”: class“std::shared_ptrc10::Type”需要有 dll 接口由 struct“c10::SingleElementTypec10::TypeKind::FutureType,c10::FutureType”的客户端使用
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(50): note: 参见“std::shared_ptrc10::Type”的声 明
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(625): warning C4251: “c10::TupleType::elements
”: class“std::vector<c10::TypePtr,std::allocator<_Ty>>”需要有 dll 接口由 struct“c10::TupleType”的客户端使用
with
[
_Ty=c10::TypePtr
]
D:\Anaconda3\lib\site-packages\torch\lib\include\ATen/core/jit_type.h(148): note: 参见“std::vector<c10::TypePtr,std::allocator<_Ty>>”的声明
with
[
_Ty=c10::TypePtr
]
(The build emits several hundred MSVC warnings of the following forms, all coming from the PyTorch headers under D:\Anaconda3\lib\site-packages\torch\lib\include rather than from neural_renderer itself:)

warning C4251: "std::enable_shared_from_this<torch::jit::tracer::TracingState>::_Wptr": class "std::weak_ptr<_Ty>" needs to have a dll-interface to be used by clients of class "std::enable_shared_from_this<torch::jit::tracer::TracingState>"
warning C4275: non-dll-interface class "torch::optim::Optimizer" used as base for dll-interface class "torch::optim::Adam"
warning C4267: "initializing": conversion from "size_t" to "int32_t", possible loss of data
warning C4244: "+=": conversion from "int64_t" to "int32_t", possible loss of data
warning C4305: "initializing": truncation from "double" to "float"

The build then fails with:

D:\Anaconda3\lib\site-packages\torch\lib\include\torch\csrc\api\include\torch/torch.h(7): fatal error C1021: invalid preprocessor command "warning"
error: command 'D:\VisualStudio\VC\Tools\MSVC\14.16.27023\bin\HostX86\x64\cl.exe' failed with exit status 2

Installation Error on CentOS 7

Hi,
Thanks for your nice work. I encountered a problem when executing python setup.py install:

running install
running bdist_egg
running egg_info
writing neural_renderer.egg-info/PKG-INFO
writing dependency_links to neural_renderer.egg-info/dependency_links.txt
writing requirements to neural_renderer.egg-info/requires.txt
writing top-level names to neural_renderer.egg-info/top_level.txt
reading manifest file 'neural_renderer.egg-info/SOURCES.txt'
writing manifest file 'neural_renderer.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/utils/cpp_extension.py:118: UserWarning:

                           !! WARNING !!

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Your compiler (g++ 4.8) may be ABI-incompatible with PyTorch!
Please use a compiler that is ABI-compatible with GCC 4.9 and above.
See https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html.

See https://gist.github.com/goldsborough/d466f43e8ffc948ff92de7486c5216d6
for instructions on how to install GCC 4.9 or higher.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                          !! WARNING !!

warnings.warn(ABI_INCOMPATIBILITY_WARNING.format(compiler))
building 'neural_renderer.cuda.load_textures' extension
gcc -pthread -B /home/fzwu/anaconda2/envs/torch4/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/lib/include -I/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/lib/include/TH -I/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/fzwu/anaconda2/envs/torch4/include/python3.6m -c neural_renderer/cuda/load_textures_cuda.cpp -o build/temp.linux-x86_64-3.6/neural_renderer/cuda/load_textures_cuda.o -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default]
/usr/local/cuda/bin/nvcc -I/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/lib/include -I/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/lib/include/TH -I/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/lib/include/THC -I/usr/local/cuda/include -I/home/fzwu/anaconda2/envs/torch4/include/python3.6m -c neural_renderer/cuda/load_textures_cuda_kernel.cu -o build/temp.linux-x86_64-3.6/neural_renderer/cuda/load_textures_cuda_kernel.o -DTORCH_EXTENSION_NAME=load_textures -D_GLIBCXX_USE_CXX11_ABI=0 --compiler-options '-fPIC' -std=c++11
/home/fzwu/anaconda2/envs/torch4/lib/python3.6/site-packages/torch/lib/include/ATen/Half-inl.h(17): error: identifier "__half_as_short" is undefined

1 error detected in the compilation of "/tmp/tmpxft_00007f3e_00000000-7_load_textures_cuda_kernel.cpp1.ii".
error: command '/usr/local/cuda/bin/nvcc' failed with exit status 2

I am running Python 3.6 with PyTorch 0.4.1.

setup.py installation issues

neural_renderer/cuda/load_textures_cuda.cpp:37:17: note: in expansion of macro ‘TORCH_EXTENSION_NAME’
 PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
                 ^~~~~~~~~~~~~~~~~~~~
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1

I get the above error when I run sudo python3 setup.py install. My environment:

CUDA 9.0
Ubuntu 18.04
torch 0.4.0
gcc 5.5

Renderer returns empty image with batch size = 3

Hi there,
I happened across quite a scary bug (I think) in this library today. As far as I can tell, the renderer returns empty images when given vertices and faces with batch dimension = 3.

To test, I cloned and built the library today (20/10/2018) and produced a demo which I hope will enable you to reproduce the problem:

import matplotlib.pyplot as plt
import numpy as np

import torch
import torch.optim as optim

import trimesh
import neural_renderer as nr

IMAGE_SIZE = 256
TEXTURE_SIZE = 4

def run_renderer_with_batch_size(verts, faces, textures, renderer, batch_size):
    # We just duplicate (using pytorch expand) the cube (i.e. duplicate verts, faces and textures) 'batch_size' times
    verts = verts.expand(batch_size, -1, 3)
    faces = faces.expand(batch_size, -1, 3)
    textures = textures.expand(batch_size, faces.shape[1], TEXTURE_SIZE, TEXTURE_SIZE, TEXTURE_SIZE, 3)

    # Run the renderer and the silhouette renderer
    rendered_images = renderer.render(verts, faces, textures)
    rendered_silhouettes = renderer.render_silhouettes(verts, faces)

    return rendered_images, rendered_silhouettes

def main():
    # Initialize the renderer
    renderer = nr.Renderer(camera_mode='look_at')
    renderer.eye = nr.get_points_from_angles(5.0, 45.0, 45.0)
    renderer.image_size = IMAGE_SIZE

    # Create a 2x2x2 cube
    box_verts = np.array([
        [-1.0, -1.0, -1.0],
        [-1.0, -1.0,  1.0],
        [-1.0,  1.0, -1.0],
        [-1.0,  1.0,  1.0],
        [ 1.0, -1.0, -1.0],
        [ 1.0, -1.0,  1.0],
        [ 1.0,  1.0, -1.0],
        [ 1.0,  1.0,  1.0]])

    box_faces = np.array([
        [1, 3, 0],
        [4, 1, 0],
        [0, 3, 2],
        [2, 4, 0],
        [1, 7, 3],
        [5, 1, 4],
        [5, 7, 1],
        [3, 7, 2],
        [6, 4, 2],
        [2, 7, 6],
        [6, 5, 4],
        [7, 5, 6]])

    # Add a batch dimension, initially = 1
    verts = torch.from_numpy(box_verts[None, :]).float().cuda()
    faces = torch.from_numpy(box_faces[None, :]).int().cuda()

    # Create some pretty textures, again with a batch dimension = 1
    textures = torch.ones(1, box_faces.shape[0], TEXTURE_SIZE, TEXTURE_SIZE, TEXTURE_SIZE, 3).float().cuda()
    textures = torch.tanh(textures)

    # Test the renderer with various batch sizes
    for batch_size in range(1, 100):
        rendered_images, rendered_silhouettes = run_renderer_with_batch_size(verts, faces, textures, renderer, batch_size)

        # Print to the terminal what the largest pixel value is over the whole batch
        # nr.render() should produce tanh(1.0) = 0.7615...
        # nr.render_silhouettes() should produce 1.0

        print ("Batch size: {0}, nr.render(): {1}, nr.render_silhouettes(.): {2}".format(
            batch_size,
            rendered_images.max(),
            rendered_silhouettes.max()))

        # Visualize the first entry in the batch
        images_vis = rendered_images.permute(0, 2, 3, 1).cpu().numpy()
        images_vis = (images_vis * 255.0).astype(np.uint8)

        silhouette_vis = rendered_silhouettes.cpu().numpy()

        plt.figure()
        plt.suptitle("Batch size: {0}, Max image value across batch: {1}".format(batch_size, rendered_images.max()))
        plt.subplot(121)
        plt.imshow(images_vis[0])
        plt.subplot(122)
        plt.imshow(silhouette_vis[0])
        plt.show()

           
if __name__ == '__main__':
    main()

This produces output

Batch size: 1, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
Batch size: 2, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
Batch size: 3, nr.render(): 0.0, nr.render_silhouettes(.): 0.0
Batch size: 4, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
Batch size: 5, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
Batch size: 6, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
Batch size: 7, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
Batch size: 8, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
Batch size: 9, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0
...
Batch size: 100, nr.render(): 0.761594295502, nr.render_silhouettes(.): 1.0

Batch size = 3 seems to have a problem.

(Attached screenshots show the rendered output for batch_size 2, 3 and 4.)
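
Not a confirmed fix, just a diagnostic sketch under the assumption that the non-contiguous views produced by torch.expand() might interact badly with the CUDA rasterizer: materialize the expanded batch with .contiguous() and check whether the batch-size-3 output changes. It reuses the variables from the demo above.

def render_contiguous_batch(verts, faces, textures, renderer, batch_size=3):
    # Same expansion as in the demo, but force contiguous copies before rendering.
    verts = verts.expand(batch_size, -1, 3).contiguous()
    faces = faces.expand(batch_size, -1, 3).contiguous()
    textures = textures.expand(batch_size, faces.shape[1], *textures.shape[2:]).contiguous()
    images = renderer.render(verts, faces, textures)
    silhouettes = renderer.render_silhouettes(verts, faces)
    return images, silhouettes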

It seems the renderer is left-handed.

First, I found that this code in look.py causes a bug, because there is no way to pass the up value in:

if up.ndimension() == 1:

So I copied the corresponding code from look_at.py and made it work.
However, when I use the "look" camera mode and move renderer.eye along the x, y, and z axes, the object moves as if the coordinate system were left-handed.
We know OpenGL uses a right-handed coordinate system, so is the neural renderer left-handed?
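
For reference, a minimal sketch of the workaround described above, assuming look.py performs the same up handling as look_at.py (the helper name here is hypothetical, not part of the library):

import torch

def ensure_up(up, device="cuda"):
    # Supply the conventional +Y up vector when `up` is None, so the
    # `up.ndimension() == 1` check in look.py does not fail.
    if up is None:
        up = torch.tensor([0.0, 1.0, 0.0], device=device)
    if up.ndimension() == 1:
        up = up[None, :]
    return up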

Ghost Texture

Hi, I tested this wonderful renderer with a coffee mug model and it looks great! I built the model myself in Blender.

I used this script with the PyTorch version to load the coffee mug and got this:

Nice rendering work!

However, when I compare it with the result below from an OpenGL renderer, I notice some "ghost" textures on the object in the neural renderer result above (the flipped texture is no big issue):

So I explored a little and found the same phenomenon on other white-background objects (with both the Chainer and PyTorch versions) when testing other models.

Any idea how to fix this bug?
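
Not a confirmed fix, but one hedged suggestion: the ghosting looks like texture bleeding across UV seams caused by bilinear sampling with REPEAT wrapping. If your copy of load_obj exposes the texture_wrapping and use_bilinear arguments (an assumption here; they only exist in later revisions, and the return signature with textures is likewise assumed for load_texture=True), a sketch of what to try:

import neural_renderer as nr

# "mug.obj" is a hypothetical path; adapt the keyword arguments to your revision.
vertices, faces, textures = nr.load_obj(
    "mug.obj",
    load_texture=True,
    texture_size=8,            # denser per-face texture sampling
    texture_wrapping="CLAMP",  # avoid wrapping across UV borders
    use_bilinear=False,        # nearest-neighbour sampling, no blending
)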

Strange gradients.

Hi. My colleague (@czw0078) and I have been using your neural renderer port, and we've noticed some pretty strange behavior during optimization (see also this issue). A minimal working example demonstrating the odd behavior can be found below. The code requires adding:

if up is None:
    up = torch.cuda.FloatTensor([0, 1, 0])

to look.py to run.

import neural_renderer as nr
import numpy as np
import scipy.misc
import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self, input_obj, initial_z):
        super(Model, self).__init__()

        # Load object.
        (vertices, faces) = nr.load_obj(input_obj)

        self.vertices = vertices[None, :, :]
        self.faces = faces[None, :, :]
        texture_size = 2
        self.textures = torch.ones(1, self.faces.shape[1], texture_size, texture_size,
                                   texture_size, 3, dtype=torch.float32).cuda()

        self.x = self.vertices[0][:, 0]
        self.y = self.vertices[0][:, 1]
        self.z = self.vertices[0][:, 2]

        # Camera parameters.
        camera_distance = 2.732
        elevation = 0
        azimuth = 0
        camera_position = np.array(nr.get_points_from_angles(camera_distance, elevation, azimuth), dtype=np.float32)
        self.camera_position = torch.from_numpy(camera_position).cuda()

        # Adjust renderer.
        renderer = nr.Renderer(camera_mode="look")
        renderer.eye = self.camera_position
        renderer.viewing_angle = 8.123
        self.renderer = renderer

        # Optimization direction.
        self.z_delta = nn.Parameter(torch.Tensor([initial_z]))

    def forward(self):
        new_z = self.z + self.z_delta

        vertices = torch.stack((self.x, self.y, new_z), dim=1)[None, :, :]
        image = self.renderer(vertices, self.faces, self.textures)

        return (new_z, image)


def main():
    input_obj = "./data/teapot.obj"
    model = Model(input_obj, 4)
    model.cuda()

    gen_teapot = False

    for sign in [1, -1]:
        model.zero_grad()
        (new_z, images) = model()

        if gen_teapot and sign == 1:
            image = images.detach().cpu().numpy()[0].transpose(1, 2, 0)
            scipy.misc.toimage(image, cmin=0, cmax=1).save("my_teapot.png")

        print("Current z_delta: {0}".format(model.z_delta.item()))
        assert (model.z_delta.grad is None) or (model.z_delta.grad.item() == 0.0)

        loss = sign * images.norm()
        loss.backward()
        print("Loss: {0}".format(loss.item()))

        print("z_delta derivative after .backward(): {0}".format(model.z_delta.grad[0]))


if __name__ == "__main__":
    main()

The example code produces the following output:

Current z_delta: 4.0
Loss: 146.864685059
z_delta derivative after .backward(): -232.424438477
Current z_delta: 4.0
Loss: -146.864685059
z_delta derivative after .backward(): -46.2437057495

when using the image below as a starting point.
(attached: my_teapot.png)
As you can see, the gradients have the same sign despite using opposite loss functions. Any insight you could provide on this behavior would be greatly appreciated. Thank you.

The following Dockerfile was used to generate the environment used for the code above.

FROM nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
         wget \
         build-essential \
         cmake \
         nano \
         less \
         git \
         curl \
         libjpeg-dev \
         libpng-dev \
         imagemagick \
         python \
         python-dev \
         python-setuptools \
         python-pip \
         python-wheel
RUN pip install torch torchvision scikit-image tqdm imageio ipython==5.8.0

Rendered Meshes with Large Faces are Blurry.

The original paper states:

We use Figure 2 (b) for the forward pass because if we use Figure 2 (d), the color of a face leaks outside of the face. Therefore, our rasterizer produces the same images as the standard rasterizer, but it has non-zero gradients.

This is not correct, as meshes with large faces and detailed textures result in blurry images. The problem, which was already described in #6, could probably be solved by rewriting the current texture sampling method. What is the reason behind the current implementation?

Failing to run examples on PyTorch 1.0

I run the example (example2.py) on PyTorch 1.0 and get the following error:

Optimizing:   0%|          | 0/300 [00:00<?, ?it/s]Traceback (most recent call last):
  File "example2.py", line 99, in <module>
    main()
  File "example2.py", line 80, in main
    loss.backward()
  File "/data7//app/miniconda/envs/shape/lib/python2.7/site-packages/torch/tensor.py", line 107, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/data7//app/miniconda/envs/shape/lib/python2.7/site-packages/torch/autograd/__init__.py", line 93, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: function RasterizeFunctionBackward returned an incorrect number of gradients (expected 11, got 10)

It seems that only the forward pass succeeds.
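
For context, this error usually means the custom autograd Function's backward() returns fewer values than forward() has inputs. A minimal PyTorch sketch of the general rule (not the library's actual rasterizer code): return one gradient per forward input, using None for non-differentiable arguments.

import torch

class Scale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, factor):
        ctx.factor = factor
        return x * factor

    @staticmethod
    def backward(ctx, grad_output):
        # forward() took two inputs, so backward() must return two values:
        # the gradient w.r.t. x and None for the plain-Python `factor`.
        return grad_output * ctx.factor, None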

Need Transparent Texture Mapping

In 3D modeling, we sometimes use a PNG image as the texture to get transparency effects. A PNG image has an alpha channel, which is the easiest way to handle glass materials and animal hair. For example, here is the result of a horse model rendered in OpenGL.

Look at the horse's tail.

(I will also post the PyTorch rendering result here after #9 is merged; the PyTorch version cannot handle PNG textures for now.)

PNG textures are a cheap but commonly used trick to add realism, so this would be very useful, but it may be a challenging enhancement.

It could be implemented as a "mask" value, but it is a complicated issue involving concepts such as the Z-buffer.
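
Not per-texel PNG alpha, but a cheap partial workaround sketch: composite the rendered RGB over a background image using the rendered silhouette as an alpha mask (the tensor shapes below are assumptions following the examples elsewhere in this thread).

def composite_over_background(rgb, silhouette, background):
    # rgb: (B, 3, H, W), silhouette: (B, H, W) in [0, 1], background: (B, 3, H, W)
    alpha = silhouette.unsqueeze(1)                 # (B, 1, H, W)
    return alpha * rgb + (1.0 - alpha) * background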

Render multiple objects

I have two questions regarding the renderer part.

  1. Render in batches, i.e., render one object per image, but in batch mode. I saw that the first dimension is the batch dimension; does that mean you can load and render multiple objects at the same time, where each batch element is one object? If yes, how? The problem is that every batch element needs to be the same size; in other words, this requires each object to have the same number of vertices.
  2. Render multiple objects in one image. Do you currently support rendering multiple objects into the same image? (A rough workaround is sketched below, after this list.)

Thanks!
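
Sketch of one possible answer to question 2 (my own workaround, not an official API): merge several meshes into a single "scene" mesh by concatenating the vertices and offsetting each subsequent object's face indices.

import torch

def merge_meshes(vertex_list, face_list):
    # Concatenate vertices and shift each mesh's face indices so they
    # point into the merged vertex array.
    verts, faces, offset = [], [], 0
    for v, f in zip(vertex_list, face_list):
        verts.append(v)            # (Nv_i, 3) float tensor
        faces.append(f + offset)   # (Nf_i, 3) int tensor
        offset += v.shape[0]
    return torch.cat(verts, dim=0), torch.cat(faces, dim=0)

# Usage sketch: v, f = merge_meshes([v1, v2], [f1, f2]); add the batch
# dimension with v[None], f[None] and render the merged mesh as usual.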

Error running examples

Thanks for putting this implementation together.

I can successfully run python setup.py install, but I'm running into the following error when I try to run the examples. If I run python examples/example1.py, I get this output:

Traceback (most recent call last):
  File "examples/example1.py", line 12, in <module>
    import neural_renderer as nr
  File "/home/abc/miniconda3/envs/neur3/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/__init__.py", line 3, in <module>
    from .load_obj import load_obj
  File "/home/abc/miniconda3/envs/neur3/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/load_obj.py", line 8, in <module>
    import neural_renderer.cuda.load_textures as load_textures_cuda
ImportError: /home/abc/miniconda3/envs/neur3/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/cuda/load_textures.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN2at5ErrorC1ENS_14SourceLocationESs

I'm using a CentOS 7 Linux system and am running in a conda virtual environment with Python 3.6, PyTorch 0.4.0, cudatoolkit 8.0, and cuDNN 7.1.3. I have tried other versions of PyTorch, but haven't had luck with them either. I can successfully import torch and run vanilla PyTorch operations in the environment, so the PyTorch installation seems to be okay.
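
For what it's worth, this undefined-symbol pattern is usually a C++ ABI mismatch between the compiler that built the extension and the one that built PyTorch. A quick check (torch.compiled_with_cxx11_abi() exists in recent PyTorch releases; treat its availability on 0.4.0 as an assumption):

import torch

print(torch.__version__)
print(torch.version.cuda)
# The extension must be rebuilt with a compiler matching this ABI setting.
print(torch.compiled_with_cxx11_abi())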

Expected format for textures?

I'm having a hard time understanding the way that textures are represented inside the renderer.

It seems internally (e.g. from the texture optimization example) that textures are stored in Tensors of shape

(batch_size, num_faces, texture_size, texture_size, texture_size, 3)

which I understand as representing, for each face, a 3D grid of texture_size^3 RGB values. How does this relate to the per-vertex UV texture coordinates that are stored in .obj files?

The CUDA kernel for loading textures appears to be bilinearly sampling the RGB values from the texture images, but I am having a hard time understanding exactly what is happening inside this kernel and exactly what it returns. Can you give a brief explanation?
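
To make the shape quoted above concrete, here is a small sketch that builds such a texture tensor by hand and paints one face a solid colour (my own illustration, not the library's loader):

import torch

def solid_color_textures(num_faces, texture_size=2, color=(1.0, 1.0, 1.0)):
    # Shape (batch_size, num_faces, ts, ts, ts, 3): every face carries its
    # own texture_size^3 grid of RGB samples.
    textures = torch.ones(1, num_faces, texture_size, texture_size, texture_size, 3)
    return textures * torch.tensor(color)

textures = solid_color_textures(num_faces=12)
textures[0, 0] = torch.tensor([1.0, 0.0, 0.0])  # paint face 0 red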

Running time

Function neural_renderer.rasterize takes approximately 1.1 s to rasterize 10^5 polygons on a Titan GPU.

Is this the expected time, or could it be improved?
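
For reference, a small sketch of how such a number could be measured fairly (CUDA launches are asynchronous, so synchronize around the timed region; `fn` stands for whatever rasterize call is being benchmarked):

import time
import torch

def time_gpu(fn, *args, warmup=3, iters=10):
    for _ in range(warmup):
        fn(*args)                 # warm up kernels and caches
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        fn(*args)
    torch.cuda.synchronize()      # wait for all queued kernels to finish
    return (time.time() - start) / iters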

Compilation Issues

I'm getting the following errors when compiling this code on macOS High Sierra 10.13.

neural_renderer/cuda/load_textures_cuda.cpp:24:5: error: too many arguments provided to function-like macro invocation
    CHECK_INPUT(image);
    ^
neural_renderer/cuda/load_textures_cuda.cpp:15:24: note: expanded from macro 'CHECK_INPUT'
#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
                       ^
neural_renderer/cuda/load_textures_cuda.cpp:13:53: note: expanded from macro 'CHECK_CUDA'
#define CHECK_CUDA(x) AT_ASSERT(x.type().is_cuda(), #x " must be a CUDA tensor")
                                                    ^
<scratch space>:273:1: note: expanded from here
"image"
^
/Users/lena/anaconda/envs/neuralrender/lib/python2.7/site-packages/torch/lib/include/ATen/Error.h:118:9: note: macro 'AT_ASSERT' defined here
#define AT_ASSERT(cond) \
        ^
neural_renderer/cuda/load_textures_cuda.cpp:24:5: error: use of undeclared identifier 'AT_ASSERT'
    CHECK_INPUT(image);
    ^

I'm using the following command to compile (I've already installed PyTorch):

sudo MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ CFLAGS="-std=c++11 -stdlib=libc++ -mmacosx-version-min=10.9" CUDA_HOME=/usr/local/cuda/ python setup.py install

Segmentation fault

Hi, my installed packages are listed here.
pip list
Package Version


absl-py 0.7.1
astor 0.8.0
backcall 0.1.0
bleach 1.5.0
certifi 2019.3.9
chardet 3.0.4
chumpy 0.68
cycler 0.10.0
Cython 0.28.5
decorator 4.4.0
deepdish 0.3.6
Django 2.2.1
gast 0.2.2
grpcio 1.21.1
h5py 2.8.0
html5lib 0.9999999
idna 2.8
image 1.5.25
imageio 2.5.0
ipdb 0.12
ipython 7.5.0
ipython-genutils 0.2.0
jedi 0.13.3
kiwisolver 1.1.0
Markdown 3.1.1
matplotlib 2.2.2
memory-profiler 0.54.0
mkl-fft 1.0.12
mkl-random 1.0.2
mock 2.0.0
munkres 1.0.12
networkx 2.3
neural-renderer 1.1.3
nibabel 2.4.1
numexpr 2.6.9
numpy 1.14.5
opencv-contrib-python 3.4.2.16
opencv-python 3.4.2.16
pandas 0.24.2
parso 0.4.0
pbr 4.2.0
pexpect 4.7.0
pickleshare 0.7.5
Pillow 5.3.0
pip 19.1.1
prompt-toolkit 2.0.9
protobuf 3.7.1
psutil 5.4.7
ptyprocess 0.6.0
Pygments 2.4.2
pyparsing 2.4.0
python-dateutil 2.8.0
pytz 2019.1
PyWavelets 1.0.3
pyzmq 18.0.1
requests 2.22.0
scikit-image 0.15.0
scipy 1.1.0
setuptools 41.0.1
six 1.12.0
sqlparse 0.3.0
tables 3.5.1
tensorboard 1.8.0
tensorflow 1.8.0
termcolor 1.1.0
torch 0.4.0
torchfile 0.1.0
torchvision 0.2.2
tornado 6.0.2
tqdm 4.19.9
traitlets 4.3.2
urllib3 1.25.3
visdom 0.1.8.8
wcwidth 0.1.7
websocket-client 0.56.0
Werkzeug 0.15.4
wheel 0.33.4

When I run examples/example1.py, the program crashes with a segmentation fault. How can I solve this? Thank you!
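
One quick first debugging step (a sketch, not a fix): enable faulthandler so the Python-side stack trace is printed when the crash happens.

import faulthandler
faulthandler.enable()   # prints the Python stack at the moment of the crash

# then run the example's code as usual, or launch it as:
#   python -X faulthandler examples/example1.py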

Cannot step inside the CUDA kernel. Asking for help on how to debug.

Thanks for your library, it is cool. I saw post #36 saying the library can work with PyTorch 1.1:
"Latest release (1.1.3) works ok for me with torch 1.1.0 and CUDA 10.0 on ubuntu 16.04."

I tried to test it on my PC but failed at a simple test. Any suggestion is really appreciated.

Following this blog post, I first tried to debug a simple CUDA C++ extension:
https://chrischoy.github.io/research/pytorch-extension-with-makefile/
The source code is here:
https://github.com/chrischoy/MakePytorchPlusPlus/

But I cannot step inside the CUDA kernel. The gcc version is gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, with PyTorch 1.1 and CUDA 10.0 in an Anaconda environment.

The error info is:
Reading symbols from /usr/lib/x86_64-linux-gnu/libcuda.so.1...(no debugging symbols found)...done.
Reading symbols from /usr/lib/x86_64-linux-gnu/libnvidia-fatbinaryloader.so.410.104...(no debugging symbols found)...done.
0x00007ffcb77d9b62 in clock_gettime ()
$1 = 86834896
cuda-gdb/7.12/gdb/block.c:456: internal-error: set_block_compunit_symtab: Assertion `gb->compunit_symtab == NULL' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n) y

I wonder how you debug the neural renderer code. I wish to add a per-pixel normal shading feature, which is why I need to debug the code. Thanks a lot.

Any plan to support rendering on non-square canvas?

Currently, the system only uses a single image_size as both width and height.
To modify the system to support width and height separately, functions in rasterize_cuda_kernel.cu could be slightly changed.
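
Until the kernels are changed, one host-side workaround sketch (an approximation, since the field of view stays that of the square render): render on a square canvas sized to the larger dimension and centre-crop to the target width and height.

def center_crop(square_images, width, height):
    # square_images: (B, C, S, S) with S >= max(width, height)
    s = square_images.shape[-1]
    top = (s - height) // 2
    left = (s - width) // 2
    return square_images[:, :, top:top + height, left:left + width]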

Bug: render function returns a tuple not a tensor

When I run example2.py, the following error is reported:

Traceback (most recent call last):
  File "./examples/example2.py", line 100, in <module>
    main()
  File "./examples/example2.py", line 94, in main
    image = images.detach().cpu().numpy()[0].transpose((1, 2, 0))
AttributeError: 'tuple' object has no attribute 'detach'

So I scanned the render code and found that it returns a tuple. I think example2 should be changed to:
image = images[0].detach().cpu().numpy()[0].transpose((1, 2, 0))
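
Alternatively, a defensive sketch that works with either return convention (it only assumes, as this report observes, that the first tuple element is the RGB image):

def first_image(render_output):
    images = render_output[0] if isinstance(render_output, tuple) else render_output
    return images.detach().cpu().numpy()[0].transpose((1, 2, 0))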
