facebookresearch / pytorch3d
PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
Home Page: https://pytorch3d.org/
License: Other
Running setup.py develop for pytorch3d
ERROR: Command errored out with exit status 1:
command: /Users/gu/opt/anaconda3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/gu/codes/pytorch3d/setup.py'"'"'; __file__='"'"'/Users/gu/codes/pytorch3d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps
cwd: /Users/gu/codes/pytorch3d/
Complete output (43 lines):
running develop
running egg_info
writing pytorch3d.egg-info/PKG-INFO
writing dependency_links to pytorch3d.egg-info/dependency_links.txt
writing requirements to pytorch3d.egg-info/requires.txt
writing top-level names to pytorch3d.egg-info/top_level.txt
reading manifest file 'pytorch3d.egg-info/SOURCES.txt'
writing manifest file 'pytorch3d.egg-info/SOURCES.txt'
running build_ext
building 'pytorch3d._C' extension
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I/Users/gu/opt/anaconda3/include -arch x86_64 -I/Users/gu/opt/anaconda3/include -arch x86_64 -I/Users/gu/codes/pytorch3d/pytorch3d/csrc -I/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include -I/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -I/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/TH -I/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/THC -I/Users/gu/opt/anaconda3/include/python3.7m -c /Users/gu/codes/pytorch3d/pytorch3d/csrc/ext.cpp -o build/temp.macosx-10.9-x86_64-3.7/Users/gu/codes/pytorch3d/pytorch3d/csrc/ext.o -std=c++17 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
In file included from /Users/gu/codes/pytorch3d/pytorch3d/csrc/ext.cpp:3:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:6:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/utils/pybind.h:6:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/pybind11/pybind11.h:44:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/pybind11/attr.h:13:
/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/pybind11/cast.h:579:34: error: aligned allocation function of type 'void *(std::size_t, std::align_val_t)' is only available on macOS 10.14 or newer
vptr = ::operator new(type->type_size,
^
/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/pybind11/cast.h:579:34: note: if you supply your own aligned allocation functions, use -faligned-allocation to silence this diagnostic
In file included from /Users/gu/codes/pytorch3d/pytorch3d/csrc/ext.cpp:3:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:6:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/utils/pybind.h:6:
/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/pybind11/pybind11.h:1000:11: error: 'operator delete' is unavailable: introduced in macOS 10.12
::operator delete(p, s, std::align_val_t(a));
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/new:194:74: note: 'operator delete' has been explicitly marked unavailable here
_LIBCPP_OVERRIDABLE_FUNC_VIS _LIBCPP_AVAILABILITY_SIZED_NEW_DELETE void operator delete(void* __p, std::size_t __sz, std::align_val_t) _NOEXCEPT;
^
In file included from /Users/gu/codes/pytorch3d/pytorch3d/csrc/ext.cpp:3:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/extension.h:6:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/api/include/torch/python.h:12:
In file included from /Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/torch/csrc/utils/pybind.h:6:
/Users/gu/opt/anaconda3/lib/python3.7/site-packages/torch/include/pybind11/pybind11.h:1002:11: error: 'operator delete' is unavailable: introduced in macOS 10.12
::operator delete(p, s);
^
/Library/Developer/CommandLineTools/usr/bin/../include/c++/v1/new:177:74: note: 'operator delete' has been explicitly marked unavailable here
_LIBCPP_OVERRIDABLE_FUNC_VIS _LIBCPP_AVAILABILITY_SIZED_NEW_DELETE void operator delete(void* __p, std::size_t __sz) _NOEXCEPT;
^
3 errors generated.
error: command 'clang' failed with exit status 1
----------------------------------------
ERROR: Command errored out with exit status 1: /Users/gu/opt/anaconda3/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/Users/gu/codes/pytorch3d/setup.py'"'"'; __file__='"'"'/Users/gu/codes/pytorch3d/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' develop --no-deps Check the logs for full command output.
Fixed via:
MACOSX_DEPLOYMENT_TARGET=10.14 CC=clang CXX=clang++ pip install -e .
If you do not know the root cause of the problem / bug, and wish someone to help you, please
post according to this template:
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
EXPECTED RESULT
Tutorial completes without error
ACTUAL RESULT
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-4-78db78d73941> in <module>()
12 import os
13 sys.path.append(os.path.abspath(''))
---> 14 from utils import plot_camera_scene
15
16 # set for reproducibility
ModuleNotFoundError: No module named 'utils'
---------------------------------------------------------------------------
---------------------------------------------------------------------------
Please include the following (depending on what the issue is):
- git diff or code you wrote: <put diff or code here>
- logs: <put logs here>
Please also simplify the steps as much as possible so they do not require additional resources to run, such as a private dataset.
I want to use this differentiable renderer to render a LINEMOD object. The .ply model vertices are loaded and then scaled from mm to m by a factor of 0.001.
The camera intrinsic:
K = np.array([[572.4114, 0, 325.2611], [0, 573.57043, 242.04899], [0, 0, 1]])
Rotation and translation
R = np.eye(3)
T = np.array([-0.1, 0.1, 0.7])
I followed the camera position tutorial to set the cameras and renderers, except that the camera I used is
cameras = SfMPerspectiveCameras(
focal_length=((K[0,0], K[1,1]), ),
principal_point=((K[0,2], K[1,2]),),
device=device,
)
Since I want a 480x640 image, I set image_size=640 and crop to 480x640 after rendering.
However, there is nothing in the rendered results. I can get correct results with an OpenGL renderer using the same settings.
So I wonder how I can correctly use SfMPerspectiveCameras?
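For reference, SfMPerspectiveCameras expects the focal length and principal point in NDC units rather than pixels, which is a common reason a pixel-space K renders nothing. Below is a sketch of one common conversion for the intrinsics above; the exact convention (the sign flip of the principal point and which image dimension sets the NDC scale) varies across PyTorch3D versions, so treat `s = min(H, W)` as an assumption to verify against the camera docs for your version:

```python
import numpy as np

K = np.array([[572.4114, 0.0, 325.2611],
              [0.0, 573.57043, 242.04899],
              [0.0, 0.0, 1.0]])
H, W = 480, 640
s = min(H, W)  # assumed NDC scaling dimension (check your version's convention)

# Focal length: pixels -> NDC (the screen spans 2 NDC units across s pixels)
fx_ndc = K[0, 0] * 2.0 / s
fy_ndc = K[1, 1] * 2.0 / s

# Principal point: pixel offset from the image center, sign-flipped and scaled
px_ndc = -(K[0, 2] - W / 2.0) * 2.0 / s
py_ndc = -(K[1, 2] - H / 2.0) * 2.0 / s

print(fx_ndc, fy_ndc, px_ndc, py_ndc)
```

These NDC values would then be passed as `focal_length=((fx_ndc, fy_ndc),)` and `principal_point=((px_ndc, py_ndc),)` in place of the raw pixel entries of K.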
I cannot see the data of 10014_dolphin_v2_max2011_it2.obj
in pytorch3d/docs/tutorials/data
for the tutorial (deform_source_mesh_to_target_mesh.ipynb).
Perhaps it was forgotten in the upload?
Dear people of Pytorch3D,
We are actively working on a project called Pytorch Deep Point Cloud Benchmark, with hydra as our core configuration tool.
I thought it could be of interest.
Here is the repo link: https://github.com/nicolas-chaulet/deeppointcloud-benchmarks
and its associated cuda / cpp library: https://github.com/nicolas-chaulet/torch-points
Perhaps we could imagine a future collaboration.
Best regards,
Thomas Chaton.
NOTE:
If you encountered any errors or unexpected issues while using PyTorch3d and need help resolving them,
please use the "Bugs / Unexpected behaviors" issue template.
We do not answer general machine learning / computer vision questions that are not specific to
PyTorch3d, such as how a model works or what algorithm/methods can be
used to achieve X.
Fixed Windows MSVC build compatibility issues. All tests passed.
If you are using pre-compiled PyTorch 1.4 and torchvision 0.5, you should also make the following revisions to the PyTorch sources to compile successfully with Visual Studio 2019 (MSVC 19.16.27034) and CUDA 10.1:
-static constexpr *
+static const *
-static constexpr size_t DEPTH_LIMIT = 128;
+static const size_t DEPTH_LIMIT = 128;
-explicit operator type&() { return *(this->value); }
+explicit operator type& () { return *((type*)(this->value)); }
If you are using CUDA 10.2 with pre-compiled PyTorch 1.4 and torchvision 0.5, you additionally need to check PyTorch issue https://github.com/pytorch/pytorch/issues/33203 for patching the PyTorch source code.
Code: https://github.com/facebookresearch/pytorch3d/pull/9
conda install pytorch3d -c pytorch3d
gives the below output:
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: |
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
As dataclasses was only added to the standard library in Python 3.7, some modules are broken in Python 3.6, but the installation guide suggests Python 3.6 as the default version (see this)
The following error arises when running one of the examples with a conda installation on Python 3.6:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-ff7862b75591> in <module>
18
19 # rendering components
---> 20 from pytorch3d.renderer import (
21 OpenGLPerspectiveCameras, look_at_view_transform, look_at_rotation,
22 RasterizationSettings, MeshRenderer, MeshRasterizer, BlendParams,
~/anaconda3/envs/torch3d/lib/python3.6/site-packages/pytorch3d/renderer/__init__.py in <module>
17 from .lighting import DirectionalLights, PointLights, diffuse, specular
18 from .materials import Materials
---> 19 from .mesh import (
20 GouradShader,
21 MeshRasterizer,
~/anaconda3/envs/torch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/__init__.py in <module>
2
3 from .rasterize_meshes import rasterize_meshes
----> 4 from .rasterizer import MeshRasterizer, RasterizationSettings
5 from .renderer import MeshRenderer
6 from .shader import (
~/anaconda3/envs/torch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/rasterizer.py in <module>
3
4
----> 5 from dataclasses import dataclass
6 from typing import NamedTuple, Optional
7 import torch
ModuleNotFoundError: No module named 'dataclasses'
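Until the minimum version is bumped, one workaround on Python 3.6 (assuming the PyPI `dataclasses` backport is acceptable in your environment) is `pip install dataclasses`; the backport exposes the same module name, so the failing import then resolves. A minimal check that the module works as the rasterizer expects (the class below is an illustrative stand-in, not PyTorch3D's real settings class):

```python
# Stdlib module on Python 3.7+; on 3.6 this resolves to the PyPI
# backport after `pip install dataclasses`.
from dataclasses import dataclass

@dataclass
class RasterSettings:  # hypothetical stand-in for RasterizationSettings
    image_size: int = 256
    faces_per_pixel: int = 1

s = RasterSettings()
print(s.image_size)  # 256
```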
Is it possible to make cubify differentiable ?
I used cubify in a custom loss and I am getting this runtime error:
element 0 of tensors does not require grad and does not have a grad_fn
Will pytorch3d plan to support 3D pretrained deep learning models such as Mesh RCNN?
It would be nice to have a bunch of pretrained models to start playing with. Incremental work would become much easier.
--- None as of now --
I replaced the silhouette_renderer with the defined phong_renderer in the Camera Position Optimization example. Essentially I replaced the following line.
#model = Model(meshes=teapot_mesh, renderer=silhouette_renderer, image_ref=image_ref).to(device)
model = Model(meshes=teapot_mesh, renderer=phong_renderer, image_ref=image_ref).to(device)
With this modification, the optimization provides zero gradient and no longer updates the camera position or loss. Is this expected behavior?
The current implementations of sigmoid_alpha_blend and softmax_rgb_blend in blending.py are not differentiable. This is due to the replacement of the cumulative product with the exponential of log sums for the alpha calculation. When prob_map is 1.0, the gradient is not defined.
# alpha = 1.0 - torch.cumprod((1.0 - prob), dim=-1)[..., -1]
alpha = 1.0 - torch.exp(torch.log((1.0 - prob_map)).sum(dim=-1))
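A plain-Python sketch of why the log-sum form loses the gradient at prob_map == 1.0 (illustrative arithmetic only, not PyTorch autograd): the gradient of the log-sum form factors as exp(S) / (1 - p_k), which at p_k = 1 evaluates to 0 * inf = nan, whereas the product form's gradient is the product of the remaining (1 - p_i) terms and stays finite:

```python
import math

p = [1.0, 0.5]  # prob_map values; p[0] == 1.0 is the degenerate case

# Product form: alpha = 1 - prod(1 - p_i)
# d alpha / d p_0 = prod_{i != 0} (1 - p_i)  -> finite (0.5 here)
grad_prod = math.prod(1.0 - x for i, x in enumerate(p) if i != 0)

# Log-sum form: alpha = 1 - exp(S), S = sum log(1 - p_i)
# d alpha / d p_0 = exp(S) / (1 - p_0)  -> 0 * inf = nan at p_0 == 1
S = sum(math.log(1.0 - x) if x < 1.0 else float("-inf") for x in p)
grad_logsum = math.exp(S) * (1.0 / (1.0 - p[0]) if p[0] < 1.0 else float("inf"))

print(grad_prod)    # 0.5
print(grad_logsum)  # nan
```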
Thank you for your great work at first!
When I try to run the sphere-to-dolphin deformation tutorial, I hit an unexpected error when loading the vertices of the meshes to the device, which is set to cuda:0. Here is the error log:
THCudaCheck FAIL file=/opt/conda/conda-bld/pytorch_1579027003190/work/aten/src/THC/THCGeneral.cpp line=50 error=30 : unknown error
Traceback (most recent call last):
File "dolphin.py", line 33, in
faces_idx = faces.verts_idx.to(device)
File "/home/jormungandr/anaconda3/envs/pytorch3d/lib/python3.6/site-packages/torch/cuda/init.py", line 197, in _lazy_init
torch._C._cuda_init()
RuntimeError: cuda runtime error (30) : unknown error at /opt/conda/conda-bld/pytorch_1579027003190/work/aten/src/THC/THCGeneral.cpp:50
I've tried rmmod nvidia and rmmod nvidia-uvm, but each of these commands fails with an error:
rmmod: ERROR: Module nvidia_uvm is not currently loaded or
rmmod: ERROR: Module nvidia is in use by: nvidia_modeset
I rebooted once, but nothing changed.
And my environment is as follows:
Pytorch : 1.4
Python : 3.6.10
CUDA : 10.0 by nvcc(runtime)
cuDNN : 7.0
OS : Ubuntu 18.04
How do I force pytorch to calculate/import and incorporate normals when fitting the source to the target?
In my case, both the source and the target objs are somewhat similar and can draw upon normal values to guide fitting.
Any pointers?
Thank you
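One way to fold normals into the fitting objective is to penalize the angle between the normals of matched point pairs alongside the point distance. The NumPy sketch below illustrates the idea with made-up points and normals; in PyTorch3D you would instead get per-sample normals from `sample_points_from_meshes(..., return_normals=True)` and need a differentiable torch version of this loss, and the 0.1 weight is an arbitrary placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)
src_pts, src_nrm = rng.normal(size=(100, 3)), rng.normal(size=(100, 3))
trg_pts, trg_nrm = rng.normal(size=(120, 3)), rng.normal(size=(120, 3))
src_nrm /= np.linalg.norm(src_nrm, axis=1, keepdims=True)
trg_nrm /= np.linalg.norm(trg_nrm, axis=1, keepdims=True)

# Nearest neighbour of each source point in the target set
d2 = ((src_pts[:, None, :] - trg_pts[None, :, :]) ** 2).sum(-1)
nn = d2.argmin(axis=1)

# Chamfer-style point term plus an orientation-agnostic normal term
point_term = d2[np.arange(len(src_pts)), nn].mean()
normal_term = (1.0 - np.abs((src_nrm * trg_nrm[nn]).sum(-1))).mean()

loss = point_term + 0.1 * normal_term  # the weight is a free hyperparameter
```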
I am wondering why the package is not on PyPI. I suspect it is because of some of the dependencies, but I'm not sure.
Installing through PyPI without conda makes it much easier for anyone to get started.
This would be useful for many users.
Would appreciate more documentation or functionality to allow for rendering multiple meshes in a single scene, each with its own texture or vertex colors. In neural mesh renderer this can be achieved by setting a tensor with size (1, F, T, T, T, 3), where F are number of faces, and T are "texture size", while in the Nvidia Kaolin repository, there is the vertex texture mode for DIB-R.
Requesting an example of how this could be accomplished using this renderer.
Complex scenes require multiple objects, often each with its own texture. This functionality is present in other differentiable renderers.
Possible pseudo-code:
MeshList = [Mesh1(faces1, verts1, text1), Mesh2(faces2, verts2, text2), ...]
a = Mesh()
for m in MeshList:
    a = a.mesh_join(m)
img = Renderer(a)
I suppose the "translate" part should be in the last column of the matrix, not in the last row.
When I wrote the code like this:
import pytorch3d.transforms as p3d_t
t = p3d_t.Transform3d().translate(1,2,3)
print(t.get_matrix())
I got
tensor([[[1., 0., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 0.],
[1., 2., 3., 1.]]])
while I suppose the correct response should be
tensor([[[1., 0., 0., 1.],
[0., 1., 0., 2.],
[0., 0., 1., 3.],
[0., 0., 0., 1.]]])
I suppose changing mat[:, :3, 3] to mat[:, 3, :3] in lines 388 and 394 of pytorch3d.transforms would fix it.
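For context (an editorial observation, not part of the original report): Transform3d appears to use a row-vector convention, where points are transformed as p_new = p @ M rather than M @ p, and under that convention the translation does belong in the last row. A NumPy check that both layouts move the origin to (1, 2, 3) under their respective conventions:

```python
import numpy as np

# Translation in the last ROW (row-vector convention: p_new = p @ M)
M_row = np.array([[1., 0., 0., 0.],
                  [0., 1., 0., 0.],
                  [0., 0., 1., 0.],
                  [1., 2., 3., 1.]])
# Translation in the last COLUMN (column-vector convention: p_new = M @ p)
M_col = M_row.T

p = np.array([0., 0., 0., 1.])  # homogeneous origin
print(p @ M_row)                # [1. 2. 3. 1.]
print(M_col @ p)                # [1. 2. 3. 1.]
```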
Hi! my env: win10, pytorch 1.4.0, py3.7, cuda10.2
When I run setup.py, it fails with:
C:/python37/lib/site-packages/torch/include\torch/csrc/jit/argument_spec.h(190): error: member "torch::jit::ArgumentSpecCreator::DEPTH_LIMIT" may not be initialized
C:/python37/lib/site-packages/torch/include\torch/csrc/jit/script/module.h(467): error: member "torch::jit::script::detail::ModulePolicy::all_slots" may not be initialized
C:/python37/lib/site-packages/torch/include\torch/csrc/jit/script/module.h(480): error: member "torch::jit::script::detail::ParameterPolicy::all_slots" may not be initialized
C:/python37/lib/site-packages/torch/include\torch/csrc/jit/script/module.h(494): error: member "torch::jit::script::detail::BufferPolicy::all_slots" may not be initialized
C:/python37/lib/site-packages/torch/include\torch/csrc/jit/script/module.h(507): error: member "torch::jit::script::detail::AttributePolicy::all_slots" may not be initialized
5 errors detected in the compilation of "C:/Users/xxxx/AppData/Local/Temp/tmpxft_00002db4_00000000-10_rasterize_meshes.cpp1.ii".
error: command 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.2\bin\nvcc.exe' failed with exit status 1
How can I fix it?
Is there any easy way to get the 2D location of each one of the vertices of each rendered image?
I'm trying to optimize a mesh pose using a target image and the rendered one. Because I'm working in a synthetic domain, I thought I could use the PhongShader instead of a silhouette proxy.
However, right in the first iteration the weights explode. Is there a simple justification for this? Or maybe I'm doing something wrong?
I followed the instructions in INSTALL.md to create a new anaconda env, and install the required packages by the instructions. However, when I use "conda install pytorch3d -c pytorch3d",
I got
PackageNotFoundError: Packages missing in current channels:
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
EXPECTED RESULT
Tutorial completes without error
ACTUAL RESULT
The 2nd cell under "Load an OBJ file ..." throws an error:
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-7-a2f61d370091> in <module>()
----> 1 verts, faces, aux = load_obj(trg_obj)
2
3 # verts is a FloatTensor of shape (V, 3) where V is the number of vertices in the mesh
4 # faces is an object which contains the following LongTensors: verts_idx, normals_idx and textures_idx
5 # For this tutorial, normals and textures are ignored.
1 frames
/usr/local/lib/python3.6/dist-packages/pytorch3d/io/obj_io.py in _open_file(f)
82 if isinstance(f, str):
83 new_f = True
---> 84 f = open(f, "r")
85 elif isinstance(f, pathlib.Path):
86 new_f = True
FileNotFoundError: [Errno 2] No such file or directory: './data/doplhin/10014_dolphin_v2_max2011_it2.obj'
NOTE: we only consider adding new features if they are useful for many users.
I found a small typo on pytorch3d.org tutorial
As I mentioned in the screenshot above, the command !pip install 'git+https://github.com/facebookresearch/pytorch3d.git
should be fixed to
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git'
or
!pip install git+https://github.com/facebookresearch/pytorch3d.git
I know it is trivial, but a typo is a typo...
After installing the requirements, running the render_textured_meshes.ipynb
fails on the first cell with the following error:
ImportError Traceback (most recent call last)
<ipython-input-5-ce664553dc67> in <module>
8
9 # Data structures and functions for rendering
---> 10 from pytorch3d.structures import Meshes, Textures
11 from pytorch3d.renderer import (
12 look_at_view_transform,
~/PycharmProjects/pytorch3d/pytorch3d/structures/__init__.py in <module>
1 # Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
2
----> 3 from .meshes import Meshes
4 from .textures import Textures
5 from .utils import (
~/PycharmProjects/pytorch3d/pytorch3d/structures/meshes.py in <module>
4 import torch
5
----> 6 from pytorch3d import _C
7
8 from . import utils as struct_utils
ImportError: dlopen(/Users/semo/PycharmProjects/pytorch3d/pytorch3d/_C.cpython-37m-darwin.so, 2): Symbol not found: __Z23RasterizeMeshesFineCudaRKN2at6TensorES2_ifiib
Referenced from: /Users/semo/PycharmProjects/pytorch3d/pytorch3d/_C.cpython-37m-darwin.so
Expected in: flat namespace
in /Users/semo/PycharmProjects/pytorch3d/pytorch3d/_C.cpython-37m-darwin.so
python setup.py build develop
from the root directory before running render_textured_meshes.ipynb.
Any help is much appreciated, thanks!
error when trying to import pytorch3d.renderer
I installed pytorch3d according to the INSTALL.md file (installed pytorch3d from Anaconda Cloud). When I try to import pytorch3d.renderer, I get the following error:
ModuleNotFoundError: No module named 'dataclasses'
Complete logs:
$ python
Python 3.6.10 |Anaconda, Inc.| (default, Jan 7 2020, 21:14:29)
[GCC 7.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.__version__
'1.4.0'
>>> torch.version.cuda
'10.0'
>>> import pytorch3d.renderer
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/meissen/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/__init__.py", line 19, in <module>
from .mesh import (
File "/home/meissen/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/__init__.py", line 4, in <module>
from .rasterizer import MeshRasterizer, RasterizationSettings
File "/home/meissen/miniconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/rasterizer.py", line 5, in <module>
from dataclasses import dataclass
ModuleNotFoundError: No module named 'dataclasses'
>>>
Hi,
We can remark that this package is only available for Linux and macOS. Can we have a version for Windows? Thank you for answering.
Hi,
First of all, fantastic work I love it :)
I'm modifying the mesh deformation code to learn a mesh from 2D views. The optimization process starts, but after about 100 optimization steps the code crashes with the following (the log was produced with CUDA_LAUNCH_BLOCKING=1):
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-17-d220a60da988> in <module>
29 # We sample 5k points from the surface of each mesh
30 sample_trg = sample_points_from_meshes(trg_mesh, 5000)
---> 31 sample_src = sample_points_from_meshes(new_src_mesh, 5000)
32
~/coding/python/pytorch3d/env/lib/python3.7/site-packages/pytorch3d/ops/sample_points_from_meshes.py in sample_points_from_meshes(meshes, num_samples, return_normals)
64 num_samples, replacement=True
65 ) # (N, num_samples)
---> 66 sample_face_idxs += mesh_to_face[meshes.valid].view(num_valid_meshes, 1)
67
68 # Get the vertex coordinates of the sampled faces.
RuntimeError: cuda runtime error (710) : device-side assert triggered at /pytorch/aten/src/THC/THCGeneral.cpp:313
The error could be related to the other issue here, but I'm not sure if it has the same root cause. The error seems to happen at different points in the optimization process depending on parameters like the loss weights and the optimizer used.
Python : 3.7.5
Pytorch : 1.4.0
CUDA : 10.2
Ubuntu : 19.10
Thanks!
Hello,
Is it possible to provide the dataset used in the examples? I think the dolphin data is not available in the data folder.
Thanks.
Hello,
I installed Python 3.6 after things did not work with 3.7 and 3.8. Now I get the following errors:
Camera position optimization using differentiable rendering
In this tutorial we will learn the [x, y, z] position of a camera given a reference image using differentiable rendering.
We will first initialize a renderer with a starting position for the camera. We will then use this to generate an image, compute a loss with the reference image, and finally backpropagate through the entire pipeline to update the position of the camera.
This tutorial shows how to:
load a mesh from an .obj file
initialize a Camera, Shader and Renderer,
render a mesh
set up an optimization loop with a loss function and optimizer
Set up and imports
import os
import torch
import numpy as np
from tqdm import tqdm_notebook
import imageio
import torch.nn as nn
import torch.nn.functional as F
import matplotlib.pyplot as plt
from skimage import img_as_ubyte
from pytorch3d.io import load_obj
from pytorch3d.structures import Meshes, Textures
from pytorch3d.transforms import Rotate, Translate
ModuleNotFoundError Traceback (most recent call last)
in
19
20 # rendering components
---> 21 from pytorch3d.renderer import (
22 OpenGLPerspectiveCameras, look_at_view_transform, look_at_rotation,
23 RasterizationSettings, MeshRenderer, MeshRasterizer, BlendParams,
~/Disk/Software/Anaconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/__init__.py in
17 from .lighting import DirectionalLights, PointLights, diffuse, specular
18 from .materials import Materials
---> 19 from .mesh import (
20 GouradShader,
21 MeshRasterizer,
~/Disk/Software/Anaconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/__init__.py in
2
3 from .rasterize_meshes import rasterize_meshes
----> 4 from .rasterizer import MeshRasterizer, RasterizationSettings
5 from .renderer import MeshRenderer
6 from .shader import (
~/Disk/Software/Anaconda3/envs/pytorch3d/lib/python3.6/site-packages/pytorch3d/renderer/mesh/rasterizer.py in
3
4
----> 5 from dataclasses import dataclass
6 from typing import NamedTuple, Optional
7 import torch
ModuleNotFoundError: No module named 'dataclasses'
The top level tutorials page currently tells visitors to
!pip install torch torchvision
!pip install 'git+https://github.com/facebookresearch/pytorch3d.git
The second line is missing a closing single quote, resulting in errors when visitors try to follow the instructions.
PR submitted as #27
Transforming a shape with genus 0 to a sphere is a well-studied topic. Mean curvature flow guarantees that during surface evolution the mesh will not self-intersect.
In the tutorial on transforming the model, what properties are being preserved? Is there any report/article using this approach? Is the machine learning approach better than the classical approach? Any paper?
/pytorch3d/loss/mesh_edge_loss.py
The docstring currently mentions that:
Each edge contributes equally to the final loss, regardless of
numbers of edges per mesh in the batch by weighting each mesh with the
inverse number of edges.
This is wrong: edges of meshes with relatively many edges have small weights.
It should be:
Each mesh contributes equally to the final loss, regardless of
numbers of edges per mesh in the batch by weighting each mesh with the
inverse number of edges.
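The corrected wording can be checked numerically. The NumPy sketch below uses fake per-edge losses (it is not the real mesh_edge_loss implementation): weighting every edge of mesh m by 1/E_m makes each mesh's total contribution equal regardless of its edge count, while the per-edge weight shrinks as the mesh gains edges.

```python
import numpy as np

edge_counts = [4, 1000]  # two meshes in a batch with very different edge counts
per_edge_loss = [np.full(n, 1.0) for n in edge_counts]  # pretend every edge has loss 1.0

# Weight each mesh's edges by the inverse of its edge count
contributions = [l.sum() / n for l, n in zip(per_edge_loss, edge_counts)]
print(contributions)  # each mesh contributes 1.0 to the batch loss
```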
Hello,
Sorry, I am a newbie to PyTorch but got attracted to 3D meshes. I am having a very hard time getting PyTorch3D to work. I am not sure if I should be asking these questions here.
I installed NVidia-Driver 440.59 on Ubuntu 18.04.
The Compiler is GCC 8.3
nvidia-smi says that CUDA-10.2 is installed.
Thereafter, I went to Pytorch library and opted Conda + 10.1 with the given command:
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
Even after 5 hours the above command had not finished, so I gave up.
I instead installed pytorch-gpu via conda.
All simple examples worked. No issues moving data from CPU-GPU and back.
I installed Pytorch3D. No example worked. For example:
(base) :$ python test_utils.py
Traceback (most recent call last):
File "test_utils.py", line 8, in
from pytorch3d.renderer.utils import TensorProperties
File "/home/Disk/Software/Anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/__init__.py", line 19, in
from .mesh import (
File "/home/Disk/Software/Anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/mesh/__init__.py", line 3, in
from .rasterize_meshes import rasterize_meshes
File "/home/Disk/Software/Anaconda3/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterize_meshes.py", line 9, in
from pytorch3d import _C
ImportError: /home/Disk/Software/Anaconda3/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so: undefined symbol: _ZN6caffe28TypeMeta21_typeMetaDataInstanceIN3c108BFloat16EEEPKNS_6detail12TypeMetaDataEv
How do I resolve these issues? It seems the problem may be with PyTorch, CUDA, and the NVIDIA drivers. Installing all these libraries turned out to be a nightmare.
Can someone write how to correctly install these libraries so that PyTorch3D works as expected?
Thanks
Hi, I want to ask how I can render objects that have no texture information, such as the ShapeNetSem dataset, whose .obj files contain no lines starting with "vt ".
And there is the content of a corresponding .mtl file below:
# Blender MTL File: 'None'
# Material Count: 1
newmtl None
Ns 0
Ka 0.000000 0.000000 0.000000
Kd 0.8 0.8 0.8
Ks 0.8 0.8 0.8
d 1
illum 2
As far as I know, the MeshRenderer class needs a shader, and the shader needs texture information to work properly. When I execute verts, faces, aux = load_obj(shapenetsem_obj_filename), I get an empty aux. I then create a mesh instance without a texture, mesh = Meshes(verts=[verts], faces=[faces_idx], textures=None), and when I render the mesh as in the tutorial, I get: ValueError: Expected meshes.textures to be an instance of Textures; got <class 'NoneType'>
How can I solve this problem? Thank you for your help!
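One commonly suggested workaround is to fabricate a constant per-vertex color so the shader has something to sample. Building the array is straightforward; the commented lines sketch how it might be passed on, assuming a Textures(verts_rgb=...) constructor as in the tutorials (verify against your installed version):

```python
import numpy as np

V = 1000  # number of vertices in the loaded mesh (placeholder value)
# One batch element, one RGB triple per vertex, all white
verts_rgb = np.ones((1, V, 3), dtype=np.float32)

# Sketch of how this could be wired in (assumed API, not verified here):
# textures = Textures(verts_rgb=torch.from_numpy(verts_rgb).to(device))
# mesh = Meshes(verts=[verts], faces=[faces.verts_idx], textures=textures)
```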
More Point Sampling Methods from meshes
It would be great if I could have more options for sampling the set of surface points, such as
More sampling options will enable us to better understand how our model performs under different situations :)
If you think it's necessary, I can submit a PR.
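As one concrete example of an alternative sampler, farthest point sampling is easy to sketch in NumPy (illustrative and O(k·n); a practical version for large meshes would want the usual CUDA implementation):

```python
import numpy as np

def farthest_point_sample(points, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from the chosen set."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())           # farthest remaining point
        chosen.append(nxt)
        # Keep, for every point, its distance to the nearest chosen point
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return points[chosen]

pts = np.random.default_rng(1).normal(size=(500, 3))
sample = farthest_point_sample(pts, 32)
print(sample.shape)  # (32, 3)
```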
Tried to run camera_position_optimization_with_differentiable_rendering.ipynb with my custom obj. Although it works fine for the provided obj, for my custom obj look_at_view_transform and look_at_rotation sometimes return:
R is not a valid rotation matrix
which is weird!
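The check behind that message is typically orthonormality. Here is a quick NumPy test you can run on your own R; it sketches the standard validity conditions (orthonormal with determinant +1), not PyTorch3D's exact tolerance:

```python
import numpy as np

def is_valid_rotation(R, tol=1e-5):
    """R must be orthonormal (R @ R.T == I) with determinant +1."""
    orthonormal = np.allclose(R @ R.T, np.eye(3), atol=tol)
    proper = abs(np.linalg.det(R) - 1.0) < tol
    return orthonormal and proper

print(is_valid_rotation(np.eye(3)))         # True
print(is_valid_rotation(np.eye(3) * 1.01))  # False: scaling breaks orthonormality
```

Numerical round-off in a hand-built look-at frame (e.g. an up vector nearly parallel to the view direction) is a common way custom objects trip this check.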
The build completes fine with MACOSX_DEPLOYMENT_TARGET set to 10.14 rather than 10.9 as per the instructions.
In file included from /Users/jelleferinga/miniconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/include/pybind11/attr.h:13:
/Users/jelleferinga/miniconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/include/pybind11/cast.h:579:34: error: aligned allocation function of type 'void *
(std::size_t, std::align_val_t)' **is only available on macOS 10.14 or newer**
vptr = ::operator new(type->type_size,
The build passes when set to the correct target
Hi, all,
I got the following import error:
>>> from pytorch3d import _C
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: /data/code11/pytorch3d/pytorch3d/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZTIN3c1021AutogradMetaInterfaceE
My config is pytorch==1.4, torchvision==0.5.0, CUDA==10.0. It seems there are conflicts between the packages? Any hints to solve it?
THX!
Hi!
I'd like to just use the simple demo (no training) of mesh-r-cnn, and don't have a cuda-capable gpu on my macbook. Can I install and use this only with CPU? Or is it only made for gpu usage? Thanks!
When running the deform_source_mesh_to_target_mesh.ipynb
tutorial, I encounter the following error in the 'sample_points_from_meshes()' function in the main optimization loop, cell 10 under section "3. Optimization loop":
RuntimeError Traceback (most recent call last)
<ipython-input-10-eaad4829c9f7> in <module>
28
29 # We sample 5k points from the surface of each mesh
---> 30 sample_trg = sample_points_from_meshes(trg_mesh, 5000)
31 sample_src = sample_points_from_meshes(new_src_mesh, 5000)
32
~/PycharmProjects/pytorch3d/pytorch3d/ops/sample_points_from_meshes.py in sample_points_from_meshes(meshes, num_samples, return_normals)
53 with torch.no_grad():
54 areas, _ = _C.face_areas_normals(
---> 55 verts, faces
56 ) # Face areas can be zero.
57 max_faces = meshes.num_faces_per_mesh().max().item()
RuntimeError: Not implemented on the CPU. (face_areas_normals at /Users/semo/PycharmProjects/pytorch3d/pytorch3d/csrc/face_areas_normals/face_areas_normals.h:35)
frame #0: c10::Error::Error(c10::SourceLocation, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) + 135 (0x12737c9e7 in libc10.dylib)
frame #1: face_areas_normals(at::Tensor, at::Tensor) + 857 (0x127e8e679 in _C.cpython-37m-darwin.so)
frame #2: std::__1::tuple<at::Tensor, at::Tensor> pybind11::detail::argument_loader<at::Tensor, at::Tensor>::call_impl<std::__1::tuple<at::Tensor, at::Tensor>, std::__1::tuple<at::Tensor, at::Tensor> (*&)(at::Tensor, at::Tensor), 0ul, 1ul, pybind11::detail::void_type>(std::__1::tuple<at::Tensor, at::Tensor> (*&)(at::Tensor, at::Tensor), std::__1::integer_sequence<unsigned long, 0ul, 1ul>, pybind11::detail::void_type&&) + 57 (0x127e96fa9 in _C.cpython-37m-darwin.so)
frame #3: void pybind11::cpp_function::initialize<std::__1::tuple<at::Tensor, at::Tensor> (*&)(at::Tensor, at::Tensor), std::__1::tuple<at::Tensor, at::Tensor>, at::Tensor, at::Tensor, pybind11::name, pybind11::scope, pybind11::sibling>(std::__1::tuple<at::Tensor, at::Tensor> (*&)(at::Tensor, at::Tensor), std::__1::tuple<at::Tensor, at::Tensor> (*)(at::Tensor, at::Tensor), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::'lambda'(pybind11::detail::function_call&)::operator()(pybind11::detail::function_call&) const + 135 (0x127e96857 in _C.cpython-37m-darwin.so)
frame #4: pybind11::cpp_function::dispatcher(_object*, _object*, _object*) + 3088 (0x127e98350 in _C.cpython-37m-darwin.so)
[frames #5-#63: CPython interpreter frames omitted for brevity]
To reproduce, run the deform_source_mesh_to_target_mesh.ipynb notebook on the CPU as opposed to CUDA. Make sure to have the example target 3D model downloaded and placed in the appropriate directory as instructed in cell 3.
Hi,
I tried to install the package with "python3 setup.py install --user" but I got a few errors:
and the same errors for line 245. Is it related to the CUDA version? The version I have is 10.0.
Thanks in advance.
Hi, I have this issue when I am trying to run the tutorial.
(aienv) C:\Users\Desktop\AIrendering>C:/Users/AppData/Local/Continuum/anaconda3/envs/aienv/python.exe c:/Users/Desktop/AIrendering/meshing.py
Traceback (most recent call last):
File "c:/Users/Desktop/AIrendering/meshing.py", line 8, in <module>
from pytorch3d.io import load_objs_as_meshes
File "c:\Users\Desktop\AIrendering\pytorch3d\io\__init__.py", line 4, in <module>
from .obj_io import load_obj, load_objs_as_meshes, save_obj
File "c:\Users\Desktop\AIrendering\pytorch3d\io\obj_io.py", line 16, in <module>
from pytorch3d.structures import Meshes, Textures, join_meshes
File "c:\Users\Desktop\AIrendering\pytorch3d\structures\__init__.py", line 3, in <module>
from .meshes import Meshes, join_meshes
File "c:\Users\Desktop\pytorch3d\structures\meshes.py", line 7, in <module>
from pytorch3d import _C
ImportError: DLL load failed: The specified procedure could not be found.
What can I do to overcome this problem?
Hi,
Thanks for sharing your great work!
I was wondering - can one render a mesh and get per-pixel depth too, like e.g. mesh-renderer allows you to do?
Thanks a lot!
Z.
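Per-pixel depth is available without a shader: run the MeshRasterizer on its own and read the `zbuf` field of the Fragments it returns, which has shape (N, H, W, faces_per_pixel) with -1 at background pixels. A minimal post-processing sketch, using a synthetic array standing in for the real `rasterizer(meshes).zbuf`:

```python
import numpy as np

# Stand-in for fragments.zbuf from PyTorch3D's MeshRasterizer:
# shape (N, H, W, faces_per_pixel), -1 where no face covers the pixel.
zbuf = np.full((1, 4, 4, 2), -1.0)
zbuf[0, 1:3, 1:3, 0] = 2.5     # pretend a 2x2 patch of pixels hit a face

depth = zbuf[0, ..., 0]        # nearest face per pixel -> (H, W) depth map
valid = depth > 0              # background pixels stay at -1
```

Note the depths are in the camera's view space, so they may need converting depending on what your downstream code expects.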
To preface, it is very likely that I am doing something wrong, as I didn't look closely at the blending code in PyTorch3D. I would appreciate any advice on this particular topic.
I was comparing the silhouette and color output between SoftRas and PyTorch3D. The silhouettes are very similar, with a few pixels' difference, perhaps due to some kind of rounding or offset issue.
But the textured (color) output is very different, with black artifacts.
Links to mask & color output:
Imgur link
Minimum replication code for SoftRas:
import os
import numpy as np
import imageio
import matplotlib.pyplot as plt
import soft_renderer as sr

current_dir = "/home/aluo/tools/SoftRas/examples"
data_dir = os.path.join(current_dir, '../data')

class args_obj():
    filename_input = os.path.join(data_dir, 'obj/spot/spot_triangulated.obj')
    output_dir = os.path.join(data_dir, 'results/output_render')

def main():
    args = args_obj()
    mesh = sr.Mesh.from_obj(args.filename_input, load_texture=True, texture_res=5,
                            texture_type='surface')
    renderer = sr.SoftRenderer(camera_mode='look_at')
    os.makedirs(args.output_dir, exist_ok=True)
    renderer.transform.set_eyes_from_angles(2.7, 10, 20)
    mesh.reset_()
    gamma_pow = -3
    renderer.set_gamma(10**gamma_pow)
    renderer.set_sigma(10**(gamma_pow - 1))
    images = renderer.render_mesh(mesh)
    image = images.detach().cpu().numpy()[0].transpose((1, 2, 0))
    plt.figure(figsize=(10, 10))
    plt.grid("off")
    plt.axis("off")
    plt.imshow(image[:, :, :3])
    plt.show()
    plt.figure(figsize=(10, 10))
    plt.grid("off")
    plt.axis("off")
    plt.imshow(image[:, :, -1])
    plt.show()

main()
For Pytorch3d (following the render_textured_meshes notebook)
blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
raster_settings = RasterizationSettings(
    image_size=256,
    blur_radius=np.log(1. / 1e-4 - 1.) * blend_params.sigma,
    faces_per_pixel=100,
    bin_size=0
)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=raster_settings
    ),
    shader=TexturedPhongShader(
        device=device,
        cameras=cameras,
        lights=lights
    )
)
images = renderer(mesh)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off")
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., -1].cpu().numpy())
plt.grid("off")
plt.axis("off")
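One thing worth checking when comparing the two scripts: the softness parameters differ (the SoftRas snippet sets gamma=1e-3 and sigma=1e-4, while the PyTorch3D snippet sets gamma=sigma=1e-4), and gamma controls the soft color blending, so this alone could change the blended colors even with matching silhouettes. The PyTorch3D tutorial additionally derives a blur radius from sigma; the numbers involved:

```python
import numpy as np

# SoftRas snippet above:
softras_gamma, softras_sigma = 10**-3, 10**-4
# PyTorch3D snippet above:
p3d_gamma, p3d_sigma = 1e-4, 1e-4
# PyTorch3D blurs face boundaries out to a radius derived from sigma,
# chosen so the boundary sigmoid falls to ~1e-4 at the radius:
blur_radius = np.log(1.0 / 1e-4 - 1.0) * p3d_sigma   # about 9.2e-4
```

Setting the two gammas equal before comparing the color output would rule this difference out.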
I am implementing an irradiance model for photovoltaic (PV) modules. Most of the code is already in PyTorch, and I was wondering if I could use some of your functions to compute shading.
I am inexperienced with the domain of computer graphics, and terms such as shader or raster are scary.
I am particularly interested in computing the shading produced by the sun, from one row of PV modules onto the ground and the row behind it. We currently have a very rudimentary case-by-case shade computation, but I would love to replace it with a more "state of the art" solution.
My model already implements a Mesh object with vertices and faces for every element in the scene. Most of the code for this comes from Kaolin.
Thank you.
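For the row-to-ground case specifically, the geometric core does not need a full renderer: each vertex's shadow lands where its ray along the sun direction meets the ground plane. A minimal sketch of that projection in plain NumPy (all names here are hypothetical, independent of PyTorch3D or Kaolin):

```python
import numpy as np

def project_shadow(verts, sun_dir, ground_z=0.0):
    """Project vertices along the sun direction onto the plane z=ground_z."""
    verts = np.asarray(verts, dtype=float)
    sun_dir = np.asarray(sun_dir, dtype=float)
    # Ray parameter t per vertex so that (vert + t * sun_dir).z == ground_z.
    t = (ground_z - verts[:, 2]) / sun_dir[2]
    return verts + t[:, None] * sun_dir

# A panel edge 1 m above the ground, sun at 45 degrees elevation from +x:
verts = np.array([[0., 0., 1.], [0., 1., 1.]])
shadow = project_shadow(verts, sun_dir=[1., 0., -1.])
```

Projecting every face of a row mesh this way gives the shadow polygons on the ground; intersecting them with the surfaces behind is where a rasterizer starts to pay off.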
pip install pytorch3d
I have that installed but still cannot import it.
Deforming the source texture to the target texture along with the mesh, if possible! Textured interpolation between two objects would be extremely useful for evaluating inverse-graphics models.
If you do not know the root cause of the problem / bug, and wish someone to help you, please
post according to this template:
from pytorch3d.structures import Meshes
Expected behavior:
Cell succeeds
Actual behavior:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-14-d743ebcdab4d> in <module>()
2 import torch
3 from pytorch3d.io import load_obj, save_obj
----> 4 from pytorch3d.structures import Meshes
5 from pytorch3d.utils import ico_sphere
6 from pytorch3d.ops import sample_points_from_meshes
1 frames
/usr/local/lib/python3.6/dist-packages/pytorch3d/structures/meshes.py in <module>()
4 import torch
5
----> 6 from pytorch3d import _C
7
8 from . import utils as struct_utils
ImportError: /usr/local/lib/python3.6/dist-packages/pytorch3d/_C.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _Z24RasterizeMeshesNaiveCudaRKN2at6TensorES2_S2_ifib
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
Please include the following (depending on what the issue is): the output of git diff or the code you wrote, and any relevant logs.
Please also simplify the steps as much as possible so they do not require additional resources to
run, such as a private dataset.
Rendering a simple triangle with SilhouetteShader and BlendParams as follows
blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
raster_settings = RasterizationSettings(
    image_size=frame_size,
    blur_radius=np.log(1.0 / 1e-4 - 1.0) * blend_params.sigma,
    faces_per_pixel=2,
    bin_size=1
)
raises a big runtime error. However, if bin_size=0, no errors arise.
pip install git+git://github.com/facebookresearch/pytorch3d.git@234658901ae1891c71f6aed2dcf617ab1964851d
python3.7 -m venv env
pip freeze:
fvcore==0.1.dev200218
numpy==1.18.1
Pillow==7.0.0
pkg-resources==0.0.0
portalocker==1.5.2
pytorch3d==0.1
PyYAML==5.3
six==1.14.0
termcolor==1.1.0
torch==1.4.0
torchvision==0.5.0
tqdm==4.43.0
yacs==0.1.6
import numpy as np
import torch
from pytorch3d.renderer import (
    look_at_view_transform,
    OpenGLPerspectiveCameras, RasterizationSettings, BlendParams,
    MeshRenderer, MeshRasterizer, SilhouetteShader
)
from pytorch3d.structures.meshes import Meshes

device = torch.device("cuda:0")
vertices = np.array([
    [-0.4, 0, 0],
    [ 0.2, 0, 0],
    [ 0.0, 0.5, 0]], dtype=np.float32)
faces = np.array([[0, 1, 2]], dtype=np.int64)  # face indices must be integers
frame_size = 256
mesh = Meshes(verts=[torch.tensor(vertices, dtype=torch.float32, device=device)],
              faces=[torch.tensor(faces, dtype=torch.int64, device=device)])
R, T = look_at_view_transform(2.7, 45.0, 0.0)
cameras = OpenGLPerspectiveCameras(device=device, R=R, T=T)
blend_params = BlendParams(sigma=1e-4, gamma=1e-4)
raster_settings = RasterizationSettings(
    image_size=frame_size,
    blur_radius=np.log(1.0 / 1e-4 - 1.0) * blend_params.sigma,
    faces_per_pixel=2,
    bin_size=1
)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras, raster_settings=raster_settings
    ),
    shader=SilhouetteShader(blend_params=blend_params),
)
images = renderer(meshes_world=mesh)
Traceback (most recent call last):
File "/tmp/diff_renderers/standalone/run_pytorch3d.py", line 45, in <module>
images = renderer(meshes_world=mesh)
File "/tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/renderer/mesh/renderer.py", line 37, in forward
fragments = self.rasterizer(meshes_world, **kwargs)
File "/tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterizer.py", line 124, in forward
perspective_correct=raster_settings.perspective_correct,
File "/tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterize_meshes.py", line 120, in rasterize_meshes
perspective_correct,
File "/tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/renderer/mesh/rasterize_meshes.py", line 169, in forward
perspective_correct,
RuntimeError: Got 256; that's too many! (RasterizeMeshesCoarseCuda at /tmp/pip-req-build-ih7k1k4e/pytorch3d/csrc/rasterize_meshes/rasterize_meshes.cu:627)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x33 (0x7f903310a193 in /tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: RasterizeMeshesCoarseCuda(at::Tensor const&, at::Tensor const&, at::Tensor const&, int, float, int, int) + 0x3c8 (0x7f902b8c6471 in /tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #2: RasterizeMeshesCoarse(at::Tensor const&, at::Tensor const&, at::Tensor const&, int, float, int, int) + 0xb2 (0x7f902b87aed2 in /tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #3: RasterizeMeshes(at::Tensor const&, at::Tensor const&, at::Tensor const&, int, float, int, int, int, bool) + 0x62 (0x7f902b87b352 in /tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x3cc10 (0x7f902b88bc10 in /tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x39207 (0x7f902b888207 in /tmp/diff_renderers/standalone/env/lib/python3.7/site-packages/pytorch3d/_C.cpython-37m-x86_64-linux-gnu.so)
[frames #6-#48: CPython interpreter frames omitted for brevity]
Process finished with exit code 1
Maybe I'm the proverbial idiot of Murphy's law: "If you make something idiot-proof, someone will just make a better idiot." But I think if the input parameters are invalid, this should be caught before crashing and burning :D
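For context on the "Got 256; that's too many!" error: the coarse-to-fine rasterizer tiles the image into square bins of bin_size pixels, and the CUDA kernel enforces an upper bound on the number of bins per axis, which bin_size=1 on a 256-pixel image far exceeds. bin_size=0 falls back to the naive per-pixel kernel, which is why that setting works. A sketch of the arithmetic (the exact internal bound is not reproduced here):

```python
import math

# bin_size=1 on a 256-pixel image produces 256 bins along each axis,
# overflowing the rasterizer's per-axis bin limit; bin_size=0 disables
# binning and uses the naive per-pixel rasterization kernel instead.
image_size, bin_size = 256, 1
bins_per_axis = math.ceil(image_size / bin_size)
```

So the workaround is either bin_size=0 or a bin size large enough that `bins_per_axis` stays small.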