Comments (15)
Hi @Zozobozo, yes, you can get the per-pixel depth (for the top K faces which overlap each pixel) from the output of the mesh rasterizer: `fragments.zbuf` is an `(N, H, W, K)` tensor.
To retrieve this output, you can initialize a rasterizer and use it on its own, e.g.

```python
rasterizer = MeshRasterizer(
    cameras=cameras,
    raster_settings=raster_settings,
)
fragments = rasterizer(meshes)
```
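For reference, a minimal NumPy sketch of how `fragments.zbuf` is typically unpacked. The `(1, 4, 4, 2)` buffer below is fabricated stand-in data, not real rasterizer output; background pixels carry the value -1, the convention the masking code later in this thread relies on:

```python
import numpy as np

# Fabricated stand-in for fragments.zbuf with N=1, H=W=4, K=2 faces per pixel.
# Real values come from the rasterizer; -1 marks pixels covered by no face.
zbuf = -np.ones((1, 4, 4, 2), dtype=np.float32)
zbuf[0, 1:3, 1:3, 0] = 2.5   # nearest face depth for a 2x2 patch
zbuf[0, 1:3, 1:3, 1] = 3.0   # second-nearest face behind it

depth = zbuf[0, ..., 0]      # (H, W) nearest-surface depth map (K index 0)
valid = depth > -1           # mask out background pixels
print(valid.sum())           # 4 covered pixels
print(depth[valid].mean())   # 2.5
```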
OR, if you want the full image as well as the depth, you can extend the MeshRenderer class to create your own renderer which also returns `fragments.zbuf`, e.g.

```python
import torch.nn as nn


class MeshRendererWithDepth(nn.Module):
    def __init__(self, rasterizer, shader):
        super().__init__()
        self.rasterizer = rasterizer
        self.shader = shader

    def forward(self, meshes_world, **kwargs):
        fragments = self.rasterizer(meshes_world, **kwargs)
        images = self.shader(fragments, meshes_world, **kwargs)
        return images, fragments.zbuf
```
We also have a setting to enable perspective-correct depth interpolation (set `raster_settings.perspective_correct = True`).
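To see how the wrapper composes, here is a self-contained toy version. The rasterizer and shader below are trivial stubs standing in for the real pytorch3d objects, and the class is a plain-Python copy of the one above (no `nn.Module`, so it runs without torch); only the control flow matches:

```python
from types import SimpleNamespace

# Trivial stubs (NOT the real pytorch3d classes): rasterize first,
# shade second, return both the shaded image and the z-buffer.
def fake_rasterizer(meshes, **kwargs):
    return SimpleNamespace(zbuf=[[0.5]])   # pretend depth buffer

def fake_shader(fragments, meshes, **kwargs):
    return [[1.0]]                         # pretend RGB image

class MeshRendererWithDepth:
    def __init__(self, rasterizer, shader):
        self.rasterizer = rasterizer
        self.shader = shader

    def __call__(self, meshes_world, **kwargs):
        fragments = self.rasterizer(meshes_world, **kwargs)
        images = self.shader(fragments, meshes_world, **kwargs)
        return images, fragments.zbuf

renderer = MeshRendererWithDepth(fake_rasterizer, fake_shader)
images, zbuf = renderer("meshes")   # unpack image and depth together
print(images, zbuf)                 # [[1.0]] [[0.5]]
```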
If this answers your question, please close this issue! :)
from pytorch3d.
When you do `fragments = fragments[1]` you have a tensor, not a picture. Something you did afterwards produced the picture, and your question is about that step, not about pytorch3d. My guess is that you plotted a one-channel image with matplotlib, which defaults to the viridis colormap. You can switch to a different colormap, or manually convert the tensor to a 3-channel RGB image, e.g. by expanding it from (H, W) to (H, W, 3) (I think) so it becomes a 3-channel grayscale image.
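The suggested (H, W) → (H, W, 3) expansion can be sketched with NumPy alone. Two assumptions here: raw depth values are normalized to [0, 1] before display, and -1 marks background pixels (the zbuf convention):

```python
import numpy as np

# Toy (H, W) slice of a zbuf; -1 marks background pixels.
depth = np.array([[2.0, 3.0],
                  [-1.0, 4.0]], dtype=np.float32)

valid = depth > -1
norm = np.zeros_like(depth)                          # background stays black
dmin, dmax = depth[valid].min(), depth[valid].max()
norm[valid] = (depth[valid] - dmin) / (dmax - dmin)  # scale valid depths to [0, 1]

rgb = np.repeat(norm[..., None], 3, axis=2)          # (H, W) -> (H, W, 3) grayscale
print(rgb.shape)    # (2, 2, 3)
```

Inverting the valid pixels (`1 - norm`) yields a white-gradient-on-dark rendering instead of a dark-near one.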
We are landing a change now that introduces `MeshRendererWithFragments`, which returns `images, fragments`, into the renderer library, so you can use that in the future.
@wangsen1312 this y flip issue has now been fixed - see #78 for further discussion.
Following this.
I am using this shader

```python
shader = SoftPhongShader(
    cameras=cameras,
    lights=lights,
    device=device,
)
```

and this

```python
renderer = MeshRendererWithDepth(rasterizer=rasterizer, shader=shader)
fragments = renderer(mesh)
fragments = fragments[1]
```
and my output is this.
I was wondering how I can extract something with a white gradient and a dark background like this.
What determines the color choice (purple and green), and how can I change it?
thanks for the quick answer! i'll give it a try!
Best,
Z.
@nikhilaravi I have tried rendering depth together with images, but they don't seem to be in the same coordinate frame; the y-axis appears flipped. Is this a bug, or should I flip it manually?
Best
@nikhilaravi Got it, nice work!
Is this kind of depth image differentiable? @nikhilaravi
@Bob-Yeah yes it should be differentiable.
I'm actually kind of curious now. For the zbuf output (and we optimize with respect to another 2.5D depth map target), is it differentiable ONLY at pixels where there is a face? Or is this like the SoftSilhouetteShader where the boundaries can also be optimized?
Thank you so much
Hi @nikhilaravi, sorry to bother you. I ran into a problem while trying to convert the `zbuf` output to a point cloud; I would be very grateful for any advice.
The problem is that the point cloud computed from the rendered `zbuf` comes out deformed. My understanding is that `zbuf` is essentially a depth image, so I should be able to convert it to a point cloud using the intrinsic matrix, but the result is wrong.
The original mesh is like this:
The point cloud generated from the zbuf looks like this:
Here is the code I use:
```python
import numpy as np
import matplotlib.pyplot as plt
from pytorch3d.io import load_objs_as_meshes, load_obj
from pytorch3d.renderer import (
    FoVPerspectiveCameras, look_at_view_transform,
    RasterizationSettings, BlendParams,
    MeshRenderer, MeshRasterizer, HardPhongShader
)
import open3d as o3d

width = 512
height = 512
fov = 60
obj_path = './data/examples/models/model_normalized.obj'

verts, faces, aux = load_obj(obj_path)
meshes = load_objs_as_meshes([obj_path])

R, T = look_at_view_transform(2.7, 10, 20)
cameras = FoVPerspectiveCameras(R=R, T=T, fov=fov)
raster_settings = RasterizationSettings(
    image_size=(height, width),
    blur_radius=0.0,
    faces_per_pixel=1,
    # max_faces_per_bin=20000
)
rasterizer = MeshRasterizer(
    cameras=cameras,
    raster_settings=raster_settings
)
depth = rasterizer(meshes).zbuf.cpu().squeeze().numpy()

cx = width / 2
cy = height / 2
fx = cx / np.tan(fov / 2)
fy = cy / np.tan(fov / 2)
row = height
col = width
# TODO check whether u or v is the column. depth[v, u] ???
v = np.array(list(np.ndindex((row, col)))).reshape(row, col, 2)[:, :, 0]
u = np.array(list(np.ndindex((row, col)))).reshape(row, col, 2)[:, :, 1]

X_ = (u - cx) / fx
X_ = X_[depth > -1]  # exclude infinity
Y_ = (v - cy) / fy * depth
Y_ = Y_[depth > -1]  # exclude infinity
depth_ = depth[depth > -1]  # exclude infinity

X = X_ * depth_
Y = Y_ * depth_
Z = depth_
coords_g = np.stack([X, Y, Z])  # shape: 3 x num_points
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(coords_g.T)
o3d.visualization.draw_geometries([pcd])
```
Any suggestions would be helpful. Please reply at your convenience.
Thanks!
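One thing worth double-checking in conversions like the one above: `FoVPerspectiveCameras` interprets `fov` in degrees by default, while `np.tan` expects radians, so computing the focal length needs an explicit conversion. A small numeric check (512 px image, 60-degree fov; plain NumPy, no pytorch3d required):

```python
import numpy as np

# fov=60 as passed to FoVPerspectiveCameras is in DEGREES (the class
# default), but np.tan works in radians, so the two must not be mixed.
width, fov_deg = 512, 60.0
cx = width / 2

fx_wrong = cx / np.tan(fov_deg / 2)              # treats 60 as radians: nonsense
fx_right = cx / np.tan(np.radians(fov_deg) / 2)  # plausible focal length in pixels

print(fx_wrong < 0)        # True — the "focal length" even comes out negative
print(round(fx_right, 1))  # 443.4
```

This is only one of the pitfalls in such a conversion (PyTorch3D's screen/NDC coordinate conventions also matter), but it is the easiest one to verify numerically.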
I get a "Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)" error when using the rasterizer for zbuf. Is anyone familiar with this who can help? The code is from the posts here and similar renderer demos.
This issue is closed. Please open new issues with all the details for help with other things.