
npbg's People

Contributors

seva100, ttsesm


npbg's Issues

There was a problem with scene stitching

When I placed a trained object into the scene, rendering artifacts occurred when I combined the point clouds and textures by concatenation, but not when I used the replacement method. (Because the object's point cloud is small, I replaced a portion of the scene's point cloud with the object's point cloud, and did the same with the textures.) How did you solve the problem of scene editing?
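For reference, a minimal sketch of the concatenation approach described above, assuming the descriptors live in PointTexture checkpoints (the 'texture_' tensor, as loaded in viewer.py) with points indexed along the last dimension, and the geometry in (N, 3) arrays; all names here are illustrative:

import numpy as np
import torch

def stitch(scene_xyz, obj_xyz, scene_tex, obj_tex):
    """Append the object's points and descriptors to the scene's, keeping indices aligned."""
    xyz = np.concatenate([scene_xyz, obj_xyz], axis=0)   # (N + M, 3) stitched geometry
    tex = torch.cat([scene_tex, obj_tex], dim=-1)        # descriptors; point dimension assumed last
    return xyz, tex

The key invariant is that point i in the stitched cloud must own descriptor i; artifacts after concatenation may indicate that the two orderings have drifted apart.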

Questions about remote rendering

Thanks so much for your work.
However, I get the error GLX: Failed to create context.
How can I perform rendering on a remote server without a UI?
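One workaround that often helps on display-less servers (a suggestion, not something confirmed by the authors) is to run under a virtual X server so GLFW can create a GL context; a sketch using xvfb-run from Python:

import subprocess

# launch the script under a virtual X display; '-a' picks a free display number
subprocess.run([
    'xvfb-run', '-a', '-s', '-screen 0 1600x1200x24',
    'python', 'viewer.py', '--config', 'downloads/person_1.yaml',
])

Note that CUDA-OpenGL interop may still fail under the virtual display's software GL, in which case the numpy output-buffer change discussed further down this page also applies.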

Something wrong with the docker image?

Hello, I tried to run docker/tools/build.sh, but I get the following error.
Can you help me?

+ docker build -t npbg -f ../Dockerfile ..
Sending build context to Docker daemon  15.36kB
Error response from daemon: stat /mnt/vdb/docker/tmp: no such file or directory
+ docker tag npbg root/npbg:latest
Error response from daemon: No such image: npbg:latest

Multi-GPU support for training?

I am trying to train a large-scale scene with a large number of points,
but I receive a CUDA out-of-memory error at the start of training.

Is there any way to train the model on Multi-GPUs?
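For what it's worth, a minimal sketch of the usual PyTorch approach, wrapping the rendering network in DataParallel (whether the repo's multigpu option already covers this path is not confirmed here):

import torch
import torch.nn as nn

def to_multi_gpu(net: nn.Module) -> nn.Module:
    # DataParallel replicates the module and splits each input batch across
    # all visible GPUs; it does not shard a single large input.
    if torch.cuda.device_count() > 1:
        net = nn.DataParallel(net)
    return net.cuda()

Note that this only divides the batch across devices; if a single crop already exhausts memory, reducing crop_size or batch_size in the training config is the more direct lever.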

[Question] What kind of information do the eight neural descriptors for each point represent?

Do you have any idea regarding what kind of information the neural descriptors at each point represent?

I've read the paper, but it was not entirely clear what information is learned in these descriptors.

Also, do you have any idea how lighting information (e.g., light source position, intensity, distribution) could be added as extra parameters?

Furthermore, to my understanding, your model could be used to render maps other than just RGB values; is that correct? For example, if someone had an irradiance map of the scene, then in principle they could render the corresponding irradiance values for unseen views, right?

How to fit scenes from other datasets

Hi guys,

I would be interested to know whether it is possible to fit different scenes from other existing datasets. For example, I would like to understand and get an idea of how to fit the Redwood dataset (http://redwood-data.org/). I have read the README section on fitting our own scenes, but it is not clear to me how to do the same with, e.g., the Redwood data.

Thus, would it be easy to give some feedback here on how to do that?

Thanks.

Generation

Hi, thanks for your work, which is definitely amazing. I want to know whether the model can be applied to another scene without fine-tuning.

Why can't I directly use the .ply files provided by the ScanNet dataset?

Hello, thank you for your excellent work and the open source code!
Following the guide in the README, I can successfully fit a new scene by building the reconstruction with Agisoft Metashape Pro and then fitting descriptors.
However, when I directly use the reconstructions provided in the ScanNet dataset (e.g., http://kaldir.vc.in.tum.de/scannet/v2/scans/scene0001_01/scene0000_00_vh_clean.ply), I find that I cannot fit descriptors correctly.
Are there differences between the point cloud built by Agisoft Metashape Pro and the point cloud provided by the ScanNet dataset? And how can I fit a scene from ScanNet, such as scene0000_00, with the point cloud the dataset provides?
Thank you for your reply!

The viewer's results were not as good as the training

Hi, thanks for the good work.
When I trained the scene, the PSNR reached 25, which was clear to see on TensorBoard. Why are the results in the viewer not as good as during training, with a PSNR below 20 (for the same pose and point cloud)? Does the viewer have any tricks for saving the image that would reduce the distortion?
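For a like-for-like check, PSNR can be computed directly on a frame saved from the viewer (the 's' key writes a screenshot and the matching pose) against the corresponding training image; a small sketch with hypothetical file names:

import cv2
import numpy as np

def psnr(img_a, img_b):
    # both images as uint8 arrays of identical shape
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

a = cv2.imread('screenshot.png')      # hypothetical paths
b = cv2.imread('ground_truth.png')
print(f'PSNR: {psnr(a, b):.2f} dB')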

How to render one of the fitted scenes on Ubuntu 22.04 LTS: A comprehensive guide.

I've managed to render several of the fitted scenes on Ubuntu 22.04 LTS, with an NVIDIA GeForce 940MX GPU (2 GB RAM). However, I faced several challenges while doing so. Therefore, I am going to document these challenges with their solutions here in case someone finds them helpful.

Note: this guide was written based on commit 5bc6f8d18e61978f167f7dbb21787771fbd59bf6.

Step 0

Before starting, make sure you have gcc and OpenGL tools installed. Installing these will save you from dealing with a lot of problems later. This can be done by running:

$ sudo apt update
$ sudo apt install build-essential # gcc
$ sudo apt-get install libglu1-mesa-dev freeglut3-dev mesa-common-dev libglfw3-dev # OpenGL

More details on how to install OpenGL can be found here.

Next, make sure you have an appropriate NVIDIA driver installed, and that you have CUDA installed; instructions for both are easy to find with a quick Google search.

Finally, make sure you have Anaconda installed.

Step 1

Clone the repository

git clone https://github.com/alievk/npbg.git

then run install_deps.sh

cd npbg
source scripts/install_deps.sh

Step 2

Download the fitted scenes and the rendering network weights as described here. Place the downloads folder in the root directory (where the README.md file is located).

Step 3

As described in the README.md file, try running the Person 1 fitted scene:

$ python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view

This will likely not work, and you will observe an AttributeError:

Traceback (most recent call last):
  File "viewer.py", line 9, in <module>
    from npbg.gl.render import OffscreenRender, create_shared_texture, cpy_tensor_to_buffer, cpy_tensor_to_texture
  File "/home/mhdadk/Documents/npbg2/npbg/gl/render.py", line 14, in <module>
    import torch
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/__init__.py", line 280, in <module>
    from .functional import *
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/functional.py", line 2, in <module>
    import torch.nn.functional as F
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/__init__.py", line 1, in <module>
    from .modules import *  # noqa: F401
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/__init__.py", line 2, in <module>
    from .linear import Identity, Linear, Bilinear
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 5, in <module>
    from .. import functional as F
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/functional.py", line 14, in <module>
    from .._jit_internal import boolean_dispatch, List
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/_jit_internal.py", line 595, in <module>
    import typing_extensions
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/typing_extensions-4.2.0-py3.6.egg/typing_extensions.py", line 159, in <module>
    class _FinalForm(typing._SpecialForm, _root=True):
AttributeError: module 'typing' has no attribute '_SpecialForm'

To fix this AttributeError, run

$ pip uninstall typing-extensions
$ pip install typing-extensions

Try to run the Person 1 fitted scene again:

$ python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view

However, this will still not work, and you will observe the following error:

Traceback (most recent call last):
  File "viewer.py", line 434, in <module>
    my_app = MyApp(args)
  File "viewer.py", line 103, in __init__
    _config = yaml.load(f)
TypeError: load() missing 1 required positional argument: 'Loader'

The solution to this error can be found here. To fix this error, run

$ pip install pyyaml==5.4.1
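Alternatively, instead of pinning pyyaml, the call in viewer.py can pass a loader explicitly, which is what newer PyYAML versions require:

import yaml

with open('downloads/person_1.yaml') as f:
    _config = yaml.load(f, Loader=yaml.FullLoader)   # or: yaml.safe_load(f)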

Again, try to run the Person 1 fitted scene:

$ python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view

Unfortunately, this will still throw an error:

loading pointcloud...
=== 3D model ===
VERTICES:  3072078
EXTENT:  [-16.5258255  -14.02860832 -30.76384926] [ 10.81384468   2.03398347 -20.07631493]
================
new viewport size  (2000, 1328)
libGL error: MESA-LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: iris
libGL error: MESA-LOADER: failed to open iris: /usr/lib/dri/iris_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: iris
libGL error: MESA-LOADER: failed to open swrast: /usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory (search paths /usr/lib/x86_64-linux-gnu/dri:\$${ORIGIN}/dri:/usr/lib/dri, suffix _dri)
libGL error: failed to load driver: swrast
[w] b'GLX: Failed to create context: GLXBadFBConfig'
[x] Window creation failed

The solution to this error can be found in this answer. To fix this error, run

$ export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6

One more time: try to run the Person 1 fitted scene:

$ python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view

Still not quite there yet. You will observe the following error:

loading pointcloud...
=== 3D model ===
VERTICES:  3072078
EXTENT:  [-16.5258255  -14.02860832 -30.76384926] [ 10.81384468   2.03398347 -20.07631493]
================
new viewport size  (2000, 1328)
[i] HiDPI detected, fixing window size
[w] Cannot read STENCIL size from the framebuffer
Unable to load numpy_formathandler accelerator from OpenGL_accelerate
[i] Using GLFW (GL 4.6)
Traceback (most recent call last):
  File "/home/mhdadk/Documents/npbg2/npbg/gl/render.py", line 29, in _init_buffers
    import pycuda.gl.autoinit  # this may fails in headless mode
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/pycuda-2021.1-py3.6-linux-x86_64.egg/pycuda/gl/autoinit.py", line 10, in <module>
    context = make_default_context(lambda dev: cudagl.make_context(dev))
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/pycuda-2021.1-py3.6-linux-x86_64.egg/pycuda/tools.py", line 228, in make_default_context
    "on any of the %d detected devices" % ndevices
RuntimeError: make_default_context() wasn't able to create a context on any of the 1 detected devices

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "viewer.py", line 434, in <module>
    my_app = MyApp(args)
  File "viewer.py", line 177, in __init__
    clear_color=args.clear_color)
  File "/home/mhdadk/Documents/npbg2/npbg/gl/render.py", line 19, in __init__
    self._init_buffers(viewport_size, out_buffer_location)
  File "/home/mhdadk/Documents/npbg2/npbg/gl/render.py", line 31, in _init_buffers
    raise RuntimeError('PyCUDA init failed, cannot use torch buffer')
RuntimeError: PyCUDA init failed, cannot use torch buffer

The solution to this RuntimeError can be found in issue #12. To fix this error, replace the code in viewer.py with the code given in the gist in this reply. In other words, here is the code to put into viewer.py:

import argparse
import threading
import yaml
import re

from glumpy import app, gloo, glm, gl, transforms
from glumpy.ext import glfw

from npbg.gl.render import OffscreenRender, create_shared_texture, cpy_tensor_to_buffer, cpy_tensor_to_texture
from npbg.gl.programs import NNScene
from npbg.gl.utils import load_scene_data, get_proj_matrix, crop_intrinsic_matrix, crop_proj_matrix, \
    setup_scene, rescale_K, FastRand, nearest_train, pca_color, extrinsics_from_view_matrix, extrinsics_from_xml
from npbg.gl.nn import OGL
from npbg.gl.camera import Trackball

import os, sys
import time
import numpy as np
import torch
import cv2


def get_args():
    parser = argparse.ArgumentParser(description='')
    parser.add_argument('-c', '--config', type=str, default=None, required=True, help='config path')
    parser.add_argument('--viewport', type=str, default='', help='width,height')
    parser.add_argument('--keep-fov', action='store_true', help='keep field of view when resizing viewport')
    parser.add_argument('--init-view', type=str, help='camera label for initial view or path to 4x4 matrix')
    parser.add_argument('--use-mesh', action='store_true')
    parser.add_argument('--use-texture', action='store_true')
    parser.add_argument('--rmode', choices=['trackball', 'fly'], default='trackball')
    parser.add_argument('--fps', action='store_true', help='show fps')
    parser.add_argument('--light-position', type=str, default='', help='x,y,z')
    parser.add_argument('--replay-camera', type=str, default='', help='path to view_matrix to replay at given fps')
    parser.add_argument('--replay-fps', type=float, default=25., help='view_matrix replay fps')
    parser.add_argument('--supersampling', type=int, default=1, choices=[1, 2])
    parser.add_argument('--clear-color', type=str)
    parser.add_argument('--nearest-train', action='store_true')
    parser.add_argument('--gt', help='like /path/to/images/*.JPG. * will be replaced with nearest camera label.')
    parser.add_argument('--pca', action='store_true')
    parser.add_argument('--origin-view', action='store_true')
    parser.add_argument('--temp-avg', action='store_true')
    parser.add_argument('--checkpoint')
    args = parser.parse_args()

    args.viewport = tuple([int(x) for x in args.viewport.split(',')]) if args.viewport else None
    args.light_position = [float(x) for x in args.light_position.split(',')] if args.light_position else None
    args.clear_color = [float(x) for x in args.clear_color.split(',')] if args.clear_color else None

    return args


def get_screen_program(texture):
    vertex = '''
    attribute vec2 position;
    attribute vec2 texcoord;
    varying vec2 v_texcoord;
    void main()
    {
        gl_Position = <transform>;
        v_texcoord = texcoord;
    } '''
    fragment = '''
    uniform sampler2D texture;
    varying vec2 v_texcoord;
    void main()
    {
        gl_FragColor = texture2D(texture, v_texcoord);
    } '''

    quad = gloo.Program(vertex, fragment, count=4)
    quad["transform"] = transforms.OrthographicProjection(transforms.Position("position"))
    quad['texcoord'] = [( 0, 0), ( 0, 1), ( 1, 0), ( 1, 1)]
    quad['texture'] = texture

    return quad


def start_fps_job():
    def job():
        print(f'FPS {app.clock.get_fps():.1f}')

    threading.Timer(1.0, job).start()


def load_camera_trajectory(path):
    if path[-3:] == 'xml':
        view_matrix, camera_labels = extrinsics_from_xml(path)
    else:
        view_matrix, camera_labels = extrinsics_from_view_matrix(path)
    return view_matrix


def fix_viewport_size(viewport_size, factor=16):
    viewport_w = factor * (viewport_size[0] // factor)
    viewport_h = factor * (viewport_size[1] // factor)
    return viewport_w, viewport_h


class MyApp():
    def __init__(self, args):
        with open(args.config) as f:
            _config = yaml.load(f)
            # support two types of configs
            # 1 type - config with scene data
            # 2 type - config with model checkpoints and path to scene data config
            if 'scene' in _config: # 1 type
                self.scene_data = load_scene_data(_config['scene'])
                net_ckpt = _config.get('net_ckpt')
                texture_ckpt = _config.get('texture_ckpt') 
            else:
                self.scene_data = load_scene_data(args.config)
                net_ckpt = self.scene_data['config'].get('net_ckpt')
                texture_ckpt = self.scene_data['config'].get('texture_ckpt')

        self.viewport_size = args.viewport if args.viewport else self.scene_data['config']['viewport_size']
        self.viewport_size = fix_viewport_size(self.viewport_size)
        print('new viewport size ', self.viewport_size)

        # crop/resize viewport
        if self.scene_data['intrinsic_matrix'] is not None:
            K_src = self.scene_data['intrinsic_matrix']
            old_size = self.scene_data['config']['viewport_size']
            sx = self.viewport_size[0] / old_size[0]
            sy = self.viewport_size[1] / old_size[1]
            K_crop = rescale_K(K_src, sx, sy, keep_fov=args.keep_fov)
            self.scene_data['proj_matrix'] = get_proj_matrix(K_crop, self.viewport_size)
        elif self.scene_data['proj_matrix'] is not None:
            new_proj_matrix = crop_proj_matrix(self.scene_data['proj_matrix'], *self.scene_data['config']['viewport_size'], *self.viewport_size)
            self.scene_data['proj_matrix'] = new_proj_matrix
        else:
            raise Exception('no intrinsics are provided')

        if args.init_view:
            if args.init_view in self.scene_data['view_matrix']:
                idx = self.scene_data['camera_labels'].index(args.init_view)
                init_view = self.scene_data['view_matrix'][idx]
            elif os.path.exists(args.init_view):
                init_view = np.loadtxt(args.init_view)
        else:
            init_view = self.scene_data['view_matrix'][0]

        if args.origin_view:
            top_view = np.eye(4)
            top_view[2, 3] = 20.
            init_view = top_view

            if np.allclose(self.scene_data['model3d_origin'], np.eye(4)):
                print('Setting origin as mass center')
                origin = np.eye(4)
                origin[:3, 3] = -np.percentile(self.scene_data['pointcloud']['xyz'], 90, 0)
                self.scene_data['model3d_origin'] = origin
        else:
            # force identity origin
            self.scene_data['model3d_origin'] = np.eye(4)

        self.trackball = Trackball(init_view, self.viewport_size, 1, rotation_mode=args.rmode)

        args.use_mesh = args.use_mesh or _config.get('use_mesh') or args.use_texture

        # this also creates GL context necessary for setting up shaders
        self.window = app.Window(width=self.viewport_size[0], height=self.viewport_size[1], visible=True, fullscreen=False)
        self.window.set_size(*self.viewport_size)

        if args.checkpoint:
            assert 'Texture' in args.checkpoint, 'Set path to descriptors checkpoint'
            ep = re.search('epoch_[0-9]+', args.checkpoint).group().split('_')[-1]
            net_name = f'UNet_stage_0_epoch_{ep}_net.pth'
            net_ckpt = os.path.join(*args.checkpoint.split('/')[:-1], net_name)
            texture_ckpt = args.checkpoint

        need_neural_render = net_ckpt is not None

        # self.out_buffer_location = 'torch' if need_neural_render else 'opengl'
        self.out_buffer_location = 'numpy' if need_neural_render else 'opengl'

        # setup screen image plane
        self.off_render = OffscreenRender(viewport_size=self.viewport_size, out_buffer_location=self.out_buffer_location,
                                          clear_color=args.clear_color)
        if self.out_buffer_location == 'torch':
            screen_tex, self.screen_tex_cuda = create_shared_texture(
                np.zeros((self.viewport_size[1], self.viewport_size[0], 4), np.float32)
            )
        else:
            screen_tex, self.screen_tex_cuda = self.off_render.color_buf, None

            gl_view = screen_tex.view(gloo.TextureFloat2D)
            gl_view.activate() # force gloo to create on GPU
            gl_view.deactivate()
            screen_tex = gl_view
        
        self.screen_program = get_screen_program(screen_tex)

        self.scene = NNScene()

        if need_neural_render:
            print(f'Net checkpoint: {net_ckpt}')
            print(f'Texture checkpoint: {texture_ckpt}')
            self.model = OGL(self.scene, self.scene_data, self.viewport_size, net_ckpt, 
                texture_ckpt, out_buffer_location=self.out_buffer_location, supersampling=args.supersampling, temporal_average=args.temp_avg)
        else:
            self.model = None

        if args.pca:
            assert texture_ckpt
            tex = torch.load(texture_ckpt, map_location='cpu')['state_dict']['texture_']
            print('PCA...')
            pca = pca_color(tex)
            pca = (pca - np.percentile(pca, 10)) / (np.percentile(pca, 90) - np.percentile(pca, 10))
            pca = np.clip(pca, 0, 1)
            self.scene_data['pointcloud']['rgb'] = np.clip(pca, 0, 1)

        setup_scene(self.scene, self.scene_data, args.use_mesh, args.use_texture)
        if args.light_position is not None:
            self.scene.set_light_position(args.light_position)

        if args.replay_camera:
            self.camera_trajectory = load_camera_trajectory(args.replay_camera)
        else:
            self.camera_trajectory = None

        self.window.attach(self.screen_program['transform'])
        self.window.push_handlers(on_init=self.on_init)
        self.window.push_handlers(on_close=self.on_close)
        self.window.push_handlers(on_draw=self.on_draw)
        self.window.push_handlers(on_resize=self.on_resize)
        self.window.push_handlers(on_key_press=self.on_key_press)
        self.window.push_handlers(on_mouse_press=self.on_mouse_press)
        self.window.push_handlers(on_mouse_drag=self.on_mouse_drag)
        self.window.push_handlers(on_mouse_release=self.on_mouse_release)
        self.window.push_handlers(on_mouse_scroll=self.on_mouse_scroll)

        self.mode0 = NNScene.MODE_COLOR
        self.mode1 = 0
        self.point_size = 1
        self.point_mode = False
        self.draw_points = not args.use_mesh
        self.flat_color = True
        self.neural_render = need_neural_render
        self.show_pca = False

        self.n_frame = 0
        self.t_elapsed = 0
        self.last_frame = None
        self.last_view_matrix = None
        self.last_gt_image = None

        self.mouse_pressed = False

        self.args = args

    def run(self):
        if self.args.fps:
            start_fps_job()

        app.run()

    def render_frame(self, view_matrix):
        self.scene.set_camera_view(view_matrix)

        if self.neural_render:
            frame = self.model.infer()['output'].flip([0])
        else:
            self.scene.set_mode(self.mode0, self.mode1)
            if self.point_mode == 0:
                self.scene.set_splat_mode(False)
                self.scene.program['splat_mode'] = int(0)
            elif self.point_mode == 1:
                self.scene.set_splat_mode(True)
                self.scene.program['splat_mode'] = int(0)
            elif self.point_mode == 2:
                self.scene.set_splat_mode(False)
                self.scene.program['splat_mode'] = int(1)
            if not self.scene.use_point_sizes:
                self.scene.set_point_size(self.point_size)
            self.scene.set_draw_points(self.draw_points)
            self.scene.set_flat_color(self.flat_color)
            frame = self.off_render.render(self.scene)

        return frame

    def print_info(self):
        print('-- start info')

        mode = [m[0] for m in NNScene.__dict__.items() if m[0].startswith('MODE_') and self.mode0 == m[1]][0]
        print(mode)

        n_mode = [m[0] for m in NNScene.__dict__.items() if m[0].startswith('NORMALS_MODE_') and self.mode1 == m[1]][0]
        print(n_mode)

        print(f'point size {self.point_size}')
        print(f'splat mode: {self.point_mode}')

        print('-- end info')

    def save_screen(self, out_dir='./data/screenshots'):
        os.makedirs(out_dir, exist_ok=True)

        get_name = lambda s: time.strftime(f"%m-%d_%H-%M-%S___{s}")
        
        # last_frame is a numpy array when out_buffer_location == 'numpy', a torch tensor otherwise
        img = self.last_frame
        if isinstance(img, torch.Tensor):
            img = img.cpu().numpy()
        img = img[..., :3][::-1, :, ::-1] * 255
        cv2.imwrite(os.path.join(out_dir, get_name('screenshot') + '.png'), img)
        
        np.savetxt(os.path.join(out_dir, get_name('pose') + '.txt'), self.last_view_matrix)

    def get_next_view_matrix(self, frame_num, elapsed_time):
        if self.camera_trajectory is None:
            return self.trackball.pose

        n = int(elapsed_time * self.args.replay_fps) % len(self.camera_trajectory)  # use self.args; a bare `args` only exists when run as a script
        return self.camera_trajectory[n]

    # ===== Window events =====

    def on_init(self):
        pass

    def on_key_press(self, symbol, modifiers):
        KEY_PLUS = 61
        if symbol == glfw.GLFW_KEY_X:
            self.mode0 = NNScene.MODE_XYZ
            self.neural_render = False
        elif symbol == glfw.GLFW_KEY_N:
            self.mode0 = NNScene.MODE_NORMALS
            self.neural_render = False
        elif symbol == glfw.GLFW_KEY_C:
            self.mode0 = NNScene.MODE_COLOR
            self.neural_render = False
        elif symbol == glfw.GLFW_KEY_U:
            self.mode0 = NNScene.MODE_UV
            self.neural_render = False
        elif symbol == glfw.GLFW_KEY_D:
            self.mode0 = NNScene.MODE_DEPTH
            self.neural_render = False
        elif symbol == glfw.GLFW_KEY_L:
            self.mode0 = NNScene.MODE_LABEL
            self.neural_render = False
        elif symbol == glfw.GLFW_KEY_Y:
            self.neural_render = True
            self.show_pca = False
        elif symbol == glfw.GLFW_KEY_T:
            self.neural_render = True
            self.show_pca = True
        elif symbol == glfw.GLFW_KEY_Z:
            self.mode1 = (self.mode1 + 1) % 5
        elif symbol == KEY_PLUS:
            self.point_size = self.point_size + 1
        elif symbol == glfw.GLFW_KEY_MINUS:
            self.point_size = max(0, self.point_size - 1)
        elif symbol == glfw.GLFW_KEY_P:
            self.point_mode = (self.point_mode + 1) % 3
        elif symbol == glfw.GLFW_KEY_Q:
            self.draw_points = not self.draw_points
        elif symbol == glfw.GLFW_KEY_F:
            self.flat_color = not self.flat_color
        elif symbol == glfw.GLFW_KEY_I:
            self.print_info()
        elif symbol == glfw.GLFW_KEY_S:
            self.save_screen()
        else:
            print(symbol, modifiers)

    def on_draw(self, dt):
        self.last_view_matrix = self.get_next_view_matrix(self.n_frame, self.t_elapsed)

        self.last_frame = self.render_frame(self.last_view_matrix)

        if self.out_buffer_location == 'torch':
            cpy_tensor_to_texture(self.last_frame, self.screen_tex_cuda)
        elif self.out_buffer_location == 'numpy':
            if isinstance(self.last_frame, torch.Tensor):
                rendered = self.last_frame.cpu().numpy()
            elif isinstance(self.last_frame, np.ndarray):
                rendered = self.last_frame
            
            if rendered.shape[2] == 3:
                rendered = np.concatenate([rendered, np.ones_like(rendered[..., [0]])], axis=2)
            self.screen_program['texture'] = rendered

        self.window.clear()

        gl.glDisable(gl.GL_CULL_FACE)

        # ensure viewport size is correct (offline renderer could change it)
        gl.glViewport(0, 0, self.viewport_size[0], self.viewport_size[1])

        self.screen_program.draw(gl.GL_TRIANGLE_STRIP)

        self.n_frame += 1
        self.t_elapsed += dt

        if self.args.nearest_train:
            ni = nearest_train(self.scene_data['view_matrix'], np.linalg.inv(self.scene_data['model3d_origin']) @ self.last_view_matrix)
            label = self.scene_data['camera_labels'][ni]
            assert self.args.gt, 'you must define path to gt images'
            path = self.args.gt.replace('*', str(label))
            if not os.path.exists(path):
                print(f'{path} NOT FOUND!')
            elif self.last_gt_image != path:
                self.last_gt_image = path
                img = cv2.imread(path)
                max_side = max(img.shape[:2])
                s = 1024 / max_side
                img = cv2.resize(img, None, None, s, s)
                cv2.imshow('nearest train', img)
            cv2.waitKey(1)

    def on_resize(self, w, h):
        print(f'on_resize {w}x{h}')
        self.trackball.resize((w, h))
        self.screen_program['position'] = [(0, 0), (0, h), (w, 0), (w, h)]

    def on_close(self):
        pass

    def on_mouse_press(self, x, y, buttons, modifiers):
        # print(buttons, modifiers)
        self.trackball.set_state(Trackball.STATE_ROTATE)
        if (buttons == app.window.mouse.LEFT):
            ctrl = (modifiers & app.window.key.MOD_CTRL)
            shift = (modifiers & app.window.key.MOD_SHIFT)
            if (ctrl and shift):
                self.trackball.set_state(Trackball.STATE_ZOOM)
            elif ctrl:
                self.trackball.set_state(Trackball.STATE_ROLL)
            elif shift:
                self.trackball.set_state(Trackball.STATE_PAN)
        elif (buttons == app.window.mouse.MIDDLE):
            self.trackball.set_state(Trackball.STATE_PAN)
        elif (buttons == app.window.mouse.RIGHT):
            self.trackball.set_state(Trackball.STATE_ZOOM)

        self.trackball.down(np.array([x, y]))

        # Stop animating while using the mouse
        self.mouse_pressed = True

    def on_mouse_drag(self, x, y, dx, dy, buttons):
        self.trackball.drag(np.array([x, y]))

    def on_mouse_release(self, x, y, button, modifiers):
        self.mouse_pressed = False

    def on_mouse_scroll(self, x, y, dx, dy):
        self.trackball.scroll(dy)


if __name__ == '__main__':
    args = get_args()

    my_app = MyApp(args)
    my_app.run()

FINALLY, try to run the Person 1 fitted scene again:

$ python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view

Depending on how much RAM your GPU has, you may see a CUDA out of memory error:

loading pointcloud...
=== 3D model ===
VERTICES:  3072078
EXTENT:  [-16.5258255  -14.02860832 -30.76384926] [ 10.81384468   2.03398347 -20.07631493]
================
new viewport size  (2000, 1328)
[i] HiDPI detected, fixing window size
[w] Cannot read STENCIL size from the framebuffer
Unable to load numpy_formathandler accelerator from OpenGL_accelerate
[i] Using GLFW (GL 4.6)
Net checkpoint: downloads/scenes/person_1/02-20_08-42-55/UNet_stage_0_epoch_39_net.pth
Texture checkpoint: downloads/scenes/person_1/02-20_08-42-55/PointTexture_stage_0_epoch_39.pth
SUPERSAMPLING: 2
[i] Running at 60 frames/second
on_resize 1920x1009
Traceback (most recent call last):
  File "viewer.py", line 452, in <module>
    my_app.run()
  File "viewer.py", line 256, in run
    app.run()
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/glumpy/app/__init__.py", line 317, in run
    clock = __init__(clock=clock, framerate=framerate, backend=__backend__)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/glumpy/app/__init__.py", line 277, in __init__
    window.dispatch_event('on_resize', window._width, window._height)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/glumpy/app/window/event.py", line 396, in dispatch_event
    if getattr(self, event_type)(*args):
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/glumpy/app/window/window.py", line 221, in on_resize
    self.dispatch_event('on_draw', 0.0)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/glumpy/app/window/event.py", line 386, in dispatch_event
    if handler(*args):
  File "viewer.py", line 366, in on_draw
    self.last_frame = self.render_frame(self.last_view_matrix)
  File "viewer.py", line 262, in render_frame
    frame = self.model.infer()['output'].flip([0])
  File "/home/mhdadk/Documents/npbg2/npbg/gl/nn.py", line 121, in infer
    out, net_input = self.model(input_dict, return_input=True)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mhdadk/Documents/npbg2/npbg/models/compose.py", line 194, in forward
    out1 = self.net(*input_multiscale, **kwargs)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mhdadk/Documents/npbg2/npbg/models/unet.py", line 271, in forward
    up2 = self.up2(up3, down1)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mhdadk/Documents/npbg2/npbg/models/unet.py", line 130, in forward
    output= self.conv(torch.cat([in1_up, inputs2_], 1))
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mhdadk/Documents/npbg2/npbg/models/unet.py", line 77, in forward
    features = self.block.act_f(self.block.conv_f(x))
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 329, in forward
    return F.elu(input, self.alpha, self.inplace)
  File "/home/mhdadk/miniconda3/envs/npbg/lib/python3.6/site-packages/torch/nn/functional.py", line 992, in elu
    result = torch._C._nn.elu(input, alpha)
RuntimeError: CUDA out of memory. Tried to allocate 82.00 MiB (GPU 0; 1.96 GiB total capacity; 1.36 GiB already allocated; 71.62 MiB free; 98.74 MiB cached)

In this case, change the viewport parameter from 2000,1328 to something smaller, like 500,500:

$ python viewer.py --config downloads/person_1.yaml --viewport 500,500 --origin-view

This should now work. In case you see the message:

"viewer.py" is not responding.

This just means that it is taking some time to load the render. Click on "Wait" and the render should load soon.

Rotate novel view

Hi, thanks for sharing the good work.
It's difficult to rotate the view in the viewer; the image blurs when the rotation is too large.
I wonder if there is a view_matrix that can rotate 360 degrees around a given position, or is there another way to customize the view trajectory? (A different trajectory from the training one.)
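The viewer's --replay-camera option loads a trajectory file (see load_camera_trajectory in the viewer gist above, which reads stacked view matrices from non-XML files). A sketch of generating a 360-degree orbit, assuming the expected format is 4x4 world-to-camera matrices stacked row-wise in a text file:

import numpy as np

def orbit_view_matrices(center, radius, height=0.0, n=120):
    """World-to-camera (view) matrices orbiting `center` in the XZ plane."""
    mats = []
    for t in np.linspace(0.0, 2.0 * np.pi, n, endpoint=False):
        eye = center + np.array([radius * np.cos(t), height, radius * np.sin(t)])
        fwd = (center - eye) / np.linalg.norm(center - eye)   # camera looks at center
        right = np.cross(fwd, np.array([0.0, 1.0, 0.0]))
        right /= np.linalg.norm(right)
        up = np.cross(right, fwd)
        c2w = np.eye(4)
        c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = right, up, -fwd, eye
        mats.append(np.linalg.inv(c2w))   # view matrix = inverse of camera-to-world
    return mats

np.savetxt('orbit_view_matrix.txt', np.concatenate(orbit_view_matrices(np.zeros(3), 5.0)))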

Render time

Hello,

you mentioned that you can render a full HD image in 60 ms, which is impressive. Do you just render with PyTorch or do you use something like TensorRT for inference?

Thanks :)

Eval mode=True during training

Hi, thank you for the great work! I have a question about the code: during training, you set eval mode to true. I'd be grateful if you could help me understand why that is, and whether inference results would differ if we set the model to train mode instead.

Thanks in advance
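For context, a general PyTorch fact rather than a statement of the authors' motivation: .eval() only changes layers whose behavior differs between modes, such as batch normalization and dropout, and it does not stop gradients:

import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.Dropout(0.5))

net.train()  # BatchNorm normalizes with per-batch statistics; Dropout zeroes activations
net.eval()   # BatchNorm uses its running statistics; Dropout becomes a no-op

So descriptors and network weights can still be optimized in eval mode while the normalization statistics stay fixed.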

The result doesn't show any color

Hi,
I rendered a model trained for 20 epochs using viewer.py, but there is an issue:
as shown in the images below, the result has no color.
I think something went wrong in the descriptor-fitting process.
Is there any solution for this?
Thanks.

[Figures 1-3: screenshots of the colorless render]

Problem with big dataset?

Hi, with your help I somehow managed to solve several problems, including the PyTorch RTX 30-series compatibility problem.
I'm trying to apply npbg to a 3D point cloud of our school building.
The dataset consists of 708 photos, and I successfully built the required files with Metashape.
However, when I try to run the following command,
python train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names <scene_name>

I get the error below.

multiprocessing.pool.MaybeEncodingError: Error sending result (entire message is at the bottom): [(<npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b1358>, <npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b12b0>)]'. Reason: 'error("'i' format requires -2147483648 <= number <= 2147483647",)'

Here is the dataset built with Metashape. Give it a try; it won't take that long.
https://drive.google.com/file/d/1hkX5EBLKmGodBqqdkexlFf-Qxv0M-BPS/view?usp=sharing

When I google the message, it appears to be related to a known problem in Python 3.5-3.6 with sending and receiving large data between processes. Do you have any solution to this?
Thanks

Entire printed message of above command :

(npbg) alex@alex-System-Product-Name:~/Downloads/npbg-master$ python train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names scene
experiment dir: data/logs/01-02_01-55-03___dataset_names^scene
 - ARGV:  
 train.py --config configs/train_example.yaml --pipeline npbg.pipelines.ogl.TexturePipeline --dataset_names scene 

                                     Unchanged args

                               batch_size  :  8
                           batch_size_val  :  8
                                  comment  :  <empty>
                               conv_block  :  gated
                           criterion_args  :  {'partialconv': False}
                         criterion_module  :  npbg.criterions.vgg_loss.VGGLoss
                                crop_size  :  (512, 512)
                       dataloader_workers  :  4
                          descriptor_size  :  8
                                   epochs  :  40
                                     eval  :  False
                             eval_in_test  :  True
                            eval_in_train  :  True
                      eval_in_train_epoch  :  -1
                         exclude_datasets  :  None
                               freeze_net  :  False
                      ignore_changed_args  :  ['ignore_changed_args', 'save_dir', 'dataloader_workers', 'epochs', 'max_ds', 'batch_size_val', 'config', 'pipeline']
                                inference  :  False
                           input_channels  :  None
                             input_format  :  uv_1d_p1, uv_1d_p1_ds1, uv_1d_p1_ds2, uv_1d_p1_ds3, uv_1d_p1_ds4
                                 log_freq  :  5
                          log_freq_images  :  100
                                       lr  :  0.0001
                                   max_ds  :  4
                               merge_loss  :  True
                                 multigpu  :  True
                                 n_points  :  0
                                 net_ckpt  :  downloads/weights/01-09_07-29-34___scannet/UNet_stage_0_epoch_39_net.pth
                                 net_size  :  4
                               num_mipmap  :  5
                               paths_file  :  configs/paths_example.yaml
                               reg_weight  :  0.0
                                 save_dir  :  data/logs
                                save_freq  :  1
                                     seed  :  2019
                              simple_name  :  True
                            splitter_args  :  {'train_ratio': 0.9}
                          splitter_module  :  npbg.datasets.splitter.split_by_ratio
                            supersampling  :  1
                       texture_activation  :  none
                             texture_ckpt  :  None
                               texture_lr  :  0.1
                             texture_size  :  None
                       train_dataset_args  :  {'keep_fov': False, 'random_zoom': [0.5, 2.0], 'random_shift': [-1.0, 1.0], 'drop_points': 0.0, 'num_samples': 2000}
                                 use_mask  :  None
                                 use_mesh  :  False
                         val_dataset_args  :  {'keep_fov': False, 'drop_points': 0.0}

                                      Changed args

                                   config  :  configs/train_example.yaml (default None)
                            dataset_names  :  ['scene'] (default [])
                                 pipeline  :  npbg.pipelines.ogl.TexturePipeline (default None)

loading pointcloud...
=== 3D model ===
VERTICES:  129908024
EXTENT:  [-77.48674774 -54.43938446 -22.40940666] [127.75872803 147.50030518  17.70900345]
================
gl_frame False
image_size (512, 512)
gl_frame False
image_size (512, 512)
Traceback (most recent call last):
  File "train.py", line 477, in <module>
    pipeline.create(args)
  File "/home/alex/Downloads/npbg-master/npbg/pipelines/ogl.py", line 90, in create
    self.ds_train, self.ds_val = get_datasets(args)
  File "/home/alex/Downloads/npbg-master/npbg/datasets/dynamic.py", line 332, in get_datasets
    for ds_train, ds_val in pool_out.get():
  File "/home/alex/anaconda3/envs/npbg/lib/python3.6/multiprocessing/pool.py", line 644, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[(<npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b1358>, <npbg.datasets.dynamic.DynamicDataset object at 0x7f0b817b12b0>)]'. Reason: 'error("'i' format requires -2147483648 <= number <= 2147483647",)'

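For reference, this error is the signed 32-bit length overflow ('i' format) that occurs on Python < 3.8 when a multiprocessing result pickles to more than 2 GiB; here the DynamicDataset objects returned through the pool in npbg/datasets/dynamic.py carry the ~130M-point scene. A hedged sketch of one workaround, bypassing the pool so the objects are never pickled (the helper name is hypothetical):

# in get_datasets() (npbg/datasets/dynamic.py), replace the Pool fan-out with a
# sequential loop so the large dataset objects stay in the main process:
datasets = []
for name in args.dataset_names:
    # _build_scene_datasets stands in for whatever the pool worker currently runs
    ds_train, ds_val = _build_scene_datasets(name, args)
    datasets.append((ds_train, ds_val))

Upgrading to Python 3.8+, where multiprocessing can transfer objects larger than 2 GiB, may also resolve this.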

How to set num_sample

Hi, thanks for sharing the good work. I would like to know how to set the value of num_samples in train_example.yaml, and how it is implemented in the code.

How to train from scratch or fine-tune with multiply scenes datasets?

I want to train the model from scratch with my own dataset, and I also want to fine-tune the model with two different scene datasets, as you did in the paper for scene editing. Could you tell me how to modify train.py, or release the code for training from scratch? Looking forward to your reply. Thank you.

Feasibility of alternative cost functions?

First of all this work is impressive. Thanks for sharing the codes with the wider community.

My question pertains mainly to the cost function used to converge the model. While the perceptual cost function alone has proven effective in achieving good results (both in the paper and in my own validation experiments), I wonder whether the feasibility of alternative cost functions, such as L1, has been investigated as well? Or perhaps some combination scheme of multiple cost functions? What is the core motivation behind training with the VGG cost alone?

Apologies if this has been answered elsewhere, but I found no pertinent discussion in the paper or anywhere in the existing issues.
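On the combination scheme the question raises, a minimal sketch, assuming vgg_loss is an instance of the repo's perceptual criterion (npbg.criterions.vgg_loss.VGGLoss, per the config dump elsewhere on this page) callable on prediction/target pairs, and the weight a free hyperparameter:

import torch.nn.functional as F

def combined_loss(pred, target, vgg_loss, l1_weight=0.1):
    # perceptual (VGG) term plus a weighted pixel-wise L1 term
    return vgg_loss(pred, target) + l1_weight * F.l1_loss(pred, target)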

How to fit on a ScanNet scene?

Each ScanNet scene contains RGB-D images, so I can project the 2D pixels to a 3D point cloud and save it to a .ply file.
But how do I modify paths_example.yaml and train_example.yaml to fit the descriptors on this .ply file?

Any guidelines or suggestions?
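A minimal sketch of the back-projection step mentioned above, under the pinhole model (depth in meters, K the 3x3 intrinsic matrix, c2w the camera-to-world pose):

import numpy as np

def backproject(depth, K, c2w):
    """Lift an (H, W) depth map to world-space points."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) / K[0, 0] * z
    y = (v.reshape(-1) - K[1, 2]) / K[1, 1] * z
    pts = np.stack([x, y, z, np.ones_like(z)])   # (4, H*W), camera space
    world = (c2w @ pts)[:3].T                    # (H*W, 3), world space
    return world[z > 0]                          # drop pixels with no depth

For the configs, the README's section on fitting your own scenes is the closest reference; presumably the main change is pointing the scene's point-cloud path at the saved .ply.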

GPU required for training?

I am trying to train a new scene and am running into memory issues during training.

I have tried a single Titan RTX (24 GB) card and a multi-GPU setup (4 x Tesla T4, 16 GB each).
With both, I receive a CUDA out-of-memory error in the first epoch of training.

I was wondering which GPU setups you used for training, as I would assume the above should be sufficiently large to train the model.

Something wrong with pycuda while running viewer.py

Hello! Thanks for sharing the code! I just wanted to test the code by viewing some results, but I encountered a problem with the following line.

import pycuda.gl.autoinit # this may fails in headless mode

And the error message is as follows:

loading pointcloud...
=== 3D model ===
VERTICES:  3072078
EXTENT:  [-16.5258255  -14.02860832 -30.76384926] [ 10.81384468   2.03398347 -20.07631493]
================
new viewport size  (1024, 1024)
[i] HiDPI detected, fixing window size
[w] Cannot read STENCIL size from the framebuffer
Unable to load numpy_formathandler accelerator from OpenGL_accelerate
[i] Using GLFW (GL 3.1)
Traceback (most recent call last):
  File "/mnt/sda/yuanyujie/sig21/code/npbg/npbg/gl/render.py", line 29, in _init_buffers
    import pycuda.gl.autoinit  # this may fails in headless mode
  File "/mnt/sda/yuanyujie/anaconda3/envs/npbg/lib/python3.6/site-packages/pycuda-2020.1-py3.6-linux-x86_64.egg/pycuda/gl/autoinit.py", line 10, in <module>
    context = make_default_context(lambda dev: cudagl.make_context(dev))
  File "/mnt/sda/yuanyujie/anaconda3/envs/npbg/lib/python3.6/site-packages/pycuda-2020.1-py3.6-linux-x86_64.egg/pycuda/tools.py", line 229, in make_default_context
    "on any of the %d detected devices" % ndevices
RuntimeError: make_default_context() wasn't able to create a context on any of the 3 detected devices

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "viewer.py", line 435, in <module>
    my_app = MyApp(args)
  File "viewer.py", line 178, in __init__
    clear_color=args.clear_color)
  File "/mnt/sda/yuanyujie/sig21/code/npbg/npbg/gl/render.py", line 19, in __init__
    self._init_buffers(viewport_size, out_buffer_location)
  File "/mnt/sda/yuanyujie/sig21/code/npbg/npbg/gl/render.py", line 31, in _init_buffers
    raise RuntimeError('PyCUDA init failed, cannot use torch buffer')
RuntimeError: PyCUDA init failed, cannot use torch buffer

I tested the code on RTX 2080Ti with CUDA 10.0.130.
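This is the same PyCUDA/OpenGL-interop failure addressed in the Ubuntu guide above; the workaround used there was to route frames through a numpy buffer instead of a shared CUDA texture, i.e. this one-line change in MyApp.__init__ (copied from the gist above):

# before:
# self.out_buffer_location = 'torch' if need_neural_render else 'opengl'
# after:
self.out_buffer_location = 'numpy' if need_neural_render else 'opengl'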

Rendering problem

Hi,

Thanks for your great work! I tried training the network from scratch, and despite my best efforts a bug persists on my side. In render.py (npbg/gl/render.py), lines 53 and 83 (self.fbo.activate() and self.fbo.deactivate(), respectively) seem to create a problem. I get the following error:

File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/render.py", line 54, in render
self.fbo.activate()
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/glumpy/gloo/globject.py", line 95, in activate
self._activate()
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/glumpy/gloo/framebuffer.py", line 375, in _activate
gl.glBindFramebuffer(gl.GL_FRAMEBUFFER, self._handle)
File "src/errorchecker.pyx", line 53, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
OpenGL.error.GLError: GLError(
err = 1282,
description = b'invalid operation',
baseOperation = glBindFramebuffer,
cArguments = (GL_FRAMEBUFFER, 1)
)
deleting buffers...
Exception ignored in: <bound method NNScene.del of <npbg.gl.programs.NNScene object at 0x7f19b91bac18>>
Traceback (most recent call last):
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 287, in del
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 292, in delete
File "src/latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.call
File "src/wrapper.pyx", line 311, in OpenGL_accelerate.wrapper.Wrapper.call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 401, in call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 381, in load
ImportError: sys.meta_path is None, Python is likely shutting down
deleting buffers...
Exception ignored in: <bound method NNScene.del of <npbg.gl.programs.NNScene object at 0x7f19b922c908>>
Traceback (most recent call last):
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 287, in del
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 292, in delete
File "src/latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.call
File "src/wrapper.pyx", line 311, in OpenGL_accelerate.wrapper.Wrapper.call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 401, in call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 381, in load
ImportError: sys.meta_path is None, Python is likely shutting down
deleting buffers...
Exception ignored in: <bound method NNScene.del of <npbg.gl.programs.NNScene object at 0x7f19b922c048>>
Traceback (most recent call last):
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 287, in del
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 292, in delete
File "src/latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.call
File "src/wrapper.pyx", line 311, in OpenGL_accelerate.wrapper.Wrapper.call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 401, in call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 381, in load
ImportError: sys.meta_path is None, Python is likely shutting down
deleting buffers...
Exception ignored in: <bound method NNScene.del of <npbg.gl.programs.NNScene object at 0x7f19b91d4f28>>
Traceback (most recent call last):
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 287, in del
File "/media/shubhendujena/3E08E5152BF3ABC4/nvs_experiments/npbg_try/npbg/gl/programs.py", line 292, in delete
File "src/latebind.pyx", line 32, in OpenGL_accelerate.latebind.LateBind.call
File "src/wrapper.pyx", line 311, in OpenGL_accelerate.wrapper.Wrapper.call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 401, in call
File "/home/shubhendujena/anaconda3/envs/npbg/lib/python3.6/site-packages/OpenGL/platform/baseplatform.py", line 381, in load
ImportError: sys.meta_path is None, Python is likely shutting down

The error goes away if I comment out these lines, but then a bunch of windows open every few steps. I'd be grateful if you could give me some tips on how to resolve this.

Thanks in advance

How to generate data on a server?

I am trying to generate training data on a server, but it does not work.
I ran the command:

python generate_dataset.py --config downloads/person_1.yaml

and got the following log:

loading pointcloud...
=== 3D model ===
VERTICES:  3072078
EXTENT:  [-16.5258255  -14.02860832 -30.76384926] [ 10.81384468   2.03398347 -20.07631493]
================
proj_matrix was not set
viewport size  (4000, 2656)
[w] b'The GLFW library is not initialized'
(the line above repeats 22 times in total)
[x] Window creation failed
deleting buffers...

Do you have any code to generate data on servers?
Thank you!

Scene.program is None

Hi,
I am trying to fit a scene and I have a problem with the dataloader. At the second epoch, even though the dataset is loaded, scene.program seems to be None. Do you have an idea of where the problem could be?

Here is the error message:

EPOCH 1

TRAIN
EVAL MODE IN TRAIN
model parameters: 1928771
running on datasets [0]
proj_matrix was not set
total parameters: 76715531
Traceback (most recent call last):
  File "train.py", line 517, in <module>
    train_loss = run_train(epoch, pipeline, args, iter_cb)
  File "train.py", line 253, in run_train
    return run_epoch(pipeline, 'train', epoch, args, iter_cb=iter_cb)
  File "train.py", line 228, in run_epoch
    run_sub(dl, extra_optimizer)
  File "train.py", line 118, in run_sub
    for it, data in enumerate(dl):
  File "C:\Users\user\.conda\envs\npbg\lib\site-packages\torch\utils\data\dataloader.py", line 517, in __next__
    data = self._next_data()
  File "C:\Users\user\.conda\envs\npbg\lib\site-packages\torch\utils\data\dataloader.py", line 557, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "C:\Users\user\.conda\envs\npbg\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\user\.conda\envs\npbg\lib\site-packages\torch\utils\data\_utils\fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "C:\Users\user\.conda\envs\npbg\lib\site-packages\torch\utils\data\dataset.py", line 219, in __getitem__
    return self.datasets[dataset_idx][sample_idx]
  File "C:\Users\user\Documents\npbg\npbg\datasets\dynamic.py", line 246, in __getitem__
    input_ = self.renderer.render(view_matrix=view_matrix, proj_matrix=proj_matrix)
  File "C:\Users\user\Documents\npbg\npbg\datasets\dynamic.py", line 68, in render
    self.scene.set_camera_view(view_matrix)
  File "C:\Users\user\Documents\npbg\npbg\gl\programs.py", line 366, in set_camera_view
    self.program['m_view'] = inv(m).T

problem running the example

Hi,

I am trying to run the example code from the readme file but I am getting the following error:

$ python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view
loading pointcloud...
Traceback (most recent call last):
  File "viewer.py", line 434, in <module>
    my_app = MyApp(args)
  File "viewer.py", line 112, in __init__
    self.scene_data = load_scene_data(args.config)
  File "/home/ttsesm/Development/npbg/npbg/gl/utils.py", line 263, in load_scene_data
    pointcloud = import_model3d(fix_relative_path(config['pointcloud'], path))
  File "/home/ttsesm/Development/npbg/npbg/gl/utils.py", line 431, in import_model3d
    model['rgb'] = data.colors[0][:, :3] / 255.
IndexError: too many indices for array

Any idea what could be the cause?
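The traceback shows import_model3d failing on data.colors, i.e. the loaded model appears to carry no per-vertex colors. A quick way to check what the .ply actually stores, using the plyfile package (the path is hypothetical):

from plyfile import PlyData

ply = PlyData.read('pointcloud.ply')   # hypothetical path to the scene's point cloud
print(ply['vertex'].properties)        # look for red/green/blue among the properties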

Regression Loss always at 0

Hi,
I am trying to fit the scene "scene0000_00" from the ScanNet dataset, but the regression loss is always 0. Do you have an idea of where the problem could be?

unable to load numpy_formathandler accelerator from OpenGL_accelerate

Hi, I am currently testing the code with the sample data, but there seems to be an error loading the numpy_formathandler accelerator.
After calling python viewer.py --config downloads/person_1.yaml --viewport 2000,1328 --origin-view, the viewer.py window freezes with the above error.
I can't figure out what's wrong here. Any idea?
Thanks!


An error occurred while installing Metashape

On Ubuntu 18.04:

(npbg) vig-titan2@vigtitan2-System-Product-Name:~/PycharmProjects/npbg/metashape-pro$ LD_LIBRARY_PATH="python/lib:$LD_LIBRARY_PATH" ./python/bin/python3.5 -m pip install pillow
Collecting pillow
Using cached https://files.pythonhosted.org/packages/60/f0/dd2eb7911f948bf529f58f0c7931f6f6466f711bd6f1d81a69dc4edd4e2a/Pillow-8.1.2.tar.gz
Complete output from command python setup.py egg_info:
/home/vig-titan2/PycharmProjects/npbg/metashape-pro/python/bin/python3.5: error while loading shared libraries: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory

----------------------------------------

Command "python setup.py egg_info" failed with error code 127 in /tmp/pip-build-2xuxzoaq/pillow/

It seems that LD_LIBRARY_PATH is not set properly. Can you help me?

Points descriptors

Hello,
I would like to understand more about your descriptors. If you train your network on 100 ScanNet scenes, you would then have a set of 100 descriptor sets (one per scene), and each set would contain N vectors (for the N points in that scene's point cloud). I wonder whether my understanding is correct.

Also, I have another question about the two-stage learning. In the pretraining stage, you use a set of scenes (set A) to train both the descriptors and the rendering network. Then, in the fine-tuning stage, you zero out the descriptors and fine-tune the rendering network to fit a new set of scenes (set B), right? My question is: after the fine-tuning stage, the network can obviously render novel views in set B, but can it still generalize to set A?

About inference

Hello, thank you for your excellent work and the open-source code!
I have fine-tuned your pre-trained model; however, I cannot figure out how to run inference.
I saw inference-related settings in your config files, but I can't use them correctly.
Thank you for your reply!

question about camera.xml

In my custom dataset, the camera matrices provided are camera-to-world. Should I use np.linalg.inv to convert them to world-to-camera? In other words, is the view_matrix output by load_scene_data in w2c, OpenGL-style format?

Thank you very much.
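As a worked example of the conversion itself (OpenGL view matrices are world-to-camera, with the camera looking down -Z), assuming a hypothetical pose file holding a 4x4 camera-to-world matrix:

import numpy as np

c2w = np.loadtxt('pose.txt').reshape(4, 4)   # hypothetical camera-to-world pose
view_matrix = np.linalg.inv(c2w)             # world-to-camera (view) matrix

# sanity check: the two matrices are inverses of each other
assert np.allclose(view_matrix @ c2w, np.eye(4), atol=1e-6)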

Viewer renders only a triangular half of the texture

Because I have an AMD GPU rather than an NVIDIA, CUDA-supporting one, I've replaced pytorch with pytorch-directml and offloaded unsupported function calls to the CPU while doing most of the work on the GPU. However, despite having modified no viewer code, it renders only half of the image, in a skewed triangle transposed across the diagonal.

Does anyone know why this is?

NOTE: I used the numpy-based viewer pasted here and my own fork here.

