
da-rnn's Introduction

DA-RNN: Semantic Mapping with Data Associated Recurrent Neural Networks

Created by Yu Xiang and Tanner Schmidt at the RSE-Lab, University of Washington.

Introduction

We introduce Data Associated Recurrent Neural Networks (DA-RNNs), a novel framework for joint 3D scene mapping and semantic labeling. DA-RNNs use a new recurrent neural network architecture for semantic labeling on RGB-D videos. The output of the network is integrated with mapping techniques such as KinectFusion in order to inject semantic information into the reconstructed 3D scene. arXiv, Video

DA-RNN

License

DA-RNN is released under the MIT License (refer to the LICENSE file for details).

Citation

If you find DA-RNN useful in your research, please consider citing:

@inproceedings{xiang2017darnn,
    Author = {Yu Xiang and Dieter Fox},
    Title = {DA-RNN: Semantic Mapping with Data Associated Recurrent Neural Networks},
    Booktitle = {Robotics: Science and Systems (RSS)},
    Year = {2017}
}

Installation

DA-RNN consists of a recurrent neural network for semantic labeling on RGB-D videos and the KinectFusion module for 3D reconstruction. The RNN and KinectFusion communicate via a Python interface (sketched below).
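For orientation, the sketch below shows how the Python side reaches KinectFusion once everything is built. This is a minimal sketch, not the project's documented API: the module, class, and session options are taken from the test-script excerpts and tracebacks quoted in the issues further down this page.

    # A minimal sketch (not the official API): constructing the KinectFusion
    # wrapper and a TensorFlow session the way the quoted test code does.
    import tensorflow as tf
    from kinect_fusion import kfusion

    # KinectFusion is built from the camera rig description passed to the
    # experiment scripts (e.g. --rig data/RGBDScene/camera.json).
    KF = kfusion.PyKinectFusion('data/RGBDScene/camera.json')

    # When KinectFusion runs alongside the network, the quoted test code caps
    # TensorFlow's GPU memory so the fusion module has room for its own
    # CUDA allocations.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
    sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                            gpu_options=gpu_options))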

  1. Install TensorFlow. We suggest using the Virtualenv installation.

  2. Compile the new layers introduced in DA-RNN under $ROOT/lib.

    cd $ROOT/lib
    sh make.sh
  3. Compile KinectFusion with cmake. Unfortunately, this step requires some effort.

    First install the dependencies of KinectFusion (e.g. Eigen, Sophus, nanoflann, and Pangolin; see the version details discussed in the issues below), then build:

    cd $ROOT/lib/kinect_fusion
    mkdir build
    cd build
    cmake ..
    make
  4. Compile the Cython interface between the RNN and KinectFusion

    cd $ROOT/lib
    python setup.py build_ext --inplace
  5. Add the KinectFusion library path

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOT/lib/kinect_fusion/build
  6. Download the VGG16 weights from here (57M). Put the weight file vgg16_convs.npy in $ROOT/data/imagenet_models.
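The VGG16 weights come as a NumPy pickle. A quick way to sanity-check the download is sketched below; the per-layer dictionary layout inside vgg16_convs.npy is an assumption here, not something documented in this README.

    # Sanity-check the downloaded VGG16 weights (a sketch; the dictionary
    # layout inside vgg16_convs.npy is assumed, not documented here).
    import numpy as np

    weights = np.load('data/imagenet_models/vgg16_convs.npy',
                      allow_pickle=True, encoding='latin1').item()
    print(type(weights))           # expected: a dict mapping layer names to parameters
    print(sorted(weights.keys()))  # e.g. conv1_1, conv1_2, ...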

Tested environment

  • Ubuntu 16.04
  • Tensorflow 1.2.0
  • CUDA 8.0
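A quick way to confirm the TensorFlow side of this environment before compiling the custom ops (standard TF 1.x calls; the expected values simply mirror the list above):

    # Environment sanity check before building the custom ops in $ROOT/lib.
    import tensorflow as tf

    print(tf.__version__)                # expected: 1.2.0
    print(tf.test.is_built_with_cuda())  # expected: True for the CUDA 8.0 build
    print(tf.sysconfig.get_include())    # include path used when compiling the custom ops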

Running on the RGB-D Scene dataset

  1. Download the RGB-D Scene dataset from here (5.5G).

  2. Create a symlink for the RGB-D Scene dataset

    cd $ROOT/data/RGBDScene
    ln -s $RGB-D_scene_data data
  3. Training and testing on the RGB-D Scene dataset

    cd $ROOT
    
    # train and test RNN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/rgbd_scene_multi_*.sh $GPU_ID
    
    # train and test FCN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/rgbd_scene_single_*.sh $GPU_ID
    

Running on the ShapeNet Scene dataset

  1. Download the ShapeNet Scene dataset from here (2.3G).

  2. Create a symlink for the ShapeNet Scene dataset

    cd $ROOT/data/ShapeNetScene
    ln -s $ShapeNet_scene_data data
  3. Training and testing on the ShapeNet Scene dataset

    cd $ROOT
    
    # train and test RNN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/shapenet_scene_multi_*.sh $GPU_ID
    
    # train and test FCN with different input (color, depth, normal and rgbd)
    ./experiments/scripts/shapenet_scene_single_*.sh $GPU_ID
    

Using Our Trained Models

  1. You can download all our trained TensorFlow models on the RGB-D Scene dataset and the ShapeNet Scene dataset from here (3.1G).

    # an example of testing a trained model
    ./experiments/scripts/rgbd_scene_multi_rgbd_test.sh $GPU_ID
    

da-rnn's People

Contributors

cvpr20213dvr, kevinkit, yuxng


da-rnn's Issues

Error when making kinect_fusion (sh make.sh executes successfully)

Thanks for sharing your great work.
I've encountered an error when building the kinect_fusion project.

[screenshot of the build error]

  • fusion.h code: [screenshot]

  • rigid.h code: [screenshot]


It seems that Sophus is not found, but the Sophus headers do exist in /usr/local/include/sophus and I've added this path to the CMakeLists.txt. [screenshot]

I have no idea how to deal with this error, could you give me some tips?

undefined symbol: _ZN2df12KinectFusionC1ESs on trained model test script

Dear Dr. Xiang @yuxng ,

Thanks for the work. I am working on reproducing your code on my docker image.

Everything compiles. I can run the test script without kfusion set to True. However, once I set it to True, I get the following error:

+ set -e
+ export PYTHONUNBUFFERED=True
+ PYTHONUNBUFFERED=True
+ export CUDA_VISIBLE_DEVICES=0
+ CUDA_VISIBLE_DEVICES=0
++ date +%Y-%m-%d_%H-%M-%S
+ LOG=experiments/logs/rgbd_scene_multi_rgbd_test.txt.2018-06-14_23-27-31
+ exec
++ tee -a experiments/logs/rgbd_scene_multi_rgbd_test.txt.2018-06-14_23-27-31
+ echo Logging output to experiments/logs/rgbd_scene_multi_rgbd_test.txt.2018-06-14_23-27-31
Logging output to experiments/logs/rgbd_scene_multi_rgbd_test.txt.2018-06-14_23-27-31
+ '[' -f /home/weizhang/DA-RNN/output/rgbd_scene/rgbd_scene_val/vgg16_fcn_rgbd_multi_frame_rgbd_scene_iter_40000/segmentations.pkl ']'
+ ./tools/test_net.py --gpu 0 --network vgg16 --model data/fcn_models/rgbd_scene/vgg16_fcn_rgbd_multi_frame_rgbd_scene_iter_40000.ckpt --imdb rgbd_scene_val --cfg experiments/cfgs/rgbd_scene_multi_rgbd.yml --rig data/RGBDScene/camera.json --kfusion 1
Traceback (most recent call last):
  File "./tools/test_net.py", line 13, in <module>
    from fcn.test import test_net
  File "/home/weizhang/DA-RNN/tools/../lib/fcn/test.py", line 25, in <module>
    from kinect_fusion import kfusion
ImportError: /home/weizhang/DA-RNN/tools/../lib/kinect_fusion/kfusion.so: undefined symbol: _ZN2df12KinectFusionC1ESs

I am not sure whether this is caused by compiling kinect_fusion improperly. Any comments are very much appreciated.

Also, @kevinkit, did you successfully run the script with kfusion set to 1 in Docker?

Thanks for the help!

LD_PRELOAD cannot be preloaded && undefined symbol

Hi,
Q1. When I train and test the RNN with RGB-D data via
./experiments/scripts/rgbd_scene_multi_*.sh 0
I get the error "ERROR: ld.so: object '/usr/lib/libtcmalloc.so.4' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored."

Q2. I encountered the error "tensorflow.python.framework.errors_impl.NotFoundError: /home/path/DA-RNN-master/tools/../lib/backprojecting_layer/backprojecting.so: undefined symbol: _ZN10tensorflow16KernelDefBuilderD2Ev".

How did you cope with these problems? Thanks!

Error when making KinectFusion: rigid.h(14): error: qualified name is not allowed

Hi, guys.

First, thanks for your awesome work. However, when I tried to reproduce the repository on my machine, I hit some errors while running make in kinect_fusion/build.

Here are my errors:

/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(14): error: qualified name is not allowed
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(14): error: explicit type is missing ("int" assumed)
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(14): error: expected a ";"
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(24): error: identifier "Transform" is undefined
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(28): error: identifier "Transform" is undefined
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(32): error: identifier "Transform" is undefined
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(36): error: identifier "Transform" is undefined
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/rigid.h(52): error: identifier "Transform" is undefined
/home/a/DA-RNN/lib/kinect_fusion/include/df/util/dualQuaternion.h(72): error: explicit type is missing ("int" assumed)
/home/a/DA-RNN/lib/kinect_fusion/include/df/util/dualQuaternion.h(72): error: qualified name is not allowed
/home/a/DA-RNN/lib/kinect_fusion/include/df/util/dualQuaternion.h(72): error: expected a ")"
/home/a/DA-RNN/lib/kinect_fusion/include/df/util/dualQuaternion.h(70): warning: constant "OtherOptions" is not used in declaring the parameter types of function template "df::DualQuaternion<Scalar, Options>::DualQuaternion(int)"
.....................
/home/a/DA-RNN/lib/kinect_fusion/include/df/transform/nonrigidDeviceModule.h(107): error: a value of type "float (df::NonrigidTransformer<float, df::DualQuaternion>::*)() const" cannot be used to initialize an entity of type "const float"
detected during instantiation of "df::NonrigidTransformer<Scalar, TransformT>::DeviceModule::DeviceModule(int) [with Scalar=float, TransformT=df::DualQuaternion]"
/home/a/DA-RNN/lib/kinect_fusion/./src/transform/nonrigid.cu(1069): here
79 errors detected in the compilation of "/tmp/tmpxft_0000744b_00000000-7_nonrigid.cpp1.ii".

--error 0x2 --

CMake Error at kinectFusion_generated_nonrigid.cu.o.cmake:266 (message):
Error generating file
/home/a/DA-RNN/lib/kinect_fusion/build/CMakeFiles/kinectFusion.dir/src/transform/./kinectFusion_generated_nonrigid.cu.o
CMakeFiles/kinectFusion.dir/build.make:1142: recipe for target 'CMakeFiles/kinectFusion.dir/src/transform/kinectFusion_generated_nonrigid.cu.o' failed
make[2]: *** [CMakeFiles/kinectFusion.dir/src/transform/kinectFusion_generated_nonrigid.cu.o] Error 1
CMakeFiles/Makefile2:67: recipe for target 'CMakeFiles/kinectFusion.dir/all' failed
make[1]: *** [CMakeFiles/kinectFusion.dir/all] Error 2
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2

I have strictly followed the steps in issue 10 (Steps for Ubuntu 14.04), and I can confirm that the dependency versions are:

Tensorflow 1.2.0
CUDA 8.0
Ubuntu 16.04 LTS
Sophus SHA1 ID: 341346e306d657ac8acaf052939ffd85dacd8f82
Eigen 3.2.92
Nanoflann 1.2.2
Pangolin 0.5

as @yuxng mentioned in issue 2 (Need more details for versions).

Can anyone @yuxng @kevinkit @JackHenry1992 help me please? Thanks for your generous help!

Error when testing the trained model

I'm sure I have installed the correct versions of the dependencies, including Eigen, Sophus, Pangolin, and so on. I get no errors from "sh make.sh", and I also succeeded in installing KinectFusion, so that part is OK. But when I download and test the trained model, an error occurs. It says "Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.created window". The details are as follows:

:~/DA-RNN-master$ ./experiments/scripts/rgbd_scene_multi_rgbd_test.sh 0

  • set -e
  • export PYTHONUNBUFFERED=True
  • PYTHONUNBUFFERED=True
  • export CUDA_VISIBLE_DEVICES=0
  • CUDA_VISIBLE_DEVICES=0
    ++ date +%Y-%m-%d_%H-%M-%S
  • LOG=experiments/logs/rgbd_scene_multi_rgbd_test.txt.2019-01-11_15-03-36
  • exec
    ++ tee -a experiments/logs/rgbd_scene_multi_rgbd_test.txt.2019-01-11_15-03-36
  • echo Logging output to experiments/logs/rgbd_scene_multi_rgbd_test.txt.2019-01-11_15-03-36
    Logging output to experiments/logs/rgbd_scene_multi_rgbd_test.txt.2019-01-11_15-03-36
  • '[' -f /home/gaochuan/DA-RNN-master/output/rgbd_scene/rgbd_scene_val/vgg16_fcn_rgbd_multi_frame_rgbd_scene_iter_40000/segmentations.pkl ']'
  • ./tools/test_net.py --gpu 0 --network vgg16 --model data/fcn_models/rgbd_scene/vgg16_fcn_rgbd_multi_frame_rgbd_scene_iter_40000.ckpt --imdb rgbd_scene_val --cfg experiments/cfgs/rgbd_scene_multi_rgbd.yml --rig data/RGBDScene/camera.json --kfusion 1
    registered Linear
    registered Linear
    registered Poly3
    registered Poly3
    shapenet_scene_train
    shapenet_scene_val
    shapenet_single_train
    shapenet_single_val
    gmu_scene_train
    gmu_scene_val
    rgbd_scene_train
    rgbd_scene_val
    rgbd_scene_trainval
    lov_train
    lov_val
    Called with args:
    Namespace(cfg_file='experiments/cfgs/rgbd_scene_multi_rgbd.yml', gpu_id=0, imdb_name='rgbd_scene_val', kfusion=True, model='data/fcn_models/rgbd_scene/vgg16_fcn_rgbd_multi_frame_rgbd_scene_iter_40000.ckpt', network_name='vgg16', pretrained_model=None, rig_name='data/RGBDScene/camera.json', wait=True)
    Using config:
    {'EPS': 1e-14,
    'EXP_DIR': 'rgbd_scene',
    'FLIP_X': False,
    'GPU_ID': 0,
    'INPUT': 'RGBD',
    'NETWORK': 'VGG16',
    'PIXEL_MEANS': array([[[102.9801, 115.9465, 122.7717]]]),
    'RNG_SEED': 3,
    'ROOT_DIR': '/home/gaochuan/DA-RNN-master',
    'TEST': {'GRID_SIZE': 512,
    'RANSAC': False,
    'SCALES_BASE': [1.0],
    'SINGLE_FRAME': False,
    'VERTEX_REG': False,
    'VISUALIZE': False},
    'TRAIN': {'CHROMATIC': True,
    'DISPLAY': 20,
    'GAMMA': 0.1,
    'GRID_SIZE': 512,
    'IMS_PER_BATCH': 1,
    'LEARNING_RATE': 0.0001,
    'MOMENTUM': 0.9,
    'NUM_CLASSES': 10,
    'NUM_STEPS': 3,
    'NUM_UNITS': 64,
    'SCALES_BASE': [1.0],
    'SINGLE_FRAME': False,
    'SNAPSHOT_INFIX': 'rgbd_scene',
    'SNAPSHOT_ITERS': 10000,
    'SNAPSHOT_PREFIX': 'vgg16_fcn_rgbd_multi_frame',
    'STEPSIZE': 30000,
    'TRAINABLE': True,
    'USE_FLIPPED': False,
    'VERTEX_REG': False,
    'VERTEX_W': 10.0,
    'VISUALIZE': False}}
    /gpu:0
    Tensor("unstack:0", dtype=float32)
    Tensor("unstack_1:0", dtype=float32)
    Tensor("conv5_3/conv5_3:0", shape=(?, ?, ?, 512), dtype=float32)
    Tensor("conv5_3_p/conv5_3_p:0", shape=(?, ?, ?, 512), dtype=float32)
    Tensor("conv4_3/conv4_3:0", shape=(?, ?, ?, 512), dtype=float32)
    Tensor("conv4_3_p/conv4_3_p:0", shape=(?, ?, ?, 512), dtype=float32)
    Tensor("score_conv4/score_conv4:0", shape=(?, ?, ?, 64), dtype=float32)
    Tensor("upscore_conv5_1:0", shape=(?, ?, ?, 64), dtype=float32)
    Tensor("fifo_queue_Dequeue:5", dtype=float32)
    Tensor("fifo_queue_Dequeue:6", dtype=float32)
    Tensor("fifo_queue_Dequeue:7", dtype=float32)
    Tensor("unstack_3:0", dtype=float32)
    Tensor("unstack_4:0", dtype=float32)
    Tensor("upscore_1:0", shape=(?, ?, ?, 64), dtype=float32)
    Computeflow(top_data=<tf.Tensor 'flow:0' shape= dtype=float32>, top_weights=<tf.Tensor 'flow:1' shape= dtype=float32>, top_points=<tf.Tensor 'flow:2' shape= dtype=float32>)
    Tensor("score/score:0", shape=(?, ?, ?, 10), dtype=float32)
    Use network vgg16 in training
    2019-01-11 15:03:37.225146: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
    2019-01-11 15:03:37.339921: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
    2019-01-11 15:03:37.340290: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties:
    name: GeForce RTX 2080 major: 7 minor: 5 memoryClockRate(GHz): 1.71
    pciBusID: 0000:01:00.0
    totalMemory: 7.76GiB freeMemory: 7.48GiB
    2019-01-11 15:03:37.340301: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce RTX 2080, pci bus id: 0000:01:00.0, compute capability: 7.5)
    Loading model weights from data/fcn_models/rgbd_scene/vgg16_fcn_rgbd_multi_frame_rgbd_scene_iter_40000.ckpt
    rgbd_scene_val
    aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
    {"down":[0,1,0],"forward":[0,0,1],"height":480,"param_names":["fu","fv","u0","v0","k1","k2","k3"],"params":[570.29999999999995,570.29999999999995,320,240,0,0,0],"right":[1,0,0],"serialno":"34178534347","type":"Poly3","width":640}
    "Poly3"
    params: 570.29999999999995 570.29999999999995 320 240 0 0 0
    pose: [
    [
    1,
    0,
    0,
    0
    ],
    [
    0,
    1,
    0,
    0
    ],
    [
    0,
    0,
    1,
    0
    ]
    ]

{"down":[0,1,0],"forward":[0,0,1],"height":480,"param_names":["fu","fv","u0","v0","k1","k2","k3"],"params":[570.29999999999995,570.29999999999995,320,240,0,0,0],"right":[1,0,0],"serialno":"34178534347","type":"Poly3","width":640}
"Poly3"
params: 570.29999999999995 570.29999999999995 320 240 0 0 0
pose: [
[
1,
0,
0,
0
],
[
0,
1,
0,
0
],
[
0,
0,
1,
0
]
]

T_dc: 1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
Framebuffer with requested attributes not available. Using available framebuffer. You may see visual artifacts.created window
./experiments/scripts/rgbd_scene_multi_rgbd_test.sh: 行 25: 15989 段错误 (核心已转储) ./tools/test_net.py --gpu 0 --network vgg16 --model data/fcn_models/rgbd_scene/vgg16_fcn_rgbd_multi_frame_rgbd_scene_iter_40000.ckpt --imdb rgbd_scene_val --cfg experiments/cfgs/rgbd_scene_multi_rgbd.yml --rig data/RGBDScene/camera.json --kfusion 1

"段错误(核心转存储)" means “segmentation fault (core dumped)”. I check the code and I have found in fact Error is at the line 309 of test.py :
if is_kfusion:
KF = kfusion.PyKinectFusion(rig_filename)
But KinectFusion has been installed correctly, so I don't know what causes the error.
Probable reasons may be:
1. I use an RTX 2080 with CUDA 8.0 and cuDNN 6.0, so the device may not be compatible (see the check sketched after this post).
2. I'm using a new computer and have not installed OpenCV and other libraries yet.

@yuxng, any suggestions? I'm very interested in this superb project and thanks a lot.
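One quick check for the compatibility concern in point 1 (a sketch using TensorFlow's standard device listing, not part of the original report): print the compute capability TensorFlow sees and compare it against what the installed CUDA toolkit supports.

    # Print the GPUs TensorFlow sees, including their compute capability,
    # to compare against what the installed CUDA toolkit supports.
    from tensorflow.python.client import device_lib

    for dev in device_lib.list_local_devices():
        if dev.device_type == 'GPU':
            print(dev.name, dev.physical_device_desc)

In the log above, the device already reports compute capability 7.5 (Turing), which CUDA 8.0 does not target.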

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'ComputeflowGrad'

When trying to run the ShapeNet experiments I get the following errors:

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'ComputeflowGrad' with these attrs. Registered devices: [CPU], Registered kernels: device = 'GPU'; T in [DT_FLOAT].

Then comes an output for the node, and then:

CUDA Runtime Error: no CUDA-capable device is detected

I am using the same versions as you suggested for TensorFlow, CUDA, and all other dependencies. If I execute a small test script like:

import tensorflow as tf
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

I can see my 2 Titan 1080 GPUs mapped as devices.

I tried every index for $GPU_ID, but the error is still the same.

We really appreciate your help, thank you.

cannot find -ltensorflow_framework

While building and executing sh make.sh, I got

/usr/bin/ld: cannot find -ltensorflow_framework
collect2: error: ld returned 1 exit status

From the TensorFlow GitHub issues, I found that tensorflow_framework was removed. tensorflow/tensorflow#1569

Does anyone know how to deal with it? Thanks!
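One way to see whether the installed TensorFlow wheel ships libtensorflow_framework.so at all (a diagnostic sketch only; whether make.sh should link against that library depends on the TensorFlow version):

    # List the shared libraries bundled with the installed TensorFlow package,
    # to check whether libtensorflow_framework.so is present anywhere.
    import os
    import tensorflow as tf

    tf_dir = os.path.dirname(tf.__file__)
    print(tf_dir)
    for root, _, files in os.walk(tf_dir):
        for f in files:
            if f.endswith('.so'):
                print(os.path.join(root, f))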

LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored

zss@zss:~/DA-RNN$ ./experiments/scripts/rgbd_scene_multi_*.sh $GPU_ID

  • set -e
  • export PYTHONUNBUFFERED=True
  • PYTHONUNBUFFERED=True
  • export CUDA_VISIBLE_DEVICES=./experiments/scripts/rgbd_scene_multi_depth.sh
  • CUDA_VISIBLE_DEVICES=./experiments/scripts/rgbd_scene_multi_depth.sh
  • export LD_PRELOAD=/usr/lib/libtcmalloc.so.4
  • LD_PRELOAD=/usr/lib/libtcmalloc.so.4
    ++ date +%Y-%m-%d_%H-%M-%S
    ERROR: ld.so: object '/usr/lib/libtcmalloc.so.4' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
  • LOG=experiments/logs/rgbd_scene_multi_color.txt.2018-01-11_19-51-45
  • exec
    ++ tee -a experiments/logs/rgbd_scene_multi_color.txt.2018-01-11_19-51-45
    ERROR: ld.so: object '/usr/lib/libtcmalloc.so.4' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
  • echo Logging output to experiments/logs/rgbd_scene_multi_color.txt.2018-01-11_19-51-45
    Logging output to experiments/logs/rgbd_scene_multi_color.txt.2018-01-11_19-51-45
  • ./tools/train_net.py --gpu 0 --network vgg16 --weights data/imagenet_models/vgg16_convs.npy --imdb rgbd_scene_train --cfg experiments/cfgs/rgbd_scene_multi_color.yml --iters 40000
    ERROR: ld.so: object '/usr/lib/libtcmalloc.so.4' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
    ERROR: ld.so: object '/usr/lib/libtcmalloc.so.4' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
    Traceback (most recent call last):
    File "./tools/train_net.py", line 13, in <module>
    from fcn.train import get_training_roidb, train_net
    File "/home/zss/DA-RNN/tools/../lib/fcn/train.py", line 11, in <module>
    from gt_data_layer.layer import GtDataLayer
    File "/home/zss/DA-RNN/tools/../lib/gt_data_layer/layer.py", line 12, in <module>
    from gt_data_layer.minibatch import get_minibatch
    File "/home/zss/DA-RNN/tools/../lib/gt_data_layer/minibatch.py", line 17, in <module>
    import scipy.io
    ImportError: No module named scipy.io

Could anyone help me with this problem? Thank you very much!

error when "sh make.sh"

Dear Dr.Xiang,
When I execute the commond line "sh make.sh", I met some errors as follow. Could you give me some advise with these errors?

Assembler messages:
Fatal error: can't create computing_label_op.cu.o: Permission denied
g++: error: computing_label_op.cu.o: No such file or directory
build computing label layer
(DA-Virtualenv) ml@neo-dell:~/DA-RNN-master/lib$ sudo sh make.sh
/usr/local/lib/python3.5/dist-packages/tensorflow/include
make.sh: 10: make.sh: nvcc: not found
g++: error: triplet_loss_op.cu.o: No such file or directory
build triplet loss
make.sh: 20: make.sh: nvcc: not found
g++: error: lifted_structured_loss_op.cu.o: No such file or directory
build lifted structured loss
make.sh: 30: make.sh: nvcc: not found
g++: error: computing_flow_op.cu.o: No such file or directory
build computing flow layer
make.sh: 40: make.sh: nvcc: not found
g++: error: backprojecting_op.cu.o: No such file or directory
build backprojecting layer
make.sh: 50: make.sh: nvcc: not found
g++: error: projecting_op.cu.o: No such file or directory
build projecting layer
make.sh: 60: make.sh: nvcc: not found
g++: error: computing_label_op.cu.o: No such file or directory
build computing label layer

Sincerely,
Felicx

g++: error: pose_estimation/build/libransac.so: No such file or directory

After the successful compilation of KinectFusion (step 3), we are now stuck on step 4.

The first error, referring to NumPy ("numpy/arrayobject.h not found"), could be fixed with a symbolic link, but a new error occurs that is a little harder to tackle:

when running the command

python setup.py build_ext --inplace

The following error occurs:

g++: error: pose_estimation/build/libransac.so: No such file or directory

Looking at the pose_estimation folder, I found its CMakeLists.txt, so I did the following steps:
First I installed NLopt via: sudo apt-get install libnlopt-dev
And then the same build procedure:

mkdir build
cd build
cmake ..
make

This resolved the issue; however, is this the correct way?

Steps for ubuntu 14.04

Since Ubuntu 16.04 seems to have some issues regarding Pangolin (#7), and @JackHenry1992 successfully got the kinect_fusion code to compile on Ubuntu 14.04, I am kindly asking for the steps needed to get it running on Ubuntu 14.04 (see #9 for previous discussions).

@DonBilb0

Error on running test code

[screenshot from 2017-08-14 19-27-53]

When I run test_net.py, I encounter CUDA memory-related errors (e.g. segmentation fault, "CUDA error: an illegal memory access was encountered", etc.). The error messages change from run to run.
Has anyone had similar problems?

Cross-platform support

Thank you for sharing the code,

Is there any plan or milestone in the near future for this repository to come with a CMakeLists.txt that allows running this project on a Windows machine?

Pangolin v0.5 does not provide a picojson namespace / undefined reference to pangolin

Hello,

When trying to compile kinect_fusion with the suggested version from #2, I found that all the camera data code relies on the namespace "picojson", while Pangolin v0.5 does not provide this namespace; it is "pangolin::json" instead, as described here: picojson.h

This change is in the master-branch under Namespace-Change

So I have the following question: was Pangolin v0.5 used, or a newer version that uses the new namespaces?

We checked out the tag v0.5 to get version 0.5; however, this results in the mentioned error. When we use the newest version instead, an unresolved symbol error is thrown from kinect_fusion.cpp, stating that it cannot find some Pangolin functions, which I assume only exist in the 0.5 version.

So can you please tell us the correct version / branch / git commit number ?

Thank you very much for your help.

How to see the 3D Semantic Scene?

I've installed all the dependencies and successfully trained and tested this project (which took a huge effort). However, I wonder how to show the semantic scene like your illustration.

I set the argument --kfusion=True when I run test_net.py, but it does not seem to help, and I see the code is:

    if args.kfusion:
        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
        sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, gpu_options=gpu_options))
    else:
        sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))

How does it work, and how can I get the semantic scene?

Error with make.sh

Dear Dr. Xiang,
Could you give me some advice about these errors?
/usr/lib/gcc/x86_64-linux-gnu/5/include/emmintrin.h(1294): error: expression must have arithmetic, unscoped enum, or pointer type
g++: error: triplet_loss_op.cu.o: No such file or directory
....
g++: error: lifted_structured_loss_op.cu.o: No such file or directory
g++: error: computing_flow_op.cu.o: No such file or directory
g++: error: backprojecting_op.cu.o: No such file or directory
g++: error: projecting_op.cu.o: No such file or directory
g++: error: computing_label_op.cu.o: No such file or directory

Sincerely,
Luan

Error when running the dataset code

@yuxng said he updated the code for the latest versions of Sophus and Eigen. I tried it once and it didn't work; after a git pull origin in $ROOT/DA-RNN it compiled successfully. Regarding the error "cannot find -ltensorflow_framework": libtensorflow and libtensorflow_framework are TensorFlow dynamic libraries that have to be installed additionally; check http://platanios.org/tensorflow_scala/installation.html#installation-0-dependencies-1 and http://www.rubydoc.info/github/somaticio/tensorflow.rb, and don't forget to copy the libs to /usr/lib/.

The first time I ran the training code, I got errors about Python version problems. My default Python is 3.5 and the code is Python 2, so I changed my default Python version and reinstalled TensorFlow. Then some strange errors appeared:

When I try ./experiments/scripts/shapenet_scene_multi_rgbd.sh 0, I get the error "terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc". I traced it back: the line "from networks.factory import get_network" is the source of this error, and I don't know how to fix it.

When I try sudo ./experiments/scripts/shapenet_scene_multi_rgbd_test.sh 0, I get "ImportError: libkfusion.so: cannot open shared object file: No such file or directory", but libkfusion.so exists in kinect_fusion/build.

When I try ./experiments/scripts/shapenet_scene_multi_rgbd_test.sh 0, I get "ImportError: ../lib/kinect_fusion/kfusion.so: undefined symbol: _ZN2df12KinectFusion10save_modelESs", and kfusion.so also exists under kinect_fusion.

I'm wondering whether anyone has run all of this code successfully without these errors, or how you fixed them.

Need more details for versions

We have all been looking forward to using this awesome project since the release of the paper.

When trying to build the code we - as a software team - encountered several issues.

However, we could fix one by simply using another tag (version) of Sophus. We used v0.9.5; the error was in rigid.h and was gone after this fix.

Now there are many errors coming from CUDA kernels in connection with Eigen, so it would be really nice to know which versions / tags were used for the dependencies:

Tensorflow
CUDA
Linux-Version
Sophus
Eigen
Nanoflann
Pangolin

Thank you very much for your help.

Error when making KinectFusion

kinect_fusion.cpp:425
error: no matching function for call to ‘pangolin::OpenGlRenderState::SetModelViewMatrix(Sophus::SE3Base<Sophus::SE3 >::Transformation)

python: free(): invalid pointer when is_kfusion=True

I have successfully run your code with is_kfusion=False. Now I want to run your kinect_fusion.cpp with this flag set to True, but I get an error: [screenshot]
Have you encountered the same error? Could you give some suggestions?
To avoid a Pangolin error, I have commented out all the Pangolin code in kinect_fusion.cpp.

Supplement:
I also get "python: free(): invalid next size (fast)" if I run test_kinect_fusion.sh on my notebook, and found via std::cout output that the code crashes in initMarchingCubesTables() inside create_tensors(). [screenshot]

Can you give me more ways to test the kinect_fusion code (like the Video_$1.pango dataset in kinect_fusion/run.sh)?

Another try:
I modified the image-input interface of main() in kinect_fusion.cpp and used cv2.imread to replace VideoInput as follows: [screenshot]
Then I ran the main() function directly from the command line, and it shows a CUDA error in initMatchingCubes(): [screenshot]

It seems this error is the same as the one from running test_kinect_fusion.py. So are all these errors caused by CUDA? The CUDA version I installed is cuda-8.0.
