
dsp-slam's Introduction

Hello, I'm Jingwen Wang



  • 🌱 I’m currently a Ph.D. student at the CDT in Foundational AI at University College London (UCL).
  • 🔭 My research interests lie in object-aware semantic SLAM and 3D reconstruction, combining object-level scene understanding with SLAM systems using learning-based approaches. I'm also interested in how to incorporate spatial intelligence (SLAM and long-term mapping) into embodied AI.
  • ⚡ My personal website: https://jingwenwang95.github.io/
  • 📫 How to reach me: [email protected]


dsp-slam's People

Contributors

jingwenwang95


dsp-slam's Issues

compile failed on "Pangolin make"

When I run the build script, the Pangolin install always fails because 'xdg-shell-client-protocol.h' is missing, and I can't solve it. The reason may be that I'm running it on Ubuntu 16.04; I can't reproduce the problem on newer Ubuntu versions. Can anyone help? Or could I install Pangolin v0.5 instead of v0.6?

munmap_chunk(): invalid pointer

Hi, thank you for this nice project.
I successfully built dsp_slam with CUDA 11.1 and PyTorch 1.8.2 LTS, following build_cuda113.sh,
but a "munmap_chunk(): invalid pointer" error occurred when I tried to run both dsp_slam and dsp_slam_mono.
Do you have any ideas about this?

running kitti

Hello author, I am running ./dsp_slam Vocabulary/ORBvoc.bin configs/KITTI04-12.yaml data/kitti/07 map/kitti/07. The viewer window opens successfully, but it cannot reconstruct objects in real time.

The experiment directory does not include specifications file "specs.json"

Hi,
I'm keen to follow up on this great work. After successfully building DSP-SLAM, I ran into a weird problem.

DSP-SLAM: Object Oriented SLAM with Deep Shape Priors.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.
Input sensor was set to: Stereo
Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!
terminate called after throwing an instance of 'pybind11::error_already_set'
what(): Exception: The experiment directory does not include specifications file "specs.json"

Please tell me what's going wrong, thanks in advance.

Segfault

Hello, this is great work. When I run the program, a segfault occurs. I modified the paths in config_kitti.json according to the config file in your repository, and the project compiles without any errors. What could be the reason?

can not run ./dsp_slam_mono on kitti dataset

Amazing work!
I run

./dsp_slam_mono Vocabulary/ORBvoc.bin configs/KITTI00-02.yaml data/kitti/07 map/kitti/07

and it crashed

Start processing sequence ...
Images in the sequence: 1101

New Map created with 165 points
New Keyframe
/home/jzx/miniconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at  /opt/conda/conda-bld/pytorch_1634272168290/work/aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
3D detector takes 0.880130 seconds
/home/jzx/miniconda3/envs/dsp-slam/lib/python3.7/site-packages/mmdet/datasets/utils.py:69: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
  'data pipeline in your config file.', UserWarning)
2D detctor takes 0.838410 seconds
terminate called after throwing an instance of 'pybind11::error_already_set'
  what():  KeyError: 'background_rays'

At:
  /home/jzx/DSP-SLAM/reconstruct/utils.py(84): __missing__
  /home/jzx/miniconda3/envs/dsp-slam/lib/python3.7/site-packages/addict/addict.py(67): __getattr__

[1]    17664 abort (core dumped)  ./dsp_slam_mono Vocabulary/ORBvoc.bin configs/KITTI00-02.yaml data/kitti/07 

What does it mean to scale the rotation matrix with T_cam_obj[:3, :3] *= l?

As stated in kitti_sequence.py, line 146:

T_cam_obj[:3, :3] *= l

Moreover, it seems that the detected 3D box has its yaw about the z axis, yet T_velo_obj is constructed with a rotation about the y axis?

T_velo_obj = np.array([[np.cos(theta), 0, -np.sin(theta), trans[0]],

It's a little confusing.
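For what it's worth, a common reading (an assumption here, not confirmed by the author) is that the line turns T_cam_obj into a similarity transform: the scale l is folded into the rotation block, so a point in the normalized object frame maps directly to metric camera coordinates. In the KITTI camera frame (x right, y down, z forward), heading is a rotation about y, matching the form of the T_velo_obj snippet. A sketch (pose_with_scale is a hypothetical helper, not project code):

```python
import numpy as np

def pose_with_scale(theta, trans, l):
    """Similarity transform: heading rotation about the camera y axis
    (KITTI camera frame: x right, y down, z forward), with object
    length l folded into the rotation block, as in kitti_sequence.py."""
    T = np.eye(4)
    T[:3, :3] = np.array([[np.cos(theta), 0, np.sin(theta)],
                          [0.0, 1.0, 0.0],
                          [-np.sin(theta), 0, np.cos(theta)]])
    T[:3, 3] = trans
    T[:3, :3] *= l  # scale folded into the rotation, i.e. T_cam_obj[:3, :3] *= l
    return T

# A point at x = 0.5 in the unit object frame ends up 0.5 * l = 2 m
# from the object centre once mapped into the camera frame.
T = pose_with_scale(0.0, [1.0, 0.5, 10.0], 4.0)
p = T @ np.array([0.5, 0.0, 0.0, 1.0])
```

Under this reading, inverting T_cam_obj and dividing out the scale recovers the unit-cube object frame that a shape prior such as DeepSDF expects.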

run dsp_slam_mono but nothing happens

Hello, I have configured the environment and compiled the DSP-SLAM project successfully, and then I run dsp_slam_mono as follows. The program hangs without producing any output, error message, or exit. Could you suggest any probable causes?

$ ./dsp_slam_mono Vocabulary/ORBvoc.bin configs/redwood_09374.yaml /media/lj/TOSHIBA/dataset/RedwoodOS/09374 map/redwood/09374

question about read_calib in kitti_sequence.py

Hi, thank you for your source code. I'm trying to test RGB + LiDAR on my own dataset and have a small question.

From lines 240-254 of kitti_sequence.py:

        # Load the calibration file
        filedata = read_calib_file(self.calib_file)

        # Load projection matrix P_cam2_cam0, and compute perspective intrinsics K of cam2
        P_cam2_cam0 = np.reshape(filedata['P2'], (3, 4))
        self.K_cam = P_cam2_cam0[0:3, 0:3].astype(np.float32)
        self.invK_cam = np.linalg.inv(self.K_cam).astype(np.float32)

        # Load the transformation T_cam0_velo, and compute the transformation T_cam2_velo
        T_cam0_velo, T_cam2_cam0 = np.eye(4), np.eye(4)
        T_cam0_velo[:3, :] = np.reshape(filedata['Tr'], (3, 4))
        T_cam2_cam0[0, 3] = P_cam2_cam0[0, 3] / P_cam2_cam0[0, 0]
        self.T_cam_velo = T_cam2_cam0.dot(T_cam0_velo).astype(np.float32)

It seems that you are using a pre-computed calib.txt, is that right? I'm a little confused by this part of the code.

In my understanding, if I have T_w_cam and T_w_lidar respectively, then T_cam_lidar = inv(T_w_cam) @ T_w_lidar — is that correct? All T_x_y are 4x4 matrices of the form [R | t] padded with [0 0 0 1], expressed in world coordinates.
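For reference, extrinsics compose as T_cam_lidar = inv(T_w_cam) @ T_w_lidar: a point in lidar coordinates is first lifted into the world frame, then brought into the camera frame. A generic sketch under hypothetical poses, not project code:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform [R | t; 0 0 0 1]."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Hypothetical world poses of the camera and the lidar (identity rotations
# for clarity; only the translations differ).
T_w_cam = make_T(np.eye(3), [0.0, 0.0, 1.0])
T_w_lidar = make_T(np.eye(3), [0.5, 0.0, 1.2])

# Lidar frame -> camera frame: lift into the world, then into the camera.
T_cam_lidar = np.linalg.inv(T_w_cam) @ T_w_lidar

p_lidar = np.array([1.0, 2.0, 3.0, 1.0])  # homogeneous point in lidar frame
p_cam = T_cam_lidar @ p_lidar
```

Note the order: inv(T_w_lidar) @ T_w_cam would instead give T_lidar_cam, the transform in the opposite direction.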

question about ObjectPoseGraph.h

Hello, in the function linearizeOplus() in ObjectPoseGraph.h: is the variable _jacobianOplusXj missing a multiplication by _measurement.inverse().adj()?

question in kitti_sequence.py

In kitti_sequence.py you compute the mask as below. What do the instance's rays mean? Is there any theory behind this?

        if max_num_matches > pixels_coord.shape[0] * 0.5:
            n = np.argmax(num_matches)
            instance.mask = masks_2d[n, ...]
            instance.bbox = bboxes_2d[n, ...]

            if instance.mask[instance.mask].shape[0] > self.min_mask_area:
                # Sample non-surface pixels
                non_surface_pixels = self.pixels_sampler(instance.bbox, instance.mask)
                if non_surface_pixels.shape[0] > 200:
                    sample_ind = np.linspace(0, non_surface_pixels.shape[0]-1, 200).astype(np.int32)
                    non_surface_pixels = non_surface_pixels[sample_ind, :]

                pixels_inside_bb = np.concatenate([pixels_uv, non_surface_pixels], axis=0)
                # rays contains all, but depth should only contain foreground
                instance.rays = get_rays(pixels_inside_bb, self.invK).astype(np.float32)
                instance.depth = surface_points[:, 2].astype(np.float32)
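For context, get_rays presumably back-projects each pixel through the inverse intrinsics to a ray direction at unit depth, so that rays cover both foreground (surface) and sampled background pixels while instance.depth holds depths for the foreground only. A minimal sketch of such a back-projection (get_rays_sketch is a hypothetical stand-in for the project's get_rays, under the assumption that it implements the standard pinhole model):

```python
import numpy as np

def get_rays_sketch(pixels_uv, invK):
    """Back-project pixel coordinates to camera rays at unit depth:
    ray = K^{-1} [u, v, 1]^T, the standard pinhole back-projection."""
    n = pixels_uv.shape[0]
    pixels_h = np.concatenate([pixels_uv, np.ones((n, 1))], axis=1)  # homogeneous
    return (invK @ pixels_h.T).T.astype(np.float32)

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
rays = get_rays_sketch(np.array([[320.0, 240.0],   # principal point
                                 [420.0, 240.0]]), # 100 px to the right
                       np.linalg.inv(K))
```

Scaling a ray by a pixel's depth reconstructs the 3D surface point, which is why only foreground pixels need depths while background rays can still constrain the shape as free space.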

CUDA 11.3 build version mismatch

When building with the CUDA 11.3 script, there's a version mismatch between the installed CUDA and the conda requirements: the installed CUDA version is 11.3.0, while conda requires 11.3.1.

Because of this, I think mmdetection3d fails to install.


other category of pre-trained DeepSDF models

Hi, this is nice work and the reconstruction works very well. It seems that the pretrained DeepSDF model only supports cars, right? Other vehicle types, e.g. truck/bus, are not included and thus cannot be reconstructed. How do I make it support truck/bus reconstruction? Do I need to train a DeepSDF model for these classes, e.g. following the DeepSDF training procedure? Thanks in advance.

mmdet3d error

Hello, I ran into a problem with mmdetection3d. I don't know how to solve it and have been stuck for hours.

Traceback (most recent call last):
File "setup.py", line 248, in
zip_safe=False)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/init.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/develop.py", line 34, in run
self.install_for_development()
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/develop.py", line 114, in install_for_development
self.run_command('build_ext')
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 186, in run
_build_ext.build_ext.run(self)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 735, in build_extensions
build_ext.build_extensions(self)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/Cython/Distutils/old_build_ext.py", line 195, in build_extensions
_build_ext.build_ext.build_extensions(self)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 449, in build_extensions
self._build_extensions_serial()
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 474, in _build_extensions_serial
self.build_extension(ext)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 202, in build_extension
_build_ext.build_extension(self, ext)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/command/build_ext.py", line 534, in build_extension
depends=ext.depends)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/distutils/ccompiler.py", line 574, in compile
self._compile(obj, src, ext, cc_args, extra_postargs, pp_opts)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 483, in unix_wrap_single_compile
cflags = unix_cuda_flags(cflags)
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 450, in unix_cuda_flags
cflags + _get_cuda_arch_flags(cflags))
File "/home/ubuntu-slam/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1606, in _get_cuda_arch_flags
arch_list[-1] += '+PTX'

IndexError: list index out of range

Question about training DeepSDF model

To learn a latent code using your model:

  1. What scale should I set my custom point cloud dataset to? [-1, 1] or [0, 1]?
  2. Which axes should the front and top of the car be aligned with? Which way is it facing, X or Z?
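For reference on the first question, DeepSDF's preprocessing normalizes each shape to the unit sphere, so coordinates land roughly in [-1, 1]. A minimal sketch of that normalization (the 1.03 buffer factor is an assumption about DeepSDF's convention; check the preprocessing code):

```python
import numpy as np

def normalize_to_unit_sphere(points, buffer=1.03):
    """DeepSDF-style normalization: centre the point cloud at the midpoint
    of its bounding box and scale it to fit inside the unit sphere,
    leaving a small buffer so no point touches the boundary."""
    center = (points.max(axis=0) + points.min(axis=0)) / 2.0
    centered = points - center
    scale = np.linalg.norm(centered, axis=1).max() * buffer
    return centered / scale

# Deterministic toy cloud: every normalized point has norm < 1.
pts = np.array([[0.0, 0.0, 0.0],
                [10.0, 0.0, 0.0],
                [0.0, 6.0, 0.0],
                [0.0, 0.0, 2.0]])
norm_pts = normalize_to_unit_sphere(pts)
```

The axis alignment must simply match whatever canonical pose the training set used (for ShapeNet-derived data that is a fixed canonical car orientation), since the learned latent space has no rotation invariance.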

Where does the term "Two" come from?

Hi. In the Joint Bundle Adjustment part of the paper, the object pose Two is used to construct the residual function, but how can I obtain Two?

Stereo-only input support?

Hi, thanks for your nice work and for sharing the code! I was quite happy when I saw that "stereo streams" are supported in the paper. I could successfully reproduce the results (qualitatively).

However, it seems that the released code does not support stereo-only input, right? The KITTI pipeline uses stereo and LiDAR data at the same time. Am I missing something, or is there support for stereo-only input? Thank you!

Questions for DeepSDF training

Thanks for your excellent work!
For DeepSDF training, I have the following questions :
In addition to the data preprocessing described in the DeepSDF paper, have you done any additional preprocessing, such as removing interior car structures (seats, steering wheel, etc.)?

How do I repeat your experiments?

Thanks for your great project! Your work is excellent.
Could you tell me what I should do to repeat your experiments?
Looking forward to your reply!

There is no response after I enter the command

Hi, I don't know why there is no response after typing the command.
Here's the command: ./dsp_slam_mono Vocabulary/ORBvoc.bin configs/freiburg_001.yaml freiburg_static_cars_52/car001 map/freiburg/001
The dataset I downloaded is the resulting data, which is 4.2 GB.
Could you help me, please?

Aborted (core dumped)

terminate called after throwing an instance of 'pybind11::error_already_set'
what(): AttributeError: module 'skimage.measure' has no attribute 'marching_cubes_lewiner'

At:
/home/shu/DSP-SLAM/reconstruct/utils.py(130): convert_sdf_voxels_to_mesh
/home/shu/DSP-SLAM/reconstruct/optimizer.py(218): extract_mesh_from_code
Aborted (core dumped)

Hello author, the above is the error I get when running the code. The viewer window pops up for about 5 seconds and then the program ends, sometimes even shutting down. Could you please give me some guidance?

When run dsp_slam, it happens "No module named 'skimage'"

Amazing work!

When I run dsp_slam I hit the problem below. I installed skimage with "sudo apt-get install python-skimage", but the error persists.

(dsp-slam) ubuntu-slam@ubuntuslam-B560M-AORUS-ELITE:~/DSP-SLAM$ ./dsp_slam Vocabulary/ORBvoc.bin configs/KITTI04-12.yaml /home/ubuntu-slam/dataset/KITTI/06 map/kitti/06

DSP-SLAM: Object Oriented SLAM with Deep Shape Priors.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: Stereo

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!

terminate called after throwing an instance of 'pybind11::error_already_set'
what(): ModuleNotFoundError: No module named 'skimage'

At:
/home/ubuntu-slam/DSP-SLAM/reconstruct/utils.py(23):
(219): _call_with_frames_removed
(728): exec_module
(677): _load_unlocked
(967): _find_and_load_unlocked
(983): _find_and_load

Aborted (core dumped)

How to prepare datasets for other objects?

Hi,

I have two questions related to extending this framework to other objects:
(1) How to prepare the data and generate the weights and labels files for other objects (trucks, signs, cyclists, pedestrians) ?
(2) What would be the most efficient way to run multiple detections in this case? DSP-SLAM source is built around networks that are trained to work with cars, how efficient would it be to load weights based on detected object to allow it to work with multiple object classes?

Aborted(core dumped)

terminate called after throwing an instance of 'pybind11::error_already_set'
what(): Exception: The experiment directory does not include specifications file "specs.json"

At:
/home/lm/docker/dspslam/DSP-SLAM-master/deep_sdf/workspace.py(207): config_decoder
/home/lm/docker/dspslam/DSP-SLAM-master/reconstruct/utils.py(94): get_decoder

Aborted (core dumped)

Hello author, the above is the error I get when running the code. I do not know why it happens and cannot solve it; could you help me? Thank you very much!

why can not reconstruct object?

After setting up the environment and compiling the project, I run "./dsp_slam Vocabulary/ORBvoc.bin configs/KITTI04-12.yaml data/kitti/07 map/kitti/07" and get some files, then run "python extract_map_objects.py --config configs/config_kitti.json --map_dir map/07 --voxels_dim 64" to get some npy/ply files. But when I run "python visualize_map.py --config configs/config_kitti.json --map_dir map/07", I only get lots of black points in Open3D.
What should I do to generate the picture shown in the README under "Save and visualize map"?
The same happens with freiburg/001.
By the way, how can I use ./dsp_slam to run freiburg/001?
Thanks so much!

undefined reference to `TIFF****@LIBTIFF_4.0'

If you meet the following problem:

../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFOpen@LIBTIFF_4.0'
../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFGetField@LIBTIFF_4.0'
../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFScanlineSize@LIBTIFF_4.0'
../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFReadScanline@LIBTIFF_4.0'
../Thirdparty/Pangolin/build/libpango_image.so: undefined reference to `TIFFClose@LIBTIFF_4.0'

add "-ltiff" to "target_link_libraries" in "DSP-SLAM/CMakeLists.txt". It may be helpful.

make build problem

[ 94%] Linking CXX executable /home/sjm/ql/projects/DSP-SLAM/dsp_slam
/usr/local/lib/libpango_image.so: undefined reference to 'TIFFOpen@LIBTIFF_4.0'
/usr/local/lib/libpango_image.so: undefined reference to 'TIFFGetField@LIBTIFF_4.0'
/usr/local/lib/libpango_image.so: undefined reference to 'TIFFScanlineSize@LIBTIFF_4.0'
/usr/local/lib/libpango_image.so: undefined reference to 'TIFFSetWarningHandler@LIBTIFF_4.0'
/usr/local/lib/libpango_image.so: undefined reference to 'TIFFReadScanline@LIBTIFF_4.0'
/usr/local/lib/libpango_image.so: undefined reference to 'TIFFClose@LIBTIFF_4.0'
collect2: error: ld returned 1 exit status
CMakeFiles/dsp_slam.dir/build.make:134: recipe for target '/home/sjm/ql/projects/DSP-SLAM/dsp_slam' failed
make[2]: *** [/home/sjm/ql/projects/DSP-SLAM/dsp_slam] Error 1
CMakeFiles/Makefile2:129: recipe for target 'CMakeFiles/dsp_slam.dir/all' failed
make[1]: *** [CMakeFiles/dsp_slam.dir/all] Error 2
Makefile:90: recipe for target 'all' failed
make: *** [all] Error 2

sv2_cars_filtered.json

Would you be willing to share the sv2_cars_filtered.json file?
I'm trying to extend DSP-SLAM to more object classes, and sv2_cars_filtered.json would be a useful example.

  "Description" : [ "This experiment learns a shape representation for cars ",
                    "using data from ShapeNet version 2." ],
  "DataSource" : "/media/jingwen/Data/deepsdfData",
  "TrainSplit" : "/home/jingwen/Vision/dev/slam_with_shape_priors/DeepSDF/examples/splits/sv2_cars_filtered.json",
  "TestSplit" : "/home/jingwen/Vision/dev/slam_with_shape_priors/DeepSDF/examples/splits/sv2_cars_filtered.json",

Segmentation fault when run test on kitti

Hi:
Thanks for contributing such a great project!
I have built DSP-SLAM successfully, but when I run dsp_slam on your KITTI 07 sequence I get a segmentation fault.
Below is the terminal output under gdb:

(gdb) run
Starting program: /home/dlr/Project/DSP-SLAM/dsp_slam Vocabulary/ORBvoc.bin configs/KITTI04-12.yaml /media/dsp-slam/data/kitti/07 map/kitti/07
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".

DSP-SLAM: Object Oriented SLAM with Deep Shape Priors.
This program comes with ABSOLUTELY NO WARRANTY;
This is free software, and you are welcome to redistribute it
under certain conditions. See LICENSE.txt.

Input sensor was set to: Stereo

Loading ORB Vocabulary. This could take a while...
Vocabulary loaded!

[debug]: py::module::import(sys)

[New Thread 0x7ffec9cde700 (LWP 8631)]
....
[New Thread 0x7ffd6f288700 (LWP 8704)]
[debug]: py::module::import(reconstruct.utils)

[debug]: pyCfgPath: configs/config_kitti.json

[New Thread 0x7ffd5f9a7700 (LWP 8711)]
...
[New Thread 0x7ffd52ffd700 (LWP 8732)]
[debug]: io_utils.attr(get_decoder)(pyCfg)

[debug]: strSequencePath: /media/dsp-slam/data/kitti/07

[pydebug] to get_seq	
[pydebug] to Class KITIISequence	
[pydebug] load_calib() done	
[Thread 0x7ffd6f288700 (LWP 8704) exited]
...
[Thread 0x7ffec9cde700 (LWP 8631) exited]
[pydebug] data_type == KITTI	

Thread 1 "dsp_slam" received signal SIGSEGV, Segmentation fault.
0x00007fffd4a16e6c in geos::io::WKBReader::readGeometry() () from /usr/lib/x86_64-linux-gnu/libgeos-3.6.2.so
(gdb) bt
#0  0x00007fffd4a16e6c in geos::io::WKBReader::readGeometry() () at /usr/lib/x86_64-linux-gnu/libgeos-3.6.2.so
#1  0x00007ffe52595e14 in  () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/Shapely.libs/libgeos_c-74dec7a7.so.1.14.2
#2  0x00007ffe525980ee in GEOSGeomFromWKB_buf_r () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/python3.7/site-packages/Shapely.libs/libgeos_c-74dec7a7.so.1.14.2
#3  0x00007fffe82d7dae in ffi_call_unix64 () at /usr/lib/x86_64-linux-gnu/libffi.so.6
#4  0x00007fffe82d771f in ffi_call () at /usr/lib/x86_64-linux-gnu/libffi.so.6
#5  0x00007fffb4baa7b4 in _call_function_pointer (argcount=3, resmem=0x7fffffff24a0, restype=<optimized out>, atypes=<optimized out>, avalues=0x7fffffff2480, pProc=0x7ffe525980a0 <GEOSGeomFromWKB_buf_r>, flags=<optimized out>) at /usr/local/src/conda/python-3.7.12/Modules/_ctypes/callproc.c:816
#6  0x00007fffb4baa7b4 in _ctypes_callproc (pProc=0x7ffe525980a0 <GEOSGeomFromWKB_buf_r>, argtuple=0x7ffe52849550, flags=<optimized out>, argtypes=0x7ffe5280d410, restype=<optimized out>, checker=0x0) at /usr/local/src/conda/python-3.7.12/Modules/_ctypes/callproc.c:1188
#7  0x00007fffb4bab02c in PyCFuncPtr_call (self=<optimized out>, inargs=<optimized out>, kwds=0x0) at /usr/local/src/conda/python-3.7.12/Modules/_ctypes/_ctypes.c:4025
#8  0x00007ffff583ac04 in PyObject_Call () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#9  0x00007ffff56f3805 in partial_call () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#10 0x00007ffff583add9 in _PyObject_FastCallKeywords () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#11 0x00007ffff56b13c3 in call_function.lto_priv () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#12 0x00007ffff56b7c77 in _PyEval_EvalFrameDefault () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#13 0x00007ffff56badea in function_code_fastcall () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#14 0x00007ffff56b1416 in call_function.lto_priv () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#15 0x00007ffff56b5d92 in _PyEval_EvalFrameDefault () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#16 0x00007ffff576ee80 in _PyEval_EvalCodeWithName () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#17 0x00007ffff576e29f in PyEval_EvalCodeEx () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#18 0x00007ffff576f93c in PyEval_EvalCode () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#19 0x00007ffff5773b0e in builtin_exec () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#20 0x00007ffff5839f9c in _PyMethodDef_RawFastCallDict () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#21 0x00007ffff583a9d9 in _PyCFunction_FastCallDict () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#22 0x00007ffff56b95f1 in _PyEval_EvalFrameDefault () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#23 0x00007ffff576ee80 in _PyEval_EvalCodeWithName () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#24 0x00007ffff583a0ea in _PyFunction_FastCallKeywords () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#25 0x00007ffff56b1416 in call_function.lto_priv () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#26 0x00007ffff56b7c77 in _PyEval_EvalFrameDefault () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#27 0x00007ffff56badea in function_code_fastcall () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#28 0x00007ffff56b1416 in call_function.lto_priv () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#29 0x00007ffff56b5f46 in _PyEval_EvalFrameDefault () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#30 0x00007ffff56badea in function_code_fastcall () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#31 0x00007ffff56b1416 in call_function.lto_priv () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
#32 0x00007ffff56b5d92 in _PyEval_EvalFrameDefault () at /home/dlr/3rdpack/anaconda3/envs/dsp-slam/lib/libpython3.7m.so.1.0
---Type <return> to continue, or q <return> to quit---
...

According to my debug prints, the code gets stuck at

from .detector3d import get_detector3d

I am wondering how to solve this; can you give some insights?
Thanks in advance!

Inconsistency about the object scale in KITTI sequence loader

There is some inconsistency between instance.T_cam_obj and instance.scale.
The object length l is used as the scale (line 146):

T_cam_obj[:3, :3] *= l
# Initialize detected instance
instance = ForceKeyErrorDict()
instance.T_cam_obj = T_cam_obj
instance.scale = size

However, the size had already been enlarged by 10% when selecting surface points (line 134), which is inconsistent with the original size used at line 151.

w, l, h = list(size / 2)
w *= 1.1
l *= 1.1
on_surface = (points_obj[:, 0] > -w) & (points_obj[:, 0] < w) & \
             (points_obj[:, 1] > -h) & (points_obj[:, 1] < h) & \
             (points_obj[:, 2] > -l) & (points_obj[:, 2] < l)
pts_surface_velo = points_nearby[on_surface]

I didn't evaluate quantitatively, but fixing this seems to make the result better.
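The inconsistency can be made explicit by factoring the box test into a helper whose half-extents (with or without the 1.1 margin) are the same ones used when scaling the object frame. A sketch, with the margin and axis order (x ↔ w, y ↔ h, z ↔ l) taken from the snippet above; surface_mask itself is a hypothetical helper, not project code:

```python
import numpy as np

def surface_mask(points_obj, size, margin=1.1):
    """Axis-aligned box test in the object frame. `size` = (w, l, h);
    the same half-extents should be used here and when normalizing
    points into the shape code's unit frame, otherwise the selected
    surface points and the stored scale disagree by `margin`."""
    w, l, h = (np.asarray(size) / 2.0) * margin
    return ((np.abs(points_obj[:, 0]) < w) &
            (np.abs(points_obj[:, 1]) < h) &
            (np.abs(points_obj[:, 2]) < l))

# One point at the object centre (kept) and one far outside (rejected).
pts = np.array([[0.0, 0.0, 0.0],
                [5.0, 0.0, 0.0]])
mask = surface_mask(pts, size=(2.0, 4.0, 1.5))
```

Passing the same `size` (and the same margin decision) to both the mask and the scale stored in `instance.scale` removes the mismatch the issue describes.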
