dlr-rm / 3DObjectTracking
Algorithms and Publications on 3D Object Tracking
License: MIT License
What is the reasoning behind randomly sampling contour points? My initial thought would be uniform sampling.
Great work btw!
As the title says, how should I set the value of this parameter? Until now I have blindly tried changing the size of the model: sometimes a change is effective, and sometimes it wastes a lot of time with no effect at all. Sorry to trouble you again.
Is there a compiled realsense2 package? I get an error when I compile it myself
Hello,
Thank you for this amazing work! I am quite new to region-based tracking and I have some questions about the code.
I am trying your algorithm with a camera, and in run_on_camera_sequence.cpp I did not quite understand the difference between the body1_ptr and the body1_model_ptr. The function Model::LoadModel tries to read several parameters from a .txt file (similar to the ones described in the readme.md for the body_ptr). The parameters read from this .txt file seem redundant with the ones set directly in run_on_camera_sequence.cpp when declaring the body1_ptr.
Edit: By the way, I also have a problem with the detection of contours: none of them are detected. This is a consequence of silhouette_image being all black pixels, because normal_image_ is also black. I can see my images in the viewer, so the data seems to be fed properly.
I found that .bin files created on Linux aren't compatible when running on Windows; I get a filesystem error. Can you reproduce this? I had to delete the bins and create them anew.
This projection matrix formula is different from the general OpenGL projection matrix: the z-axis values seem to be the negatives of the original ones.
Also, could you tell me how the formula is derived?
Thanks!
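One possible explanation (my assumption, not confirmed by the authors): the renderer may use the computer-vision convention in which the camera looks along +z, whereas OpenGL's canonical matrix assumes the camera looks along -z. Switching conventions is the substitution z → -z, which negates the third column of the standard matrix and therefore flips exactly the z-related signs for a symmetric frustum:

```latex
% Standard OpenGL perspective matrix (camera looks along -z):
P =
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
% A +z-forward camera convention corresponds to substituting z \mapsto -z:
P' = P \,\operatorname{diag}(1,\, 1,\, -1,\, 1),
% which multiplies the third column by -1 and flips the z-axis signs.
```

This is only a sketch of the conversion; the exact entries depend on how the repository builds the matrix from the camera intrinsics.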
Is it possible to have some kind of quality factor to quantify whether the object is being tracked accurately? Ground truth is not always available for us, which means we cannot calculate ADD values.
I was wondering if one could use the energy equation from your paper to compute some kind of quality factor that serves as an indicator, rather than as hard proof of a good object match/track.
Great job btw, really impressive and innovative work!! Thank you for this and thank you in advance for your kind help.
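Not an official feature, but the per-correspondence probabilities that enter the energy could indeed be pooled into a heuristic confidence score. A minimal sketch under that assumption (TrackingQuality is a hypothetical helper, not part of the repo's API):

```cpp
#include <vector>

// Hypothetical quality indicator, not part of the library's API: average the
// per-correspondence-line probabilities (each in [0, 1]) from the energy or
// likelihood evaluation into a single score. Values near 1 suggest a good
// fit; a drop below a tuned threshold can flag unreliable tracking.
double TrackingQuality(const std::vector<double>& line_probabilities) {
  if (line_probabilities.empty()) return 0.0;
  double sum = 0.0;
  for (double p : line_probabilities) sum += p;
  return sum / static_cast<double>(line_probabilities.size());
}
```

Treat such a score as a relative indicator for detecting degradation over time, not as an absolute accuracy measure comparable to ADD.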
Hi, what is the licence for the code in this repository?
Hi, I tried to use this project, but it says "fatal error: filesystem: No such file or directory". I then changed the line "#include <filesystem>" to "#include <experimental/filesystem>" in common.h, and the error changed to "/RBGT/include/rbgt/common.h:65:49: error: ‘std::filesystem’ has not been declared".
My system is Ubuntu 16.04 and GCC 5.4 is used.
So how can I solve this problem?
Thanks!
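For context, GCC 5.4 ships no <filesystem> header and only partial Filesystem TS support, so the most reliable fix is upgrading to GCC 8 or newer. If the code must stay on an older compiler, a common shim looks like the following (a sketch, assuming only basic path operations are needed; the experimental version additionally requires linking -lstdc++fs on GCC before version 9):

```cpp
#include <string>

// Portability shim: pick <filesystem> when available, otherwise fall back to
// the Filesystem TS header and alias both under the same namespace name.
#if defined(__has_include) && __has_include(<filesystem>)
#include <filesystem>
namespace fs = std::filesystem;
#else
#include <experimental/filesystem>
namespace fs = std::experimental::filesystem;
#endif

// Example helper that only uses the aliased namespace, so the same code
// compiles against either implementation.
bool PathHasExtension(const fs::path& p, const std::string& ext) {
  return p.extension() == ext;
}
```

With this shim, common.h would use fs:: instead of std::filesystem:: throughout.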
Hi, could you release part of the rbot data, Like one object in your input format? The whole rbot data is too big to download.
Thanks!
Do you have more specific instructions, like how to run the whole project or how to write the makefiles? Thanks a lot.
Using ICG as an external dependency with add_subdirectory creates a compilation error. The problem comes from these lines of ICG's CMakeLists.txt: when ICG is used as a subdirectory, CMAKE_SOURCE_DIR actually holds the path of the parent project, which causes the issue.
Replacing these two lines with:
include_directories("include")
include_directories("third_party")
works fine in all the cases.
I can open a PR if you want.
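For completeness, an equivalent sketch that keeps the paths anchored to ICG's own list file, so they resolve correctly both standalone and via add_subdirectory (target_include_directories on the ICG targets would be the more modern alternative):

```cmake
# Anchor include paths to the directory containing this CMakeLists.txt,
# not to the top-level project, so ICG also works as a subdirectory.
include_directories("${CMAKE_CURRENT_SOURCE_DIR}/include")
include_directories("${CMAKE_CURRENT_SOURCE_DIR}/third_party")
```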
I was running run_on_recorded_sequence on my own dataset and saw that there is an input named color_camera_meta_data.txt that I could not find in the repo or in the original dataset.
Can you please explain the format or provide a sample file? It would help me greatly.
Thanks
Hi, I tried cmake .., which succeeded, but linking fails:
[100%] Linking CXX executable bin/RBOT
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/RBOT.dir/link.txt --verbose=1
/usr/bin/c++
collect2: error: ld returned 1 exit status
CMakeFiles/RBOT.dir/build.make:294: recipe for target 'bin/RBOT' failed
make[2]: *** [bin/RBOT] Error 1
make[2]: Leaving directory '/home/ai1/Downloads/wts/3d_6doF/RBOT/build'
CMakeFiles/Makefile2:79: recipe for target 'CMakeFiles/RBOT.dir/all' failed
make[1]: *** [CMakeFiles/RBOT.dir/all] Error 2
make[1]: Leaving directory '/home/6doF/RBOT/build'
Makefile:86: recipe for target 'all' failed
make: *** [all] Error 2
Hello, thanks a lot for your repo. It is a little complicated for me to run, but I am familiar with ROS and would be able to run it with ROS.
Is it possible to support ROS for all the methods here?
Hi,
I've wanted to try out the M3T tracking library, but I have an issue setting it up. I can't properly run it on Windows or Linux (Ubuntu installed on WSL). I've chosen Linux (Ubuntu) for further steps due to issues installing libraries on Windows. WSL should not be a problem; the Ubuntu system behaves almost exactly like a standard one, though it might be slower.
What I've done:
Compiled my example with -I path/to/eigen3 -I paths/to/opencv/libs so that the m3t:: and cv:: headers resolve.
Expected result:
I don't need to specify more than one path to the OpenCV libs (now I have to add -I path/to/lib for every single library used from OpenCV). The linking works properly and there are no compilation errors.
Current result:
I get errors like:
run_pen_paper_demo.cpp:(.text._ZN3m3t8DetectorD2Ev[_ZN3m3t8DetectorD5Ev]+0x13): undefined reference to `vtable for m3t::Detector'
/usr/bin/ld: /tmp/ccSJDzTS.o: in function `m3t::ManualDetector::~ManualDetector()':
run_pen_paper_demo.cpp:(.text._ZN3m3t14ManualDetectorD2Ev[_ZN3m3t14ManualDetectorD5Ev]+0x13): undefined reference to `vtable for m3t::ManualDetector'
/usr/bin/ld: /tmp/ccSJDzTS.o: in function `m3t::StaticDetector::~StaticDetector()':
What I also tried:
make, make install, and make clean in the M3T folder.
I'm not a day-to-day C++ developer, so I might have overlooked something obvious. I'd appreciate any help or a detailed guide on how to set it up properly.
Is there a way to make ICG Tracking more robust with respect to object or camera movement?
Are there any parameters I could change so that the object does not lose tracking due to fast movements?
Or to changes in the calibration of the depth camera used? I am using a RealSense D435 camera.
I am trying to replicate the results similar to the real-world experiment in your video but the tracker mismatches as soon as the object moves at a normal speed or a fast speed. If I move the object slowly it works just fine.
Thanks!
I'm a novice, and although your code is commented in the important places, I still don't know how to generate the .bin files. Which executable should I run? I look forward to your reply.
I really appreciate your excellent work, but I got some build errors in the glew & glfw dependencies when using CMake to generate the solution file for VS. Could you provide some instructions on setting up the glew & glfw dependencies in CMake?
How do I run the code of RBGT or SRT3D with only one simple .obj? I am not clear about body1.obj and body2.obj; I only want to run the code with my own duck.obj.
Dear @manuel-stoiber and all who contributed to this repo,
thanks for making it available, seems to be another great work from DLR!
The paper isn't published yet, if I understood correctly, and I'm not familiar with C++, so I can't really dig into the repo myself.
I would like to use your repo to detect and track my own objects, so the following questions came up:
Thanks again for making your work public!
Hmm, I encountered this problem when building the ICG solution. I have installed GLFW 3.1.2 in C:/Program Files (x86)/GLFW, and I only found glfw3.lib in the /lib subfolder of that folder.
I searched the net and only found glfw3.lib, nothing anywhere about glfw.lib, so I am stuck on this problem. I would appreciate it if someone knows the answer.
In addition, I changed glfw.lib to glfw3.lib in the Additional Dependencies settings in Visual Studio, but it still doesn't work; I get many LNK2019 errors.
Hello,
Do you know the reason for the "Histograms could not be initialised for modality body1_region_modality" error message?
Thank you in advance
Hello,
Thank you for sharing your code, ICG worked out of the box for me with a Realsense and a cheezit box.
Would it be possible to visualize correspondence lines as shown in the paper accompanying video?
Thanks
EDIT: found Tracker::VisualizeCorrespondences
I compiled the code using CMake with the option (USE_AZURE_KINECT "Use azure kinect" FALSE) set to FALSE in CMakeLists.txt. Then I ran the executable of run_on_recorded_sequence.cpp after changing the directory paths. I'm facing errors like:
Output -
Could not open file 3DObjectTracking/RBGT/TEST/Sequence/color_camera_meta_data.txt
Camera is not initialized
Could not open file stream "3DObjectTracking/RBGT/TEST/Models/body_1.txt"
Generate body1 template view 1 of 2562
Generate body1 template view 642 of 2562
Generate body1 template view 1923 of 2562
Generate body1 template view 1283 of 2562
No valid contour in image
No valid contour in image
No valid contour in image
No valid contour in image
Model was not initialized
Could not open file stream "3DObjectTracking/RBGT/TEST/Models/body_2.txt"
Model was not initialized
Camera is not initialized
Initialize renderer first
terminate called after throwing an instance of 'cv::Exception'
what(): OpenCV(4.5.3-dev) /home/rohang1411/opencv_build/opencv/modules/highgui/src/window.cpp:1014: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
Aborted (core dumped)
The TEST folder in the paths above wasn't part of the repo; I created it and put the model .obj file and frames inside it. I have also tried using full directory paths. Can anybody please explain or provide the missing files?
And please explain what the input files or sequence should be for run_on_recorded_sequence, and in which directory they should be present. I am unable to overlay the objects as shown in the video provided in the repo. I tried putting an .mp4 video file as well as frames from the dataset in the sequence folder, but I am still unable to move forward. Please help; thanks in advance.
Can someone please provide a working codebase or implementation of this repo? I really need it; I'm trying to build something similar and am unable to run the present code.
I get the error "BodyData does not fit into image" when I use my own recorded camera images and the OPT soda model. Can anyone help me?
Is there any guideline to get reference points?
Thanks.
I have looked at the source code and discovered that the calculation of the Hessian does not use the gradient directly. In the source code, you do not use the variable dloglikelihood_ddelta_cs but instead the variable data_line.standard_deviation. Why? Do you use a quasi-Newton method and approximate the Hessian with data_line.standard_deviation?
Thanks.
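One plausible reading (my interpretation, not verified against the authors' exact derivation): if each correspondence line's log-likelihood is locally approximated as a Gaussian in the contour distance, its second derivative is constant and depends only on the standard deviation, giving a Gauss-Newton-style Hessian:

```latex
% Assume the per-line log-likelihood is locally Gaussian in the contour
% distance d_i(\boldsymbol{\theta}), with mean \mu_i and std. dev. \sigma_i:
\ln p_i \approx -\frac{\bigl(d_i(\boldsymbol{\theta}) - \mu_i\bigr)^2}{2\sigma_i^2} + \text{const}
% With the per-line Jacobian J_i = \partial d_i / \partial \boldsymbol{\theta}:
\mathbf{g} = \sum_i \frac{\mu_i - d_i}{\sigma_i^2}\, J_i^{\top},
\qquad
\mathbf{H} \approx -\sum_i \frac{1}{\sigma_i^2}\, J_i^{\top} J_i .
```

Since the second derivative of each Gaussian log-likelihood with respect to d_i is simply -1/σ_i², the Hessian needs only the standard deviation rather than dloglikelihood_ddelta_cs; under this reading it is a Gauss-Newton-style approximation rather than a quasi-Newton one.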
I want to use an ENSENSO N35 depth camera with this algorithm. It uses a binocular grayscale camera to measure depth. If the part of this algorithm that requires RGB only receives grayscale images, is that feasible?
And if I want to modify the program to support the ENSENSO N35 camera, which components need to be adapted?
thanks
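If the color modality only needs a 3-channel image, one low-effort experiment is to replicate the grayscale channel into BGR before feeding it in; color information is lost, but region statistics over intensity may still work. A sketch of that conversion (assumption: the pipeline accepts any 3-channel 8-bit image; in practice cv::cvtColor with COLOR_GRAY2BGR does the same thing):

```cpp
#include <vector>

// Expand a single grayscale channel into an interleaved 3-channel BGR buffer
// by copying each intensity value into all three channels.
std::vector<unsigned char> GrayToBgr(const std::vector<unsigned char>& gray) {
  std::vector<unsigned char> bgr;
  bgr.reserve(gray.size() * 3);
  for (unsigned char v : gray) {
    bgr.push_back(v);  // blue
    bgr.push_back(v);  // green
    bgr.push_back(v);  // red
  }
  return bgr;
}
```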
Thanks for your great job!
I have a question: how is the code debugged? Do you use gdb?
Is there any reason that run_on_recorded_sequence.cpp only uses the color camera and not depth?
I added the depth components back to the code (same as in run_on_camera_sequence.cpp), but it does not seem to change anything.
Thanks for sharing the code and congratulations for this work!
Hey guys, I've been following your work for quite a long time. Recently, one of our projects uses your library and I've been writing a ROS wrapper. The first version of the wrapper is not well-written. The second one HERE can work well.
I am still reading your papers, (though it's a little bit hard since I am new to this field). Anyway, just posting the wrapper here in case someone else needs to use this library with ROS.
Hi,
Where can i find your paper?
Dear author,
I would like to track objects with two Azure kinect cameras. Is there a code example to show how to make a setup with multiple cameras?
Thank you very much. Great thanks in advance.
Is there any Python version?
We ran the evaluation part of the code on the RBOT dataset, and the results matched the article. But we could not run the RBOT dataset with https://github.com/DLR-RM/3DObjectTracking/blob/master/RBGT/examples/run_on_recorded_sequence.cpp; we couldn't render the objects. What should we do about this; is there a parameter we need to change?
First, thanks for your excellent job!
Could you tell me how to derive formula (21) in the RBGT paper? I remember that g equals (J(x)^T)*f(x). However, the g^T in formula (24) equals J(x) (where J(x) is the Jacobian matrix); the value of f(x) seems not to be multiplied into the formula. I know that f(x) is the probabilistic function shown in formula (16).
Or does f(θ) equal 1 when θ = 0?
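Without re-deriving the paper's equations line by line, one consistent interpretation (an assumption on my part) is that the update maximizes the log-likelihood directly, so the scalar factor multiplying the Jacobian is the derivative of ln p, not the probability f itself:

```latex
% Maximizing \ln p(\boldsymbol{\theta}) = \sum_i \ln p_i\bigl(d_i(\boldsymbol{\theta})\bigr),
% the chain rule gives the gradient
\mathbf{g} = \sum_i \frac{\partial \ln p_i}{\partial d_i}\, J_i^{\top},
\qquad
J_i = \frac{\partial d_i}{\partial \boldsymbol{\theta}} .
```

In the least-squares form g = J^T f, the "f" is a residual; in the likelihood form above it is replaced by the scalar derivative of the log-likelihood, so f(θ) never needs to equal 1 for the Jacobian to appear alone.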
Hello,
In your amazing paper, it is specified that RBGT uses only a single thread. I was wondering whether it is possible to optimize the execution time by doing some tasks in parallel. For example, I already tried to parallelize over all region_modality_ptr pointers when various bodies are tracked, and it greatly reduces the execution time. For the update, I simply parallelized CalculatePoseUpdate over every data_line and CalculateCorrespondences over every template view's data points to save extra time.
Is there any other way to reduce the execution time? Does the size of the tracked object have an influence on the timing?
I am using a laptop with an Intel i7 at 2.8 GHz, and the execution time is between 50-70 ms. This reduces the frame rate, and tracking is thus easily lost during fast motion due to large movements between frames.
Thanks for the help!
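Since each data_line contributes independently to the gradient and Hessian, the per-line loop is a natural parallel reduction. A minimal sketch of that pattern (with a plain double standing in for one line's contribution; SumLineContributions is a hypothetical name, not the library's API):

```cpp
#include <vector>

// Sum independent per-line contributions (e.g. gradient terms) in parallel.
// The OpenMP reduction clause keeps the accumulation race-free; without
// -fopenmp the pragma is ignored and the loop simply runs serially.
double SumLineContributions(const std::vector<double>& contributions) {
  double gradient = 0.0;
  #pragma omp parallel for reduction(+ : gradient)
  for (long i = 0; i < static_cast<long>(contributions.size()); ++i) {
    gradient += contributions[i];
  }
  return gradient;
}
```

The same reduction pattern applies to the 6x6 Hessian by accumulating per-thread partial sums before merging them.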
Can I use a simple RGB camera instead of an RGB-D one?
Hi,
I ported your great code to iOS and it is running very well, but I have a doubt about the initialization process...
I developed an initialization based on UI where the user can see a reference object on the screen...
But the new users simply don't know how to match the reference 2D object with the real object, although expert users can do it very quickly.
Do you have a suggestion on how to achieve an easy to use initialization UX via user interface or a code that helps with the initialization?
Another question:
Do you have a way to detect if the system is tracking or if it lost tracking? Kind of a variable or function that can determine that the tracking is lost or not?
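As far as I know the library does not expose such a flag, but one pragmatic indicator is the fraction of correspondence lines that found a valid match in the current frame. A hedged sketch (TrackingLost and its threshold are my own invention, not part of the codebase):

```cpp
// Hypothetical lost-tracking heuristic: declare tracking lost when too few
// correspondence lines found a valid contour match this frame. The ratio
// threshold would need tuning per object and scene.
bool TrackingLost(int valid_lines, int total_lines,
                  double min_valid_ratio = 0.5) {
  if (total_lines == 0) return true;  // no correspondences at all
  return static_cast<double>(valid_lines) / total_lines < min_valid_ratio;
}
```

Smoothing the ratio over a few frames avoids flagging a single bad frame as a loss.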
Hello @manuel-stoiber,
I'm testing your Pen and Paper Demo with a RealSense camera.
I've changed the config.yaml file to use the RealSense D435:
%YAML:1.2
RealSenseColorCamera:
- name: color_camera
metafile_path: config.yaml
RealSenseDepthCamera:
- name: depth_camera
metafile_path: config.yaml
I managed to have the algorithm track the Stabilo highlighter only once; most of the time it does not track correctly.
I was wondering whether this might be related to the RealSense D435. Is the demo supposed to work well with that sensor as well?
Should I change other parameters besides the sensor class?
(Basically, to start the tracking I just press 'X', is that correct?)
Thank you in advance,
Giorgio
k4a: failed to initialize audio backend; incompatible device! How can I solve this problem?