
3dobjecttracking's People

Contributors

manuel-stoiber


3dobjecttracking's Issues

random vs uniform sampling

What is the reasoning behind randomly sampling contour points? My initial thought would be uniform sampling.
Great work btw!
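
To make the question concrete, here is a generic sketch of the two strategies I am comparing (illustration only, not your implementation; both assume a non-empty contour):

#include <cstddef>
#include <random>
#include <vector>

// Pick n contour points at random indices (duplicates are possible).
template <typename Point>
std::vector<Point> SampleRandom(const std::vector<Point> &contour, int n,
                                std::mt19937 &gen) {
  std::uniform_int_distribution<std::size_t> dist(0, contour.size() - 1);
  std::vector<Point> samples;
  for (int i = 0; i < n; ++i) samples.push_back(contour[dist(gen)]);
  return samples;
}

// Pick n contour points at evenly spaced indices along the contour.
template <typename Point>
std::vector<Point> SampleUniform(const std::vector<Point> &contour, int n) {
  std::vector<Point> samples;
  for (int i = 0; i < n; ++i)
    samples.push_back(contour[static_cast<std::size_t>(i) * contour.size() / n]);
  return samples;
}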

How to set maximum_body_diameter

As the title says, how should I set the value of this parameter? Until now I have blindly tried changing the size of the model: sometimes a change is effective, and sometimes it wastes a lot of time with no effect. So I have to trouble you again.
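
For reference, my current guess at how to compute it (a hypothetical helper, under my assumption that the parameter means the diameter of a sphere that encloses the body around its geometry center; please correct me if the source defines it differently):

#include <algorithm>
#include <vector>
#include <Eigen/Dense>

// Estimate the maximum body diameter as twice the largest vertex distance
// from the geometry center. Assumes the vertices are already scaled to
// meters (i.e. geometry_unit_in_meter has been applied).
float EstimateMaximumBodyDiameter(const std::vector<Eigen::Vector3f> &vertices) {
  if (vertices.empty()) return 0.0f;
  Eigen::Vector3f center = Eigen::Vector3f::Zero();
  for (const auto &v : vertices) center += v;
  center /= static_cast<float>(vertices.size());
  float max_radius = 0.0f;
  for (const auto &v : vertices)
    max_radius = std::max(max_radius, (v - center).norm());
  return 2.0f * max_radius;
}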

Question about parameters

Hello,
Thank you for this amazing work! I am quite new to region-based tracking and I have some questions about the code.

I am trying your algorithm with a camera, and in run_on_camera_sequence.cpp I did not quite understand the difference between body1_ptr and body1_model_ptr.

The function Model::LoadModel tries to read several parameters from a .txt file (similar to the ones described in the readme.md for the body_ptr). The parameters read from this .txt file seem redundant with the ones set directly in run_on_camera_sequence.cpp when declaring body1_ptr.

Edit: By the way, I also have a problem with the detection of contours: none of them are detected. This is a consequence of silhouette_image being all black pixels, because normal_image_ is also black. I can see my images in the viewer, so the data seems to be fed properly.

Quality factor without ground truth?

Is it possible to have some kind of quality factor to quantify whether the object is being tracked accurately? Ground truth is not available to us all the time, which means we cannot calculate ADD values.

I was wondering if one could use the energy equation from your paper to calculate some kind of quality factor that could be used as an indicator, rather than as hard truth of a good object match or track.

Great job btw, really impressive and innovative work!! Thank you for this and thank you in advance for your kind help.
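
To illustrate what I mean (hypothetical code, not your API): if the per-correspondence-line probabilities behind the energy were accessible, one could average them into a simple [0, 1] indicator:

#include <numeric>
#include <vector>

// Hypothetical quality score: the mean of the per-line probabilities the
// tracker computes internally. Values near 1 would suggest a good fit; a
// useful "tracking lost" threshold would need tuning per object and scene.
float TrackingQualityScore(const std::vector<float> &line_probabilities) {
  if (line_probabilities.empty()) return 0.0f;
  float sum = std::accumulate(line_probabilities.begin(),
                              line_probabilities.end(), 0.0f);
  return sum / static_cast<float>(line_probabilities.size());
}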

"fatal error: filesystem: No such file or directory

Hi, I am trying to use this project, but it fails with "fatal error: filesystem: No such file or directory". I then tried to change the line "#include <filesystem>" to "#include <experimental/filesystem>" in common.h, and the error changed to "/RBGT/include/rbgt/common.h:65:49: error: ‘std::filesystem’ has not been declared".
My system is Ubuntu 16.04 with GCC 5.4.
How can I solve this problem?
Thanks!
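
For reference: std::filesystem only ships outside the experimental namespace from GCC 8 on (and needs -lstdc++fs on the link line before GCC 9), so GCC 5.4 is too old for the unmodified header. A common portability shim (generic C++, not code from this repository) looks like this:

// Use whichever filesystem header the toolchain provides and alias it so
// the rest of the code can refer to fs:: uniformly. Any remaining
// std::filesystem:: qualifiers (such as the one on line 65 of common.h)
// must then be changed to fs:: as well. With GCC < 9, also add -lstdc++fs
// to the link line.
#if __has_include(<filesystem>)
#include <filesystem>
namespace fs = std::filesystem;
#else
#include <experimental/filesystem>
namespace fs = std::experimental::filesystem;
#endif

Upgrading to GCC 8 or newer is the simpler fix if possible.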

about the rbot data

Hi, could you release part of the RBOT data, like one object in your input format? The whole RBOT dataset is too big to download.
Thanks!

questions

Do you have more specific instructions, like how to run the whole project or how to write the makefiles? Thanks a lot.

Problem when using 3DObjectTracking/ICG as a subdirectory

Using ICG as an external dependency with add_subdirectory creates a compilation error. The problem comes from these lines of ICG's CMakeLists.txt: when ICG is used as a subdirectory, CMAKE_SOURCE_DIR actually holds the path of the parent project, which causes the issue.

Replacing these two lines with:

include_directories("include")
include_directories("third_party")

works fine in all cases.
I can open a PR if you want.

No information or file regarding the color_camera_meta_data.txt

I was running run_on_recorded_sequence on my own datasets and saw that there is an input file named color_camera_meta_data.txt.
I could not find it in the repo or in the original dataset.
Could you please explain the format or provide a sample file?
It would help me a lot.
Thanks

CMakeFiles/RBOT.dir/build.make:294: recipe for target 'bin/RBOT' failed make[2]: *** [bin/RBOT] Error 1

Hi, I ran "cmake ..", which succeeded, and then "make". The build reaches the linking step and then fails:

[100%] Linking CXX executable bin/RBOT
/usr/local/bin/cmake -E cmake_link_script CMakeFiles/RBOT.dir/link.txt --verbose=1
/usr/bin/c++



collect2: error: ld returned 1 exit status
CMakeFiles/RBOT.dir/build.make:294: recipe for target 'bin/RBOT' failed
make[2]: *** [bin/RBOT] Error 1
make[2]: Leaving directory '/home/ai1/Downloads/wts/3d_6doF/RBOT/build'
CMakeFiles/Makefile2:79: recipe for target 'CMakeFiles/RBOT.dir/all' failed
make[1]: *** [CMakeFiles/RBOT.dir/all] Error 2
make[1]: Leaving directory '/home/6doF/RBOT/build'
Makefile:86: recipe for target 'all' failed
make: *** [all] Error 2

ROS support

Hello, thanks a lot for your repo. It is a little complicated for me to run it, but I am familiar with ROS and would be able to run this repo with ROS.

Is it possible to support ROS for all the methods here?

Issues with configuration and when running run_pen_paper_demo.cpp

Hi,
I wanted to try out the M3T tracking library, but I have an issue setting it up. I can't get it to run properly on Windows or on Linux (Ubuntu installed on WSL). I chose Linux (Ubuntu) for the further steps because of issues installing the libraries on Windows. WSL should not be a problem; the Ubuntu system behaves almost exactly like a standard one, only perhaps a bit slower.

What I've done:

  1. Installed the required tools and libraries (cmake, Eigen3, GLEW, GLFW, OpenCV 4 standard + contrib, doxygen).
  2. Used cmake (CLI and/or GUI) in .../3DObjectTracking/M3T.
  3. Tried to use g++/gcc to build run_pen_paper_demo.cpp.
  4. Got errors saying it cannot find the eigen3 or opencv headers.
  5. Added -I path/to/eigen3 -I paths/to/opencv/libs.
  6. The file now compiles, but I get a lot of errors, probably from linking: undefined reference to {m3t::|cv::}.

Expected result
I shouldn't need to specify more than one path to the OpenCV libs (currently I have to add -I path/to/lib for every single OpenCV module used).
The linking works properly and there are no compilation errors.

Current result
I get errors like:

run_pen_paper_demo.cpp:(.text._ZN3m3t8DetectorD2Ev[_ZN3m3t8DetectorD5Ev]+0x13): undefined reference to `vtable for m3t::Detector'
/usr/bin/ld: /tmp/ccSJDzTS.o: in function `m3t::ManualDetector::~ManualDetector()':
run_pen_paper_demo.cpp:(.text._ZN3m3t14ManualDetectorD2Ev[_ZN3m3t14ManualDetectorD5Ev]+0x13): undefined reference to `vtable for m3t::ManualDetector'
/usr/bin/ld: /tmp/ccSJDzTS.o: in function `m3t::StaticDetector::~StaticDetector()':

What I also tried

  1. Using make, make install, and make clean in the M3T folder.
  2. Reinstalling the libraries on Linux.

I'm not a day-to-day C++ developer, so I might have overlooked something obvious.
I'd appreciate any help or a detailed guide on how to set it up properly.

How to make tracking more robust when the object or camera is moved?

Is there a way to make ICG Tracking more robust with respect to object or camera movement?

Are there any parameters I could change so that the object does not lose tracking due to fast movements?
Or to changes in the calibration of the depth camera used? I am using a RealSense D435 camera.

I am trying to replicate results similar to the real-world experiment in your video, but the tracker mismatches as soon as the object moves at a normal or fast speed. If I move the object slowly, it works just fine.

Thanks!

How to generate .bin file

I'm a novice, and although your code is commented in the important places, I still don't know how to generate the .bin files. Which executable should I run? I look forward to your reply.

Questions on how to use custom objects

Dear @manuel-stoiber and all who contributed to this repo,
thanks for making it available; it seems to be another great piece of work from DLR!

If I understood correctly, the paper isn't published yet, and I'm not familiar with C++, so I can't really dig into the repo myself.

I would like to use your repo to detect and track my own objects, so the following questions came up:

  1. Do I need to train/retrain to get the tracker working for my own objects? If so, how?
  2. You're using .obj object files. Is the texture taken into account, or can I convert my CAD files without texture into .obj files and use them?
  3. Is an initial detection step (for the first pose estimate) needed? If so, do I have to set/estimate it, or is that also part of this repo?
  4. Will the paper or a preprint appear before ACCV?

Thanks again for making your work public!

LNK1104: cannot open file "glfw.lib"

I encounter this problem when building the ICG solution. I have installed GLFW 3.1.2 under C:/Program Files (x86)/GLFW, and I can only find glfw3.lib in the /lib subfolder of that folder:
image

Searching the net, I only find glfw3.lib and nothing about glfw.lib, so I am stuck on this problem. I would appreciate it if someone knows the answer.

In addition, I changed glfw.lib to glfw3.lib in the Additional Dependencies in Visual Studio, but that doesn't work either; I get many LNK2019 errors.

Visualizing correspondence lines?

Hello,

Thank you for sharing your code; ICG worked out of the box for me with a RealSense and a Cheez-It box.
Would it be possible to visualize correspondence lines as shown in the video accompanying the paper?
Thanks

EDIT: found Tracker::VisualizeCorrespondences

Linking problem

Hello,

The code compiled correctly, but at the end there is a linking problem. I would appreciate any advice on how to solve it.
Screenshot 2021-02-24 at 19 46 11

Unable to overlay objects through run_on_recorded_sequence.cpp and Missing metadata & model files

I compiled the code using cmake, setting the option (USE_AZURE_KINECT "Use azure kinect" FALSE) to FALSE in CMakeLists.txt. Then I ran the executable of run_on_recorded_sequence.cpp after changing the directory paths. I'm facing errors like the following.

Output:

Could not open file 3DObjectTracking/RBGT/TEST/Sequence/color_camera_meta_data.txt
Camera is not initialized
Could not open file stream "3DObjectTracking/RBGT/TEST/Models/body_1.txt"
Generate body1 template view 1 of 2562
Generate body1 template view 642 of 2562
Generate body1 template view 1923 of 2562
Generate body1 template view 1283 of 2562
No valid contour in image
No valid contour in image
No valid contour in image
No valid contour in image
Model was not initialized
Could not open file stream "3DObjectTracking/RBGT/TEST/Models/body_2.txt"
Model was not initialized
Camera is not initialized
Initialize renderer first

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.5.3-dev) /home/rohang1411/opencv_build/opencv/modules/highgui/src/window.cpp:1014: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'

Aborted (core dumped)

The TEST folder in the paths above wasn't part of the repo; I created it and put the model.obj file and the frames inside it. I have also tried using full directory paths. Can anybody please explain or provide the following missing files:

  1. color_camera_meta_data.txt
  2. body_1.txt
  3. body_2.txt

Also, please explain what the input files or sequence for run_on_recorded_sequence should be and which directory they should be placed in. I am unable to overlay the objects as shown in the video provided in the repo. I tried putting an .mp4 video file as well as frames from the dataset in the sequence folder, but I am still unable to move forward. Please help; thanks in advance.

Question about the calculation of the Hessian

I have looked at the source code and noticed that the calculation of the Hessian does not use the gradient directly. In the source code, you do not use the variable dloglikelihood_ddelta_cs; instead you use the variable data_line.standard_deviation. Why? Do you use a quasi-Newton method to solve the problem, with data_line.standard_deviation approximating the Hessian?
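
My current guess, assuming the probability along each correspondence line is approximated as a Gaussian with standard deviation \sigma (please correct me if this is not what the code does):

$$\ln p(d) = -\frac{(d - \mu)^2}{2\sigma^2} + \text{const} \qquad\Rightarrow\qquad \frac{\partial^2 \ln p}{\partial d^2} = -\frac{1}{\sigma^2}$$

Under that approximation, each line's contribution to the Hessian is fixed by data_line.standard_deviation alone, and no explicit second derivative of the log-likelihood is needed; that would make it a Gauss-Newton-style approximation rather than a full quasi-Newton update.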
Thanks.

Using a depth camera (such as ENSENSO) that does not have an RGB camera

I want to use the ENSENSO N35 depth camera with this algorithm. It uses a pair of grayscale cameras to compute depth. For the parts of this algorithm that require RGB, is it feasible to use only grayscale images?
And if I want to modify the program to support the ENSENSO N35 camera, which components need to be adapted?
Thanks
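
As a quick experiment on my side (generic OpenCV code, not from this repository), I could replicate the gray channel so the images match the 3-channel input the color pipeline expects, at the cost of color statistics degrading to intensity statistics:

#include <opencv2/imgproc.hpp>

// Replicate the single gray channel into B, G, and R so the image can be
// fed to a pipeline that expects 3-channel color input.
cv::Mat ToThreeChannel(const cv::Mat &gray) {
  cv::Mat color;
  cv::cvtColor(gray, color, cv::COLOR_GRAY2BGR);
  return color;
}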

No Depth in run_on_recorded_sequence.cpp

Is there a reason that run_on_recorded_sequence.cpp only uses the color camera and not depth?
I added the depth components back into the code (same as in run_on_camera_sequence.cpp), but it does not seem to change anything.

Great work

Thanks for sharing the code, and congratulations on this work!

A ROS wrapper for ICG

Hey guys, I've been following your work for quite a long time. Recently, one of our projects started using your library, and I've been writing a ROS wrapper. The first version of the wrapper is not well written; the second one HERE works well.

I am still reading your papers (though it's a little bit hard, since I am new to this field). Anyway, I'm just posting the wrapper here in case someone else needs to use this library with ROS.

Tracking with multiple cameras

Dear author,

I would like to track objects with two Azure Kinect cameras. Is there a code example showing how to set up multiple cameras?

Thank you very much; many thanks in advance.

YCB data doesn't work

I am trying to run the ICG evaluate_ycb_dataset, but the object cannot align well. Please see the attached result image.
test55

I have a question about the Newton optimization with Tikhonov regularization used in your paper

First, thanks for your excellent work!
Could you tell me how to derive formula (21) in the paper on RBGT? I remember that g equals J(x)^T * f(x). However, the g^T in formula (24) equals J(x), where J(x) is the Jacobian matrix. The value of f(x) seems not to be multiplied into the formula. I know that f(x) is the probabilistic function shown in formula (16).
Or does f(θ) equal 1 when θ = 0?
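
My own attempt at resolving this, under the assumption that the objective is a log-likelihood rather than a least-squares residual: for an objective \ln p(\boldsymbol{\theta}), the derivative of the probability is already contained in the gradient,

$$g^{\mathsf{T}} = \frac{\partial \ln p(\boldsymbol{\theta})}{\partial \boldsymbol{\theta}} = \sum_i \frac{\partial \ln p_i}{\partial d_i}\,\frac{\partial d_i}{\partial \boldsymbol{\theta}},$$

whereas g = J(x)^T f(x) is the Gauss-Newton gradient of a least-squares objective, so the two expressions come from different problem formulations rather than contradicting each other. Is this the right reading?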

Timing execution

Hello,

In your amazing paper, it is specified that RBGT uses only a single thread. I was wondering if it is possible to optimize the execution time by doing some tasks in parallel. For example, I already tried parallelizing over all region_modality_ptr pointers when several bodies are tracked, and it greatly reduces the execution time. For the update, I simply parallelized CalculatePoseUpdate over every data_line, and CalculateCorrespondences over every template view's data points, to save extra time.
Is there any other way to reduce the execution time? Does the size of the tracked object have an influence on the timing?

I am using a laptop with an Intel i7 at 2.8 GHz, and the execution time is between 50 and 70 ms. This reduces the frame rate, and tracking is thus easily lost during fast motion due to large movements between frames.

Thanks for the help!
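
For reference, the kind of parallelization I mean, as a minimal OpenMP sketch (RegionModality is a simplified stand-in for the real class, and the method signature is illustrative only; compile with -fopenmp):

#include <memory>
#include <vector>

// Simplified stand-in for the repository's region modality class.
struct RegionModality {
  void CalculateCorrespondences(int corr_iteration) { /* per-body work */ }
};

// Bodies are independent of each other, so the correspondence search can
// run in parallel across modalities.
void ParallelCorrespondences(
    std::vector<std::shared_ptr<RegionModality>> &modalities,
    int corr_iteration) {
#pragma omp parallel for
  for (int i = 0; i < static_cast<int>(modalities.size()); ++i)
    modalities[i]->CalculateCorrespondences(corr_iteration);
}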

Initialization workflow

Hi,

I ported your great code to iOS and it is running very well, but I have a doubt about the initialization process...

I developed a UI-based initialization where the user can see a reference object on the screen...

But new users simply don't know how to match the 2D reference object with the real object, although expert users can do it very quickly.

Do you have a suggestion on how to achieve an easy-to-use initialization UX, or code that helps with the initialization?

Another question:

Do you have a way to detect whether the system is tracking or has lost tracking? Some kind of variable or function that can determine whether tracking is lost?

Pen Paper Demo and RealSense tracking issues

Hello @manuel-stoiber,
I'm testing your Pen and Paper Demo with a RealSense camera.
I've changed the config.yaml file to use the RealSense D435:

%YAML:1.2
RealSenseColorCamera:
  - name: color_camera
    metafile_path: config.yaml

RealSenseDepthCamera:
  - name: depth_camera
    metafile_path: config.yaml

I managed to get the algorithm to track the Stabilo highlighter only once; most of the time it does not track correctly.
I was wondering if this might be related to the RealSense D435. Is the demo supposed to work well with that sensor as well?
Should I change other parameters besides the sensor class?

(Basically, to start the tracking I just press 'X'? Is that correct?)
Thank you in advance,
Giorgio
