
find-object's Introduction

find-object

Linux Build Status
Windows Build Status

Standalone

This is the Find-Object project; visit the home page for more information.

ROS1

Install

Binaries:

sudo apt-get install ros-$ROS_DISTRO-find-object-2d

Source:

  • To include the xfeatures2d and/or nonfree modules of OpenCV and avoid conflicts with cv_bridge, build the same OpenCV version that cv_bridge uses and install it in /usr/local (the default).
cd ~/catkin_ws
git clone https://github.com/introlab/find-object.git src/find_object_2d
catkin_make

Run

roscore
# Launch your preferred usb camera driver
rosrun uvc_camera uvc_camera_node
rosrun find_object_2d find_object_2d image:=image_raw

See find_object_2d for more information.
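
Detections are also published on the objects topic (std_msgs/Float32MultiArray). The following is a minimal, unofficial Python sketch of reading that topic; it assumes the layout used by the print_objects_detected example (12 floats per detected object: id, template width, template height, then the 3x3 homography), so verify against that node if in doubt.

#!/usr/bin/env python
# Minimal sketch (not part of find_object_2d): print which objects are
# currently detected by listening to the /objects topic.
# Assumed layout: 12 floats per object = [id, width, height, 3x3 homography].
import rospy
from std_msgs.msg import Float32MultiArray

def objects_callback(msg):
    data = msg.data
    if not data:
        return  # nothing detected in this frame
    for i in range(0, len(data), 12):
        obj_id = int(data[i])
        width, height = data[i + 1], data[i + 2]
        rospy.loginfo("Object %d detected (template %dx%d px)", obj_id, width, height)

if __name__ == "__main__":
    rospy.init_node("objects_listener")
    rospy.Subscriber("objects", Float32MultiArray, objects_callback)
    rospy.spin()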

ROS2

Install

Binaries:

To come...

Source:

cd ~/ros2_ws
git clone https://github.com/introlab/find-object.git src/find_object_2d
colcon build

Run

# Launch your preferred usb camera driver
ros2 launch realsense2_camera rs_launch.py
 
# Launch find_object_2d node:
ros2 launch find_object_2d find_object_2d.launch.py image:=/camera/color/image_raw
 
# Draw objects detected on an image:
ros2 run find_object_2d print_objects_detected --ros-args -r image:=/camera/color/image_raw

3D Pose (TF)

A RGB-D camera is required. Example with Realsense D400 camera:

# Launch your preferred usb camera driver
ros2 launch realsense2_camera rs_launch.py align_depth.enable:=true
 
# Launch find_object_2d node:
ros2 launch find_object_2d find_object_3d.launch.py \
   rgb_topic:=/camera/color/image_raw \
   depth_topic:=/camera/aligned_depth_to_color/image_raw \
   camera_info_topic:=/camera/color/camera_info
 
# Show 3D pose in camera frame:
ros2 run find_object_2d tf_example

See find_object_2d for more information (same parameters/topics are used between ROS1 and ROS2 versions).
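
For reference, the tf_example node above simply looks up the TF frames published by find_object_3d. A rough Python (rclpy) equivalent is sketched below; the frame names "camera_color_optical_frame" and "object_1" are assumptions that depend on your camera driver and on which objects you added.

# Rough, unofficial rclpy sketch of what tf_example does: look up the TF of a
# detected object and print its 3D position in the camera frame.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener

class ObjectTfListener(Node):
    def __init__(self):
        super().__init__('object_tf_listener')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        try:
            t = self.tf_buffer.lookup_transform(
                'camera_color_optical_frame', 'object_1', Time())
        except Exception as ex:
            # Object not detected yet, or TF not available
            self.get_logger().info('Waiting for object_1 TF: %s' % ex)
            return
        p = t.transform.translation
        self.get_logger().info('object_1: x=%.3f y=%.3f z=%.3f m' % (p.x, p.y, p.z))

def main():
    rclpy.init()
    rclpy.spin(ObjectTfListener())

if __name__ == '__main__':
    main()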

find-object's People

Contributors

awesome-manuel, cottsay, doumdi, lucasw, matlabbe, nuclearsandwich


find-object's Issues

GPU option is not working

Hello, I am using find_object_3d with ROS and a Kinect camera. How can I enable the GPU option for any of the detectors and descriptors?

3D position of the objects: error, exit code -11

Hi, I am using ROS Indigo and a Kinect (for Xbox 360).

I am trying this tutorial, and I ran into some problems with the 3D position of the objects.

roslaunch openni_launch openni.launch depth_registration:=true
roslaunch find_object_2d find_object_3d.launch
rosrun rviz rviz
Everything seems to work well, but when I tried to add an object, present the object, select the features extracted from it, and return to the main screen,

errors like these appeared:

[ WARN] [1459320214.341369045]: "object_10" passed to lookupTransform argument source_frame does not exist.
[ WARN] [1459320214.841072544]: Object 10 detected, center 2D at (459.562181,229.519383), but invalid depth, cannot set frame "object_10"! (maybe object is too near of the camera or bad depth image)

[ WARN] [1459320214.841452733]: "object_10" passed to lookupTransform argument source_frame does not exist.
[ WARN] [1459320215.336902142]: TF to MSG: Quaternion Not Properly Normalized
[ WARN] [1459320215.337123950]: TF to MSG: Quaternion Not Properly Normalized
[find_object_3d-1] process has died [pid 1978, exit code -11, cmd /opt/ros/indigo/lib/find_object_2d/find_object_2d rgb/image_rect_color:=camera/rgb/image_rect_color depth_registered/image_raw:=camera/depth_registered/image_raw depth_registered/camera_info:=camera/depth_registered/camera_info __name:=find_object_3d __log:=/home/exbot/.ros/log/8bddd226-f642-11e5-9962-78929c842512/find_object_3d-1.log].
log file: /home/exbot/.ros/log/8bddd226-f642-11e5-9962-78929c842512/find_object_3d-1*.log

How can I deal with this? Thanks!

Color independent object detection

Hi!
Your repo is awesome; I learned a lot about object detection by reading your code.

I am wondering if all descriptors are color dependent. While the logo with a brown background is detected, the one with a black background is not:
color dependent detection

I would rather detect the shape of my logo than the color contrast, so that I can detect it in any combination with different backgrounds.

Is that possible with the detection algorithms used by Find-Object, or will I have to use Haar cascades for that?

Thanks,
manu

Problem while running the package

I cloned the package and experimented with it successfully. Then I reinstalled ROS and OpenCV for various reasons. Now I have cloned the package again and compiled it successfully with catkin_make, but when I try to run a launch file I get the following error:

[ WARN](2015-08-06 23:22:31.328) Settings.cpp:960::createDescriptorExtractor() Find-Object is not built with OpenCV xfeatures2d module so Brief cannot be used!
[FATAL](2015-08-06 23:22:31.328) FindObject.cpp:56::FindObject() Condition (detector_ != 0 && extractor_ != 0) not met!


FATAL message occurred! Application will now exit.


Please clarify the issue.

Fps Rate

Hey,

Is there any option/parameter to display frame rates?

Kindly

catkin_make does not find QTransform

I'm trying to add a file hello_there.cpp to the src folder to make some workarounds. I am using the QTransform header (#include <QTransform>), so in the CMakeLists.txt I added the following lines right before the install section:

include_directories(include)
include_directories(${catkin_INCLUDE_DIRS})
add_executable(${PROJECT_NAME}_node_hello src/hello_there.cpp)
target_link_libraries(${PROJECT_NAME}_node_hello ${catkin_LIBRARIES})

When compiling, it threw this error:

hello_there.cpp:4:22: fatal error: QTransform: No such file or directory
 #include <QTransform>

So I added the Qt include directory from my Anaconda installation to the CMakeLists.txt:

include_directories(include)
include_directories(${catkin_INCLUDE_DIRS})
**include_directories(/home/myself/Progr/anaconda3/include/qt/QtGui)**
add_executable(${PROJECT_NAME}_node_hello src/hello_there.cpp)
target_link_libraries(${PROJECT_NAME}_node_hello ${catkin_LIBRARIES})

and afterwards, when compiling, it now complains with:

/anaconda3/include/qt/QtGui/qtransform.h:36:27: fatal error: QtGui/qmatrix.h: No such file or directory
#include <QtGui/qmatrix.h>

So I'm guessing it cannot find the complete path to the Qt libraries. I've tried with the include path of the Anaconda installation and with Qt itself:

  • include_directories(/home/myself/Progr/anaconda3/include/qt/QtGui)
  • include_directories(/home/myself/Progr/Qt/5.10.0/gcc_64/include/QtGui)

but neither of them makes any difference. I think it might have something to do with the environment variables, but I'm uncertain. Can you give me some advice?

Problem of "terminate called after throwing an instance" and failure to use "file -> load/save"

Hello,
This is a great ROS package.
However, I ran into two problems while using it:
(1)
Sometimes the program crashes after running for a while. The terminal then shows something like the screenshot below. How can I solve this problem?

screenshot 2017-03-27 17 27 27

(2)
When I try to click "File -> Load/Save", the program gets stuck for a while and shows nothing, as in the screenshot below.
screenshot 2017-03-27 17 22 20

Please help me solve these problems.
Thanks a lot!

Application doesn't seem to send position data on OS X over TCP

This looks good!

Is there a way to check whether the object position is published over TCP on a Mac? I tried Apple's Network Utility on IP 127.0.0.1 and ran a port scan, but no activity shows up on the port selected by Find-Object.

Background: I am trying to find objects with a webcam and pass the coordinates to a server running on the same Apple computer (probably a node.js server or Apache/PHP). I haven't configured the server yet, but I read that your application publishes recognized objects over TCP. Before I configure the server, I want to make sure that your application actually publishes over TCP.

screenshot 2016-01-13 18 18 02

Save photo/video on detection

Thanks for one of the easiest solutions for regular users.

Please tell me how I can save a photo/video when an object is detected from a webcam.

Thank You!
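
One way to do this without modifying Find-Object is to run a small external ROS node that watches the objects topic and saves the latest camera frame whenever it is non-empty. Below is a rough, unofficial Python sketch; the topic names ("image_raw", "objects") and the output path are assumptions to adapt to your setup.

#!/usr/bin/env python
# Unofficial sketch: save the current camera frame whenever find_object_2d
# reports at least one detection on the /objects topic.
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image
from std_msgs.msg import Float32MultiArray

bridge = CvBridge()
last_frame = None

def image_callback(msg):
    global last_frame
    last_frame = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')

def objects_callback(msg):
    # msg.data is empty when nothing is detected
    if msg.data and last_frame is not None:
        filename = '/tmp/detection_%d.jpg' % rospy.Time.now().to_nsec()
        cv2.imwrite(filename, last_frame)
        rospy.loginfo('Object detected, saved %s', filename)

if __name__ == '__main__':
    rospy.init_node('save_on_detection')
    rospy.Subscriber('image_raw', Image, image_callback)
    rospy.Subscriber('objects', Float32MultiArray, objects_callback)
    rospy.spin()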

OS X: Application won't recognize object when loaded from a file

The title says it all: an object that is recognized when added from the scene is not recognized when loaded from a file. I made sure to restore the default settings, but still no luck.

I am using the DMG v0.6.0 from the Releases page.

screenshot 2016-01-20 09 58 42

Steps to reproduce:

  1. Add object with "Add object from scene…"
  2. Make sure the object is recognized
  3. Remove object with right-click and "Delete"
  4. Re-add the object with "Add objects from files…"
  5. The object is not recognized

Error when compiling from source

When compiling from source with catkin_make, I was running into this issue:

Vocabulary.cpp:249:109: error: no matching function for call to ‘cv::flann::Index::build(cv::Mat&, cv::flann::LinearIndexParams, cvflann::flann_distance_t)’

Looking at that particular line in Vocabulary.cpp indicated that it was checking for a specific version of OpenCV. The version hardcoded there is 2.4.12 (https://github.com/introlab/find-object/blob/kinetic-devel/src/Vocabulary.cpp#L246) and I had 2.4.13. I fixed it by replacing the 12 with a 13 and the issue went away.

Just highlighting this, since the package does not declare a required OpenCV version.

Gatherer + Find-Object

Hi Mathieu,

Hope you are doing all well !

I found this interesting project: https://github.com/headupinclouds/gatherer. It is a cross-platform GPGPU OpenGL (ES) shader pipeline framework: ogles_gpgpu + Qt5 (UI). It is really convenient for compiling mobile or desktop versions of Qt5-based apps and leveraging the GPU for the GUI (the camera, in this case).

They have a GitHub account with several packages that are easy to include in a CMakeLists.txt:
GitHub account: https://github.com/hunter-packages

Adding find-object as a new package would be an interesting way to add value to the gatherer project. The only problem is the way the UI is templated in the current version of Find-Object. Maybe if we migrated the find-object UI to a responsive QML design, it would make things easier.

Did you ever work with the QML features of Qt 5?
What do you think about adding find-object to the gatherer project?
Would it help you to have a smoother visualization of the feature detection/extraction?

Cheers,
Luc Michalski

IndexParams has private copy constructor

Hello,
I've faced this issue while compiling the code from source; the error message is

"base class 'cv::flann::IndexParams' has private copy constructor"

I guess the latest OpenCV has changes that made the copy constructor of IndexParams private and disabled copying it, so this would require some code changes to make the function getFlannSearchParams return by reference instead of by value.

Problem with building

Hi.

I'm trying to build the find-object package for ROS (release v0.6.0, because of OpenCV 3), but compilation fails with this error:

In file included from /home/user/catkin_ws/src/find_object_2d/src/MainWindow.cpp:28:0:
/home/user/catkin_ws/src/find_object_2d/src/../include/find_object/MainWindow.h:35:29: fatal error: QtGui/QMainWindow: No such file or directory
 #include <QtGui/QMainWindow>

I use Ubuntu 14.04 and ROS Indigo.
What is wrong? How can I resolve it?
Thanks

3D object

Hello, I'm Nicolas from Chile.
I am using your rtabmap and find_object projects. Everything works perfectly, thank you very much for your work. The issue is that I'm trying to reproduce the 3D TF position with another recognition model. I'm really new to programming with ROS and not very good at it yet, so I was wondering if it is possible to maintain the position obtained with find_object_3d.launch and RTAB-Map in rviz.

The purpose of this is to take your approach to another model for recognizing people, and then to other projects in the future.
I'm currently using a package with HOG for head recognition, to which I will add 3D positioning maintained over time to be displayed in rviz.

thanks,

How to use Find-Object GUI

Hi everyone,
I am using the Find-Object GUI and it can transmit information successfully.
Now I want to add an object. I know I should go to "Edit" -> "Add object...",
however, my Find-Object GUI only has one window, shown below:
screenshot from 2017-06-20 16 56 44

I cannot find the other buttons here. Could someone tell me how to make the GUI show the menus so that I can add an object?
Thanks in advance.

Launch action when object is detected

Hello, great app!

I've been adding message boxes and, recently, a blinking LED when the object is detected. Everything works great when GUI mode is on, but when using --console my actions seem to be ignored.

Where can I insert my action so that it is not ignored in console mode?

Edit: I can't get autoScreenshot to work in console mode either.

No object detected in find_object_3d.launch with Kinect 1

Hi,
I use a Kinect 1 and I am trying to detect my stool at 1 m, but it is not detected even after I move it to 0.5 m. I found that the camera frequency in the find_object GUI is 0 Hz. I didn't modify the find_object_3d launch file. I've tried your robot-mapping-with-find-object example with the help of rosbag play demo_find_object.bag; in that example, object_8 is detected. Could you give me some advice on how to run the example successfully in a real environment?
Thank you in advance!
find_object with Kinect 1

Third-party detectors/descriptors

This post is for suggestions of new third-party feature detectors/descriptors that could be added to Find-Object.

Add them as new posts and I will summarize them in this first post (below) to track which are implemented in Find-Object and which are not.

List of third-party detectors

Multi Threading of the findObject object / Performances

Hi Mathieu,

Hope you are doing well !

I created another tool (httpRequest) for Find-Object based on the Tufao web server (https://github.com/vinipsmaker/tufao/tree/0.x, as Find-Object is using Qt 4.8). It is able to parse the request body of a POST request and forward it to the findObject server over TCP:

  1. Server start (port 1979):
    ./find_object --config ../config/find_object.ini --objects ../scene/dataset_1 --console --debug &
  2. Web service start (includes the tcpResponse class and creates a connection to port 1979):
    curl -s -X POST -H "Content-Type: multipart/form-data" -F "[email protected]" http://localhost:8080/findObject | jq .

Output:
{
  "Status": "0",
  "Info": [
    "OpenCV version : 3.0.0-dev",
    "Major version : 3",
    "Minor version : 0",
    "Subminor version : 0",
    "getNumberOfCPUs: 40",
    "getNumThreads: 8",
    "getThreadNum: -667818048",
    "getTickFrequency: 1e+09",
    "getCPUTickCount: 7389729757512303",
    "useOptimized? 1",
    "Scene already having one channel and format CV_8U, (1 ms)"
  ],
  "Message": [
    "No objects detected. (2 ms)"
  ]
}

What does it do? The web service applies several image pre-processing steps: extracting EXIF information if the input is a JPEG, resizing, cropping a region of interest, and finding blobs or squares :-)

I wanted to increase the number of requests per second for the web service, but we cannot have more than one request per port. So I was wondering how we could multi-thread or shard the findObject object created here: https://github.com/introlab/find-object/blob/master/app/main.cpp#L463.

Why? In order to increase performance and the size of the inverted vocabulary. I was thinking of using nanomsg to create a recursive tree of ports to bind this service (either in parallel or recursively), but I need to figure out how to scale the number of requests by binding several ports behind a sort of internal load balancer, with several copies of the vocabulary.

1. Is there a way to pass a parameter to the TCP socket to create a Q_SLOT that spawns a child copy of a findObject object (if we can create several) with a fixed size (as for the YAML vocabulary loading)? Or to make the findObject object shared?

2. I often get "terminate called after throwing an instance of 'std::out_of_range'" when the homography is computed in Find-Object with the latest version. Could it be due to some parameters in config.ini?

3. Do you have any thoughts on how to multi-thread, or create between 1 and 5 workers for, the findObject app service without being too intrusive?

4. Have you seen the DBoW2 repo providing an enhanced hierarchical bag-of-words (https://github.com/thierrymalon/DBoW2/tree/sift)? It implements a hierarchical tree for approximate nearest neighbours in the image feature space and creates a visual vocabulary (using dlib for the machine learning).

5. Do you have RootSIFT (http://www.pyimagesearch.com/2015/04/13/implementing-rootsift-in-python-and-opencv/), colour feature descriptors (https://github.com/eokeeffe/quasi_invariant-features), or MODS (http://cmp.felk.cvut.cz/wbs/index.html) on your roadmap for Find-Object?

6. How complicated would it be to migrate to Qt5? ^^

P.S. I will make a cool web service :-) Happy Thanksgiving!

Cheers,
Luc Michalski

Why can't some features be selected?

Hi,
As shown in the image, some features can't be selected, such as SURF, SIFT, etc. Why is that? Do I need to alter your source code in order to make them selectable?
img_20170412_133442
I'm running the package on Ubuntu 14.04 LTS, ROS Indigo.
Thx

Failure to detect ros/catkin build

When building it with, e.g.,

catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic

a non-ROS version is built. So detecting a catkin build with

CATKIN_TOPLEVEL OR CATKIN_BUILD_BINARY_PACKAGE OR CATKIN_SKIP_TESTING OR CATKIN_ENABLE_TESTING

is obviously not enough. As a workaround, -DCATKIN_SKIP_TESTING=1 can be set on the command line.

Group objects together

For example, this could be useful when we have multiple views of the same object. We could add all these views under the same group (super object).

Error occurs at the end of adding an object from a taken picture

Hi, I'm using roslaunch find_object_2d find_object_3d_kinect2.launch to find objects, but an error occurs at the end of adding an object from a taken picture. Here is some of the error output:

[ INFO] [1478596610.537800121]: gui=1
[ INFO] [1478596610.537872437]: objects_path=
[ INFO] [1478596610.537898265]: session_path=
[ INFO] [1478596610.537921191]: settings_path=~/.ros/find_object_2d.ini
[ INFO] [1478596610.537942808]: subscribe_depth = true
[ INFO] [1478596610.542034484]: object_prefix = object
[ INFO] [1478596610.543486549]: find_object_ros: queue_size = 10
OpenCV Error: Unsupported format or combination of formats (type=0
) in buildIndex_, file /home/ncrc6/opencv-2.4.13/modules/flann/src/miniflann.cpp, line 315
Qt has caught an exception thrown from an event handler. Throwing
exceptions from an event handler is not supported in Qt. You must
reimplement QApplication::notify() and catch all exceptions there.

terminate called after throwing an instance of 'cv::Exception'
what(): /home/ncrc6/opencv-2.4.13/modules/flann/src/miniflann.cpp:315: error: (-210) type=0
in function buildIndex_

[find_object_3d-1] process has died [pid 6662, exit code -6, cmd /home/ncrc6/AutoObjSearch_ws/devel/lib/find_object_2d/find_object_2d rgb/image_rect_color:=/kinect2/qhd/image_color_rect depth_registered/image_raw:=/kinect2/qhd/image_depth_rect depth_registered/camera_info:=/kinect2/qhd/camera_info __name:=find_object_3d __log:=/home/ncrc6/.ros/log/0bc64bf2-a594-11e6-8b6f-b808cf28480e/find_object_3d-1.log].
log file: /home/ncrc6/.ros/log/0bc64bf2-a594-11e6-8b6f-b808cf28480e/find_object_3d-1*.log

Could you give me some advice? Thank you!

cannot get object using tf in rviz

Hi, I have done everything exactly as described, but using rviz I am not able to see the object from the camera link. Any suggestions?

Please help.

no objects loaded from path

Hello,

At first, when I used the Find-Object program, everything worked fine. I think I accidentally closed the add-object session, and afterwards, when I try to restart the program, I get the following error:

alex@alex-UX303LAB:$ rosrun find_object_2d find_object_2d image:=image_raw
[ INFO] [1457440617.750504616]: gui=0
[ INFO] [1457440617.750581664]: objects_path=
/objects
[ INFO] [1457440617.750599829]: session_path=
[ INFO] [1457440617.750615618]: settings_path=~/.ros/find_object_2d.ini
[ INFO] [1457440617.750630166]: subscribe_depth = false
[ INFO] [1457440617.750645655]: obj_frame_prefix = object
[ERROR] [1457440617.754217255]: No objects loaded from path "/home/alex/objects"

I can still start the program with the Kinect v2 but I still can't manage to add pictures.

can t add picture

I already tried reinstalling, but unfortunately it didn't work.

Find-Object shows nothing

Hi there,
I am trying to get the 3D position of objects with a Kinect v2.
I can compile iai_kinect2-master and find_object_2d in my workspace.
Then I followed the example at http://wiki.ros.org/find_object_2d#A3D_position_of_the_objects

After roslaunch kinect2_bridge kinect2_bridge.launch publish_tf:=true
I got
screenshot from 2017-06-19 15 10 44

It seems correct!

After roslaunch find_object_2d find_object_3d_kinect2.launch
I got
screenshot from 2017-06-19 15 15 41

Then I used image_view to check. It shows:
screenshot from 2017-06-19 16 59 29

Does anyone know what the problem is or which part I missed?
Any ideas are appreciated!
Thanks!

topic reading using python

I just need to track two objects and print their x, y and z positions in the terminal. How do I access the TF topic using simple Python code? I'm using a Kinect and this is just my first month using ROS. Your program is working perfectly, and using rostopic echo tf I can see the results of the tracking.

Thanks for your work.
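
For reference, a minimal Python sketch of this (not part of the package) is shown below. It assumes the default object_prefix ("object") and a Kinect camera frame ("camera_rgb_optical_frame"); adjust both names to your setup.

#!/usr/bin/env python
# Unofficial sketch: print the x, y, z position of the TF frames published by
# find_object_3d for two objects. Frame names are assumptions.
import rospy
import tf

if __name__ == '__main__':
    rospy.init_node('print_object_positions')
    listener = tf.TransformListener()
    object_frames = ['object_1', 'object_2']  # the two objects to track
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        for frame in object_frames:
            try:
                trans, rot = listener.lookupTransform(
                    'camera_rgb_optical_frame', frame, rospy.Time(0))
                rospy.loginfo('%s: x=%.3f y=%.3f z=%.3f', frame, *trans)
            except (tf.LookupException, tf.ConnectivityException,
                    tf.ExtrapolationException):
                pass  # frame not published (object not currently detected)
        rate.sleep()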

Having trouble compiling in Ubuntu 14.04

apt-get couldn't find libopencv2.4-dev, but I already had libopencv-dev 2.4.8.
When I issue 'make' I get most of the way through the compile, but then it throws this error:
Linking CXX executable ../../bin/find_object
/usr/lib/x86_64-linux-gnu/libopencv_highgui.so.2.4.8: undefined reference to `TIFFIsTiled@LIBTIFF_4.0'

I've tried installing the TIFF development library:
sudo apt-get install libtiff4-dev

Is there some other version of OpenCV that's needed, or am I missing a different library? Thanks.

find_object crashes after adding object to scene or adding object by files

Hi, I'm having trouble running find-object as it keeps crashing and I keep getting the following error:

OpenCV Error: Unsupported format or combination of formats (type=0
) in buildIndex_, file /home/user/opencv-2.4.13/modules/flann/src/miniflann.cpp, line 315
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/user/opencv-2.4.13/modules/flann/src/miniflann.cpp:315: error: (-210) type=0
 in function buildIndex_

[find_object_3d-1] process has died [pid 4635, exit code -6, 
cmd /home/user/catkin_ws/devel/lib/find_object_2d/find_object_2d rgb/image_rect_color:=/kinect2/qhd/image_color_rect depth_registered/image_raw:=/kinect2/qhd/image_depth_rect 
depth_registered/camera_info:=/kinect2/qhd/camera_info __name:=find_object_3d __log:=/home/user/.ros/log/e0a800d0-f234-11e6-91b5-e82aeaab80ce/find_object_3d-1.log].
log file: /home/user/.ros/log/e0a800d0-f234-11e6-91b5-e82aeaab80ce/find_object_3d-1*.log

This happens whenever I import my object or use the take picture option to add an object. I tried the suggestions at the following link and this is what my setup currently looks like:
image

Could you please kindly help?

Using this package in Raspberry Pi

Hi there!

I have been using this project with my laptop and everything has worked perfectly from the very beginning.

I'm now trying to replace the laptop with a Raspberry Pi running ROS Indigo, but I realised that the RPi has no PCL libraries for ROS, so the project does not compile.

Is there any option for splitting the project so that it can run without PCL on a RPi?

Thanks in advance

3d position error

Hi, I am using ROS Indigo and a RealSense SR300. Sometimes I can get the 3D position of an object, but sometimes I have problems with it:
[ INFO] [1475113598.537434025]: Object_64 [x,y,z] [x,y,z,w] in "/map" frame: [0.139000,0.033985,0.016190] [-0.981934,1.133853,-0.013125,1.188766]
[ INFO] [1475113598.537483947]: Object_64 [x,y,z] [x,y,z,w] in "camera_rgb_optical_frame" frame: [-0.033985,-0.016190,0.139000] [0.000000,0.000000,-0.013125,1.188766]
[ WARN] [1475113598.740891659]: Object 64 detected, center 2D at (202.716570,183.286647), but invalid depth, cannot set frame "object_64"! (maybe object is too near of the camera or bad depth image)
and when I run rviz I find that the depth information is good.

Problem with Kinect

Hi!
I'm really trying to work this out. I'm using a Kinect, but I'm not able to see the video stream; I only see a black window where the video should go.

$ roslaunch openni_launch openni.launch depth_registration:=true
$ roslaunch find_object_2d find_object_3d.launch

Those are the steps that I follow. (By the way, my Kinect is working fine.) This is what I get in the terminal:

... logging to /home/espe/.ros/log/63ab14e8-b732-11e5-ae9d-80000b6729ec/roslaunch-espe-desktop-25577.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://espe-desktop:33630/

SUMMARY

PARAMETERS

  • /find_object_3d/gui: True
  • /find_object_3d/object_prefix: object
  • /find_object_3d/objects_path:
  • /find_object_3d/settings_path: ~/.ros/find_objec...
  • /find_object_3d/subscribe_depth: True
  • /rosdistro: indigo
  • /rosversion: 1.11.16
  • /tf_example/map_frame_id: /map
  • /tf_example/object_prefix: object

NODES
/
base_to_camera_tf (tf/static_transform_publisher)
find_object_3d (find_object_2d/find_object_2d)
map_to_odom_tf (tf/static_transform_publisher)
odom_to_base_tf (tf/static_transform_publisher)
tf_example (find_object_2d/tf_example)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[find_object_3d-1]: started with pid [25595]
process[tf_example-2]: started with pid [25596]
process[base_to_camera_tf-3]: started with pid [25597]
process[odom_to_base_tf-4]: started with pid [25598]
process[map_to_odom_tf-5]: started with pid [25604]
[ INFO] [1452386343.440324495]: gui=1
[ INFO] [1452386343.440474126]: objects_path=
[ INFO] [1452386343.440533616]: session_path=
[ INFO] [1452386343.440592446]: settings_path=~/.ros/find_object_2d.ini
[ INFO] [1452386343.440650247]: subscribe_depth = true
[ INFO] [1452386343.440705962]: obj_frame_prefix = object
[ INFO] [1452386343.446738430]: find_object_ros: queue_size = 10
[ WARN](2016-01-09 19:39:05.346) FindObject.cpp:520::run() no features detected in object -1 !?!


Thanks in advance!

Cannot open SURF/SIFT in Find_object_2d

Hey, I am trying to use find_object_2d with RTAB-Map and it works fine, except that I cannot use the SURF or SIFT algorithms under the Feature2D tab in the Find-Object UI. I built OpenCV from source and cloned the opencv_contrib modules folder. Why are SURF and SIFT not showing up, and how could I get them to show up? Thanks.

Also, when I run sudo apt-cache search opencv, some of the lines printed are:

libopencv-nonfree2.4 - computer vision contrib library
libopencv-nonfree-dev - development files for libopencv-nonfree

This means I also have the nonfree module. Why is find_object_2d not picking up any of these so that I can use SURF/SIFT? Thanks.

Also, my only nearest-neighbour strategy options are BruteForce and Lsh (the others, such as KMeans, are not available). How would I go about adding these?

Best,

Gabriel

Unable to detect object using Kinect v2 in Ros indigo

I was successfully able to add the object from the scene using the Find-Object GUI, but it is not able to detect the object.

I am using the following two commands, as given in the documentation:

roslaunch kinect2_bridge kinect2_bridge.launch publish_tf:=true
roslaunch find_object_2d find_object_3d_kinect2.launch

Please help.

Can I use webcam for detecting an object in rviz ?

I cloned find-object and ran the following commands properly using a webcam.
I detected the object, and the detected object is published with its position, rotation, scale and shear.

roscore
rosrun uvc_camera uvc_camera_node &
rosrun find_object_2d find_object_2d image:=image_raw
rosrun find_object_2d print_objects_detected

Can I use a normal webcam for detecting an object in rviz? I mean, can we display the object position (x, y) in rviz?

Using Kinect on find_object_2d

Hi, I'm running find_object_2d on ROS Indigo, Ubuntu 14.04.3 LTS.

As the instructions say,

  1. roscore &

    Launch your preferred usb camera driver

  2. rosrun uvc_camera uvc_camera_node &
  3. rosrun find_object_2d find_object_2d image:=image_raw

Because my device is a Kinect,

I replaced step 2 with roslaunch freenect_launch freenect.launch,

and step 3 with rosrun find_object_2d find_object_2d image:=rgb/image_raw,

but nothing shows up on the Find-Object screen, which says:


Find-Object subscribed to /image_raw topic.
You can remap the topic when starting the node:
"rosrun find_object_2d find_object_2d image:=your/image/topic".


Is there anything missing or wrong in how I am running it?

Please let me know.

Thank you for reading.

3D position of objects using new Kinect

The 3D positioning of objects using a Kinect seems to rely on OpenNI, which does not support the Kinect One. Is there a workaround for this issue?

By the way, I'm running the Kinect One on a Windows PC and use k2_client and k2_server to publish the Kinect data (rgb, depth, ir, body) to ROS on an Ubuntu laptop. Can find_object_2d extract the 3D position of an object using only these data topics?

Unable to start the application after building the solution.

The versions I am using are Visual Studio 2013 Ultimate, CMake 3.6.1, Qt 4.8.7, and Windows 10.
Unfortunately, I am getting these errors at the following line of code (line 160 of Vocabulary.cpp):
tmpIndex.build(notIndexedDescriptors_, cv::flann::LinearIndexParams(), Settings::getFlannDistanceType());

The errors are:
no suitable conversion from cv::flann::LinearIndexParams() to const cv::_InputArray
no suitable conversion from cv::flann::flann_distance_t to cv::IndexParams

and

image
