
yolov7-ros's Introduction

ROS package for official YOLOv7

YOLOv7 Berkeley DeepDrive

This repo contains a ROS noetic package for the official YOLOv7. It wraps the official implementation into a ROS node (so most credit goes to the YOLOv7 creators).

Note

There are currently two YOLOv7 variants out there. This repo contains the implementation from the paper YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors.

Requirements & Getting Started

The following ROS packages are required (see the find_package list in CMakeLists.txt): vision_msgs, geometry_msgs.

First, clone the repo into your catkin workspace and build the package:

cd ~/catkin_ws/src
git clone https://github.com/lukazso/yolov7-ros.git
cd ~/catkin_ws
catkin build yolov7_ros

The Python requirements are listed in requirements.txt. You can install them with:

pip install -r requirements.txt

Download the YOLOv7 weights from the official repository.

Berkeley DeepDrive weights: I trained YOLOv7 with a basic hyperparameter set (no special hyperparameter optimization) on the Berkeley DeepDrive dataset. You can download the weights here.

The package has been tested under Ubuntu 20.04 and Python 3.8.10.

Usage

Before you launch the node, adjust the parameters in the launch file. For example, you need to set the path to your YOLOv7 weights and the image topic this node should subscribe to. The launch file also contains a description of each parameter.

roslaunch yolov7_ros yolov7.launch

Each time a new image is received, it is fed into YOLOv7.

Visualization

You can visualize the YOLO results by setting the visualize flag in the launch file. With the classes_path parameter you can also provide a .txt file containing the class labels; example files are provided in berkeley.txt and coco.txt.

Notes

  • The detections are published using the vision_msgs/Detection2DArray message type.
  • The detections will be published under /yolov7/out_topic.
  • If you set the visualize parameter to true, the detections will be drawn into the image, which is then published under /yolov7/out_topic/visualization.

Coming Soon

  • ROS2 implementation

yolov7-ros's People

Contributors

jap3th, lukazso


yolov7-ros's Issues

Change input image format and size

Hello everyone.

It seems that the input image is not recognized if the image format and image size are as follows.
Are there any countermeasures?
Should the input image be modified in preprocessing?
(ROS: Noetic; weights: YOLOv7.pt)

[OK]
msgs : sensor_msgs/Image
image format : rgb8
image height : 480
image width : 640

[NG]
msgs : sensor_msgs/Image
image format : mono16
image height : 64
image width : 1024

Should I change the "param img_size"?
[ detect_ros.py ]
:param img_size: (height, width) to which the img is resized before being
fed into the yolo network. Final output coordinates will be rescaled to
the original img size.

Thank you.
Y.Aoki
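A possible workaround (not part of this package; a minimal sketch assuming the detector expects a 3-channel color image) is to convert the mono16 image to bgr8 before it reaches the node, e.g. in a small relay:

    import cv2
    import numpy as np
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def mono16_to_bgr8(img_msg):
        # 16-bit single-channel ROS image -> 8-bit OpenCV image
        img16 = bridge.imgmsg_to_cv2(img_msg, desired_encoding='mono16')
        # simple min-max scaling down to 8 bit; adjust to your sensor's range if needed
        img8 = cv2.normalize(img16, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # replicate the single channel so the network receives a 3-channel image
        return cv2.cvtColor(img8, cv2.COLOR_GRAY2BGR)

Regarding img_size: per the docstring quoted above, it only controls the size the image is resized to before inference, and the output is rescaled back to the original resolution, so it does not have to match the 64x1024 input exactly.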

vision_msgs

I got an error during catkin build.
Can you help me solve it?

Errors << yolov7_ros:cmake /home/aiman/catkin_ws/logs/yolov7_ros/build.cmake.003.log
CMake Error at /opt/ros/melodic/share/catkin/cmake/catkinConfig.cmake:83 (find_package):
Could not find a package configuration file provided by "vision_msgs" with
any of the following names:

vision_msgsConfig.cmake
vision_msgs-config.cmake

Add the installation prefix of "vision_msgs" to CMAKE_PREFIX_PATH or set
"vision_msgs_DIR" to a directory containing one of the above files. If
"vision_msgs" provides a separate development package or SDK, be sure it
has been installed.
Call Stack (most recent call first):
CMakeLists.txt:10 (find_package)
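For reference, on a binary ROS installation the missing dependency can usually be installed from apt (assuming ROS Melodic, matching the paths in the log above; substitute your distro name otherwise), then re-run the build:

    sudo apt install ros-melodic-vision-msgs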

Issues and resolution when installing yolov7_ros

  1. 'catkin build' was not available in my ROS installation. I used catkin_make instead.

  2. The 'msg' folder is not found. Just create it under yolov7_ros.

  3. Requirements were not satisfied because I had an older version of numpy.
    I fixed with:

         sudo apt-get purge numpy
         sudo pip3 install numpy
         sudo pip3 install scipy
         sudo pip3 install -U scikit-learn
    

    This updated the installed versions, but resulted in numpy 1.24.0 (the latest at the time). That is too new - I needed a version < 1.23.0, so I reinstalled version 1.22.4 explicitly.

       pip3 install --force-reinstall numpy==1.22.4
    
  4. Download the weights from the original implementation. It isn't immediately obvious, but the weights files are the blue hyperlinks in the table. I used yolov7.pt as I have a reasonable GPU, and placed the weights in a 'weights' folder under yolov7_ros.

  5. Download the class labels. The file supplied is wrong. The correct file is here. (Thanks to Isaac Sheidlower). I renamed the file to coco80.txt.

  6. Edit yolov7_ros/launch/yolov7.launch.

I added a section to start the usb camera node.

      <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen" >
          <param name="video_device" value="/dev/video0" />
          <param name="image_width" value="640" />
          <param name="image_height" value="480" />
          <param name="pixel_format" value="yuyv" />
          <param name="camera_frame_id" value="usb_cam" />
          <param name="io_method" value="mmap"/>
      </node>

I also edited weights_path, classes_path and img_topic:

    <param name="weights_path" type="str"
    value="/home/paul/catkin_ws/src/yolov7_ros/weights/yolov7.pt"/>
    <!-- Path to a class_labels.txt file containing your desired class labels. The i-th entry corresponds to the i-th class id. For example, in coco class label 0 corresponds to 'person'. Files for the coco and berkeley deep drive datasets are provided in the 'class_labels/' directory. If you leave it empty then no class labels are visualized.-->
    <param name="classes_path" type="str" value="/home/paul/catkin_ws/src/yolov7_ros/class_labels/coco80.txt" />
    <!-- topic name to subscribe to -->
    <param name="img_topic" type="str" value="/usb_cam/image_raw" />
  7. When launching, the initialisation fails because it cannot download a file. The solution is to change the attempt_download function in yolov7_ros/utils/google_utils.py. Thanks to robertokcanale for this working version:
def attempt_download(file, repo='WongKinYiu/yolov7'): 
    # Attempt file download if does not exist 
    file = Path(str(file).strip().replace("'", '').lower()) 
    
    if not file.exists(): 
        try: 
            response = requests.get(f'https://api.github.com/repos/{repo}/releases').json()  # github api 
            for x in response: 
                for key, value in x.items(): 
                    #print("Key: ", key) 
                    if key == "assets": 
                        assets = [n['name'] for n in x['assets']]  # release assets 
                    if key == "tag_name": 
                        tag = x['tag_name']  # i.e. v1.0 
        except:  # fallback plan 
            assets = ['yolov7.pt'] 
            tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]

        name = file.name
        if name in assets:
            msg = f'{file} missing, try downloading from https://github.com/{repo}/releases/'
            redundant = False  # second download option
            try:  # GitHub
                url = f'https://github.com/{repo}/releases/download/{tag}/{name}'
                print(f'Downloading {url} to {file}...')
                torch.hub.download_url_to_file(url, file)
                assert file.exists() and file.stat().st_size > 1E6  # check
            except Exception as e:  # GCP
                print(f'Download error: {e}')
                assert redundant, 'No secondary mirror'
                url = f'https://storage.googleapis.com/{repo}/ckpt/{name}'
                print(f'Downloading {url} to {file}...')
                os.system(f'curl -L {url} -o {file}')  # torch.hub.download_url_to_file(url, weights)
            finally:
                if not file.exists() or file.stat().st_size < 1E6:  # check
                    file.unlink(missing_ok=True)  # remove partial downloads
                    print(f'ERROR: Download failure: {msg}')
                print('')
                return
  8. You should now be able to launch yolov7_ros, and can try to view the output using rqt_image_view on the topic yolov7/yolov7/visualization. I saw lots of error messages saying:

ImageView.callback_image() could not convert image from '8UC3' to 'rgb8' ([8UC3] is not a color format. but [rgb8] is. The conversion does not make sense)

This was fixed by editing yolov7_ros/src/detect_ros.py, line 165 to say:

        vis_msg = self.bridge.cv2_to_imgmsg(vis_img, encoding="bgr8")

I hope these tips help somebody else get yolov7 working faster than I did!

Low Accuracy

Hi,

I've been trying the repo for a while with yolov7-tiny.pt and noticed that the accuracy of detect_ros.py is much lower than that of the original detect.py. As far as I can see, there are a lot of code differences in the detect_ros.py version due to the ROS constraints. Could you please look into this? I don't know if anyone has encountered this problem before.

[ERROR] [1658410511.923109]: bad callback: <bound method Yolov7Publisher.process_img_msg

[ERROR] [1658410511.923109]: bad callback: <bound method Yolov7Publisher.process_img_msg of <main.Yolov7Publisher object at 0x7fa623cb9630>>
Traceback (most recent call last):
File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
cb(msg)
File "/home/jason/d455_ws/src/yolov7-ros/src/detect_ros.py", line 113, in process_img_msg
img_msg, desired_encoding='passthrough'
File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 163, in imgmsg_to_cv2
dtype, n_channels = self.encoding_to_dtype_with_channels(img_msg.encoding)
File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 99, in encoding_to_dtype_with_channels
return self.cvtype2_to_dtype_with_channels(self.encoding_to_cvtype2(encoding))
File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 91, in encoding_to_cvtype2
from cv_bridge.boost.cv_bridge_boost import getCvType
ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)
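This ImportError typically appears on Melodic when the Python 2 build of cv_bridge is imported from a Python 3 node. A commonly used workaround (a sketch only, assuming Python 3.6 and catkin_tools; paths differ between systems) is to build cv_bridge from the vision_opencv sources against Python 3:

    cd ~/catkin_ws/src
    git clone -b melodic https://github.com/ros-perception/vision_opencv.git
    cd ~/catkin_ws
    catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 \
                  -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m \
                  -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so
    catkin build cv_bridge
    source ~/catkin_ws/devel/setup.bash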

Error when invoking catkin_make

Hi! I am Dion.
I tried this repo and followed the steps in the README file until I reached the compile step using catkin_make.
The error says:

-- +++ processing catkin package: 'yolov7_ros'
-- ==> add_subdirectory(yolov7-ros)
-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Error at /opt/ros/noetic/share/genmsg/cmake/genmsg-extras.cmake:94 (message):
  add_message_files() directory not found:
  /home/username/grasping_w/simulation_ws/src/yolov7-ros/msg
Call Stack (most recent call first):
  yolov7-ros/CMakeLists.txt:54 (add_message_files)


-- Configuring incomplete, errors occurred!
See also "/home/dionb/panda_deep_grasping/simulation_ws/build/CMakeFiles/CMakeOutput.log".
make: *** [Makefile:3908: cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed

It seems the workspace is missing the workspace/src/yolov7-ros/msg folder. I wonder if the msg folder was accidentally removed, or if the message generation in CMakeLists.txt was left uncommented by mistake.

Could you check my CMakeLists.txt to find the error?

Here is my CMakeLists.txt file I clone from this repository:

cmake_minimum_required(VERSION 3.0.2)
project(yolov7_ros)

## Compile as C++11, supported in ROS Kinetic and newer
# add_compile_options(-std=c++11)

## Find catkin macros and libraries
## if COMPONENTS list like find_package(catkin REQUIRED COMPONENTS xyz)
## is used, also find other catkin packages
find_package(catkin REQUIRED COMPONENTS
  geometry_msgs
  roscpp
  rospy
  sensor_msgs
  std_msgs
  vision_msgs
  message_generation
)

## System dependencies are found with CMake's conventions
# find_package(Boost REQUIRED COMPONENTS system)


## Uncomment this if the package has a setup.py. This macro ensures
## modules and global scripts declared therein get installed
## See http://ros.org/doc/api/catkin/html/user_guide/setup_dot_py.html
# catkin_python_setup()

################################################
## Declare ROS messages, services and actions ##
################################################

## To declare and build messages, services or actions from within this
## package, follow these steps:
## * Let MSG_DEP_SET be the set of packages whose message types you use in
##   your messages/services/actions (e.g. std_msgs, actionlib_msgs, ...).
## * In the file package.xml:
##   * add a build_depend tag for "message_generation"
##   * add a build_depend and a exec_depend tag for each package in MSG_DEP_SET
##   * If MSG_DEP_SET isn't empty the following dependency has been pulled in
##     but can be declared for certainty nonetheless:
##     * add a exec_depend tag for "message_runtime"
## * In this file (CMakeLists.txt):
##   * add "message_generation" and every package in MSG_DEP_SET to
##     find_package(catkin REQUIRED COMPONENTS ...)
##   * add "message_runtime" and every package in MSG_DEP_SET to
##     catkin_package(CATKIN_DEPENDS ...)
##   * uncomment the add_*_files sections below as needed
##     and list every .msg/.srv/.action file to be processed
##   * uncomment the generate_messages entry below
##   * add every package in MSG_DEP_SET to generate_messages(DEPENDENCIES ...)

## Generate messages in the 'msg' folder
add_message_files(
 FILES
)

## Generate services in the 'srv' folder
# add_service_files(
#   FILES
#   Service1.srv
#   Service2.srv
# )

## Generate actions in the 'action' folder
# add_action_files(
#   FILES
#   Action1.action
#   Action2.action
# )

## Generate added messages and services with any dependencies listed here
generate_messages(
  DEPENDENCIES
  geometry_msgs
  sensor_msgs
  std_msgs
  vision_msgs
  yolov7_ros
)

################################################
## Declare ROS dynamic reconfigure parameters ##
################################################

## To declare and build dynamic reconfigure parameters within this
## package, follow these steps:
## * In the file package.xml:
##   * add a build_depend and a exec_depend tag for "dynamic_reconfigure"
## * In this file (CMakeLists.txt):
##   * add "dynamic_reconfigure" to
##     find_package(catkin REQUIRED COMPONENTS ...)
##   * uncomment the "generate_dynamic_reconfigure_options" section below
##     and list every .cfg file to be processed

## Generate dynamic reconfigure parameters in the 'cfg' folder
# generate_dynamic_reconfigure_options(
#   cfg/DynReconf1.cfg
#   cfg/DynReconf2.cfg
# )

###################################
## catkin specific configuration ##
###################################
## The catkin_package macro generates cmake config files for your package
## Declare things to be passed to dependent projects
## INCLUDE_DIRS: uncomment this if your package contains header files
## LIBRARIES: libraries you create in this project that dependent projects also need
## CATKIN_DEPENDS: catkin_packages dependent projects also need
## DEPENDS: system dependencies of this project that dependent projects also need
catkin_package(
    CATKIN_DEPENDS message_runtime
)
#  INCLUDE_DIRS include
#  LIBRARIES yolov7_ros
#  CATKIN_DEPENDS geometry_msgs roscpp rospy sensor_msgs std_msgs vision_msgs
#  DEPENDS system_lib

###########
## Build ##
###########

## Specify additional locations of header files
## Your package locations should be listed before other locations
include_directories(
# include
  ${catkin_INCLUDE_DIRS}
)

## Declare a C++ library
# add_library(${PROJECT_NAME}
#   src/${PROJECT_NAME}/yolov7_ros.cpp
# )

## Add cmake target dependencies of the library
## as an example, code may need to be generated before libraries
## either from message generation or dynamic reconfigure
# add_dependencies(${PROJECT_NAME} ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

## Declare a C++ executable
## With catkin_make all packages are built within a single CMake context
## The recommended prefix ensures that target names across packages don't collide
# add_executable(${PROJECT_NAME}_node src/yolov7_ros_node.cpp)

## Rename C++ executable without prefix
## The above recommended prefix causes long target names, the following renames the
## target back to the shorter version for ease of user use
## e.g. "rosrun someones_pkg node" instead of "rosrun someones_pkg someones_pkg_node"
# set_target_properties(${PROJECT_NAME}_node PROPERTIES OUTPUT_NAME node PREFIX "")

## Add cmake target dependencies of the executable
## same as for the library above
# add_dependencies(${PROJECT_NAME}_node ${${PROJECT_NAME}_EXPORTED_TARGETS} ${catkin_EXPORTED_TARGETS})

## Specify libraries to link a library or executable target against
# target_link_libraries(${PROJECT_NAME}_node
#   ${catkin_LIBRARIES}
# )

#############
## Install ##
#############

# all install targets should use catkin DESTINATION variables
# See http://ros.org/doc/api/catkin/html/adv_user_guide/variables.html

## Mark executable scripts (Python etc.) for installation
## in contrast to setup.py, you can choose the destination
# catkin_install_python(PROGRAMS
#   scripts/my_python_script
#   DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )

## Mark executables for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_executables.html
# install(TARGETS ${PROJECT_NAME}_node
#   RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
# )

## Mark libraries for installation
## See http://docs.ros.org/melodic/api/catkin/html/howto/format1/building_libraries.html
# install(TARGETS ${PROJECT_NAME}
#   ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
#   LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
#   RUNTIME DESTINATION ${CATKIN_GLOBAL_BIN_DESTINATION}
# )

## Mark cpp header files for installation
# install(DIRECTORY include/${PROJECT_NAME}/
#   DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
#   FILES_MATCHING PATTERN "*.h"
#   PATTERN ".svn" EXCLUDE
# )

## Mark other files for installation (e.g. launch and bag files, etc.)
# install(FILES
#   # myfile1
#   # myfile2
#   DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
# )

#############
## Testing ##
#############

## Add gtest based cpp test target and link libraries
# catkin_add_gtest(${PROJECT_NAME}-test test/test_yolov7_ros.cpp)
# if(TARGET ${PROJECT_NAME}-test)
#   target_link_libraries(${PROJECT_NAME}-test ${PROJECT_NAME})
# endif()

## Add folders to be run by python nosetests
# catkin_add_nosetests(test)

One Question

Hi, I have a question: I need the x1, y1, x2, y2 coordinates of the bounding box. In which message are they published, and how can I get them?
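The detections are published as a vision_msgs/Detection2DArray on the configured out_topic (see the Notes section above). Each Detection2D contains a BoundingBox2D with a center point and a size, so the corners have to be derived from those. A minimal subscriber sketch (assuming ROS Noetic's vision_msgs and the /yolov7/yolov7 output topic from the example launch files; adjust the topic name to your configuration):

    #!/usr/bin/env python3
    import rospy
    from vision_msgs.msg import Detection2DArray

    def callback(msg):
        for det in msg.detections:
            cx, cy = det.bbox.center.x, det.bbox.center.y  # box center in pixels
            w, h = det.bbox.size_x, det.bbox.size_y        # box width/height in pixels
            x1, y1 = cx - w / 2.0, cy - h / 2.0            # top-left corner
            x2, y2 = cx + w / 2.0, cy + h / 2.0            # bottom-right corner
            rospy.loginfo("box: (%.1f, %.1f) - (%.1f, %.1f)", x1, y1, x2, y2)

    if __name__ == "__main__":
        rospy.init_node("detection_listener")
        rospy.Subscriber("/yolov7/yolov7", Detection2DArray, callback, queue_size=1)
        rospy.spin()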

catkin_make : no rules for target

-- Build files have been written to: /home/zzl/catkin_ws/build

Running command: "make yolov7_ros -j8 -l8" in "/home/zzl/catkin_ws/build"

make: *** No rule to make target 'yolov7_ros'. Stop.
Invoking "make yolov7_ros -j8 -l8" failed

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

Because the workspace contains other packages that are compiled with catkin_make, I can't use catkin build.

How can I solve this problem?
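If the rest of the workspace has to stay on catkin_make, one option (a sketch only; it assumes the package itself configures without errors, which may not hold if the msg folder issue above applies) is to restrict catkin_make to this package and its dependencies:

    cd ~/catkin_ws
    catkin_make --only-pkg-with-deps yolov7_ros
    # later, to go back to building the whole workspace:
    catkin_make -DCATKIN_WHITELIST_PACKAGES=""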

cpu/gpu memory leak?

When I run this, the GPU memory taken up by the process grows with every frame processed and eventually runs out:

[E] [1657743375.101 /yolov7 ...ents/rospy/src/rospy/topics.py: 753]: bad callback: <bound method Yolov7Publisher.img_callback of <__main__.Yolov7Publisher object at 0x7f478aa33400>>
Traceback (most recent call last):
  File "/home/lucasw/base_catkin_ws/src/ros/ros_comm/clients/rospy/src/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/lucasw/catkin_ws/src/misc/yolov7-ros/src/detect_ros.py", line 106, in img_callback
    self.process_img_msg(img_msg)
  File "/home/lucasw/catkin_ws/src/misc/yolov7-ros/src/detect_ros.py", line 139, in process_img_msg
    detections = self.model.inference(img)
  File "/home/lucasw/catkin_ws/src/misc/yolov7-ros/src/detect_ros.py", line 57, in inference
    pred_results = self.model(img)[0]
  File "/home/lucasw/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lucasw/catkin_ws/src/misc/yolov7-ros/src/models/yolo.py", line 319, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/home/lucasw/catkin_ws/src/misc/yolov7-ros/src/models/yolo.py", line 345, in forward_once
    x = m(x)  # run
  File "/home/lucasw/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lucasw/catkin_ws/src/misc/yolov7-ros/src/models/common.py", line 500, in forward
    return self.act(self.rbr_reparam(inputs))
  File "/home/lucasw/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/lucasw/.local/lib/python3.10/site-packages/torch/nn/modules/activation.py", line 394, in forward
    return F.silu(input, inplace=self.inplace)
  File "/home/lucasw/.local/lib/python3.10/site-packages/torch/nn/functional.py", line 2031, in silu
    return torch._C._nn.silu_(input)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 7.80 GiB total capacity; 5.59 GiB already allocated; 14.81 MiB free; 5.85 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I'm using torch version 1.11.0+cu115 and Ubuntu 22.04 (also a slightly modified branch, only changes topics and params though: https://github.com/lucasw/yolov7-ros/tree/dynamic_reconfigure - it doesn't actually add dynamic reconfigure yet)

It looks like setting the device to cpu has the same issue, but it takes longer to max out the RAM there.
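For reference, growth like this usually comes from running the forward pass with autograd enabled, so every frame keeps its activation graph alive. A minimal sketch of the usual remedy (assuming the inference call shown in the traceback; torch.no_grad() is a standard PyTorch API, the surrounding class is abbreviated):

    import torch

    def inference(self, img):
        # run the forward pass without building the autograd graph,
        # so per-frame activations are released again
        with torch.no_grad():
            pred_results = self.model(img)[0]
        return pred_results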

Error when using usb_cam and yolov7-ros

Hello.
I am a ROS novice, and the following error occurs when I use usb_cam and yolov7-ros:

[ERROR] [1715575338.032005]: bad callback: <bound method Yolov7Publisher.process_img_msg of <__main__.Yolov7Publisher object at 0x7fa0c2026cf8>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/lzw/catkin_ws/src/yolov7-ros/src/detect_ros.py", line 128, in process_img_msg
    img_msg, desired_encoding='bgr8'
  File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 163, in imgmsg_to_cv2
    dtype, n_channels = self.encoding_to_dtype_with_channels(img_msg.encoding)
  File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 99, in encoding_to_dtype_with_channels
    return self.cvtype2_to_dtype_with_channels(self.encoding_to_cvtype2(encoding))
  File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 91, in encoding_to_cvtype2
    from cv_bridge.boost.cv_bridge_boost import getCvType
ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)

BUG and Suggestion

Hello,
I tried the repo, and I think I found a bug in the file google_utils.py, which is probably not due to this repo per se, but to how the official YOLOv7 has been modified.

At line 19, the function attempt_download should be changed to the following:

def attempt_download(file, repo='WongKinYiu/yolov7'):
    # Attempt file download if does not exist
    file = Path(str(file).strip().replace("'", '').lower())
    if not file.exists():
        try:
            response = requests.get(f'https://api.github.com/repos/{repo}/releases').json()  # github api
            for x in response:
                for key, value in x.items():
                    #print("Key: ", key)
                    if key == "assets":
                        assets = [n['name'] for n in x['assets']]  # release assets
                    if key == "tag_name":
                        tag = x['tag_name']  # i.e. v1.0
        except:  # fallback plan
            assets = ['yolov7.pt']
            tag = subprocess.check_output('git tag', shell=True).decode().split()[-1]


Please let me know if you encounter the same problem on the current branch (starting from a clean pull of the repo), and whether my suggested changes are correct (you can of course remove the commented-out print, it was just for debugging).

catkin build yolov7_ros

In order to catkin build yolov7_ros I had to comment out the following lines in CMakeLists.txt:

add_message_files(
FILES
ObjectTracking2D.msg
ObjectTracking2DArray.msg
)

What do I have to install in ROS to build without commenting out these lines?
Can you help me @lukazso ?
Best Regards.

rostopic echo yolov7/yolov7/visualization doesn't output anything

Hello @lukazso, I have built yolov7_ros without any error.
I have changed all the necessary information in the launch file (the weights path, the classes path, the image topic, visualize set to true, and the device set to cpu).
I have run roslaunch yolov7_ros yolov7.launch, and then rostopic echo yolov7/yolov7 and rostopic echo yolov7/yolov7/visualization, but these don't output anything.
I have run rviz and wanted to view the topic yolov7/yolov7/visualization, but the image display only shows "No Image" in white, as can be seen in the attached screenshot.

[Screenshot: rviz image panel showing "No Image"]

I am not able to understand the problem. :/
can you help me?

Does not visualize

Hi guys,

I set the visualize parameter to true. When I run the launch file, I can see the published image topic with the visualized detections via rostopic info, but no window opens to show the bounding boxes. Is this what's supposed to happen?
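Note that the node does not open a window itself; it only publishes the annotated image on the visualization topic described in the Notes section. One way to view it (assuming the namespace and out_topic from the example launch file; check rostopic list for the exact name):

    rosrun rqt_image_view rqt_image_view /yolov7/yolov7/visualization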

How to inference?

I could launch the launch file without any errors. However, I would like to know how to run inference on my own video or image files and check the predictions made by the ROS YOLOv7 node. I tried to publish an image file on the topic /raw_image, and I see the following:

rostopic pub /raw_image sensor_msgs/Image '/src/inference/images/horses.jpg'
ERROR: Not enough arguments:

  • Given: ['/home/banashkum/catkin3_ws/src/yolov7-ros/src/inference/images/horses.jpg']
  • Expected: ['header', 'height', 'width', 'encoding', 'is_bigendian', 'step', 'data']

Args are: [header.seq header.stamp header.frame_id height width encoding is_bigendian step data]

Please provide me with some steps to visualize the real-time detections.
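rostopic pub cannot take an image file path directly; the file has to be read and converted into a sensor_msgs/Image first. A minimal publisher sketch (assuming the node subscribes to /raw_image as configured in the launch file; replace the hypothetical image path with your own):

    #!/usr/bin/env python3
    import cv2
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    if __name__ == "__main__":
        rospy.init_node("image_file_publisher")
        pub = rospy.Publisher("/raw_image", Image, queue_size=1)
        img = cv2.imread("/path/to/horses.jpg")  # hypothetical path, replace with your file
        msg = CvBridge().cv2_to_imgmsg(img, encoding="bgr8")
        rate = rospy.Rate(1)  # republish once per second so the detector keeps receiving frames
        while not rospy.is_shutdown():
            msg.header.stamp = rospy.Time.now()
            pub.publish(msg)
            rate.sleep()

You can then watch the annotated output with rqt_image_view on the visualization topic.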

I Can't Get Output

  • Thanks for the work. I have some questions:

- First of all, I followed the steps you described in the README.
- I put the 'berkeey_yolov7.pt' file from your drive as weights into a folder named 'weights' in the yolov7_ros folder (without decompressing it). I don't understand, do I need to extract the file?
- I adjusted the paths in the launch file.

- I run the launch file in the terminal and get the following output:

$ roslaunch yolov7_ros yolov77.launch
... logging to /home/gokce/.ros/log/29378b22-1d80-11ee-bb20-db855afe6aa8/roslaunch-zep-2736.log
Checking log directory for disk usage. This may take a while.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://zep:35657/

SUMMARY

PARAMETERS

  • /rosdistro: noetic
  • /rosversion: 1.16.0
  • /yolov7/detect/classes_path: /home/gokce/catki...
  • /yolov7/detect/conf_thresh: 0.35
  • /yolov7/detect/device: cpu
  • /yolov7/detect/img_size: 640
  • /yolov7/detect/img_topic: /camera/image_raw
  • /yolov7/detect/iou_thresh: 0.45
  • /yolov7/detect/out_topic:yolov7
  • /yolov7/detect/queue_size: 1
  • /yolov7/detect/visualize: True
  • /yolov7/detect/weights_path: /home/gokce/catki...

NODES
/
rviz (rviz/rviz)
/yolov7/
detect (yolov7_ros/detect_ros.py)

ROS_MASTER_URI=http://localhost:11311

process[yolov7/detect-1]: started with pid [2765]
process[rviz-2]: started with pid [2766]
Fusing layers...
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
RepConv.fuse_repvgg_block
IDetect.fuse
/home/gokce/.local/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
[INFO] [1688814699.430653]: YOLOv7 initialization complete. Ready to start inference

-Then I type this in another terminal and get the following output:
$ rostopic echo /yolov7/out_topic
WARNING: topic [/yolov7/out_topic] does not appear to be published yet

- I add the topic in rviz but there is no output.

My launch file is as follows:

<launch>
  <node pkg="yolov7_ros" type="detect_ros.py" name="detect" output="screen" ns="yolov7">
    <!-- Download the official weights from the original repo -->
    <param name="weights_path" type="str" value="/home/gokce/catkin_ws/src/yolov7-ros/weights/weights.pt"/>
    <!-- Path to a class_labels.txt file containing your desired class labels. The i-th entry corresponds to the i-th class id. For example, in coco class label 0 corresponds to 'person'. Files for the coco and berkeley deep drive datasets are provided in the 'class_labels/' directory. If you leave it empty then no class labels are visualized.-->
    <param name="classes_path" type="str" value="/home/gokce/catkin_ws/src/yolov7-ros/class_labels/berkeley.txt" />
    <!-- topic name to subscribe to -->
    <param name="img_topic" type="str" value="/cv_camera/image_raw" />
    <!-- topic name for the detection output -->
    <param name="out_topic" type="str" value="yolov7" />
    <!-- confidence threshold -->
    <param name="conf_thresh" type="double" value="0.35" />
    <!-- intersection over union threshold -->
    <param name="iou_thresh" type="double" value="0.45" />
    <!-- queue size for publishing -->
    <param name="queue_size" type="int" value="1" />
    <!-- image size to which to resize each input image before feeding into the network (the final output is rescaled to the original image size) -->
    <param name="img_size" type="int" value="640" />
    <!-- flag whether to also publish image with the visualized detections -->
    <param name="visualize" type="bool" value="true" />
    <!-- 'cuda' or 'cpu' -->
    <param name="device" type="str" value="cpu" />
  </node>

  <node name="cv_camera_node" pkg="cv_camera" type="cv_camera_node" output="screen">
    <param name="video_device" value="/dev/video0" />
    <param name="image_width" value="640" />
    <param name="image_height" value="640" />
    <param name="pixel_format" value="bgr8" />
    <param name="camera_name" value="/cv_camera/image_raw/" />
    <param name="io_method" value="mmap"/>
  </node>

  <!-- Rviz -->
  <node pkg="rviz" type="rviz" name="rviz" args="-d $(find yolov7_ros)/rviz/yolov7.rviz"/>
</launch>

Can you help me?
Thank you in advance.
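One thing worth checking (an observation based on the launch file above, not a confirmed fix): with ns="yolov7" on the node and out_topic set to yolov7, the detections are published on /yolov7/yolov7 rather than /yolov7/out_topic, and the annotated image on /yolov7/yolov7/visualization. The actual names can be listed with:

    rostopic list | grep yolov7
    rostopic echo /yolov7/yolov7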

detect_ros.py inference time slower than detect.py

Hi, I'm running a custom v7-tiny model, and the inference time when running detect_ros.py is around 500 ms, while the inference time with detect.py is around 150 ms. I added timing code to track the times (screenshot omitted).

use usb_cam as img_topic

Hi, is there any problem if I use usb_cam as my img_topic?

I ask because I get an error with it:

[ERROR] [1672978580.055375]: bad callback: <bound method Yolov7Publisher.process_img_msg of <__main__.Yolov7Publisher object at 0x7f471d7f9208>>
Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/aiman/yolov7_ws/src/yolov7-ros/src/detect_ros.py", line 128, in process_img_msg
    img_msg, desired_encoding='bgr8'
  File "/home/aiman/yolov7_ws/src/vision_opencv/cv_bridge/python/cv_bridge/core.py", line 163, in imgmsg_to_cv2
    dtype, n_channels = self.encoding_to_dtype_with_channels(img_msg.encoding)
  File "/home/aiman/yolov7_ws/src/vision_opencv/cv_bridge/python/cv_bridge/core.py", line 99, in encoding_to_dtype_with_channels
    return self.cvtype2_to_dtype_with_channels(self.encoding_to_cvtype2(encoding))
  File "/home/aiman/yolov7_ws/src/vision_opencv/cv_bridge/python/cv_bridge/core.py", line 91, in encoding_to_cvtype2
    from cv_bridge.boost.cv_bridge_boost import getCvType
ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)

Is that the cause?

I found this when trying to solve the error, but I could not make it work.

CmakeLists error: add_message_files

Hello, I tried building the package using "catkin_make yolov7-ros" for the first time and I obtained the following error:

-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Error at /opt/ros/noetic/share/genmsg/cmake/genmsg-extras.cmake:94 (message):
add_message_files() directory not found:
/home/psyk/catkin_ws/src/IRP/src/yolov7-ros/msg
Call Stack (most recent call first):
IRP/src/yolov7-ros/CMakeLists.txt:54 (add_message_files)

When commenting out the mentioned line, the error changed to the following:

-- Using these message generators: gencpp;geneus;genlisp;gennodejs;genpy
CMake Warning at /home/psyk/catkin_ws/build/IRP/src/yolov7-ros/cmake/yolov7_ros-genmsg.cmake:3 (message):
Invoking generate_messages() without having added any message or service
file before.

You should either add add_message_files() and/or add_service_files() calls
or remove the invocation of generate_messages().
Call Stack (most recent call first):
/opt/ros/noetic/share/genmsg/cmake/genmsg-extras.cmake:307 (include)
IRP/src/yolov7-ros/CMakeLists.txt:73 (generate_messages)

-- yolov7_ros: 0 messages, 0 services
-- Configuring done
-- Generating done
-- Build files have been written to: /home/psyk/catkin_ws/build

Running command: "make yolov7_ros -j8 -l8" in "/home/psyk/catkin_ws/build"

make: *** No rule to make target 'yolov7_ros'. Stop.
Invoking "make yolov7_ros -j8 -l8" failed

issues about roslaunch

I got the following problem when I run the roslaunch command:

File "/home/zyg/yolov7_ws/src/src/models/common.py", line 11, in
from torchvision.ops import DeformConv2d
ModuleNotFoundError: No module named 'torchvision'
[yolov7/detect-1] process has died [pid 36024, exit code 1, cmd /home/zyg/yolov7_ws/src/src/detect_ros.py __name:=detect __log:=/home/zyg/.ros/log/91e4da86-e58a-11ed-9fa1-57da3f650eb4/yolov7-detect-1.log].
log file: /home/zyg/.ros/log/91e4da86-e58a-11ed-9fa1-57da3f650eb4/yolov7-detect-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
... shutting down processing monitor complete
done

Ubuntu (ROS Noetic); torch 1.8.0; CUDA 11.1; torchvision 0.9.

Can you provide me with some help? I would greatly appreciate it.
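A ModuleNotFoundError for torchvision despite having 0.9 installed often means it was installed for a different Python interpreter than the one the node is launched with. A quick check (a sketch; your pip/python executables may differ):

    python3 -c "import torch, torchvision; print(torch.__version__, torchvision.__version__)"
    # if the import fails, install torchvision for this interpreter, matching torch 1.8.0:
    pip3 install torchvision==0.9.0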

GPU: Out Of Memory

Hello, I have the following error mentioned in the first issue #1 reported for your package. I checked and the gradient is not used in the detect_ros.py because you fixed it. Can you help me, please?

[ERROR] [1690471342.527740, 1.022000]: bad callback: <bound method Yolov7Publisher.process_img_msg of <__main__.Yolov7Publisher object at 0x7f68486b3d30>>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/psykaunot/catkin_ws/src/irp/src/yolov7-ros/src/detect_ros.py", line 164, in process_img_msg
    img = img.to(self.device)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 3.81 GiB total capacity; 2.73 GiB already allocated; 15.69 MiB free; 2.75 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
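On a ~4 GB GPU, possible mitigations (suggestions only, not confirmed fixes for this package) are lowering the img_size parameter in the launch file, switching to the smaller yolov7-tiny.pt weights, or following the hint from the error message and constraining the allocator before launching:

    export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
    roslaunch yolov7_ros yolov7.launch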
