
kinova-movo

Table of Contents

  1. Installation guide
    1. movo_v1
    2. kinetic-devel
  2. Perception
    1. Fiducial marker
    2. Mask R-CNN
    3. Point cloud
    4. A little survey
  3. Navigation
    1. Mapping and localization
    2. SLAM
  4. Manipulation
    1. Grasping
    2. Useful references
  5. Demo in Sim
    1. Pick and place
  6. Demo in Real-World
  7. Other troubleshooting tips

Installation Guide

movo_v1

  • MOVO repository for the Kinova mobile manipulator. A remote PC and the simulation do not need movo_network or movo_robot.
  • Setup Instructions: https://github.com/Kinovarobotics/kinova-movo/wiki/Setup-Instructions
  • Note: voice control requires pocketsphinx (e.g. sudo apt-get install ros-kinetic-pocketsphinx), and voice navigation requires SpeechRecognition (e.g. pip install SpeechRecognition).

Troubleshooting

  • Time synchronization issue between movo1 and movo2: if you get error messages about time synchronization, do the following (see also the combined commands after this list):
    • Connect to movo1 via SSH.
    • In a terminal on movo1, enter ntpdate 10.66.171.1 (the IP address should be the address of movo2).
  • Battery-related issue:
    • Connect to the Ethernet port of MOVO with a remote computer.
    • Power on the robot and quickly do the following:
      • SSH into MOVO2.
      • Run rosrun movo_ros movo_faultlog_parser. This produces a directory called "SI_FAULTLOGS" under ~/.ros/.
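
A minimal sketch of both fixes, assuming the 10.66.171.1 address mentioned above and the default movo user account (both are assumptions; adjust to your setup):

    # 1) Time synchronization: open a shell on movo1 and sync its clock against movo2.
    ssh movo@movo1          # "movo" user name is an assumption
    ntpdate 10.66.171.1     # run on movo1; may require sudo

    # 2) Battery fault logs: open a shell on MOVO2 right after power-on and dump the logs.
    ssh movo@movo2
    rosrun movo_ros movo_faultlog_parser   # writes ~/.ros/SI_FAULTLOGS/ on MOVO2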

How to install

  • Follow the steps in movo_common/si_utils/src/si_utils/setup_movo_pc_migration, starting from "Install third parties and additional libraries". Run the commands line by line manually instead of executing the setup_movo_pc_migration script.
  • In these steps, make sure you use gcc-5: when running cmake, use env CXX=g++-5 cmake instead (see the sketch after this list).
  • Building AssImp currently fails with a gtest-related error that has not been resolved yet; the rest of the installation completes successfully, and not having AssImp seems fine for now.
  • For libfreenect2, follow the instruction given by Kinova: https://github.com/Kinovarobotics/kinova-movo/wiki/1.-Setup-Instructions.
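
For the cmake-based third-party libraries, the build then looks roughly like this (the directory name is illustrative; follow setup_movo_pc_migration for the actual list of libraries and options):

    cd <third_party_library>    # illustrative placeholder; use a library from setup_movo_pc_migration
    mkdir -p build && cd build
    env CXX=g++-5 cmake ..      # force g++-5 as described above
    make -j4
    sudo make install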

Troubleshooting

  • If the Kinect does not work in Gazebo, make sure the Gazebo reference is set to ${prefix}_ir_frame in kinect_one_sensor.urdf.xacro, located in movo_common/movo_description/urdf/sensors/.

Useful references

  • Paper describing the MOVO software, hardware and architecture: Snoswell et al.

Perception

Fiducial marker

  • We have two fiducial marker systems installed (AprilTag is preferred).
    1. AprilTag:
    • The tag36h11 family is currently used; it is set in settings.yaml along with the other AprilTag-related parameters. Set the tag IDs and sizes you want to use in tags.yaml, e.g. standalone_tags: [{id: 0, size: 0.095}] for a 9.5 cm by 9.5 cm tag. Tags that are not listed there will not be recognized.
    • If you want to use a new tag with a different ID or size, see apriltag-imgs. The raw tag images are very small; to enlarge one, paste it into a new Google Docs document, rescale it, and save the document as a PDF. This gives a clean tag image at the desired size.
    • In continuous_detection.launch, set camera_name=/movo_camera/color, camera_frame=movo_camera_color_optical_frame, and image_topic=image_color_rect (see the sketch after this list). See also the apriltag_ros GitHub repository and the AprilTag tutorials.
    2. ArUco: see the aruco_ros GitHub repository.
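
As a concrete sketch of the AprilTag setup above, assuming the stock apriltag_ros continuous_detection.launch, which exposes camera_name and image_topic as launch arguments (if your copy does not, edit the defaults inside the launch file as described):

    # tags.yaml: declare every tag to be detected, with its ID and size in meters, e.g.
    #   standalone_tags: [{id: 0, size: 0.095}]
    # Then launch the continuous detector against the MOVO color camera:
    roslaunch apriltag_ros continuous_detection.launch \
        camera_name:=/movo_camera/color \
        image_topic:=image_color_rect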

Troubleshooting

  • When running catkin_make with AprilTag, you may see the error This workspace contains non-catkin packages in it, and catkin cannot build a non-homogeneous workspace without isolation. Try the catkin_make_isolated command instead. This is caused by the non-catkin apriltag package installed alongside it. Since we must stick with catkin_make (not catkin build), install the apriltag package first as follows (see the sketch after these steps):
    1. make
    2. PREFIX=/opt/ros/kinetic sudo make install
    3. Then do catkin_make inside movo_ws
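
Sketched out, assuming the non-catkin apriltag sources were cloned into the workspace's src directory (the path is an assumption; adjust it to your checkout):

    cd ~/movo_ws/src/apriltag                      # assumed location of the plain apriltag sources
    make
    sudo PREFIX=/opt/ros/kinetic make install      # install into the ROS prefix, as in step 2
    cd ~/movo_ws
    catkin_make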

Mask R-CNN

Troubleshooting

  • If you get ImportError: libcudnn.so.6: cannot open shared object file, then see this issue.
  • If you get IOError: Unable to open file (Truncated file: eof = 47251456, sblock->base_addr = 0, stored_eoa = 257557808), then download mask_rcnn_coco.h5 from here and place the file in ~/.ros/.

NVIDIA Jetson AGX Xavier

  • We use the Xavier as a GPU machine to handle perception for MOVO. The Xavier runs Ubuntu 18.04, ROS Melodic, and Python 3.6. To test the example.launch provided by Mask R-CNN, follow these steps (collected into one block below):
    1. Activate virtualenv to change to python3: source Workspace/python-virtualenv/venv/bin/activate
    2. Source the package of Mask R-CNN: source Workspace/mask_rcnn_ros/devel/setup.bash
    3. Source vision_opencv to be able to use cv_bridge: source Workspace/catkin_build_ws/install/setup.bash --extend
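
Collected into one place (paths as given above, relative to the home directory; the mask_rcnn_ros package name is an assumption based on the workspace path):

    source Workspace/python-virtualenv/venv/bin/activate           # switch to the Python 3 virtualenv
    source Workspace/mask_rcnn_ros/devel/setup.bash                # Mask R-CNN workspace
    source Workspace/catkin_build_ws/install/setup.bash --extend   # cv_bridge built for Python 3
    roslaunch mask_rcnn_ros example.launch                         # package/launch name assumed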

Point cloud

A little survey

  • Bandwidth usage per message (see below for how to measure this on your own setup):
    • /movo_camera/point_cloud/points: >300 MB/msg.
    • /movo_camera/sd/image_depth: ~8 MB/msg; the compressed depth image is ~4 MB/msg.
    • /movo_camera/hd/image_depth_rect/compressed: ~19 MB/msg.
    • /movo_camera/qhd/image_depth_rect/compressed: ~6.1 MB/msg; the compressed color image is ~1.8 MB/msg.
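
These numbers can be re-measured on your own setup with the standard ROS introspection tools, for example:

    rostopic bw /movo_camera/sd/image_depth    # reports average bandwidth and mean message size
    rostopic hz /movo_camera/sd/image_depth    # reports the publishing rate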

Navigation

Mapping and localization

  • Refer to the How-Tos provided by Kinova, for the real robot and for simulation.
  • The most relevant parameters are loaded in move_base.launch, located in movo_demos/launch/nav/. eband_planner_params.yaml contains the local-planner parameters.

SLAM

  • RTAB-Map.
  • Installation guide: here. Follow the "Build from source" section, and make sure you clone the kinetic-devel branch of rtabmap: git clone -b kinetic-devel https://github.com/introlab/rtabmap.git rtabmap.
  • How to run (see the combined commands below):
    • In one terminal, run roslaunch movo_demos sim_rtabmap_slam.launch.
    • In another terminal, run roslaunch movo_demos rtabmap_slam.launch rtabmap_args:="--delete_db_on_start". Use rtabmap_args:="--delete_db_on_start" if you want to start the map from scratch; otherwise, omit it.
  • Useful arguments (append after the launch file):
    • rtabmap_args:="--delete_db_on_start": deletes the database saved in ~/.ros/rtabmap.db at each start.
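
A typical simulation SLAM session, following the steps above:

    # Terminal 1: bring up the simulated robot with the RTAB-Map demo
    roslaunch movo_demos sim_rtabmap_slam.launch

    # Terminal 2: start RTAB-Map; drop --delete_db_on_start to keep extending
    # the existing map in ~/.ros/rtabmap.db
    roslaunch movo_demos rtabmap_slam.launch rtabmap_args:="--delete_db_on_start"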

Troubleshooting

  • If you get the error make[2]: *** No rule to make target '/usr/lib/x86_64-linux-gnu/libfreenect.so', needed by '/home/yoon/movo_ws/devel/lib/rtabmap_ros/pointcloud_to_depthimage'. Stop. during catkin_make, run sudo apt-get install libfreenect-dev.

Manipulation

Grasping

simple_grasping

  • Currently, this feature is not available. We only use vanilla MoveIt for now.
  • The grasping pipeline largely consists of three packages:
  • simple_grasping.
  • moveit_python.
  • grasping_msgs.
  • Refer to the Gazebo tutorial provided by Fetch Robotics: here.
  • Grasping poses are hardcoded in createGraspSeries() and createGrasp() in shape_grasp_planner.cpp.

Useful references

  • A detailed description of actionlib: here.

Demo in Sim

Demo-related files are located in /movo_demos.

Pick and place

A MoveIt-based demo. Only RViz is used as the simulator, not Gazebo. Do the following to run the demo (combined into one block after the steps):

  1. roslaunch movo_7dof_moveit_config demo.launch.
  2. rosrun movo_demos sim_moveit_pick_place.py.
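
In two terminals, this looks like:

    # Terminal 1: MoveIt demo environment (RViz) for the 7-DOF MOVO configuration
    roslaunch movo_7dof_moveit_config demo.launch

    # Terminal 2: the scripted pick-and-place routine
    rosrun movo_demos sim_moveit_pick_place.py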

Demo in Real-World

TBD

Other troubleshooting tips

Arms not working

Check whether the light under the Ethernet port at the base of each arm is blinking. If not, one of the following could be the reason:

  1. If you can manually move the arm after powering on MOVO, one or more fuses are likely blown. Check the fuses with a multimeter and replace any blown fuse with a spare.
  2. If the arm is stiff after powering on MOVO and cannot be moved manually, the arm may be stuck in a bootloader state. To fix this, ask Kinova for the Base Bootloader Upgrade service bulletin (see Section 11) as well as the latest version of the firmware. During the bootloader upgrade, you may need to short-circuit two pins, which is highly risky; triple-check that you have the right pins before doing so.

If none of the above works, ask Kinova for help.

Fuse specs

Two types: 028707.5PXCN (7.5 A, 32 V DC) and 0287002.PXCN (2 A, 32 V DC).
