
ros-gazebo-gym-ws's Introduction

Hey 👋, I'm Rick


I'm a Robotics master's student passionate about solving complicated problems and creating software solutions 🤖. I ❤️ Open Source and therefore enjoy working on interesting open source projects.



ros-gazebo-gym-ws's People

Contributors

dependabot[bot], renovate[bot], rickstaa


ros-gazebo-gym-ws's Issues

Increase training frequency

Currently, when the baselines panda example is used with the Gazebo and RViz GUIs disabled (i.e. load_rviz=false, load_gazebo_gui=false), I achieve a training frequency of 13 Hz on my system. Let's check whether we can increase this frequency using a profiler.

How to use austin

After Austin has been installed, the following command can be used to profile the code:

austin -o my_profile_data.aprof python sandbox/panda_training_freq_test.py

The output file can then be opened in the VS Code Austin extension (running the profiler directly from the VS Code extension does not seem to work). The file can also be inspected using https://www.speedscope.app/. If you want to use speedscope, you first have to convert the file using the austin2speedscope command from the austin-python package:

austin2speedscope my_profile_data.aprof myscript.prof

Results

myscript.zip

When we look at the speedscope results, we can see that most of the time is spent in the ROS service calls. We might increase the control frequency by speeding up the panda_gazebo/set_joint_commands service, but we could also be limited by the speed of rospy's internal service call mechanism.
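As a quick sanity check, the per-call overhead of the service can be timed directly from Python. The sketch below is not part of the repository; the service type, request fields and the persistent-connection choice are assumptions. rospy's persistent=True option keeps the TCPROS connection open between calls, which removes part of the overhead that shows up in the profiles.

import time

import rospy
from panda_gazebo.srv import SetJointCommands, SetJointCommandsRequest  # assumed service type

rospy.init_node("set_joint_commands_timing")
rospy.wait_for_service("panda_gazebo/set_joint_commands")

# persistent=True keeps the TCPROS connection open between calls.
proxy = rospy.ServiceProxy(
    "panda_gazebo/set_joint_commands", SetJointCommands, persistent=True
)

durations = []
for _ in range(100):
    start = time.perf_counter()
    proxy(SetJointCommandsRequest())  # default (empty) request; real fields omitted here
    durations.append(time.perf_counter() - start)

print("mean call time: %.1f ms" % (1e3 * sum(durations) / len(durations)))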

How to use cProfile

See https://stackoverflow.com/a/37157132/8135687 for a guide on how to use cProfile.
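For reference, a minimal way to collect and inspect the cProfile data for the same script (paths are the ones used above; adjust as needed):

# Collect the profile first, e.g.: python -m cProfile -o script.prof sandbox/panda_training_freq_test.py
# Then inspect the 20 slowest call paths by cumulative time:
import pstats

stats = pstats.Stats("script.prof")
stats.strip_dirs().sort_stats("cumulative").print_stats(20)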

Results

script.zip

The cProfile results show that most computation time is spent in tcpros_data.py. This module gets invoked when we call the panda_gazebo/set_joint_commands service. It therefore looks like we cannot speed this up without switching to C++.


Try to add numba

I quickly checked how much work it would be to add a Numba wrapper around the set_joint_commands callback function. This, however, requires rewriting several functions, since I use a lot of Python lists that are not Numba compatible. Because of this, and because the main delay is likely in the service mechanism of the rospy package, I will not add a Numba decorator for now. We can still do this in the future, or remove the need for the service call altogether.
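For future reference, a minimal sketch of what a Numba-compatible helper could look like: nopython mode works well on NumPy arrays, whereas the Python lists currently used in the callback are not supported, which is why a rewrite would be needed first. The function below is illustrative and not the repository's actual callback.

import numpy as np
from numba import njit


@njit
def clip_joint_commands(commands, lower, upper):
    # Element-wise clipping of the commanded joint values (NumPy arrays only).
    out = np.empty_like(commands)
    for i in range(commands.shape[0]):
        out[i] = min(max(commands[i], lower[i]), upper[i])
    return out


print(clip_joint_commands(np.array([0.1, 2.5, -3.0]), np.full(3, -2.0), np.full(3, 2.0)))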

FetchPickAndPlace not training using DDPG or DDPG+HER

The robot currently does not improve its behaviour after 3 hours of training with DDPG or DDPG+HER. This might be caused by a number of problems:

  • The Gazebo environment appears not to be reset during training, meaning the agent starts from the old state when a new episode begins.
  • The reward and is_done functions are not called.
    • None of the print statements I put in the reward or is_done functions show up in rqt_console. This, however, could also be because multiprocessing is used (see the logging sketch after this list).
  • An insufficient number of workers was used for the agent to learn the task in the time the algorithm was run (3 hours) (see this issue).
  • Wrong hyperparameters were set, which prevents the agent from learning successfully.
  • Additionally, the RL algorithm starts with goal states that are outside the reach of the robot (x=0.002, 0.003, 0.005), meaning it spends a lot of time trying to learn to reach positions which are not feasible.
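To rule out the logging explanation, the reward and is_done callbacks could log through rospy instead of print, since rospy.loginfo publishes to /rosout and therefore shows up in rqt_console even when the environment runs in a separate process. A minimal sketch (the function name is illustrative):

import rospy


def log_step_result(reward, done):
    # rospy.loginfo routes the message through /rosout, so it is visible in rqt_console.
    rospy.loginfo("reward=%s done=%s", reward, done)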

Move to using ROS Noetic

It would be nice if we moved to ROS Noetic to get rid of the very extensive install instructions. There are, however, some packages that are still blocking this transition.

ROS Noetic transition list

  • franka_ros
    • franka_control
    • franka_description
    • franka_example_controllers - Depends on panda_moveit_config which is not released yet (see franka_ros/issues/123)
    • franka_gripper
    • franka_hw
    • franka_msgs
    • franka_ros - Also depends on panda_moveit_config which is not released yet (see franka_ros/issues/123)
    • franka_visualization
  • gazebo_panda_gym - Compatible with ROS Noetic if end-effector control is removed.
  • openai_ros - Can be removed as it was only added as a reference.
  • panda_moveit_config - Not yet compatible with ROS Noetic (see panda_moveit_config/issues/77). ❌
  • panda_openai_sim - Should be compatible with ROS Noetic when we do the following:
    • Remove the end-effector control.
  • panda_simulation - Should be compatible with ROS Noetic when we do the following:
  • panda_training - No problem; already written in Python 3.
  • theconstruct_msgs - Compatible with Noetic. We could, however, get rid of this package and add the message to our own package (#38).

Add more realistic inertias, masses and joint damping ratios

Currently, the arm is not able to lift itself due to wrongly defined arm and gripper mass/inertial properties. For the effort controller to work we need to add more realistic inertias, masses and joint damping ratios. A guide on how to compute these can be found here. For our robot, this was already done by @mkrizmancic.

Robot specs

A datasheet for the robot is published here. It states that the total mass of the arm is ~18 kg while the mass of the hand is ~0.7 kg.

New masses, centers of gravity and moments of inertia

Arm

Volumes

  • j0: 2.435092 m^3
  • j1: 2.293251 m^3
  • j2: 2.312302 m^3
  • j3: 2.020883 m^3
  • j4: 2.005719 m^3
  • j5: 2.275315 m^3
  • j6: 1.263041 m^3
  • j7: 0.422458 m^3

Total: 15.028061000000001 m^3

Masses

  • m1: (2.435092/15.028061000000001)*18=2.9166541179198036 kg
  • m2: (2.293251/15.028061000000001)*18=2.746762739384675 kg
  • m3: (2.312302/15.028061000000001)*18=2.7695812520324474 kg
  • m4: (2.020883/15.028061000000001)*18=2.4205314311673343 kg
  • m5: (2.005719/15.028061000000001)*18=2.402368608964257 kg
  • m6: (2.275315/15.028061000000001)*18=2.7252797283694816 kg
  • m7: (1.263041/15.028061000000001)*18=1.5128191188470688 kg
  • m8: (0.422458/15.028061000000001)*18=0.506003003314932 kg

Inertias and centers of gravity

Fill these in using the inertial_xml_marker.py script to account for scaling.
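The arm masses above follow directly from scaling the total datasheet mass by the relative link volumes. A small sketch that reproduces the numbers:

TOTAL_ARM_MASS = 18.0  # kg, from the datasheet

link_volumes = {  # values from the volume list above
    "j0": 2.435092, "j1": 2.293251, "j2": 2.312302, "j3": 2.020883,
    "j4": 2.005719, "j5": 2.275315, "j6": 1.263041, "j7": 0.422458,
}

total_volume = sum(link_volumes.values())
for name, volume in link_volumes.items():
    mass = volume / total_volume * TOTAL_ARM_MASS
    print("%s: %.6f kg" % (name, mass))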

Hand

Volumes

  • Hand: 0.000488 m^3
  • Finger1/2: 0.000011 m^3

Total: 0.000510 m^3

Masses

  • Hand: (0.000488/0.000510)*0.7=0.6698039215686273 kg
  • Finger1/2: (0.000011/0.000510)*0.7=0.015098039215686272 kg

Inertias and centers of gravity

Fill these in using the inertial_xml_marker.py script to account for scaling.

Fix franka_ros upstream conflicts

As can be seen from frankaemika/pull/52, the erdalpekel/simulation branch could no longer be merged with the upstream branch due to changes made upstream. This commit fixes these conflicts so that the branch can again be merged with the upstream repository.

Update documentation

TODOS:

  • Render sphinx documentation
  • Create sphinx documentation pages
  • Add python version to the readme.
  • Update README.md

Remove openai_ros

This package is not used; it was only added as a reference on how to create an OpenAI gym environment out of the Gazebo simulation.

Dependency Dashboard

This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.

This repository currently has no open or pending branches.

Detected dependencies

github-actions
.github/workflows/autotag_ci.yml
  • actions/checkout v3
  • haya14busa/action-bumpr v1
  • actions/checkout v3
  • haya14busa/action-bumpr v1
.github/workflows/release_ci.yml
  • actions/checkout v3
  • rickstaa/action-contains-tag v1
  • actions/checkout v3
  • rickstaa/action-get-semver v1
  • actions/setup-node v3
  • stefanzweifel/git-auto-commit-action v4
  • actions/checkout v3
  • rickstaa/action-get-semver v1
  • actions/create-release v1
npm
package.json
  • @commitlint/cli 17.7.0
  • @commitlint/config-conventional 17.7.0
  • commitizen 4.3.0
  • cz-conventional-changelog 3.3.0
  • husky 8.0.1
  • lint-staged 13.2.0
  • remark 14.0.2
  • remark-cli 11.0.0
  • remark-lint 9.1.0
  • remark-preset-lint-recommended 6.1.1
  • standard-version 9.5.0


Error during the compilation of the realtime kernel

When trying to compile the realtime (RT) kernel for the Franka robot, I ran into a number of errors. In the end, the official guide from Franka Emika did not seem to work for me. The following, however, does work:

Compile realtime kernel

  1. Install dependencies:
sudo apt-get install build-essential bc curl ca-certificates fakeroot gnupg2 libssl-dev lsb-release libelf-dev bison flex libncurses-dev
  2. Download the regular kernel and the patch from the Linux Kernel archive:
curl -SLO https://www.kernel.org/pub/linux/kernel/v5.x/linux-5.4.26.tar.xz
curl -SLO https://www.kernel.org/pub/linux/kernel/v5.x/linux-5.4.26.tar.sign
curl -SLO https://www.kernel.org/pub/linux/kernel/projects/rt/5.4/older/patch-5.4.26-rt17.patch.xz
curl -SLO https://www.kernel.org/pub/linux/kernel/projects/rt/5.4/older/patch-5.4.26-rt17.patch.sign
  3. Unzip the files:
xz -d linux-5.4.26.tar.xz
xz -d patch-5.4.26-rt17.patch.xz
  4. Verify the file integrity:
gpg2 --verify linux-5.4.26.tar.sign
gpg2 --verify patch-5.4.26-rt17.patch.sign

You first have to download the public key of the person who signed the file. As the gpg output shows, it has the ID 6092693E. You can obtain it from a key server:

gpg2  --keyserver hkp://keys.gnupg.net --recv-keys 647F28654894E3BD457199BE38DBBDC86092693E
  5. Unzip the Linux kernel: tar xvf linux-5.4.26.tar.
  6. Go into the kernel folder: cd linux-5.4.26.
  7. Apply the patch: patch -p1 < ../patch-5.4.26-rt17.patch.
  8. Run make menuconfig and change the Preemption Model to 'Fully Preemptible Kernel (RT) (PREEMPT_RT_FULL) (NEW)'.
  9. Disable LZ4 compression: scripts/config --disable KERNEL_LZ4.
  10. Enable gzip compression: scripts/config --enable KERNEL_GZIP.
  11. Disable debug info: scripts/config --disable DEBUG_INFO.
  12. Run make clean.
  13. Build the bzImage: make -j $(nproc) bzImage.
  14. Build the modules: make -j $(nproc) modules.
  15. Install the modules: make -j $(nproc) modules_install.
  16. Install the kernel: make -j $(nproc) install.

Allow a user to set real-time permissions for their processes

After the PREEMPT_RT kernel is installed and running, add a group named realtime and add the user controlling your robot to this group:

sudo addgroup realtime
sudo usermod -a -G realtime $(whoami)

Afterwards, add the following limits to the realtime group in /etc/security/limits.conf:

@realtime soft rtprio 99
@realtime soft priority 99
@realtime soft memlock 102400
@realtime hard rtprio 99
@realtime hard priority 99
@realtime hard memlock 102400

The limits will be applied after you log out and in again.
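To verify that the limits are active for the current session, they can be queried from Python (Linux only). The sketch below is just a check; note that 102400 in limits.conf is in KiB, while getrlimit reports bytes.

import resource

print("rtprio :", resource.getrlimit(resource.RLIMIT_RTPRIO))    # expected: (99, 99)
print("memlock:", resource.getrlimit(resource.RLIMIT_MEMLOCK))   # expected: (104857600, 104857600)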

Resources

Change control algorithm from MoveIt control to joint position control

In order to train a planning/control RL algorithm, we need to change the robot control, which currently uses MoveIt for motion planning, to a low-level joint position/effort control algorithm. This can be done by using the franka_ros and/or libfranka libraries.
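As a rough illustration of the target setup (not the final implementation), joint commands could be sent straight to a ros_control joint group controller command topic instead of going through MoveIt. The controller/topic name and joint values below are assumptions:

import rospy
from std_msgs.msg import Float64MultiArray

rospy.init_node("direct_joint_command_example")
pub = rospy.Publisher(
    "/panda_arm_joint_group_position_controller/command",  # assumed controller name
    Float64MultiArray,
    queue_size=10,
)

rate = rospy.Rate(100)  # direct joint commands can run much faster than MoveIt planning
while not rospy.is_shutdown():
    pub.publish(Float64MultiArray(data=[0.0, -0.3, 0.0, -2.2, 0.0, 2.0, 0.8]))  # 7 arm joints
    rate.sleep()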

TODOS:

  • Add new effort controller
  • Add launch file argument to switch between controllers

References

_get_obs function returns the wrong observations

Currently, the my_fetch_task_env._get_obs() method contains some errors which cause it to return wrong observations. For training a grasping robot we need the following observation dictionary (a sketch of how it could be assembled is given after the list):

  • [Observations]
    • grip_pos - Position of the gripper, given as 3 positional elements and 4 rotational elements
    • object_pos.ravel - Not applicable for the reach task; an array of zeros
    • object_rel_pos.ravel - Not applicable for the reach task; an array of zeros
    • gripper_state - The quantity measuring the opening of the gripper
    • object_rot.ravel - Not applicable for the reach task; an array of zeros
    • object_velp.ravel - Not applicable for the reach task; an array of zeros
    • object_velr.ravel - Not applicable for the reach task; an array of zeros
    • grip_velp - The velocity at which the gripper is moving
    • gripper_vel - The opening/closing velocity of the gripper
  • [Achieved goal]
  • [Desired_goal]
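A sketch of how such an observation dictionary could be assembled for the reach task, with the object-related entries zeroed out as listed above (illustrative only, not the repository's _get_obs implementation):

import numpy as np


def build_reach_observation(grip_pos, grip_velp, gripper_state, gripper_vel, desired_goal):
    obs = np.concatenate([
        np.asarray(grip_pos),       # gripper pose
        np.zeros(3),                # object_pos: zeros for the reach task
        np.zeros(3),                # object_rel_pos: zeros for the reach task
        np.asarray(gripper_state),  # opening of the gripper
        np.zeros(3),                # object_rot: zeros for the reach task
        np.zeros(3),                # object_velp: zeros for the reach task
        np.zeros(3),                # object_velr: zeros for the reach task
        np.asarray(grip_velp),      # gripper velocity
        np.asarray(gripper_vel),    # gripper opening/closing velocity
    ])
    return {
        "observation": obs,
        "achieved_goal": np.asarray(grip_pos)[:3],  # for reach, the gripper position
        "desired_goal": np.asarray(desired_goal),
    }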

Translate Mujoco Fetch Robot observations to Panda observations

In the gym environments, the MuJoCo Fetch robot is used.

TODOS:

  • Add the right velocities
  • Write a gazebo service which extracts information about the environment.

Write gazebo service

The observation dictionary of the FetchReach environment contains the following properties:

- grip_pos - Position of the gripper, given as 3 positional elements and 4 rotational elements
- object_pos.ravel - Not applicable for the reach task; an array of zeros
- object_rel_pos.ravel - Not applicable for the reach task; an array of zeros
- gripper_state - The quantity measuring the opening of the gripper
- object_rot.ravel - Not applicable for the reach task; an array of zeros
- object_velp.ravel - Not applicable for the reach task; an array of zeros
- object_velr.ravel - Not applicable for the reach task; an array of zeros
- grip_velp - The velocity at which the gripper is moving
- gripper_vel - The opening/closing velocity of the gripper

Currently, the object properties are not yet retrieved from the Gazebo simulation. We need to write a small service which retrieves position/orientation and velocity data about the objects from the Gazebo topics.
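Gazebo already exposes a /gazebo/get_model_state service (gazebo_msgs/GetModelState) that returns the pose and twist of a model, so the new service could be a thin wrapper around it. A minimal client sketch (the model name is an example):

import rospy
from gazebo_msgs.srv import GetModelState, GetModelStateRequest

rospy.init_node("object_state_client")
rospy.wait_for_service("/gazebo/get_model_state")
get_model_state = rospy.ServiceProxy("/gazebo/get_model_state", GetModelState)

res = get_model_state(GetModelStateRequest(model_name="cube", relative_entity_name="world"))
print("object position:", res.pose.position)
print("object linear velocity:", res.twist.linear)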

References

CMake error related to empy when trying to build action server messages

When trying to build the FollowJointTrajectory.action action server messages I receive the following CMake error:

Traceback (most recent call last):
  File "/usr/bin/empy", line 3302, in <module>
    if __name__ == '__main__': main()
  File "/usr/bin/empy", line 3300, in main
    invoke(sys.argv[1:])
  File "/usr/bin/empy", line 3283, in invoke
    interpreter.wrap(interpreter.file, (file, name))
  File "/usr/bin/empy", line 2295, in wrap
    self.fail(e)
  File "/usr/bin/empy", line 2284, in wrap
    callable(*args)
  File "/usr/bin/empy", line 2359, in file
    self.safe(scanner, done, locals)
  File "/usr/bin/empy", line 2401, in safe
    self.parse(scanner, locals)
  File "/usr/bin/empy", line 2421, in parse
    token.run(self, locals)
  File "/usr/bin/empy", line 1425, in run
    interpreter.execute(self.code, locals)
  File "/usr/bin/empy", line 2595, in execute
    _exec(statements, self.globals, locals)
  File "/usr/bin/empy", line 42, in _exec
    exec("""exec code in globals""")
  File "<string>", line 1, in <module>
  File "<string>", line 38, in <module>
CMake Error at /opt/ros/melodic/share/catkin/cmake/safe_execute_process.cmake:11 (message):

  execute_process(/media/Shared/Development/panda_openai_sim_ws/build/panda_training/catkin_generated/env_cached.sh
  "/usr/bin/python2" "/usr/bin/empy" "--raw-errors" "-F"
  "/media/Shared/Development/panda_openai_sim_ws/build/panda_training/cmake/panda_training-genmsg-context.py"
  "-o"
  "/media/Shared/Development/panda_openai_sim_ws/build/panda_training/cmake/panda_training-genmsg.cmake"
  "/opt/ros/melodic/share/genmsg/cmake/pkg-genmsg.cmake.em") returned error
  code 1
Call Stack (most recent call first):
  /opt/ros/melodic/share/catkin/cmake/em_expand.cmake:25 (safe_execute_process)
  /opt/ros/melodic/share/genmsg/cmake/genmsg-extras.cmake:303 (em_expand)
  CMakeLists.txt:84 (generate_messages)

Solution

This error is caused by the file encoding having been changed from UTF-8 to UTF-8 with BOM.
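One way to strip the BOM again is to re-read the offending message/action file with the utf-8-sig codec (which consumes a BOM if present) and write it back as plain UTF-8; the path below is only an example:

path = "action/FollowJointTrajectory.action"  # example path

with open(path, "r", encoding="utf-8-sig") as f:
    content = f.read()

with open(path, "w", encoding="utf-8") as f:
    f.write(content)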

ROS python 3 conflicts

As Python 2.7 has reached its EOL, I will develop mostly in Python 3. Since ROS currently does not fully support Python 3, I ran into some Python 2/3 compatibility issues when using ROS. These issues will be fixed when ROS Noetic is released in May. After Noetic is released, the repository can be ported over to Python 3 (see this porting guide). At the moment, however, I still have to use ROS Melodic, which has not yet been fully ported to Python 3. Instead of compiling ROS Melodic for Python 3, I can separate the RL training algorithm (Python 3) and the main ROS system (Python 2). To do this, I need to create a virtual environment for the training algorithm and a catkin_ws in which I compile the required ROS packages for Python 3.

Create a virtual environment

To create a virtual environment use the following commands:

pip install virtualenv 
virtualenv ~/.catkin_ws_python3/openai_venv --python=python3 

This virtual environment can then be activated using the source ~/.catkin_ws_python3/openai_venv/bin/activate command. After the environment is activated, you have to install the following Python packages in it:

pip install tensorflow-gpu
pip install gym
pip install pyyaml
pip install netifaces
pip install rospkg

Build the required ROS packages from source

In the panda training script I will need the following ROS (python3) packages:

These are compiled for Python 3 by using the rosinstall_generator tool together with wstool. To recompile for Python 3 (Melodic):

Install some prerequisites to use Python 3 with ROS:

sudo apt update
sudo apt install python3-catkin-pkg-modules python3-rospkg-modules python3-empy

Prepare catkin workspace:

cd ~/.catkin_ws_python3
mkdir src
ROS_PYTHON_VERSION=3
wstool init src
rosinstall_generator ros_comm common_msgs  geometry2 --rosdistro melodic --deps | wstool merge -t src -
wstool update -t src -j8
rosdep install --from-paths src --ignore-src -y -r

Finally compile for Python 3:

catkin build --cmake-args \
            -DCMAKE_BUILD_TYPE=Release \
            -DPYTHON_EXECUTABLE=/usr/bin/python3 \
            -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m \
            -DPYTHON_LIBRARY=/usr/lib/x86_64-linux-gnu/libpython3.6m.so

While trying to build the packages you might get the following error:

File "/home/ricks/.catkin_ws_python3/src/orocos_kinematics_dynamics/python_orocos_kdl/cmake/FindSIP.py", line 8, in <module>
    import sipconfig
ModuleNotFoundError: No module named 'sipconfig'
CMake Error at cmake/FindSIP.cmake:63 (MESSAGE):
  Could not find SIP
Call Stack (most recent call first):
  CMakeLists.txt:21 (find_package)
-- Configuring incomplete, errors occurred!

This error is resolved by installing the python3-pyqt5, python3-sip, python3-sip-dev and python3-empy packages. Additionally, if you also want to use the ROS command-line tools, you need to install the following dependencies:

sudo apt install python3-pycryptodome python3-gnupg

Resources
