

Assistive Gym v1.0

v1.0 (this branch) has been released! See the feature list below for what is new in v1.0.

Assistive Gym is also now supported in Google Colab! For example: Open In Colab

See the Wiki for all available Google Colab examples.


Assistive Gym is a physics-based simulation framework for physical human-robot interaction and robotic assistance.

Assistive Gym is integrated into the OpenAI Gym interface, enabling the use of existing reinforcement learning and control algorithms to teach robots how to interact with people.
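Because Assistive Gym follows the OpenAI Gym interface, any agent written against `reset()` and `step()` can drive its environments. The sketch below shows that standard control-loop pattern; the stub environment and its toy dynamics are illustrative stand-ins so the example runs without an install, not part of Assistive Gym itself.

```python
import random

class StubEnv:
    """Minimal stand-in implementing the OpenAI Gym interface
    (reset/step) that Assistive Gym environments also expose."""
    def reset(self):
        self.t = 0
        return [0.0, 0.0]  # initial observation

    def step(self, action):
        self.t += 1
        obs = [random.random(), random.random()]
        reward = -abs(action)   # toy reward: prefer small actions
        done = self.t >= 10     # fixed-length episode
        info = {}
        return obs, reward, done, info

# Standard Gym control loop. With Assistive Gym installed, this stub
# would be replaced by something like gym.make('assistive_gym:FeedingJaco-v1').
env = StubEnv()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    action = random.uniform(-1, 1)  # random policy
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

The same loop works unchanged for learned policies: only the line that chooses `action` differs.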


Paper

A paper on Assistive Gym can be found at https://arxiv.org/pdf/1910.04700.pdf

Z. Erickson, V. Gangaram, A. Kapusta, C. K. Liu, and C. C. Kemp, “Assistive Gym: A Physics Simulation Framework for Assistive Robotics”, IEEE International Conference on Robotics and Automation (ICRA), 2020.

@article{erickson2020assistivegym,
  title={Assistive Gym: A Physics Simulation Framework for Assistive Robotics},
  author={Erickson, Zackory and Gangaram, Vamsee and Kapusta, Ariel and Liu, C. Karen and Kemp, Charles C.},
  journal={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2020}
}

Install

Google Colab

Open In Colab
Try out Assistive Gym in Google Colab. Assistive Gym is fully supported in Google Colab (online Python Jupyter notebook). Click on the link above for an example. Everything runs online, so you won't need to install anything on your local machine!

All of the available Google Colab examples are listed on the Wiki (Google Colab).

Basic install (if you just want to use existing environments without changing them)

pip3 install --upgrade pip
pip3 install git+https://github.com/Healthcare-Robotics/assistive-gym.git

We recommend using Python 3.6 or 3.7 (although other Python 3.x versions may still work). You can either download Python 3.6 here, or use pyenv to install Python 3.6 in a local directory, e.g. pyenv install 3.6.5; pyenv local 3.6.5. Both Python 3.6.5 and 3.7.10 have been tested and confirmed working.

Full installation (to edit/create environments) using a python virtual environment

We encourage installing Assistive Gym and its dependencies in a python virtualenv.
Installation instructions for Windows can also be found in the Install Guide.

python3 -m pip install --user virtualenv
python3 -m venv env
source env/bin/activate
pip3 install --upgrade pip
git clone https://github.com/Healthcare-Robotics/assistive-gym.git
cd assistive-gym
pip3 install -e .

Getting Started

We provide a 10 Minute Getting Started Guide to help you get familiar with using Assistive Gym for assistive robotics research.

You can visualize the various Assistive Gym environments using the environment viewer.
A full list of available environments can be found here (Environments).

python3 -m assistive_gym --env "FeedingJaco-v1"

We provide pretrained control policies for each robot and assistive task.
See Running Pretrained Policies for details on how to run a pretrained policy.

See Training New Policies for documentation on how to train new control policies for Assistive Gym environments.

Finally, Creating a New Assistive Environment discusses the process of creating an Assistive Gym environment for your own human-robot interaction tasks.

New Features in v1.0

Clean code syntax

v1.0 example (getting robot left end effector velocity)

end_effector_velocity = self.robot.get_velocity(self.robot.left_end_effector)

Old v0.1 (using default PyBullet syntax)

end_effector_velocity = p.getLinkState(self.robot, 76 if self.robot_type=='pr2' else 19 if self.robot_type=='sawyer' 
                                       else 48 if self.robot_type=='baxter' else 8, computeForwardKinematics=True, 
                                       computeLinkVelocity=True, physicsClientId=self.id)[6]
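The v1.0 syntax works by hiding those hard-coded, per-robot link indices inside a robot class. A rough sketch of the idea (the class internals here are illustrative; the index values are taken from the v0.1 snippet above, with the final `else 8` branch presumably corresponding to the Jaco):

```python
class Robot:
    # End-effector link index per robot, replacing the chained
    # conditionals in the v0.1 code above.
    LEFT_END_EFFECTOR = {'pr2': 76, 'sawyer': 19, 'baxter': 48, 'jaco': 8}

    def __init__(self, robot_type, get_link_state):
        self.robot_type = robot_type
        self._get_link_state = get_link_state  # e.g. a PyBullet query

    @property
    def left_end_effector(self):
        return self.LEFT_END_EFFECTOR[self.robot_type]

    def get_velocity(self, link):
        # A real implementation would query the physics engine here.
        return self._get_link_state(link)

# Usage with a dummy physics query standing in for PyBullet:
robot = Robot('pr2', lambda link: (0.0, 0.0, 0.0))
velocity = robot.get_velocity(robot.left_end_effector)
```

Calling code then never needs to know which physics-engine link index belongs to which robot.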

Google Colab Support

Open In Colab
Assistive Gym is now supported in Google Colab! Tons of new examples are now available for developing and learning with Assistive Gym in Google Colab. See the Wiki-Google Colab for a list of all the available example notebooks.

Support for mobile bases (mobile manipulation)

For robots with mobile bases, locomotion control is now supported. Ground friction and slip can be changed dynamically for domain randomization.
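Domain randomization of this kind typically amounts to resampling the relevant physical parameters at each episode reset. A minimal sketch of the pattern (the parameter names and ranges below are hypothetical placeholders, not Assistive Gym's API):

```python
import random

def randomize_ground(rng):
    """Sample ground contact parameters once per episode.
    The ranges below are illustrative placeholders."""
    return {
        'lateral_friction': rng.uniform(0.3, 1.0),
        'spinning_friction': rng.uniform(0.0, 0.1),
    }

rng = random.Random(0)
params = randomize_ground(rng)  # applied to the ground plane at reset
```

Training across many such sampled episodes encourages policies that transfer across contact conditions.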

Reference this Google Colab notebook Open In Colab for an example of mobile base control.
 

Support for the Stretch and PANDA robots


Multi-robot control support

Assistive Gym now provides an interface for simulating and controlling multiple robots and people, all through the OpenAI Gym framework. See this example of multi-robot control Open In Colab.
 

Integration with iGibson

Assistive Gym can now be used with iGibson to simulate human-robot interaction in a visually realistic interactive home environment.
An example of using iGibson with Assistive Gym is available in this Google Colab notebook Open In Colab.
 

Static human mesh models (with SMPL-X)

SMPL-X human mesh models are now supported in Assistive Gym. See this wiki page for details on how to use these human mesh models.

A Google Colab example of building a simple robot-assisted feeding environment with SMPL-X human meshes is also available: Assistive Gym with SMPL-X in Colab Open In Colab
 


Base Features

Human and robot models

Customizable female and male human models (default body sizes and weights matching 50th percentile humans).
40 actuated human joints (head, torso, arms, waist, and legs).
 
 
Four collaborative robots (PR2, Jaco, Baxter, Sawyer).
 

Realistic human joint limits

Building on prior research, Assistive Gym provides a model for realistic pose-dependent human joint limits.
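Pose-dependent joint limits mean that a joint's valid range is a function of the configuration of neighboring joints rather than a fixed interval. As a toy illustration of the idea (the linear model and all numbers below are made up, not the learned model from the underlying research):

```python
def elbow_limits(shoulder_flexion_deg):
    """Toy pose-dependent limit: allow less elbow flexion as the
    shoulder flexes further (illustrative numbers only)."""
    upper = 150.0 - 0.25 * max(0.0, shoulder_flexion_deg)
    return (0.0, upper)

def clamp(angle, limits):
    """Project a target joint angle into the currently valid range."""
    lo, hi = limits
    return min(max(angle, lo), hi)

# With the shoulder flexed to 90 degrees, the toy upper elbow
# limit shrinks, so a 145-degree target gets clamped.
limits = elbow_limits(90.0)
elbow = clamp(145.0, limits)
```

A simulator enforces such limits by clamping (or penalizing) commanded poses against the limits recomputed for the current configuration.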
 

Robot base pose optimization

A robot's base pose can greatly impact the robot’s ability to physically assist people.
We provide a baseline method using joint-limit-weighted kinematic isotropy (JLWKI) to select good base poses near a person.
With JLWKI, the robot chooses base poses (position and yaw orientation) with high manipulability near end effector goals.
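Whatever the manipulability metric, base-pose selection reduces to scoring candidate (x, y, yaw) poses and keeping the best one. A sketch of that search loop with a stand-in scoring function (JLWKI itself is not implemented here; the stand-in simply prefers poses near the goal):

```python
import math

def score_pose(pose, goal):
    """Stand-in for a manipulability score such as JLWKI:
    here, higher score simply means closer to the end effector goal."""
    x, y, yaw = pose
    return -math.hypot(x - goal[0], y - goal[1])

def best_base_pose(candidates, goal):
    """Exhaustively score candidate base poses and return the best."""
    return max(candidates, key=lambda pose: score_pose(pose, goal))

candidates = [(0.0, 0.0, 0.0), (0.5, 0.2, 1.57), (1.0, 1.0, 3.14)]
goal = (0.6, 0.3)  # end effector goal position
pose = best_base_pose(candidates, goal)
```

Swapping in a real manipulability score changes only `score_pose`; the selection loop is unchanged.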
 

Human preferences

During assistance, a person will typically prefer that the robot not spill water on them or apply large forces to their body.
Assistive Gym provides a baseline set of human preferences unified across all tasks, which are incorporated directly into the reward function. This allows robots to learn to provide assistance that is consistent with a person's preferences.
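Preference-aware rewards of this form combine the task reward with weighted penalties for preference violations. A simplified sketch (the violation terms and weights below are illustrative placeholders, not the exact set used by Assistive Gym):

```python
def reward(task_reward, violations, weights):
    """Subtract weighted human-preference penalties (e.g. spilled
    water, high applied force) from the task reward."""
    penalty = sum(weights[k] * violations[k] for k in weights)
    return task_reward - penalty

# Hypothetical per-step measurements and preference weights:
violations = {'spilled_water': 0.2, 'applied_force': 1.5}
weights = {'spilled_water': 1.0, 'applied_force': 0.5}
r = reward(1.0, violations, weights)  # 1.0 - (0.2 + 0.75)
```

Because the penalties enter the reward directly, a policy trained on it trades task progress against respecting the person's preferences.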
 

Refer to the paper for details on features in Assistive Gym.
