
RL MPC Locomotion

This repo provides a fast simulation and RL training framework for quadruped locomotion, in which the weight parameters of an MPC controller are predicted dynamically. The control framework is a hierarchical controller composed of a higher-level policy network and a lower-level model predictive controller.

The MPC controller is adapted from Cheetah Software but rewritten in Python, and it fully opens the interface between sensor data and motor commands, so that the controller can be easily ported to any mainstream simulator.

RL training runs in parallel in NVIDIA Isaac Gym using the Unitree Robotics Aliengo model, and the trained policy can be transferred from simulation to reality on a real Aliengo robot (sim2real is not included in this codebase).
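
A minimal sketch of this hierarchical scheme is shown below; all names, shapes, and the dummy dynamics are illustrative assumptions, not the repo's actual API:

    # Hierarchical loop sketch: the policy predicts MPC weights, the MPC turns
    # them into joint torques. Everything here is a stand-in for the real parts.
    import numpy as np

    def policy(obs):
        # High level: RL policy maps observations to MPC weight parameters,
        # e.g. 13 weights for a 13-dimensional MPC state.
        return np.abs(np.tanh(obs[:13])) + 0.1

    def mpc_solve(obs, weights):
        # Low level: weighted MPC solve; a stand-in for the real QP-based
        # controller that returns 12 joint torques.
        return -weights[:12] * obs[:12]

    obs = np.random.randn(48)        # stand-in observation vector
    for _ in range(10):              # control loop: re-weight the MPC each step
        weights = policy(obs)
        tau = mpc_solve(obs, weights)
        obs = 0.9 * obs + 0.1 * np.concatenate([tau, np.zeros(36)])  # dummy transition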

Frameworks

Dependencies

Installation

  1. Clone this repository

    git clone git@github.com:silvery107/rl-mpc-locomotion.git
    git submodule update --init

    Or pass the --recurse-submodules option to git clone to fetch the submodules at the same time.

  2. Create the conda environment:

    conda env create -f environment.yml
  3. Install the python binding of the MPC solver:

    pip install -e .

Quick Start

  1. Play the MPC controller on Aliengo:

    python RL_MPC_Locomotion.py --robot=Aliengo

    All supported robot types are Go1, A1 and Aliengo.

    Note that you need to plug in your Xbox-like gamepad to control it, or pass --disable-gamepad. The controller mode defaults to Fsm (Finite State Machine), and you can also try Min for the minimal MPC controller without the FSM.

    • Gamepad keymap

      Press LB to switch gait types between Trot, Walk and Bound.

      Press RB to switch FSM states between Locomotion and Recovery Stand.

  2. Train a new policy: set bridge_MPC_to_RL to True in <MPC_Controller/Parameters.py> (see the sketch after this list):

    cd RL_Environment
    python train.py task=Aliengo headless=False

    Press the v key to disable viewer updates, and press it again to resume. Set headless=True to train without rendering.

    Tensorboard support is available; run tensorboard --logdir runs.

  3. Load a pretrained checkpoint:

    python train.py task=Aliengo checkpoint=runs/Aliengo/nn/Aliengo.pth test=True num_envs=4

    Set test=False to continue training.

  4. Run the pretrained weight-policy for the MPC controller on Aliengo: set bridge_MPC_to_RL to False in <MPC_Controller/Parameters.py>

    python RL_MPC_Locomotion.py --robot=Aliengo --mode=Policy --checkpoint=path/to/ckpt

    If no checkpoint is given, it will load the latest run.
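
The bridge_MPC_to_RL switch used in steps 2 and 4 is a plain attribute in <MPC_Controller/Parameters.py>; a minimal sketch of what it looks like (the surrounding class layout is an assumption, not verbatim):

    # Sketch of the flag in MPC_Controller/Parameters.py (assumed layout).
    class Parameters:
        # True: expose the MPC pipeline to the RL environment for training.
        # False: run the MPC controller standalone or under a trained weight-policy.
        bridge_MPC_to_RL = False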

Roadmap

User Notes

Gallery


rl-mpc-locomotion's Issues

Switch PPO backend to rsl_rl

The PPO backend rl_games in this repo seems to be a "bad" choice for legged learning, and I am using rsl_rl intensively for my current project. If there is a need or I have some free time, this could become planned work.

Question about the physical parameter of robot

Thank you for your great work! I am very interested in it.
I want to add the Unitree Go2 to RobotType, so I checked <rl-mpc-locomotion/MPC_Controller/common/Quadruped.py>, which defines some physical properties of the robots, like the A1 and Go1. I would like to know how some of the parameters are determined so that I can adjust them for the Go2. Take the Go1 as an example:

    if robotype is RobotType.GO1:
        self._abadLinkLength = 0.08
        self._hipLinkLength = 0.213
        self._kneeLinkLength = 0.213
        self._kneeLinkY_offset = 0.0
        self._abadLocation = np.array([0.1881, 0.04675, 0], dtype=DTYPE).reshape((3, 1))
        self._bodyName = "trunk"
        self._bodyMass = 5.204 * 2
        self._bodyInertia = np.array([0.0168128557, 0, 0,
                                      0, 0.063009565, 0,
                                      0, 0, 0.0716547275]) * 5
        self._bodyHeight = 0.26
        self._friction_coeffs = np.ones(4, dtype=DTYPE) * 0.4
        # (roll_pitch_yaw, position, angular_velocity, velocity, gravity_place_holder)
        self._mpc_weights = np.array([1.0, 1.5, 0.0,
                                      0.0, 0.0, 50,
                                      0.0, 0.0, 0.1,
                                      1.0, 1.0, 0.1,
                                      0.0], dtype=DTYPE) * 10

What do the following parameters mean?
self._abadLinkLength
self._abadLocation
How are these related to the URDF file in the assets folder, or can they be determined from the URDF?
Why is self._bodyMass multiplied by 2?
Is self._mpc_weights just an initial value?
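
Regarding the URDF question, a minimal sketch of pulling a link mass and a joint origin out of a URDF with only the standard library (the file path and the link/joint names are assumptions for illustration; the repo may derive these values differently):

    # Hedged sketch: read candidate sources for _bodyMass and _abadLocation
    # from a URDF. Path and element names are illustrative assumptions.
    import xml.etree.ElementTree as ET

    root = ET.parse("assets/go2/urdf/go2.urdf").getroot()  # hypothetical path

    for link in root.iter("link"):
        if link.get("name") == "trunk":                    # assumed body link name
            mass = float(link.find("inertial/mass").get("value"))
            print("trunk mass:", mass)

    for joint in root.iter("joint"):
        if joint.get("name") == "FR_hip_joint":            # assumed hip joint name
            print("FR hip origin:", joint.find("origin").get("xyz"))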

Parallelize MPC solver fully on GPU

I do have a parallelized, torch-based version implemented, but it has minor bugs when solving the QP with qpth. If there is a need or I have some free time, this could become planned work.

Although I know Yuxiang already has a parallelized QP controller well implemented lol.
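
For reference, the batched-QP pattern that qpth supports looks roughly like this (a minimal sketch with an arbitrary box-constrained QP, not the repo's MPC formulation):

    # Hedged sketch: solve n_envs independent QPs in one batched qpth call.
    # min_z 0.5 z^T Q z + p^T z  s.t.  G z <= h  (no equality constraints).
    import torch
    from qpth.qp import QPFunction

    n_envs, nz = 16, 12                              # e.g. 12 contact-force variables per env
    Q = torch.eye(nz).repeat(n_envs, 1, 1)           # per-env quadratic cost (must be PD)
    p = torch.randn(n_envs, nz)                      # per-env linear cost
    G = torch.cat([torch.eye(nz), -torch.eye(nz)]).repeat(n_envs, 1, 1)
    h = 100.0 * torch.ones(n_envs, 2 * nz)           # box constraints |z_i| <= 100
    e = torch.empty(0)                               # empty tensor = no equality constraints

    z = QPFunction(verbose=False)(Q, p, G, h, e, e)  # (n_envs, nz), one batched solve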

How to change the step frequency?

Hello! This is really great work! When I run the MPC controller, I find that the robot's step frequency is very high. I would like to know how I can change it. Thank you!

Small shifting with zero command

Hi @silvery107,

Thanks a lot for releasing the nice project. During studying the code base, I met some issues and want to discuss them with you.

  1. In the update function within <MPC_Controller/common/StateEstimator.py>, the current code for updating vBody and omegaBody is as follows:

        self.result.vBody = self.result.rBody @ self.result.vWorld
        self.result.omegaBody = self.result.rBody @ self.result.omegaWorld

     However, I believe the correct transformations should be:

        self.result.vBody = self.result.rBody.T @ self.result.vWorld
        self.result.omegaBody = self.result.rBody.T @ self.result.omegaWorld

     This is based on the assumption that we want to transform the linear and angular velocities from the world frame to the body frame, i.e. v_b = R_b_to_s.T @ v_s (a small numerical check is sketched at the end of this message).

  2. Additionally, I noticed a minor drift on the yaw axis and the y axis when the command is set to zero, as shown in the following video:
    https://github.com/silvery107/rl-mpc-locomotion/assets/71823391/0c03f991-0c5c-46b8-b499-c07a644657c9
    Could you please give some suggestions on how to improve it?

Thanks a lot for your time and I look forward to your response~
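
For what it's worth, a small numerical check of the math in point 1 (hedged: whether rBody maps body-to-world or world-to-body depends on the estimator's convention, so this only illustrates the transpose relationship):

    # If R maps body coordinates to world coordinates, then R.T (not R)
    # brings a world-frame vector back into the body frame.
    import numpy as np

    yaw = 0.7
    R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],   # body->world rotation about z
                  [np.sin(yaw),  np.cos(yaw), 0.0],
                  [0.0,          0.0,         1.0]])

    v_body = np.array([1.0, 0.0, 0.0])                # robot moving along its own x axis
    v_world = R @ v_body                              # same vector in world coordinates

    print(R.T @ v_world)   # [1. 0. 0.] -> transpose recovers the body-frame vector
    print(R @ v_world)     # generally not [1. 0. 0.]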

Sim2Real in Go1

I am trying to do sim2real on Go1 based on your code.
In your code, controller_dt = 0.01 (in seconds), but I am curious whether controller_dt was also set to 0.01 on the actual robot.

Questions about isaac sim environment configuration issues and how to read the framework

Hello, I would like to ask whether I can still use this project if I have successfully installed Isaac Gym but cannot run Isaac Sim correctly. Isaac Sim reports that the graphics card cannot be found, even though I made sure to install the relevant driver myself.
In addition, I would like to ask whether terrain information, such as scanned-point data, is part of what the framework uses for training. If so, where should I start in order to learn how the environment information is configured?

Bump python version to 3.8

I have tested this codebase under Python 3.8 earlier and didn't see much incompatibility. The only limit on the Python version seems to be the bindings from Isaac Gym. I will bump the Python version after a more thorough test.

Can you share your "simulation to reality " code base?

Hi, I have successfully trained a PPO model in Isaac Gym and deployed it to ROS and Gazebo. The model walks well in Gazebo, but I don't know how to deploy it to a real A1 robot. I read Unitree's official demo, but it is in C++ and I don't know how to deploy in C++. Do you have Python code to control the robot's 12 joints, like in Gazebo?

Go1 Support

Hi @silvery107,
Thanks for this very rich work.

Could you list some guidelines for deploying your code to Unitree Go1?

Thanks for the feedback.

Lucas

Switch the osqp folder to a submodule

I somehow missed the correct version of osqp when I first published this repo. The osqp folder should be replaced with a submodule pinned at its corresponding commit.
