
Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation


Repository address: https://github.com/Skylark0924/Rofunc
Documentation: https://rofunc.readthedocs.io/

The Rofunc package focuses on Imitation Learning (IL), Reinforcement Learning (RL), and Learning from Demonstration (LfD) for (humanoid) robot manipulation. It provides convenient Python functions for demonstration collection, data pre-processing, LfD algorithms, planning, and control. We also provide an IsaacGym- and OmniIsaacGym-based robot simulator for evaluation. The package aims to advance the field as a full-process toolkit and validation platform that simplifies and standardizes demonstration data collection, processing, learning, and deployment on robots.

Update News πŸŽ‰πŸŽ‰πŸŽ‰

Installation

Please refer to the installation guide.

Documentation

Documentation | Example Gallery

To give you a quick overview of the pipeline of rofunc, we provide an interesting example of learning to play Taichi from human demonstration. You can find it in the Quick start section of the documentation.
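Several of the LfD methods listed below (GMR, TP-GMM, TP-GMR) build on Gaussian mixture regression. As a rough, self-contained illustration of the core GMR conditioning step — with hand-specified mixture components rather than ones fitted by EM, and not using Rofunc's actual API:

```python
import numpy as np

# Gaussian mixture regression (GMR): condition a joint GMM over (t, x)
# on time t to predict x. The two components here are hand-specified for
# illustration; in practice they are fitted to demonstration data via EM.
means = np.array([[0.2, 1.0], [0.8, 3.0]])      # per-component mean of (t, x)
covs = np.array([[[0.02, 0.0], [0.0, 0.1]],
                 [[0.02, 0.0], [0.0, 0.1]]])    # per-component covariance
priors = np.array([0.5, 0.5])

def gmr(t):
    """Predict E[x | t] under the mixture."""
    # responsibilities h_k(t) ∝ prior_k * N(t | mu_t_k, sigma_tt_k)
    mu_t, var_t = means[:, 0], covs[:, 0, 0]
    h = priors * np.exp(-0.5 * (t - mu_t) ** 2 / var_t) / np.sqrt(2 * np.pi * var_t)
    h /= h.sum()
    # per-component conditional mean: mu_x + sigma_xt / sigma_tt * (t - mu_t)
    cond = means[:, 1] + covs[:, 1, 0] / var_t * (t - mu_t)
    return float(h @ cond)

print(gmr(0.2))  # ≈ 1.0 (first component dominates)
print(gmr(0.8))  # ≈ 3.0 (second component dominates)
```

The same conditioning rule generalizes to multivariate inputs and outputs; task-parameterized variants (TP-GMM) additionally transform the components into task-specific frames before conditioning.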

The available functions and planned features are listed below.

Note: ✅ achieved, 🔃 reformatting, ⛔ TODO

| Data | Learning | P&C | Tools | Simulator |
|---|---|---|---|---|
| xsens.record ✅ | DMP ⛔ | LQT ✅ | config ✅ | Franka ✅ |
| xsens.export ✅ | GMR ✅ | LQTBi ✅ | logger ✅ | CURI ✅ |
| xsens.visual ✅ | TPGMM ✅ | LQTFb ✅ | datalab ✅ | CURIMini 🔃 |
| opti.record ✅ | TPGMMBi ✅ | LQTCP ✅ | robolab.coord ✅ | CURISoftHand ✅ |
| opti.export ✅ | TPGMM_RPCtl ✅ | LQTCPDMP ✅ | robolab.fk ✅ | Walker ✅ |
| opti.visual ✅ | TPGMM_RPRepr ✅ | LQR ✅ | robolab.ik ✅ | Gluon 🔃 |
| zed.record ✅ | TPGMR ✅ | PoGLQRBi ✅ | robolab.fd ⛔ | Baxter 🔃 |
| zed.export ✅ | TPGMRBi ✅ | iLQR 🔃 | robolab.id ⛔ | Sawyer 🔃 |
| zed.visual ✅ | TPHSMM ✅ | iLQRBi 🔃 | visualab.dist ✅ | Humanoid ✅ |
| emg.record ✅ | RLBaseLine(SKRL) ✅ | iLQRFb 🔃 | visualab.ellip ✅ | Multi-Robot ✅ |
| emg.export ✅ | RLBaseLine(RLlib) ✅ | iLQRCP 🔃 | visualab.traj ✅ | |
| mmodal.record ⛔ | RLBaseLine(ElegRL) ✅ | iLQRDyna 🔃 | oslab.dir_proc ✅ | |
| mmodal.sync ✅ | BCO(RofuncIL) 🔃 | iLQRObs 🔃 | oslab.file_proc ✅ | |
| | BC-Z(RofuncIL) ⛔ | MPC ⛔ | oslab.internet ✅ | |
| | STrans(RofuncIL) ⛔ | RMP ⛔ | oslab.path ✅ | |
| | RT-1(RofuncIL) ⛔ | | | |
| | A2C(RofuncRL) ✅ | | | |
| | PPO(RofuncRL) ✅ | | | |
| | SAC(RofuncRL) ✅ | | | |
| | TD3(RofuncRL) ✅ | | | |
| | CQL(RofuncRL) ⛔ | | | |
| | TD3BC(RofuncRL) ⛔ | | | |
| | DTrans(RofuncRL) ✅ | | | |
| | EDAC(RofuncRL) ⛔ | | | |
| | AMP(RofuncRL) ✅ | | | |
| | ASE(RofuncRL) ✅ | | | |
| | ODTrans(RofuncRL) ⛔ | | | |
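The P&C column includes LQT (linear quadratic tracking). As a rough, self-contained sketch of the underlying idea — a 1-D double integrator with hand-picked cost weights, not Rofunc's actual `LQT` API — the problem is solved by a backward Riccati recursion with an affine reference-tracking term, followed by a forward rollout:

```python
import numpy as np

# Minimal finite-horizon linear quadratic tracking (LQT) for a 1-D
# double integrator: penalize position error at two viapoints, solve
# via backward Riccati recursion with an affine term, then roll out.
dt, N = 0.01, 100
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [dt]])
R = np.array([[1e-6]])                  # control-effort weight

refs = {N // 2: 1.0, N: 2.0}            # time step -> desired position
Q = np.zeros((N + 1, 2, 2))
r = np.zeros((N + 1, 2))
for k, pos in refs.items():
    Q[k, 0, 0] = 1e3                    # position penalized only at viapoints
    r[k] = [pos, 0.0]

# backward pass: value function V_k(x) = x' P_k x - 2 q_k' x + const
P, q = Q[N].copy(), Q[N] @ r[N]
gains = []
for k in range(N - 1, -1, -1):
    S = np.linalg.inv(R + B.T @ P @ B)
    K = S @ B.T @ P @ A                 # feedback gain
    kff = S @ B.T @ q                   # feedforward (reference) term
    gains.append((K, kff))
    P = Q[k] + A.T @ P @ (A - B @ K)
    q = Q[k] @ r[k] + (A - B @ K).T @ q
gains.reverse()

# forward rollout from the origin with u_k = -K_k x_k + kff_k
x = np.zeros(2)
traj = [x.copy()]
for K, kff in gains:
    u = -K @ x + kff
    x = A @ x + B @ u
    traj.append(x.copy())
traj = np.array(traj)
print(traj[N // 2, 0], traj[N, 0])      # ≈ 1.0 and ≈ 2.0
```

Bimanual variants (LQTBi) and control-primitive versions (LQTCP) extend this same recursion with coordination constraints and sparser control parameterizations.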

RofuncRL

RofuncRL is one of the most important sub-packages of Rofunc. It is a modular, easy-to-use reinforcement learning sub-package designed for robot learning tasks. It has been tested with simulators such as OpenAI Gym, IsaacGym, and OmniIsaacGym (see the example gallery), as well as with differentiable simulators such as PlasticineLab and DiffCloth. Here is a list of robot tasks trained with RofuncRL:

For more details, please check the documentation for RofuncRL.

| Tasks | Animation | Performance | ModelZoo |
|---|---|---|---|
| Ant | | | ✅ |
| Cartpole | | | |
| FrankaCabinet | | | ✅ |
| FrankaCubeStack | | | |
| CURICabinet | | | ✅ |
| CURICabinetImage | | | |
| CURICabinetBimanual | | | |
| Humanoid | | | ✅ |
| HumanoidAMP_backflip | | | ✅ |
| HumanoidAMP_walk | | | ✅ |
| HumanoidAMP_run | | | ✅ |
| HumanoidAMP_dance | | | ✅ |
| HumanoidAMP_hop | | | ✅ |
| HumanoidASEGetupSwordShield | | | ✅ |
| HumanoidASEPerturbSwordShield | | | ✅ |
| HumanoidASEHeadingSwordShield | | | ✅ |
| HumanoidASELocationSwordShield | | | ✅ |
| HumanoidASEReachSwordShield | | | ✅ |
| HumanoidASEStrikeSwordShield | | | ✅ |
| BiShadowHandBlockStack | | | ✅ |
| BiShadowHandBottleCap | | | ✅ |
| BiShadowHandCatchAbreast | | | ✅ |
| BiShadowHandCatchOver2Underarm | | | ✅ |
| BiShadowHandCatchUnderarm | | | ✅ |
| BiShadowHandDoorOpenInward | | | ✅ |
| BiShadowHandDoorOpenOutward | | | ✅ |
| BiShadowHandDoorCloseInward | | | ✅ |
| BiShadowHandDoorCloseOutward | | | ✅ |
| BiShadowHandGraspAndPlace | | | ✅ |
| BiShadowHandLiftUnderarm | | | ✅ |
| BiShadowHandOver | | | ✅ |
| BiShadowHandPen | | | ✅ |
| BiShadowHandPointCloud | | | |
| BiShadowHandPushBlock | | | ✅ |
| BiShadowHandReOrientation | | | ✅ |
| BiShadowHandScissors | | | ✅ |
| BiShadowHandSwingCup | | | ✅ |
| BiShadowHandSwitch | | | ✅ |
| BiShadowHandTwoCatchUnderarm | | | ✅ |
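The tasks above are trained with policy-gradient methods such as A2C, PPO, and AMP. As a minimal, self-contained illustration of the basic REINFORCE update that these methods build on — a toy two-armed bandit, illustrative only and not RofuncRL code:

```python
import numpy as np

# Tiny REINFORCE sketch on a two-armed Gaussian bandit: the basic
# policy-gradient update that full RL algorithms (A2C, PPO, ...) refine.
rng = np.random.default_rng(0)
logits = np.zeros(2)                    # policy parameters
arm_means = np.array([0.0, 1.0])        # arm 1 pays more on average

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

baseline, lr = 0.0, 0.1
for step in range(2000):
    p = softmax(logits)
    a = rng.choice(2, p=p)              # sample an action from the policy
    reward = arm_means[a] + rng.normal(0.0, 0.1)
    baseline += 0.05 * (reward - baseline)      # running-average baseline
    grad = -p
    grad[a] += 1.0                              # d log pi(a) / d logits
    logits += lr * (reward - baseline) * grad   # REINFORCE update

print(softmax(logits))  # strongly prefers arm 1
```

Full algorithms replace the running-average baseline with a learned value function, batch the updates, and (in PPO's case) clip the policy ratio for stability.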

Star History

Star History Chart

Citation

If you use rofunc in a scientific publication, we would appreciate citations to the following paper:

@software{liu2023rofunc,
  title={Rofunc: The full process python package for robot learning from demonstration and robot manipulation},
  author={Liu, Junjia and Li, Chenzui and Delehelle, Donatien and Li, Zhihao and Chen, Fei},
  month=jun,
  year={2023},
  publisher={Zenodo},
  doi={10.5281/zenodo.8084510},
  url={https://doi.org/10.5281/zenodo.8084510}
}

Related Papers

  1. Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects (IEEE RA-L 2022 | Code)
@article{liu2022robot,
  title={Robot cooking with stir-fry: Bimanual non-prehensile manipulation of semi-fluid objects},
  author={Liu, Junjia and Chen, Yiting and Dong, Zhipeng and Wang, Shixiong and Calinon, Sylvain and Li, Miao and Chen, Fei},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={2},
  pages={5159--5166},
  year={2022},
  publisher={IEEE}
}
  2. SoftGPT: Learn Goal-oriented Soft Object Manipulation Skills by Generative Pre-trained Heterogeneous Graph Transformer (IROS 2023 | Code coming soon)
@inproceedings{liu2023softgpt,
  title={Softgpt: Learn goal-oriented soft object manipulation skills by generative pre-trained heterogeneous graph transformer},
  author={Liu, Junjia and Li, Zhihao and Lin, Wanyu and Calinon, Sylvain and Tan, Kay Chen and Chen, Fei},
  booktitle={2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  pages={4920--4925},
  year={2023},
  organization={IEEE}
}
  3. BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration (IEEE CDC 2023 | Code)
@article{liu2023birp,
  title={BiRP: Learning Robot Generalized Bimanual Coordination using Relative Parameterization Method on Human Demonstration},
  author={Liu, Junjia and Sim, Hengyi and Li, Chenzui and Chen, Fei},
  journal={arXiv preprint arXiv:2307.05933},
  year={2023}
}

The Team

Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.

Acknowledgements

We would like to acknowledge the following projects:

Learning from Demonstration

  1. pbdlib
  2. Ray RLlib
  3. ElegantRL
  4. SKRL
  5. DexterousHands

Planning and Control

  1. Robotics codes from scratch (RCFS)

Contributors

skylark0924, drawzeropoint, lee950507, zhihaoairobotic, hawkeex, 1am5hy, ddonatien, zainzh, reichenbar, rip4kobe
