D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions

Paper | Video | Project Page

Official code release for the CVPR 2022 paper D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions.

Contents

  1. Info
  2. Installation
  3. Demo
  4. Training
  5. Evaluation
  6. Citation
  7. License

Info

This code was tested with Python 3.8 on Ubuntu 18.04 and 20.04 with an NVIDIA GeForce RTX 2080 Ti. The repository requires about 1.8 GB of disk space because it ships with the full RaiSim physics simulator, into which D-Grasp is integrated.

The D-Grasp related code can be found in the raisimGymTorch subfolder.

Installation

As good practice for Python package management, we recommend using a virtual environment (e.g., virtualenv or conda) so that packages from different projects do not interfere with each other.
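
For example, a minimal virtualenv setup might look like this (the environment name dgrasp-env is an arbitrary choice, not something the repository prescribes):

```shell
# Create an isolated environment so the RaiSim/D-Grasp dependencies
# do not interfere with other projects; the name is arbitrary.
python3 -m venv dgrasp-env
# Activate it (Linux/macOS shells).
. dgrasp-env/bin/activate
# Confirm pip now resolves inside the environment.
python -m pip --version
```

With conda, the equivalent would be `conda create -n dgrasp-env python=3.8` followed by `conda activate dgrasp-env`.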

For installation, we follow the official RaiSim guide for the physics simulator. Follow our documentation of the installation steps under docs/INSTALLATION.md. Note that you need a valid (free) license and an activation key, available via this link.

Demo

We provide some pretrained models to view the output of our method. They are stored in this folder. For interactive visualizations, you need to open the Unity executable in the raisimUnity folder.

  • To visualize the policy for motion synthesis, run the following from the raisimGymTorch folder:

    python raisimGymTorch/env/envs/dgrasp/runner_motion.py -ao -e 'all_objs'  -sd 'pretrained_policies' -w 'full_6000.pt' 

    The visualization should look like this, with randomly sampled objects and sequences:

    To just visualize one object, add the flag -o <obj_id>.

  • If you want to run the policy that was trained for a single object, run the following command from the raisimGymTorch folder:

    python raisimGymTorch/env/envs/dgrasp/runner_motion.py -o 12 -e '021_bleach_dexycb' -sd 'pretrained_policies' -w 'full_3000.pt' 
  • Similarly, policies can be run for the other objects. The commands for all pretrained policies are stored in this file. (The missing models will be added over the coming days).

Training

  • To train a new policy from scratch with labels from DexYCB (Chao et al., CVPR 2021), run the runner.py file from within the raisimGymTorch folder as follows:

    python raisimGymTorch/env/envs/dgrasp/runner.py  -o <obj_id> -e <exp name> -d <experiment path> -sd <storage folder> 

    where -o indicates the object id according to DexYCB, -d is the path where the RaiSim data should be stored, and -sd indicates the folder in which your current batch of experiments will be stored. By default, experiments are stored in a folder called data_all inside the raisimGymTorch directory.

  • If you want to train a single policy over all objects, run the following:

    python raisimGymTorch/env/envs/dgrasp/runner.py -ao -e <exp name> -d <experiment path> -sd <storage folder> -nr 1 -itr 6001

Quantitative Evaluations

  • Once you have a trained policy, you can evaluate it with the following command:

    python raisimGymTorch/env/envs/dgrasp_test/runner.py -o <obj_id> -e '<exp name>' -w '<policy path>.pt' -d <experiment path> -sd <storage folder>
  • For example, running the following command:

    python raisimGymTorch/env/envs/dgrasp_test/runner.py -o 12 -e '021_bleach_dexycb' -sd 'pretrained_policies' -w 'full_3000.pt' 
    

    should yield the following output (different hardware may lead to slight deviations in the displacement values):

    ----------------------------------------------------
    object:                                      12
    success:                                  1.000
    disp mean:                                0.282
    disp std:                                 0.092
    ----------------------------------------------------
    
  • Note that if you want to evaluate the policy trained over all objects in the same way, -o <obj_id> can be replaced by -ao. For example, the policy trained on all objects (compare with the main paper) can be evaluated with:

    python raisimGymTorch/env/envs/dgrasp_test/runner.py -ao -e 'all_objs'  -sd 'pretrained_policies' -w 'full_6000.pt' 
    

    The last rows of the terminal output should look like this (different hardware may lead to slight deviations for the displacement values):

    ----------------------------------------------------
    all objects
    total success rate:                       0.777
    disp mean:                                4.850
    disp std:                                 9.337
    ----------------------------------------------------
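
The reported numbers are plain aggregates over the evaluation trials: the fraction of successful grasps and the mean/std of the object displacement. As an illustrative sketch (not the repository's evaluation code; the function name and data layout are assumptions), per-object trial results could be pooled into such an overall summary like this:

```python
import math

def pool_stats(per_object):
    """Pool per-trial results from several objects into an overall
    success rate, displacement mean, and displacement std.
    `per_object` maps object id -> (success flags, displacements)."""
    successes, disps = [], []
    for succ, disp in per_object.values():
        successes.extend(succ)
        disps.extend(disp)
    rate = sum(successes) / len(successes)
    mean = sum(disps) / len(disps)
    # Population standard deviation over all pooled trials.
    var = sum((d - mean) ** 2 for d in disps) / len(disps)
    return rate, mean, math.sqrt(var)

# Toy example with made-up numbers (not the paper's results):
stats = {12: ([1, 1], [0.28, 0.30]), 21: ([1, 0], [0.10, 9.5])}
rate, mean, std = pool_stats(stats)
print(f"success: {rate:.3f}  disp mean: {mean:.3f}  disp std: {std:.3f}")
```

A single outlier displacement (e.g., a dropped object) can inflate the pooled mean and std considerably, which is consistent with the all-objects numbers being much larger than the single-object ones.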
    

Visual Evaluation

If you want interactive visualizations, you need to open the Unity executable in the raisimUnity folder. Videos of the sequences are stored in the raisimUnity/<OS>/Screenshot folder. Note that this does not work on headless servers.

Visual Evaluation Stability Test

  • This command visualizes the experiment where the surface is removed to see whether an object slips:

    python raisimGymTorch/env/envs/dgrasp_test/runner.py -o <obj_id> -e '<exp name>' -w '<policy path>.pt' -d <experiment path> -sd <storage folder> -ev
    
  • For example, the command:

    python raisimGymTorch/env/envs/dgrasp_test/runner.py -o 12 -e '021_bleach_dexycb' -sd 'pretrained_policies' -w 'full_3000.pt' -ev
    

    should yield the following output:

Visual Evaluation Motion Synthesis

  • This command visualizes the experiment where the object is moved to a target 6D pose:

    python raisimGymTorch/env/envs/dgrasp/runner_motion.py -o <obj_id> -e '<exp name>' -w '<policy path>.pt' -d <experiment path> -sd <storage folder>
    

Data Generation (Coming Soon)

We provide the possibility to generate data with our framework and customized 6D target goal spaces.
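
A 6D target pose couples a 3D position with a 3D orientation. Purely as an illustrative sketch of what a customized goal space could look like (the function name, position bounds, and axis-angle sampling below are assumptions for illustration, not the repository's definitions):

```python
import math
import random

def sample_target_pose(pos_low=(-0.3, -0.3, 0.2), pos_high=(0.3, 0.3, 0.8)):
    """Sample one 6D goal: an xyz position inside an axis-aligned box
    plus a random orientation encoded as a unit quaternion (w, x, y, z)."""
    pos = [random.uniform(lo, hi) for lo, hi in zip(pos_low, pos_high)]
    # Random rotation: a uniformly distributed axis on the unit sphere,
    # combined with a uniformly sampled rotation angle.
    axis = [random.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(a * a for a in axis))
    axis = [a / norm for a in axis]
    angle = random.uniform(0.0, math.pi)
    half = angle / 2.0
    quat = [math.cos(half)] + [math.sin(half) * a for a in axis]
    return pos, quat
```

Restricting the box bounds or the admissible rotations would yield a customized goal space in this sense.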

Citation

To cite us, please use the following:

@inproceedings{christen2022dgrasp,
      title={D-Grasp: Physically Plausible Dynamic Grasp Synthesis for Hand-Object Interactions},
      author={Christen, Sammy and Kocabas, Muhammed and Aksan, Emre and Hwangbo, Jemin and Song, Jie and Hilliges, Otmar},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2022}
}
License

See the following license.
