
Introduction

ACRONYM is a dataset of 17.7M simulated parallel-jaw grasps of 8872 objects. It was generated using NVIDIA FleX.

This repository contains a sample of the grasping dataset and tools to visualize grasps, generate random scenes, and render observations.

For using the full ACRONYM dataset, see instructions below.

License

The source code is released under MIT License. The dataset is released under CC BY-NC 4.0.

Requirements

  • Python3
  • python -m pip install -r requirements.txt

Installation

  • python -m pip install -e .

Use Cases

Visualize Grasps

usage: acronym_visualize_grasps.py [-h] [--num_grasps NUM_GRASPS] [--mesh_root MESH_ROOT] input [input ...]

Visualize grasps from the dataset.

positional arguments:
  input                 HDF5 or JSON Grasp file(s).

optional arguments:
  -h, --help            show this help message and exit
  --num_grasps NUM_GRASPS
                        Number of grasps to show. (default: 20)
  --mesh_root MESH_ROOT
                        Directory used for loading meshes. (default: .)

Examples

The following command shows grasps for a mug from the dataset (20 by default; adjust with --num_grasps). Grasp markers are colored green or red depending on whether the simulated grasp was a success or a failure:

acronym_visualize_grasps.py --mesh_root data/examples/ data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5
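
The script is a thin wrapper around the helpers in acronym_tools. Below is a minimal sketch of doing the same thing programmatically; the exact return values of load_grasps and the keyword arguments of create_gripper_marker are assumptions, so check acronym_tools/acronym.py for the actual signatures:

# Hedged sketch: load a grasp file, load its mesh, and show a few grasp markers.
import trimesh
from acronym_tools import load_mesh, load_grasps, create_gripper_marker

grasp_file = "data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5"

# The mesh path stored in the grasp file is resolved relative to mesh_root_dir.
obj_mesh = load_mesh(grasp_file, mesh_root_dir="data/examples/")

# Assumed to return 4x4 grasp poses and a per-grasp success flag.
transforms, success = load_grasps(grasp_file)

scene = trimesh.Scene([obj_mesh])
for T, ok in zip(transforms[:20], success[:20]):
    color = [0, 255, 0] if ok else [255, 0, 0]  # green = success, red = failure
    scene.add_geometry(create_gripper_marker(color=color).apply_transform(T))
scene.show()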

Generate Random Scenes and Visualize Grasps

usage: acronym_generate_scene.py [-h] [--objects OBJECTS [OBJECTS ...]]
                                 --support SUPPORT
                                 [--support_scale SUPPORT_SCALE]
                                 [--mesh_root MESH_ROOT] [--show_grasps]
                                 [--num_grasps_per_object NUM_GRASPS_PER_OBJECT]

Generate a random scene arrangement and filter out grasps that are in
collision.

optional arguments:
  -h, --help            show this help message and exit
  --objects OBJECTS [OBJECTS ...]
                        HDF5 or JSON Object file(s). (default: None)
  --support SUPPORT     HDF5 or JSON File for support object. (default: None)
  --support_scale SUPPORT_SCALE
                        Scale factor of support mesh. (default: 0.025)
  --mesh_root MESH_ROOT
                        Directory used for loading meshes. (default: .)
  --show_grasps         Show all grasps that are not in collision. (default:
                        False)
  --num_grasps_per_object NUM_GRASPS_PER_OBJECT
                        Maximum number of grasps to show per object. (default:
                        20)

Examples

This will show a randomly generated scene with a table as a support mesh and four mugs placed on top of it:

acronym_generate_scene.py --mesh_root data/examples/ --objects data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 --support data/examples/grasps/Table_99cf659ae2fe4b87b72437fd995483b_0.009700376721042367.h5

Same as above, but also shows green grasp markers (at most 20 per object) for successful grasps, filtering out those that are in collision:

acronym_generate_scene.py --mesh_root data/examples/ --objects data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 --support data/examples/grasps/Table_99cf659ae2fe4b87b72437fd995483b_0.009700376721042367.h5 --show_grasps
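
The collision filtering behind --show_grasps can be approximated with trimesh's CollisionManager. The following is only an illustrative sketch, not the package's own implementation, and it requires python-fcl to be installed:

import trimesh

def filter_colliding_grasps(scene_meshes, gripper_marker, transforms):
    # Keep only grasp poses whose gripper marker does not intersect any scene mesh.
    manager = trimesh.collision.CollisionManager()
    for i, mesh in enumerate(scene_meshes):
        manager.add_object("object_{}".format(i), mesh)
    return [T for T in transforms
            if not manager.in_collision_single(gripper_marker, transform=T)]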

Render and Visualize Observations

usage: acronym_render_observations.py [-h] [--objects OBJECTS [OBJECTS ...]]
                                      --support SUPPORT
                                      [--support_scale SUPPORT_SCALE]
                                      [--mesh_root MESH_ROOT] [--show_scene]

Render observations of a randomly generated scene.

optional arguments:
  -h, --help            show this help message and exit
  --objects OBJECTS [OBJECTS ...]
                        HDF5 or JSON Object file(s). (default: None)
  --support SUPPORT     HDF5 or JSON File for support object. (default: None)
  --support_scale SUPPORT_SCALE
                        Scale factor of support mesh. (default: 0.025)
  --mesh_root MESH_ROOT
                        Directory used for loading meshes. (default: .)
  --show_scene          Show the scene and camera pose from which observations
                        are rendered. (default: False)

Examples

This will show an RGB image, a depth image, and a segmentation mask rendered from a random viewpoint:

acronym_render_observations.py --mesh_root data/examples/ --objects data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 --support data/examples/grasps/Table_99cf659ae2fe4b87b72437fd995483b_0.009700376721042367.h5

Same as above but also visualizes the scene and camera position in 3D:

acronym_render_observations.py --mesh_root data/examples/ --objects data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5 --support data/examples/grasps/Table_99cf659ae2fe4b87b72437fd995483b_0.009700376721042367.h5 --show_scene
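
Observations like these can be rendered from a trimesh scene with pyrender. The sketch below is an illustration rather than the script's actual code; it assumes pyrender is installed and omits the segmentation mask:

import numpy as np
import pyrender

def render_observation(trimesh_scene, camera_pose, width=640, height=480):
    # Convert the trimesh scene and add a camera plus a light at the same pose.
    scene = pyrender.Scene.from_trimesh_scene(trimesh_scene)
    scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=camera_pose)
    scene.add(pyrender.DirectionalLight(intensity=3.0), pose=camera_pose)
    renderer = pyrender.OffscreenRenderer(width, height)
    color, depth = renderer.render(scene)  # RGB image and depth map
    renderer.delete()
    return color, depth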

Using the full ACRONYM dataset

  1. Download the full dataset (1.6GB): acronym.tar.gz
  2. Download the ShapeNetSem meshes from https://www.shapenet.org/
  3. Create watertight versions of the downloaded meshes:
    1. Clone and build: https://github.com/hjwdzh/Manifold
    2. Create a watertight mesh version assuming the object path is model.obj: manifold model.obj temp.watertight.obj -s
    3. Simplify it: simplify -i temp.watertight.obj -o model.obj -m -r 0.02
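
A quick sanity check that the conversion produced a watertight mesh, sketched with trimesh:

import trimesh

# Verify the converted mesh before using it for simulation or scene generation.
mesh = trimesh.load("model.obj", force="mesh")
print("watertight:", mesh.is_watertight)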

For more details about the structure of the ACRONYM dataset see: https://sites.google.com/nvidia.com/graspdataset
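
To inspect the structure of a grasp file yourself, h5py's visititems lists every group and dataset, for example:

import h5py

# Print the name and shape of every dataset in a grasp file.
with h5py.File("data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))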

Citation

If you use the dataset, please cite:

@inproceedings{acronym2020,
    title     = {{ACRONYM}: A Large-Scale Grasp Dataset Based on Simulation},
    author    = {Eppner, Clemens and Mousavian, Arsalan and Fox, Dieter},
    year      = {2020},
    booktitle = {Under Review at ICRA 2021}
}

Contributors

barcode, clemense


Issues

Gripper width?

Hello!
Thank you very much for providing this dataset. I have a question.

Do we assume that the gripper is always at a constant opening width of 0.08 for each grasp? I cannot find any array in the provided files describing the width for each of the 2000 grasps per object; I found only /gripper/configuration, which is a scalar. Did I miss something?

Thank you in advance!

ValueError: string is not a file: data/examples/meshes/1Shelves/1e3df0ab57e8ca8587f357007f9e75d1.obj

When the ShapeNetSem archive is unzipped, there is only the models folder, so I run

python scripts/acronym_generate_scene.py --mesh_root models/ --objects grasps/1Shelves_1e3df0ab57e8ca8587f357007f9e75d1_0.011099225885734912.h5 grasps/1Shelves_1e3df0ab57e8ca8587f357007f9e75d1_0.011099225885734912.h5 --support data/examples/grasps/Table_99cf659ae2fe4b87b72437fd995483b_0.009700376721042367.h5 --show_grasps

then I get

Traceback (most recent call last):
  File "/home/user/acronym/scripts/acronym_generate_scene.py", line 123, in <module>
    main()
  File "/home/user/acronym/scripts/acronym_generate_scene.py", line 73, in main
    object_meshes = [load_mesh(o, mesh_root_dir=args.mesh_root) for o in args.objects]
  File "/home/user/acronym/scripts/acronym_generate_scene.py", line 73, in <listcomp>
    object_meshes = [load_mesh(o, mesh_root_dir=args.mesh_root) for o in args.objects]
  File "/home/user/acronym/acronym_tools/acronym.py", line 375, in load_mesh
    obj_mesh = trimesh.load(os.path.join(mesh_root_dir, mesh_fname))
  File "/home/user/anaconda3/envs/acr/lib/python3.9/site-packages/trimesh/exchange/load.py", line 110, in load
    ) = parse_file_args(file_obj=file_obj,
  File "/home/user/anaconda3/envs/acr/lib/python3.9/site-packages/trimesh/exchange/load.py", line 605, in parse_file_args
    raise ValueError('string is not a file: {}'.format(file_obj))
ValueError: string is not a file: models/meshes/1Shelves/1e3df0ab57e8ca8587f357007f9e75d1.obj

Was there a step I missed?


Custom grasp annotation convention

Hello

I am a big fan of your work! Thank you for releasing your code.

For my research project, I would like to generate grasp annotations for my own mesh.

In particular, I wonder which reference frame the position and orientation of each grasp annotation are expressed in.

If I were to use the dataset for real-world grasping by training an NN model like ContactGraspNet or M2T2, should the annotations be with respect to the robot base frame or the camera frame?

Thank you so much in advance for your help and guidance.

How to generate grasps for own objects?

Hi,

Thank you for your excellent work!
I would like to generate some physically feasible grasps for our own objects. Could you give some hints on the implementation based on NVIDIA FleX? Thank you!

Best,
Rui

Questions regarding the certain fields in the .h5 files

Hi,
I have some questions regarding the following fields in each of the grasp files:

/grasps/qualities/flex/object_in_gripper Dataset {2000}

/grasps/qualities/flex/object_motion_during_closing_angular Dataset {2000}

/grasps/qualities/flex/object_motion_during_closing_linear Dataset {2000}

/grasps/qualities/flex/object_motion_during_shaking_angular Dataset {2000}

/grasps/qualities/flex/object_motion_during_shaking_linear Dataset {2000}

For the 'object_in_gripper', 'object_motion_during_closing_angular', and 'object_motion_during_closing_linear' fields, why is each a single array of 2000 numbers instead of something like 2000 SE(3) matrices? How can these numbers be translated into the full position and orientation of a 3D object?

For the 'object_motion_during_shaking_angular' and 'object_motion_during_shaking_linear' fields, are these the rotations in radians and translations in mm during shaking, as described in the paper?
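
For reference, this is roughly how I read these arrays with h5py (field names as listed above; their semantics are exactly what I am asking about):

import h5py

with h5py.File("Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5", "r") as f:
    in_gripper = f["grasps/qualities/flex/object_in_gripper"][:]  # shape (2000,)
    shaking_angular = f["grasps/qualities/flex/object_motion_during_shaking_angular"][:]
    shaking_linear = f["grasps/qualities/flex/object_motion_during_shaking_linear"][:]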

I understand that maintaining a dataset is a lot of work. I really appreciate this open-source dataset and the effort you put into it. Any response would be appreciated.

Can't decode str in python3

Hey :)

I am encountering an error when loading meshes from the h5 files:

line 371, in load_mesh
    mesh_fname = data["object/file"][()].decode('utf-8')
AttributeError: 'str' object has no attribute 'decode'

In Python3, strings are already unicode objects, so dropping .decode('utf-8') resolves the error for me.
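
A version-tolerant variant that handles both bytes and str (depending on how the string was written and on the h5py version) could look like this sketch:

import h5py

def read_mesh_filename(grasp_path):
    # Return the mesh filename stored in a grasp file, tolerating bytes or str.
    with h5py.File(grasp_path, "r") as data:
        fname = data["object/file"][()]
    return fname.decode("utf-8") if isinstance(fname, bytes) else fname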

Parsing of ShapeNetSem meshes

Can you give better instructions regarding the mesh creation with Manifold?
I am following your steps, but they lead to a segmentation fault.
Is the ShapeNetSem file models-OBJ.zip the one needed, or am I doing something wrong?

Negative volume?

Hi. In this dataset, what does the "volume" field represent?
I ask because some objects have a negative volume. For example:
Table_2e7a2d47040a0d4e35836c728d324152_0.009612910954917283.h5 with volume: -6.49676e-05
Barstool_d4fd2dd6e3407f0f5a2aa419fded589d_0.006393724755823298.h5 with volume: -5.2156e-06
Bench_6f5cb68e41ab4356ad5067eac75a07f7_0.011921571439527373.h5 with volume: -2.68208e-05
Basket_b06aa40e972e1de86d431f3195ee60_0.003267646646604986.h5 with volume: -1.41226e-05

script to convert ShapeNet meshes to simulation-compatible ones

This is more of a feature request. It seems like some people are struggling to convert the original meshes even with the instructions in the README, including myself (besides errors on some meshes, my conversion script gets stuck for some reason).
It would be nice if there were a bash script that reproduces the conversion process you did.

This is the script I'm using:

for fn in ~/data/shapenetsem/models-OBJ/models/*.obj
do
  # remove the intermediate file left over from the previous iteration (if any)
  rm -f temp.watertight.obj

  # kill manifold if it hangs for more than 10 seconds
  timeout -sKILL 10 ./manifold "$fn" temp.watertight.obj -s

  if [ $? != 0 ]; then
    echo "skipping $fn"
  else
    out="$(basename "$fn")"
    ./simplify -i temp.watertight.obj -o ~/data/shapenetsem/models-OBJ/simplified/"$out" -m -r 0.02
  fi
done

Valid grasps not displayed in scene generation

Hey,
I have an issue with the scene generation. Everything seems to work fine, except that the valid grasps are not displayed when generating a scene (--show_grasps is set). I am working with Ubuntu 20.04 and a conda environment. Do you have an idea why it is not working? Thanks!

A typo in the README.md

In section Render and Visualize Observations, the second example should start with acronym_render_observations.py instead of acroacronym_render_observations.py.

Errors when executing examples

I wanted to try your examples, but ran into a few problems:
(I am on Ubuntu 18 using Anaconda)

The error occurs when running your example for acronym_visualize_grasps.py

$ acronym_visualize_grasps.py --mesh_root data/examples/ data/examples/grasps/Mug_10f6e09036350e92b3f21f1137c3c347_0.0002682457830986903.h5
Traceback (most recent call last):
  File "/tmp/acronym-env/bin/acronym_visualize_grasps.py", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/tmp/acronym/scripts/acronym_visualize_grasps.py", line 31, in <module>
    from acronym_tools import load_mesh, load_grasps, create_gripper_marker
  File "/tmp/acronym/acronym_tools/__init__.py", line 24, in <module>
    from .acronym import *
  File "/tmp/acronym/acronym_tools/acronym.py", line 28, in <module>
    import trimesh.path
  File "/tmp/acronym-env/lib/python3.7/site-packages/trimesh/path/__init__.py", line 9, in <module>
    from .path import Path2D, Path3D
  File "/tmp/acronym-env/lib/python3.7/site-packages/trimesh/path/path.py", line 34, in <module>
    from . import polygons
  File "/tmp/acronym-env/lib/python3.7/site-packages/trimesh/path/polygons.py", line 3, in <module>
    from shapely import ops
ModuleNotFoundError: No module named 'shapely'

It seems shapely is missing from your requirements.

After installing it with pip install shapely I get this error:

Traceback (most recent call last):
  File "/tmp/acronym-env/bin/acronym_visualize_grasps.py", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/tmp/acronym/scripts/acronym_visualize_grasps.py", line 74, in <module>
    main()
  File "/tmp/acronym/scripts/acronym_visualize_grasps.py", line 55, in main
    obj_mesh = load_mesh(f, mesh_root_dir=args.mesh_root)
  File "/tmp/acronym/acronym_tools/acronym.py", line 375, in load_mesh
    obj_mesh = trimesh.load(os.path.join(mesh_root_dir, mesh_fname))
  File "/tmp/acronym-env/lib/python3.7/posixpath.py", line 94, in join
    genericpath._check_arg_types('join', a, *p)
  File "/tmp/acronym-env/lib/python3.7/genericpath.py", line 155, in _check_arg_types
    raise TypeError("Can't mix strings and bytes in path components") from None
TypeError: Can't mix strings and bytes in path components

I get similar errors for your examples for acronym_generate_scene.py and acronym_render_observations.py

Simulation Implementation

Hi, thanks so much for the fantastic dataset. It has clean implementations and diverse training samples.

I'm using ACRONYM for a grasp planning project with offline reinforcement learning. I'm wondering whether it's possible to open-source the FleX implementation used for data generation. I noticed there is a repo for deformable objects, but using the original implementation may help avoid covariate shift issues.
