
oakink's People

Contributors

anran-xu, kailinli, kelvin34501, lixiny

oakink's Issues

Request for md5sums of the 11 image data zip files?

Hi Lixin, I ran into some errors when executing "unzip single-archive.zip". Could you provide the md5sums of the oakink_image_v2.* files (11 parts in total)? That would help me locate my problem more quickly.
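
For reference, this is the sketch I would use to compute the checksums locally once they are published (assuming the 11 parts sit in the current directory):

    import glob
    import hashlib

    def md5sum(path, chunk_size=1 << 20):
        # Stream the file through md5 in 1 MiB chunks to keep memory flat.
        h = hashlib.md5()
        with open(path, "rb") as f:
            while block := f.read(chunk_size):
                h.update(block)
        return h.hexdigest()

    # Assumes the 11 split-archive parts sit in the current directory.
    for path in sorted(glob.glob("oakink_image_v2.*")):
        print(path, md5sum(path))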

hand annotation problems

Dear authors,

Thanks for your awesome work and the dataset!

After learning about your work, I have some questions about the hand annotation files and hope you can help me. Thanks a lot!

The file hand_param.pkl has the fields hand_pose, hand_shape, hand_tsl, and obj_transf. Could you please explain what hand_tsl and obj_transf stand for? Furthermore, if I want to use annotations from other datasets, how can I obtain the hand_tsl and obj_transf parameters given joint coordinates, camera parameters, hand poses, and hand shapes?
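
For context, here is a minimal sketch of how I currently read the file; the shape and meaning comments are my assumptions, not documented facts:

    import pickle
    import numpy as np

    # Field semantics below are assumptions, not confirmed by the authors.
    with open("hand_param.pkl", "rb") as f:
        anno = pickle.load(f)

    hand_pose = np.asarray(anno["hand_pose"])    # assumed: (48,) MANO axis-angle pose
    hand_shape = np.asarray(anno["hand_shape"])  # assumed: (10,) MANO betas
    hand_tsl = np.asarray(anno["hand_tsl"])      # assumed: (3,) root translation
    obj_transf = np.asarray(anno["obj_transf"])  # assumed: (4, 4) homogeneous object pose

    # If obj_transf is homogeneous, object-frame points would map like this:
    pts_obj = np.zeros((100, 3))  # placeholder object-frame points
    pts = pts_obj @ obj_transf[:3, :3].T + obj_transf[:3, 3]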

Looking forward to your reply!

Thank you!

Question about the ContactPose object model

Hi, recently I have been trying to use the dataset in a simulator. However, I found that the ContactPose data only contains point clouds, with no meshes, so the objects cannot be loaded into the simulator. I also tried to match the original ContactPose object models, but there is a slight rotation that is hard to match exactly. Could you release the processed object models related to ContactPose? Thank you so much!
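
As a stopgap I have considered reconstructing a surface from the point cloud with Open3D Poisson reconstruction (a sketch under that assumption; the input file name is hypothetical), but an official processed model would still be preferable:

    import open3d as o3d

    # Workaround sketch: reconstruct a surface from the released point cloud.
    # This is NOT the authors' processed model; geometry will differ slightly.
    pcd = o3d.io.read_point_cloud("contactpose_object.ply")  # hypothetical file
    pcd.estimate_normals()
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
    o3d.io.write_triangle_mesh("contactpose_object_mesh.obj", mesh)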

Missing Grasping Pose?

I would like to start by thanking you for providing the dataset on your GitHub repository. Your efforts in making this dataset available to the research community are highly appreciated.

However, upon reviewing the dataset, I noticed a discrepancy between the number of grasping poses reported in the paper and the number present in the dataset. The paper states that there should be 1800 different objects (100 real, 1700 virtual), while the dataset contains 1801 meshes, and only 1668 objects have associated grasping poses. I would like to check whether this is expected.
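
For reproducibility, this is roughly how I counted; the directory layout is assumed from paths discussed elsewhere in this tracker:

    from pathlib import Path

    # Layout assumed: shape/oakink_shape_v2/<category>/<object_id>/<grasp_id>/hand_param.pkl
    shape_root = Path("shape/oakink_shape_v2")

    objects_with_grasps = {
        pkl.parent.parent.name  # <object_id>
        for pkl in shape_root.glob("*/*/*/hand_param.pkl")
    }
    print("objects with at least one grasp:", len(objects_with_grasps))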

Thank you for your attention to this matter.

The data correspondence between shape and image

Hi,

I'm interested in understanding how to locate the RGB image that corresponds to a specific grasp pose in the shape dataset. For instance, I'm looking for the RGB image associated with the grasp pose in "./shape/oakink_shape_v2/apple/C90001/461d0e1f41/hand_param.pkl". I noticed that the source file is indicated as "pass1E/C90001_0004_0001_0007/2021-10-09-14-39-54/dom.pkl". However, it appears that I need the actual frame index to access the corresponding image in the directory "./image/stream_release_v2/C90001_0004_0001_0007/2021-10-09-14-39-54/". Could you provide guidance on how to proceed?
[screenshot: Snipaste_2024-03-11_19-39-39]
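
The mechanical part, turning the source field into the stream directory, seems straightforward (sketch below); it is the frame index within that directory that I cannot recover:

    import os

    # Parse the "source" field of a shape annotation. The layout is taken from
    # the paths quoted above; which frame to pick remains the open question.
    source = "pass1E/C90001_0004_0001_0007/2021-10-09-14-39-54/dom.pkl"
    _, seq_id, timestamp, _ = source.split("/")

    stream_dir = os.path.join("image", "stream_release_v2", seq_id, timestamp)
    if os.path.isdir(stream_dir):
        print(sorted(os.listdir(stream_dir))[:10])  # candidate frames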

Thanks a lot.

Empty array when visualizing some of the OakInk-Shape data

I am trying to run viz_oakink_shape.py, but for some categories, such as apple with intents [use, hold, liftup, handover] and the train split, I get the error below. What are some possible reasons for this? Perhaps there is no hand present for the grasp?

  File "viz_oakink_shape.py", line 53, in <module>
    main(arg)
  File "viz_oakink_shape.py", line 28, in main
    oi_shape = OakInkShape(category=category, intent_mode=intent, data_split=split)
  File "/home/snarasimhaswamy/OakInk/oikit/oi_shape/oi_shape.py", line 126, in __init__
    batch_hand_shape = torch.from_numpy(np.stack(batch_hand_shape))
  File "<__array_function__ internals>", line 180, in stack
  File "/opt/conda/envs/oakink/lib/python3.8/site-packages/numpy/core/shape_base.py", line 422, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack 
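
For what it's worth, the error itself only says that the filtered grasp list was empty; np.stack raises when given an empty sequence. A minimal reproduction:

    import numpy as np

    # If the (category, intent, split) filter matches no grasps, the list
    # handed to np.stack in oi_shape.py is empty and stack() raises this error.
    batch_hand_shape = []
    try:
        np.stack(batch_hand_shape)
    except ValueError as e:
        print(e)  # need at least one array to stack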

Dataset Annotation file format

Dear authors,

Congratulations on this awesome work; it is superb and solid.

Thanks for releasing the Dropbox versions of the dataset. I have some questions regarding the dataset format and annotations:

  1. Where can I find the object label or type for each sequence? Is it the first part of the sequence label? For example, if a video is labeled S100014_0003_0002, is the object used S100014? Also, what do 0003 and 0002 stand for? Are they intent labels or camera views?
  2. Where can I find the intent labels? Are intents labeled per frame or per segment?
  3. For OakInk-Image, I see hand annotations under anno/hand_v and anno/hand_j. What coordinate system are they in: world coordinates or camera coordinates?
  4. When there are two hands in a video (for example, when handing over and receiving objects), do you annotate hand poses for both hands or just a single hand?
  5. How are the hand pose files named? I see two pickle files for the same frame: for example, anno/hand_v/A01001__0003__0002__2021-09-26-20-02-08__0__6__1.pkl and anno/hand_v/A01001__0003__0002__2021-09-26-20-02-08__0__6__2.pkl. What is the difference between them? (See the sketch at the end of this post.)
  6. Where can I find the camera extrinsics for each video?

Can you please clarify the above questions?

Also, are you planning to release a README file explaining the annotation and file format?
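
Regarding question 5 above, here is a hypothetical decomposition of one filename on the double-underscore delimiter; every field label is a guess:

    # Hypothetical decomposition of a hand_v annotation filename; every field
    # label below is a guess, not documented fact.
    fname = "A01001__0003__0002__2021-09-26-20-02-08__0__6__1.pkl"
    parts = fname[: -len(".pkl")].split("__")
    guesses = ["object id?", "field 2?", "field 3?", "timestamp",
               "field 5?", "frame id?", "hand/file index?"]
    for guess, value in zip(guesses, parts):
        print(f"{guess:>18}: {value}")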

Object attributes inconsistent with the paper?

I am searching for the object attributes. However, in metaV2.zip I can only find attributes such as "handled", without attributes such as "loosen". Where is the attribute information for every object claimed in Appx. Table 7?
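
For reference, this is the kind of scan I used to enumerate the attributes present in metaV2 (the JSON layout and the "attribute" key name are assumptions on my part):

    import json
    from collections import Counter
    from pathlib import Path

    # Enumerate the attribute values metaV2 actually contains; the JSON
    # layout and the "attribute" key name are assumptions.
    attrs = Counter()
    for meta_file in Path("metaV2").rglob("*.json"):
        meta = json.loads(meta_file.read_text())
        if isinstance(meta, dict):
            for attr in meta.get("attribute", []):
                attrs[str(attr)] += 1
    print(attrs.most_common())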

How to get the MANO hand faces/triangles for the image data?

Dear authors,

I am trying to obtain the hand faces/triangles for the OakInk-Image part. I see that you already have code that does this for OakInk-Shape. How can I obtain the faces for the image part? Should I just add a simple class method that passes the MANO hand shape and hand pose to the ManoLayer and accesses ManoLayer.th_faces?
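
A minimal sketch of what I have in mind, assuming the manotorch ManoLayer used elsewhere in oikit (the assets path is an assumption about the local setup):

    from manotorch.manolayer import ManoLayer

    # th_faces is the fixed triangle index array of the MANO template; it
    # does not depend on pose or shape, so one copy serves every frame of
    # the image data.
    mano_layer = ManoLayer(mano_assets_root="assets/mano")  # path assumed
    faces = mano_layer.th_faces  # shape (1538, 3)
    print(faces.shape)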

Thanks
