Comments (7)
Please see the get_obs function in the parent env (https://github.com/haosulab/ManiSkill2/blob/fc08823bf96791946591a508f336d724fa7cad26/mani_skill2/envs/sapien_env.py#L255) and in the child env for more details. The observation consists of two parts: get_obs_agent() (agent proprioceptive states) and get_obs_extra() (other environment-specific states). Different environments have different state dimensions. The state observation mode has more state dimensions than the visual (rgbd/pointcloud) observation modes, since the former contains ground-truth object pose information in get_obs_extra(), while the latter does not.
If you are not using a wrapper (e.g., ManiSkill2-Learn) and you are under a visual observation mode (rgbd or pointcloud), printing env.get_obs() will show a dictionary whose keys indicate the meaning of each entry.
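As a minimal sketch of inspecting those keys, the helper below recursively flattens a nested observation dict into "a/b" paths; the sample_obs here is only an illustrative stand-in, since the actual keys and shapes depend on the environment and ManiSkill2 version:

```python
import numpy as np

def flatten_obs_keys(obs, prefix=""):
    """Recursively collect the nested keys of an observation dict as 'a/b' paths."""
    keys = []
    for key, value in obs.items():
        if isinstance(value, dict):
            keys.extend(flatten_obs_keys(value, prefix + key + "/"))
        else:
            keys.append(prefix + key)
    return keys

# Illustrative stand-in for env.get_obs() under a visual observation mode.
sample_obs = {
    "agent": {"qpos": np.zeros(9), "qvel": np.zeros(9)},
    "extra": {"tcp_pose": np.zeros(7), "goal_pos": np.zeros(3)},
}
print(flatten_obs_keys(sample_obs))
```

With a real env, you would pass env.get_obs() in place of sample_obs.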
If you are using the state observation mode, then for PickCube-v0 in particular, the state dimension is 51, which contains:
(1) from get_obs_agent() (see https://github.com/haosulab/ManiSkill2/blob/fc08823bf96791946591a508f336d724fa7cad26/mani_skill2/agents/base_agent.py#L170): agent qpos (state[:9]), qvel (state[9:18]), agent base pose (state[18:25]), and controller state (empty); these dimensions are robot-specific;
(2) from get_obs_extra(): agent tcp pose (state[25:32]), goal pos (state[32:35]), tcp-to-goal pos (state[35:38]), cube pose (state[38:45]), tcp-to-obj pos (state[45:48]), obj-to-goal pos (state[48:51]).
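The breakdown above can be written as named slices, which makes it easy to pull individual components out of the flat 51-dim state vector (a sketch; the placeholder state stands in for an actual observation):

```python
import numpy as np

# Named slices for the 51-dim PickCube-v0 state, following the breakdown above.
PICKCUBE_STATE_SLICES = {
    "qpos":            slice(0, 9),
    "qvel":            slice(9, 18),
    "base_pose":       slice(18, 25),
    "tcp_pose":        slice(25, 32),
    "goal_pos":        slice(32, 35),
    "tcp_to_goal_pos": slice(35, 38),
    "cube_pose":       slice(38, 45),
    "tcp_to_obj_pos":  slice(45, 48),
    "obj_to_goal_pos": slice(48, 51),
}

state = np.arange(51, dtype=np.float32)  # placeholder for a real observation
parts = {name: state[s] for name, s in PICKCUBE_STATE_SLICES.items()}

# Sanity check: the slices tile the whole 51-dim vector.
assert sum(s.stop - s.start for s in PICKCUBE_STATE_SLICES.values()) == 51
```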
If you are using the ManiSkill2-Learn wrapper with any of the visual observation modes (which I guess is your case), see https://github.com/haosulab/ManiSkill2-Learn/blob/83dfe26c73b6ce6b0388a0fa07493f340e36dd44/maniskill2_learn/env/wrappers.py#L235 for the environment wrapper. The 38-dimensional "state" output from the wrapper contains agent qpos (state[:9]), qvel (state[9:18]), agent base_pose (state[18:25]), agent tcp_pose (state[25:32]), goal pos (state[32:35]), and tcp_to_goal_pos (state[35:38]).
from maniskill.
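The 38-dim wrapper state can be sliced the same way (a sketch, with a placeholder vector standing in for the wrapper's output):

```python
import numpy as np

# Named slices for the 38-dim ManiSkill2-Learn wrapper state described above.
WRAPPER_STATE_SLICES = {
    "qpos":            slice(0, 9),
    "qvel":            slice(9, 18),
    "base_pose":       slice(18, 25),
    "tcp_pose":        slice(25, 32),
    "goal_pos":        slice(32, 35),
    "tcp_to_goal_pos": slice(35, 38),
}

state = np.arange(38, dtype=np.float32)  # placeholder for the wrapper output
parts = {name: state[s] for name, s in WRAPPER_STATE_SLICES.items()}
assert sum(s.stop - s.start for s in WRAPPER_STATE_SLICES.values()) == 38
```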
Why does "agent qpos" have 9 dimensions, and what does each dimension represent?
from maniskill.
The first 7 dimensions are the Panda arm joint positions; the last 2 dimensions are the Panda gripper finger positions.
from maniskill.
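In code, that split looks like the following (a sketch; the placeholder qpos stands in for state[:9]):

```python
import numpy as np

qpos = np.arange(9, dtype=np.float32)  # placeholder for state[:9]
arm_joint_pos = qpos[:7]  # 7 Panda arm joint angles (radians)
gripper_pos = qpos[7:]    # 2 gripper finger joint positions
assert arm_joint_pos.shape == (7,) and gripper_pos.shape == (2,)
```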
What is the meaning of base_pose and tcp_pose?
from maniskill.
Pose = concat(xyz position, rotation); the rotation is represented as a quaternion in our state space.
base_pose = pose of the robot base
tcp_pose = pose of the robot tool center point (the midpoint between the two gripper fingers)
from maniskill.
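So each pose occupies 7 dimensions, which can be split as below. Note that SAPIEN (the simulator underlying ManiSkill2) typically stores quaternions in (w, x, y, z) order, but verify the convention against your version:

```python
import numpy as np

def split_pose(pose):
    """Split a 7-dim pose vector into xyz position and quaternion rotation."""
    pose = np.asarray(pose, dtype=np.float64)
    assert pose.shape == (7,)
    return pose[:3], pose[3:]

# Identity pose at the origin, assuming (w, x, y, z) quaternion order.
xyz, quat = split_pose([0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
```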
May I know the respective value ranges for these 38 dimensions?
from maniskill.
States corresponding to robot qpos are bounded by joint angle ranges. Other dims are unbounded.
from maniskill.
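As a rough illustration of the bounded dimensions, one can check qpos against the robot's joint limits. The limit values below are approximate Franka Panda limits and are an assumption here; verify them against the robot's URDF before relying on them:

```python
import numpy as np

# Approximate Franka Panda joint limits: radians for the 7 arm joints,
# meters for the 2 gripper fingers. ASSUMED values; check your URDF.
QPOS_LOW = np.array([-2.8973, -1.7628, -2.8973, -3.0718, -2.8973, -0.0175, -2.8973, 0.0, 0.0])
QPOS_HIGH = np.array([2.8973, 1.7628, 2.8973, -0.0698, 2.8973, 3.7525, 2.8973, 0.04, 0.04])

def qpos_in_bounds(qpos):
    """Return True if every qpos dimension lies within its joint limits."""
    qpos = np.asarray(qpos)
    return bool(np.all((qpos >= QPOS_LOW) & (qpos <= QPOS_HIGH)))
```

The other dimensions (poses, relative positions) have no such fixed bounds.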