
xr-egopose's People

Contributors

denistome, hbadino


xr-egopose's Issues

Question about how to recover a 3D point cloud from an RGB-D image.

Thank you for sharing your great work.

I noticed that your synthetic dataset contains depth images, and I would like to use the depth information for my research.

However, the depth images are normalized to [0, 255], so it is difficult to reconstruct XYZ information from your RGB-D data.

Could you tell me how to recover 3D point clouds from the RGB-D images? I already know the fisheye lens parameters you mentioned in a previous issue.
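
(For context, a minimal back-projection sketch. It assumes a generic equidistant fisheye model and a hypothetical linear denormalization of the 8-bit depth to a metric range [d_min, d_max]; the dataset's actual lens model and depth encoding are not documented in this thread, so every parameter below is an assumption.)

# purely illustrative: invert a generic equidistant fisheye model to get a
# unit ray per pixel, then scale by metric depth; d_min, d_max and the
# "depth = distance along the ray" reading are assumptions
import numpy as np

def unproject_equidistant(depth_png, f, cx, cy, d_min, d_max):
    """depth_png: (H, W) uint8 image; returns (H, W, 3) camera-space points."""
    h, w = depth_png.shape
    u, v = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    rho = np.maximum(np.sqrt(u ** 2 + v ** 2), 1e-12)  # radial pixel distance
    theta = rho / f                                    # equidistant model: rho = f * theta
    ray = np.stack([np.sin(theta) * u / rho,
                    np.sin(theta) * v / rho,
                    np.cos(theta)], axis=-1)           # unit ray per pixel
    dist = d_min + (d_max - d_min) * depth_png.astype(np.float32) / 255.0
    return ray * dist[..., None]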

Question about pts3d_fisheye

I am trying to recover the 3D pose (reference screenshot omitted) using "pts3d_fisheye" from "female_001_a_a_000001.json". This is my code:

import json
import matplotlib.pyplot as plt

# load the per-frame annotation
with open('female_001_a_a_000001.json') as f:
    temp_json = json.load(f)

# pts3d_fisheye is stored as [x coords, y coords, z coords]
temp_3d_coords = temp_json['pts3d_fisheye']
xs, ys, zs = temp_3d_coords[0], temp_3d_coords[1], temp_3d_coords[2]

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs, ys, zs, marker='o', s=15)

# label each joint with its index
for i, (x, y, z) in enumerate(zip(xs, ys, zs)):
    ax.text(x, y, z, str(i))

ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()

I am wondering why the resulting 3D figure does not look like the reference visualizations (screenshots omitted).

2D keypoint visibility

Hi, is there any way to know whether, for example, a foot keypoint is occluded by a body part, i.e. to derive a "visibility" label from the 2D/3D keypoint information? Thanks.
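
(Not answered in this thread, but one common workaround, sketched below under loud assumptions: treat a joint as occluded when the rendered depth at its pixel is noticeably closer to the camera than the joint itself. The metric-depth map and the tolerance eps are hypothetical inputs.)

# hypothetical visibility check: compare each projected joint's distance
# against the rendered depth map; eps absorbs rendering noise
import numpy as np

def visibility_mask(pts3d, pts2d, depth_m, eps=0.05):
    """pts3d: (J, 3) camera-space joints; pts2d: (J, 2) pixel coords;
    depth_m: (H, W) metric depth map; returns (J,) bool visibility."""
    h, w = depth_m.shape
    u = np.clip(np.round(pts2d[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(pts2d[:, 1]).astype(int), 0, h - 1)
    joint_dist = np.linalg.norm(pts3d, axis=1)  # distance from the camera origin
    return depth_m[v, u] >= joint_dist - eps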

Is the fine-grained action label annotated in this dataset?

Thanks for sharing your great work!
I notice that "Each of those action categories is the collection of many different and specific actions.
E.g. Gaming includes Boxing, Shooting Gun, Playing Golf, Playing Baseball just to cite a few."

I ran the demo.py file, and it seems the action label is one of the nine action categories mentioned in the table. May I ask if there is a fine-grained action label for each frame, say, boxing instead of gaming?

Is real world test data available?

Hi,

I cannot find the ~10K xR-EgoPose^{R} frames in the download script.
May I ask if they are available now? If not, do you have a plan to release them?

Thanks,
Zhe

Understanding of camera parameters

Hi,
I see that you have provided camera parameters in the json file: fov, trans and rot.

I guess rot contains Euler angles, but what is their axis order?

Another question: how can we transform pts3d_fisheye into pts2d_fisheye? Could you please provide demo code?

Thanks~!
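
(While waiting for an official answer, here is a sketch of a generic equidistant fisheye projection, just to illustrate the pts3d -> pts2d step; the dataset's actual lens model may differ, and f, cx, cy are hypothetical intrinsics one would derive from the given fov and image size.)

# generic equidistant fisheye projection (an assumption, not the
# dataset's documented model): rho = f * theta
import numpy as np

def project_equidistant(pts3d, f, cx, cy):
    """pts3d: (N, 3) camera-space points; returns (N, 2) pixel coords."""
    x, y, z = pts3d[:, 0], pts3d[:, 1], pts3d[:, 2]
    r = np.maximum(np.sqrt(x ** 2 + y ** 2), 1e-12)
    theta = np.arctan2(r, z)      # angle from the optical axis
    scale = f * theta / r
    return np.stack([cx + scale * x, cy + scale * y], axis=-1)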

Question on worldp

Thanks for contributing this great work. I'm a bit confused about how to use worldp as the 3D location of each pixel. Could you explain how this map was computed and what the values at each pixel represent? Thanks a lot.

Question about downloading synthetic dataset

Hi. Thank you for sharing your great work.

I have a question about "download.sh". I am currently downloading your synthetic dataset, but I got errors like the ones below.

download.sh

cat $s.tar.gz.part?? | unpigz -p 32 | tar -xvC ./

Error

unpigz: skipping: <stdin>: corrupted -- incomplete deflate data
unpigz: abort: internal threads error
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: female_002_a_a/env_003/cam_down/rot: Cannot create symlink to ‘../../env_001/cam_down/rot’: Operation not supported
tar: female_002_a_a/env_002/cam_down/rot: Cannot create symlink to ‘../../env_001/cam_down/rot’: Operation not supported
tar: Error is not recoverable: exiting now

I think some "TrainSet/*/env_00?/cam_down/rot" folders might not have been compressed correctly.

However, only the two folders below were extracted correctly:
・TrainSet/female_001_a_a/env_001/cam_down/rot
・TrainSet/female_001_a_a/env_002/cam_down/rot

My "pigz" command version is 2.4.

Could you tell me how to extract your synthetic dataset correctly?
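
(One way to narrow this down before re-downloading everything: stream-decompress the concatenated parts in Python and check whether the gzip stream itself is intact, as sketched below; the set name is hypothetical, and the sketch assumes the parts sort lexicographically, as produced by split. Note also that the "Cannot create symlink ... Operation not supported" lines usually point at a destination filesystem without symlink support, e.g. exFAT/NTFS or some network mounts, rather than at the archive itself.)

# integrity check for a multi-part gzip archive (a sketch; assumes a
# single-member gzip stream, which pigz normally produces)
import glob
import zlib

s = 'female_002_a_a'  # hypothetical set name
parts = sorted(glob.glob(s + '.tar.gz.part??'))

d = zlib.decompressobj(zlib.MAX_WBITS | 16)  # 16 => expect a gzip header
for p in parts:
    with open(p, 'rb') as f:
        while True:
            buf = f.read(1 << 20)
            if not buf:
                break
            d.decompress(buf)  # raises zlib.error on corrupted data

print('stream complete' if d.eof else 'stream truncated -- re-download the parts')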

Missing training data.

Hi. Thanks for the code and the dataset.

I was going through the training data, and it seems some rgba and json files are missing; by missing I mean that the rgba-json naming correspondence is broken.
The missing files are:
female_003_a_a_002380.json
female_003_a_a.rgba.005000.png
male_003_f_s_000303.json
male_003_f_s.rgba.004981.png
male_006_f_s_000368.json
male_006_f_s.rgba.004981.png
male_008_f_s_000332.json
male_008_f_s.rgba.004981.png
male_010_f_s_000628.json
male_010_f_s.rgba.004981.png
male_011_f_s_000805.json
male_011_f_s.rgba.004981.png

Could you please let me know where I can find them?
Many thanks!
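
(For reference, a sketch of the kind of consistency check that surfaces such gaps; the TrainSet layout and the two naming patterns are assumptions read off the file names above.)

# report frames that exist on only one side of the rgba/json pairing
import glob
import os
import re
from collections import defaultdict

json_frames = defaultdict(set)
png_frames = defaultdict(set)

for p in glob.glob('TrainSet/**/*.json', recursive=True):
    m = re.match(r'(.+)_(\d+)\.json$', os.path.basename(p))
    if m:
        json_frames[m.group(1)].add(m.group(2))

for p in glob.glob('TrainSet/**/*.rgba.*.png', recursive=True):
    m = re.match(r'(.+)\.rgba\.(\d+)\.png$', os.path.basename(p))
    if m:
        png_frames[m.group(1)].add(m.group(2))

for seq in sorted(set(json_frames) | set(png_frames)):
    for fr in sorted(png_frames[seq] - json_frames[seq]):
        print('%s_%s.json is missing' % (seq, fr))
    for fr in sorted(json_frames[seq] - png_frames[seq]):
        print('%s.rgba.%s.png is missing' % (seq, fr))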

Results on Human3.6M

Thanks for sharing your great work!
I notice that your proposed method is also evaluated on the Human3.6M dataset. Do you use all four camera views of the subjects to train and test your model, or only the front view?

Question about ground truth heatmap & embedding size

Hi. First of all, thank you for sharing your great work.

I have two questions about your proposed method as described in the paper.

  1. In Eq. (1), L_2D is calculated as the mean squared error between heatmaps. But how can one get the ground-truth heatmaps? As far as I understand, heatmaps encode per-joint probabilities and are usually a byproduct of pose estimation.

I think you may, for instance, assume a normal distribution around the ground-truth 3D joints and project it to generate the ground-truth heatmaps. I guess your approach is similar, since you mention in the paper "... the 3D lifting module can be trained independently using 3D mocap data and its projected heatmaps." (p. 7732, Sec. 5 Architecture, second paragraph)

I wonder if my guess is right, and whether it is okay to use a fixed size (standard deviation) to generate the ground-truth heatmaps (a sketch of this construction follows after the questions).

  2. Did you conduct a comparative study on the embedding dimension? It seems quite small (20-D).

I wonder whether changing the embedding dimension has a serious impact on performance and/or inference speed.
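
(On question 1, a minimal sketch of the usual construction, assuming, as guessed above, a fixed-sigma isotropic Gaussian rendered around each projected 2D joint; the actual sigma used in the paper is not stated here.)

# render one fixed-sigma Gaussian heatmap per projected 2D joint
import numpy as np

def make_gt_heatmaps(joints_2d, height, width, sigma=2.0):
    """joints_2d: (J, 2) array of (u, v) pixel coords; returns (J, H, W)."""
    grid_u, grid_v = np.meshgrid(np.arange(width), np.arange(height))
    maps = np.empty((len(joints_2d), height, width), dtype=np.float32)
    for j, (u, v) in enumerate(joints_2d):
        d2 = (grid_u - u) ** 2 + (grid_v - v) ** 2
        maps[j] = np.exp(-d2 / (2.0 * sigma ** 2))
    return maps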

How to understand the joint rotations (Euler angles)

If I understand correctly, the rotation angles are Euler angles, normally represented as
Rotation about the x-axis = roll angle = α
Rotation about the y-axis = pitch angle = β
Rotation about the z-axis = yaw angle = γ

"rot": [0.2110171152869712, 0.7265798018474156, 0.03091111571684459]
How do I get the order of alpha, beta and gamma, or roll, pitch and yaw?
Is it safe to assume that the first element of the above array is roll, the second pitch, and the third yaw?

Also, is there documentation available for the dataset to understand it better?
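
(The convention is not documented in this thread, but candidate orders can be compared empirically; below is a sketch using SciPy, where the order strings and the radians assumption are things to verify against the provided pts3d/pts2d annotations.)

# compare candidate Euler-angle conventions for the given rot triplet
# (lowercase order strings = extrinsic rotations in SciPy; angles are
# assumed to be in radians, which the magnitudes suggest but do not prove)
import numpy as np
from scipy.spatial.transform import Rotation

rot = [0.2110171152869712, 0.7265798018474156, 0.03091111571684459]

for order in ('xyz', 'zyx', 'zxy'):
    R = Rotation.from_euler(order, rot).as_matrix()
    print(order)
    print(np.round(R, 4))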
