
freihand's People

Contributors

blgene, zimmerm


freihand's Issues

baseline

Hi @zimmerm ,

What is the detailed structure of the MANO CNN? Is it exactly the same as [15], with a discriminator and a regression part after the encoder, or is it a simple encoder? Could you provide some more detail on that?

Also, are the keypoint-error numbers on the leaderboard in cm?

needs to get results on the FreiHAND dataset

Hello, I have a new algorithm that I need to evaluate on the FreiHAND dataset. However, "Your request to participate in this challenge has been received and a decision is pending." has been displayed for a week. Could you please approve the request? Thank you very much!

TEST on my own images

Thanks for your great work!
How can I visualize the MANO results on my own RGB images?
Looking forward to your reply!

Context information

This is a multi-camera dataset to which several subjects contributed. Is there any information that ties it all together? Can one tell which frames correspond to a single performance, for a given subject, across all cameras?

21 points data labelling issue

I am new to machine learning. For the keypoint annotations of the hand joints, are the points given in training_xyz.json standardized (z-score normalized)?
I am trying to plot the points on the images, but I don't know what convention was used during labelling.
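(For context: as far as I can tell, the points in training_xyz.json are metric 3D coordinates in the camera frame, not z-score normalized, so plotting them only takes a pinhole projection with the per-frame intrinsic matrix K that ships with the dataset. A minimal sketch, with a made-up K:)

```python
import numpy as np

def project_points(xyz, K):
    """Project metric 3D camera-frame points to 2D pixel coordinates."""
    uv = np.asarray(xyz) @ np.asarray(K).T  # apply the intrinsics
    return uv[:, :2] / uv[:, 2:]            # perspective divide by depth

# made-up intrinsics: focal length 500 px, principal point (112, 112)
K = np.array([[500.0,   0.0, 112.0],
              [  0.0, 500.0, 112.0],
              [  0.0,   0.0,   1.0]])
# a point on the optical axis, 0.5 m from the camera, lands on the principal point
uv = project_points(np.array([[0.0, 0.0, 0.5]]), K)
```

The resulting uv can be scattered directly onto the image with matplotlib.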

GT 3D joints and vertices of evaluation set

Hi,
I downloaded the dataset, but I could not find the xyz and verts annotations for the evaluation set.
I want to test my model offline; could you provide these annotations?

hardware set

I would like to ask why you chose two different camera models. What is the difference between the Basler acA800-510uc and the Basler acA1300-200uc?
Looking forward to your reply, thanks!

Codalab Malfunctioning

The FreiHAND competition site is malfunctioning.
I submitted multiple result files (*.zip) and none of them have been processed for hours.
This has been happening since yesterday.

Could you look into it? Please see the attachment for details.
Thank you.

Codalab competition

Hi,
I applied to join the FreiHAND competition, but my request was never reviewed.
Could you please approve it as soon as possible?
Thank you very much!

Have you tried to use manopth?

Hi,

I'm trying to use your dataset, but I ran into some problems.
I visualized the mesh with manopth using the provided ground-truth MANO parameters, but got a very strange result; see the attached images. I rotated the mesh manually in MeshLab, so the global rotation may be off, but even ignoring the global rotation, the overall hand pose is clearly wrong.

Have you seen this problem? Have you used manopth to check that your MANO parameters are correct?
What I did was split the 61-dimensional MANO parameter vector (from training_mano.json) into [:48] as the pose parameters and [48:58] as the shape parameters, then feed them to the manopth layer. I checked that training_xyz.json gives good joint coordinates.
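(For concreteness, the split described above looks like this; the [pose | shape | extra] layout of the 61-dimensional vector is the poster's reading, with the trailing three values commonly interpreted as a 2D root position plus a scale, so treat the comments as assumptions:)

```python
import numpy as np

# hypothetical stand-in for one 61-dim row of training_mano.json
params = np.arange(61, dtype=np.float32)

pose  = params[:48]    # 3 global-rotation + 45 articulation values (axis-angle)
shape = params[48:58]  # 10 shape coefficients
extra = params[58:]    # presumably 2D root position (2 values) and scale (1 value)
```

Under this reading, parameter index 60 is the per-sample scale mentioned in other issues.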

(attached: screenshots of the incorrectly posed mesh, 2020-01-02)

What do the scale values refer to?

I would like to know whether the scale values given in train_scales.json are the scale referred to in the paper, which is used to compute root-relative depth. If not, what do they refer to? Thanks in advance.

Steps for pre-processing of the image

This is a great project for new learners like me. I was able to run it successfully, and I have gone through your paper; it is impressive work.
Could you elaborate on the steps to build one's own dataset?
In particular, what preprocessing is applied to the images before they are fed to the MANO model?

root recover

How do I generate xyz_root from uv_root on the evaluation set?

I have noticed that xyz_root can be recovered from uv_root using the functions in your code (https://github.com/lmb-freiburg/freihand/blob/master/utils/model.py#L58) together with the 'scale' parameter, which is part of the MANO parameters (index 60). It is clear that this 'scale' differs from the per-sample bone length.

I would like to know the exact meaning of 'scale' (MANO parameter 60), and how to compute xyz_root from uv_root on the evaluation set, where the 'scale' parameter is not provided.
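(For what it's worth, once a metric root depth z is known, lifting uv_root back to 3D is plain pinhole back-projection with the per-frame intrinsics K. The sketch below assumes uv_root is in pixel coordinates and z is the metric root depth; it says nothing about how 'scale' itself is defined:)

```python
import numpy as np

def backproject(uv, z, K):
    """Lift a 2D pixel (u, v) at metric depth z to a 3D camera-frame point."""
    u, v = uv
    return np.linalg.inv(K) @ (np.array([u, v, 1.0]) * z)

# made-up intrinsics: focal length 500 px, principal point (112, 112)
K = np.array([[500.0,   0.0, 112.0],
              [  0.0, 500.0, 112.0],
              [  0.0,   0.0,   1.0]])
# the principal point lifts back onto the optical axis
xyz_root = backproject((112.0, 112.0), 0.5, K)
```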

It seems that the root-joint precision influences the final result (aligned mean keypoint error) on CodaLab when using the Latent 2.5D method [1]. So either a precise root joint must be generated for each evaluation sample, or the alignment on CodaLab should remove the influence of xyz_root.

Thanks for any reply!!!

[1] Iqbal et al., Hand Pose Estimation via Latent 2.5D Heatmap Regression.

How to participate in FreiHAND challenge

Hello, I want to evaluate my method on the evaluation set, but I have not received a reply in the few days since I applied to participate in the CodaLab competition. How long should I expect to wait? Thanks for your help.

Issue about MANO parameter

Thank you for sharing the code and dataset. It is very helpful for my research.

I have a question about the MANO annotation file.

I used the first 48 of the 61 parameters as the MANO pose parameters and visualized one MANO sample together with its image.

But they do not match, and the same happens for several other samples.

How can I solve this problem?

Thank you.

(screenshot of the mismatched mesh attached)

Submission error: 'NoneType' is not iterable

Hi, thanks for your excellent work. When I submit a pred.zip file following the README.md, I get this error:

Traceback (most recent call last):
  File "/worker/worker.py", line 330, in run
    if input_rel_path not in bundles:
TypeError: argument of type 'NoneType' is not iterable

Could you help me to solve it?

bbox annotation

Thank you for your excellent research.

I am looking at the annotations of the freihand dataset.

When I print the annotations, id and image_id increase in order. However, the bbox repeats in groups of four, as shown below:
annotations 0 to 3 share one bbox, annotations 4 to 7 share another, and so on.

{'id': 0, 'image_id': 0, 'category_id': 1, 'bbox': [53.7150993347168, 66.13543701171875, 93.31415557861328, 107.55323028564453], 'area': 10036.238863857056, 'is_crowd': 0}
{'id': 1, 'image_id': 1, 'category_id': 1, 'bbox': [53.7150993347168, 66.13543701171875, 93.31415557861328, 107.55323028564453], 'area': 10036.238863857056, 'is_crowd': 0}
{'id': 2, 'image_id': 2, 'category_id': 1, 'bbox': [53.7150993347168, 66.13543701171875, 93.31415557861328, 107.55323028564453], 'area': 10036.238863857056, 'is_crowd': 0}
{'id': 3, 'image_id': 3, 'category_id': 1, 'bbox': [53.7150993347168, 66.13543701171875, 93.31415557861328, 107.55323028564453], 'area': 10036.238863857056, 'is_crowd': 0}
{'id': 4, 'image_id': 4, 'category_id': 1, 'bbox': [61.605167388916016, 67.90557861328125, 120.14663696289062, 83.92615509033203], 'area': 10083.445287329378, 'is_crowd': 0}
{'id': 5, 'image_id': 5, 'category_id': 1, 'bbox': [61.605167388916016, 67.90557861328125, 120.14663696289062, 83.92615509033203], 'area': 10083.445287329378, 'is_crowd': 0}
{'id': 6, 'image_id': 6, 'category_id': 1, 'bbox': [61.605167388916016, 67.90557861328125, 120.14663696289062, 83.92615509033203], 'area': 10083.445287329378, 'is_crowd': 0}
{'id': 7, 'image_id': 7, 'category_id': 1, 'bbox': [61.605167388916016, 67.90557861328125, 120.14663696289062, 83.92615509033203], 'area': 10083.445287329378, 'is_crowd': 0}
{'id': 8, 'image_id': 8, 'category_id': 1, 'bbox': [39.174591064453125, 71.41889190673828, 132.0823211669922, 112.26602935791016], 'area': 14828.357745794463, 'is_crowd': 0}
{'id': 9, 'image_id': 9, 'category_id': 1, 'bbox': [39.174591064453125, 71.41889190673828, 132.0823211669922, 112.26602935791016], 'area': 14828.357745794463, 'is_crowd': 0}
{'id': 10, 'image_id': 10, 'category_id': 1, 'bbox': [39.174591064453125, 71.41889190673828, 132.0823211669922, 112.26602935791016], 'area': 14828.357745794463, 'is_crowd': 0}
{'id': 11, 'image_id': 11, 'category_id': 1, 'bbox': [39.174591064453125, 71.41889190673828, 132.0823211669922, 112.26602935791016], 'area': 14828.357745794463, 'is_crowd': 0}

Is this okay?


Update: I found the answer myself. The repeated entries correspond to the augmented versions of the same capture.
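(If the layout really is four consecutive annotations per capture, the original green-screen frame plus three augmented versions, as the printout suggests, the grouping can be recovered with integer arithmetic. The constant 4 comes from the observed repetition, not from an official spec:)

```python
VERSIONS_PER_CAPTURE = 4  # inferred from the repeating bboxes above

def capture_and_version(ann_id):
    """Split an annotation id into (capture index, version index)."""
    return divmod(ann_id, VERSIONS_PER_CAPTURE)

# ids 0-3 map to capture 0, ids 4-7 to capture 1, and so on
```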

Computation of the F Score

Hi!

I have a quick question about how precision and recall are computed for the F score in the evaluation. Your paper refers to [1] for the details, but in pose estimation each reconstructed point has an exact match in the ground truth, whereas in the general 3D reconstruction task of [1] the correspondence between the reconstructed and ground-truth point clouds is unknown. I assume that is why [1] defines distances via closest points, as in the Chamfer distance. For the FreiHAND evaluation, do you follow [1] exactly, or do you define distances using the known correspondences?

Thanks!

[1] Tanks and Temples: Benchmarking Large-Scale Scene Reconstruction
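(For reference, the closest-point variant from [1], with no assumed correspondence, can be sketched as below. Whether the FreiHAND evaluation follows this or uses the known matches is exactly the question above, so this is only the [1]-style reading:)

```python
import numpy as np

def f_score(pred, gt, thresh):
    """F score at distance threshold thresh, closest-point style as in [1]."""
    # pairwise distances between every predicted and ground-truth point
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = np.mean(d.min(axis=1) <= thresh)  # pred points near some GT point
    recall = np.mean(d.min(axis=0) <= thresh)     # GT points covered by a pred point
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With known correspondences, d.min(...) would simply be replaced by the per-point distance to the matched point.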

Submission failed: uploaded pred.zip to CodaLab but got no evaluation score

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
/opt/conda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Exception:
Traceback (most recent call last):
File "/opt/conda/lib/python2.7/site-packages/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/opt/conda/lib/python2.7/site-packages/pip/commands/install.py", line 335, in run
wb.build(autobuilding=True)
File "/opt/conda/lib/python2.7/site-packages/pip/wheel.py", line 749, in build
self.requirement_set.prepare_files(self.finder)
File "/opt/conda/lib/python2.7/site-packages/pip/req/req_set.py", line 380, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/opt/conda/lib/python2.7/site-packages/pip/req/req_set.py", line 620, in _prepare_file
session=self.session, hashes=hashes)
File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 821, in unpack_url
hashes=hashes
File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 659, in unpack_http_url
hashes)
File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 853, in _download_http_url
stream=True,
File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 488, in get
return self.request('GET', url, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/pip/download.py", line 386, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 475, in request
resp = self.send(prep, **send_kwargs)
File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/sessions.py", line 596, in send
r = adapter.send(request, **kwargs)
File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/cachecontrol/adapter.py", line 47, in send
resp = super(CacheControlAdapter, self).send(request, **kw)
File "/opt/conda/lib/python2.7/site-packages/pip/_vendor/requests/adapters.py", line 497, in send
raise SSLError(e, request=request)
SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:661)
You are using pip version 9.0.1, however version 21.3.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Traceback (most recent call last):
File "/tmp/codalab/tmpTReBmp/run/program/eval.py", line 20, in
import open3d as o3d
ImportError: No module named open3d

Is the competition still active?

Hi, I want to evaluate my method on the evaluation set, but I did not get any reply in the days after I requested to participate in the CodaLab competition. Is the competition still active?

submission to v3 competition leaderboard failed

Traceback (most recent call last):
File "/worker/worker.py", line 656, in run
put_blob(stdout_url, stdout_file)
File "/worker/worker.py", line 207, in put_blob
'x-ms-version': '2018-03-28',
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 99, in put
return request('put', url, data=data, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 335, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 438, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
raise ConnectionError(e)
ConnectionError: HTTPSConnectionPool(host='miniodis-rproxy.lisn.upsaclay.fr', port=443): Max retries exceeded with url: /py3-private/submission_stdout/a37ca71c-63fe-4cce-bebb-59eab78d448b/stdout.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=EASNOMJFX9QFW4QIY4SL%2F20230201%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230201T234153Z&X-Amz-Expires=86400&X-Amz-SignedHeaders=host&X-Amz-Signature=f045368d6a73a2b11b76a854d1aecb2ecd8056758df656e51ae5e958a5db45bc (Caused by : [Errno 110] Connection timed out)

Camera coordinates

Hi everyone,
I have used the training_mano annotations with the PyTorch3D renderer (https://github.com/facebookresearch/pytorch3d) and a PyTorch implementation of the MANO hand model (https://github.com/otaheri/MANO), but I am having trouble setting up the camera parameters to reproduce the same hand pose in 2D image space. I think I need the camera pose with respect to the world coordinates. Could anyone help me find this information? Any documentation on the camera extrinsics used for each image in the dataset would be helpful.
Thanks.

About BVH export

Thanks for your excellent work. Could you provide code for converting the data to .bvh files?

Can you publish the evaluation code?

Hi, I want to test my results on your CodaLab page, but something seems to be wrong and I cannot get a score. Could you publish the evaluation code so that I can compute my results locally?

How can I run the code in Python 3?

I run into a lot of errors under Python 3, and I am currently blocked on an error in mano_loader.py where it tries to load ./data/MANO_RIGHT.pkl and fails with an encoding problem:

        #smpl_data = pickle.load(open(fname_or_dict,'r'))
        smpl_data = pickle.load(open(fname_or_dict,'rb'))

The change above does not fix it. Can you help?
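(Not an official answer, but the usual culprit with SMPL/MANO .pkl files is that they were pickled by Python 2, so Python 3 needs an explicit encoding='latin1' in addition to the 'rb' mode. A self-contained sketch, using an in-memory pickle as a stand-in for MANO_RIGHT.pkl:)

```python
import io
import pickle

def load_py2_pickle(fileobj):
    """Load a pickle written by Python 2 under Python 3.

    encoding='latin1' decodes Python 2 byte strings one-to-one, which is the
    commonly used workaround for SMPL/MANO model files.
    """
    return pickle.load(fileobj, encoding='latin1')

# stand-in for open('./data/MANO_RIGHT.pkl', 'rb'): a protocol-2 pickle
buf = io.BytesIO()
pickle.dump({'hands_components': [0.1, 0.2]}, buf, protocol=2)
buf.seek(0)
data = load_py2_pickle(buf)
assert data['hands_components'] == [0.1, 0.2]
```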

Deviation of 2D keypoints

Thank you for releasing the dataset!

I ran view_samples.py on the first few images of the training data and found that some of the 2D keypoints seem to deviate from the fingers.

Could you confirm whether the deviations below are expected? Here are some examples from the training data:

(attached: training images at indices 0, 3, 5, 8, 10, and 13 with the deviating 2D keypoints overlaid)

Code for MANO fitting

Hello, thanks for your outstanding work. This dataset has helped my research a lot. I am now working on 3D hand reconstruction and want to build my own 3D hand dataset, so I wonder whether you could release the code for fitting the MANO model to multi-view images by optimization.

Thanks in advance!!!

Migration to new CodaLab?

Hi, all

Thank you so much for your dataset and competitions!

It seems the old CodaLab server is retiring, and I am wondering whether you could migrate the current competition to the new server.

https://competitions.codalab.org/competitions/21238 --> CodaLab competition 21238 is no longer accepting new submissions

If you are working on the migration, please let us know. Thank you!

John

using python3

Hello, I ported your code to Python 3, but I hit a problem that I cannot solve (screenshot attached).

I did not change the core code, only version-specific parts.
