
honnotate's Introduction

HOnnotate: A method for 3D Annotation of Hand and Object Poses

Shreyas Hampali, Mahdi Rad, Markus Oberweger, Vincent Lepetit, CVPR 2020

This repository contains code for annotating the 3D poses of a hand and an object captured with a single RGBD camera setup.

Citation

If this code base was helpful in your research work, please consider citing us:

@INPROCEEDINGS{hampali2020honnotate,
title={HOnnotate: A method for 3D Annotation of Hand and Object Poses},
author={Shreyas Hampali and Mahdi Rad and Markus Oberweger and Vincent Lepetit},
booktitle = {CVPR},
year = {2020}
}

Installation

  • This code has been tested with Tensorflow 1.12 and Python 3.5
  • Create a conda environment and install the following main packages.
  • Install DIRT differentiable renderer from here
  • Install pyRender from here (our code uses version 0.1.23)
  • Install chumpy from here
  • Install the following additional packages.
        pip install numpy matplotlib scikit-image transforms3d tqdm opencv-python cython open3d
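
For reference, a typical environment consistent with the versions above can be created as follows; only the Python and TensorFlow versions are stated by the authors, everything else is unpinned:

        conda create -n honnotate python=3.5
        conda activate honnotate
        pip install tensorflow-gpu==1.12.0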

Setup

HOnnotate_ROOT is the directory where you download this repo.

  • Clone the deeplab repository and check out the commit on which our code was tested
            git clone https://github.com/tensorflow/models.git
            cd models
            git checkout 834902277b8d9d38ef9982180aadcdaa0e5d24d3
  • Copy the research/deeplab and research/slim folders to the models folder in the HOnnotate_ROOT repo
  • Download the checkpoint files (network weights) from here and extract them in HOnnotate_ROOT
  • Download the objects' 3D corner files here and extract them in HOnnotate_ROOT
  • Download the YCB object models by clicking on The YCB-Video 3D Models in [https://rse-lab.cs.washington.edu/projects/posecnn/]. Update the YCB_MODELS_DIR variable in HOdatasets/mypaths.py with the path where you unpacked the object models (the directory from which the models folder branches off); see the mypaths.py sketch after this list.
  • Download and extract the test sequence from here and update the HO3D_MULTI_CAMERA_DIR variable in HOdatasets/mypaths.py with its location. Note that the path should not contain the sequence name, i.e., it points to the parent directory of the test sequence folder.
  • Download Models&code from the MANO website. Assuming ${MANO_PATH} contains the path to where you unpacked the downloaded archive, use the provided script to set up the MANO folder as required.
        cd ./optimization
        python setup_mano.py ${MANO_PATH}
        cd ../
  • Finally, your folder structure should look like this:
            - checkpoints
                - CPM_Hand
                - Deeplab_seg
            - eval
            - HOdatasets
            - models
                - CPM
                - deeplab
                - slim
            - objCorners
                - 003_cracker_box
                - 004_sugar_box
                ....
            - onlineAug
            - optimization
            - utils
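
For reference, HOdatasets/mypaths.py should end up defining the two path variables used in the steps above. A minimal sketch is below; the placeholder paths are illustrative, and the shipped file may define additional variables:

        # HOdatasets/mypaths.py (sketch; paths are placeholders)

        # Directory from which the YCB 'models' folder branches off
        YCB_MODELS_DIR = '/path/to/YCB_Video_Models/'

        # Parent directory containing the 'test' sequence folder
        HO3D_MULTI_CAMERA_DIR = '/path/to/sequences/'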

Data Capture

  • The code in this repository enables annotation of 3D hand-object poses in sequences captured with a single RGBD camera setup.
  • A test sequence captured with an Intel RealSense D415 camera can be downloaded from here.
  • Note that, as explained in Section 4.2 of the paper, the grasp pose of the hand should vary only marginally throughout the sequence when capturing with a single-camera setup.
  • Any new data captured with a different camera setup should follow the same folder structure as the test sequence.
  • Folder structure:
        -test
            -rgb
                -0
                    -00001.png
                    -00002.png
                    .
                    .
            -depth
                -0
                        -00001.png
                        -00002.png
                        .
                        .
            -calibration
                -cam_0_depth_scale.txt
                -cam_0_instrinsics.txt
            -configs
                -configHandPose.json
                -configObjPose.json

cam_0_depth_scale.txt contains the depth scale of the depth camera and cam_0_instrinsics.txt contains the camera intrinsics. The *.json files in configs are used as inputs to the scripts (explained later). The folder name '0' in the rgb and depth folders corresponds to the camera ID, which is always 0 in a single-camera setup.
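
As an illustration, the calibration files can be read as plain text. The sketch below assumes cam_0_depth_scale.txt holds a single scale factor that converts raw depth values to meters, and that the intrinsics file is whitespace-separated; both file-format assumptions should be verified against your own capture:

        import numpy as np
        import cv2

        calib_dir = 'test/calibration'  # hypothetical location of the calibration folder

        # Assumed format: a single float converting raw depth units to meters
        depth_scale = float(np.loadtxt('%s/cam_0_depth_scale.txt' % calib_dir))

        # Assumed format: whitespace-separated intrinsics values
        cam_intrinsics = np.loadtxt('%s/cam_0_instrinsics.txt' % calib_dir)

        # Depth PNGs are typically 16-bit (assumption); scale to metric depth
        raw_depth = cv2.imread('test/depth/0/00001.png', cv2.IMREAD_UNCHANGED)
        depth_m = raw_depth.astype(np.float32) * depth_scale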

Run (Single camera setup)

Please refer to Section 4.2 in the paper. The stages below for automatic hand-object pose annotation follow the same single-camera pipeline as in the paper.

0. Keypoints and Segmentations

We use the DeepLab network for segmentation and a Convolutional Pose Machine (CPM) for hand keypoint detection. The networks are trained on the HO3D dataset and the weights can be downloaded from here.

0.1. Hand+Object segmentations

        python inference_seg.py --seq 'test'

The segmentations are saved in the segmentation directory of the test sequence.

0.2. Hand 2D keypoints

This requires the segmentation script to have been run beforehand.

        python inference_hand.py --seq 'test'

The 2D keypoints are saved in the CPMHand directory of the test sequence.

1. Hand and Object Pose Initializations

1.1. Object pose initialization

The object pose in all frames of the sequence is initialized by tracking. To reduce the effort of manual initialization, the object pose in the first frame can be a simple upright position. Before tracking the object pose, the config file configObjPose.json in the configs folder of the test sequence should be updated; an example is shown after the list below.

  • obj: YCB name of the object used in the sequence
  • translationInit: Initialization of the object translation in the first frame. Should be updated if the pose of the object is not upright in the first frame.
  • rotationInit: Initialization of the object rotation (axis-angle representation) in the first frame. Should be updated if the pose of the object is not upright in the first frame.
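
For example, a configObjPose.json for the shared test sequence (which uses the mustard bottle) might contain the fields below; the translation and rotation values are the ones discussed for this sequence in the issues further down, and any other fields in the shipped config should be left untouched:

        {
            "obj": "006_mustard_bottle",
            "translationInit": [0.0, 0.0, -0.4],
            "rotationInit": [-0.28, -1.86, -2.19]
        }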

The following script starts object tracking from the first frame of the sequence.

        cd ./optimization
        python objectTrackingSingleFrame.py --seq 'test' --doPyRender

Remove the --doPyRender flag to run the script faster; it only helps with the visualization. The script creates a dirt_obj_pose folder in the test sequence folder with the results of the optimization for each frame and the visualization below.

The figure shows the input frame after object segmentation, the object rendered in the initialization pose, the depth map error, and the silhouette error.

1.2. Hand pose initialization

This script obtains the initial 3D grasp pose of the hand relative to the object coordinate frame using the hand 2D keypoints detected earlier (step 0.2). Refer to Eq. (12) in the paper for more details on this optimization.

        python handPoseMultiframeInit.py --seq 'test'

The optimization uses the chumpy package and is hence slow. The results are stored in the handInit folder of the test sequence.

The 2D keypoints are lifted to 3D and the resulting mesh is shown in the figure above.

2. Grasp Pose Estimation

A more accurate grasp pose of the hand is obtained using the initialization from step 1.2. Refer to Eq. (13) in the paper for more details. Modify the config file configHandPose.json in the configs folder of the test sequence as in step 1.1. Update the betaFileName field in the json file to use different hand shape parameters or to point to the correct beta files. The beta parameters of the 10 different subjects used in the dataset generation can be downloaded from here.

        python handPoseMultiframe.py --seq 'test' --numIter 200 --showFig --doPyRender

Remove the --showFig and --doPyRender flags to run faster without visualization. The results of the optimization and the visualization (if enabled) are dumped in the dirt_grasp_pose folder of the test sequence.

The first figure above shows the pose of the object and hand during the optimization: the first row is the input image, the second row is the hand-object rendered with the poses at the current iteration, and the third and fourth rows show the depth and silhouette errors. The second figure above is the grasp pose of the hand after the optimization.

3. Object Pose Estimation

A more accurate object pose is obtained by tracking the object poses as explained in Section 4.2 of the paper. The difference between this stage and the object pose initialization in step 1.1 is that the hand mesh rendered with the estimated grasp pose is also used in the optimization. Update the configHandObjPose.json file in the configs folder of the test sequence as in step 1.1.

        python handObjectTrackingSingleFrame.py --seq 'test' --showFig --doPyRender

The results are dumped in the dirt_hand_obj_pose folder of the test sequence.

4. Multi-frame Pose Refinement

This stage performs optimization over multiple frames and over all the hand-object pose variables. Refer to Eq. (1) in the paper. The optimization is done in batches.

        python handObjectRefinementMultiframe.py --seq 'test' --showFig --doPyRender --batchSize 20

The results are dumped in the dirt_hand_obj_refine folder of the test sequence.

Known issues

The segmentation network often under-segments the hand near the fingertips. This results in a small shift in the final annotated keypoints of the fingertips. To account for this, the segmentation maps are corrected after step 3 using the estimated keypoints and the depth map. The segmentation correction script will be added soon.
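
Until that script is available, one plausible way to approximate the correction described above is to re-label pixels around each projected fingertip whose depth agrees with the estimated keypoint. This is only a rough sketch of the idea, not the authors' implementation; the label convention, radius, and tolerance are assumptions:

        import numpy as np
        import cv2

        def grow_hand_seg(seg, depth_m, tips_uvz, radius=8, depth_tol=0.02):
            """Re-label pixels near projected fingertips as hand.

            seg      : HxW label map (assumes 0 = background, 1 = hand)
            depth_m  : HxW depth map in meters
            tips_uvz : iterable of (u, v, z) fingertip keypoints with depth z
            """
            seg = seg.copy()
            h, w = seg.shape
            for u, v, z in tips_uvz:
                # circular neighborhood around the projected fingertip
                disc = np.zeros((h, w), np.uint8)
                cv2.circle(disc, (int(round(u)), int(round(v))), radius, 1, -1)
                # keep only pixels whose depth is consistent with the keypoint
                close = np.abs(depth_m - z) < depth_tol
                seg[(disc == 1) & close] = 1
            return seg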

honnotate's Issues

ResourceExhaustedError: OutOfMemory error

Hi @shreyashampali,

Thank you for the code. I am trying to run the code with the test folder you provided.

System configuration: GTX 1050 Ti with 4 GB of GPU memory

When I run the command "objectTrackingSingleFrame.py --seq 'test'", it says the GPU is out of memory. Is there a way to run it on a 4 GB GPU?

Are there any FLAGS that I could modify to lower the memory usage?

Thank you for your suggestions.

Please tell me your GPU name.

Thank you for your very helpful package on GitHub. I am having trouble with an error at the object pose initialization.
I am having problems with the following command:
$ python objectTrackingSingleFrame.py --seq 'test'

Let me briefly share what I have been able to ascertain. Below is my execution environment. It is built with Docker; the container recognizes the NVIDIA driver, and the CUDA-related libraries are also running.

ubuntu16.04
cuda9.0
cudnn7.6.5
python3.5
tensorflow1.12
tensorflow-gpu1.12

A sample image is below:
00001_image

For segmentation, I could run the command below and get a result like this:
$ python inference_seg.py --seq 'test'
00001_segmentation

Then, for hand 2D keypoints, I could run the command below and get a result like this:
python inference_hand.py --seq 'test'
00001_handpose

Next, for object pose initialization, when I run this command,
$ python objectTrackingSingleFrame.py --seq 'test'
an error occurs:
"Allocator (GPU_0_bfc) ran out of memory trying to allocate 29.75GiB."

What GPU did you use that has 29 GB of memory?

joints index out of range in lift2DJoints

Hi, when I run python handPoseMultiframeInit.py --seq 'test'
I get an index-out-of-range error on projPts = utilsEval.chProjectPoints(m.J_transformed, camMat, False)[jointsMap]

I checked your code and found that you load the MANO model, but m.J_transformed only has 16 joints, as in the MANO model, while jointsMap has 21 joint indices.

I wonder how to get 21 joints' information from the MANO model in your code for the loss calculation.

Thanks
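
(For context, MANO's J_transformed has 16 skeleton joints; the usual way to obtain 21 joints is to append five fingertip vertices taken from the mesh. A minimal sketch is below; the tip vertex indices follow a commonly used MANO convention but are not verified against this repo:)

    import numpy as np

    # Fingertip vertex indices (thumb, index, middle, ring, pinky); a common
    # MANO convention, but conventions differ between codebases (assumption)
    TIP_VERTEX_IDS = [745, 317, 444, 556, 673]

    def joints_21(J_transformed, vertices):
        """Concatenate MANO's 16 joints with 5 fingertip mesh vertices."""
        return np.concatenate([np.asarray(J_transformed),
                               np.asarray(vertices)[TIP_VERTEX_IDS]], axis=0)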

HO3D_v2 dataset evaluation

I submitted my result to the competition, but it doesn't give me my score. I tried to use the public eval code, but where is the evaluation dataset?

submission error in codalab

Hi,
First thanks for such great work!
When I submit my result on CodaLab, I get the following errors:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
/opt/conda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
You are using pip version 9.0.1, however version 20.2.3 is available. You should consider upgrading via the 'pip install --upgrade pip' command.
Traceback (most recent call last):
  File "/tmp/codalab/tmp3wJDBP/run/program/evaluate.py", line 400, in <module>
    set_name='evaluation'
  File "/tmp/codalab/tmp3wJDBP/run/program/evaluate.py", line 214, in main
    pred_file = _search_pred_file(pred_path, pred_file_name)
  File "/tmp/codalab/tmp3wJDBP/run/program/evaluate.py", line 200, in _search_pred_file
    raise Exception('Giving up, because its not clear which file to evaluate.')
Exception: Giving up, because its not clear which file to evaluate.

Could you spare some time to solve it?
Thanks a lot!

Can't download ho3d dataset-version 2

Thanks for your great work! But I get a 502 Bad Gateway error when I try to download the HO3D dataset (version 2) from your website. I'd really appreciate it if you could help me!

Can/How can I evaluate only the 21 3D hand joints on HO3D?

Hi, I am grateful for your work. When I use HO3D for 3D hand pose estimation (just the 21 joint locations, not the mesh), I found that the evaluation split only has the 3D annotation of the root (wrist) joint.

So it confuses me: can I evaluate my work on HO3D at all, or how can I evaluate the 21 hand joints on HO3D?

We sincerely look forward to your reply.
Best wishes.

Docker Image

Thanks for publishing this work. I'm trying to use it almost 3 years after its release. I just wanted to check if anyone has, or is willing to provide, a working Docker image for this repo?

about constraints of thumb

In the supplementary material, the first term in the constraint of the thumb MCP is (0.00, 2.00), but in the code this term is (-5, 5). Which is right?

3d keypoints of Ho3d

Hi, thanks for the nice HO3D dataset. However, I find that handJoints3D in the annotation seems incorrect, so I cannot calculate a hand bbox from it. Has anyone else encountered this problem?

WARNING: failed to load librasterise.so; rasterisation functions will be unavailable: libtensorflow_framework.so.2: cannot open shared object file: No such file or directory

Anaconda environment:
Python version : 3.6.1
tensorflow version : 1.12.0
tensorflow-gpu version : 1.12.0

I got the following error when trying to execute the command "python objectTrackingSingleFrame.py --seq 'test'":

WARNING: failed to load librasterise.so; rasterisation functions will be unavailable:
libtensorflow_framework.so.2: cannot open shared object file: No such file or directory
1. I googled and saw that one solution is to try adding the path to LD_LIBRARY_PATH.
2. The Python version in the anaconda virtual environment is 3.6.1.
3. I executed "find . -name "libtensorflow_framework.so" -print" in the directory where anaconda is installed. I got the following output:
./anaconda3/pkgs/tensorflow-1.10.0-py36_0/lib/python3.6/site-packages/tensorflow/libtensorflow_framework.so
./anaconda3/pkgs/tensorflow-base-2.3.0-eigen_py38hb57a387_0/lib/python3.8/site-packages/tensorflow/libtensorflow_framework.so.2
./anaconda3/pkgs/tensorflow-base-1.12.0-mkl_py36h3c3e929_0/lib/python3.6/site-packages/tensorflow/libtensorflow_framework.so
./anaconda3/pkgs/tensorflow-base-1.12.0-gpu_py36had579c0_0/lib/python3.6/site-packages/tensorflow/libtensorflow_framework.so
./anaconda3/pkgs/tensorflow-base-2.2.0-gpu_py38h83e3d50_0/lib/python3.8/site-packages/tensorflow/libtensorflow_framework.so.2
./anaconda3/pkgs/tensorflow-base-1.14.0-py37h4531e10_0/lib/python3.7/site-packages/tensorflow/libtensorflow_framework.so.1
./anaconda3/envs/honnotate_tf2/lib/python3.8/site-packages/tensorflow/libtensorflow_framework.so.2
./anaconda3/envs/honnotate/lib/python3.6/site-packages/tensorflow/libtensorflow_framework.so
4. All the instances of "libtensorflow_framework.so.2" are under directories corresponding to Python version 3.8, whereas the anaconda environment has Python 3.6.

Pose initialization for the shared 'test' sequence

Hi @shreyashampali,

The objectTracking script fails to track the object with the provided pose for the test sequence.

For the mustard bottle:
"translationInit": [0.0, 0.0, -0.4],
"rotationInit": [-0.28, -1.86, -2.19]. Are these values correct? The tracking fails completely. I tried to modify the parameters manually as suggested, but I failed to get correct tracking.

If these initial values are not correct for the test sequence, could you please share the correct values?

Sample image after a few frames:
00035

Thank you!

pyrender PerspectiveCamera

When I run python objectTrackingSingleFrame.py --seq 'test' --doPyRender, I found that 'self.camera.projMatrix = elements' in vis.py has no effect: a PerspectiveCamera cannot have its intrinsics set. I changed it to an IntrinsicsCamera, which solved the problem.
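
(For reference, pyrender provides an IntrinsicsCamera that takes the calibration values directly; the fx/fy/cx/cy numbers below are placeholders:)

    import pyrender

    # Build the camera from intrinsics instead of a field-of-view PerspectiveCamera
    camera = pyrender.IntrinsicsCamera(fx=617.0, fy=617.0, cx=320.0, cy=240.0,
                                       znear=0.1, zfar=2.0)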

About the 3D error term E_3D

Thanks for your terrific job!

I am confused about the loss between the point cloud and the vertices. You say in the paper that for each point of the point cloud, you look for the closest vertex on the corresponding mesh, but I don't quite understand the part of your code that computes this loss. Could you please spare some time to explain it?

Thanks again!

prediction error

On Ubuntu 20.04 with Python 3.6, CUDA 9.0, cuDNN 7, TensorFlow 1.12, tensorflow-gpu 1.12 and an RTX 3090 GPU, after following the installation and setup in the readme, I ran "python inference_seg.py --seq 'test'" with the given checkpoints. I don't think I'm missing anything in the readme, but the results are as in the attached files. Could you please tell me about possible errors or mistakes?

00001_image
00001_prediction

About pose constraints

Hi Hampali,
Thanks for this great work!

I have some confusion about the pose constraints:

  1. It is said in the paper that you use E_joint to restrict the joint angles in the optimization process and the baseline method, and you give the upper and lower limits for all 45 joint angle parameters in the supplementary material.
     But in constraints.py, you use self.validThetaIDs to limit only 33 of the joint angle parameters, and I have checked that the other 15 parameters are all zero in the whole train set.
     So why don't you only optimize the 33 pose parameters and set the other 15 to 0 in the optimization process and the baseline method?

  2. As you said in the paper, an L2 regularizer is a more common way to limit joint angles. Have you compared the effect of an L2 regularizer without PCA against your method proposed in Eq. (8) of the paper?

Hoping for your kind reply.

All best
Hao Meng

about the joint order

First, thank you for such great work!
I am now confused about the joint order in the dataset.
In the HO3D_v2 dataset, you mention that the joint order is different from that of the MANO model.
I am taking part in the HO3D_v2 CodaLab competition you provide, and I get results that have a high error in joint location but a much lower mesh error.
So I wonder if the joint order in the evaluation dataset is also different from that of the original MANO model?
Hope you can provide some suggestions, thank you!

Submission Error on CodaLab

Hi @shreyashampali,
Thanks for the great work. I am testing some predictions on CodaLab today, but I always encounter the same error:
Execution time limit exceeded!
Would you please help to look into this.
Thanks.

OutOfRangeError exception running objectTrackingSingleFrame.py. Intentional?

Hello, when I run

python objectTrackingSingleFrame.py --seq 'test'

for '1.1. Object pose initialization' from the ReadMe, I get an OutOfRangeError exception at what appears to be the very end of the run:

maskPC for Image test/0/02294 is 0.036168
[Loading New frame ][0/02294]
'runOptimization'  63.31 ms
0.0034784062
0.0529846
0.026492303
'runOptimization'  561.64 ms
maskPC for Image test/0/02297 is 0.035820
[Loading New frame ][0/02297]
'runOptimization'  78.96 ms
0.003642647
0.060919605
0.03045981
'runOptimization'  1479.31 ms
Traceback (most recent call last):
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1356, in _do_call
    return fn(*args)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1341, in _run_fn
    options, feed_dict, fetch_list, target_list, run_metadata)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1429, in _call_tf_sessionrun
    run_metadata)
tensorflow.python.framework.errors_impl.OutOfRangeError: 2 root error(s) found.
  (0) Out of range: End of sequence
	 [[{{node cond/IteratorGetNext}}]]
	 [[cond/IteratorGetNext/_15]]
  (1) Out of range: End of sequence
	 [[{{node cond/IteratorGetNext}}]]
0 successful operations.
0 derived errors ignored.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "objectTrackingSingleFrame.py", line 329, in <module>
    objectTracker(w, h, paramInit, camProp, mesh, out_dir, configData)
  File "objectTrackingSingleFrame.py", line 142, in objectTracker
    opti1.runOptimization(session, 1, {loadData:True})
  File ".../HOnnotate/HOnnotate/optimization/ghope/utils.py", line 159, in timed
    result = method(*args, **kw)
  File ".../HOnnotate/HOnnotate/optimization/ghope/optimization.py", line 69, in runOptimization
    session.run(self.optOp, feed_dict=feedDict)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 950, in run
    run_metadata_ptr)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1173, in _run
    feed_dict_tensor, options, run_metadata)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1350, in _do_run
    run_metadata)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1370, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: 2 root error(s) found.
  (0) Out of range: End of sequence
	 [[node cond/IteratorGetNext (defined at .../HOnnotate/HOnnotate/optimization/ghope/loss.py:227) ]]
	 [[cond/IteratorGetNext/_15]]
  (1) Out of range: End of sequence
	 [[node cond/IteratorGetNext (defined at .../HOnnotate/HOnnotate/optimization/ghope/loss.py:227) ]]
0 successful operations.
0 derived errors ignored.

Original stack trace for 'cond/IteratorGetNext':
  File "objectTrackingSingleFrame.py", line 329, in <module>
    objectTracker(w, h, paramInit, camProp, mesh, out_dir, configData)
  File "objectTrackingSingleFrame.py", line 62, in objectTracker
    frameCntInt, loadData, realObservs = LossObservs.getRealObservables(ds, numFrames, w, h)
  File ".../HOnnotate/HOnnotate/optimization/ghope/loss.py", line 242, in getRealObservables
    lambda: dummyFunc(fidV, segV, depthV, colV, maskV, frameCntIntV))
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1977, in cond
    orig_res_t, res_t = context_t.BuildCondBranch(true_fn)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_ops.py", line 1814, in BuildCondBranch
    original_result = fn()
  File ".../HOnnotate/HOnnotate/optimization/ghope/loss.py", line 241, in <lambda>
    frameID, seg, depth, col, mask, frameCntInt = tf.cond(loadRealObservs, lambda: loadVars(fidV, segV, depthV, colV, maskV, frameCntIntV),
  File ".../HOnnotate/HOnnotate/optimization/ghope/loss.py", line 227, in loadVars
    frameID, seg, depth, col, mask = dataset.make_one_shot_iterator().get_next()
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py", line 426, in get_next
    output_shapes=self._structure._flat_shapes, name=name)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/ops/gen_dataset_ops.py", line 1947, in iterator_get_next
    output_shapes=output_shapes, name=name)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 788, in _apply_op_helper
    op_def=op_def)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
    return func(*args, **kwargs)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3616, in create_op
    op_def=op_def)
  File ".../.conda/envs/HOnnotate/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 2005, in __init__
    self._traceback = tf_stack.extract_stack()

I'm wondering if this is expected/intended as the way for the script to bail out of the while(True) loop in objectTracker(...) in objectTrackingSingleFrame.py? If I add a try/except around session.run(self.optOp, feed_dict=feedDict) in runOptimization(...), the script never ends (due to the while(True) in objectTracker(...)).

                try:
                    session.run(self.optOp, feed_dict=feedDict)
                except tf.errors.OutOfRangeError:
                    break # break the while(True)

Should I instead put the try/except around the call site for objectTracker so that the while(True) will bail out? Is there a better method? Thanks.

                try:
                    objectTracker(w, h, paramInit, camProp, mesh, out_dir, configData)
                except tf.errors.OutOfRangeError:
                    pass	# We're done

Thanks a lot.

Python: 3.6.12
HOnnotate git hash: d94f6b7
dirt git hash: 571addc359201b668d9dc450086c6dce6c18d0b6
CUDA: 11
tensorflow: 1.14
GCC: 8.3.1

The units in the camera coordinate system?

I noticed that the 3D coordinates in the camera coordinate system (anno['handJoints3D']) are very small. It doesn't seem like the units are millimeters. What are the units of these landmark points?

Execution time limit exceeded for HO3D_v3 challenge

The same issue also appears in the HO3D_v3 challenge. After the fix, it becomes:

WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
/opt/conda/lib/python2.7/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')
Execution time limit exceeded!

Could you please kindly provide some help? Thanks!

TypeError: 'int' object is not iterable

First, thanks for such great work; I really want to do some research based on this dataset.
However, after installing all the requirements as instructed and running "python inference_seg.py --seq 'test'", I got the following error:

Traceback (most recent call last):
  File "inference_seg.py", line 166, in <module>
    app.run(main)
  File "/home/haomeng/anaconda3/envs/py35/lib/python3.5/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/haomeng/anaconda3/envs/py35/lib/python3.5/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "inference_seg.py", line 163, in main
    runNetInLoop(fileListIn, numImgs)
  File "inference_seg.py", line 105, in runNetInLoop
    sess, g, predictions, dataPreProcDict = getNetSess(data, h, w, myG)
  File "/home/haomeng/PycharmProjects/HOnnotate/utils/predictSegHandObject.py", line 69, in getNetSess
    output_stride=FLAGS.output_stride)
  File "/home/haomeng/PycharmProjects/HOnnotate/models/deeplab/common.py", line 240, in new
    int(x) for x in FLAGS.decoder_output_stride]
TypeError: 'int' object is not iterable

I suspect that the versions of some libraries caused it.
My virtual environment is listed below; could you kindly tell me the difference?
I am really confused because I did exactly what the readme says but still met this issue.

packages in environment at /home/haomeng/anaconda3/envs/py35:

Name Version Build Channel

_libgcc_mutex 0.1 main defaults
absl-py 0.9.0 pypi_0 pypi
astor 0.8.1 pypi_0 pypi
attrs 19.3.0 pypi_0 pypi
backcall 0.2.0 pypi_0 pypi
bleach 3.1.5 pypi_0 pypi
ca-certificates 2020.6.24 0 defaults
certifi 2016.2.28 py35_0 defaults
chumpy 0.69 pypi_0 pypi
cycler 0.10.0 pypi_0 pypi
cython 0.29.20 pypi_0 pypi
decorator 4.4.2 pypi_0 pypi
defusedxml 0.6.0 pypi_0 pypi
dirt 0.3.0 pypi_0 pypi
entrypoints 0.3 pypi_0 pypi
freetype-py 2.1.0.post1 pypi_0 pypi
gast 0.3.3 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.30.0 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
imageio 2.8.0 pypi_0 pypi
importlib-metadata 1.7.0 pypi_0 pypi
ipykernel 5.3.0 pypi_0 pypi
ipython 7.9.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 7.5.1 pypi_0 pypi
jedi 0.17.1 pypi_0 pypi
jinja2 2.11.2 pypi_0 pypi
joblib 0.14.1 pypi_0 pypi
jsonschema 3.2.0 pypi_0 pypi
jupyter-client 6.1.5 pypi_0 pypi
jupyter-core 4.6.3 pypi_0 pypi
keras-applications 1.0.8 pypi_0 pypi
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.1.0 pypi_0 pypi
libedit 3.1.20191231 h7b6447c_0 defaults
libffi 3.2.1 hd88cf55_4 defaults
libgcc-ng 9.1.0 hdf63c60_0 defaults
libstdcxx-ng 9.1.0 hdf63c60_0 defaults
markdown 3.2.2 pypi_0 pypi
markupsafe 1.1.1 pypi_0 pypi
matplotlib 3.0.3 pypi_0 pypi
mistune 0.8.4 pypi_0 pypi
nbconvert 5.6.1 pypi_0 pypi
nbformat 5.0.7 pypi_0 pypi
ncurses 6.2 he6710b0_1 defaults
networkx 2.4 pypi_0 pypi
notebook 6.0.3 pypi_0 pypi
numpy 1.18.5 pypi_0 pypi
open3d 0.10.0.0 pypi_0 pypi
opencv-python 4.2.0.34 pypi_0 pypi
openssl 1.0.2u h7b6447c_0 defaults
packaging 20.4 pypi_0 pypi
pandocfilters 1.4.2 pypi_0 pypi
parso 0.7.0 pypi_0 pypi
pexpect 4.8.0 pypi_0 pypi
pickleshare 0.7.5 pypi_0 pypi
pillow 7.2.0 pypi_0 pypi
pip 9.0.1 py35_1 defaults
prometheus-client 0.8.0 pypi_0 pypi
prompt-toolkit 2.0.10 pypi_0 pypi
protobuf 3.12.2 pypi_0 pypi
ptyprocess 0.6.0 pypi_0 pypi
pyglet 1.5.7 pypi_0 pypi
pygments 2.6.1 pypi_0 pypi
pyopengl 3.1.0 pypi_0 pypi
pyparsing 2.4.7 pypi_0 pypi
pypng 0.0.20 pypi_0 pypi
pyrender 0.1.43 pypi_0 pypi
pyrsistent 0.16.0 pypi_0 pypi
python 3.5.6 hc3d631a_0 defaults
python-dateutil 2.8.1 pypi_0 pypi
pywavelets 1.1.1 pypi_0 pypi
pyyaml 5.3.1 pypi_0 pypi
pyzmq 19.0.1 pypi_0 pypi
readline 7.0 h7b6447c_5 defaults
scikit-image 0.15.0 pypi_0 pypi
scikit-learn 0.22.2.post1 pypi_0 pypi
scipy 1.4.1 pypi_0 pypi
send2trash 1.5.0 pypi_0 pypi
setuptools 49.1.0 pypi_0 pypi
six 1.15.0 pypi_0 pypi
sklearn 0.0 pypi_0 pypi
sqlite 3.32.3 h62c20be_0 defaults
tensorboard 1.14.0 pypi_0 pypi
tensorflow-estimator 1.14.0 pypi_0 pypi
tensorflow-gpu 1.14.0 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
terminado 0.8.3 pypi_0 pypi
testpath 0.4.4 pypi_0 pypi
tf-slim 1.1.0 pypi_0 pypi
tk 8.6.10 hbc83047_0 defaults
tornado 6.0.4 pypi_0 pypi
tqdm 4.47.0 pypi_0 pypi
traitlets 4.3.3 pypi_0 pypi
transforms3d 0.3.1 pypi_0 pypi
trimesh 3.7.6 pypi_0 pypi
wcwidth 0.2.5 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
werkzeug 1.0.1 pypi_0 pypi
wheel 0.29.0 py35_0 defaults
widgetsnbextension 3.5.1 pypi_0 pypi
wrapt 1.12.1 pypi_0 pypi
xz 5.2.5 h7b6447c_0 defaults
zipp 1.2.0 pypi_0 pypi
zlib 1.2.11 h7b6447c_3 defaults

Possible bug in handPoseMultiframe.py if --showFig is not specified

Per the readme:

    python handPoseMultiframe.py --seq 'test' --numIter 200 --showFig --doPyRender

Remove --showFig and --doPyRender flags to run faster without visualization.

If I run

python handPoseMultiframe.py --seq 'test' --numIter 200

It gets all the way through and then throws this exception:

'runOptimization'  468.36 ms
[Open3D WARNING] GLFW Error: X11: The DISPLAY environment variable is missing
[Open3D WARNING] Failed to initialize GLFW
Traceback (most recent call last):
  File "handPoseMultiframe.py", line 701, in <module>
    handPoseMF(w, h, objParamInitList, handParamInitList, mesh, camProp, out_dir)
  File "handPoseMultiframe.py", line 431, in handPoseMF
    vis.get_render_option().light_on = False
AttributeError: 'NoneType' object has no attribute 'light_on'

The issue is that it's trying to bring up a window even though --showFig is not specified. I'm running this script remotely, so I can't bring up any windows; that's why I didn't supply --showFig or --doPyRender.

It appears that the fix is simply to check whether FLAGS.showFig is true before bringing up the visualizer. So something like:

    if FLAGS.showFig:
        vis = o3d.visualization.Visualizer()
        vis.create_window(window_name='Open3D', width=640, height=480, left=0, top=0,
                          visible=True)  # use visible=True to visualize the point cloud
        vis.get_render_option().light_on = False
        vis.add_geometry(finalHandMesh)
        vis.add_geometry(finalObjMesh)
        vis.run()

That appears to fix the problem for me.

python 3.6.12
tensorflow 1.14.0
CUDA 11.0
HOnnotate git hash d94f6b7

The ground truth hand (after MANO) is not aligned with the image

I found that the 3D hand joints from the MANO layer are not aligned with the annotation annotations['handJoints3D']:

gt_mano_pose = torch.tensor(annotations['handPose'][None])
gt_mano_shape = torch.tensor(annotations['handBeta'][None])
gt_mano_trans = torch.tensor(annotations['handTrans'])[None]
gt_verts, gt_joints = mano_layer(th_pose_coeffs=gt_mano_pose, th_betas=gt_mano_shape, th_trans=gt_mano_trans)

I checked that annotations['handJoints3D'] projected to the image aligns with the hand, but gt_joints do not.

Could you please explain why? Thanks in advance!
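
(One detail that often explains this mismatch: the official HO3D visualization code applies a coordinate flip to the MANO output before projecting, since the model output is in an OpenGL-style camera frame. A hedged sketch, reusing gt_joints from the snippet above:)

    import numpy as np

    # Flip the y and z axes (OpenGL-style -> OpenCV-style camera frame); this mirrors
    # the coordinate-change matrix used in the official HO3D visualization code
    coord_change = np.array([1.0, -1.0, -1.0], dtype=np.float32)
    joints_cam = gt_joints.detach().numpy().squeeze(0) * coord_change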

The CodaLab challenge was down, could you repair it?

The CodaLab challenge for HO3Dv2 was down; could you repair it? I want to do some work based on it, but it has been down frequently. Could you make the testing dataset public like FreiHAND? It's too much trouble to upload for evaluation.
