nghorbani / soma
Solving Optical MoCap Automatically
License: Other
When using render_mosh_once to visualize the .pkl file generated by MoSh, the soma_standard.blend file is missing. Could you share this file with us? Also, what body is in the blend file?
Hello!
Thanks for your great work; it has helped me a lot. Unfortunately, I ran into trouble when running the SOMA code on MoCap point cloud data: I cannot import 'rm_spaces' from 'human_body_prior.tools.omni_tools'. I found that there is no 'rm_spaces' function in the 'human_body_prior.tools.omni_tools' module. Is there a newer edition of 'human_body_prior' that I have missed?
Thank you very much!
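Until the matching human_body_prior release is identified, a tiny shim can unblock the import. This is a guess at the intended behavior (the name suggests it removes whitespace from a string); the body of `rm_spaces` here is my reconstruction, not the original implementation:

```python
def rm_spaces(text: str) -> str:
    """Hypothetical stand-in for human_body_prior.tools.omni_tools.rm_spaces.

    Assumption: the original simply strips whitespace out of a string.
    """
    return ''.join(text.split())
```

Dropping this definition into your local copy of omni_tools (or monkeypatching it before importing the SOMA code) should at least get past the ImportError.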
In run_soma_multiple.py, should line 132 be f'**/*{mocap_ext}' instead of f'*/*{mocap_ext}'? On my Linux machine the latter returns an empty list.
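The difference matters because `*/*` only matches files exactly one directory deep, while `**/*` (with `recursive=True` in `glob`) walks the whole tree. A quick stdlib demonstration:

```python
import glob
import tempfile
from pathlib import Path

# build a small tree: root/session1/subject1/clap_001.c3d
root = Path(tempfile.mkdtemp())
(root / 'session1' / 'subject1').mkdir(parents=True)
(root / 'session1' / 'subject1' / 'clap_001.c3d').touch()

# one level deep only: misses files nested two levels down
shallow = glob.glob(str(root / '*' / '*.c3d'))
# recursive: finds the file regardless of depth
deep = glob.glob(str(root / '**' / '*.c3d'), recursive=True)
print(len(shallow), len(deep))  # 0 1
```

So if the mocap files sit more than one directory below the dataset root, only the `**` pattern will find them.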
Hi,
I have a question regarding how to normalize the SMPL result computed from mocap data to a single human form across the whole dataset, so that all subjects share a standard body and the differences between subjects are not recognizable. To be clear, I don't mean the shape parameters; I mean converting the pose parameters to a normalized version.
After converting mocap to SMPL, I converted the resulting SMPL pose parameters to FBX, and different subjects still seem to have different body forms (even without using shape parameters). I want the pose parameters normalized so the subjects don't differ from each other. I would appreciate it if you could guide me on how to do that.
Thank you very much in advance.
Best wishes,
Leila
Hello,
Thanks so much for sharing this work! I get an error when trying to run mosh on labelled data:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.7/site-packages/psbody/smpl/__init__.py", line 6, in <module>
    from .serialization import *
  File "/usr/local/lib/python3.7/site-packages/psbody/smpl/serialization.py", line 12, in <module>
    from .posemapper import posemap
  File "/usr/local/lib/python3.7/site-packages/psbody/smpl/posemapper.py", line 12, in <module>
    from .rodrigues import Rodrigues
  File "/usr/local/lib/python3.7/site-packages/psbody/smpl/rodrigues.py", line 6, in <module>
    from .fast_derivatives.smpl_derivatives import Rodrigues as _Rodrigues
ModuleNotFoundError: No module named 'psbody.smpl.fast_derivatives.smpl_derivatives'
```
I verified that the psbody package is in the site-packages folder, and I can import psbody, but importing psbody.smpl raises the same error.
I was also attempting to run the code for labelling unlabelled c3d files and got an OmegaConf resolver error for resolve_mocap_subject (the same as in #8). Tracing through the soma and moshpp code, it looks like no resolver is registered for resolve_mocap_subject, but resolve_mocap_session, defined in run_tools in moshpp (https://github.com/nghorbani/moshpp/blob/6599a2d7dde7baab67ef9d859c967c14c4e7badc/src/moshpp/tools/run_tools.py#L125), does what resolve_mocap_subject should do.
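The interpolation error means OmegaConf has no resolver registered under the name resolve_mocap_subject. A sketch of a plausible fix, assuming (as the issue suggests) that the subject name is simply a component of the mocap file path; the function body here is my guess, not moshpp's actual logic:

```python
from pathlib import Path

def resolve_mocap_subject(mocap_fname: str) -> str:
    # assumption: the subject is the parent directory of the mocap file,
    # e.g. .../SOMA_unlabeled_mpc/soma_subject1/clap_001.c3d -> 'soma_subject1'
    return Path(mocap_fname).parent.name

# Registering it before composing the config would then look like
# (omegaconf >= 2.1):
#   OmegaConf.register_new_resolver('resolve_mocap_subject', resolve_mocap_subject)
```

With the resolver registered, the `dirs.mocap_out_fname` interpolation should resolve instead of raising UnsupportedInterpolationType.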
I also had the exact same issue as described in #16 and fixed it in the same manner, which seems to work.
When I tried to run SOMA on the SOMA dataset using the provided ipynb file in Jupyter notebook, it gave me an error:
```python
train_multiple_soma(
    soma_data_settings=soma_data_settings,
    soma_train_cfg={
        'soma.expr_id': soma_expr_id,  # the experiment ID
        'dirs.support_base_dir': support_base_dir,
        'dirs.work_base_dir': soma_work_base_dir,
        'data_parms.mocap_dataset.amass_marker_noise_model.enable': False,  # we cannot create the AMASS marker noise model
        'moshpp_cfg_override.moshpp.verbosity': 1,
        'moshpp_cfg_override.dirs.support_base_dir': support_base_dir,
        'trainer.fast_dev_run': False,  # if True, only one iteration of training and validation is done
        'data_parms.mocap_dataset.marker_layout_fnames': [soma_marker_layout_fname],
        'train_parms.batch_size': 256,
        'trainer.num_gpus': num_gpus,
        'train_parms.num_workers': num_cpus,
    },
)
```

```
TypeError: __init__() got an unexpected keyword argument 'distributed_backend'
```
My specifications:
- Ubuntu 20.04
- Python 3.7
- PyTorch version 1.5.10
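The TypeError suggests a PyTorch Lightning version newer than what the repo was written against: `distributed_backend` was deprecated and later removed from `Trainer` (replaced by `accelerator`, then `strategy`). As a hedged illustration, a small kwargs-translation helper could paper over the rename; the version cutoff below is an assumption, so verify it against your installed Lightning:

```python
def adapt_trainer_kwargs(kwargs, pl_version):
    """Rename the removed 'distributed_backend' Trainer argument.

    Illustrative only; the (1, 5) cutoff for the 'strategy' rename is an
    assumption about pytorch_lightning's release history.
    """
    kwargs = dict(kwargs)  # avoid mutating the caller's dict
    if 'distributed_backend' in kwargs:
        major, minor = (int(x) for x in pl_version.split('.')[:2])
        backend = kwargs.pop('distributed_backend')
        key = 'strategy' if (major, minor) >= (1, 5) else 'accelerator'
        kwargs[key] = backend
    return kwargs
```

Usage would then look like `pl.Trainer(**adapt_trainer_kwargs(cfg, pl.__version__))`; alternatively, simply pin pytorch_lightning to the version the repo's requirements name.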
When trying to run "Running SOMA On MoCap Point Cloud Data" by following the steps in tutorial 1, I get this error:

```
UnsupportedInterpolationType: Unsupported interpolation type resolve_mocap_subject
full_key: dirs.mocap_out_fname
object_type=dict
```

I also tried changing the variable 'soma_mocap_target_ds_name' to 'SOMA_unlabeled_mpc/soma_subject1', since the .c3d files are in that folder. That removes the error, but the output says zero jobs were submitted:
2022-04-11 13:02:22.358 | INFO | soma.tools.run_soma_multiple:run_soma_on_multiple_settings:245 - Submitting SOMA jobs.
2022-04-11 13:02:22.366 | INFO | soma.tools.parallel_tools:run_parallel_jobs:54 - #Job(s) submitted: 0
2022-04-11 13:02:22.366 | INFO | soma.tools.parallel_tools:run_parallel_jobs:67 - Will run the jobs in random order.
Any help would be really appreciated. Thanks in advance!
Hi,
Thank you for your excellent work! I would like to convert our labeled mocap data to SMPL format using SOMA. When I run the "Solving Bodies with MoSh++" part of "Run SOMA On MoCap Point Cloud Data", the layout file "SOMA_unlabeled_mpc_smplx.json" is created. Subjects 1 and 2 get the same JSON file, and I also found a fine-tuned layout file at support_files/marker_layouts/SOMA/soma_subject1/clap_001_smplx_finetuned.json, which has different vertex numbers (vids). I have some questions about the layout JSON files:
Thank you!
Hello,
I am trying to run the last part of the tutorial, Running SOMA on MoCap point cloud data, where the output model is displayed using AMASS, with the .pkl file generated by the previous tutorial, Solve Already Labeled MoCaps With MoSh++. But I am getting the following error:

```
TypeError: register_buffer() takes 3 positional arguments but 4 were given
```

I am working on Ubuntu 20.04, Python version 3.7.13. How do I fix this?
Thanks for your amazing work! I noticed that the project uses SMPL-X gender-neutral body data. I wonder how to transfer the neutral model's body shape and pose to another gender. Or can I use another SMPL-X gendered body?
I'm trying this tutorial with the sample data, but I'm getting an error saying it's missing smplx/pose_hand_prior.npz. Can anyone point me to the download that contains this file? I can't seem to find it on https://smpl-x.is.tue.mpg.de/. Many thanks!
Hi, is there any way to run the program more quickly? It takes a week on my laptop for a 20-minute mocap recording.
Can we use the parallel cfg parameters to run the program faster, possibly in parallel?
Thank you very much.
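Independently of any built-in parallel settings, one generic way to speed things up is to process independent mocap files in separate processes with the standard library. `process_one_mocap` below is a hypothetical stand-in for whatever per-file SOMA/MoSh++ call you use, not an actual API of this repo:

```python
from multiprocessing import Pool

def process_one_mocap(mocap_fname):
    # hypothetical stand-in for the real per-file pipeline call
    return f'done: {mocap_fname}'

if __name__ == '__main__':
    mocap_fnames = ['clap_001.c3d', 'walk_002.c3d', 'jump_003.c3d']
    # one worker per file, capped by available CPU cores
    with Pool(processes=3) as pool:
        results = pool.map(process_one_mocap, mocap_fnames)
    print(results)
```

This only helps across files; it does not speed up the per-file optimization itself.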
Hi, recently I have been working on human body reconstruction and downloaded your code. Thanks for your great work; it helps me a lot.
Unfortunately, I ran into some trouble using this work on Windows 10. The downloaded smpl-fast-derivatives package is precompiled and can only be used on Linux. I have no idea how to obtain a build of smpl_derivatives that works on Windows. Could you provide files that support Windows? Thank you very much!
Hi,
first of all, thank you for publishing and releasing such an awesome work to the public!
While looking at the code I noticed a small mismatch in function arguments. In soma/src/soma/tools/soma_processor.py:451 the function write_mocap_c3d is called with the wrong argument name. It should be out_mocap_fname, not out_c3d_fname, so:

```python
if rt_cfg.save_c3d:
    c3d_out_fname = rt_cfg.dirs.mocap_out_fname.replace('.pkl', '.c3d')
    nan_replaced_labels = [l if l != 'nan' else '*{}'.format(i) for i, l in enumerate(results['labels'])]
    write_mocap_c3d(out_mocap_fname=c3d_out_fname,
                    markers=results['markers'],
                    labels=nan_replaced_labels,
                    frame_rate=soma_labeler.mocap_frame_rate)
    logger.info(f'Created {c3d_out_fname}')
```
Happy to do a small PR to fix this as well :)
Hi, what about inertial motion capture? Is there a plan to support that, e.g. systems like Perception Neuron?
Could anyone help me solve this error?
Thank you for your excellent work. As noted in the code, the MoSh process runs on the CPU, so it is very slow. Can the MoSh process be accelerated on the GPU? If so, how should I do it? Looking forward to your reply.
Hello,
Thank you for your work and code. I am trying to run the solve-labeled-mocap tutorial but am having issues with the rendering option. I have posted an image of the error below.
Hi, thanks for your tutorials. I wondered whether there is a specific format for the input mocap data; is there a need to preprocess already-existing mocap data into an acceptable input format?
Hi, thanks for this amazing work again!
When I tried to use my own MoCap data as input to solve_labeled_mocap.ipynb, this error appeared. However, it didn't happen for every MoCap file, so I guess it is caused by the input data. I just wonder whether there is a way to solve or ignore it?
The whole error description:

```
ValueError: not enough frames were found in which at least 100.0% of the markers are available.
Either try mosh.stagei_frame_picker.type: random or set mosh.stagei_frame_picker.least_avail_markers to a lower number in the range [0.1, 1.0]
```
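Acting on that error message, the two suggested settings can be overridden in the MoSh++ configuration. The key spellings below are taken from the error text itself, but the exact override mechanism is an assumption, so double-check them against your moshpp config:

```python
# hypothetical moshpp config override; keys come from the error message
mosh_cfg_override = {
    'mosh.stagei_frame_picker.type': 'random',
    # require only 90% of markers per candidate frame instead of 100%
    'mosh.stagei_frame_picker.least_avail_markers': 0.9,
}
```

Lowering least_avail_markers trades frame quality for availability, so a mid-range value like 0.8 to 0.9 is a reasonable first attempt.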
Hi,
I am unable to get through the label priming demo; it throws an error when running run_soma_on_multiple_settings():

```
UnsupportedInterpolationType: Unsupported interpolation type resolve_mocap_subject
full_key: dirs.mocap_out_fname
object_type=dict
```

It happens on line 156 of run_soma_multiple.py:

```python
soma_labeled_mocap_fname = cur_soma_cfg.dirs.mocap_out_fname
```

I tried downgrading the omegaconf version from 2.3 to 2.1.2; this did not work. I have not figured out which part of the config filename path is causing the interpolation error. Any ideas?
Hi, I have been trying to get SOMA to run for a couple of weeks now on Colab using Python 3.7, carefully following the requirements, but I am left with dependency conflicts coming out of my ears. Does anyone have working Colab code that I could use? I would like to use and reference SOMA as part of a project. Thanks.
Hi, how do I modify the tutorial https://github.com/nghorbani/soma/blob/main/src/tutorials/solve_labeled_mocap.ipynb to solve for SMPL parameters instead of SMPL-X? I see that MoSh++ supports SMPL, but I can't find an immediate way to solve for SMPL instead of SMPL-X in the tutorial.
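One plausible route, assuming MoSh++ selects the body model through its surface-model settings (I have not verified these key names against the moshpp config, so treat them as placeholders to adapt):

```python
# hypothetical moshpp config override; key names and the path are assumptions
moshpp_cfg_override = {
    'surface_model.type': 'smpl',  # instead of the tutorial's 'smplx'
    'surface_model.fname': '/path/to/smpl/neutral/model.npz',  # hypothetical path
}
```

Whatever the real keys are, the SMPL model files would also need to be placed under the support directory where the tutorial expects the SMPL-X ones.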
Hi,
Thank you so much for the excellent work.
I am trying to generate SMPL meshes from my custom, already-labeled c3d files. However, my c3d marker labels are named differently from the AMASS dataset and also include unique body locations that are not in the 89-marker superset (e.g., the ear).
I had some trouble following the solve_labeled_mocap tutorial regarding the marker_layout setup:
I'm a bit unsure about the best MoSh workflow for my situation and would greatly appreciate any guidance or insight you could offer. Thank you in advance for your help and time!
I've been trying to run the first tutorial and I'm still jumping through hoops to avoid errors. I would like to have a pip freeze of a working environment, because the problem I'm facing now seems related to outdated features of PyTorch and PyTorch Lightning. Issue #7 is closed, but I don't think it should be: either the code should be updated to work with current versions of the libraries, or the requirements should be pinned to a set of versions that works.
As a note, up until now I've managed to avoid the following problems:
- The smpl_derivatives are wrong. My solution seems to work: #19 (comment)
- Cython in the requirements cannot be installed with pip install -r requirements.txt. It has to be installed from source manually.
- mesh always imports the meshviewer module, which causes problems with OpenGL. For this, my solution has been to move the from OpenGL import GL, GLU, GLUT statement inside the functions, to avoid importing it until the moment it is necessary (because I don't care about visualization if I can't even run the model). The related issues are: