
transformer-inertial-poser's Introduction

Transformer Inertial Poser (TIP): Real-time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation

This is the Python implementation accompanying our TIP paper at SIGGRAPH Asia 2022.

Arxiv: https://arxiv.org/abs/2203.15720

Video: https://youtu.be/rXb6SaXsnc0

Copyright 2022 Meta Inc. and Stanford University

Licensed under the CC-BY-NC-4.0 License

TIP Teaser

Environment Setup

(Only tested on Ubuntu 18.04; might work on Windows with some minor modifications.)

1. Go to https://www.anaconda.com/download/ and install the Python 3 version of Anaconda or Miniconda.

2. Open a new terminal and run the following command to create a new conda environment (https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html):

conda create -n tip22 python=3.8

3. Activate and enter the new environment you just created:

conda activate tip22

4. Inside the new environment, from the project directory, install the Python dependencies:

pip install -r requirements.txt

5. Install PyTorch with CUDA (only tested with the following version; it should work with other versions too):

conda install pytorch==1.7.1 cudatoolkit=10.2 -c pytorch (check the PyTorch website for your preferred version)

6. Install our fork of the Fairmotion library at a location you prefer:

git clone https://github.com/jyf588/fairmotion.git
cd fairmotion
pip install -e .
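
Optionally, you can run a quick sanity check that the key dependencies import correctly. This is just a sketch, not part of the repo:

    import torch
    import pybullet
    import fairmotion  # installed from the fork above

    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    print("pybullet and fairmotion imported OK")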

Datasets and Models

SMPL+H model

Download the SMPL+H model from https://psfiles.is.tuebingen.mpg.de/downloads/mano/smplh-tar-xz. Decompress it and replace the empty data/smplh folder in this repo with the decompressed folder.

AMASS

Download AMASS from https://amass.is.tue.mpg.de/download.php. SMPL+H is the file format we use, though no hand motions are generated by our algorithm.

We only used some of the subsets within AMASS to train the model; including more data synthesized from AMASS might improve performance, but we have not tried it. Decompress the datasets and place the following folders inside data/source/: AMASS_CMU, KIT, Eyes_Japan_Dataset, HUMAN4D, ACCAD, DFaust_67, HumanEva, MPI_Limits, MPI_mosh, SFU, Transitions_mocap, TotalCapture, DanceDB.

Real IMU signals from DIP

Download DIP from https://psfiles.is.tuebingen.mpg.de/downloads/dip/DIPIMUandOthers_zip. Put the DIP_IMU folder inside data/source/. This folder should contain both IMU signals and SMPL ground-truth poses, which we use to train the model together with the synthesized AMASS data.

Real IMU signals from TotalCapture (TC)

(Only used during eval. You could skip this if you are not interested in evaluating your trained model on real IMUs from TC.)

Check the license terms and citation information for the TotalCapture dataset: https://cvssp.org/data/totalcapture/.

Contact the DIP authors (https://github.com/eth-ait/dip18) and ask them to send you the TC IMU signals preprocessed by them (in DIP format); the folder should carry the name "TotalCaputure_60FPS_Original". The folder also packs ground-truth poses in the old SMPL format, which we do not use in this codebase -- we simply use the SMPL poses from the TotalCapture subset included in AMASS. Put the TotalCaputure_60FPS_Original folder inside data/source/ as well.

We provide an untested and uncleaned script viz_raw_DIP_TC.py which you can modify and use to visualize the raw IMU signals from DIP or TC.

Data Processing and Training

1. Synthesize IMU signals from the AMASS data, since AMASS only contains SMPL poses. Change the N_PROC and DATA_V_TAG arguments in the bash script according to your needs.

sh ./data-gen-new-scripts.bash

Note 1: We cast the ground-truth poses into a format commonly used by physics simulators, namely q and dq. The dq part is left unfilled (zeros), except for the root linear velocity, which is the only part the algorithm actually uses.

Note 2: SBPs (see the paper for details) are also synthesized here. In the data files they are stored under the key name constrs.

Note 3: We use URDF (a commonly used robotics specification file format) to specify the tree structure of the human and the locations where the IMUs are placed. You can easily modify those positions by searching for IMU in data/amass.urdf. Note, though, that the root (pelvis) IMU location offset is specified in constants.py instead, due to a limitation of PyBullet.

We provide an untested and uncleaned script viz_generated_sbp.py which you can modify and use to visualize the synthesized SBP data for AMASS.
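
If you want to inspect what the generation step produced, below is a minimal sketch assuming the preprocessed clips are Python pickles and that the SBPs are stored under the key constrs as noted above; the file path and any other key names are hypothetical:

    import pickle

    # hypothetical path; point this at one file produced by the bash script above
    with open("data/preprocessed_AMASS_v1/example_clip.pkl", "rb") as f:
        clip = pickle.load(f)

    print(clip.keys() if isinstance(clip, dict) else type(clip))
    sbps = clip["constrs"]  # synthesized SBPs (see Note 2)
    print(type(sbps), getattr(sbps, "shape", None))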

2. Preprocess the real IMU data from DIP (first command) and TC (second command) into our own format:

python preprocess_DIP_TC_new.py --is_dip --data_version_tag v1
python preprocess_DIP_TC_new.py --data_version_tag v1

Note: we do two additional things to the DIP dataset. First, since DIP does not have root motion, we use a pre-trained model (trained without DIP) to label pseudo ground-truth SBPs for the DIP motions; in this repo we omit this tedious step and directly provide the pre-computed SBP info to append to the DIP dataset. Second, this script splits subjects 1 to 8 for training and 9 to 10 for testing, following previous works for fair comparison (the split is illustrated in the snippet below).
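
Purely illustrative (not code from the repo), the subject split described above corresponds to:

    # DIP subject split: s_01..s_08 for training, s_09/s_10 held out for testing
    TRAIN_SUBJECTS = [f"s_{i:02d}" for i in range(1, 9)]
    TEST_SUBJECTS = ["s_09", "s_10"]
    print(TRAIN_SUBJECTS, TEST_SUBJECTS)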

3. Combine the synthesized (AMASS subsets) and real (DIP subjects 1~8) data for training, and do further pre-processing to save training time:

python preprocess_and_combine_syn_amass.py --data_version_tag v1

4. Training:

python -m train_model --save_path "output/model-new-v1-0" --log-interval 20 --seed 5104 --clip 5.0 --epochs 1100 --batch_size 256 --lr 1e-4 --seq_len 40 --rnn_nhid 512 --tf_nhid 1024 --tf_in_dim 256 --n_heads 16 --tf_layers 4 --in_dropout 0.0 --past_dropout 0.8 --weight_decay 1e-4 --cuda --cosine_lr --n_sbps 5 --with_acc_sum --data_version_tag "v1" --noise_input_hist 0.15 > "output/model-new-v1-0-out.txt" &

We did not search much over the hyperparameters above -- feel free to sweep them further (a rough sweep driver is sketched below).
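
One rough way to sweep (not from the repo) is to wrap the training command above in a small Python driver; all flags mirror the command verbatim, and only lr and past_dropout are varied here as an example:

    import itertools
    import subprocess

    for lr, pd in itertools.product(["1e-4", "3e-4"], ["0.5", "0.8"]):
        tag = f"lr{lr}-pd{pd}"
        subprocess.run([
            "python", "-m", "train_model",
            "--save_path", f"output/model-new-v1-{tag}",
            "--log-interval", "20", "--seed", "5104", "--clip", "5.0",
            "--epochs", "1100", "--batch_size", "256",
            "--lr", lr, "--past_dropout", pd,
            "--seq_len", "40", "--rnn_nhid", "512", "--tf_nhid", "1024",
            "--tf_in_dim", "256", "--n_heads", "16", "--tf_layers", "4",
            "--in_dropout", "0.0", "--weight_decay", "1e-4",
            "--cuda", "--cosine_lr", "--n_sbps", "5", "--with_acc_sum",
            "--data_version_tag", "v1", "--noise_input_hist", "0.15",
        ], check=True)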

5. Model evaluations (ref. Table 1 in the paper):

Test on DIP Subjects 9~10:

python offline_testing_simple.py --name_contains "dipimu_s_09 dipimu_s_10" --ours_path_name_kin output/model-new-v1-0.pt --with_acc_sum  --test_len 30000 --compare_gt --seed 42 --five_sbp

Test on real IMUs from TotalCapture:

python offline_testing_simple.py --name_contains "tcimu" --ours_path_name_kin output/model-new-v1-0.pt --with_acc_sum  --test_len 30000 --compare_gt --seed 42 --five_sbp

Test on synthesized DanceDB:

python offline_testing_simple.py --name_contains "DanceDB" --ours_path_name_kin output/model-new-v1-0.pt --with_acc_sum  --test_len 30000 --compare_gt --seed 42 --five_sbp

The commands above call real_time_runner_minimal.py, which runs the core (i.e., minimal) test-time features of our algorithm. Specifically, it assumes a flat ground without terrain, and the learned SBPs are only used to correct root translational drift. This minimal version of our system is provided to lower the bar for understanding and adaptation. To use the full features of our system, i.e., terrain reconstruction and SBPs for joint motion correction (e.g., sitting), you need to call real_time_runner.py instead. This can easily be done by commenting/uncommenting the following two lines in offline_testing_simple.py:

    # ours_out, c_out, viz_locs_out = test_run_ours_gpt_v4_with_c_rt(char, s_gt, imu, m, 40)
    ours_out, c_out, viz_locs_out = test_run_ours_gpt_v4_with_c_rt_minimal(char, s_gt, imu, m, 40)

Note: unlike previous methods, our Transformer-decoder-based method does not have an offline mode that considers all future IMU readings when estimating the current pose. Even in offline_testing_simple.py, we pretend that the pre-recorded IMU signal file is streaming into the system frame by frame. In other words, the system is always "real-time" (a minimal illustration of this follows).
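
The sketch below is not the actual logic of offline_testing_simple.py (model.step and the frame format are hypothetical); it only illustrates the idea that the model consumes one frame at a time and never sees future frames:

    import collections

    WINDOW = 40  # matches the history length used in the commands above

    def run_streaming(imu_frames, model):
        history = collections.deque(maxlen=WINDOW)   # only current and past frames are kept
        poses = []
        for frame in imu_frames:                     # pretend the recorded file is a live stream
            history.append(frame)
            poses.append(model.step(list(history)))  # estimate uses no future information
        return poses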

We release two pre-trained models. The first, output/model-without-dip9and10.pt, is produced exactly by the commands above, holding out subjects 9 and 10 of the real DIP data. The second, output/model-with-dip9and10.pt, includes subjects 9 and 10 in training. While we should not use this second model for number reporting, we can use it for the real-time demo.

Real-time Demo

(Only tested with six Xsens Awinda sensors connected to a Windows workstation (through an Xsens base station) with a decent GPU. Sorry, we won't be able to provide much help for other setups. The paper's Appendix provides a detailed description of the calibration for real setups.)

1. Follow Environment Setup to set up a similar Conda environment on this Windows machine. Some of the Python dependencies used during training are no longer needed; you can remove them if you run into issues.

2. Download MT Software Suite 2021.4 (the specific version probably doesn't matter) from https://www.xsens.com/software-downloads (→ MTi products) and install it.

3. Connect your MTw Awinda sensors and base station to the computer, run MT Manager, and follow the steps in this link up to Step 5 to make sure your computer can successfully read the IMU signals: https://base.xsens.com/s/article/Getting-started-with-MTw-and-MT-Manager-2019?language=en_US . Check the visualization to see whether the readings make intuitive sense, to avoid using a bad sensor.

4. Familiarize yourself with the sensor coordinate system (e.g., Sec. 11.6 of the Awinda manual): https://www.xsens.com/hubfs/Downloads/Manuals/MTw_Awinda_User_Manual.pdf. Also refer to the paper's Appendix.

5. Depending on where you installed the MT Software Suite, replace C:\Program Files\Xsens\MT Software Suite 2021.4\MT SDK\Examples\xda_cpp\example_mtw.cpp with my file here: https://drive.google.com/file/d/1fiW9_PntFnsqrqNPp1AjEeG24hyTeavW/view?usp=sharing

6. Open C:\Program Files\Xsens\MT Software Suite 2021.4\MT SDK\Examples\xda_cpp\xda_cpp.sln in Visual Studio with admin privileges (I used VS 2019), and run Local Windows Debugger with the "Release, x64" profile to start the C++ client, which sends the IMU signals over to the Python server (on the same workstation). If the IMUs are connected and turned on, the C++ program will constantly output the current IMUs and their signals in the format of L550 of example_mtw.cpp. In other words, the C++ program now plays the role of a "simplified" MT Manager, reading signals from the IMU sensors (see Step 3).

7. Look at L550 of example_mtw.cpp: among the 6 IMUs we use, #0 will have the smallest device ID and #5 the largest. We use the order (#0 (smallest ID): pelvis, #1: left wrist, #2: right wrist, #3: left knee, #4: right knee, #5 (largest ID): head) on the Python server side, as illustrated in the sketch below. Use markers/stickers to label your IMUs accordingly so that you don't wear them in the wrong order or at the wrong locations.
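
As a small illustration of that ordering convention (a sketch, not the repo's code; the device IDs below are made up), sorting the reported IDs ascending and pairing them with the body locations gives the mapping:

    BODY_ORDER = ["pelvis", "left wrist", "right wrist", "left knee", "right knee", "head"]

    def map_devices_to_body(device_ids):
        assert len(device_ids) == 6, "expect exactly six IMUs"
        # smallest ID -> #0 (pelvis), ..., largest ID -> #5 (head)
        return dict(zip(sorted(device_ids), BODY_ORDER))

    # example with made-up device IDs
    print(map_devices_to_body([0x00B44A35, 0x00B44A01, 0x00B44A90,
                               0x00B44A22, 0x00B44A47, 0x00B44A18]))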

8. Now, on the Python server side, run live_demo_new.py. If the C++ program is also running and constantly printing out the signals in the L550 format, the Python server should be able to connect to the C++ client and start streaming IMU signals in (a bare-bones sketch of such a server follows).
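
For orientation only, here is a bare-bones sketch of the server side of such a setup; the real protocol lives in live_demo_new.py and the modified example_mtw.cpp, and the port number and newline-delimited parsing used here are assumptions:

    import socket

    HOST, PORT = "127.0.0.1", 8888  # hypothetical; must match whatever the C++ client uses

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()   # blocks until the C++ client connects
        with conn:
            print("C++ client connected from", addr)
            buf = b""
            while True:
                data = conn.recv(4096)
                if not data:
                    break
                buf += data
                while b"\n" in buf:
                    line, buf = buf.split(b"\n", 1)
                    print("IMU line:", line.decode(errors="replace"))  # parse per-sensor values here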

9. When the Python program prompts "Put all imus aligned with your body reference frame and then press any key.", think about which direction you will face when doing the T-pose calibration later, then place each IMU so that its x-axis points along your front, y-axis along your left, and z-axis along your up direction (figure below). When you have placed all the IMUs, press any key to start the first-step calibration.

[Figure: IMU axis alignment (x forward, y left, z up) before the first-step calibration]
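
The idea behind this first-step calibration can be sketched as follows. This is not the repo's calibration code; it only illustrates the standard per-sensor offset trick, assumes quaternion readings and scipy, and the exact composition order depends on the sensor SDK's conventions:

    from scipy.spatial.transform import Rotation as R

    def record_alignment_offsets(alignment_quats):
        # alignment_quats: one (x, y, z, w) quaternion per IMU, read while each sensor
        # lies with its x-axis along your front, y along your left, z up; at that moment
        # the sensor frame coincides with the body reference frame, so the inverse
        # reading maps later measurements back into that reference frame
        return [R.from_quat(q).inv() for q in alignment_quats]

    def to_body_reference(live_quat, offset):
        # express a later sensor reading relative to the body reference frame
        return (offset * R.from_quat(live_quat)).as_quat()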

10. Follow the remaining prompts from the Python program to do the T-pose calibration, after which the program should start running and you will see the reconstructed motion visualized in the PyBullet GUI.

Citation

If you found this code or paper useful, please consider citing:

@inproceedings{TIP22,
author = {Jiang, Yifeng and Ye, Yuting and Gopinath, Deepak and Won, Jungdam and Winkler, Alexander W. and Liu, C. Karen},
title = {Transformer Inertial Poser: Real-Time Human Motion Reconstruction from Sparse IMUs with Simultaneous Terrain Generation},
year = {2022},
doi = {10.1145/3550469.3555428},
booktitle = {SIGGRAPH Asia 2022 Conference Papers},
articleno = {3},
numpages = {9},
keywords = {Wearable Devices, Human Motion, Inertial Measurement Units},
location = {Daegu, Republic of Korea},
series = {SA '22 Conference Papers}
}

transformer-inertial-poser's People

Contributors

jyf588


transformer-inertial-poser's Issues

q_t ∈ R^57

May I ask whether the parameter is in the local coordinate system or the global coordinate system? In TransPose, the parameters are in the local coordinate system.

Visualize with SMPL: The definition of 18 joints

Thanks for doing such a great job.
The paper mentions that the output will be 18 joint angles defined in the SMPL human model, but SMPL has 24 joints, so what is the difference between the 18 and 24 joints? How can I use 18 joints to drive the normal SMPL model?

Effect of the SBP for joint motion correction

Hello, thanks for your excellent work!

I tried to evaluate your model on the DanceDB and DIP IMU datasets. I used your pretrained model "model-without-dip9and10.pt" and one I retrained myself. However, for both of these models, I got better results when running test_run_ours_gpt_v4_with_c_rt_minimal(), which only uses the SBPs for root correction. For example, when I test your released checkpoint on the DanceDB dataset, I get
loss_angle: 14.596411590771083
loss_j_pos: 7.566159683066048
loss_2s_root: 0.022359336708061575
loss_5s_root: 0.09909819956046487
loss_10s_root: 0.28028876447285034
loss_jerk_max: 1.3138657493107144
loss_jerk_root: 0.8364943741098269
using test_run_ours_gpt_v4_with_c_rt_minimal().
But the results of test_run_ours_gpt_v4_with_c_rt() are:
loss_angle: 14.613705011926633
loss_j_pos: 7.568692342151206
loss_2s_root: 0.029894802392026975
loss_5s_root: 0.10650085406996405
loss_10s_root: 0.27479880557401726
loss_jerk_max: 1.408000152377208
loss_jerk_root: 0.9518027618990865
which is in accord with the results reported in your paper. But why does the first one get a better result? Could you please help me figure it out?

Room frames (Gp)

Hello, I have some questions that I would like to ask you:

1. May I ask how you obtain the room coordinate system Gp? I noticed that you have drawn some lines on the carpet based on your room coordinate system. Is it calculated by measuring several positions using high-precision instruments and then determining the coordinate system for the entire plane? After obtaining the room coordinate system Gp, do the starting positions of the 6 IMUs align parallel to this room coordinate system Gp by default? Could you please let me know where in the code I can find the room coordinate system? Also, is it possible to transform this room coordinate system Gp to the coordinate system calibrated by an optical motion capture device? I would appreciate it if you could provide some guidance.
2. I would like to inquire about the content of the Rs_aligned_T_pose matrix. I noticed that all 6 rows of data are the same. Does this matrix represent the initial position setting for the 6 IMUs? If so, why are they all the same, given that they are at different locations on the body? Furthermore, if I only want to use two IMUs, do I just need to consider two rows of data?
3. Regarding IMU usage, do I need to place the IMUs in a straight line on the ground every time? Or, after performing the calibration process, can the IMUs be placed at their specified positions on the body for subsequent uses?

question about runtime correction

Hi Yifeng,
I have a question about the root velocity correction and IK correction: since the joint locations in world space are determined by both the root translation and the body pose, and the predictions of root velocity and body pose can't be perfect, how do you decide whether to refine the velocity or the body pose so that the SBPs remain unchanged?

Pretrained Model for preprocessed_DIP_IMU_c?

Hello Yifeng,

Thank you for this excellent work. I am wondering whether you could provide the model and pipeline for using a pre-trained model to label pseudo ground-truth SBPs for the DIP motions? I captured some real-world IMU data and would like to test it on TIP. Thank you in advance for your help.

fairmotion

Hi Yifeng,
I have a question about fairmotion. I can't execute the following command correctly, even though I updated pip and added a domestic mirror source for it. Do you know how I can solve it?
The command I can't run correctly:
pip install -e .
This is my error message:
Running command git clone --filter=blob:none --quiet https://github.com/nghorbani/body_visualizer.git /tmp/pip-install-dah9q3ni/body-visualizer_a355826f8d274204be9d009b9d672c08
error: RPC failed; curl 56 GnuTLS recv error (-110): The TLS connection was non-properly terminated.
fatal: the remote end hung up unexpectedly
error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet https://github.com/nghorbani/body_visualizer.git /tmp/pip-install-dah9q3ni/body-visualizer_a355826f8d274204be9d009b9d672c08 did not run successfully.
│ exit code: 128
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× git clone --filter=blob:none --quiet https://github.com/nghorbani/body_visualizer.git /tmp/pip-install-dah9q3ni/body-visualizer_a355826f8d274204be9d009b9d672c08 did not run successfully.
│ exit code: 128
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

question on TOTALCAPTURE

Hello, I contacted the DIP author and he sent me this file, but the file name is slightly different. I want to confirm whether the file is correct. The file the author sent me is called TotalCapture_Real_60FPS.zip.

Motion visualization

Thanks for such good work.
I wonder if it is possible to visualize the results of training, for example, on the CMU dataset.

ValueError: zero-size array to reduction operation maximum which has no identity

When I run offline_testing_simple.py, it raises:

Traceback (most recent call last):
File "transformer-inertial-poser/offline_testing_simple.py", line 460, in
print(np.max(losses_angle), test_files_included[np.argmax(losses_angle)])
File "<array_function internals>", line 180, in amax
File ".conda/envs/avatarjlm/lib/python3.9/site-packages/numpy/core/fromnumeric.py", line 2793, in amax
return _wrapreduction(a, np.maximum, 'max', axis, None, out,
File ".conda/envs/avatarjlm/lib/python3.9/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity

Thanks for your help.

Query Regarding Model in Your Paper

Dear Author,
I've been testing your released model, model-without-dip9and10.pt, but found inconsistencies with the results from Table 1 of your paper.
Is the model used in Table 1 the same as model-without-dip9and10.pt? If so, could you explain the reason for these discrepancies?
Looking forward to your guidance.
Best,

Aligning the coordinate systems of IMUs

I would like to ask you about aligning the coordinate systems of different IMUs. Is it true that during motion capture experiments, you convert the acceleration and attitude angle information of all IMUs into the ENU coordinate system to align the coordinate systems of the six IMUs (processing everything in the ENU coordinate system)? I hope you can answer my question. Thank you very much!

TotalCaputure_60FPS_Original

I don't know which dataset to download to obtain the TotalCaputure_60FPS_Original TC IMU signals preprocessed by the DIP authors. Could you please point it out for me? I have already obtained download permission from the DIP authors.
[Screenshot: the DIP download page, listing datasets labelled s1 to s5]
Could you please help me identify which datasets in the picture are related to TotalCaputure_60FPS_Original? The picture contains many datasets labelled s1 to s5, but I couldn't find TotalCaputure_60FPS_Original among them.

The problem about viz_generated_sbp.py

Traceback (most recent call last):
File "viz_generated_sbp.py", line 22, in
import sim_agent
ModuleNotFoundError: No module named 'sim_agent'

Traceback (most recent call last):
File "viz_generated_sbp.py", line 31, in
from transformers.utils import set_seed
ImportError: cannot import name 'set_seed' from 'transformers.utils' (/home/xxx/anaconda3/envs/tip22/lib/python3.8/site-packages/transformers/utils/__init__.py)

I cannot find sim_agent, and I cannot find set_seed in transformers.utils. What should I do?

Data Processing

Hi,
I have a question about data processing. I can't execute the following command correctly. Do you know how I can solve this problem?

python preprocess_DIP_TC_new.py --is_dip --data_version_tag v1

pybullet build time: Jan 26 2024 14:00:49
argv[0]=
[SimAgent] Creating an agent... data/amass.urdf
warning: path existed
data/source/DIP_IMU\s_01\01.pkl data/source/DIP_IMU\s_01\01.pkl
data/preprocessed_DIP_IMU_v1/dipimu_DIP_IMU\s_01_01.pkl
(13778, 72)
(13777, 114)
Traceback (most recent call last):
File "preprocess_DIP_TC_new.py", line 373, in
gen_data_all_dip(robot, "data/source/DIP_IMU", "data/preprocessed_DIP_IMU_" + TAG)
File "preprocess_DIP_TC_new.py", line 241, in gen_data_all_dip
load_and_store(char, motion_name, motion_name, save_name)
File "preprocess_DIP_TC_new.py", line 212, in load_and_store
with open(save_name, "wb") as handle:
FileNotFoundError: [Errno 2] No such file or directory: 'data/preprocessed_DIP_IMU_v1/dipimu_DIP_IMU\s_01_01.pkl'

Real IMU signals from DIP Download denied

I am unable to download the zip from the provided link; it results in:

Download denied
Please keep in mind that all downloads are blocked if they are not started from the website directly.

If it still doesn't work then please try to use another browser.

Especially some privacy settings prevent the download server from detecting the origin of the download link and blocking the download.
