# Intelligent Carpet: Inferring 3D Human Pose from Tactile Signals

Yiyue Luo, Yunzhu Li, Michael Foshey, Wan Shou, Pratyusha Sharma, Tomás Palacios, Antonio Torralba, and Wojciech Matusik

CVPR 2021 [Project Page] [Paper] [Video]
```shell
git clone https://github.com/yiyueluo/intelligentCarpet
cd intelligentCarpet
conda env create -f environment.yml
conda activate p36
```
Raw dataset of 10 people recorded over three days (including camera calibration, performed once per day): https://www.dropbox.com/sh/g3l4jdablczffj3/AACuFy9E2YonQdNjUu4beClta?dl=0
In each dataset folder, there are:
- raw tactile signal (with timestamps) from the 9 tiles: touch#.hdf5
- aligned tactile signal as a 96x96 matrix: touch_aligned.p
- normalized tactile signal: touch_normalized.p
- video recorded by 2 calibrated cameras: webcam#.mp4, with corresponding timestamps: webcam#.txt
- visualization with OpenPose output (only in selected folders, for demonstration): webcam#_openpose.avi
- 21 keypoints output by OpenPose from each camera: pt_webcam#.mat
- triangulated 21 keypoints: keypoint3D.mat
- optimized 3D keypoints: keypoint_refined.p
- 3D keypoints transformed into the carpet's frame: keypoint_transform.p, and the positions of the carpet: tile_transform.p
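The `.p` files are standard Python pickles. A minimal loading sketch, assuming each file holds a NumPy array of frames (the dummy 5-frame recording below is a stand-in; real recordings have one 96x96 tactile map per timestep):

```python
import pickle
import numpy as np

# Hypothetical stand-in for a recording: the real touch_normalized.p is
# produced by the dataset pipeline; we fabricate a short sequence
# (T x 96 x 96) only to demonstrate the loading pattern.
dummy = np.random.rand(5, 96, 96).astype(np.float32)
with open("touch_normalized.p", "wb") as f:
    pickle.dump(dummy, f)

# The .p files are ordinary pickles, so loading is one call.
with open("touch_normalized.p", "rb") as f:
    touch = pickle.load(f)

print(touch.shape)  # (5, 96, 96): each frame is a 96x96 tactile map
```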
In each calibration folder, there are:
- calibration parameters for each camera: webcam#_intrinsic.mat, webcam#_extrinsic.mat, webcam#_dis.mat
- carpet positions corresponding to each camera
The most important files are touch_normalized.p and keypoint_refined.p; they are aligned by timestamp. Use heatmap_from_keypoint3D.py to generate the ground-truth 3D heatmaps (20x20x18).
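A common way to build such a heatmap is to place an isotropic 3D Gaussian at each keypoint's voxel. The sketch below illustrates that idea only; `heatmap_from_keypoint3D.py` is the authoritative implementation, and the `sigma` value here is an arbitrary assumption:

```python
import numpy as np

def keypoint_to_heatmap(kp, size=(20, 20, 18), sigma=1.0):
    """Rasterize one 3D keypoint (in voxel coordinates) into a Gaussian
    heatmap on a size[0] x size[1] x size[2] grid."""
    # Grid of voxel coordinates, shape (20, 20, 18, 3).
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in size], indexing="ij"), axis=-1
    )
    # Squared distance of every voxel to the keypoint.
    d2 = ((grid - np.asarray(kp, dtype=float)) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# One heatmap per keypoint; two example keypoints -> (2, 20, 20, 18).
kps = np.array([[10, 5, 9], [3, 12, 4]], dtype=float)
heatmaps = np.stack([keypoint_to_heatmap(k) for k in kps])
print(heatmaps.shape)  # (2, 20, 20, 18)
```

Each heatmap peaks at its keypoint's voxel, which is what a heatmap-regression loss is trained against.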
Checkpoints and test dataset can be found here: https://www.dropbox.com/sh/5l0lm4po64xf6jd/AACuMt_oGy99Beyz_IMeknQ6a?dl=0
- ckpts.zip contains the trained model
- singlePerson_test.zip contains the test set for single person pose estimation
- singlePerson_test_diffTask.zip contains the test set for single person pose estimation, arranged by individual tasks (note: use `sample_data_diffTask` to load this data)
Download the checkpoints and the desired test set to "./train/" and unzip each into its corresponding folder.
To train or evaluate, run:

```shell
python ./train/threeD_train_final.py
```
To visualize predictions, set `--exp_image` or `--exp_video`.

To export the L2 distance between the predicted skeleton and the ground truth, set `--exp_L2`.

To export the tactile input, ground-truth keypoints, ground-truth heatmaps, predicted keypoints, and predicted heatmaps, set `--exp_data`.

Note: double-check the checkpoint (`--ckpts`), experiment path (`--exp_dir`), and test data path (`--test_dir`).
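For intuition, the metric behind `--exp_L2` is the Euclidean distance between predicted and ground-truth 3D keypoints. A minimal sketch, assuming `(N, K, 3)` arrays of N frames with K keypoints each (the exact units and normalization in `threeD_train_final.py` may differ):

```python
import numpy as np

def mean_l2(pred, gt):
    """Mean per-keypoint Euclidean distance over all frames.
    pred, gt: arrays of shape (N, K, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Toy example: every keypoint is off by a 3-4-5 right triangle,
# so every per-keypoint distance is exactly 5.
gt = np.zeros((1, 21, 3))
pred = np.zeros((1, 21, 3))
pred[..., 0] = 3.0
pred[..., 1] = 4.0
print(mean_l2(pred, gt))  # 5.0
```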
If you have any questions about the paper or the codebase, please feel free to contact [email protected].