Audio2BodyDynamics

Introduction

This repository contains the code to predict skeleton movements that correspond to music, published in Audio to Body Dynamics (CVPR 2018); see the Citation section below.

Abstract

We present a method that takes as input audio of violin or piano playing and outputs a video of skeleton predictions, which are further used to animate an avatar. The key idea is to create an animation of an avatar that moves its hands the way a pianist or violinist would, from audio alone. Notably, it is not clear whether body movement can be predicted from music at all, and our aim in this work is to explore this possibility. In this paper, we present the first result showing that natural body dynamics can be predicted. We built an LSTM network that is trained on violin and piano recital videos uploaded to the Internet. The predicted points are applied to a rigged avatar to create the animation.

Predicted Skeleton Video

Getting Started

  • Install requirements by running: pip install -r requirements.txt
  • Download ffmpeg to enable visualization
  • This repository contains starter data in the data folder. We provide JSON files formatted as follows (see the loading sketch after this list):
    • Naming convention: {split}_{body part}.json
    • Each file maps video_id to (audio MFCC features, keypoints)
    • keypoints: NxC, where N is the number of frames and C is the number of keypoints
    • audio MFCC features: NxD, where N is the number of frames and D is the number of MFCC features
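
For orientation, below is a minimal sketch of loading one of the provided JSON files and checking the array shapes. The tuple ordering follows the description above; the variable names are illustrative only.

    # Minimal sketch: inspect the starter data (ordering per the description above).
    import json
    import numpy as np

    with open("data/train_all.json") as f:
        data = json.load(f)

    for video_id, (mfcc, keypoints) in data.items():
        mfcc = np.asarray(mfcc)            # N x D: frames x MFCC features
        keypoints = np.asarray(keypoints)  # N x C: frames x keypoints
        print(video_id, mfcc.shape, keypoints.shape)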

Training Instructions: All Keypoints Together

  • Run python pytorch_A2B_dynamics.py --help for argument list
  • For training
    • python pytorch_A2B_dynamics.py --logfldr {...} --data data/train_all.json --device {...} ...
    • See run_pipeline.sh for an example
  • For testing (generates a video from the model passed via --test_model)
    • python pytorch_A2B_dynamics.py --test_model {...} --logfldr {...} --data test_all.json --device {...} ... --audio_file {...} --batch_size 1
    • See run_pipeline.sh for an example
    • NB: Testing is constrained to one video at a time. We restrict the batch size to 1 for the test video and generate the whole test sequence at once instead of breaking it up (see the sketch after this list).
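
To illustrate the note above, here is a hypothetical sketch of whole-sequence inference with a batch size of 1: the full MFCC sequence of one test video is fed through the recurrent network in a single forward pass instead of being split into training-length chunks. The layer sizes and tensor names are placeholder assumptions; the actual network is defined in pytorch_A2B_dynamics.py.

    # Hypothetical sketch of single-pass, batch-size-1 inference.
    import torch
    import torch.nn as nn

    D, C = 26, 28                       # placeholder MFCC / keypoint dimensions
    lstm = nn.LSTM(input_size=D, hidden_size=200, batch_first=True)
    head = nn.Linear(200, C)

    mfcc = torch.randn(1, 26821, D)     # (batch=1, frames, MFCC features) for one test video
    hidden, _ = lstm(mfcc)              # one pass over the entire sequence
    pred_keypoints = head(hidden)       # (1, frames, keypoint coordinates)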

Training Instructions: Separate Training of Body, Left Hand, and Right Hand

  • We expose data and functionality for training and testing on keypoints of individual body parts and for stitching the final results into a single video (a conceptual sketch follows this list).
  • sh run_pipeline.sh
  • Outputs are by default logged to $HOME/logfldr
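
Conceptually, the stitching step amounts to concatenating the per-part keypoint predictions frame by frame before visualization. The sketch below is purely illustrative: the per-part dimensions are placeholders, and the real stitching is handled by the repository scripts.

    # Hypothetical sketch of stitching per-part predictions into one array.
    import numpy as np

    n_frames = 100                          # placeholder sequence length
    pred_body = np.zeros((n_frames, 14))    # placeholder per-part predictions
    pred_left = np.zeros((n_frames, 42))
    pred_right = np.zeros((n_frames, 42))

    stitched = np.concatenate([pred_body, pred_left, pred_right], axis=1)
    print(stitched.shape)                   # (n_frames, sum of per-part dimensions)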

Other Quirks

  • Checkpointing saves training data statistics for use in testing (see the sketch after this list).
  • Modify FFMPEG_LOC in visualize.py to specify the path to ffmpeg.
  • Set the --visualize flag to turn off visualization during testing.
  • The losses observed on the provided data differ from those reported in the paper because the train and test images for this dataset were processed at a different resolution.
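
Below is a minimal sketch of what saving training data statistics alongside the model weights can look like in PyTorch. The checkpoint keys, model, and shapes are illustrative assumptions, not the repository's actual checkpoint format.

    # Hypothetical sketch: store training-set normalization statistics in the checkpoint
    # so the same normalization can be reapplied at test time.
    import torch
    import torch.nn as nn

    model = nn.LSTM(input_size=26, hidden_size=200, batch_first=True)  # stand-in model
    train_mfcc = torch.randn(26821, 26)                                # stand-in training features

    torch.save({
        "model_state": model.state_dict(),
        "mfcc_mean": train_mfcc.mean(dim=0),
        "mfcc_std": train_mfcc.std(dim=0),
    }, "checkpoint.pth")

    # At test time, reload the weights and normalize test features with the training statistics.
    ckpt = torch.load("checkpoint.pth")
    model.load_state_dict(ckpt["model_state"])
    test_mfcc = torch.randn(5000, 26)                                  # stand-in test features
    test_mfcc = (test_mfcc - ckpt["mfcc_mean"]) / ckpt["mfcc_std"]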

Citation

Please cite the Audio To Body Dynamics paper if you use this code:

@inproceedings{shlizerman2018audio,
  title={Audio to body dynamics},
  author={Shlizerman, Eli and Dery, Lucio and Schoen, Hayden and Kemelmacher-Shlizerman, Ira},
  booktitle={CVPR, IEEE Computer Society Conference on Computer Vision and Pattern Recognition},
  year={2018}
}

License

Audio2BodyDynamics is licensed under a Creative Commons non-commercial license. Please refer to LICENSE.


audio2bodydynamics's Issues

Time intervals of the videos

Dear authors:
Thanks for your great work.
I have checked the raw videos and the poses.
I find that you provide ten 18-minute pose clips, but twelve raw videos of varying length.
Could you please provide details on how you processed the data?
For example, the start and end times within the original raw videos.

raw audio in your data

Hi,
Thanks for your nice work. Would it be possible for you to release the raw audio corresponding to the keypoints?

Raw audio clip data and their pose data.

Hi Authors:

Thanks for the great work.
I would like to explore the different sound representations.
Could you please provide the clip information used in your training and testing data?
I cannot find a way to align the raw audio data with the pre-processed pose data.

Thanks!

Whether the training data is a complete sequence.

Dear author,

Thank you very much for your great work. I have a question about the dataset. I find that there are 10 sets of training data, and the sequence length of each set is 26821. Could you tell me whether each 26821-frame sequence is one continuous playing sequence or is made up of several shorter playing sequences?

Thanks!

Time information of the raw data

Hi.
Thanks for your nice work.
I want to train my own network using another audio representation instead of MFCC, but I could not find any time intervals for the processed audio and poses in your released data.zip, for example the start and end times of the audio and pose clips. How can I obtain these time intervals?

How to get the keypoints json file?

I know from the paper that the number of upper-body keypoints is 8, but when I read train_body.json I found that the number is 14. I don't quite understand the meaning of these 14 numbers; I want to know whether they are the x and y coordinates of the keypoints. How should I process the keypoint data I obtained from OpenPose before starting my own training? I would deeply appreciate a reply.

video data list

Hello,
Thanks for your nice work.
Would you be able to share the video links for your dataset?
