Comments (3)
I think only @simonalexanderson knows how to answer these questions, and I hope he can find the time to help.
from stylegestures.
Thank you. I might add that, just like the synthesized Obama gesture video you provided: the Obama audio file was divided into many clips, and the model sampled many output '.bvh' files. To build the final video, you must first find the audio clip corresponding to each output '.bvh' file, synchronize that clip with the gesture, and then render a video that plays the audio and the corresponding gesture motion together. I am not sure how to find the corresponding audio clips for the output '.bvh' files, and I am looking forward to any advice and guidance.
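One plausible approach (a sketch only, not something the stylegestures repo provides): if each synthesized '.bvh' file keeps the filename stem of the audio clip it was generated from, the pairs can be recovered by matching stems. The directory layout and naming convention here are assumptions; if the model adds a suffix or prefix to output names, the stem lookup would need adjusting.

```python
"""Hypothetical helper: pair synthesized .bvh files with their source audio
clips by shared filename stem. Assumes 'clip_000.bvh' was generated from
'clip_000.wav'; this convention is an assumption, not confirmed by the repo."""
import os


def match_bvh_to_audio(bvh_dir: str, audio_dir: str):
    # Index the audio clips by filename stem (name without extension).
    audio_by_stem = {
        os.path.splitext(name)[0]: os.path.join(audio_dir, name)
        for name in os.listdir(audio_dir)
        if name.lower().endswith(".wav")
    }
    pairs = []
    for name in sorted(os.listdir(bvh_dir)):
        if name.lower().endswith(".bvh"):
            stem = os.path.splitext(name)[0]
            # Keep only .bvh files whose stem has a matching .wav clip.
            if stem in audio_by_stem:
                pairs.append((os.path.join(bvh_dir, name), audio_by_stem[stem]))
    return pairs
```

Once the pairs are known, muxing each (motion video, audio clip) pair into a single file could be done with an external tool such as ffmpeg.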
Hi @Meteor-Stars,
I have now restructured the code and added a script called 'prepare_gesture_testdata.py' to facilitate synthesis from arbitrary wav sources. The process is:
- resample the wav files to 48 kHz and place them in the data/GENEA/source/test_audio folder
- run 'python prepare_gesture_testdata.py'
- modify hparams/<some_params>.json to point at the data/GENEA/processed/test file and add the pretrained model
- run 'python train_moglow.py hparams/<some_params>.json trinity'
Hope this helps.
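The resampling step above can be sketched as a small batch script. This is only an illustration, not part of the repo: it reads 16-bit PCM wav files, mixes to mono, and uses linear interpolation to reach 48 kHz (for production quality, a polyphase resampler such as scipy's resample_poly, or ffmpeg, would be preferable).

```python
"""Hypothetical batch resampler for the 'resample wav files to 48k' step.
Not shipped with stylegestures; a minimal sketch using only numpy and the
stdlib wave module, assuming 16-bit PCM input."""
import os
import wave

import numpy as np

TARGET_SR = 48000


def resample_wav_to_48k(src_path: str, dst_path: str) -> None:
    # Read the 16-bit PCM source file.
    with wave.open(src_path, "rb") as wf:
        sr = wf.getframerate()
        n_channels = wf.getnchannels()
        pcm = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)
    # Mix down to mono if needed.
    if n_channels > 1:
        pcm = pcm.reshape(-1, n_channels).mean(axis=1)
    samples = pcm.astype(np.float64)
    # Linear-interpolation resampling to the target rate.
    n_out = int(round(len(samples) * TARGET_SR / sr))
    x_old = np.linspace(0.0, 1.0, num=len(samples), endpoint=False)
    x_new = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    resampled = np.interp(x_new, x_old, samples)
    # Write 16-bit mono PCM at 48 kHz.
    out = np.clip(resampled, -32768, 32767).astype(np.int16)
    with wave.open(dst_path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(TARGET_SR)
        wf.writeframes(out.tobytes())


def resample_folder(src_dir: str, dst_dir: str) -> None:
    # Resample every .wav in src_dir into dst_dir, keeping filenames.
    os.makedirs(dst_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        if name.lower().endswith(".wav"):
            resample_wav_to_48k(os.path.join(src_dir, name),
                                os.path.join(dst_dir, name))
```

After resampling into data/GENEA/source/test_audio, the remaining steps (prepare_gesture_testdata.py, hparams edit, train_moglow.py) follow as listed above.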
Related Issues (20)
- bvh files with fixed frames
- Difference between time_steps and seqlen?
- Possible bug when computing the log-det of Jacobian for affine coupling
- About datasets
- For the freshmen about Gesture Generation
- Questions about the latent random variable Z
- The Python version
- About the Cuda version
- About the swapaxes for self.x and self.cond
- This dataset link doesn't seem to work.
- The dataset are inconsistent
- Excuse me, where can I find the dataset used by Example3 in readme?
- This is the curve when I use two different data sets for training, and the parameters are the same. It can be roughly seen that the loss in Figure 1 will be lower than that in Figure 2, which can indicate that the performance of the first trained model will be better?
- How to apply the output file (*.bvh) to 3D model file (*.3ds)
- Some questions about the style control
- Some questions about the style control
- About the dataset
- The trained model posture shakes badly. What might be the cause? Is there any way to solve this problem?
- Pre-trained models
- What's the output of this framework?