Comments (3)
Hi @TaeilJin,
Apologies for the late response. The last few weeks have been busy.
If I understand correctly, there's a simple reason for the shape difference you are wondering about. The dimension of interest (dimension 1, which has size 70 during training and 1 during synthesis) corresponds to time. During training, the entire input and output sequences are available to us, so we can run the LSTM (and the rest of the network) forwards over all 70 time steps in one go. This requires no explicit for-loop over time; we simply apply the LSTM to the full input sequence as normal.
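To make the training case concrete, here is a minimal sketch (not the actual StyleGestures code; the batch size, feature width, and hidden size are made-up placeholders) of pushing a whole 70-frame sequence through an LSTM in a single call:

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the real dimensions come from the config.
batch, seq_len, feat = 8, 70, 45

lstm = nn.LSTM(input_size=feat, hidden_size=128, batch_first=True)

# Training: the full sequence is known up front, so one call runs the
# LSTM over all 70 time steps in parallel -- no Python loop over time.
x = torch.randn(batch, seq_len, feat)   # shape (8, 70, 45): time is dim 1
out, _ = lstm(x)                        # out has shape (8, 70, 128)
```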
Generation time is different. Here we do not know the output sequence; we must create it. Because the MoGlow model is autoregressive, it takes the output of recent time steps as an input to the flow when generating the next pose. This means we must generate output poses one at a time, so that each new pose can be fed back into the network to generate the next one, and we need an explicit loop over time. This explains why the input shape contains only a single time frame at any given point during synthesis.
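And here is a correspondingly minimal sketch of the synthesis loop. A plain linear layer stands in for sampling the next pose from the normalising flow, purely for illustration:

```python
import torch
import torch.nn as nn

batch, feat, hidden = 8, 45, 128
lstm = nn.LSTM(input_size=feat, hidden_size=hidden, batch_first=True)
next_pose = nn.Linear(hidden, feat)  # stand-in for sampling from the flow

# Generation: each pose depends on previously generated poses, so we
# must step through time explicitly, one frame at a time.
pose = torch.zeros(batch, 1, feat)   # time dimension is now just 1
state = None
frames = []
for t in range(70):
    h, state = lstm(pose, state)     # input shape (8, 1, 45) each step
    pose = next_pose(h)              # new pose, fed back in next step
    frames.append(pose)

motion = torch.cat(frames, dim=1)    # full sequence: (8, 70, 45)
```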
@TaeilJin Has your question in this issue been answered?
If so, I would recommend that you close the issue.
Closing this since the question appears to have been answered.
Related Issues (20)
- bvh files with fixed frames
- Difference between time_steps and seqlen?
- Possible bug when computing the log-det of Jacobian for affine coupling
- About datasets
- For the freshmen about Gesture Generation
- Questions about the latent random variable Z
- The Python version
- About the Cuda version
- About the swapaxes for self.x and self.cond
- This dataset link doesn't seem to work.
- The dataset are inconsistent
- Excuse me, where can I find the dataset used by Example3 in readme?
- This is the curve when I use two different data sets for training, and the parameters are the same. It can be roughly seen that the loss in Figure 1 will be lower than that in Figure 2, which can indicate that the performance of the first trained model will be better?
- How to apply the output file(*.bvh) to 3D model file(*.3ds)
- Some questions about the style control
- Some questions about the style control
- About the dataset
- The trained model posture shakes badly. What might be the cause? Is there any way to solve this problem?
- Pre-trained models
- What's the output of this framework?