Comments (10)
My (potentially incorrect) impression is that this need not be a big deal. The data has 20 frames (poses) per second. Assuming that the second dimension is time, this means that models are trained on 4-second sequences (80 frames) and tested on 5-second sequences (100 frames). MoGlow is an autoregressive model and is not restricted to data of a particular, fixed length, so it has well-defined behaviour for both 4-second and 5-second sequences.
Does this make sense, or am I perhaps missing something here?
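The point above about autoregressive models can be sketched in a few lines: each new frame is predicted from the history so far, so generation is just a loop that can run for 80, 100, or any number of steps. This is a minimal illustration of the idea, not MoGlow's actual sampling code; `generate` and the toy step function are my own names.

```python
# Hedged sketch: why an autoregressive model handles any sequence length.
# Each new frame is predicted from the previous ones, so generation is a
# loop that runs for however many steps you ask for.
def generate(step_fn, seed_frames, num_frames):
    frames = list(seed_frames)
    while len(frames) < num_frames:
        frames.append(step_fn(frames))  # next frame from the history so far
    return frames

# Toy step function (illustrative only): repeat the last frame.
frames_4s = generate(lambda history: history[-1], [0.0], 80)   # 4 s at 20 fps
frames_5s = generate(lambda history: history[-1], [0.0], 100)  # 5 s at 20 fps
```

The same trained model can thus be run for both the 80-frame training length and the 100-frame test length.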
from stylegestures.
But when I tested it, I got an error:
input.size(-1) must be equal to input_size. Expected 694, got
in torch.nn.modules.rnn, RNNBase.check_input()
I don't have detailed knowledge about the code unfortunately, but I don't recognise where the number "694" comes from, nor how it is related to the numbers that were posted earlier.
input_data: (13710, 80, 66)
test_data: (31, 100, 66)
I'm unsure whether the tuple (input_data, test_data) above is to be understood as training versus test data, or as inputs versus outputs.
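For what it's worth, the frame counts in those shapes are consistent with the 20 fps reading from earlier in the thread. A minimal sketch of that arithmetic (the variable names are mine, not from the repo):

```python
fps = 20  # poses per second in the data

# input_data has shape (13710, 80, 66): 80 frames = 4 seconds at 20 fps
# test_data has shape (31, 100, 66): 100 frames = 5 seconds at 20 fps
train_frames = 4 * fps
test_frames = 5 * fps
print(train_frames, test_frames)  # -> 80 100
```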
The error appears here:
h = self.f(z1_cond.permute(0, 2, 1)).permute(0, 2, 1)
z1_cond.shape is ( , 694, 70)
z1_cond = torch.cat((z1, cond), dim=1)
z1.shape is ( , 63//2, 70)
cond.shape is ( , 663, 70)
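These shapes do at least explain where the 694 in the error message comes from: `torch.cat` along dim=1 adds the channel dimensions of `z1` and `cond`. A quick check of that arithmetic (variable names are mine, for illustration only):

```python
# Hedged sketch: reproducing the channel arithmetic from the snippet above.
z1_channels = 63 // 2                 # integer division -> 31
cond_channels = 663
z1_cond_channels = z1_channels + cond_channels  # concatenation along dim=1
print(z1_cond_channels)               # -> 694, the number in the error message
```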
In addition, I would like to ask whether it is normal for the likelihood loss to be negative all the time during training.
I would like to ask whether it is normal for the likelihood loss to be negative all the time during training.
The loss used to train our systems is the negative logarithm of the likelihood, a.k.a. the negative log-likelihood, or NLL. The NLL for a continuous-valued distribution, as considered here, can in principle attain any value on the real line. In particular, negative loss values are normal and expected in our training, and we see them all the time. Please refer to the training curves in our paper at INNF+ 2020 for examples of what training curves can look like.
The error appears here:
h = self.f(z1_cond.permute(0, 2, 1)).permute(0, 2, 1)
z1_cond.shape is ( , 694, 70)
z1_cond = torch.cat((z1, cond), dim=1)
z1.shape is ( , 63//2, 70)
cond.shape is ( , 663, 70)
I'm sorry, but I cannot easily see the origin of the issue in this snippet that you shared. Since I did not write this code and am not a developer, I don't think I can debug the issue you are having effectively via GitHub comments. @simonalexanderson might know better, but I think he is busy with other work right now.
@110wuqu Based on the training curves that you posted in issue #43 a little while ago, I am getting the impression that you have managed to solve and/or work around the problems that you asked about in this issue. As a courtesy, could you please:
- Write what the problem was and how you solved it, so that other people who come here with the same issue will find the solution here.
- Close the issue, if appropriate.
Sorry, it was too long ago and I have forgotten how I solved this problem. There is no problem in the project's code itself; it must have been a careless mistake on my part.