Comments (7)
I'm seeing the same thing, and am running a comparison between the current model and a modified model that is passed a sampled z rather than the latent mu.
from world-models.
You are right, we are passing the mean instead of a sample. I don't think this will make a significant difference, and notably it's unclear whether it will improve the results, but I may be wrong. In particular, I don't think this can explain the lack of necessity for a model of the world, since our observation is not that we obtain significantly worse results than (Ha and Schmidhuber, 2018), but that we already get very good performance without training the model. Anyway, @wildermuthn, thanks for running this experiment. Could you keep us updated on the results? Also, if you have time for that, and code that is ready to be integrated, don't hesitate to open a pull request. Otherwise I'll be fixing it soon.
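For context, the change being discussed can be sketched in a few lines. This is a minimal numpy illustration, not the repo's code (which uses PyTorch); the names `mu` and `logsigma` are illustrative stand-ins for the VAE encoder outputs:

```python
import numpy as np

def sample_latent(mu, logsigma, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).

    Feeding this sampled z to the controller, rather than the
    mean mu alone, is the modification under discussion.
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.exp(logsigma)
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps
```

When `sigma` is near zero the sample collapses back to the mean, so passing mu is the deterministic limit of this sampler.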
@wildermuthn, in your experiments, are you using the CarRacing environment? I modified this library slightly, and I should (hopefully!) be ready to run a few experiments on the ViZDoom environment by the end of the week. I could schedule a few extra runs to test performance if you haven't done so already!
@AlexGonRo I am using the CarRacing environment. I'm switching to a cloud server with multiple V100s, as my experiments were inconclusive running on my single 1080 Ti with only 8 CPUs. I did notice that with sampled z, in a day I was able to get to 700, but without it, training seemed to stall around 500. But like I said, inconclusive. I will report back once I actually run it on a good machine.
@ctallec I've got some nvidia-docker and GCP code that is messy, but I will see about putting up a PR for the z-sampling code. It's just a few lines, borrowed from the MDRNN code.
@wildermuthn You may want to be extra cautious with the hyperparameters you use and the training duration. In particular, you want to use the same hyperparameters for CMA-ES as in the original paper, not the ones we provided; with ours, you will get lower final performance. The original paper used 16 rollouts per return evaluation and a population size of 64, which is what we used for reproducing the results, but it also means you'll need on the order of 32 jobs and 4 GPUs to get it to run in a reasonable amount of time.
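As a back-of-the-envelope sketch of the evaluation budget those settings imply (the numbers come from the comment above; the even split across jobs is an illustrative assumption):

```python
# CMA-ES settings quoted from the original paper's reproduction.
pop_size = 64                 # candidate solutions per generation
rollouts_per_candidate = 16   # rollouts averaged per return estimate

# Every generation requires this many environment rollouts.
rollouts_per_generation = pop_size * rollouts_per_candidate
print(rollouts_per_generation)  # 1024

# With 32 parallel jobs, each worker handles 32 rollouts per
# generation, which is why far fewer jobs become impractically slow.
jobs = 32
rollouts_per_job = rollouts_per_generation // jobs
print(rollouts_per_job)  # 32
```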
I've been running into an issue for a while now that may be related to this one (though my controller is a bit different). I trained the LSTM on a GPU, and used it in a different controller setup on a local CPU machine with PyTorch 0.4.1. I finally isolated the problem, which seems to be related to this issue in PyTorch: the torch.exp function didn't work properly for me on the CPU, and it made the entire hidden state of the LSTM garbage. When I ran my controller on the GPU on the server (with the same PyTorch version, 0.4.1), it worked.
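A simple guard like the following can catch this kind of silent corruption early. This is a hedged sketch: `assert_finite` is a hypothetical helper, shown here operating on plain numpy arrays (in a PyTorch setup the hidden state would first be pulled out with `.detach().cpu().numpy()`):

```python
import numpy as np

def assert_finite(name, arr):
    """Raise as soon as NaN/inf values appear, rather than letting a
    corrupted LSTM hidden state propagate silently through rollouts."""
    arr = np.asarray(arr)
    finite = np.isfinite(arr)
    if not finite.all():
        bad = arr.size - np.count_nonzero(finite)
        raise ValueError(f"{name}: {bad} non-finite values detected")
```

Calling this on the hidden and cell states after each step would have surfaced the broken `torch.exp` output immediately instead of after the fact.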
Despite being a bit late, I created a new pull request fixing the issue.
I did some testing with my library and didn't find any significant boost in performance with these changes. However, as we discussed here, this is the expected behaviour of the code.
I must clarify that I did not perform any extensive tests of these changes against the current version of this library (ctallec's master branch). I did, however, make sure that the new lines of code do not cause any errors.