
Comments (7)

wildermuthn avatar wildermuthn commented on July 19, 2024

I'm seeing the same thing, and am running a comparison between the current model and a modified model that is passed a sampled z rather than the latent mu.

from world-models.

ctallec avatar ctallec commented on July 19, 2024

You are right, we are passing the mean instead of a sample. I don't think this will make a significant difference, and notably it's unclear whether it is going to improve the results, but I may be wrong. In particular, I don't think this could explain the lack of necessity for a model of the world, since our observation is not that we obtain significantly worse results than (Ha and Schmidhuber, 2018), but that we already get very good performance without training the model. Anyway, @wildermuthn, thanks for running this experiment. Could you keep us updated on the results? Also, if you have the time and code that is ready to be integrated, don't hesitate to open a pull request. Otherwise I'll fix this soon.
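For reference, the fix under discussion is to sample z from the VAE posterior instead of feeding the mean straight to the controller. A minimal numpy sketch of that sampling step (the repository itself uses PyTorch; the names `mu` and `logsigma` and the 32-dim latent are illustrative):

```python
import numpy as np

def sample_latent(mu, logsigma, rng=None):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).

    Passing `mu` to the controller corresponds to dropping the noise term.
    """
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(logsigma) * eps

# As logsigma -> -inf (sigma -> 0), the sample collapses back to the mean,
# which is exactly the behaviour the issue reports.
mu = np.zeros(32)
logsigma = np.full(32, -1.0)
z = sample_latent(mu, logsigma)
```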


AlexGonRo avatar AlexGonRo commented on July 19, 2024

@wildermuthn, in your experiments, are you using the CarRacing environment? I modified this library slightly and should (hopefully!) be ready to run a few experiments on the ViZDoom environment by the end of the week. I could schedule a few extra runs to test performance if you haven't done so already!


wildermuthn avatar wildermuthn commented on July 19, 2024

@AlexGonRo I am using the CarRacing environment. I'm switching to a cloud server with multiple V100s, as my experiments were inconclusive running on my single 1080 Ti with only 8 CPUs. I did notice that with a sampled z I was able to reach 700 within a day, whereas without it, training seemed to stall around 500. But like I said, inconclusive. I will report back once I actually run it on a good machine.

@ctallec I've got some nvidia-docker and GCP code that is messy, but I'll see about putting up a PR for the z code. It's just a few lines, borrowed from the mdrnn code.


ctallec avatar ctallec commented on July 19, 2024

@wildermuthn You may want to be extra cautious with the hyperparameters you use and the training duration. In particular, you want to use the same CMA-ES hyperparameters as in the original paper, not the ones we provided; with ours, you will get lower final performance. The original paper used 16 rollouts per return evaluation and a population size of 64, which is what we used for reproducing the results, but this also means you'll need on the order of 32 jobs and 4 GPUs to get it to run in a reasonable amount of time.
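As a rough sketch of the evaluation budget these settings imply (the `rollout` callable is hypothetical; the actual repository distributes this work over worker processes):

```python
import numpy as np

POP_SIZE = 64     # CMA-ES candidates per generation (original paper)
N_ROLLOUTS = 16   # rollouts averaged per return evaluation

def evaluate(params, rollout, n_rollouts=N_ROLLOUTS):
    """Fitness estimate for one candidate: the mean of n_rollouts
    episode returns, to reduce variance in the CMA-ES evaluation."""
    return np.mean([rollout(params) for _ in range(n_rollouts)])

def generation_cost():
    """Total environment rollouts needed for one CMA-ES generation,
    which is why the comment above suggests ~32 jobs and 4 GPUs."""
    return POP_SIZE * N_ROLLOUTS
```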


ranahanocka avatar ranahanocka commented on July 19, 2024

I've been running into an issue for a while now which may be related to this one (though my controller is a bit different from the one here). I trained the LSTM on a GPU, then used it in a different controller setup on a local CPU machine with PyTorch 0.4.1. I finally isolated the problem, which seems to be related to this issue in PyTorch: the torch.exp function didn't work properly for me, and it made the entire hidden state of the LSTM garbage (when running on the CPU). When I ran my controller on the GPU on the server (with the same PyTorch version, 0.4.1), it worked.
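A cheap guard against this kind of silent numerical corruption is to assert that the hidden state stays finite after each step, so the failure surfaces immediately rather than as degraded behaviour downstream. A numpy stand-in sketch (the real state would be torch tensors; `check_hidden_state` is an illustrative helper, not part of the repository):

```python
import numpy as np

def check_hidden_state(h, step=None):
    """Raise early if the recurrent hidden state contains NaN/inf
    values, instead of letting garbage propagate to the controller."""
    if not np.all(np.isfinite(h)):
        bad = np.count_nonzero(~np.isfinite(h))
        where = f" at step {step}" if step is not None else ""
        raise FloatingPointError(f"{bad} non-finite entries in hidden state{where}")
    return h
```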


AlexGonRo avatar AlexGonRo commented on July 19, 2024

Despite being a bit late, I created a new pull request fixing the issue.

I did some testing with my library and didn't find any significant performance boost from these changes. However, as discussed here, that is the expected behaviour.

I should clarify that I did not run any extensive tests of these changes against the current version of this library (ctallec's master branch); I did, however, make sure that the new lines of code do not cause any errors.

