
Comments (6)

boris-il-forte commented on July 19, 2024

If I'm not mistaken, Stable Baselines3 implements the PPO2 version of the algorithm from OpenAI (not Stable) Baselines. Our version matches PPO1.

One clear difference is the fit of the value function, which is done separately from the policy update.
Other differences may be input normalization and the handling of terminal states (we distinguish truncated episodes from absorbing states, and we handle this correctly in the algorithm).
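To illustrate why the truncated/absorbing distinction matters: it changes whether the TD target bootstraps from the next state. A minimal sketch (the helper name and signature are hypothetical, not MushroomRL's actual code):

```python
def td_target(reward, next_value, absorbing, gamma=0.99):
    """One-step TD target distinguishing true terminal (absorbing) states
    from time-limit truncations (hypothetical helper, not MushroomRL code).

    At an absorbing state the successor has value zero, so we do not
    bootstrap; at a truncation the state is not actually terminal, so we
    still bootstrap from the critic's estimate of the next state.
    """
    if absorbing:
        return reward                   # V(s') = 0 at a real terminal state
    return reward + gamma * next_value  # bootstrap, including truncations

# A truncated episode still bootstraps; an absorbing state does not:
print(td_target(1.0, 2.0, absorbing=False))  # 1.0 + 0.99 * 2.0 = 2.98
print(td_target(1.0, 2.0, absorbing=True))   # 1.0
```

Treating a time-limit truncation as a real terminal state biases the value targets, which can produce visibly different learning curves between two otherwise identical PPO implementations.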

Are you sure that you used the same network? Stable Baselines3 may use a shared network implementation or different activation functions. In MushroomRL it's trivial to check this, while in Stable Baselines it may not be so easy.

from mushroom-rl.

cantor-dust commented on July 19, 2024

Yes, I use the same network: I can specify the number of layers, the number of neurons, the activation function, and whether to use a shared network. This can also be checked by extracting the policy from the model object in Stable Baselines3. I also use the same initial variance for the policy.

The biggest factor degrading the result of your implementation is that the components of the final variance tensor are still quite large, whereas in Stable Baselines3 the components of the final variance tensor are smaller.
And this is despite using the same initial variance in both cases.

Since your implementation matches PPO1, something else may be causing this issue. Could it have something to do with the policy (i.e., with how you perform exploration)?


boris-il-forte commented on July 19, 2024

The exploration depends on the policy you choose.
If you use the Gaussian policy, the variance is parametrized by the log std. See here:
GaussianTorchPolicy
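For reference, the log-std parametrization can be sketched in plain Python like this (an illustrative sketch with hypothetical names, not the actual GaussianTorchPolicy code):

```python
import math
import random

class LogStdGaussianPolicy:
    """Diagonal Gaussian policy with a state-independent log-std parameter.

    The learnable parameter is log(sigma), so the standard deviation
    exp(log_sigma) stays positive for any parameter value.
    (Illustrative sketch only; names are hypothetical.)
    """

    def __init__(self, action_dim, std_0=1.0):
        # store log(std); an optimizer would update this vector directly
        self.log_std = [math.log(std_0)] * action_dim

    @property
    def std(self):
        return [math.exp(v) for v in self.log_std]

    def sample(self, mean, rng=random):
        # draw a ~ N(mean, diag(std^2))
        return [m + s * rng.gauss(0.0, 1.0) for m, s in zip(mean, self.std)]

pi = LogStdGaussianPolicy(action_dim=2, std_0=0.5)
print(pi.std)  # ~ [0.5, 0.5]
```

With this parametrization, how quickly the variance shrinks depends on the gradient scale on `log_std`, which is one place where two PPO implementations can legitimately diverge.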

Not sure if Stable Baselines uses the same policy, but if you look at our benchmarks, the performance matches the results reported in the paper.


cantor-dust commented on July 19, 2024

Yeah, in both tests I am using a Gaussian policy.

Even if there were slight differences between the two implementations, I would still expect both of them to be able to solve a very simple LQG environment.


boris-il-forte commented on July 19, 2024

Unfortunately, I cannot give support for Stable Baselines.
Regarding our implementation, as you can see, it matches the one from the paper.
I'm quite sure it can solve the LQG environment if you tune the hyperparameters. Not knowing how things are implemented in Stable Baselines3, I can only suppose that the hyperparameters may have different scales/usages/meanings, that there may be different input normalization (perhaps reward normalization, which is not present in MushroomRL), or that the training differs (mixing the value function and policy losses in one training step might have a scaling effect on the optimization).
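The reward normalization I have in mind can be sketched with a running-statistics filter. This is a generic sketch of the idea, not code taken from either library, and the class name is hypothetical:

```python
import math

class RunningRewardNormalizer:
    """Scale rewards by a running estimate of their standard deviation,
    maintained online with Welford's algorithm. Generic sketch of the
    kind of reward normalization discussed above; names are hypothetical.
    """

    def __init__(self, eps=1e-8):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0   # running sum of squared deviations from the mean
        self.eps = eps  # avoids division by zero early in training

    def update(self, reward):
        self.count += 1
        delta = reward - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (reward - self.mean)

    def normalize(self, reward):
        var = self.m2 / self.count if self.count > 1 else 1.0
        return reward / (math.sqrt(var) + self.eps)

norm = RunningRewardNormalizer()
for r in [1.0, 3.0, 5.0]:
    norm.update(r)
print(norm.normalize(3.0))  # reward divided by the running std
```

If one implementation rescales rewards this way and the other does not, the effective learning rates on both the critic and the policy differ, which alone can explain very different behavior on the same environment.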

The MushroomRL implementation is quite straightforward to check, and the benchmark results were obtained with the current implementation (except, possibly, for updates to the NumPy/PyTorch libraries).


cantor-dust commented on July 19, 2024

I found the issue; it was a bug on my end.
Thank you for your help, and sorry for wasting your time.

