Comments (6)
If I'm not wrong, Stable Baselines3 implements the PPO2 version of the algorithm from OpenAI (not Stable) Baselines. Our version matches PPO1.
One clear difference is the fit of the value function, which is done separately from the policy update.
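A toy sketch of why this difference can matter (made-up scalar losses and step sizes, not the actual code of either library): in a joint update, the value-function gradient is rescaled by a coefficient, while a separate fit applies its full gradient.

```python
# Toy illustration of joint vs. separate value-function updates.
# Scalar "parameters" and quadratic losses keep it self-contained;
# all numbers are hypothetical.

def policy_loss(theta):
    return (theta - 1.0) ** 2

def value_loss(w):
    return (w - 2.0) ** 2

def grad(f, x, h=1e-6):
    """Central-difference numerical gradient."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

lr, vf_coef = 0.1, 0.5
theta, w = 0.0, 0.0

# Joint update (PPO2 style): one step on policy_loss + vf_coef * value_loss,
# so vf_coef rescales the value-function gradient.
theta_joint = theta - lr * grad(policy_loss, theta)
w_joint = w - lr * vf_coef * grad(value_loss, w)

# Separate updates (PPO1 style): the value function is fit on its own,
# with its full gradient.
theta_sep = theta - lr * grad(policy_loss, theta)
w_sep = w - lr * grad(value_loss, w)

print(w_joint, w_sep)  # the joint critic step is scaled down by vf_coef
```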
Other differences may be input normalization and the handling of terminal states (we distinguish between truncated episodes and absorbing states, and we handle this correctly in the algorithm).
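The truncated-vs-absorbing distinction matters for the bootstrapped target. A minimal sketch with hypothetical values (not the actual MushroomRL code):

```python
def td_target(reward, next_value, gamma, absorbing):
    """Absorbing (true terminal) states contribute no future value;
    episodes truncated by a time limit still bootstrap from the critic."""
    return reward + gamma * next_value * (0.0 if absorbing else 1.0)

gamma = 0.99
v_next = 10.0  # hypothetical critic estimate for the next state

print(td_target(1.0, v_next, gamma, absorbing=True))   # 1.0 (no bootstrap)
print(td_target(1.0, v_next, gamma, absorbing=False))  # ~10.9 (bootstraps)
```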
Are you sure that you used the same network? Stable Baselines3 may use a shared network implementation or different activation functions. In MushroomRL it's trivial to check the difference, while in Stable Baselines it may not be so easy.
from mushroom-rl.
Yes, I use the same network: I can specify the number of layers, the number of neurons, the activation function, and whether to use a shared network. This can also be checked by extracting the policy from the model object in Stable Baselines3. I also use the same initial variance for the policy.
The biggest factor skewing the result of your implementation is that the final variance tensor still has quite large components, whereas in Stable Baselines3 the final variance tensor has smaller components.
But I used the same initial variance in both cases.
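One thing worth double-checking when comparing the two (a guess on my part): whether the tensors being compared are on the same scale, since a log-std parametrization stores log(sigma), not the variance itself. A quick conversion with hypothetical numbers:

```python
import math

# Hypothetical numbers: converting a stored log std to std and variance
# before comparing "variance" tensors across libraries.
log_std = -0.5
std = math.exp(log_std)  # sigma
var = std ** 2           # sigma^2

print(std)  # ~0.6065
print(var)  # ~0.3679
```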
Since your implementation matches that of PPO1, something else may be causing this issue. Could it have something to do with the policy (i.e., with how you perform exploration)?
from mushroom-rl.
The exploration depends on the policy you choose.
If you use the Gaussian policy, the variance is parametrized by the log std. See here:
GaussianTorchPolicy
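A minimal sketch of a log-std-parametrized Gaussian policy in plain Python (illustrative only, not the actual GaussianTorchPolicy code):

```python
import math
import random

class LogStdGaussian:
    """1-D Gaussian policy whose learnable scale parameter is log(sigma)."""

    def __init__(self, mean, log_std):
        self.mean = mean
        self.log_std = log_std

    @property
    def std(self):
        return math.exp(self.log_std)  # sigma is recovered via exp

    def sample(self, rng):
        return rng.gauss(self.mean, self.std)

    def log_prob(self, a):
        # log N(a | mean, sigma^2)
        z = (a - self.mean) / self.std
        return -0.5 * z * z - self.log_std - 0.5 * math.log(2.0 * math.pi)

pi = LogStdGaussian(mean=0.0, log_std=0.0)  # log_std = 0 -> sigma = 1
a = pi.sample(random.Random(0))
print(a, pi.log_prob(a))
```

Parametrizing by log std keeps sigma strictly positive without constraints, which is why the raw stored parameter is not directly comparable to a variance.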
I'm not sure if Stable Baselines uses the same policy, but if you look at our benchmarks, the performance matches the one reported in the paper; see here.
from mushroom-rl.
Yeah, in both tests I am using a Gaussian policy.
Even if there were slight differences between the two implementations, I would still expect both to be able to solve a very simple LQG environment.
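For reference, the optimal solution of a scalar LQG problem can be computed in a few lines, which gives a ground truth to check either implementation against. The coefficients below are hypothetical, not the ones from my test:

```python
# Scalar discounted LQR: dynamics x' = a*x + b*u (+ noise),
# cost q*x^2 + r*u^2. The optimal controller is u = -k*x.
a, b = 1.0, 1.0
q, r = 1.0, 1.0
gamma = 0.99

# Fixed-point iteration on the discounted Riccati equation.
p = q
for _ in range(10_000):
    p = q + gamma * a * a * p - (gamma * a * b * p) ** 2 / (r + gamma * b * b * p)

k = gamma * a * b * p / (r + gamma * b * b * p)
print(k)  # optimal feedback gain, ~0.615 for these coefficients
```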
from mushroom-rl.
Unfortunately, I cannot give support on stable baselines.
Regarding our implementation, as you can see, it matches the one from the paper.
I'm quite sure it can solve the LQG environment if you tune the hyperparameters. Not knowing how they are implemented in Stable Baselines3, I can only guess at the differences: the hyperparameters may have different scales, usage, or meaning; there may be different input normalization (e.g., reward normalization, which is not present in MushroomRL); or the training may differ (mixing the value function with the policy in the same training step might have a scaling effect on the optimization).
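To illustrate the reward-normalization hypothesis (a guess about the difference, not the actual code of either library), a running normalizer like the following rescales the learning signal, which changes the effective step sizes:

```python
class RewardNormalizer:
    """Running reward normalizer using Welford's online variance.
    Purely illustrative; values below are hypothetical."""

    def __init__(self, eps=1e-8):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations
        self.eps = eps

    def update(self, r):
        self.count += 1
        delta = r - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (r - self.mean)

    def normalize(self, r):
        var = self.m2 / max(self.count, 1)
        return r / (var ** 0.5 + self.eps)

norm = RewardNormalizer()
for r in [10.0, -10.0, 20.0, -20.0]:
    norm.update(r)
print(norm.normalize(10.0))  # reward divided by the running std
```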
MushroomRL's implementation is quite straightforward to check, and the benchmark results were obtained with the current implementation (except, maybe, for updates to the NumPy/PyTorch libraries).
from mushroom-rl.
I found the issue, it was a bug on my end...
Thank you for your help, and sorry I wasted your time.
from mushroom-rl.