
Comments (5)

mbhenaff commented on September 26, 2024

It does sound odd that the inverse dynamics loss is not decreasing - for MiniHack the IDM loss decreases nicely (see the attached figure). I think making sure the IDM loss decreases properly would be a good first sanity check. You could try tuning the entropy cost of the policy - if it is too high, the policy might output a distribution close to uniform over actions, in which case there is not much signal for the IDM to pick up on. Specifically, you could try decreasing the entropy_cost parameter from 0.005 to 0.001, 0.0005, or 0.0001. You could also plot the entropy of the policy and check that it is lower than that of a uniform distribution, which is log(num_actions).

[Attached figure: IDM loss on MiniHack, decreasing over training]
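
For reference, a minimal sketch (not taken from the e3b codebase; the torch usage and the 18-action count are assumptions) of how you could log the policy entropy and compare it against the uniform-distribution entropy, log(num_actions):

    # Sketch: log policy entropy and compare it to the uniform-distribution entropy.
    import math
    import torch
    import torch.nn.functional as F

    def policy_entropy(logits: torch.Tensor) -> torch.Tensor:
        """Entropy (in nats) of a categorical policy; logits shape [batch, num_actions]."""
        log_probs = F.log_softmax(logits, dim=-1)
        return -(log_probs.exp() * log_probs).sum(dim=-1)

    num_actions = 18                       # assumed Atari action-set size
    logits = torch.randn(32, num_actions)  # stand-in for the policy head output
    print("policy entropy:", policy_entropy(logits).mean().item())
    print("uniform entropy:", math.log(num_actions))  # the upper bound, log(num_actions)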

from e3b.

mbhenaff commented on September 26, 2024

Also, when tuning hyperparameters I would probably stick with intrinsic reward normalization alone (i.e. reward_norm='intr') or none at all. If the reward is sparse, then extrinsic reward normalization alone will not be very helpful. I think it's more important to tune the intrinsic reward coefficient, e.g. 10.0, 3.0, 1.0, 0.3, 0.1, when using intrinsic reward normalization. When using no normalization, the range of optimal values can vary a lot more.

from e3b.

hlsafin commented on September 26, 2024

reward = norm(intr) + (intweight)*norm(extr)
Do you think this normalization might be better? Also, games like Gravitar and H.E.R.O. both have dense rewards, but to get a high score you need further exploration, not just a focus on intermediate rewards. I'll try again with a different entropy_cost and see if this makes a difference. I don't believe the action distribution is uniform, because the agent achieves a relatively high score; however, it's either not exploring or the exploration process is very slow. The inverse loss starts at 150, decreases to 90, and hovers around 90 for the duration of training. I will do further testing and training.

from e3b.

mbhenaff commented on September 26, 2024

You could try that, maybe with a slight modification as follows: reward = (intweight)*norm(intr) + norm(extr). Note that you would need two separate reward normalizers, one for the intrinsic and one for the extrinsic rewards.
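
As an illustration, a rough sketch of that combination with two separate running reward normalizers (the RunningNorm class and combined_reward function are made-up names for this example, not the e3b API):

    import numpy as np

    class RunningNorm:
        """Tracks a running standard deviation (Welford's update) and divides rewards by it."""
        def __init__(self, eps: float = 1e-8):
            self.count, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

        def update(self, x: np.ndarray) -> None:
            for v in np.asarray(x, dtype=np.float64).ravel():
                self.count += 1
                delta = v - self.mean
                self.mean += delta / self.count
                self.m2 += delta * (v - self.mean)

        def normalize(self, x: np.ndarray) -> np.ndarray:
            std = np.sqrt(self.m2 / max(self.count, 1)) + self.eps
            return np.asarray(x) / std

    intr_norm, extr_norm = RunningNorm(), RunningNorm()
    intweight = 1.0  # intrinsic reward coefficient to sweep (e.g. 10.0, 3.0, 1.0, 0.3, 0.1)

    def combined_reward(r_intr: np.ndarray, r_extr: np.ndarray) -> np.ndarray:
        """reward = (intweight) * norm(intr) + norm(extr), with separate normalizers."""
        intr_norm.update(r_intr)
        extr_norm.update(r_extr)
        return intweight * intr_norm.normalize(r_intr) + extr_norm.normalize(r_extr)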

If the inverse dynamics loss goes from 150 to 90, that suggests it is learning something.
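
For context, the inverse dynamics loss being discussed is a classification loss that predicts the action taken between two consecutive state embeddings; a generic sketch (not the e3b code; the embedding size and MLP are assumptions):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    embed_dim, num_actions = 256, 18  # assumed sizes

    # Inverse dynamics model: predict a_t from phi(s_t) and phi(s_t+1).
    idm = nn.Sequential(
        nn.Linear(2 * embed_dim, 256),
        nn.ReLU(),
        nn.Linear(256, num_actions),
    )

    def idm_loss(phi_t, phi_t1, actions):
        """Cross-entropy between predicted and actual actions for a batch of transitions."""
        logits = idm(torch.cat([phi_t, phi_t1], dim=-1))
        return F.cross_entropy(logits, actions)

    # Example batch of 32 transitions with random embeddings and actions.
    phi_t, phi_t1 = torch.randn(32, embed_dim), torch.randn(32, embed_dim)
    actions = torch.randint(0, num_actions, (32,))
    print(idm_loss(phi_t, phi_t1, actions).item())  # should decrease as the IDM learns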

Is the issue that the agent isn't learning anything, or that it doesn't seem to improve over an agent trained with the dense reward only? As a sanity check, you could turn the intrinsic reward off (just set intweight=0) and check that the reward improves there. For Gravitar and Hero you should recover the performance of standard IMPALA (you may need to change the hyperparameters from the defaults, though, since they might differ between MiniHack and Atari).

Also, I think it might be harder to get improvements with E3B over IMPALA on dense reward games. You might have better luck on sparse reward games like Montezuma's Revenge. Getting it to work there would at least tell you that the hyperparameters are good.

from e3b.

hlsafin commented on September 26, 2024

Yeah, I tried E3B on Montezuma's Revenge with no extrinsic rewards, and after 1 billion steps the reward was still 0.

from e3b.
