
REINFORCE

This repo contains implementations of the REINFORCE and REINFORCE-Baseline algorithms on the Mountain Car problem.

The result after training the agent with REINFORCE or REINFORCE-Baseline will look like this:

Table of Contents

  • What is the REINFORCE algorithm?
  • Differentiable policy
  • Environment
  • Results
  • Things I have learnt

What is the REINFORCE algorithm?

Proposed in 1992, REINFORCE is the foundation of policy gradient methods.

In short, the algorithm estimates the return with Monte Carlo sampling and then adjusts the policy parameters by gradient ascent.

Below is the pseudocode:
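
A minimal Python sketch of the same procedure; `env`, `policy`, and `grad_log_pi` are hypothetical placeholders, not the exact names used in this repo.

```python
import numpy as np

def reinforce(env, theta, policy, grad_log_pi, alpha=1e-3, gamma=1.0, episodes=10000):
    """Episodic REINFORCE: Monte Carlo returns + gradient ascent on the log-policy."""
    for _ in range(episodes):
        # Generate one full episode by following the current policy.
        states, actions, rewards = [], [], []
        s, done = env.reset(), False
        while not done:
            a = policy(s, theta)               # sample a ~ pi(.|s, theta)
            s_next, r, done = env.step(a)      # hypothetical env interface
            states.append(s); actions.append(a); rewards.append(r)
            s = s_next
        # Walk the episode backwards, accumulating the return G_t,
        # and take one gradient ascent step per time step.
        G = 0.0
        for t in reversed(range(len(states))):
            G = rewards[t] + gamma * G
            theta = theta + alpha * (gamma ** t) * G * grad_log_pi(states[t], actions[t], theta)
    return theta
```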

A modified version built on top of REINFORCE is REINFORCE-Baseline.

REINFORCE-Baseline uses a learned state-value estimate as the baseline and subtracts it from the estimated return. The goal is to reduce the variance of the gradient estimate.
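
In update form (following the standard presentation in Sutton & Barto), the value estimate $\hat{v}(S_t,\mathbf{w})$ is subtracted from the Monte Carlo return $G_t$ before the gradient step:

$$
\begin{aligned}
\delta &\leftarrow G_t - \hat{v}(S_t, \mathbf{w}) \\
\mathbf{w} &\leftarrow \mathbf{w} + \alpha^{\mathbf{w}}\, \delta\, \nabla \hat{v}(S_t, \mathbf{w}) \\
\theta &\leftarrow \theta + \alpha^{\theta}\, \gamma^{t}\, \delta\, \nabla_{\theta} \ln \pi(A_t \mid S_t, \theta)
\end{aligned}
$$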


Differentiable policy

The REINFORCE algorithm requires a differentiable policy parameterization. In theory, any function approximation method would work.

For this implementation, I chose tile coding with linear approximation and a softmax to keep things simple.
The policy is given as follows,
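
For linear action preferences $\theta^{\top} x(s,a)$, the standard softmax form implied by the description above is:

$$
\pi(a \mid s, \theta) = \frac{\exp\!\left(\theta^{\top} x(s,a)\right)}{\sum_{b} \exp\!\left(\theta^{\top} x(s,b)\right)}
$$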

The gradient of the policy is,
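
For this softmax-linear parameterization, the gradient of the log-policy (the eligibility vector) is:

$$
\nabla_{\theta} \ln \pi(a \mid s, \theta) = x(s,a) - \sum_{b} \pi(b \mid s, \theta)\, x(s,b)
$$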

Tile coding

The feature vector x(s,a) is generated through tile coding. This method encodes the entire continuous state space into a binary vector. To take the action into account, the encoded state vector is stacked once per action, with only the block belonging to the chosen action populated (see the sketch below).
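
A minimal sketch of the stacking step, assuming a tile coder that already produces a binary state encoding (the sizes used here are illustrative, not the repo's actual values):

```python
import numpy as np

N_ACTIONS = 3        # left, stationary, right
N_FEATURES = 4096    # length of one tile-coded state encoding (assumed)

def features(state_encoding, action):
    """Build x(s, a): the state encoding placed in the block for `action`."""
    x = np.zeros(N_FEATURES * N_ACTIONS)
    offset = action * N_FEATURES
    x[offset:offset + N_FEATURES] = state_encoding
    return x
```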

Neural Network

To do.


Environment

Mountain Car is a classic RL problem: driving an underpowered car up a steep mountain road.

State : Car position and velocity
Action : Left, stationary, right
Reward : -1 per time step
Terminal state : Reaching the goal
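
For reference, the same task is available in OpenAI Gym as MountainCar-v0 (this repo may implement its own version; the classic Gym API is shown, while newer Gymnasium releases return extra values from `reset` and `step`):

```python
import gym

env = gym.make("MountainCar-v0")
state = env.reset()                            # [position, velocity]
next_state, reward, done, info = env.step(2)   # 0 = left, 1 = stay, 2 = right
print(env.observation_space)                   # Box(2,) -- continuous state
print(env.action_space)                        # Discrete(3)
print(reward)                                  # -1.0 every step until the goal
```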


Results

Total reward

This figure compares two tile-coding configurations (8 tilings vs. 16 tilings) over 10,000 training episodes.
As shown, 16 tilings provide finer discrimination and hence more accurate value estimates across states.

Agent performance

After episode 1 (16 tilings, 8 tilings):

After episode 500 (16 tilings, 8 tilings):

After episode 10000 (16 tilings, 8 tilings):


Things I have learnt

  1. Reinforcement learning is prone to silent bugs (the agent simply fails to learn)
  2. Function representation plays a big role
  3. Choosing an appropriate step size for gradient ascent is difficult
  4. Monte Carlo methods have high variance, especially when the terminal state is hard to reach
  5. An experience replay strategy is probably needed for faster convergence
  6. An alternative way to detect divergence is to monitor the change in the weights (see the sketch after this list)
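
A minimal sketch of the weight-monitoring idea from item 6; the function name and threshold are illustrative, not from the repo:

```python
import numpy as np

def check_divergence(theta, theta_prev, max_step=1e3):
    """Raise if the per-update weight change is non-finite or exploding."""
    step = np.linalg.norm(theta - theta_prev)
    if not np.isfinite(step) or step > max_step:
        raise RuntimeError(f"Possible divergence: weight change {step:.3g}")
```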
