
Snake Reinforcement Learning

Code for training a Deep Reinforcement Learning agent to play the game of Snake. The agent takes 2 frames of the game as input (as an image) and predicts the action values for the next action to take.
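For intuition, the network input can be pictured as two stacked board snapshots; a minimal sketch of the shapes involved (the exact cell encoding in game_environment.py may differ):

import numpy as np

board_size = 10
# hypothetical encoding: each frame is a board_size x board_size grid of cell codes
current_frame = np.zeros((board_size, board_size), dtype=np.uint8)
previous_frame = np.zeros((board_size, board_size), dtype=np.uint8)
# the two frames are stacked along the channel axis to form the network input
state = np.stack([current_frame, previous_frame], axis=-1)  # shape (10, 10, 2)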


Sample games from the best performing agent
[GIFs: model v15.1 agent]


Code Structure

game_environment.py contains the necessary code to create and interact with the snake environment (classes Snake and SnakeNumpy). The interface is similar to the OpenAI Gym interface. Key points for the SnakeNumpy class:

  • Use the games argument to set the number of games to play in parallel
  • Set frame_mode to True to run the game continuously; any completed game is immediately reset
  • When calling reset, use the stateful argument to decide whether to do a hard reset or not

agent.py contains the agents for playing the game. It implements and trains a convolutional neural network for the action values. The following classes are available:

Class                       Description
DeepQLearningAgent          Deep Q-Learning algorithm with a CNN network
PolicyGradientAgent         Policy Gradient algorithm with a CNN network
AdvantageActorCriticAgent   Advantage Actor Critic (A2C) algorithm with a CNN network
HamiltonianCycleAgent       Creates a Hamiltonian cycle on even-sized boards for traversal
SupervisedLearningAgent     Trains using examples from another agent/human
BreadthFirstSearchAgent     Repeatedly finds the shortest path from the snake's head to the food
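As an illustration of the idea behind BreadthFirstSearchAgent, here is a simplified BFS sketch (a stand-in, not the repository's implementation; the board encoding is assumed):

from collections import deque

def bfs_shortest_path(board, head, food):
    """Breadth-first search from the snake's head to the food.
    board: 2D list, 0 = free cell, 1 = blocked (snake body or obstacle).
    Returns the list of cells on a shortest path, or None if unreachable."""
    rows, cols = len(board), len(board[0])
    parent = {head: None}
    queue = deque([head])
    while queue:
        cell = queue.popleft()
        if cell == food:
            path = []  # walk the parent links back to reconstruct the path
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and board[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None

# tiny usage example on a 3x3 board with one blocked cell
print(bfs_shortest_path([[0, 0, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (2, 2)))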

training.py contains the complete code to train an agent.

game_visualization.py contains the code to convert the game to mp4 format.
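As a rough sketch of what such a conversion involves (the actual plotting and encoding details in game_visualization.py may differ; matplotlib with an available ffmpeg is assumed):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation

def frames_to_mp4(frames, path="game.mp4", fps=10):
    """frames: list of 2D numpy arrays, one per game step."""
    fig, ax = plt.subplots()
    image = ax.imshow(frames[0])
    ax.axis("off")

    def update(i):
        image.set_data(frames[i])
        return (image,)

    anim = animation.FuncAnimation(fig, update, frames=len(frames))
    anim.save(path, writer=animation.FFMpegWriter(fps=fps))
    plt.close(fig)

# usage with dummy frames
frames_to_mp4([np.random.rand(10, 10) for _ in range(20)])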

A minimal example of interacting with the environment and training the agent:

from game_environment import SnakeNumpy
from agent import DeepQLearningAgent
import numpy as np

game_count = 10

env = SnakeNumpy(board_size=10, frames=2,
                 max_time_limit=298, games=game_count, # run 10 games in parallel
                 frame_mode=False) # set to True for continuous play of successive games
state = env.reset(stateful=True) # first manual reset required to initialize a few variables
agent = DeepQLearningAgent(board_size=10, frames=2, n_actions=env.get_num_actions(),
                           buffer_size=10000)
done = np.zeros((game_count,), dtype=np.uint8)
total_reward = np.zeros((game_count,), dtype=np.float32) # rewards can be negative
epsilon = 0.1
while not done.all():
    legal_moves = env.get_legal_moves()
    if np.random.random() <= epsilon:
        # explore: pick a random action for every parallel game
        action = np.random.choice(np.arange(env.get_num_actions()), size=game_count)
    else:
        action = agent.move(state, legal_moves, values=env.get_values())
    next_state, reward, done, info, next_legal_moves = env.step(action)
    # info contains time, food (food count) and termination_reason (if the game ends)
    agent.add_to_buffer([state, action, reward, next_state, done, next_legal_moves])
    total_reward += reward
    state = next_state.copy()
agent.train_agent(batch_size=32) # perform one step of gradient descent
agent.update_target_net() # update the target network
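For context, a hedged sketch of the standard DQN target that a train_agent step of this kind computes (the exact loss and any legal-move masking in agent.py may differ):

import numpy as np

def dqn_targets(rewards, next_q_values, done, gamma=0.99):
    """Compute y = r + gamma * max_a' Q_target(s', a') for non-terminal steps.
    rewards, done: shape (batch,); next_q_values: shape (batch, n_actions)."""
    return rewards + gamma * (1 - done) * next_q_values.max(axis=1)

# example: a batch of 2 transitions with 4 actions each
y = dqn_targets(np.array([1.0, -1.0]),
                np.random.rand(2, 4),
                np.array([0.0, 1.0]))  # second transition is terminal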


# another way to use the environment is the frame mode,
# which allows faster accumulation of training data
env = SnakeNumpy(board_size=10, frames=2,
                 max_time_limit=298, games=game_count,
                 frame_mode=True)
while True:
    state = env.reset(stateful=True)
    total_frames = 0
    while total_frames < 100:
        """ same code as above """
        total_frames += game_count
    """ add data to buffer """

Experiments

Configuration for the different experiments can be found in the model_versions.json file. The Adam optimizer gives a very noisy curve with a very slow increase in rewards, and the loss is also unstable. Hence, the RMSprop optimizer is chosen for all further tests and training.
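For reference, the optimizer switch amounts to something like the following when compiling a Keras model (a sketch with an assumed toy architecture and learning rate; the real configuration lives in model_versions.json and agent.py):

from tensorflow.keras import layers, models
from tensorflow.keras.optimizers import RMSprop

# a tiny stand-in CNN; the real architecture lives in agent.py
model = models.Sequential([
    layers.Input(shape=(10, 10, 2)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(4),  # one Q-value output per action
])
model.compile(optimizer=RMSprop(learning_rate=0.0005),  # assumed learning rate
              loss="mean_squared_error")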

Effect of Reward Type

Two reward structures are studied:

  1. Simple +1/-1 reward for eating food/termination
  2. +1/-1 * (length of snake - starting length + 1) (increasing rewards)

Both schemes give similar trends for the length of the snake.
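A minimal sketch of the second scheme (the starting length used here is an assumption; the environment defines the actual value):

def scaled_reward(base_reward, snake_length, start_length=3):
    # base_reward: +1 on eating food, -1 on termination
    # start_length=3 is an assumed starting length
    return base_reward * (snake_length - start_length + 1)

assert scaled_reward(+1, snake_length=5) == 3  # longer snake => larger reward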

Sample game from the second reward structure
[GIF: model v15.3 agent]

Effect of Batch Size

Batch sizes of 64 and 128 are compared. Since both give similar performance, 64 is chosen for faster training.

Sample game from batch size 128 model
[GIF: model v15.4 agent]

Effect of Pretraining

The model is initialized with a pretrained network using samples collected from the BFS agent. Initially, the pretrained model seems to learn quicker, but DQN is soon able to catch up with it. This is due to the fact that the samples collected from the BFS agent were restricted to 18 time steps, leaving room for DQN to do further learning.
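Conceptually, pretraining fits the network to the actions the BFS agent took; a minimal hedged sketch with dummy data (the repository's SupervisedLearningAgent handles this; all names and shapes below are illustrative):

import numpy as np
from tensorflow.keras import layers, models

# hypothetical BFS-labeled data: states and the action BFS took in each
states = np.random.rand(1000, 10, 10, 2).astype("float32")
actions = np.random.randint(0, 4, size=1000)

clf = models.Sequential([
    layers.Input(shape=(10, 10, 2)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(4, activation="softmax"),
])
clf.compile(optimizer="rmsprop", loss="sparse_categorical_crossentropy")
clf.fit(states, actions, epochs=1, batch_size=64)  # weights then seed the DQN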

Sample game from pretrained model
[GIF: model v15.2 agent]

Environment with Obstacles

40,000 10x10 boards are randomly generated with 8 cells marked as obstacles, while ensuring that the snake always has a path to navigate through the board. The code for this can be found in obstacles_board_generator.py.
Based on the sample plays below, the learned policy generalizes well over boards with random obstacles, and even works well on boards with a larger number of obstacles (although it has a higher chance of getting stuck in a loop).
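A hedged sketch of how such boards could be generated and validated (the actual logic is in obstacles_board_generator.py; this version uses a flood fill to check that all free cells remain connected):

import random
from collections import deque

def generate_board(size=10, n_obstacles=8):
    """Return a size x size grid (0 = free, 1 = obstacle) whose free cells
    form one connected region, retrying until the flood-fill check passes."""
    while True:
        board = [[0] * size for _ in range(size)]
        cells = [(r, c) for r in range(size) for c in range(size)]
        for r, c in random.sample(cells, n_obstacles):
            board[r][c] = 1
        free = [(r, c) for r, c in cells if board[r][c] == 0]
        seen, queue = {free[0]}, deque([free[0]])
        while queue:  # flood fill from an arbitrary free cell
            r, c = queue.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < size and 0 <= nc < size
                        and board[nr][nc] == 0 and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc))
        if len(seen) == len(free):  # all free cells reachable
            return board

board = generate_board()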

Sample games from the best model
[GIFs: model v17.1 agent]

Sample games from the best model on out of sample boards
[GIFs: model v17.1 agent]

Sample game from the best model on empty board
[GIF: model v17.1 agent]

Sample game from the best model on board with more obstacles
[GIF: model v17.1 agent]

