
garlicdevs / fruit-api

Stars: 66 · Watchers: 9 · Forks: 22 · Size: 7.63 MB

A Universal Deep Reinforcement Learning Framework

Home Page: http://fruitlab.org/

License: GNU General Public License v3.0

Languages: Python 99.74% · Makefile 0.11% · Batchfile 0.14%

Topics: reinforcement-learning, reinforcement-learning-algorithms, multiplayer-game, deep-reinforcement-learning, deep-learning, multi-agent-reinforcement-learning, multi-agent, multi-objective-optimization, human, games, environment, arcade-learning-environment, atari, actor-critic-algorithm, actor-critic, policy-gradients, human-in-the-loop

fruit-api's Introduction


Introduction

Fruit API (http://fruitlab.org/) is a universal deep reinforcement learning framework, designed to provide a friendly user interface, a fast algorithm-prototyping tool, and a multi-purpose framework for the RL research community. Specifically, Fruit API makes the following notable contributions:

  • Friendly API: Fruit API follows a modular design combined with object-oriented programming in Python to provide a solid foundation and an easy-to-use interface via a simplified API. Our ultimate goal is to let researchers develop reinforcement learning (RL) algorithms with little effort; in particular, a new RL algorithm can be implemented in under 100 lines of code. Users only need to create a Config and a Learner and plug them into the framework. We also provide many sample Configs and Learners in a hierarchical structure so that users can inherit a suitable one (a rough sketch of this workflow follows this list of contributions).

Figure 1

  • Portability: The framework works across operating systems, including Windows, Linux, and macOS.

  • Interoperability: Fruit API is designed to work with any deep learning library, such as PyTorch, TensorFlow, or Keras. Researchers can define the neural network architecture in the Config using their favourite library. Instead of re-implementing many deep RL algorithms, we provide a flexible way to integrate existing deep RL libraries by introducing plugins: a plugin extracts learners from another deep RL library and plugs them into Fruit API.

  • Generality: The framework supports different disciplines in reinforcement learning, such as multiple objectives, multiple agents, and human-agent interaction.
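As a rough illustration of the Config/Learner workflow mentioned in the Friendly API bullet above, here is a minimal sketch. All names in it (MyConfig, MyLearner, AgentFactory) are hypothetical placeholders, not verified Fruit API classes; see the official tutorials for working code.

    # Hypothetical sketch: define a Config and a Learner, then plug them into the
    # framework. Class and method names are placeholders, not the real Fruit API.

    class MyConfig:
        """Holds the environment, network definition, and hyper-parameters."""
        def __init__(self, environment, learning_rate=1e-4):
            self.environment = environment
            self.learning_rate = learning_rate

    class MyLearner:
        """Implements the update rule of the new algorithm."""
        def __init__(self, config):
            self.config = config

        def update(self, state, action, reward, next_state, terminal):
            pass  # one learning step of the custom algorithm goes here

    # Plugging them into the framework would then look roughly like:
    #   agent = AgentFactory.create(MyConfig(env), MyLearner)   # hypothetical call
    #   agent.train()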

We have also implemented a set of RL and deep RL baselines across these disciplines, listed below.

RL baselines

  • Monte-Carlo
  • Q-Learning

Value-based deep RL baselines:

  • Deep Q-Network (DQN)
  • Double DQN
  • Dueling network with DQN
  • Prioritized Experience Replay (proportional approach)
  • DQN variants (asynchronous/synchronous method)
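To make the DQN and Double DQN baselines above concrete, here is a toy calculation of their bootstrap targets (illustrative NumPy values only, not Fruit API internals):

    import numpy as np

    gamma = 0.99
    reward = 1.0
    q_next_online = np.array([0.2, 0.9, 0.4])  # Q(s', a) from the online network
    q_next_target = np.array([0.8, 0.3, 0.5])  # Q(s', a) from the target network

    # DQN: the target network both selects and evaluates the next action.
    dqn_target = reward + gamma * q_next_target.max()                     # 1.792

    # Double DQN: the online network selects the action, the target network
    # evaluates it, which reduces the overestimation bias of plain DQN.
    ddqn_target = reward + gamma * q_next_target[q_next_online.argmax()]  # 1.297

    print(dqn_target, ddqn_target)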

Policy-based deep RL baselines:

  • A3C

Multi-agent deep RL:

  • Multi-agent A3C
  • Multi-agent A3C with communication map

Multi-objective RL/deep RL:

  • Q-Learning
  • Multi-objective Q-Learning (linear and non-linear methods; see the scalarization sketch after this list)
  • Multi-objective DQN (linear and non-linear method)
  • Multi-objective A3C (linear and non-linear method)
  • Single-policy/multi-policy method
  • Hypervolume
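As a minimal illustration of the linear method listed above: a vector of per-objective Q-values is collapsed into a scalar with a weight vector before the greedy action is chosen (toy values, not Fruit API code). Non-linear methods replace this dot product with a non-linear scalarization function, for example a thresholded or Chebyshev utility.

    import numpy as np

    weights = np.array([0.7, 0.3])        # trade-off between two objectives
    q_values = np.array([[1.0, 0.2],      # per-objective Q(s, a0)
                         [0.4, 0.9]])     # per-objective Q(s, a1)

    scalarized = q_values @ weights       # one scalar value per action
    greedy_action = int(scalarized.argmax())
    print(scalarized, greedy_action)      # [0.76 0.55] -> action 0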

Human-agent interaction

  • A3C with map
  • Divide and conquer strategy with DQN

Plugins

  • TensorForce plugin (still experimental). The TensorForce plugin makes it possible to use the deep RL algorithms implemented in the TensorForce library via Fruit API, such as PPO, TRPO, VPG, and DDPG/DPG.
  • Other plugins (OpenAI Baselines, RLLab) are coming soon.

Built-in environments

  • Arcade learning environment (Atari games)
  • OpenAI Gym
  • DeepMind Lab
  • Carla (self-driving car)
  • TensorForce's environments:
    • OpenAI Retro
    • DeepMind Pycolab
    • Unreal Engine
    • Maze Explorer
    • Robotics - OpenSim
    • PyGame Learning Environment
    • ViZDoom

External environments can be integrated into the framework easily by plugging them into FruitEnvironment. Finally, we developed five extra environments as a testbed for examining different disciplines in deep RL (a sketch of a pluggable external environment follows this list):

  • Mountain car (multi-objective environment/graphical support)
  • Deep sea treasure (multi-objective environment/graphical support)
  • Tank battle (multi-agent/multi-objective/human-agent cooperation environment)
  • Food collector (multi-objective environment)
  • Milk factory (multi-agent/heterogeneous environment)
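The sketch below shows the kind of simulator that such a FruitEnvironment plug-in would wrap. The method names (get_possible_actions, get_action_space, reset, step) mirror those that appear in the issues further down and are assumptions about the interface, not a verified Fruit API base class.

    # Assumption-laden sketch of an external simulator to be plugged into FruitEnvironment.

    class CustomSimulator:
        def __init__(self, goal=5):
            self.goal = goal
            self.position = 0

        def get_possible_actions(self):
            return [-1, +1]                      # step left or step right

        def get_action_space(self):
            return range(len(self.get_possible_actions()))

        def reset(self):
            self.position = 0
            return self.position

        def step(self, action_index):
            self.position += self.get_possible_actions()[action_index]
            reward = 1.0 if self.position == self.goal else 0.0
            terminal = abs(self.position) >= self.goal
            return self.position, reward, terminal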

Video demonstrations can be found via the images linked in the original README.

Documentation

  1. Installation guide

  2. Quick start

  3. API reference

Please visit our official website (http://fruitlab.org/) for more updates, tutorials, sample code, etc.

References

ReinforcePy is a great repository that we referenced during the development of Fruit API.

fruit-api's People

Contributors

garlicdevs


fruit-api's Issues

Number of actions that varies

I have a number of actions that varies and gets smaller with each step. Does MODQNLearner take this into consideration and sample randomly from the actions still possible at each step?

In my engine I am using this function to compute the action space:

    def get_action_space(self):
        return range(len(self.get_possible_actions()))

Each time it returns fewer actions, so if MODQNLearner calls it each time it performs an action, I guess it does take this into consideration.

Pareto front

How can I get the Pareto front after training a model, saving it, and evaluating it?

Number of outputs and inputs in the network layers

Hello, I wanted to ask, if possible, why you chose these numbers of inputs and outputs in the layers:

    layer_1 = self.layer_manager.create_conv_layer(self.tf_inputs_norm, 32, 8, strides=4, activation_fn='relu',
                                                    padding='valid', scope='tf_layer_1')
    layer_2 = self.layer_manager.create_conv_layer(layer_1, 64, 4, strides=2, activation_fn='relu', padding='valid',
                                                    scope='tf_layer_2')
    layer_3 = self.layer_manager.create_conv_layer(layer_2, 64, 3, strides=1, activation_fn='relu', padding='valid',
                                                    scope='tf_layer_3')
    layer_4 = self.layer_manager.create_fully_connected_layer(layer_3, 512, activation_fn='relu',
                                                              scope='tf_layer_4')

As far as I know, the number of outputs in the output layer should be the number of possible actions in the environment, and the input is the state.

matplotlib

I have an issue with matplotlib: I couldn't disable "Show plots in tool window" because I couldn't find the Python Scientific settings. Is there anything else I can do to fix this? I'm using the Community edition.

What is reward_clip_thresholds?

I would like to know, please, what reward_clip_thresholds is and why it is set to None in MODQNLearner(DQNLearner). Also, could you share the algorithm behind MODQNLearner so we can understand it better? There are no comments in the code, which makes it hard to work out how it operates. Thanks in advance.
