

Collision-avoidance

Towards Monocular Vision-Based Collision Avoidance Using Deep Reinforcement Learning. A verification of the algorithm in a real environment can be seen here; no distance sensors are used. The paper can be found here.

(Demonstration clips: collision avoidance 1, 2, 3, and 5)

Overall Network Structure

An RGB image from a monocular sensor is converted into a depth image. The estimated depth image is then used as the input to a Dueling Double Deep Q-Network (D3QN).

(Figure: D3QN network structure)
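The "dueling" part of the D3QN splits the network head into a state-value stream V(s) and an advantage stream A(s, a), which are then recombined into Q-values. A minimal NumPy sketch of that aggregation step (the function name and example values are illustrative, not taken from the repo):

```python
import numpy as np

def dueling_q(value, advantages):
    """Combine state value V(s) and advantages A(s, a) into Q(s, a).

    Standard dueling aggregation:
        Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
    Subtracting the mean advantage keeps V and A identifiable.
    """
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

# Example: one state, five discrete actions
v = np.array([1.0])
a = np.array([0.5, 1.5, -0.5, 0.0, 1.0])
q = dueling_q(v, a)  # -> [1.0, 2.0, 0.0, 0.5, 1.5]
```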

Depth Estimation

  • TensorFlow version == 1.12.0
  • The depth estimation model is based on the ResNet-50 architecture
  • The Python file containing the model architecture is located in models
  • Because the trained depth estimation model is very large, you have to download it separately here.

To run the code, you need:

- fcrn.py
- __init__.py
- network.py
- NYU_FCRN-checkpoint
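Whatever camera you use, the RGB frame has to be brought to the resolution the depth network expects before inference. A minimal NumPy sketch of that preprocessing step; the 228×304 input size and [0, 1] scaling are assumptions based on the original FCRN model, so check fcrn.py for the actual values:

```python
import numpy as np

# Assumed FCRN input resolution (height, width); verify against fcrn.py
INPUT_H, INPUT_W = 228, 304

def preprocess(rgb):
    """Resize an RGB frame to the network input size using
    nearest-neighbour sampling and scale pixel values to [0, 1]."""
    h, w, _ = rgb.shape
    rows = np.arange(INPUT_H) * h // INPUT_H  # source row for each output row
    cols = np.arange(INPUT_W) * w // INPUT_W  # source col for each output col
    resized = rgb[rows][:, cols]
    return resized.astype(np.float32) / 255.0

# A dummy 640x480 camera frame
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(frame)  # shape (228, 304, 3), float32 in [0, 1]
```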

Training Environment in Robot Operating System

  • In our setup, Ubuntu 16.04 and ROS Kinetic are used
  • The Training env folder contains figures of the training map in ROS
  • You can use the training environments in model_editor_models
  • Place the editor models in your gazebo_model_repository

Training Environment Setup

1. Spawning the drone for training

The training agent for the drone is hector_quadrotor. Please take a look at the ROS description and install it. To spawn the training agent for our setup, type the command below:

roslaunch spawn_quadrotor_with_asus_with_laser.launch

To enable motors, type the command below:

rosservice call /enable_motors true

2. Setting the initial position and velocity of the agent

You can change the initial position and velocity in ENV.py.

  • To change the spawning position of the drone, edit the spawn_table in ENV.py
  • To change the velocity of the drone, edit the action_table (three linear speeds, five angular rates)
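The three linear speeds and five angular rates combine into a 15-way discrete action space, one entry per network output. A sketch of how such an action_table can be enumerated; the speed and rate values below are illustrative only, the real ones live in ENV.py:

```python
import itertools

# Illustrative values only; the actual numbers are defined in ENV.py
linear_speeds = [0.4, 0.8, 1.2]               # m/s, forward velocity
angular_rates = [-0.8, -0.4, 0.0, 0.4, 0.8]   # rad/s, yaw rate

# 3 x 5 = 15 discrete actions, matching the D3QN output dimension
action_table = list(itertools.product(linear_speeds, angular_rates))

def action_to_cmd(index):
    """Map a discrete action index chosen by the network to (v, w)."""
    return action_table[index]
```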

Training

To train the model, run python3 D3QN.py. You can change the hyperparameters in D3QN.py.
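The "double" part of the D3QN concerns the training target: the online network selects the next action and the target network evaluates it, which reduces the overestimation bias of vanilla Q-learning. A NumPy sketch of that target computation (function name, batch values, and the discount factor are illustrative, not taken from D3QN.py):

```python
import numpy as np

def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    """Double DQN target:
        y = r + gamma * Q_target(s', argmax_a Q_online(s', a))
    with the bootstrap term zeroed out on terminal transitions.
    """
    best_action = np.argmax(q_online_next, axis=1)          # online net selects
    rows = np.arange(len(best_action))
    q_eval = q_target_next[rows, best_action]               # target net evaluates
    return reward + gamma * q_eval * (1.0 - done)

# Batch of two transitions, five actions each; the second is terminal
r = np.array([1.0, -1.0])
d = np.array([0.0, 1.0])
q_on = np.array([[0.1, 0.9, 0.2, 0.0, 0.3],
                 [0.5, 0.4, 0.6, 0.1, 0.2]])
q_tg = np.array([[0.2, 0.8, 0.1, 0.0, 0.4],
                 [0.3, 0.2, 0.7, 0.0, 0.1]])
y = double_dqn_target(r, d, q_on, q_tg)  # -> [1.792, -1.0]
```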

Testing

  • Simulation test: to test the model, change the trained model's directory and then run python3 Test.py.
  • Real-world experiment test: go to the real_world_test folder and run python3 D3QN_test.py.

Citation

This research has been accepted in Expert Systems with Applications (ESWA).
The title of the paper is 'Towards Monocular Vision-Based Autonomous Flight Through Deep Reinforcement Learning'.

Contributors

  • mw9385
