pietrovitiello / actionrepresentation
MSc project aimed at finding an alternative way of representing robot actions. We evaluate several machine learning models to control a simulated 7-joint robotic arm using solely a wrist-mounted camera as input.

actionrepresentation's Introduction

A New Representation for Actions in Visual Robot Learning

This repo was created for a project aimed at finding a new representation for robot actions. The work environment consists of a robotic arm with a camera mounted on its wrist. An agent is trained through imitation learning to control this robotic arm, receiving as input the images captured by the camera. The presented work has been fully developed in simulation, and this repo includes:

  • code to interface with a simulated environment and control it.
  • a kinematics package to control a robotic arm, making it possible to reach and grasp a target object.
  • code to generate different simulation scenes.
  • code to collect demonstrations of a robotic arm grasping objects.
  • various neural networks that process an image and return a robot action.
  • code to train these models using different data pipelines.
  • code that evaluates the ability of each trained model to autonomously grasp objects.

Table of contents

  • Project Motivation
  • Requirements
  • Code Structure
  • Scenes
  • Proposed Novel Action Representation

Project Motivation

The field of robotics has been gaining a lot of traction, especially in the medical, military and industrial sectors. However, the majority of currently deployed robots rely on human-crafted control systems, which take time to program and have a low tolerance for variations in their environment. Robot Learning therefore tries to use artificial intelligence to teach robots how to perform tasks autonomously, with the ultimate goal of creating general agents that could operate in any environment without having to be reprogrammed. Usually these agents take sensory input, such as images or point-cloud data (e.g. from LiDAR), and process it in order to come up with instructions to give to the robot.

These instructions are effectively robot actions, or in other words movements that the robot should perform. Nonetheless, a question arises on how one should represent these actions. A robot action could, for example, be represented as a vector containing the linear and angular velocities of the end-effector, or as a location and orientation in space that the end-effector should reach. Representation Learning teaches us that how data is represented has a great impact on a machine learning model's performance. Therefore an agent may find it easier to predict actions when they are represented in one form rather than another. This project explores a new way of representing robot actions: not as vectors, but as images. The reasoning is that neural networks might find it easier to process images than vectors, and a visual representation may also be more easily interpretable by a human being.

Requirements

This project has been fully developed in simulated environments. The simulator used was CoppeliaSim V4.1, which was controlled via Python scripts through the PyRep library. For the latter to work, the scripts must be run on a Linux operating system.

Apart from these major requirements, this repo relies on a few Python libraries that can simply be pip installed by running pip install -r requirements.txt.
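
For reference, a typical setup might look like the following (assuming CoppeliaSim V4.1 and PyRep have already been installed following their own instructions, and using the standard GitHub clone URL for this repo):

    # clone the repository and install the remaining Python dependencies
    git clone https://github.com/pietrovitiello/actionrepresentation.git
    cd actionrepresentation
    pip install -r requirements.txt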

Code Structure

The main code of this project resides in the src folder. More precisely, each folder has the following functionality:

  • /Demos: This folder has the code used to interact with the simulations. All of the actual simulation files are contained in this folder. Moreover, the code necessary to create the experiment scenes and generate demonstrations is also found here.

  • /Robotics: This folder hosts the code that actually controls the robot. Here you can find the classes that interface and communicate with the robot in the simulated scene. Additionally, the full kinematic control of the robot can be found here.

  • /Learning: This folder contains all of the models that have been implemented. Some of them use autoencoders, others LSTMs, others only convolutional layers, etc. In this folder you can also find all the custom dataloaders, as well as various methods to train and test the models.

These folders contain all of the source code. However, the main scripts of the repo are the following, and they can be used to control the entire project (a typical workflow is sketched after this list):

  • get_demos.py: In this script you can choose a simulation environment as well as set some demonstration parameters. By running this script you will generate demonstrations, automatically saving the demonstration data in the dataset folder src/Demos/Dataset, as well as saving the configurations used to create such a dataset into the descriptions.yaml file.

  • train.py: In this script you can choose a model and its configuration, as well as a dataset from those stored in the src/Demos/Dataset folder. By running this script the chosen model will be trained on the chosen dataset. The trained model will be saved in the src/Learning/TrainedModels folder, and its configuration will also be saved there in a YAML file.

  • test.py: In this script you can choose the trained model to test and the scene in which to test it. By running this script the model will be tested on grasping objects and the number of successful grasps will be automatically saved.
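
A typical end-to-end run might therefore look like the following (the environment, model, dataset and other settings are chosen by editing the configuration inside each script, as described above):

    python get_demos.py   # generate demonstrations into src/Demos/Dataset
    python train.py       # train the chosen model on the chosen dataset
    python test.py        # evaluate the trained model on grasping objects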

Scenes

There are three main scenes that have been used to train and test the models. The source code to define and generate them can be found in src/Demos/Scenes, while the code to collect demonstrations using these scenes can be found in src/Demos/DemoCollection.

Red Cube

Here a red cube is randomly initialised in front of the robotic arm. The objective is to successfully grasp the cube. Below you can find example images of such a scene in both third-person (left) and first-person (right) views.

Red Cube with Distractor Objects

Here the objective remains the same as above; however, distractor objects are also randomly initialised in the scene. This environment can be used to test whether models can adapt and generalise to visually different scenes, and whether they can still recognise the red cube as the target even when other objects are present. Below you can find example images of such a scene in both third-person (left) and first-person (right) views.

Grasping Different Shapes

In this case the target object is not necessarily a cube anymore. The object is generated with a shape chosen at random from a cube, a horizontal prism, a vertical prism, a cylinder, or a taller cylinder. These shapes have different properties and need to be grasped in slightly different ways for the grasp to be robust enough to lift the object. As a result, this environment can be used to test the adaptability of the models to different object shapes.

Proposed Novel Action Representation

We propose an alternative action representation that tries to represent motion in the form of an image.

The Motion Image

This work introduces the idea of using motion images as an action representation. These are generated by subtracting consecutive frames of a video. More specifically, during demonstrations one is effectively collecting a video of what the robot camera should be seeing while performing a certain task. By subtracting the video frame at time-step t from the video frame at time-step t+∆t, what you obtain is an image representing the change in the camera's point of view. This image is known as the motion image, and it depends on the movement that the camera has performed between time-steps t and t+∆t. As a result, one can use the motion image to represent actions. Examples of motion images can be found below.
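
As a minimal sketch of this idea, assuming the two frames are numpy arrays of equal shape (the float conversion and rescaling to [0, 1] are illustrative choices, not necessarily the repo's exact pipeline):

    import numpy as np

    def motion_image(frame_t, frame_t_plus_dt):
        # Signed pixel-wise difference between the frame at t + dt and the frame at t.
        diff = frame_t_plus_dt.astype(np.float32) - frame_t.astype(np.float32)
        # Shift the difference from [-1, 1] into [0, 1] so it can be treated as an image.
        return (diff + 1.0) / 2.0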

MI-Net

To leverage the motion image we propose MI-Net, which makes use of an autoencoder and an attention mechanism. A decoder is trained to generate the motion image corresponding to the desired action that should be executed by the robot. The attention mechanism uses information from the decoder to guide the prediction of end-effector velocities. The code for MI-Net and some of its variations can be found in src/Learning/Models/MotionIMG.
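
For intuition only, the sketch below shows one way such an architecture could be wired up in PyTorch. It is not the repository's implementation: the layer sizes, the 64x64 input resolution, the gating-style attention and the 6-dimensional velocity output are all assumptions made for illustration; the actual models live in src/Learning/Models/MotionIMG.

    import torch
    import torch.nn as nn

    class MINetSketch(nn.Module):
        # Illustrative MI-Net-style model: an encoder compresses the wrist-camera
        # image, a decoder reconstructs the motion image for the desired action,
        # and an attention gate computed from the decoder's features modulates the
        # latent vector used to predict end-effector velocities.

        def __init__(self, latent_dim=256, action_dim=6):
            super().__init__()
            # Convolutional encoder for a 64x64 RGB image (resolution is assumed).
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
                nn.Flatten(),
                nn.Linear(128 * 8 * 8, latent_dim),
            )
            # Decoder that generates the motion image for the desired action.
            self.decoder_fc = nn.Linear(latent_dim, 128 * 8 * 8)
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),               # 32 -> 64
            )
            # Attention gate derived from the features that feed the decoder.
            self.attention = nn.Sequential(
                nn.Linear(128 * 8 * 8, latent_dim),
                nn.Sigmoid(),
            )
            # Head predicting end-effector velocities (e.g. 3 linear + 3 angular).
            self.velocity_head = nn.Linear(latent_dim, action_dim)

        def forward(self, image):
            z = self.encoder(image)                          # (B, latent_dim)
            dec_feats = self.decoder_fc(z)                   # (B, 128*8*8)
            motion_image = self.decoder(dec_feats.view(-1, 128, 8, 8))
            # Use the decoder's information to focus the velocity prediction.
            attended = z * self.attention(dec_feats)
            velocities = self.velocity_head(attended)
            return motion_image, velocities

In such a sketch, the decoder output would be supervised with motion images computed from the demonstration videos, while the velocity head would be supervised with the demonstrated end-effector velocities; these training targets are stated here as an assumption consistent with the description above, not as the repo's exact training setup.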

actionrepresentation's People

Contributors

pietrovitiello

Forkers

ruifmaxx

actionrepresentation's Issues

references

Hello, could you please provide the references used in the code? Thank you.

Demo

Your work is very meaningful to me, but some of the proper nouns and abbreviations in the code are difficult for me to understand. What confuses me most is how to choose reasonable configuration values in the code for my needs. Because of these problems, I cannot run your code successfully. It would be great if you could, at your convenience, provide a step-by-step demo showing how to run the code. Thank you very much!
