
visualswarm's People

Contributors

mezdahun

Forkers

yating1901

visualswarm's Issues

Decrease simulation timestep to match parameter regimes

During the simulations a pendulum-like movement has been observed. It is a known fact that discrete-timestep control can give rise to this kind of behavior in dynamical systems. It should be checked whether the pendulum-like movement persists when the simulation timestep is made very fine-grained.

Test Software setup steps on a fresh installation for future

As it is not straightforward to install OpenCV on a fresh installation of Raspbian, it is necessary to test whether the finalized workflow for preparing the software environment really works.

Take a new SD card and freshly install Raspbian, then follow this guide to prepare the software environment. DoD: OpenCV can be imported in Python within a virtualenv as

import cv2

Prepare a Raspi 4

A Raspberry Pi 4 should be prepared as the basic setup as follows:

  • Raspbian is installed on an SD card
  • The hardware as well as the camera module of the Raspi is working
  • The camera module is validated with the raspistill and raspivid commands, and the generated output is checked for basic quality

At the end of the task, the steps for the general setup of a Raspberry Pi 4 are included in the wiki of the project.

Robot movement without interaction (explorative movement)

As robots have a limited angle of view, we also need to define the robots' movement when there is no object to interact with in the available FOV. This could be a slow and smooth Brownian motion. The task is to check for best practices and implement a reasonable rest movement that enables the robots to find interaction partners in the long run.
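A minimal sketch of such a rest movement, assuming a smooth correlated random walk: the agent keeps a slow constant speed while its heading diffuses with a small Gaussian turn each control step (all values below are placeholders to be tuned on the robots).

```python
import numpy as np

def explorative_step(heading, dt=0.1, v_explore=0.05, sigma_turn=0.3, rng=np.random):
    """One step of a smooth, Brownian-like exploration.

    heading: current heading angle [rad]; dt: control timestep [s].
    v_explore (slow constant speed) and sigma_turn (heading diffusion strength)
    are placeholder values.
    """
    new_heading = heading + sigma_turn * np.sqrt(dt) * rng.standard_normal()
    return new_heading, v_explore  # heading and forward speed command for this step
```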

Setup python connection with camera module

The possibility to acquire camera images and store/process them via OpenCV shall be provided in the Python module as a specific submodule of the project.

The initial implementation should happen according to this link.
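One common way to do this on the Pi (not necessarily the exact approach of the linked guide) is to combine the picamera package with OpenCV; a minimal sketch, with placeholder resolution and framerate:

```python
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (320, 240)   # placeholder resolution
camera.framerate = 30            # placeholder sampling frequency
raw_capture = PiRGBArray(camera, size=camera.resolution)

# capture_continuous yields frames as numpy arrays that OpenCV can process directly
for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):
    image = frame.array
    cv2.imshow("camera stream", image)
    raw_capture.truncate(0)      # clear the buffer before the next frame
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
```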

Implement flocking algorithm for arbitrary visual projection field

The task is to implement the simplest case of the flocking algorithm described in the main article. A function should be created that takes a visual projection field as input and calculates from it the temporal changes in the velocity (v) and the heading (psi) of an agent.
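A heavily simplified sketch of what such a function could look like. Only the general structure is assumed here (a front/back-weighted integral of the field drives the speed change, a left/right-weighted one drives the turning, both combining the field and the absolute value of its angular derivative); the exact functional form, parameter names and values are placeholders and must be taken from the main article.

```python
import numpy as np

def compute_state_changes(vpf, v, alpha0=1.0, alpha1=0.1, beta0=1.0, beta1=0.1,
                          gamma=0.2, v0=0.1):
    """Placeholder flocking update from a binary visual projection field.

    vpf: samples of the projection field over azimuth angles in [-pi, pi).
    Returns (dv, dpsi): temporal change of the speed v and the heading psi.
    """
    phi = np.linspace(-np.pi, np.pi, len(vpf), endpoint=False)
    dphi = phi[1] - phi[0]
    edges = np.abs(np.gradient(vpf, dphi))   # absolute value of the retinal derivative
    dv = gamma * (v0 - v) + alpha0 * np.sum(np.cos(phi) * (-vpf + alpha1 * edges)) * dphi
    dpsi = beta0 * np.sum(np.sin(phi) * (-vpf + beta1 * edges)) * dphi
    return dv, dpsi
```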

Figure out real time plotting with Pi

The Raspberry Pis are not powerful enough to use commonly used real-time visualization tools such as the matplotlib package. Other packages such as pyqtgraph, which are supposedly much faster than matplotlib, require additional software installation that might be particularly tricky on a Pi.

Task: find a solution to plot mathematical data in real time (or close to real time) on actual figures instead of cv2.imshow.

This is key to visualize velocity vectors, visual projection fields, etc.
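If pyqtgraph turns out to be installable on the Pi, a minimal real-time plot could look like the sketch below; the random data is a placeholder standing in for e.g. the latest visual projection field coming from the vision process.

```python
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtCore

app = pg.mkQApp()
plot_window = pg.plot(title="visual projection field")   # opens a plot window
curve = plot_window.plot(pen="g")

def update():
    # placeholder data: replace with the latest values from the vision process
    curve.setData((np.random.rand(320) > 0.9).astype(float))

timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(100)   # refresh roughly every 100 ms

if __name__ == "__main__":
    app.exec_()     # use app.exec() with Qt6-based bindings
```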

Elevate Halo in design

It is highly probable that the halo in the scaffold design is way too low. As a result, the robot agents don't see each other when they are very close to each other. The halo should be elevated, possibly to the roof.

Stabilize Pi HW with lens, camera and case

The fisheye lens should be fixed on the camera module, and the camera module should be firmly attached to the casing of the Pi in a way that the orientation of the hardware modules fits our future goals.

Create architecture diagram of current architecture

To illustrate how the different elements are connected and how they communicate, one should create an architecture plan of the stack, as well as a process diagram including the parallel computations in the stack.

Insufficient output amperage for Pi 4

Problem Description

  • The Raspberry Pi 4 needs a stable 5V/3A input
  • The current battery can only provide 2.4A at most
  • Low amperage causes the Pi to underclock the CPU during heavy computation, and possible failures can cause the Pi to shut off
  • Unexpected failures like this can permanently set the SD card to read-only, which would make it unusable.

As it turns out, we are not alone (1)(2) with the problem of making a Pi 4 portable. For previous Pi versions a battery with lower output amperage served just fine, but the Pi 4 needs a stable 5V/3A input, which is rather rare for portable chargers.

Solution
Choose another power bank that is able to provide a "fast charging" function with 5V/3A output. The following candidates could fit our needs.

| ID | Provider / Name | 5V/3A specifically stated | Comes with cable | Capacity (mAh) | Price (EUR) | Link to buy |
|----|-----------------|---------------------------|------------------|----------------|-------------|-------------|
| 1 | Anker PowerCore Speed 20000, Qualcomm Quick Charge 3.0 | yes | yes | 20000 | 45.99 | link |
| 2 | RAVPower USB C Portable Battery 22000 RAVPower | yes | no (USB-C to USB-C fast-charge cable needed) | 22000 | c.UnA | link |
| 3 | RAVPower USB C Portable Charger RAVPower 26800mAh | yes | no (USB-C to USB-C fast-charge cable needed) | 26800 | 40 | link |
| 4 | POSUGEAR Power Bank 20000 mAh Quick Charge 3.0 | yes | no (USB-C to USB-C fast-charge cable needed) | 20000 | 21.99 | link |
| 5 | GuliKit Portable Power Bank 10000mAh 5V/3A | yes | yes | 10000 | 39.99 | link |
| 6 | Intenso (DE) PD20000 Mobiler Zusatzakku 20000 mAh Anthrazit | yes | yes (USB-C to USB-C) | 20000 | 27 | link |

Milestone Description (Write Lab Rotation)

The following topics belong to this milestone:

  • anything related to writing the final summary document of the lab rotation
  • anything related to visualization of results, figures, architecture and process diagrams, etc.
  • anything related to the final summary presentation for the group.

Milestone Description (Prepare Vision)

During this milestone a connection/interface between the camera module and the processing unit (in this case a piece of python code) is provided so that:

Image Acquisition

  1. The software is able to acquire images via the camera module in Python using OpenCV or a similar library (OpenCV is preferred).
  2. The acquired image can be further processed programmatically.
  3. A camera stream is established between the software and the camera module with the desired sampling frequency.

Field of View extraction:

According to the solution chosen in the previous milestone, the goal is to implement either an approximate "360°" quasi-1D azimuth vision, or the same kind of vision for only the camera angle (ca. 100°).

  1. An efficient solution is provided in which only a part of the stream is kept and processed; the other parts of the sensor information are discarded as early as possible in the processing stream, so that we do not waste resources.
  2. The best-preserved field of view shall be extracted from the camera stream, i.e. the one with the least distortion: middle elevation, full azimuth (see the sketch below).
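A minimal sketch of point 2 for the plain (non-mirrored) camera: keep only a thin band of rows around the middle elevation and discard everything else immediately; the band height is a placeholder.

```python
def extract_azimuth_band(frame, band_height=8):
    """Return a quasi-1D strip (one value per azimuth column) taken from the
    middle elevation of the frame; the rest of the frame is discarded."""
    mid = frame.shape[0] // 2
    band = frame[mid - band_height // 2 : mid + band_height // 2]
    return band.mean(axis=0)
```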

Blob detection/segmentation

Develop Python code for integrating a binary visual projection field (blobs) and the absolute value of its retinal derivative (edges). For binarization I suggest thresholding a single RGB channel, e.g. green, so that we can later put colored tape on the robots or objects to clearly distinguish them from the background. Example of a colorspace-based approach: https://www.authentise.com/post/object-detection-using-blob-tracing

  1. The segmentation should happen either before or after FOV extraction, depending on the segmentation algorithm we use.
  2. The segmentation shall be fast enough to outline edges in real time, assuming that the Thymio robots will have a special color scheme.

The exact order of segmentation and FOV extraction is not yet clear and shall be tailored to the algorithms we will use.
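A minimal sketch of such a segmentation step with OpenCV, assuming a green-ish target color; the threshold bounds and the kernel size are placeholders to be tuned.

```python
import cv2
import numpy as np

def segment_target_color(frame_bgr, lower=(0, 120, 0), upper=(100, 255, 100)):
    """Binarize a single color range (placeholder green band, BGR order) and
    clean small-grained noise with a morphological opening."""
    mask = cv2.inRange(frame_bgr, np.array(lower, np.uint8), np.array(upper, np.uint8))
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
```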

Improve parameter passing so we can easily reproduce results

Provide a solution to pass and save parameters to the stack easily, e.g. with a JSON file. This way the default environment variables and parameters used in the contrib package could be overwritten from a file, so we could easily reproduce any result.
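A minimal sketch of one possible approach, assuming a flat JSON file whose keys override the package defaults (the file name, keys and values are placeholders):

```python
import json
import os

DEFAULTS = {"GAMMA": 0.2, "V0": 0.1}   # placeholder defaults from the contrib package

def load_parameters(path="vswrm_parameters.json"):
    """Return the defaults, overwritten by any values found in the JSON file,
    so that a run can be reproduced from a single saved file."""
    params = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            params.update(json.load(f))
    return params
```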

Measure FOV of current camera module and tune parameters accordingly

To be able to use the current camera module with a limited FOV (without 360-degree vision), we first need to calculate the real FOV angle of the camera module and include it as a parameter in the current code base. After that, the parameters of the flocking algorithm must be tuned accordingly, so that the movement response is sufficiently sensitive for this limited FOV.

Implement stable segmented high level vision

Implement a process that extracts a given color range from the raw visual input and cleans the result such that the output has no small-grained noise, the input queue is always clean and the computation is efficient. The process should target a given RGB color.

Attach fisheye lens to camera module

The extracted fisheye lens should be attached to the camera lens carefully, yet in a stable way. The acquired image shall be tested with simple tools such as raspistill.

Set up GitHub security and branch settings

In case multiple researchers would like to work on the repo at the same time, branch protection is necessary. Go through the GitHub settings and set them up as recommended (develop as the default branch, controlled merges, etc.).

Milestone Description (Visual Behaviour)

Implement the model equations in Python so that they output turning rates and acceleration/deceleration commands based on the visual input.

During this milestone additional features of the software prototype are implemented such that:

  1. A stable stream of extracted, blob/edge-detected visual imagery is provided as an input.
  2. The SW can identify the centers of the blobs (see the sketch after this list), and the blobs are then transformed according to the main article of the project using a cosine function, so that we recover the two main metrics of the visual behavior, the "blob area" and the "edge size".
  3. The blob area and edge size are transformed into attractive and repellent forces and encoded as a set of output turning rates and acceleration/deceleration commands based on the visual input.
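A minimal sketch of the blob-center step in point 2, using connected components as one possible method; the minimum blob area is a placeholder.

```python
import cv2

def blob_centers(mask, min_area=20):
    """Return the (x, y) centroids of the blobs in a binary mask, skipping the
    background label and any blob below a placeholder minimum area."""
    _, _, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    return [tuple(c) for c, s in zip(centroids[1:], stats[1:])
            if s[cv2.CC_STAT_AREA] >= min_area]
```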

Robot border conditions (interaction with environment borders)

Although the robots interact with each other, it is a natural requirement that they should not be allowed to exit a confined experimental environment, for better control and as a precaution. One solution is to build bounding walls. On the other hand, it would be much easier to mark the available space with simple black/white tape on the floor, so that the robots can detect these borders with their bottom sensors, as in this simulation: https://www.geogebra.org/m/nDnPzHWs#material/ugNhFvjc

Extract fisheye lens

The fisheye lens is encapsulated in a frame with a high edge (meant to fit on the side of a phone) that prevents the camera module from being attached to the lens.

To extract the lens we can use:

  1. Dental drill or manicure drill
  2. Sandpaper
  3. Hot blade

Approaches 2 and 3 have the disadvantage that they might ruin the lens or the mirror.

D.o.D: The lens is extracted in a way that the camera module of the Pi can be attached to it.

Literature Research

Collect potentially citable documents and articles for our current approach.

Write test for segmented vision and merge to develop

After reviewing the current color segmentation and visual projection field calculation, we can fix the code and write unit tests for these functionalities.

The task is to bring the code quality and test coverage back up so that we can merge the segmented vision functionalities into develop.
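A minimal sketch of what such a unit test could look like; the import path and the function name are hypothetical and have to match the actual segmentation module.

```python
import unittest
import numpy as np

from visualswarm.vision import segment_target_color   # hypothetical module/function


class TestSegmentedVision(unittest.TestCase):
    def test_black_frame_yields_empty_mask(self):
        frame = np.zeros((48, 64, 3), dtype=np.uint8)
        self.assertEqual(int(segment_target_color(frame).sum()), 0)

    def test_target_colored_patch_is_detected(self):
        frame = np.zeros((48, 64, 3), dtype=np.uint8)
        frame[20:30, 30:40] = (0, 200, 0)   # green patch in BGR order
        self.assertGreater(int(segment_target_color(frame).sum()), 0)


if __name__ == "__main__":
    unittest.main()
```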

Milestone Description (Prepare Hardware)

This issue breaks down the milestone into a list of necessities in the form of a Definition of Done.

D.o.D: A single Raspberry Pi 4 is prepared/set up so that:

Basics:

  1. The camera module extension is connected, and the Pi recognizes it and is able to use it.
  2. The camera module is validated to some extent with commercial camera software for the Pi 4 or with a Python extension using the camera module.
  3. The two different camera modules are compared and information/reasoning is collected for further use in the project, i.e. do we need high resolution for the planned algorithms, and if so, why, etc.
  4. A final decision has been made about which camera module is used during the Lab Rotation.

Rough testing with Field of View:

Goal: Investigate the ability to acquire 360° visual information using a spherical mirror (from old iPhone extensions). Essentially, a thin "quasi-one-dimensional" image is sufficient for the model in 2D (movement on a surface), with the width of the image covering the 360° view at the azimuth and the height much lower than the width; relatively few pixels still allow for robust detection of blobs and edges in the visual projection field.

  1. The spherical mirrors are carefully extracted from their embedding cases and attached stably enough to the camera modules
  2. The obtained image is first validated with commercial camera software, without coding
  3. Conditions and limitations of using the spherical mirrors are collected (i.e. how wide the FOV is, how much distortion we have, etc.), and according to these it is decided whether we proceed with them or choose a bypass solution
  4. Fallback option in case of problems with 360° vision: implementation of the "quasi-1D" azimuth vision only within the forward view angle of the camera module. Such a restricted field of vision should already be enough to obtain stable collective motion

Extract position and orientation of robots

It is crucial to save data from the simulation environment, such as the position and orientation of the robots.

This can be achieved in two ways:
1.) Add a GPS and a Compass device to the robot and read their values regularly

2.) Use the Supervisor interface to get the position and the orientation of the robot

The fetched data should be written into files in a given file structure
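A minimal sketch of option 2 using the Webots Supervisor API; the DEF name of the robot and the output file are placeholders.

```python
from controller import Supervisor

supervisor = Supervisor()
timestep = int(supervisor.getBasicTimeStep())
robot_node = supervisor.getFromDef("THYMIO_0")   # placeholder DEF name

with open("robot_0_trajectory.csv", "w") as out:
    out.write("t,x,y,z,orientation\n")
    while supervisor.step(timestep) != -1:
        x, y, z = robot_node.getPosition()       # world coordinates
        rot = robot_node.getOrientation()        # 3x3 rotation matrix as a flat list of 9
        out.write(f'{supervisor.getTime()},{x},{y},{z},"{rot}"\n')
```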

Research best practises to calculate motor commands

In the POC the motor command calculation is quick and dirty and most probably not the best way to do it. We need to look into state-of-the-art motor control algorithms that enable us to calculate the left and right motor "velocities" from the agent's overall velocity and heading angle.
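One standard candidate is plain differential-drive kinematics; a minimal sketch (the axle length and wheel radius are rough placeholders for the Thymio, not measured values):

```python
def motor_commands(v, dpsi, axle_length=0.095, wheel_radius=0.022):
    """Translate overall forward velocity v [m/s] and turning rate dpsi [rad/s]
    into left/right wheel speeds using differential-drive kinematics."""
    v_left = v - (axle_length / 2.0) * dpsi
    v_right = v + (axle_length / 2.0) * dpsi
    # convert to angular wheel speeds [rad/s] if the motor interface expects that
    return v_left / wheel_radius, v_right / wheel_radius
```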

Validate wide-field vision and final decision

The acquired wide-field images shall be validated according to the following points:

  • distortion
  • FOV
  • transformation from circular into linear FOV
  • sufficient resolution after transformation and cutting
  • sufficient separation in colorspace

According to these points a final decision should be made on whether we proceed with the current solution.

Install working aseba on PI4 with raspbian 10 (buster)

Aseba has previously only been tested on a Pi 3 with Raspbian versions lower than 10. The deb package distributed via package managers such as apt, on the other hand, seems to be broken. The task is to find a way to install aseba and asebamedulla on a Pi 4 with Raspbian 10.

[WeBots] Try WeBots Simulation platform

As Argos does not seem to satisfy all our needs, we have to move to a more promising simulation platform, preferably with a Thymio2 plugin and the possibility of on-board camera integration, as this will be the input of our robots.

The task is to install and explore WeBots on a preferred OS.

Questions:

  • [AGENT] Is the Thymio2 plugin indeed there and usable/well supported for our needs? (Motor velocities, top LED values, proximity sensor values and bottom infrared sensor values are the most important.)
  • [CAMERA] Is there an on-board camera model, and how can we connect it to a robot? How can we configure this model to reflect the same FOV and AOV that we would get in a real scenario?
  • [CONTROLLER] How can we integrate already written Python controller code to move the robots accordingly? Is this possible?

[WeBots] Robot Identification

In the WeBots environment we can create Thymio robots with on-board camera modules, with which the robots should be able to identify each other.

In real life this will happen with a colorful skirt/halo element around the robots: agents will identify each other according to a target color and the colorful skirt/halo element.

The task is to look into the possibilities in the WeBots environment for adding a custom-made 3D skirt/halo around the Thymio robot models.

Crucial questions:

  • Is it possible to add elements/parts to the robot models on the WeBots platform?
  • If so, what do we need to do to add a 360-degree colorful sheet around the Thymio robots?
  • How can we integrate the found solution into our simulation environment?

Prepare python project structure

Prepare a generic Python project for VisualSwarm with a basic project folder structure and setup.py.

The project structure should serve as a basis for future implementation
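A minimal sketch of the accompanying setup.py; the package name, version and dependency list are placeholders.

```python
from setuptools import setup, find_packages

setup(
    name="visualswarm",
    version="0.1.0",
    description="Vision-based flocking with Raspberry Pi and Thymio robots",
    packages=find_packages(exclude=("tests",)),
    install_requires=["numpy", "opencv-python"],   # placeholder dependencies
)
```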

Map model parameters to real-life equilibrium distances

To enrich the content of the lab rotation or possibly the master's thesis, one should know how to translate model parameters into the real-life behavior of the robots. The main aspect that controls the behavior of a fixed-size agent using the flocking algorithm is the equilibrium distance, i.e. the distance from another target agent at which the attraction and repulsion forces on the agent are in equilibrium, and therefore the velocity is equal to the target equilibrium velocity. This should be measured while tuning the model parameters.
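A minimal sketch of how this mapping could be explored numerically, assuming a placeholder net-influence function that must be replaced by the model's actual response to a single target at distance d: sweep the distance and locate the zero crossing.

```python
import numpy as np

def net_influence(d, attraction=0.5, repulsion=0.02):
    """Placeholder for the net attraction/repulsion felt at distance d;
    replace with the response derived from the actual flocking model."""
    return attraction - repulsion / d**2   # negative up close, positive far away

distances = np.linspace(0.05, 2.0, 400)
values = np.array([net_influence(d) for d in distances])
crossings = np.where(np.diff(np.sign(values)) != 0)[0]
if crossings.size:
    print("approximate equilibrium distance:", distances[crossings[0]])
```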

Missing heat absorbers

The Pi 4 delivers higher performance than the Pi 3 and therefore also produces more heat. It was assumed that the cases include heat absorbers, but this is not the case.

We need to order heat absorbers for the Pis. We should first look into the options and then decide which absorber units to buy and how many.
