During the simulations, a pendulum-like movement has been observed. It is well known that discrete timestep control can give rise to this kind of behavior in dynamical systems. It should be checked whether the pendulum movement persists with a very fine-grained simulation.
As it is not straightforward to install OpenCV on a fresh Raspbian installation, the finalized workflow for preparing the software environment must be tested end to end.
Take a new SD card and freshly install Raspbian, then follow this guide to prepare the software environment. DoD: OpenCV can be imported in Python 3 inside a virtualenv.
As the robots have a limited angle of view, we also need to define the robots' movement when there is no object to interact with in the available FOV. This could be a slow and smooth Brownian motion. The task is to check best practices and implement a reasonable rest movement that enables the robots to find interaction partners in the long run.
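One possible sketch of such a rest movement is a correlated random walk in which the turning rate follows an Ornstein-Uhlenbeck process, giving slow, smooth wandering instead of jittery direction flips. All parameter values below are placeholders to be tuned:

```python
import math
import random

def rest_step(heading, turn_rate, dt=0.1, sigma=0.5, relax=1.0, speed=0.05):
    """One step of a smooth correlated random walk (parameters hypothetical).

    The turning rate follows an Ornstein-Uhlenbeck process: it relaxes
    toward zero and is driven by Gaussian noise, which produces smooth
    Brownian-like wandering of the heading.
    Returns the updated (heading, turn_rate) and the (dx, dy) displacement.
    """
    turn_rate += (-relax * turn_rate * dt
                  + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    heading = (heading + turn_rate * dt) % (2.0 * math.pi)
    dx = speed * dt * math.cos(heading)
    dy = speed * dt * math.sin(heading)
    return heading, turn_rate, dx, dy
```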
After implementing both the visual projection field and the basic flocking algorithm of an agent, the two shall be connected and tested for behavior, e.g.: if one moves a blob closer to the camera, how does the velocity vector change over time?
The task is to implement the simplest case of the flocking algorithm described in the main article. A function should be created that takes a visual projection field as input and, according to this, calculates the temporal changes in the velocity (v) and the heading vector (psi) of an agent.
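The interface of that function could look like the sketch below. Note that the exact weighting (cosine/sine kernels, combination of the field with its retinal derivative) must be taken from the main article; the gains `alpha` and `beta` and the particular weighting used here are placeholders:

```python
import numpy as np

def flocking_update(V, alpha=0.5, beta=0.5):
    """Sketch of the model interface (gains alpha/beta are hypothetical).

    V: 1D binary visual projection field over the azimuth, V[i] = 1 where
    another agent's blob is visible; the agent's front is at phi = 0.
    Returns (dv, dpsi): the change in speed and heading per unit time.
    """
    phi = np.linspace(-np.pi, np.pi, V.size, endpoint=False)
    dphi = 2 * np.pi / V.size
    # |dV/dphi| marks the blob edges in the visual field.
    edges = np.abs(np.diff(V, append=V[:1]))
    dv = alpha * np.sum(np.cos(phi) * V) * dphi       # accelerate toward blobs ahead
    dpsi = beta * np.sum(np.sin(phi) * edges) * dphi  # turn toward blob edges
    return dv, dpsi
```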
Measure the time delay between an incoming raw camera image and the output of the behavioral parameters (i.e. the time delay of the full computational workflow).
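Assuming the workflow stages are callable from Python, a simple way to measure this is to wrap each stage (or the whole pipeline) with a wall-clock timer:

```python
import time

def timed(stage, fn, *args, **kwargs):
    """Run one pipeline stage and report its wall-clock duration in ms."""
    t0 = time.perf_counter()
    result = fn(*args, **kwargs)
    dt_ms = (time.perf_counter() - t0) * 1000.0
    print(f"{stage}: {dt_ms:.2f} ms")
    return result, dt_ms
```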
The Raspberry Pis are not powerful enough to use common real-time visualization tools such as the matplotlib package. Other packages such as pyqtgraph, which are supposedly much faster than matplotlib, require additional software that might be particularly tricky to install on a Pi.
Task: find a solution to plot real-time (or close to real-time) mathematical data on actual figures instead of cv2.imshow.
This is key to visualize velocity vectors, visual projection fields, etc.
To double-check that the motor control commands we calculate from the output state variables of the algorithm are correct, this calculation should already be formalized mathematically at this point of the project and validated with PR.
It is highly probable that the halo in the scaffold design is way too low: the robot agents cannot see each other when they are too close together. The halo should be elevated, possibly to roof height.
The fisheye lens should be fixed on the camera module, and the camera module should be steadily attached to the casing of the Pi in a way that the orientation of the hardware modules fits our future goals.
To illustrate how the different elements are connected and how they communicate, create an architecture plan of the stack as well as a process diagram that includes the parallel computations in the stack.
Low amperage will cause the Pi to underclock the CPU during heavy computation, and power failures can cause the Pi to shut off unexpectedly.
Unexpected failures like this can cause the SD card to become permanently read-only, which would make it unusable.
As it turns out, we are not alone (1)(2) with the problem of making a Pi 4 portable. For previous Pi versions a lower-output-amperage battery served just fine, but the Pi 4 needs a stable 5V/3A input, which is rather rare among portable chargers.
Solution
Choose another power bank that provides a "fast charging" function with 5V/3A output. The following candidates can fit our needs.
During this milestone, a connection/interface between the camera module and the processing unit (in this case a piece of Python code) is provided so that:
Image Acquisition
The software is able to acquire images from the camera module in Python using OpenCV or a similar library (OpenCV is preferred).
The acquired image can be further processed programmatically.
A camera stream is established between the software and the camera module with the desired sampling frequency.
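A sketch of such an acquisition loop, assuming OpenCV's `cv2.VideoCapture` is used; the `capture` argument is there so a fake object can be injected for testing without hardware:

```python
import time

def stream_frames(device=0, fps=30.0, n_frames=100, capture=None):
    """Grab n_frames at approximately the desired sampling frequency.

    `capture` may be any object with read()/release(); by default a
    cv2.VideoCapture on the given device index is opened.
    """
    if capture is None:
        import cv2  # imported here so a fake capture can be injected in tests
        capture = cv2.VideoCapture(device)
    period = 1.0 / fps
    frames = []
    try:
        for _ in range(n_frames):
            t0 = time.perf_counter()
            ok, frame = capture.read()
            if not ok:
                break
            frames.append(frame)  # a numpy array, ready for further processing
            # Sleep off the remainder of the sampling period.
            time.sleep(max(0.0, period - (time.perf_counter() - t0)))
    finally:
        capture.release()
    return frames
```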
Field of View extraction:
According to the solution chosen in the previous milestone, the goal is to implement either an approximately 360° quasi-1D azimuth vision, or the same kind of vision restricted to the camera's own view angle (ca. 100°).
An efficient solution is provided in which only the needed part of the stream is kept and processed; the remaining sensor information is disposed of as early as possible in the processing stream, so that we do not waste resources.
The best-preserved field of view shall be extracted from the camera stream, i.e. the one with the least distortion: middle elevation, full azimuth.
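For the fallback case without the mirror (plain front camera), FOV extraction can be a plain array slice that keeps a thin middle-elevation band and drops everything else immediately; the mirror case would instead sample a ring around the image center. `band_height` is an assumed parameter:

```python
import numpy as np

def extract_fov_band(frame, band_height=8, center_row=None):
    """Keep only a thin horizontal band at middle elevation.

    The band at middle elevation has the least distortion; everything
    else is dropped as early as possible to save resources.  The result
    is a numpy view, so no pixel data is copied.
    """
    h = frame.shape[0]
    c = h // 2 if center_row is None else center_row
    top = max(0, c - band_height // 2)
    return frame[top:top + band_height]
```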
Blob detection/segmentation
Develop Python code for integrating a binary visual projection field (blobs) and the absolute value of its retinal derivative (edges). For binarization, I suggest thresholding a single RGB channel, e.g. green, so that we can later put colored tape on the robots or objects to clearly distinguish them from the background. Example of a colorspace-based approach: https://www.authentise.com/post/object-detection-using-blob-tracing
The segmentation should happen either before or after FOV extraction, depending on the segmentation algorithm we use.
The segmentation shall be fast enough to outline edges in real time, assuming that the Thymio robots will have a special color scheme.
The exact order of segmentation and FOV extraction is not yet clear and shall be tailored to the algorithms we will use.
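Under the green-channel thresholding suggested above, the projection field and its retinal derivative can be computed with plain numpy; the threshold value here is a guess to be calibrated against the tape color:

```python
import numpy as np

def projection_field(band, threshold=128, channel=1):
    """Binary visual projection field and its retinal derivative.

    band: H x W x 3 RGB strip (rows = elevation, columns = azimuth).
    A column counts as "blob" if its green channel (channel=1) exceeds
    the threshold anywhere in the strip; the absolute azimuthal
    derivative then marks the blob edges.
    """
    mask = band[:, :, channel] > threshold    # per-pixel binarization
    V = mask.any(axis=0).astype(np.float32)   # collapse elevation -> 1D field
    edges = np.abs(np.diff(V, append=V[:1]))  # |dV/dphi| over the azimuth
    return V, edges
```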
After finishing vision and segmented vision as well as the basic motor control according to the main algorithm, update the README files with the corresponding descriptions.
Provide a solution to easily pass and save parameters to the stack, e.g. via a JSON file. This way the default environment variables and parameters used in the contrib package could be overwritten from a file, so we could easily reproduce any result.
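A minimal sketch of the override mechanism; the parameter names in `DEFAULTS` are placeholders, not the real contrib-package variables:

```python
import json

# Placeholder defaults; the real ones live in the contrib package.
DEFAULTS = {"equilibrium_distance": 0.5, "alpha": 0.5, "beta": 0.5}

def load_params(path=None):
    """Return the defaults, overridden by any keys found in a JSON file.

    Rejecting unknown keys catches typos in the override file early.
    """
    params = dict(DEFAULTS)
    if path is not None:
        with open(path) as f:
            overrides = json.load(f)
        unknown = set(overrides) - set(params)
        if unknown:
            raise KeyError(f"unknown parameters: {sorted(unknown)}")
        params.update(overrides)
    return params
```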
To be able to use the current camera module with a limited FOV (without 360° vision), we first need to measure the real FOV angle of the camera module and include it as a parameter in the current code base. After that, the parameters of the flocking algorithm must be tuned accordingly, so that the movement response is sufficiently sensitive for this limited FOV.
Implement a process that extracts a given color range from the raw visual input and cleans the result such that the output contains no small-grained noise, the input queue is always clean, and the computation is efficient. The process should be targeted with an RGB color.
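A numpy-only sketch of the two steps; the RGB bounds are placeholders to be calibrated, and on the Pi `cv2.inRange` plus a `cv2.morphologyEx` opening would be the efficient equivalents:

```python
import numpy as np

def color_mask(frame, lo=(0, 120, 0), hi=(80, 255, 80)):
    """Binary mask of pixels inside an RGB target range.

    The default bounds roughly select a green tape color and are
    placeholders, not calibrated values.
    """
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((frame >= lo) & (frame <= hi), axis=-1)

def despeckle(mask, min_neighbors=4):
    """Drop small-grained noise from a binary mask.

    A pixel survives only if enough of its 3x3 neighborhood is also set;
    this is a cheap stand-in for a morphological opening.
    """
    m = mask.astype(np.uint8)
    p = np.pad(m, 1)
    neighbors = sum(p[i:i + m.shape[0], j:j + m.shape[1]]
                    for i in range(3) for j in range(3)) - m
    return mask & (neighbors >= min_neighbors)
```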
To be able to monitor the Pi's performance in real time, create a corresponding measurement in InfluxDB and forward it to Grafana using psutil, as in this source: https://simonhearne.com/2020/pi-metrics-influx/
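One sample could be formatted as an InfluxDB line-protocol string like in the sketch below. The measurement, tag, and field names are our own choice (not prescribed by the linked guide), psutil is the default source, and a precomputed sample can be injected for testing:

```python
import time

def pi_metrics_line(measurement="pi_stats", host="pi4", sample=None):
    """Format one performance sample as InfluxDB line protocol."""
    if sample is None:
        import psutil  # third-party; installed as in the linked guide
        sample = {
            "cpu_percent": psutil.cpu_percent(interval=None),
            "mem_percent": psutil.virtual_memory().percent,
        }
    fields = ",".join(f"{k}={v}" for k, v in sample.items())
    # measurement,tag field1=...,field2=... timestamp(ns)
    return f"{measurement},host={host} {fields} {time.time_ns()}"
```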
The extracted fisheye lens should be attached to the camera lens carefully, yet in a stable way. The acquired image shall be tested with simple tools such as raspistill.
In case multiple researchers want to work on the repo at the same time, branch protection is necessary. Go through the GitHub settings and configure them as recommended (develop as default branch, controlled merges, etc.).
Implement the model equations in Python so that they output turning rates and acceleration/deceleration commands based on the visual input.
During this milestone additional features of the software prototype are implemented such that:
A stable stream of extracted, blob/edge-detected visual imagery is provided as input.
The SW can identify the centers of the blobs; the blobs are then transformed according to the main article of the project using a cosine function, so that we recover the two main metrics of the visual behavior: the "blob area" and the "edge size".
The blob area and edge size are transformed into attractive and repellent forces and encoded as a set of output turning rates and acceleration/deceleration commands based on the visual input.
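Identifying the blob centers from the 1D binary field could be sketched as below; wrap-around blobs across the ±π seam are ignored in this sketch, and the cosine transform itself must follow the main article:

```python
import numpy as np

def blob_centers(V):
    """Azimuthal centers (in radians) of the blobs in a 1D binary field V.

    Runs of ones are located via the sign changes of V; a blob wrapping
    around the field boundary is not handled here.
    """
    n = V.size
    d = np.diff(np.concatenate(([0], V, [0])))
    starts = np.flatnonzero(d == 1)     # first column of each run
    ends = np.flatnonzero(d == -1)      # exclusive end column of each run
    centers = (starts + ends - 1) / 2.0
    # Map column index -> azimuth angle in [-pi, pi).
    return (centers / n) * 2 * np.pi - np.pi
```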
Although the robots interact with each other, it is a natural requirement that they not be allowed to exit a confined experimental space, for better control and as a precaution. One solution is to build bounding walls. On the other hand, it would be much easier to mark the available space with simple black/white tape on the floor, so that the robots can detect these borders with their bottom sensors, as in this simulation: https://www.geogebra.org/m/nDnPzHWs#material/ugNhFvjc
The fisheye lens is encapsulated in a frame with a high edge (designed to clip onto the side of a phone) that prevents the camera module from being attached to the lens.
To extract the lens we can use:
Dental drill or manicure drill
Sandpaper
Hot blade
Approaches 2 and 3 carry the risk of ruining the lens or the mirror.
D.o.D: The lens is extracted in a way that the camera module of the Pi can be attached to it.
After reviewing the current color segmentation and visual field projection calculation, we can fix the code and write unit tests for these functionalities.
The task is to bring the code quality and test coverage up so that we can merge the segmented-vision functionality into develop.
This issue breaks the milestone down into a list of necessities as a Definition of Done.
D.o.D: A single Raspberry Pi 4 is prepared/set up so that:
Basics:
The camera module extension is connected; the Pi recognizes it and is able to use it.
The camera module is validated to some extent with commercial camera software for the Pi 4 or with a Python extension that uses the camera module.
The two different camera modules are compared, and information/reasoning is collected for further use in the project, i.e.: do we need high resolution for the planned algorithms, and if so, why, etc.
A final decision has been made about which camera module is used during the Lab Rotation.
Rough testing with Field of View:
Goal: investigate the ability to acquire 360° visual information using a spherical mirror (from old iPhone extensions). Essentially a thin "quasi-one-dimensional" image is sufficient for the model in 2D (movement on a surface): the width of the image covers the 360° view at the azimuth, while the height is much lower than the width, with relatively few pixels still allowing robust detection of blobs and edges in the visual projection field.
The spherical mirrors are carefully extracted from their embedding cases and attached stably enough to the camera modules.
The obtained image is first validated with commercial camera software, without coding.
Conditions and limitations of using the spherical mirrors are collected (i.e.: how wide the FOV is, how much distortion we have, etc.), and based on these it is decided whether we proceed with them or choose a bypass solution.
Fallback option for problems with 360° vision: implementation of the "quasi-1D" azimuth vision only within the front view angle of the camera module. Such a restricted field of vision should already be enough to obtain stable collective motion.
In the POC, the motor command calculation is a quick-and-dirty computation that is most probably not the best approach. We need to look into state-of-the-art motor control algorithms that enable us to calculate left and right motor "velocities" from the agent's overall velocity and heading angle.
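As a starting point, the standard differential-drive relations already give left/right wheel speeds from the overall speed and turning rate. The wheel base and the conversion factor to Thymio motor units below are rough assumptions that need calibration:

```python
def motor_commands(v, dpsi, wheel_base=0.095, scale=500.0):
    """Left/right motor targets from speed v (m/s) and turning rate dpsi (rad/s).

    Uses the standard differential-drive relations.  wheel_base is roughly
    the Thymio II axle width and `scale` converts m/s into Thymio motor
    units; both values are assumptions, not measurements.
    """
    v_left = v - (wheel_base / 2.0) * dpsi
    v_right = v + (wheel_base / 2.0) * dpsi

    def clamp(x):
        # Clamp to the Thymio motor command range (about -500..500).
        return max(-500, min(500, int(round(x * scale))))

    return clamp(v_left), clamp(v_right)
```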
As the robots have to recognize each other during the experiments, we have to design a scaffold around the Raspberry Pis and the batteries on top of the robots that has the following specifications:
Aseba has previously only been tested on a Pi 3 with Raspbian versions lower than 10. The deb package distributed via package managers such as apt, on the other hand, seems to be broken. The task is to find a way to install aseba and asebamedulla on a Pi 4 with Raspbian 10.
As Argos does not seem to satisfy all our needs, we have to move to a more promising simulation platform, preferably with a Thymio2 plugin and the possibility of on-board camera integration, as this will be the input of our robots.
The task is to install and explore WeBots on a preferred OS.
Questions:
[AGENT] Is the Thymio2 plugin indeed there and usable/well supported for our needs? (Motor velocities, top LED values, proximity sensor values, and bottom infrared sensor values are the most important.)
[CAMERA] Is there an on-board camera model, and how can we connect it to a robot? How can we configure this model to reflect the same FOV and AOV that we would get in a real scenario?
[CONTROLLER] How can we integrate an already written Python controller to move the robots accordingly? Is this possible?
In the WeBots environment we can create Thymio robots with on-board camera modules, with which the robots should be able to identify each other.
In real life this will happen with a colorful skirt/halo element around the robots: agents will identify each other according to a target color on the colorful skirt/halo element.
The task is to look into the possibilities in the WeBots environment for adding a custom-made 3D skirt/halo around the Thymio robot models.
Crucial questions:
Is it possible to add elements/parts to the robot models in WeBots platform?
If it is possible, what do we need to do to add a 360° colorful sheet around the Thymio robots?
How can we integrate the found solution into our simulation environment?
To enrich the content of the lab rotation, or possibly the master's thesis, one should know how to translate model parameters into the real-life behavior of the robots. The main aspect that controls the behavior of a fixed-size agent using the flocking algorithm is the equilibrium distance, i.e. the distance from a target agent at which the attraction and repulsion forces on the agent are in equilibrium and the velocity therefore equals the target equilibrium velocity. This should be measured while tuning the model parameters.
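Once the force terms are implemented, the equilibrium distance can be computed numerically for any parameter setting, e.g. by bisection on the net radial force. The force function in the usage example below is purely illustrative, not the model's:

```python
def equilibrium_distance(force, lo=0.01, hi=5.0, tol=1e-6):
    """Distance at which the net radial force on the agent vanishes.

    `force(d)` is the net force (attraction minus repulsion) at distance d.
    Bisection assumes force(lo) < 0 (repulsion dominates up close) and
    force(hi) > 0 (attraction dominates far away).
    """
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if force(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative force only: repulsion ~ 1/d^2 against a constant attraction.
d_eq = equilibrium_distance(lambda d: 1.0 - 0.25 / d**2)  # ~0.5
```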
The Pi 4 delivers higher performance than the Pi 3 and therefore also produces more heat. It was assumed that the cases include heat absorbers, but this is not the case.
We need to order heat absorbers for the Pis. We should first look into the options and then decide which absorber units to buy, and how many.