hydra-alpha's Issues

Add the PCA9685 driver

We then add the three specific servos we'll need in the project.

Ideally, we'd like to set each servo's angle by passing the angle as a parameter, as in the sketch below.
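
A minimal sketch of what this could look like, assuming the common Adafruit PCA9685 library; the pulse range, channel numbers, and radian convention are assumptions to tune against the actual servos:

#include <Adafruit_PWMServoDriver.h>

Adafruit_PWMServoDriver pwm;  // default I2C address 0x40

constexpr int SERVO_MIN_TICKS = 102;  // ~0.5 ms pulse at 50 Hz (out of 4096 ticks); assumed range
constexpr int SERVO_MAX_TICKS = 512;  // ~2.5 ms pulse

void setupServoDriver()
{
    pwm.begin();
    pwm.setPWMFreq(50);  // standard 50 Hz servo refresh rate
}

// set a servo channel to an angle in radians over an assumed [0, PI] range
void setServoAngle(uint8_t channel, float angleRad)
{
    constexpr float kPi = 3.14159265f;
    int ticks = SERVO_MIN_TICKS
        + static_cast<int>((angleRad / kPi) * (SERVO_MAX_TICKS - SERVO_MIN_TICKS));
    pwm.setPWM(channel, 0, ticks);
}

The three project servos (the two camera servos and the sonar one) would then each wrap one channel of this driver.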

Implement the 2 main modes

For following your commands:

bdcMotorLeft.setTorque(throttle - steer);
bdcMotorRight.setTorque(throttle + steer);

servoCameraZ.setAngle(latestReceivedRfData.cameraZ);
servoCameraY.setAngle(latestReceivedRfData.cameraY);

For the "swarm is swarmy" mode:

if(EspCam::rectangle)
{
    followOtherTank();
}
else
{
    searchForOtherTank();
}

Then, in followOtherTank():

// finding the center of where the other tank is in our image
Vector2 midRec = EspCam::rectangle.center();

// shifting the center to get signed values
Vector2 shiftedMidRec = midRec - Vector2{320/2, 240/2};

// the camera's horizontal field of view (~62°) spread over the 320-pixel width
float radPerPixel = (62 * (PI/180))/320;

// scale so the values are in radians/pixel instead
Vector2 rotationAmounts = shiftedMidRec * radPerPixel;

// create rotator vectors
Vector3 frontVector = Vector3{1, 0, 0};
Vector3 rotatorZ = Vector3{0, 0, 1} * rotationAmounts.x;
Vector3 rotatorY = Vector3{0, 1, 0} * rotationAmounts.y;

// perform the initial rotation to find the orientation of the vector in the camera's perspective
Vector3 approximateOrientationInCameraPerspective = frontVector
    .rotate(rotatorZ)
    .rotate(rotatorY);

// find rotator vectors that correspond to servo motor angles
Vector3 rotatorServoZ = Vector3{0, 0, 1} * servoCameraZ.getAngle();
Vector3 rotatorServoY = Vector3{0, 1, 0} * servoCameraY.getAngle();

// rotate the vector that was in camera perspective using the servo motor rotators to find its orientation in local coordinates
Vector3 offsetedOtherTankDirection = approximateOrientationInCameraPerspective
    .rotate(rotatorServoZ)
    .rotate(rotatorServoY);

// this function takes in the distance between 2 corners of the rectangle, and outputs an approximate distance
float approximateDistanceOfOtherTank = findDistanceFromRectangleSize(EspCam::rectangle.diagonalLength());

// generate the approximate ray corresponding to the orientation vector
Ray3 localPositionOfOtherTank = Ray3{
    CAMERA_POSITION_OF_CENTER_OF_ROTATION_ON_TANK,
    offsetedOtherTankDirection * approximateDistanceOfOtherTank
};

// after finding the orientation of the other tank, we need to do 2 (3?) things:
// 1) update the servo motors to try to put the other tank in the middle of the pixels of the camera;
// 2) update the bdc motors to try to always make the tank face towards the other tank, and to try to always keep a certain specific distance from it.
// 3) the sonar could be used to try to detect walls and avoid them, but this is certainly more advanced, and it adds complexity, so let's focus on the camera, servos and bdcs first. 

// it's important to note that we wish to handle the motor controls (only the bdcs) using pid controllers (and actual sensor values). 
// the servo motors will be updated using simple exponentials (newAngle = currentAngle + (desiredAngle - currentAngle)*K). 

Finally, in searchForOtherTank():

// turn and turn and turn until we find the other tank.
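
A minimal sketch of the two update rules mentioned above, reusing the motor and servo objects from the snippets; the Pid helper, the gains, and the error inputs are assumptions:

// generic textbook PID for the bdc motors; gains are untuned placeholders
struct Pid
{
    float kp, ki, kd;
    float integral = 0, previousError = 0;

    Pid(float p, float i, float d) : kp(p), ki(i), kd(d) {}

    float update(float error, float dt)
    {
        integral += error * dt;
        float derivative = (error - previousError) / dt;
        previousError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

Pid distancePid{1.0f, 0.0f, 0.1f};  // holds the desired distance to the other tank
Pid headingPid{1.0f, 0.0f, 0.1f};   // keeps us facing the other tank

void updateBdcMotors(float distanceError, float headingError, float dt)
{
    float throttle = distancePid.update(distanceError, dt);
    float steer = headingPid.update(headingError, dt);
    bdcMotorLeft.setTorque(throttle - steer);
    bdcMotorRight.setTorque(throttle + steer);
}

// exponential servo update: newAngle = currentAngle + (desiredAngle - currentAngle)*K
void updateCameraServo(Servo& servo, float desiredAngle)
{
    constexpr float K = 0.2f;  // smoothing gain, to be tuned
    float current = servo.getAngle();
    servo.setAngle(current + (desiredAngle - current) * K);
}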

Try to implement the "fast tank recognition" algorithm

Here are the steps I came up with (a sketch follows the list):

  • Transform the JPEG image into a 320x240 array;
  • Loop over every 4th pixel in x AND y;
  • In the loop, if blue/(red+green) > some value (or maybe blue/red > some value && blue/green > some value? Unsure right now), add the x and y positions separately into (not-yet-divided) average variables of type long, and count the number of "blue-enough" pixels;
  • After the loop, divide the undivided x and y averages by the number of blue-enough pixels we counted. That's where the other tank probably is in the camera frame;
  • Take the square root of the blue-enough counter. That's our square side length;
  • Divide that square side length by 2, and create the upperLeft and lowerRight points by subtracting and adding that halved side length from the center we just found;
  • There you go! You can now send that rectangle to the master;
  • Obviously, if we counted no blue-enough pixels, we found no rectangle. The protocol I created already handles that.
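
A sketch of those steps in code, assuming the frame was already decoded into a 320x240 rgb565 array and that the first threshold variant is used:

#include <cmath>
#include <cstdint>

struct Rect { int x1, y1, x2, y2; bool valid; };

// frame is assumed to be a 320x240 rgb565 buffer, one uint16_t per pixel
Rect findBlueRectangle(const uint16_t* frame)
{
    constexpr int kWidth = 320;
    constexpr int kHeight = 240;

    long sumX = 0, sumY = 0;  // the "not-yet-divided" averages
    long blueCount = 0;

    for (int y = 0; y < kHeight; y += 4)  // every 4th pixel in x AND y
    {
        for (int x = 0; x < kWidth; x += 4)
        {
            uint16_t p = frame[y * kWidth + x];
            int r = (p >> 11) & 0x1F;  // 5 bits of red
            int g = (p >> 5) & 0x3F;   // 6 bits of green
            int b = p & 0x1F;          // 5 bits of blue
            // "blue-enough" test; green is halved so every channel spans 5 bits.
            // the exact threshold is still undecided, so > 1x is a placeholder.
            if (b > r + (g >> 1))
            {
                sumX += x;
                sumY += y;
                ++blueCount;
            }
        }
    }

    if (blueCount == 0)
    {
        return Rect{0, 0, 0, 0, false};  // no rectangle found
    }

    int cx = static_cast<int>(sumX / blueCount);
    int cy = static_cast<int>(sumY / blueCount);
    // square root of the counter gives the square's side length
    // (with the 4-pixel stride this may need a x4 scale in practice)
    int half = static_cast<int>(std::sqrt(static_cast<float>(blueCount))) / 2;
    return Rect{cx - half, cy - half, cx + half, cy + half, true};
}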

Add basic Nrf24L01 implementation

Some code to send and receive on 3 different channels:

  • One on which we transmit;
  • Two on which we receive.

The receiving channel's pipe number is the same as the device Id for RF communications (#0, #1 or #2).

This way, we get a nice triangular connection mesh; a minimal setup sketch follows.
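
A minimal setup sketch, assuming the common RF24 Arduino library (pins and addresses are placeholders):

#include <RF24.h>

RF24 radio(4, 5);  // CE and CSN pins are placeholders

constexpr uint8_t DEVICE_ID = 0;  // #0, #1 or #2, set per tank

// one 5-byte address per device: each tank writes on its own address
// and listens for the two others on the pipe matching their device Id
const uint8_t addresses[3][6] = {"TNK0A", "TNK1A", "TNK2A"};

void setupRadio()
{
    radio.begin();
    radio.openWritingPipe(addresses[DEVICE_ID]);
    for (uint8_t id = 0; id < 3; ++id)
    {
        if (id != DEVICE_ID)
        {
            // pipe number == sender's device Id, as described above
            // (note: RF24 shares pipe 0 with the writing pipe, so device #0's
            // pipe may need special handling)
            radio.openReadingPipe(id, addresses[id]);
        }
    }
    radio.startListening();
}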

Remove useless delays in the Slow I2C class

Slow I2C works right now, but we can optimize it a little by removing some delays.

Currently, the class is set up so that every time we send or receive a bit, we wait a little for the slave to keep up.

The thing is, the write path works with as little as 10 µs of delay, whereas the read path needs 200 to 300 µs to work properly.

I just realized that the reason for this might not be that it's slower for the slave to write back, but that the slave only fills its queue once it knows it will send data back, and I believe that step takes quite some time. If we insert a delay whose length adjusts to the number of bytes the master requested (right after sending the address byte for reading), we could probably reduce the delay between slave bit-writes to something far more reasonable; a sketch of the idea follows.
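
A sketch of that idea; the SlowI2c method names and the timing constants are assumptions to be measured on hardware:

static constexpr uint32_t BIT_DELAY_US = 10;    // per-bit delay that already works for writes
static constexpr uint32_t QUEUE_FILL_US = 100;  // assumed per-byte queue-fill time, to measure

// hypothetical read path of the Slow I2C class
void SlowI2c::requestFrom(uint8_t address, uint8_t byteCount)
{
    sendAddressByteForRead(address);
    // give the slave time to fill its transmit queue, proportional to the
    // request size, instead of padding every single bit read with a long delay
    delayMicroseconds(QUEUE_FILL_US * byteCount);
    for (uint8_t i = 0; i < byteCount; ++i)
    {
        readByte(BIT_DELAY_US);  // now only needs the short per-bit delay
    }
}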

Create the protocol for the espcam and esp32

The esp32 master can send requests to the espcam:

  1. enable/disable sending images over wifi to Remote;
  2. request pixel corner positions of the rectangle that contains the other tank.

I2C data transfer looks like this:

  • slave address is 0x25 (assuming it's not already in use...);
  • command Id (#0 or #1);
  • data from master (expected length varies with command Id);
  • data from slave (expected length varies with command Id too).

I2C data transfer specification for command Id #0:

  • 0x25; // slave address
  • 0x00; // command Id
  • 1 byte from master (only the msb matters, all the other bits are "don't care");
  • 1 byte from slave:
      • 0x00 means the wifi comm is fine and we're transmitting (or, if the wifi comm was disabled via the msb the master sent, that we're idling and ready to process any incoming request);
      • 0x01 means we are trying to connect to other esp32s over wifi;
      • 0x02 means we failed to connect to other esp32s.

note: when receiving "0x02" from the slave after having sent command Id #0, the master must restart the espcam by pulling the "reset" line low for about [long enough to ensure restart] ms.

Example:
0x25 0x00 0x80 0x01
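
For illustration, a sketch of the master side of this transfer using the standard Arduino Wire API (the project's Slow I2C class would take its place; the repeated-start framing is an assumption):

#include <Wire.h>

// returns the slave's status byte (0x00, 0x01 or 0x02)
uint8_t sendCommand0(bool msb)
{
    Wire.beginTransmission(0x25);                      // slave address
    Wire.write(uint8_t(0x00));                         // command Id #0
    Wire.write(msb ? uint8_t(0x80) : uint8_t(0x00));   // only the msb matters
    Wire.endTransmission(false);                       // repeated start, keep the bus
    Wire.requestFrom(uint8_t(0x25), uint8_t(1));       // 1 status byte from the slave
    return uint8_t(Wire.read());
}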

I2C data transfer specification for command Id #1:

  • 0x25;
  • 0x01;
  • 0 byte from master (we don't need to tell the slave anything);
  • 1 or 7 bytes from slave (explained right below).

The data the slave transmits is separated into 5 different parts:

  1. the msb of the first byte indicates data validity. If it's 0, the communication stops right there because we don't see any rectangle in the camera frame, and the remaining 7 bits are "don't care". If the msb is 1, more bytes are coming from the slave;
  2-5. the next four 12-bit chunks (12 + 12 + 12 + 12 bits) correspond to the pixel positions (X1 Y1 X2 Y2); this is 6 bytes.

Example:

0x25 0x01 0x80 0x23 0x68 0xC2 0x4E 0x83 0x7D
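
A sketch of how the master might unpack that reply; the exact bit packing (MSB-first, X1 Y1 X2 Y2 order) is an assumption consistent with the spec above:

#include <cstdint>

struct Rectangle { uint16_t x1, y1, x2, y2; bool valid; };

// d points at the bytes received from the slave (1 or 7 of them)
Rectangle parseRectangleReply(const uint8_t* d)
{
    if ((d[0] & 0x80) == 0)
    {
        return Rectangle{0, 0, 0, 0, false};  // no rectangle seen, only 1 byte sent
    }

    // four 12-bit values packed into the 6 bytes after the validity byte
    uint16_t x1 = uint16_t(d[1]) << 4 | d[2] >> 4;
    uint16_t y1 = uint16_t(d[2] & 0x0F) << 8 | d[3];
    uint16_t x2 = uint16_t(d[4]) << 4 | d[5] >> 4;
    uint16_t y2 = uint16_t(d[5] & 0x0F) << 8 | d[6];

    return Rectangle{x1, y1, x2, y2, true};
}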

Change the file format of the camera

Change the initial capture to rgb565, then use the transformation function to translate it to JPEG (it seems easier in this order!).

We can then use the rgb565 frame directly in our algorithm!
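
A minimal sketch with the esp32-camera driver, showing only the relevant fields (pin, clock, and buffer configuration omitted; the JPEG quality value is arbitrary):

#include <cstdlib>
#include "esp_camera.h"
#include "img_converters.h"

void captureAndConvert()
{
    camera_config_t config = {};
    config.pixel_format = PIXFORMAT_RGB565;  // capture raw rgb565 frames
    config.frame_size = FRAMESIZE_QVGA;      // 320x240, matching the algorithm
    // ... pin, clock, and frame buffer fields omitted ...
    esp_camera_init(&config);

    camera_fb_t* fb = esp_camera_fb_get();   // rgb565 frame, ready for the algorithm

    // convert to JPEG only when we actually need to send an image over wifi
    uint8_t* jpgBuf = nullptr;
    size_t jpgLen = 0;
    frame2jpg(fb, 80, &jpgBuf, &jpgLen);

    free(jpgBuf);                            // frame2jpg allocates the output buffer
    esp_camera_fb_return(fb);
}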

Add sonar scanner behaviour

The idea is to constantly sweep the servomotor controlling the sonar from side to side with sinusoidal speed: it moves toward the right slowly, then fast, then slowly again near the max angle, then stops; then it goes slowly in the other direction, fast at the midpoint, slowly again, and stops near the other max angle; then it repeats.

While moving from side to side, we sample distances with the sonar, and save them in an array indexed by angles.

With this, we can have a pretty good idea of the distance from walls in front of us.

We can therefore use this data to check whether we're about to run into a wall, and to generate crude drivable directions.
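
A minimal sketch of the sweep-and-sample loop; the servoSonar and sonar objects, the sweep limits, and the speed are assumptions:

#include <cmath>

constexpr float kMaxAngleRad = 1.0f;  // sweep between -kMaxAngleRad and +kMaxAngleRad
constexpr float kSweepSpeed = 2.0f;   // angular frequency of the sine, in rad/s
constexpr int kSampleCount = 32;      // resolution of the distance-by-angle array

float distanceByAngle[kSampleCount];

void updateSonarScan(float timeSeconds)
{
    // sinusoidal sweep: slow near the extremes, fast through the middle
    float angle = kMaxAngleRad * std::sin(kSweepSpeed * timeSeconds);
    servoSonar.setAngle(angle);

    // store the sample in the slot matching the current angle
    int index = static_cast<int>(
        (angle + kMaxAngleRad) / (2 * kMaxAngleRad) * (kSampleCount - 1));
    distanceByAngle[index] = sonar.readDistance();
}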
