
2023-robot-code's People

Contributors

anonymousomeone, codecreator3, davidemassarenti-optio3, luminousllama, miamanzella, psifi96, randomstring, soliade, sonicsquirrels


2023-robot-code's Issues

make vision subsystem with 2 vision IOs for left and right cameras

You will most likely have to change the code in VisionIOPhotonVision to accommodate multiple cameras.

Also, many of the Vision top-level methods will have to be changed to accommodate polling multiple cameras. Right now everything comes from the single IO, but you might have to add a parameter to each method indicating which camera it is requesting data from.

Also, remember to actually poll both IOs in Vision.java's periodic() and include each camera's data in the pose estimator, as sketched below.
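A minimal sketch of what the two-camera layout could look like, assuming the existing AdvantageKit-style VisionIO interface with an updateInputs() method (the inputs class name and the pose-estimator hookup are placeholders, not the repo's actual API):

```java
import edu.wpi.first.wpilibj2.command.SubsystemBase;

/** Sketch of a Vision subsystem that owns two VisionIO instances (left and right cameras). */
public class Vision extends SubsystemBase {
  private final VisionIO leftIO;
  private final VisionIO rightIO;
  private final VisionIO.VisionIOInputs leftInputs = new VisionIO.VisionIOInputs();
  private final VisionIO.VisionIOInputs rightInputs = new VisionIO.VisionIOInputs();

  public Vision(VisionIO leftIO, VisionIO rightIO) {
    this.leftIO = leftIO;
    this.rightIO = rightIO;
  }

  @Override
  public void periodic() {
    // Poll both IOs every loop...
    leftIO.updateInputs(leftInputs);
    rightIO.updateInputs(rightInputs);

    // ...and include each camera's data in the pose estimator.
    addVisionMeasurement(leftInputs);
    addVisionMeasurement(rightInputs);
  }

  private void addVisionMeasurement(VisionIO.VisionIOInputs inputs) {
    // Placeholder: forward the estimated pose and timestamp to the drivetrain's
    // pose estimator, the same way the current single-camera code does.
  }
}
```

Top-level methods that currently assume a single camera could then take the same kind of parameter (an IO index or enum) to pick which camera's inputs to read.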

controller rumble until button pressed

Look at the controller rumble command from the 2022 code and enhance it to finish when a certain button is pressed.

Have parameters for which button, rumble strength, duration, etc., as in the sketch below.
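A hedged sketch of what the enhanced command could look like with 2023 command-based WPILib (the 2022 command it would replace isn't shown here, and the factory name is just an example):

```java
import edu.wpi.first.wpilibj.GenericHID.RumbleType;
import edu.wpi.first.wpilibj2.command.Command;
import edu.wpi.first.wpilibj2.command.Commands;
import edu.wpi.first.wpilibj2.command.button.CommandXboxController;
import edu.wpi.first.wpilibj2.command.button.Trigger;

public class RumbleCommands {
  /** Rumble at the given strength until the button is pressed or the timeout elapses. */
  public static Command rumbleUntilPressed(
      CommandXboxController controller, Trigger button, double strength, double timeoutSeconds) {
    return Commands.runEnd(
            () -> setRumble(controller, strength), // rumble while the command runs
            () -> setRumble(controller, 0.0))      // always stop rumbling when it ends
        .until(button)
        .withTimeout(timeoutSeconds);
  }

  private static void setRumble(CommandXboxController controller, double strength) {
    controller.getHID().setRumble(RumbleType.kLeftRumble, strength);
    controller.getHID().setRumble(RumbleType.kRightRumble, strength);
  }
}
```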

better validTarget() method

Make this method also check that the target's ID is between 0 and 8.

If its ID is past 8, it's not on the game field; it's most likely a bug and should not be counted.
The current validTarget() method lives in Vision.java.
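A sketch of the added ID check, assuming the method receives a PhotonLib PhotonTrackedTarget (the passesExistingChecks helper is a placeholder for whatever validTarget() already does):

```java
public boolean validTarget(PhotonTrackedTarget target) {
  int id = target.getFiducialId();
  // IDs past 8 are not on the game field and are most likely a bug, so don't count them.
  if (id < 0 || id > 8) {
    return false;
  }
  return passesExistingChecks(target); // placeholder for the current validTarget() logic
}
```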

TODO: before DCMP

  • fix cube alignment with 2 piece engage (possibly switch to purely odometry based auto?)
  • make cube ground pickup better
  • make drive to grid reliable
  • stream deck integration/testing
  • wall side 2 piece engage
  • optimizing current autos (maybe try for 2.5 piece engage auto)
    • tune when stingevator comes up and down to be more in parallel with driving
    • make stinger extend further for ground pickup
    • see if overall speed/acceleration can be increased?
    • score at the end of the path, rather than in between paths, to make the motion more of a "dunk"
  • optimizing double substation intaking
    • parallel stingevator drivetrain action
    • tune speed
    • possibly automate driving last couple feet
  • optimizing drive to grid and score
    • more parallel stingevator driving
    • try to recreate "dunk" method of scoring
    • tune speed
    • bring final drive in point (to score) closer to actual grid
  • PNP vision processing
  • make 3 piece autos more reliable
  • middle 1.5 engage
  • 3rd camera programming (if mechanical is able to implement)
  • maybe make middle score, cross charge pad, then intake cube, then go around pad to score (complement the other engaging bot)
  • adjust low translation elevator height, slowed down speed, and when elevator is going down don't slow robot
  • adjust elevator stow height so cubes picked up from the ground don't fall out from hitting the bumper
  • test cube pickup from double feeder station
  • investigate cone pickup from single substation (254 style)

move stinger extension inches clamping logic to the top level subsystem instead of the IO

The way I think about it is that the IO is just the interface between hardware and software: you want to minimize the logic you put in the IO and put most of the logic in the top level, so it works for every IO.

Move the logic for clamping between 0 and max extension out of the IO implementation and into the setExtensionInches method in the top-level Stinger.java file, just before the io.set... call.
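A minimal sketch of the moved logic, using WPILib's MathUtil.clamp (the constant and IO method names are assumptions based on the issue text):

```java
// In the top-level Stinger.java
import edu.wpi.first.math.MathUtil;

public void setExtensionInches(double extensionInches) {
  // Clamp between 0 and max extension here, so the limit applies no matter which IO is in use.
  double clamped = MathUtil.clamp(extensionInches, 0.0, MAX_EXTENSION_INCHES); // constant name assumed
  io.setExtensionInches(clamped); // the io.set... call; the IO itself no longer clamps
}
```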

TODO testing after enabling robot

Drivetrain

  • test MK4i code (put the robot on wooden stilts and confirm every module is spinning the same way)
  • characterize drivetrain (might be worth doing this on the 2022 robot for practice)
  • put characterized constants into code
  • run test auto paths on robot (2m forward) (2m forward 180) etc. etc.

Elevator

  • test lower limit switch and upper soft limit
  • test/calibrate ticks2distance
  • find voltage that holds the elevator in one position (empty, with cube, with cone)
  • tune PID (kP) for simple positional control
  • tune kI and iZone ( = 0.5-1")
  • confirm trapezoidal motion is working (see the config sketch after this list)
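A configuration sketch for these steps, assuming the elevator runs on a TalonFX over the Phoenix 5 API (the constants and gains below are placeholders to be filled in from the calibration above; if the code uses a WPILib TrapezoidProfile instead of Motion Magic, the same gains still apply to that controller):

```java
import com.ctre.phoenix.motorcontrol.ControlMode;
import com.ctre.phoenix.motorcontrol.DemandType;
import com.ctre.phoenix.motorcontrol.can.WPI_TalonFX;

public class ElevatorTuningSketch {
  // Placeholder values; fill in from the calibration steps above.
  static final double TICKS_PER_INCH = 1000.0; // from the ticks2distance calibration
  static final double HOLD_VOLTS = 0.5;        // voltage that just holds the carriage in place

  public static void configure(WPI_TalonFX motor) {
    motor.config_kP(0, 0.1);                              // start with kP only
    motor.config_kI(0, 0.0);                              // add kI once kP is stable
    motor.config_IntegralZone(0, 0.75 * TICKS_PER_INCH);  // iZone of roughly 0.5-1"
    // Trapezoidal motion via Motion Magic: cruise velocity / acceleration in ticks per 100 ms.
    motor.configMotionCruiseVelocity(15000);
    motor.configMotionAcceleration(30000);
  }

  public static void goToHeight(WPI_TalonFX motor, double heightInches) {
    // Holding voltage applied as arbitrary feedforward so the PID only corrects the error.
    motor.set(ControlMode.MotionMagic, heightInches * TICKS_PER_INCH,
        DemandType.ArbitraryFeedForward, HOLD_VOLTS / 12.0);
  }
}
```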

Stinger

  • test setExtension and manualControl (does it go to desired position?)
  • test upper and lower limit switches
  • make sure the ticks2distance calculation is accurate to the real stinger
  • tune the PID numbers for more stable stinger control
  • see whether or not the motor must be inverted (unless we already know this)

Intake

  • note direction of motor for intake/outtake of cubes/cones and update code
  • pick intake/outtake Voltage for cubes/cones

Vision

  • calibrate vision whilst april tags are on the robot
  • log ambiguity while moving towards, adjacent, and away from the april tag
  • test accuracy with the april tags
  • odometry using april tags

LED subsystem (Blinkin)

  • confirm LEDs work
  • trigger purple/yellow/orange/"black" (see the Blinkin sketch after this list)
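A minimal Blinkin sketch; the Blinkin is driven like a PWM motor controller, the PWM channel is a placeholder, and the pattern values should be double-checked against the Blinkin pattern table:

```java
import edu.wpi.first.wpilibj.motorcontrol.Spark;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class Leds extends SubsystemBase {
  // Approximate solid-color pattern values from the REV Blinkin manual.
  public static final double PURPLE = 0.91; // listed as "Violet"
  public static final double YELLOW = 0.69;
  public static final double ORANGE = 0.65;
  public static final double BLACK = 0.99;  // effectively off

  private final Spark blinkin = new Spark(0); // PWM channel is a placeholder

  public void setPattern(double pattern) {
    blinkin.set(pattern);
  }
}
```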

Stingevator Setpoints and Commands

  • High/Mid/Low cube scoring position
  • High/Mid/Low cone scoring position
  • ground pickup cube/cone
  • stow position cube/cone
  • Commands for each position (command factory) with and without scoring to allow for manual trigger

Bumpers

  • attach bumpers, check fit
  • test driving onto charge station

Add 2930 lib folder from 2022 robot code to this repo

  • Copy useful library code from the 2022-Robot-Code repo into the 2023-Robot-Code repo
  • make sure that the code is compatible with 2023 WPILib
  • consider leaving the limelight library code out, it's old and we may not even use limelight this year

Take periodic snapshots w/ PhotonVision

PhotonVision can save an image to a file. https://docs.photonvision.org/en/latest/docs/programming/photonlib/getting-target-data.html#saving-pictures-to-file

We should save an image periodically, say every 0.5-2 seconds throughout a match, and maybe more often during testing, so we can debug errors with PhotonVision.

Perhaps we can log more frequently when some conditions are met, like when the new image would create a large change in our estimated position.

This will have to wait until we have a USB stick installed on the roboRIO.
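A sketch of the periodic trigger, using PhotonLib's snapshot calls and a WPILib Timer (the class name and interval are illustrative):

```java
import edu.wpi.first.wpilibj.Timer;
import org.photonvision.PhotonCamera;

public class SnapshotLogger {
  private final PhotonCamera camera;
  private final double intervalSeconds;
  private final Timer timer = new Timer();

  public SnapshotLogger(PhotonCamera camera, double intervalSeconds) {
    this.camera = camera;
    this.intervalSeconds = intervalSeconds;
    timer.start();
  }

  /** Call from a subsystem's periodic(). */
  public void periodic() {
    if (timer.advanceIfElapsed(intervalSeconds)) {
      camera.takeInputSnapshot();  // raw camera frame
      camera.takeOutputSnapshot(); // frame with the targeting overlay
    }
  }
}
```

The same periodic() hook is a natural place to add a condition check, e.g. snapshot immediately whenever a new vision measurement would move the estimated pose by more than some threshold.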

TODO: practice field 3/11

  • align swerve
  • system test
    • CAN
    • elevator
    • stinger
    • drive
    • intake
  • confirm operator and drive controls
  • SYSID: characterize drivetrain
  • test autos
    • 2 meters
    • 2 meters 360
    • simple middle engage +taxi
  • drive to grid position
  • increase vision decimate setting
  • investigate not picking up cubes
  • tune intake stalling

Install Phoenix Pro on 2 CANivores

Install the Phoenix Pro license on two CANivores:

  • 2022 Comp bot/ test platform
  • 2023 Comp bot

The following licenses you purchased in Order#200003704 have been processed and are now ready to be activated using your account.
License SKU: LIC-23-80827900-B-FRC
Description: Pro 2023 - CANivore (FRC)
Quantity: 1 (2 seats per unit qty)

Documentation on how to use your license(s) can be found here.

TODO things to test at practice field

High Priority

  • test position estimation accuracy with april tags (blue and red sides)
  • manual pickup (ground and substation)
  • manual scoring
  • manual charge pad engaging

Mid Priority

  • test auto paths (possibly without events at first) (blue and red sides)
  • auto drive to grid pos and score (with stream deck)
  • auto pickup from substations (Cubes and Cones)

Low Priority

  • do auto pickup, then autoscore to get a rough average cycle time
  • optimize auto paths

create ROBOT_2023_COMPBOT robot type

create the ROBOT_2023_COMPBOT robot type in constants

remember to update the getMode() function as well

Also create a new switch case in RobotContainer and initialize the subsystems with the correct IOs.
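A sketch of the three pieces, assuming the existing Constants.RobotType / getMode() pattern (the other enum values, mode names, and subsystem wiring below are placeholders apart from VisionIOPhotonVision):

```java
// Constants.java: add the new robot type...
public enum RobotType {
  ROBOT_2023_COMPBOT,
  ROBOT_SIMBOT // existing entries stay as they are
}

// ...and handle it in getMode() as well.
public static Mode getMode() {
  switch (getRobot()) {
    case ROBOT_2023_COMPBOT:
      return RobotBase.isReal() ? Mode.REAL : Mode.REPLAY;
    case ROBOT_SIMBOT:
      return Mode.SIM;
    default:
      return Mode.REAL;
  }
}

// RobotContainer.java: new switch case that constructs the subsystems with the real hardware IOs.
switch (Constants.getRobot()) {
  case ROBOT_2023_COMPBOT:
    vision = new Vision(new VisionIOPhotonVision("frontCamera")); // camera name is a placeholder
    // ...construct drivetrain, elevator, stinger, and intake with their hardware IOs here
    break;
  default:
    // existing cases stay as they are
    break;
}
```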

Localization/Apriltag support

  • mount AprilTags in Portable
  • better camera mount on 2022 comp bot ("PT")
  • configure and test 3061-lib localization w/ limelight
  • configure and test OrangePi w/ USB camera config
  • consider adding 2nd AprilTag camera
  • tune camera settings
  • debugging: log updates
  • debugging: display localization to driver station Field Preview

TODO Before We Can Enable the Robot

  • CANivore: update firmware and set name to "CANivore"
  • roboRIO, image SD card with latest image (done 2/11/2023)
  • roboRIO: set team number, check firmware
  • Rev products, update firmware: PDP
  • set CAN ids on all motors and devices (use Constants.java as a guide)
  • Swerve: record encoder offsets and add to Constants.java for 2023 comp bot
  • insert USB thumbdrive for recording WPILogs (2/11/2023)
  • FIXME: set the CANivore name on 2023 robot to "CANivore" (is currently "CANivor" with no "e")
  • test flipped swerve drive motors (back left drive, back right steer)
  • confirm elevator limit switch works, display on dashboard
  • confirm stinger limit switch works, display on dashboard
  • manually move elevator up/down and confirm elevator height in code on SmartDashboard
  • manually move stinger in/out and confirm extension in code on SmartDashboard

Record the date and firmware numbers in the comments when you update a device.

TODO: things we have left to do

  • faster "stingevator" (speed up pace) (test parallel motion)
  • install latest DriverStation https://www.ni.com/en-us/support/downloads/drivers/download.frc-game-tools.html#479842
  • Photonvision OrangePi Update
  • PV calibrate in Gym lighting
  • ROCK SOLID mid engage auto (TEST FOR RED ALLIANCE)
  • test human player autos for red side
  • april tag tuning (new camera mounts)
    • new camera mount just 35 degrees out
    • decrease FPS on camera
    • decrease exposure for gym
  • Button for ground pickup while held (toggle for desired game piece)
  • bind button for drive set rotation
  • Drive to grid position and score
  • stream deck
  • auto substation pickup
  • Wall side autos
  • pickup game piece in middle engage
  • 3 ball autos
  • 4 ball autos
