Comments (11)
Interesting discussion here on CD about the tradeoffs of tunable pipeline parameters for FPS, accuracy, and max distance for sensing.
https://www.chiefdelphi.com/t/limelight-2023-1-megatag-performance-boost-my-mistake/423943
In particular, the Crop feature, which limits the area of the image searched for AprilTags, was very good at increasing the FPS (and also potentially eliminating errors). We need to see if PhotonVision has this feature. We should also investigate whether multiple pipelines with different crop levels work better for different distances.
For example, we could have one pipeline for when we are in the community (close to AprilTags) that crops to the part of the image where the tags will appear and is optimized for closer tags, and a second pipeline optimized for farther away, for when we are outside the community.
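If PhotonVision turns out to support crop per pipeline, the roboRIO side of this idea is just distance-based index switching; with PhotonLib the chosen index would be applied via PhotonCamera.setPipelineIndex(int). A plain-Java sketch, where the pipeline indices and the 2.5 m switchover distance are assumptions to tune:

```java
// Sketch of distance-based pipeline selection. Indices and the 2.5 m
// threshold are hypothetical; apply the result with
// PhotonCamera.setPipelineIndex(int) in real robot code.
public class PipelineSelector {
    // Index 0: cropped, tuned for close-range tags inside the community.
    // Index 1: full-frame, tuned for long-range tags outside the community.
    static final int CLOSE_PIPELINE = 0;
    static final int FAR_PIPELINE = 1;
    static final double SWITCH_DISTANCE_METERS = 2.5; // assumed threshold

    /** Pick a pipeline index from the last known distance to the nearest tag. */
    static int pickPipeline(double distanceToTagMeters) {
        return distanceToTagMeters <= SWITCH_DISTANCE_METERS ? CLOSE_PIPELINE : FAR_PIPELINE;
    }

    public static void main(String[] args) {
        System.out.println(pickPipeline(1.0)); // prints 0: close pipeline
        System.out.println(pickPipeline(5.0)); // prints 1: far pipeline
    }
}
```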
from 2023-robot-code.
If I'm reading the documents correctly, LL will compute a camera (or robot) pose on the LL itself, while PhotonVision only returns the pose of each detected AprilTag, so the estimated robot pose needs to be calculated on the roboRIO. The LL's MegaTag feature, which combines multiple tags into one pose estimate, looks like a big win for getting a more accurate pose.
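The math the roboRIO would have to do is transform composition: start from the tag's known field pose, then undo the camera-to-tag and robot-to-camera transforms. A minimal 2D sketch in plain Java; real robot code would use WPILib's Pose3d/Transform3d (or PhotonLib's PhotonPoseEstimator), and the frame conventions here are simplifying assumptions:

```java
// Plain-Java sketch of the pose math PhotonVision leaves to the roboRIO.
// Real code would use WPILib Pose3d/Transform3d; 2D frames are an assumption.
public class TagPoseMath {
    /** Minimal 2D pose: x, y in meters, theta in radians. */
    record Pose2(double x, double y, double theta) {
        /** this ∘ other: apply 'other' expressed in this pose's frame. */
        Pose2 compose(Pose2 o) {
            double c = Math.cos(theta), s = Math.sin(theta);
            return new Pose2(x + c * o.x - s * o.y,
                             y + s * o.x + c * o.y,
                             theta + o.theta);
        }

        /** Inverse transform, so that p.compose(p.inverse()) is the identity. */
        Pose2 inverse() {
            double c = Math.cos(theta), s = Math.sin(theta);
            return new Pose2(-(c * x + s * y), s * x - c * y, -theta);
        }
    }

    /**
     * Field-relative robot pose from one tag sighting:
     * robot = tagOnField ∘ (cameraToTag)^-1 ∘ (robotToCamera)^-1
     */
    static Pose2 robotPose(Pose2 tagOnField, Pose2 cameraToTag, Pose2 robotToCamera) {
        return tagOnField.compose(cameraToTag.inverse()).compose(robotToCamera.inverse());
    }

    public static void main(String[] args) {
        // Tag at (5, 0) facing the robot; camera sees it 2 m straight ahead.
        Pose2 tag = new Pose2(5, 0, Math.PI);
        Pose2 camToTag = new Pose2(2, 0, Math.PI);
        Pose2 robotToCam = new Pose2(0, 0, 0); // camera at robot center (assumption)
        System.out.println(robotPose(tag, camToTag, robotToCam)); // robot near (3, 0, 0)
    }
}
```

MegaTag would go further by solving over several tags at once; the above is only the single-tag case.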
Interesting documentation from LL on optimizing parameters for AprilTag detection accuracy, speed, and range.
https://docs.limelightvision.io/en/latest/apriltags_in_2d.html#tips
[Edit to add AprilTag tips]
Tips from PhotonVision devs:
- Low exposure is good: it reduces motion blur.
- Brightness will depend on the environment.
- Autoexposure OFF.
- Angle the camera 10-15 degrees up or down (NOT straight on).
Tips from LimeLight:
For ideal tracking, consider the following:
- Your tags should be as flat as possible.
- Your Limelight should be mounted above or below tag height and angled up/down. Your target should look as trapezoidal as possible from your camera’s perspective. You don’t want your camera to ever be completely “head-on” with a tag if you want to avoid tag flipping.
There is an interplay between the following variables for AprilTag Tracking:
- Increasing capture resolution will always increase 3D accuracy and increase 3D stability. This will also reduce the rate of ambiguity flipping from most perspectives. It will usually increase range. This will reduce pipeline framerate.
- Increasing detector downscale will always increase pipeline framerate. It will decrease effective range, but in some cases this may be negligible. It will not affect 3D accuracy, 3D stability, or decoding accuracy.
- Reducing exposure will always improve motion-blur resilience. This is actually really easy to observe. This may reduce range.
- Reducing the brightness and contrast of the image will generally improve pipeline framerate and reduce range.
- Increasing Sensor gain allows you to increase brightness without increasing exposure. It may reduce 3D stability, and it may reduce tracking stability.
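On the "ambiguity flipping" point above: PhotonLib exposes a per-target ambiguity ratio via PhotonTrackedTarget.getPoseAmbiguity(), where values near 0 mean the two candidate tag poses are well separated, values near 1 mean a coin flip, and -1 means the ratio was not computed. A sketch of filtering flip-prone results, with an assumed 0.2 cutoff:

```java
// Sketch of rejecting flip-prone single-tag measurements. The 0.2 cutoff is
// an assumed starting point to tune on the field; the ratio itself would come
// from PhotonTrackedTarget.getPoseAmbiguity() in real robot code.
public class AmbiguityFilter {
    static final double MAX_AMBIGUITY = 0.2; // assumed threshold

    /** Accept a target only when its pose-ambiguity ratio is valid and low. */
    static boolean isUsable(double poseAmbiguity) {
        // -1 means "not computed", so reject negatives as well as high ratios.
        return poseAmbiguity >= 0 && poseAmbiguity < MAX_AMBIGUITY;
    }

    public static void main(String[] args) {
        System.out.println(isUsable(0.05)); // prints true: unambiguous enough to use
        System.out.println(isUsable(0.7));  // prints false: likely to flip
    }
}
```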
Just popped into my head: there is a problem where using two of the exact same camera leads to PhotonVision not working. If I remember correctly there is a workaround, but this should be a critical thing we need to explore, as it interferes with our vision solution.
If we run two cameras on a single Orange Pi, we might run into the problem with duplicate camera names. We have the option to either run two Orange Pis with one camera each, or some combination of LimeLight hardware and an Orange Pi.
@davidemassarenti-optio3 and @dylanh12210 solved the duplicate-camera problem by using a software tool to update the USB cameras' serial numbers. I think they changed the serial numbers to "left camera" and "right camera". Can either of you post a link to the software you used so we have a record for future use?
I think it's this:
https://docs.arducam.com/UVC-Camera/Serial-Number-Tool-Guide/
photonvision version 2023.3.0 https://github.com/PhotonVision/photonvision/releases/tag/v2023.3.0
Based on my research I think we want the camera mounted
- 22.8" off the ground (this is exactly between the centers of the low and high AprilTags),
- angled down 10 degrees (important to avoid ambiguity, and down to reduce glare),
- angled out 30 degrees (this allows us to see tags nearly directly in front of us with a 70deg FOV).
If that's not possible, then mounted low about 10" from the ground, angled up 10 degrees, and angled out 30 degrees.
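Quick arithmetic check on the numbers above. The tag-center heights used here (18.22" for the low grid tags, 27.38" for the high double-substation tags) are my reading of the 2023 field drawings and should be verified against the game manual:

```java
// Sanity-check the proposed mounting numbers. Both tag heights are
// assumptions taken from the 2023 field drawings; verify before relying on them.
public class CameraMountMath {
    /** Height exactly between the two tag-center heights. */
    static double midpointInches(double lowTagCenterIn, double highTagCenterIn) {
        return (lowTagCenterIn + highTagCenterIn) / 2.0;
    }

    /** True if a camera yawed out by yawDeg still sees straight ahead with the given horizontal FOV. */
    static boolean coversStraightAhead(double yawDeg, double hfovDeg) {
        return Math.abs(yawDeg) <= hfovDeg / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(midpointInches(18.22, 27.38)); // prints ~22.8
        // 30 deg out is within the 35 deg half-FOV, so straight ahead stays visible.
        System.out.println(coversStraightAhead(30, 70));  // prints true
    }
}
```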
The glare is a function of the relative positions of the cameras and the tag; the up/down angle of the camera shouldn't affect reflections.
The angle of the camera could affect the ambiguity, although I wouldn't be too worried about that. We are taking multiple samples, and a sample that teleports us to the other side of the world would be easy to drop.
I think angling the cameras out would be enough. When we care most about tags, in front of the pickup or drop areas, the tags will not be parallel to the cameras.
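A plain-Java sketch of that "easy to drop" check: reject any vision sample that would teleport the robot far from the current odometry estimate. The 1.0 m gate is an assumed value; in robot code this would run before passing the sample to something like SwerveDrivePoseEstimator.addVisionMeasurement():

```java
// Sketch of vision outlier rejection. The 1.0 m jump limit is an assumption;
// tune it against loop rate and maximum robot speed.
public class VisionOutlierGate {
    static final double MAX_JUMP_METERS = 1.0; // assumed gate

    /** Accept a vision sample only if it lands near the current estimate. */
    static boolean accept(double estX, double estY, double visX, double visY) {
        return Math.hypot(visX - estX, visY - estY) <= MAX_JUMP_METERS;
    }

    public static void main(String[] args) {
        System.out.println(accept(0, 0, 0.3, 0.2)); // prints true: small correction
        System.out.println(accept(0, 0, 10, 10));   // prints false: a teleport, drop it
    }
}
```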
OrangePi vision setup https://docs.google.com/document/d/17DNCNHxUo31Rh-7VmXXyn-Y25UtGND3NPoGL9gRosaQ/edit