Name: Jerrin Bright
Type: User
Company: Vision and Image Processing Lab, UWaterloo, Canada
Bio: 3D Human Modeling, Sports Analytics, Autonomous Systems, and Real-time Perception enthusiast!
Location: Canada
Blog: jerrinbright.github.io
Jerrin Bright's Projects
A diverse collection of small projects developed mainly in Python. These include path-planning algorithms, detection of 200+ bird species, object detection using ImageAI, number-plate detection, segmentation, and more.
Extracts the G-code point clouds and saves them to CSV. In parallel, the image backgrounds are removed and the results are compared against the G-code point clouds to detect errors in the 3D-printed component.
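A minimal sketch of the G-code extraction step, assuming standard G0/G1 move commands with modal X/Y/Z coordinates (function names and the CSV layout are illustrative, not the project's actual API):

```python
import csv
import re

def extract_gcode_points(lines):
    """Parse X/Y/Z coordinates from G0/G1 move commands.

    Axes omitted on a line carry over from the previous move
    (standard G-code modal behaviour)."""
    points = []
    state = {"X": 0.0, "Y": 0.0, "Z": 0.0}
    for line in lines:
        line = line.split(";")[0].strip()  # drop trailing comments
        if not line.startswith(("G0", "G1")):
            continue  # skip non-move commands (temperatures, fans, ...)
        for axis, value in re.findall(r"([XYZ])(-?\d+\.?\d*)", line):
            state[axis] = float(value)
        points.append((state["X"], state["Y"], state["Z"]))
    return points

def save_points_csv(points, path):
    """Write the extracted points to a CSV file with an x,y,z header."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "z"])
        writer.writerows(points)
```

The resulting CSV can then be compared point-by-point against the segmented image data to flag deviations in the print.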
Personal Website
Autonomous lane detection for self-driving cars using two different methods: a CNN and a Canny edge detector.
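In the classical (Canny-based) pipeline, edge pixels are typically split into left and right groups and a line is fitted to each side. A minimal sketch of that fitting step, assuming edge coordinates have already been extracted (e.g. via `np.argwhere` on a Canny mask); the function name and parameters are illustrative:

```python
import numpy as np

def fit_lane_lines(edge_points, img_width):
    """Fit left/right lane lines to edge pixels.

    edge_points: (N, 2) array of (x, y) edge-pixel coordinates.
    Fits x = m*y + b per side, so near-vertical lane lines stay
    well-conditioned. Returns [(m, b) or None, (m, b) or None].
    """
    pts = np.asarray(edge_points, dtype=float)
    mid = img_width / 2.0
    left = pts[pts[:, 0] < mid]    # pixels left of the image centre
    right = pts[pts[:, 0] >= mid]  # pixels right of the image centre
    fits = []
    for side in (left, right):
        if len(side) < 2:
            fits.append(None)  # not enough evidence for a line
            continue
        m, b = np.polyfit(side[:, 1], side[:, 0], 1)
        fits.append((m, b))
    return fits
```

The CNN variant replaces this hand-crafted fitting with a learned per-pixel lane prediction.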
URDF-M2WR-ROBOT+NEW-WORLD+OBSTACLE-AVOIDANCE+GMAPPING-SLAM
Real-time monocular depth estimation using a Logitech camera and a Jetson Xavier NX board. Weights from the "Monodepth" algorithm were used for training and then tested in real time.
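Monodepth-style networks predict disparity rather than depth directly; recovering metric depth uses the standard stereo relation depth = focal × baseline / disparity. A sketch of that conversion (the focal length and baseline values in the test are placeholders, not this project's calibration):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a predicted disparity map (pixels) to metric depth (metres).

    depth = focal length (pixels) * stereo baseline (metres) / disparity.
    eps guards against division by zero in textureless regions.
    """
    disparity = np.asarray(disparity, dtype=float)
    return (focal_px * baseline_m) / np.maximum(disparity, eps)
```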
RRT-based obstacle avoidance using the Pixhawk flight controller
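A minimal 2D sketch of the RRT algorithm itself, independent of the Pixhawk integration; the parameters (step size, goal bias, obstacle model) are illustrative, and for brevity only new nodes, not the segments between them, are collision-checked:

```python
import math
import random

def rrt(start, goal, obstacles, bounds, step=0.5, goal_tol=0.5,
        max_iters=5000, seed=0):
    """Minimal 2D RRT. obstacles: list of (cx, cy, radius) circles.
    bounds: (xmin, xmax, ymin, ymax). Returns a start-to-goal path or None."""
    rng = random.Random(seed)

    def collides(p):
        return any(math.dist(p, (cx, cy)) <= r for cx, cy, r in obstacles)

    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        # Sample a random point, with a 10% goal bias to speed convergence.
        sample = goal if rng.random() < 0.1 else (
            rng.uniform(bounds[0], bounds[1]),
            rng.uniform(bounds[2], bounds[3]))
        # Extend the nearest tree node one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d,
               ny + step * (sample[1] - ny) / d)
        if collides(new):
            continue
        parents[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Walk back up the tree to recover the path.
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]
    return None
```

On the drone, the resulting waypoints would be fed to the flight controller as position setpoints.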
Developed a rospy-based control system for a quadcopter to traverse a set of GPS setpoints autonomously. The control system has two modules: an altitude controller and a position controller. The altitude controller stabilizes the drone at zero-error roll, pitch, and yaw angles using a PID controller; the position controller takes the target GPS coordinate as its setpoint and computes the roll, pitch, and yaw angles needed to reach it. The two controllers work in synchronization to fly the drone autonomously from one coordinate to another.
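The PID loops above can be sketched as follows. This is a toy 1D altitude simulation, not the project's actual gains or dynamics; hover thrust is fed forward so the loop only corrects the altitude error:

```python
class PID:
    """Simple PID controller, as used in the altitude/attitude loops."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (0.0 if self.prev_error is None
                      else (error - self.prev_error) / dt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def simulate_altitude(target, steps=2000, dt=0.01):
    """Drive a unit-mass 'drone' toward a target altitude (illustrative gains)."""
    pid = PID(kp=8.0, ki=0.5, kd=4.0)
    z, vz = 0.0, 0.0
    for _ in range(steps):
        # Feed-forward hover thrust plus the PID correction.
        thrust = 9.81 + pid.update(target - z, dt)
        vz += (thrust - 9.81) * dt  # gravity acts on unit mass
        z += vz * dt
    return z
```

On the real drone the same structure runs per-axis, with the position controller's output feeding the attitude setpoints.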
A custom-built drone package equipped with Kinect, IMU, LiDAR, and GPS sensors, built specifically to incorporate visual-inertial SLAM into the system.
6-DOF pick-and-place robotic arm manipulator using the MoveIt framework, simulated in the Gazebo environment
Implementation of 3D mapping in an indoor environment using Kinect RGB-D camera sensors. Real-Time Appearance-Based Mapping (RTAB-Map) was used to build the 3D map, simulated in the Gazebo environment.
Development of a Python package/tool for monocular and stereo visual odometry. Pose files are generated in the KITTI ground-truth format, and the EVO evaluation tool is used to evaluate the trajectories estimated by my visual odometry code.
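The KITTI pose format stores one pose per line as the 12 row-major entries of the top 3×4 [R|t] block of the camera-to-world transform. A minimal serializer for that step (the function name is illustrative):

```python
import numpy as np

def poses_to_kitti(poses):
    """Serialize 4x4 camera-to-world pose matrices to KITTI format:
    one line per pose, the 12 row-major entries of the 3x4 [R|t] block."""
    lines = []
    for T in poses:
        T = np.asarray(T, dtype=float)
        lines.append(" ".join(f"{v:.6e}" for v in T[:3, :].flatten()))
    return "\n".join(lines)
```

Files written this way can be compared directly against KITTI ground truth with EVO's KITTI mode.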
A WhatsApp bot made for business enterprises. Targeting instantaneous communication with customers, it enables smart and convenient e-shopping via WhatsApp.
Object recognition using a pre-trained ResNet50 network, custom-trained to recognize automobile components (oil filters, screws, etc.). The model was then deployed to the web using Flask along with HTML, CSS, and JS.