
Mohamed Fazil's Projects

applieddeeplearning

This repository consists of a set of Jupyter Notebooks, each applying a different deep learning method. Every notebook gives a walkthrough from scratch through to the final results visualization. The deep learning methods include multilayer perceptrons, CNNs, GANs, autoencoders, and sequential and non-sequential deep learning models, applied to image classification, time series prediction, recommendation systems, anomaly detection, and data analysis.

myotron_wrist_control

This project proposes and delivers a novel approach to training and testing a Convolutional Neural Network (CNN) model for muscle-synergy-controlled prosthetic hands. It focuses on precise, real-time control of a prosthetic hand for below-elbow amputees, with independent control over the prosthetic fingers. Multiple EMG sensors placed on the forearm drive the prosthetic hand through the trained model. The CNN extracts features from raw EMG signals without the manual feature engineering that traditional methods apply to raw data. The trained model is then evaluated in real time within a Virtual Reality environment built on the MuJoCo physics engine with an HTC Vive VR headset. The algorithm will be tested on ten healthy participants, and their data will be analyzed to characterize the controller's performance.
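To illustrate the feature-extraction claim above, here is a minimal, pure-Python sketch of a single 1-D convolution pass over a raw EMG window, the basic operation a CNN layer performs instead of hand-crafted features such as RMS or zero-crossings. The window, kernel, and sizes are illustrative toy values, not the project's actual model.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + j] * k for j, k in enumerate(kernel)) for i in range(n)]

def relu(xs):
    """Standard ReLU activation applied element-wise."""
    return [max(0.0, x) for x in xs]

# A toy 8-sample EMG window and a 3-tap edge-detecting kernel.
emg_window = [0.0, 0.1, 0.9, 1.0, 0.2, -0.8, -1.0, -0.1]
kernel = [-1.0, 0.0, 1.0]

# One convolutional "feature map": learned kernels in a real CNN replace
# the manual feature-engineering step entirely.
features = relu(conv1d(emg_window, kernel))
print(features)
```

In a real model, many such kernels are learned from data and stacked into deeper layers; this sketch only shows why no manual features are needed.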

pattern_recognition

This repository contains various Jupyter notebooks I wrote for my Pattern Recognition course, all working on the MNIST dataset. They use different learning methods such as Support Vector Machines, neural networks, generative models, probabilistic graphical models, and linear discriminant functions, with Keras and TensorFlow for most of the code.
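As a flavour of the classical methods listed above, here is a minimal nearest-centroid classifier, which under equal class covariances reduces to a linear discriminant function. The toy 2-D points stand in for MNIST feature vectors; none of this is the repository's actual code.

```python
def centroid(points):
    """Mean of a list of equal-length tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def nearest_centroid(x, centroids):
    """Return the label of the closest class centroid (squared Euclidean)."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: sqdist(x, centroids[label]))

# Two toy classes in place of MNIST digit classes.
train = {
    0: [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    1: [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)],
}
centroids = {label: centroid(pts) for label, pts in train.items()}

print(nearest_centroid((0.5, 0.5), centroids))  # close to class 0
print(nearest_centroid((5.5, 5.5), centroids))  # close to class 1
```

The decision boundary between two centroids is a hyperplane, which is why this counts as a linear discriminant.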

realsense_bot

This is a ROS package for an Intel RealSense D435i on a 3-DOF manipulator robot, usable for indoor mapping and localization of objects in the world frame, with the added advantage of the robot's dexterity. The 3-DOF manipulator is a self-built custom robot; its URDF, including the depth sensor, is provided. The package covers rosserial communication with Arduino nodes (or I2C with a Jetson Nano) to control the robot's joint states, as well as the PCL pipelines required for autonomous mapping, localization, and tracking of objects in real time.
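The localization step above starts from standard pinhole-camera deprojection: a depth pixel is mapped to a 3-D point in the camera frame, which TF then transforms into the world frame. A hedged sketch of that first step follows; the intrinsics are made-up illustrative values, not the D435i's calibration.

```python
def deproject_pixel(u, v, depth_m, fx, fy, cx, cy):
    """Map pixel (u, v) with depth (metres) to a 3-D point in the camera frame.

    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel 100 px right of the principal point, 2 m away, focal length 600 px.
point = deproject_pixel(420, 240, 2.0, 600.0, 600.0, 320.0, 240.0)
print(point)
```

Converting this camera-frame point into the world frame additionally requires the manipulator's joint states via the TF tree, which is what the package's joint-state control provides.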

realsense_explorer_bot

Autonomous ground exploration mobile robot with a 3-DOF manipulator and an Intel RealSense D435i mounted on a tracked skid-steer drive base. The robot is capable of mapping spaces, exploration through RRT, SLAM, and 3D pose estimation of objects around it. This is a custom robot with a self-built URDF model, and it uses ROS's navigation stack.
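The RRT exploration mentioned above can be sketched minimally in 2-D free space: sample a point (with a bias toward the goal), extend the nearest tree node a fixed step toward it, and stop when the tree reaches the goal. Real deployments add obstacle checks against the occupancy grid; the bounds, step size, and bias here are illustrative assumptions, not the robot's tuned parameters.

```python
import math
import random

def rrt(start, goal, step=0.5, iters=2000, goal_bias=0.2, bounds=10.0, seed=0):
    """Grow a rapidly-exploring random tree from start; return a path or None."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # With some probability, steer straight toward the goal.
        if rng.random() < goal_bias:
            target = goal
        else:
            target = (rng.uniform(0, bounds), rng.uniform(0, bounds))
        # Extend the nearest tree node one fixed step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], target))
        d = math.dist(nodes[i], target)
        if d == 0.0:
            continue
        new = (nodes[i][0] + step * (target[0] - nodes[i][0]) / d,
               nodes[i][1] + step * (target[1] - nodes[i][1]) / d)
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:  # close enough: walk back up the tree
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None

path = rrt((0.0, 0.0), (9.0, 9.0))
print(len(path), path[0], path[-1])
```

Frontier-based exploration variants use the same tree growth but treat unmapped frontier cells, rather than a single goal, as the sampling target.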

ros_autonomous_slam

A ROS package that uses the Navigation Stack to autonomously explore an unknown environment with the help of GMapping and constructs a map of the explored area. A path-planning algorithm from the Navigation Stack is then used on the newly generated map to reach the goal. The Gazebo simulator is used to simulate the TurtleBot3 Waffle Pi robot. Various algorithms are integrated for autonomously exploring the region and constructing the map from the 360-degree LiDAR sensor, and different environments can be swapped in the launch files to generate their maps.

touchlessclockin_a2il

A touchless clock-in system: a web-based application that uses face recognition with deep learning. The web application was built in Angular and deployed through Google Cloud Console, with a Python Flask backend that also runs on a GCP cloud server. It uses the Dlib deep learning library in Python and manages an unstructured cloud database through MongoDB.
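The matching step in Dlib-style face recognition works on descriptor vectors: each face is embedded as a 128-D descriptor, and two faces match when the Euclidean distance between descriptors falls below a threshold (0.6 is the commonly cited Dlib default). The sketch below uses toy 4-D vectors and made-up enrolment data, not real descriptors or the project's code.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, enrolled, threshold=0.6):
    """Return the enrolled name closest to the probe descriptor, or None."""
    best = min(enrolled, key=lambda name: euclidean(probe, enrolled[name]))
    return best if euclidean(probe, enrolled[best]) < threshold else None

# Toy enrolment database: name -> descriptor (real ones are 128-D).
enrolled = {
    "alice": (0.1, 0.2, 0.3, 0.4),
    "bob": (0.9, 0.8, 0.7, 0.6),
}
print(identify((0.12, 0.21, 0.29, 0.41), enrolled))  # matches "alice"
print(identify((9.0, 9.0, 9.0, 9.0), enrolled))      # no match -> None
```

In the deployed system the enrolled descriptors would live in the MongoDB store rather than an in-memory dict.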

touchlessclockin_project

This is a computer-vision-based project developed amidst the pandemic, when touchless systems were required everywhere. It uses face recognition and deep learning to identify employees/members of an institution so they can check in or check out without touching the system. It offers voice interaction and a sophisticated interface built with OpenCV. It is also integrated with Google's Firebase, a cloud-managed schemaless database that stores all user data, time-stamped images for security, and face descriptors. An administrator UI, developed on the Node-RED platform, can be used to monitor user check-in activity.

ub_gym_notifier

A Python-based web content monitoring app that notifies UB students of available gym session slots. UB gyms require prior booking of time slots, but getting one is tedious because of reduced capacity: everything fills up by the start of the week. This app notifies you through the notify.run web client whenever a free slot opens during the week, giving you an advantage in booking over other people. The Python script is deployed on a GCP Debian cloud instance and executed every 15 minutes.
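The core of such a monitor is change detection between polls. Below is a stdlib-only sketch that hashes the fetched page and flags a change when the digest differs from the previous run; the HTML strings are fake stand-ins, and the actual fetch of the UB booking page and the notify.run push are omitted as they depend on that service.

```python
import hashlib

def digest(page_html: str) -> str:
    """Stable fingerprint of the page content."""
    return hashlib.sha256(page_html.encode("utf-8")).hexdigest()

def slot_opened(previous_digest, page_html):
    """Return (changed, new_digest) for the latest poll."""
    d = digest(page_html)
    return d != previous_digest, d

page_v1 = "<html>No slots available</html>"
page_v2 = "<html>Slot free: Tue 4pm</html>"

changed, state = slot_opened(None, page_v1)   # first poll always "changes"
changed, state = slot_opened(state, page_v1)  # same content: no change
print(changed)
changed, state = slot_opened(state, page_v2)  # a slot appeared
print(changed)
```

On a change, the real script would send the alert and persist the new digest so the next 15-minute run compares against it.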

virtual_pen_mnist

This is a Python program that uses deep learning and image processing to create a virtual pen: the user hovers a tip of the configured colour over the webcam to write digits. A deep learning model trained on MNIST is used to recognize the digits. It uses Keras for deep learning and OpenCV for image processing.
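The pen-tracking step can be sketched without OpenCV: threshold the frame for the configured pen colour, then take the centroid of the matching pixels as the tip position. The real program does this per webcam frame with OpenCV (HSV masking plus moments); the tiny RGB "frame" and tolerance below are synthetic.

```python
def colour_mask(frame, target, tol=30):
    """Yield (row, col) of pixels within `tol` of the target RGB colour."""
    for r, row in enumerate(frame):
        for c, (red, green, blue) in enumerate(row):
            if (abs(red - target[0]) <= tol and
                    abs(green - target[1]) <= tol and
                    abs(blue - target[2]) <= tol):
                yield (r, c)

def tip_position(frame, target):
    """Centroid of the masked pixels, or None when the pen is off-screen."""
    pixels = list(colour_mask(frame, target))
    if not pixels:
        return None
    n = len(pixels)
    return (sum(p[0] for p in pixels) / n, sum(p[1] for p in pixels) / n)

BG, PEN = (0, 0, 0), (0, 200, 0)   # black background, green pen tip
frame = [[BG, BG, BG, BG],
         [BG, PEN, PEN, BG],
         [BG, BG, BG, BG]]
print(tip_position(frame, PEN))
```

Joining successive tip positions across frames produces the stroke that is later rasterized and fed to the MNIST classifier.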

vr_communication_mujoco200

This is the development repo for Virtual Reality rendering of the MuJoCo physics environment with the help of the OpenVR SDK and HTC Vive HMD hardware. On top of that, PUB/SUB socket-based communication is introduced using the ZMQ library. Through this PUB/SUB channel, any application outside MuJoCo can operate actuators inside the MuJoCo environment simply by publishing joint positions to the topic that MuJoCo has subscribed to.
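A stdlib sketch of the message framing such a ZMQ PUB/SUB link might use: a topic prefix plus a JSON payload of joint positions. The topic name and payload schema are illustrative assumptions, not the repository's actual wire format; with pyzmq the encoded frames would be sent via `socket.send_multipart()` and filtered by subscribers on the topic prefix.

```python
import json

TOPIC = b"joint_positions"  # assumed topic name, for illustration only

def encode(joints):
    """Pack joint positions (radians) into [topic, payload] frames."""
    return [TOPIC, json.dumps({"qpos": list(joints)}).encode("utf-8")]

def decode(frames):
    """Unpack the frames a subscriber would receive back into positions."""
    topic, payload = frames
    assert topic == TOPIC
    return json.loads(payload.decode("utf-8"))["qpos"]

frames = encode([0.0, 0.5, -1.2])
print(decode(frames))
```

Because the publisher and subscriber only share this message contract, any external application (a VR controller mapper, a script, another simulator) can drive the MuJoCo actuators.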

wrist_control_cnn_awear

This is the cumulative repository for the research project "Deep Learning Approach to Robotic Prosthetic Wrist Control using EMG Signals", carried out in the AWEAR lab. It contains all the data-processing pipeline code, a custom data-preprocessing library built for this project, and all the time-series CNN training Jupyter notebooks using data collected within the AWEAR Lab, University at Buffalo.
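A common preprocessing step in such pipelines is segmenting a continuous EMG recording into fixed-length, overlapping windows that a time-series CNN can train on. The sketch below shows that segmentation; the window and stride sizes are illustrative, not the project's actual parameters.

```python
def sliding_windows(samples, window, stride):
    """Overlapping windows; trailing samples that don't fill a window are dropped."""
    return [samples[i:i + window]
            for i in range(0, len(samples) - window + 1, stride)]

signal = list(range(10))           # stand-in for one EMG channel
windows = sliding_windows(signal, window=4, stride=2)
print(windows)  # -> [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Each window becomes one CNN input sample, labeled with the wrist motion performed during that interval; the overlap (stride < window) multiplies the training data from a fixed recording session.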
