
Mohit Ahuja's Projects

2d-filtering-using-vhdl

The goal is to process an input data flow (corresponding to the Lena image) using a 2D filter. Two main tasks are expected: the design and validation of a customizable 2D filter (filter IP), and the implementation of the 2D filter on a Nexys4 evaluation board. The filter IP implementation is included in a reference design (provided by the teacher) to ease the integration. The filter IP can be split into two main parts: a cache memory, which temporarily stores the data flow before filtering, and the processing part. The cache memory is designed for simultaneous pixel accesses, so that a 3x3 pixel neighbourhood is accessible in one clock cycle; its structure is based on flip-flop registers and First-In-First-Out (FIFO) memories.
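
The behaviour of this cache can be sketched in software. The following Python model is only an illustration of the architecture, not the VHDL IP: two row-length FIFOs plus a 3x3 register window are fed by the pixel stream, and a kernel is applied once the window is valid. The image size and kernel are example values.

    import numpy as np
    from collections import deque

    def stream_filter_3x3(image, kernel):
        """Stream `image` pixel by pixel through two line buffers (FIFOs)
        and a 3x3 register window, applying `kernel` to each valid window."""
        h, w = image.shape
        line1 = deque([0.0] * w, maxlen=w)   # previous row (y-1)
        line2 = deque([0.0] * w, maxlen=w)   # row before that (y-2)
        window = np.zeros((3, 3))
        out = np.zeros((h, w))
        for y in range(h):
            for x in range(w):
                pix = float(image[y, x])
                # column of three vertically adjacent pixels leaving the caches
                col = (line2[0], line1[0], pix)
                # shift the register window one column left and load the new column
                window[:, :2] = window[:, 1:]
                window[:, 2] = col
                # advance the FIFOs: the oldest pixel of line1 moves into line2
                line2.append(line1.popleft())
                line1.append(pix)
                # the window is valid once two full rows plus two pixels have streamed in
                if y >= 2 and x >= 2:
                    out[y - 1, x - 1] = float(np.sum(window * kernel))
        return out

    # e.g. a 3x3 mean filter on a random "image":
    # out = stream_filter_3x3(np.random.rand(64, 64), np.ones((3, 3)) / 9.0)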

3d-model-based-tracking-using-visp

We performed 3D model-based object detection, tracking and pose computation using the ViSP library. We saw great advantages in this library: tracking and detection can be done with a single line of code, the data structures defined for holding data are well designed, and the classes and methods are well documented with many examples.

8-point-algorithm-vs-5-point-algorithm

The goal of this practical is to compare the classical linear 8-point algorithm with the linear 5-point algorithm that assumes the vertical direction of the camera is known. Consider 50 points randomly distributed in the cube [-300, 300] × [-300, 300] × [-300, 300] expressed in the world frame (Ow; Xw; Yw; Zw). Let (Oc1; Xc1; Yc1; Zc1) and (Oc2; Xc2; Yc2; Zc2) denote the two camera frames. We assume a calibrated camera posed at rotation Ri and translation Ti with respect to the world frame (Xw = Ri Xci + Ti).
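
As an illustration of this simulation setup, here is a minimal NumPy sketch that draws the 50 points and projects them into two views. The intrinsics and camera poses below are arbitrary values chosen for the example, not the ones used in the practical.

    import numpy as np

    np.random.seed(0)
    # 50 points uniformly distributed in the [-300, 300]^3 cube, world frame
    Xw = np.random.uniform(-300, 300, size=(3, 50))

    def project(Xw, R, T, K):
        """Pinhole projection with the convention Xw = R * Xc + T."""
        Xc = R.T @ (Xw - T.reshape(3, 1))     # world -> camera frame
        x = K @ Xc
        return x[:2] / x[2]                   # normalise homogeneous coordinates

    K = np.array([[800.0, 0.0, 320.0],        # illustrative intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    R1, T1 = np.eye(3), np.array([0.0, 0.0, -1500.0])
    c, s = np.cos(0.1), np.sin(0.1)
    R2 = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])   # small rotation about Y
    T2 = np.array([100.0, 0.0, -1500.0])

    x1 = project(Xw, R1, T1, K)   # 2x50 image points in camera 1
    x2 = project(Xw, R2, T2, K)   # 2x50 image points in camera 2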

bug-0-algo-implementation-on-e-puck

A goal is set somewhere in the plane, and there may or may not be obstacles on the path. The robot first has to locate the goal relative to its own position, compute the rotation required to face the goal, turn accordingly, and then start moving towards it.
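
A minimal sketch of this Bug-0 decision logic in plain Python, independent of the e-puck or simulator API; the gains and thresholds are arbitrary example values.

    import math

    def heading_to_goal(robot_x, robot_y, robot_theta, goal_x, goal_y):
        """Rotation (rad) the robot must perform to face the goal."""
        desired = math.atan2(goal_y - robot_y, goal_x - robot_x)
        error = desired - robot_theta
        # wrap to [-pi, pi] so the robot always takes the shorter turn
        return math.atan2(math.sin(error), math.cos(error))

    def bug0_step(robot_pose, goal, obstacle_ahead):
        """One Bug-0 decision: skirt the obstacle if blocked, else head to the goal."""
        x, y, theta = robot_pose
        if obstacle_ahead:
            return 0.0, 0.5            # (linear, angular): turn along the obstacle
        err = heading_to_goal(x, y, theta, *goal)
        if abs(err) > 0.1:
            return 0.0, 1.5 * err      # rotate in place towards the goal
        return 0.1, 0.0                # drive straight towards the goal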

calibrated-sfm

Incremental SfM is the standard approach that adds one image at a time to grow the reconstruction. While this method is robust, it does not scale well because it requires repeated runs of expensive bundle adjustment. Global SfM differs from incremental SfM in that it considers the entire view graph at once instead of incrementally adding more and more images to the reconstruction.

face-recognition-using-pca

Principal Component Analysis (PCA) can serve several purposes; here we use it to reduce computational complexity and to capture the covariance between images, which matters when the image database is large. The eigenvectors of this covariance matrix, reshaped back into images, are the eigenfaces: ghost-like faces derived from the training set that span a face space. A test image is projected onto this face space, and its distances to the projections of the training faces locate it in the eigenspace; the training face with the smallest distance is returned as the matched image.
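
A minimal NumPy sketch of this eigenfaces pipeline; training faces are assumed to be flattened grayscale images stacked row-wise, and the number of components k is an example value.

    import numpy as np

    def train_eigenfaces(faces, k=20):
        """faces: (n_images, h*w) matrix of flattened training faces."""
        mean = faces.mean(axis=0)
        A = faces - mean                       # centred data
        # principal components via SVD (avoids forming the huge covariance matrix)
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        eigenfaces = Vt[:k]                    # k eigenfaces ("ghost faces")
        weights = A @ eigenfaces.T             # training faces projected into face space
        return mean, eigenfaces, weights

    def recognise(face, mean, eigenfaces, weights):
        """Index of the training face closest to `face` in the eigenface space."""
        w = (face - mean) @ eigenfaces.T
        distances = np.linalg.norm(weights - w, axis=1)
        return int(np.argmin(distances))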

fitting-a-3d-scan-of-an-object-using-optimisation

The main objective of this project was to obtain a new cloud of points by merging two clouds.

  • 1st difficulty: one cloud is complete, while the second is missing 10% of its points
  • 2nd difficulty: each cloud is missing 10% of its points
  • 1st step: applied to a classical geometrical volume
  • 2nd step: applied to real scans
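
The project description does not name the alignment method; a standard choice for merging two partially overlapping clouds is the Iterative Closest Point (ICP) algorithm, and the sketch below is a minimal NumPy/SciPy illustration of that idea, not the project's actual code. Missing points are tolerated because each iteration only uses nearest-neighbour pairs.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(P, Q):
        """Least-squares rotation/translation mapping points P onto Q (Kabsch)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:       # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp
        return R, t

    def icp(source, target, iters=30):
        """Iteratively align `source` (n x 3) onto `target` (m x 3)."""
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(iters):
            _, idx = tree.query(src)               # nearest-neighbour pairing
            R, t = best_rigid_transform(src, target[idx])
            src = src @ R.T + t
        return src

    # merged cloud: the aligned source stacked with the target
    # merged = np.vstack([icp(cloud_a, cloud_b), cloud_b])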

human-activity-recognition-from-videos-using-machine-learning

Video-based human action detection is currently a very active topic, and it has recently been demonstrated to be useful in a wide range of applications, including video surveillance, tele-monitoring of patients and elderly people, medical diagnosis and training, video content analysis and search, and intelligent human-computer interaction [1]. As video camera sensors become less expensive, this approach is increasingly attractive, since it is low cost and can be adapted to different video scenarios.

image-processing-toolbox-using-matlab

This application embeds MATLAB 2016b image processing functions behind a single GUI that displays the input and output images. It accepts one image at a time and stores it as the original image; each modification is then applied to the output image consecutively. By default, all operations are disabled until an image is selected.

image-processing-toolbox-using-opencv

Developing computer vision applications is difficult. One has to consider the capabilities of a framework on the one hand, and how the framework reacts and performs on given test data on the other. In other cases, one may simply want to see the effect of consecutive image processing functions on test data. In all of these cases, small toolboxes built on top of the frameworks help people see results easily and quickly and enable fast prototyping. This toolbox was therefore created for computer vision developers and enthusiasts who want to apply image processing functions to their own images with only basic knowledge. It has a minimal design, which makes it easy to use, yet it remains a powerful toolbox thanks to its support for function parameters.
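
As an illustration of the kind of parameterised, consecutive processing the toolbox exposes, here is a generic OpenCV-Python sketch, not the toolbox's own code; the file name and parameter values are placeholders.

    import cv2

    def run_pipeline(path):
        """Apply a chain of parameterised operations, keeping the original untouched."""
        original = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        if original is None:
            raise FileNotFoundError(path)
        output = original.copy()
        output = cv2.GaussianBlur(output, (5, 5), sigmaX=1.5)      # smoothing: kernel size and sigma are parameters
        output = cv2.Canny(output, threshold1=50, threshold2=150)  # edge detection: thresholds are parameters
        return original, output

    # original, result = run_pipeline("input.png")
    # cv2.imwrite("result.png", result)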

image-registration-using-matlab

Image registration is the procedure of aligning an unregistered image (also called the moving image) onto a template image (also called the fixed image) via a geometric transformation. The problem is usually addressed as presented in Fig. 1: an iterative procedure infers the geometric transformation (parametric or non-parametric) via an optimizer that maximizes the similarity between the two images.
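
A minimal sketch of this optimisation loop, restricted to a pure translation and using the sum of squared differences as the (dis)similarity measure (minimising SSD is equivalent to maximising similarity); SciPy stands in for the MATLAB registration framework used in the project.

    import numpy as np
    from scipy.ndimage import shift as translate
    from scipy.optimize import minimize

    def register_translation(fixed, moving):
        """Estimate the (dy, dx) translation aligning `moving` onto `fixed`
        by minimising the mean squared intensity difference."""
        def cost(p):
            warped = translate(moving, shift=p, order=1, mode='nearest')
            return np.mean((warped - fixed) ** 2)
        res = minimize(cost, x0=[0.0, 0.0], method='Powell')   # derivative-free optimiser
        return res.x

    # registered = translate(moving, shift=register_translation(fixed, moving), order=1)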

implemented-8-point-algorithm-

The eight-point algorithm is an algorithm used in computer vision to estimate the essential matrix or the fundamental matrix related to a stereo camera pair from a set of corresponding image points.
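
A compact NumPy sketch of the normalised eight-point algorithm (Hartley normalisation plus the rank-2 constraint); the input correspondences are assumed to be n x 2 arrays of pixel coordinates with n >= 8.

    import numpy as np

    def normalise(pts):
        """Hartley normalisation: zero mean, average distance sqrt(2)."""
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    def eight_point(x1, x2):
        """Estimate the fundamental matrix F such that x2^T F x1 = 0."""
        p1, T1 = normalise(x1)
        p2, T2 = normalise(x2)
        # each correspondence gives one row of the linear system A f = 0
        A = np.column_stack([
            p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
            p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
            p1[:, 0], p1[:, 1], np.ones(len(p1))])
        _, _, Vt = np.linalg.svd(A)
        F = Vt[-1].reshape(3, 3)
        # enforce the rank-2 constraint
        U, S, Vt = np.linalg.svd(F)
        F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
        return T2.T @ F @ T1                 # undo the normalisation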

implementing-active-contour-model-snakes-algorithm-

The active contour model, also called snakes, is a framework in computer vision for delineating an object outline from a possibly noisy 2D image. The snake model is popular in computer vision and is widely used in applications such as object tracking, shape recognition, segmentation, edge detection and stereo matching.
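
scikit-image ships an implementation of this model; below is a short usage example on a built-in test image. The initial contour and the energy weights alpha/beta/gamma are illustrative values, not parameters from this project.

    import numpy as np
    from skimage import data
    from skimage.color import rgb2gray
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    img = rgb2gray(data.astronaut())
    smoothed = gaussian(img, sigma=3)         # snakes behave better on a smoothed image

    # initial contour: a circle (row, column coordinates) around the astronaut's head
    s = np.linspace(0, 2 * np.pi, 400)
    init = np.column_stack([100 + 100 * np.sin(s), 220 + 100 * np.cos(s)])

    snake = active_contour(smoothed, init, alpha=0.015, beta=10, gamma=0.001)
    print(snake.shape)                        # (400, 2): the refined contour points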

intensity-based-visual-servoing-using-visp

We implemented intensity-based visual servoing, computing the robot velocities with the control law provided by the ViSP library. We saw great advantages in this library: tracking and detection can be done with a single line of code, the data structures defined for holding data are well designed, and the classes and methods are well documented with many examples. We found the library easy to use for visual servoing in the context of our tasks.
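
The underlying control law is the classical one, v = -λ L⁺ e, where L is the interaction matrix (built from image gradients in the intensity-based case) and e is the error on the visual features. Below is a generic NumPy sketch of this law, not the ViSP API itself; the dimensions and gain are example values.

    import numpy as np

    def control_law(L, error, lam=0.5):
        """Classical visual servoing control law: v = -lambda * pinv(L) * e."""
        return -lam * np.linalg.pinv(L) @ error

    # toy usage: 8 features, 6-DOF camera velocity (vx, vy, vz, wx, wy, wz)
    # L = np.random.rand(8, 6); e = np.random.rand(8)
    # v = control_law(L, e)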

management-and-post-processing-of-prostate-mri

Adenocarcinoma of the prostate appears in older men; about 85% of cases are diagnosed in men over 60 years old. Prostate cancer is a common cancer whose incidence and mortality are steadily increasing (85,000 new cases per year in Europe) [1]. It is the second most common cancer after lung cancer and the third leading cause of cancer death in men (9% of all cancer deaths in men in Europe).

mapping-and-localization-of-turtlebot-using-ros

The aim of the project is to gain experience in implementing different robotic algorithms using the ROS framework. The first task is to build a map of the environment and navigate to a desired location in the map. Next, the robot has to sense the location of a marker (e.g. an AR marker or colour marker) that identifies the pick-and-place station, then autonomously localise and navigate to that marker. After reaching it, the robot moves precisely towards the specified location using visual servoing. There, a robotic arm picks up an object (e.g. a small cube) and places it on the TurtleBot (the pick-and-place task). Afterwards, the robot needs to find another marker specifying the final target location and again autonomously localise and navigate to it, which completes the project.
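
For the navigation part, a minimal ROS 1 (rospy) sketch of sending a goal to the navigation stack through the move_base action server; the node name and the marker coordinates are placeholders, and the marker detection and visual servoing steps are not shown.

    #!/usr/bin/env python
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    def navigate_to(x, y, frame='map'):
        """Send a navigation goal to move_base and wait for the result."""
        client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        client.wait_for_server()

        goal = MoveBaseGoal()
        goal.target_pose.header.frame_id = frame
        goal.target_pose.header.stamp = rospy.Time.now()
        goal.target_pose.pose.position.x = x
        goal.target_pose.pose.position.y = y
        goal.target_pose.pose.orientation.w = 1.0   # keep the default heading

        client.send_goal(goal)
        client.wait_for_result()
        return client.get_state()

    if __name__ == '__main__':
        rospy.init_node('goto_marker')
        navigate_to(1.5, 0.5)   # hypothetical marker coordinates in the map frame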

perform-odometry-functions-on-e-puck

Odometry is the use of data from motion sensors to estimate change in position over time. It is used in robotics by some legged or wheeled robots to estimate their position relative to a starting location. This method is sensitive to errors due to the integration of velocity measurements over time to give position estimates. Rapid and accurate data collection, instrument calibration, and processing are required in most cases for odometry to be used effectively.
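
A minimal sketch of this velocity integration for a differential-drive robot such as the e-puck; the wheel speeds, the wheel base value and the hypothetical sensor log are example inputs.

    import math

    def update_odometry(x, y, theta, v_left, v_right, wheel_base, dt):
        """Integrate wheel velocities over one time step (differential-drive model)."""
        v = (v_right + v_left) / 2.0          # linear velocity of the robot centre
        w = (v_right - v_left) / wheel_base   # angular velocity
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
        return x, y, theta

    # pose = (0.0, 0.0, 0.0)
    # for v_l, v_r, dt in wheel_speed_log:    # hypothetical log of wheel speeds
    #     pose = update_odometry(*pose, v_l, v_r, 0.053, dt)   # ~5.3 cm axle length (approximate e-puck value)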

point-based-virtual-visual-servoing-using-visp

We performed dot detection, tracking, pose computation and virtual visual servoing using the ViSP library. We saw great advantages in this library: tracking and detection can be done with a single line of code, the data structures defined for holding data are well designed, and the classes and methods are well documented with many examples. We found the library easy to use for visual servoing in the context of our tasks.

projective-reconstruction

From several images of a scene and the coordinates of corresponding points identified in the different images, it is possible to construct a three-dimensional point-cloud model of the scene and compute the camera locations. From uncalibrated images the model can be reconstructed up to an unknown projective transformation, which can be upgraded to a Euclidean model by adding or computing calibration information.
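
A central step is triangulating each correspondence from two camera matrices; below is a minimal linear (DLT) triangulation sketch in NumPy, illustrative rather than the project's code. With uncalibrated cameras the projection matrices, and hence the recovered points, are only defined up to a projective transformation, as noted above.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one correspondence.
        P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]    # homogeneous -> inhomogeneous coordinates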
