
Motion detection by Mixture of Gaussian (MOG) background subtraction

Background subtraction is a commonly used method to segment a scene into its static (background) and moving (foreground) parts. Moving regions, which contain the edges of vehicles, are detected by subtracting the current frame of the video from a reference static background. The process of creating this reference background is known as background modeling. The background model must be continuously updated and must contain no moving objects.
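
As a minimal illustration of this idea (a hypothetical sketch with Emgu CV, not part of the project's code; the file names and threshold value are placeholders), the current frame can be differenced against a fixed reference background and thresholded to produce a binary motion mask:

using Emgu.CV;
using Emgu.CV.Structure;

// Hypothetical example: the reference background and the current frame as grayscale images
Image<Gray, byte> background = new Image<Gray, byte>("background.png");
Image<Gray, byte> frame = new Image<Gray, byte>("frame.png");

// Absolute difference between the current frame and the reference background
Image<Gray, byte> diff = frame.AbsDiff(background);

// Pixels whose difference exceeds the threshold are classified as moving (foreground)
Image<Gray, byte> motionMask = diff.ThresholdBinary(new Gray(25), new Gray(255));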

Figure: Motion detection using MOG background subtraction (Ali Tourani)

1. Mixture of Gaussian (MOG) background subtraction algorithm

One extension of the common background subtraction method is Mixture of Gaussian (MOG) background subtraction, which relies on a combination of frames instead of a single frame. In this method, each background pixel is modeled by a mixture of K Gaussian distributions together with a weighting parameter that records how long pixel values persist in the scene, where K typically varies between 3 and 5. Pixels that remain in the scene longer than a threshold time therefore have a higher probability of belonging to the background; in other words, if a pixel remains unchanged for a period of time, it is considered a dominant background pixel. The model is updated with an online approximation. Pixels in a frame whose difference from the model exceeds a predefined threshold are classified as moving parts. This method is quite sensitive to changes in the environment. In C# (using Emgu CV), the method can be used as follows:

// Create the MOG background subtractor with the desired parameters
BackgroundSubtractorMOG mog = new BackgroundSubtractorMOG(mog_history, mog_nMixtures, mog_backgroundRatio, mog_noiseSigma);

// Feed the current frame to the model and obtain the binary foreground mask
mog.Apply(frame, foregroundMask);
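
For context, here is a hedged sketch of how such a subtractor is typically driven frame by frame (not the project's actual code; it assumes the VideoCapture/Mat/Apply API of recent Emgu CV versions, and the parameter values and file name are illustrative):

using Emgu.CV;
using Emgu.CV.BgSegm;

// Illustrative parameter values; the project exposes them as mog_history, mog_nMixtures, etc.
var mog = new BackgroundSubtractorMOG(200, 5, 0.7, 0);

using (var capture = new VideoCapture("input.mp4"))
using (var frame = new Mat())
using (var foregroundMask = new Mat())
{
    // Read frames until the video ends; each frame updates the background model
    while (capture.Read(frame) && !frame.IsEmpty)
    {
        mog.Apply(frame, foregroundMask);
        // foregroundMask now holds the moving regions of the current frame
    }
}
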
2. Mixture of Gaussian (MOG2) background subtraction algorithm

Another Gaussian mixture-based background/foreground segmentation algorithm is known as MOG2. The difference between MOG and MOG2 is that MOG2 selects an appropriate number of Gaussian distributions for each pixel, whereas MOG uses a fixed number of K Gaussian distributions for modeling. For this reason, MOG2 adapts better to scenes that vary due to illumination changes. The algorithm can also be configured to detect shadows. In C#, it can be used as follows:

bool detectShadows = true;
BackgroundSubtractorMOG2 mog2 = new BackgroundSubtractorMOG2(history, varThreshold, detectShadows);

// Feed the current frame to the model and obtain the binary foreground mask
mog2.Apply(frame, foregroundMask);
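
When shadow detection is enabled, MOG2 marks shadow pixels in the foreground mask with a distinct gray value (127 by default in OpenCV) rather than 255, so they can be removed afterwards with a simple threshold. A small hypothetical post-processing sketch (not part of the project's code):

using Emgu.CV;
using Emgu.CV.CvEnum;

// foregroundMask is the mask produced by mog2.Apply(frame, foregroundMask) above.
// True foreground pixels are 255 and shadow pixels are a lower gray value,
// so thresholding keeps the moving objects and drops the shadows.
Mat motionOnly = new Mat();
CvInvoke.Threshold(foregroundMask, motionOnly, 200, 255, ThresholdType.Binary);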

Environment

The application is implemented in the C# programming language and uses the AForge.Net and EmguCV image processing libraries.

How to Run the Project

Simply clone the repository and start the solution in Visual Studio.

Known Issues

Here is the list of known issues and bugs that I will work on later:

  • Need a play/pause button in the UI
  • Need a configuration pane to easily manipulate MOG and MOG2 input parameters
  • Bug: a null-reference error occurs when the video finishes in MOG/MOG2 view modes

References

  1. P. KaewTraKulPong and R. Bowden, "An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection," 2nd European Workshop on Advanced Video-based Surveillance Systems, Genova, 2002. (link)
  2. Z. Zivkovic, "Improved Adaptive Gaussian Mixture Model for Background Subtraction," Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, 2004. (link)
  3. A. Tourani, A. Shahbahrami, A. Akoushideh, S. Khazaee, and C. Y. Suen, "Motion-based Vehicle Speed Measurement for Intelligent Transportation Systems," International Journal of Image, Graphics and Signal Processing, vol. 11, no. 4, pp. 42-54, 2019. (link)
  4. A. Tourani, A. Shahbahrami and A. Akoushideh, "Challenges of Video-Based Vehicle Detection and Tracking in Intelligent Transportation Systems," International Conference on Soft Computing, Rudsar, 2017. (link)


