Change detection, or background subtraction, is a key element of surveillance and vision-based applications. These applications are often used in real-time projects such as a visitor counter in a building, where a static camera takes regular frames and sends them back to a server. Another example would be a traffic control application based on the amount and flow of traffic. In both cases the first critical step is to subtract the vehicles or people from the scene so that we can decide what actions should be taken. If we have a plain background image, such as a road without vehicles or a lobby without visitors, we can take that image as the background and compare it to incoming frames to find the difference, which tells us two things: one, has any change occurred in the scene, and two, if there has been a change, is it a visitor, a vehicle, or something else?

In this way we get only the change in the scene, but there are a few possible sources of error in this method, which we can limit by reducing noise. One large source of error is the shadow of a foreground object. Since the background is an empty scene, a vehicle that arrives in the foreground together with its shadow can create problems.

Background subtraction is mostly used for tracking and detecting moving objects. This is done by finding the difference between the current frame and a reference frame, and is also called background model subtraction. Background subtraction works best indoors in a static environment; in outdoor environments, elements like wind, rain, other weather conditions, light, and shadows can affect the results. Background subtraction has various techniques, such as frame differencing, the mean filter, the Gaussian average, and the background mixture model.
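The running (Gaussian) average idea mentioned above can be sketched without OpenCV: the stored background drifts toward each new frame at a small learning rate, so slow lighting changes are absorbed while fast-moving objects are not. The function name and the alpha parameter below are our own illustrative choices, not an OpenCV API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Running-average background model: each new frame nudges the stored
// background toward the current scene. alpha is the learning rate
// (0 = never update, 1 = background becomes the current frame).
std::vector<double> update_background(const std::vector<double>& background,
                                      const std::vector<double>& frame,
                                      double alpha) {
    std::vector<double> updated(background.size());
    for (size_t i = 0; i < background.size(); ++i)
        updated[i] = (1.0 - alpha) * background[i] + alpha * frame[i];
    return updated;
}
```

Calling this once per incoming frame keeps the background model current without storing any frame history.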

First we’ll look at the frame differencing technique, a method that starts with segmentation and feature extraction. We find objects from the difference between the background and the foreground. This method is implemented by capturing a background frame from the scene, which is preserved for later use. We then treat each incoming frame as the foreground and find its difference from the background frame. Hence, by using simple arithmetic operations between the pixels of two frames, we can find objects. We also set a threshold value to keep real objects and discard noise and object shadows: if a pixel difference is greater than the threshold, it is considered part of a real object; otherwise it is ignored.
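As a minimal sketch of the thresholded difference described above, using plain arrays instead of OpenCV Mats (the function name and threshold value are our own, chosen for illustration):

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// Frame differencing: |foreground - background| per pixel, then a
// threshold separates real change from noise. Values are 8-bit style
// intensities; the right threshold is application dependent.
std::vector<int> frame_difference(const std::vector<int>& background,
                                  const std::vector<int>& foreground,
                                  int threshold) {
    std::vector<int> mask(background.size());
    for (size_t i = 0; i < background.size(); ++i) {
        int diff = std::abs(foreground[i] - background[i]);
        mask[i] = (diff > threshold) ? 255 : 0;  // 255 = change, 0 = static
    }
    return mask;
}
```

With a background of `{10, 50, 200}` and a foreground of `{12, 180, 198}`, only the middle pixel exceeds a threshold of 30 and survives as a detection.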

Another technique is mean filtering. In mean filtering we use a background as in the frame differencing technique, but the foreground is built differently and is a little more robust: instead of taking a single frame as the foreground, we take a number of frames and average them. After forming the foreground we find the difference between the background and the foreground. For better noise removal, a thresholding step is applied: we ignore pixels in the accumulator array that fall below a certain value, so that after thresholding we can be confident we are getting smooth, correct values and that noisy data is discarded.

BackgroundSubtractorMOG and BackgroundSubtractorMOG2 are also well-known background subtraction techniques. BackgroundSubtractorMOG is a proven background subtraction method for real-time tracking and detection. It models each background pixel with a mixture of K Gaussian distributions and weights each component of the mixture by the proportion of time its color has stayed in the scene. In code we create a BackgroundSubtractorMOG object and then pass Mat frames to its apply function to perform the filtering. BackgroundSubtractorGMG is another well-known technique for background subtraction and change detection.
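The full mixture model updates weights, means, and variances online; as a much-simplified sketch of just the matching test (our own struct and function names, not the OpenCV API), a pixel counts as background if a sufficiently weighted Gaussian component explains it:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One Gaussian component of the per-pixel mixture.
struct Gaussian {
    double mean;
    double variance;
    double weight;  // proportion of recent history this component explains
};

// Simplified MOG-style check: a pixel is background if it lies within
// 2.5 standard deviations of a component whose weight is high enough.
bool is_background(double pixel, const std::vector<Gaussian>& mixture,
                   double weight_threshold) {
    for (const auto& g : mixture) {
        double dist = std::fabs(pixel - g.mean);
        if (g.weight >= weight_threshold && dist <= 2.5 * std::sqrt(g.variance))
            return true;
    }
    return false;
}
```

A pixel of 102 matches a component with mean 100 and variance 4 (within 2.5 standard deviations), while a pixel of 150 matches nothing and is flagged as foreground.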

The BackgroundSubtractorGMG algorithm combines statistical background image estimation with per-pixel segmentation. It takes the first few frames for background modeling; after the background is set, it treats incoming frames as foreground and applies a probabilistic foreground segmentation algorithm.

Those were a few techniques for background subtraction and change detection. Now we will discuss some important preprocessing points. Before we can start subtracting things, we must begin with smooth, noise-free frames. To do this we’ll use averaging: we take the average of the pixels under a kernel and, one by one, replace each central pixel with that average. With this filter we must declare the kernel size up front, so that specific areas are smoothed out and the rest of the image is not changed or affected. The next noise removal technique is median filtering, in which we take the median of the neighborhood and assign its value to the center. This is a really effective technique against salt-and-pepper noise, a common noise type in frames. In all the other filters the central value is newly computed, while in median filtering the pixel is always replaced by a value that actually occurs in the image. Now let’s take a look at some code for background subtraction.
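A one-dimensional median filter makes the salt-and-pepper property easy to see (plain arrays again; the 3-wide window and the function name are our own illustrative choices):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// 1-D median filter sketch: replace each pixel (away from the borders)
// with the median of a 3-wide neighbourhood. Salt-and-pepper outliers
// vanish because the median is always an existing neighbouring value.
std::vector<int> median_filter3(const std::vector<int>& row) {
    std::vector<int> out = row;  // border pixels kept as-is
    for (size_t i = 1; i + 1 < row.size(); ++i) {
        std::vector<int> window = {row[i - 1], row[i], row[i + 1]};
        std::sort(window.begin(), window.end());
        out[i] = window[1];  // middle of three sorted values
    }
    return out;
}
```

Note how the 255 ("salt") and 0 ("pepper") spikes in `{10, 255, 10, 12, 0, 11}` are replaced by plausible neighbouring intensities, something a mean filter would only smear.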

Code:

#include <opencv2/opencv.hpp>
#include <iostream>

using namespace std;
using namespace cv;

int main() {
    Mat src, frame, nave, nave1, b;
    String path = "D:/Video/g.mp4";
    VideoCapture cap(path);
    if (!cap.isOpened()) {
        cout << "error in frame get" << endl;
        return -1;
    }
    namedWindow("my_window");
    for (;;) {
        cap >> src;                        // grab the first frame
        cap >> frame;                      // grab the next frame
        if (src.empty() || frame.empty())
            break;
        Rect roi(222, 120, 250, 323);      // region of interest
        nave = src(roi);
        nave1 = frame(roi);
        int fps = (int)cap.get(CAP_PROP_FPS);
        cout << fps << endl;
        absdiff(nave, nave1, b);           // absolute per-pixel difference
        imshow("my_window", src);
        imshow("Difff", b);
        if (waitKey(30) >= 0)
            break;
    }
    return 0;
}

Directions and discussion on code:

At the beginning of the code we include the required OpenCV headers. After this we create Mat objects. A Mat is a two-dimensional array consisting of a header and a pointer to the pixel values; its size depends on the size of the image or frame. A Mat is declared with the Mat keyword followed by the object name. After that we make a string for the video path, which we can pass directly to the VideoCapture constructor. In the example above, if the capture opened successfully it means we are getting video frames and can move on to the further steps.

Frame fetching is the primary task. After fetching frames we set a region of interest on them; the region of interest selects the desired area of a frame, so this is where we identify our area for change detection. After the ROI selection we assign that ROI to Mat objects for further processing.
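The ROI crop itself is just index arithmetic, analogous to the src(roi) call in the code above (though cv::Mat returns a view into the same pixel data rather than a copy). This sketch uses a plain row-major image and our own function name:

```cpp
#include <cassert>
#include <vector>

// Copy a width x height rectangle out of a 2-D image; (x, y) is the
// top-left corner of the region of interest.
std::vector<std::vector<int>> crop_roi(const std::vector<std::vector<int>>& img,
                                       int x, int y, int width, int height) {
    std::vector<std::vector<int>> roi(height, std::vector<int>(width));
    for (int r = 0; r < height; ++r)
        for (int c = 0; c < width; ++c)
            roi[r][c] = img[y + r][x + c];
    return roi;
}
```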

The last and main step is to pass the Mat objects to the absdiff function, which performs the absolute difference between two frames. This is the same idea as the frame differencing technique, in which we first make a background and find its difference from every incoming foreground frame. In absdiff the difference is computed from per-pixel value differences, so at the end we get a Mat object that contains all the differences between the two frames. After all this, we simply show the difference and the original frame using OpenCV’s imshow function.

Hope you enjoyed the article. Be sure to stop by the homepage to search, compare, review, and download the best SDK tools.