|Non-Parametric Model for Background Subtraction|
|Ahmed Elgammal David Harwood Larry Davis|
In video surveillance systems, stationary cameras are typically used to monitor activities at outdoor or indoor sites. Since the cameras are stationary, the detection of moving objects can be achieved by comparing each new frame with a representation of the scene background. This process is called background subtraction, and the scene representation is called the background model. Typically, background subtraction forms the first stage in automated visual surveillance systems, and its results are used for further processing, such as tracking targets and understanding events.
In outdoor environments with moving trees and bushes, the background of the scene is not completely static. For example, one pixel can be the image of the sky in one frame, a tree leaf in another, a tree branch in a third, and some mixture subsequently; in each case the pixel will have a different intensity (color). This research focuses on how to construct a statistical representation of the scene background that supports sensitive detection of moving objects in the scene.
We introduce a novel background model and a background subtraction technique based on nonparametric statistical modeling of the pixel process. The model keeps a sample of intensity values for each pixel in the image and uses this sample to estimate the probability density function of the pixel intensity. The density function is estimated using a kernel density estimation technique. Since this approach is quite general, the model can approximate any distribution of pixel intensity without any assumptions about the shape of the underlying distribution.
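The per-pixel density estimate can be sketched as follows. This is not the authors' code; it is a minimal illustration assuming a Gaussian kernel of fixed bandwidth `sigma` and a hypothetical probability `threshold` for the foreground decision: the probability of a new intensity is the average of kernels centered at each stored background sample.

```python
import numpy as np

def kde_probability(x_t, samples, sigma):
    """Estimate Pr(x_t) from N stored background samples using a
    Gaussian kernel (illustrative helper, not the authors' code)."""
    samples = np.asarray(samples, dtype=float)
    # One Gaussian kernel centered at each stored sample; the density
    # estimate is the mean of the kernel responses.
    diffs = x_t - samples
    kernels = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return kernels.mean()

def is_foreground(x_t, samples, sigma=5.0, threshold=1e-3):
    """Flag the pixel as foreground when its estimated probability under
    the background model falls below a threshold (hypothetical value)."""
    return kde_probability(x_t, samples, sigma) < threshold
```

In practice the bandwidth would be chosen per pixel, e.g. from the variation between consecutive samples, rather than fixed globally as in this sketch.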
The model can handle situations where the background of the scene is cluttered and not completely static but contains small motions due to moving tree branches and bushes. The model is updated continuously and therefore adapts to changes in the scene background. The approach runs in real time and has been used successfully for motion segmentation in many research projects in our labs.