Introduction

Identifying moving objects from a video sequence is a fundamental and critical task in many computer-vision applications. A common approach is to perform background subtraction, which identifies moving objects from the portion of a video frame that differs significantly from a background model. There are many challenges in developing a good background subtraction algorithm. First, it must be robust against changes in illumination. Second, it should avoid detecting non-stationary background objects such as moving leaves, rain, snow, and shadows cast by moving objects. Finally, its internal background model should react quickly to changes in background such as starting and stopping of vehicles.

Our research began with a comparison of various background subtraction algorithms for detecting moving vehicles and pedestrians in urban traffic video sequences (Cheung and Kamath 2004). We considered approaches ranging from simple techniques, such as frame differencing and adaptive median filtering, to more sophisticated probabilistic modeling techniques. While complicated techniques often produce superior performance, our experiments show that simple techniques such as adaptive median filtering can produce good results with much lower computational complexity.
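The adaptive (approximate) median idea mentioned above can be sketched in a few lines: each background pixel is nudged one grey level toward the current frame, so that over time it converges to the temporal median of that pixel; pixels far from the model are flagged as moving. The threshold of 30 grey levels below is an illustrative value, not one taken from our experiments.

```python
import numpy as np

def adaptive_median_update(background, frame):
    """Nudge each background pixel one grey level toward the current
    frame. Over many frames the model converges to the per-pixel
    temporal median, at far lower cost than a true median filter."""
    background = background.astype(np.int16)
    background[frame > background] += 1
    background[frame < background] -= 1
    return background

def foreground_mask(background, frame, threshold=30):
    """Flag pixels that differ from the background model by more than
    `threshold` grey levels as moving (threshold is illustrative)."""
    return np.abs(frame.astype(np.int16) - background) > threshold
```

Because the update is a single increment or decrement per pixel, the model is cheap to maintain but still robust to transient outliers such as a single bright headlight sweeping past.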

In addition, we found that pre- and post-processing of the video might be necessary to improve the detection of moving objects. For example, by spatial and temporal smoothing, we can remove snow from a video as shown in Figure 1. Small moving objects, such as moving leaves on a tree, can be removed by morphological processing of the frames after the identification of the moving objects, as shown in Figure 2.
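The morphological post-processing step can be illustrated with a binary opening, assuming SciPy is available; the 3x3 structuring element is an assumed choice for this sketch. An opening erodes the mask, deleting blobs thinner than the structuring element (isolated leaf detections), then dilates, restoring the surviving blobs (vehicles, pedestrians) to size.

```python
import numpy as np
from scipy import ndimage

def clean_foreground_mask(mask):
    """Remove small, isolated detections (e.g. moving leaves) from a
    binary foreground mask with a morphological opening: erosion
    deletes blobs thinner than the structuring element, and the
    following dilation restores the surviving blobs to size."""
    structure = np.ones((3, 3), dtype=bool)  # assumed neighbourhood size
    return ndimage.binary_opening(mask, structure=structure)
```

Larger structuring elements remove larger nuisance regions but also risk deleting small genuine objects such as distant pedestrians, so the size is a tuning parameter.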


Figure 1. Video frame on left showing a traffic scene while snowing. Note the streaks in the image due to the snow flakes. The same video frame after spatial and temporal smoothing is on the right, without the snow streaks.



Figure 2. The video frame on the left highlights, in pink, the objects detected as moving. Note the movement of the leaves on the trees in the foreground. Morphological processing cleans up the video frame as shown on the right.

The rate and weight of model updates greatly affect foreground results. Slow-adapting background models cannot quickly overcome large changes in the image background (such as a cloud passing over a scene). This results in a period of time during which many background pixels are incorrectly classified as foreground pixels. A slow update rate also tends to create a ghost mask which trails the actual object. Fast-adapting background models can quickly deal with background changes, but they fail at low frame rates. They are also very susceptible to noise and the aperture problem. These observations indicate that a hybrid approach might help mitigate the drawbacks of each.
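The trade-off can be seen in the simplest adaptive model, an exponential running average B ← (1 − α)B + αF. The learning rates below are illustrative, not values from our experiments: after a sudden background change, the fast model converges within a few frames while the slow model keeps misclassifying the changed pixels as foreground.

```python
import numpy as np

def update_background(background, frame, alpha):
    """Exponential running average: alpha near 0 adapts slowly,
    alpha near 1 tracks the current frame almost immediately."""
    return (1.0 - alpha) * background + alpha * frame

# A sudden background change, e.g. a cloud darkening the scene
# from grey level 200 to 100, observed for ten frames.
slow, fast = 200.0, 200.0
for _ in range(10):
    slow = update_background(slow, 100.0, alpha=0.01)  # slow-adapting
    fast = update_background(fast, 100.0, alpha=0.5)   # fast-adapting
# fast is now within a fraction of a grey level of 100, while slow
# is still near 190, so those pixels remain false foreground.
```

The same asymmetry explains the ghost mask: the slow model still remembers a vehicle long after it has driven away, leaving a trailing region of spurious foreground.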

We have created a new foreground validation technique that can be applied to any slow-adapting background subtraction algorithm (Cheung and Kamath 2005). Slow-adapting methods produce relatively stable masks and tend to be more inclusive than fast-adapting methods. As a result, they can also have a high false-positive rate. Foreground validation further examines individual foreground pixels in an attempt to eliminate false positives. Our algorithm first obtains a foreground mask from a slow-adapting algorithm, and then validates foreground pixels by a simple moving object model built using both foreground and background statistics as well as a fast-adapting algorithm (Figure 3).
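A minimal sketch of the validation idea (a simplification, not the published algorithm, which builds a moving-object model from foreground and background statistics): keep a pixel from the inclusive slow-adapting mask only if the fast-adapting detector also saw motion within a small radius of it, discarding isolated slow-mask false positives. The radius of 2 pixels is an assumed value.

```python
import numpy as np
from scipy import ndimage

def validate_foreground(slow_mask, fast_mask, radius=2):
    """Keep a pixel from the (inclusive) slow-adapting mask only if
    the fast-adapting mask detected motion within `radius` pixels;
    slow-mask detections with no nearby motion are rejected."""
    structure = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
    nearby_motion = ndimage.binary_dilation(fast_mask, structure=structure)
    return slow_mask & nearby_motion
```

The dilation gives the fast detector some spatial slack, since fast-adapting methods often fire only on the leading and trailing edges of a moving object.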



Figure 3. The mixtures of Gaussians approach (a) is not very robust to changes in illumination in comparison with our proposed method (b).


Ground-truth experiments with urban traffic sequences have shown that our proposed algorithm performs comparably to or better than other background subtraction techniques (Figure 4).

Figure 4. Comparison of different algorithms. (a) Original image showing a car which starts to move after being stationary for a while. Foreground detected by (b) frame differencing, (c) approximate median, (d) median, (e) Kalman filter, (f) mixtures of Gaussians, and (g) our new method with foreground validation.

Acknowledgments

The videos used in our work are from the website maintained by KOGS-IAKS Universitaet Karlsruhe. We appreciate their willingness to make their data publicly available.

References

Cheung, S.-C. and C. Kamath, "Robust Background Subtraction with Foreground Validation for Urban Traffic Video," EURASIP Journal on Applied Signal Processing, Volume 14, pp 1-11, 2005. UCRL-JRNL-201916.

Cheung, S.-C. and C. Kamath, "Robust Techniques for Background Subtraction in Urban Traffic Video," Video Communications and Image Processing, SPIE Electronic Imaging, San Jose, January 2004. UCRL-JC-153846-ABS, UCRL-CONF-200706.

For more technical information, contact: kamath2@llnl.gov -- Chandrika Kamath, (925) 423-3768
UCRL-WEB-214348. These pages were last modified on August 8, 2005.