Abstract
Automated processing of video streams is core to modern surveillance systems. The basic building blocks of video processing are object detection and tracking; tracking results are further analyzed to detect events and activities for situation assessment. Several approaches to object detection and tracking are based on background modeling, which is generally vulnerable to noise and illumination changes. Furthermore, an object's appearance may change over an image sequence due to variations in orientation, lighting, and occlusion. In this chapter, we explore the application of neurobiological saliency to object detection and tracking using particle filters. We use low-level features such as color, luminance, and edge information, along with motion cues, to track a single person. Experimental results show that the approach is illumination invariant and can track persons under varying lighting conditions.
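The idea summarized above, a particle filter whose observation likelihood comes from a saliency map rather than an appearance template, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy saliency here uses only a luminance center-surround contrast, whereas the chapter combines color, luminance, edge, and motion conspicuity maps.

```python
import numpy as np

rng = np.random.default_rng(0)

def saliency_map(frame):
    """Toy saliency: center-surround contrast on luminance.

    Only a luminance-contrast proxy is used here for brevity; the
    chapter's saliency model fuses several feature channels.
    """
    pad = np.pad(frame, 2, mode="edge")
    # 5x5 box-filtered "surround" around each pixel.
    surround = sum(
        pad[i:i + frame.shape[0], j:j + frame.shape[1]]
        for i in range(5) for j in range(5)
    ) / 25.0
    return np.abs(frame - surround)

def particle_filter_step(particles, weights, frame, motion_std=3.0):
    """One predict / update / resample cycle.

    particles : (N, 2) array of (row, col) hypotheses for the target.
    weights   : (N,) normalized importance weights.
    Each particle's likelihood is the saliency at its location.
    """
    h, w = frame.shape
    # Predict: random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, h - 1)
    particles[:, 1] = np.clip(particles[:, 1], 0, w - 1)
    # Update: reweight by saliency at each particle position.
    sal = saliency_map(frame)
    rows = particles[:, 0].astype(int)
    cols = particles[:, 1].astype(int)
    weights = weights * (sal[rows, cols] + 1e-12)
    weights /= weights.sum()
    # Systematic resampling when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(weights) / 2:
        positions = (np.arange(len(weights)) + rng.random()) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Synthetic frame: a bright blob (the "salient" target) on a dark background.
frame = np.zeros((64, 64))
frame[28:36, 40:48] = 1.0
particles = rng.uniform(0, 64, (200, 2))
weights = np.full(200, 1.0 / 200)
for _ in range(10):
    particles, weights = particle_filter_step(particles, weights, frame)
# Posterior-mean estimate of the target location.
estimate = (weights[:, None] * particles).sum(axis=0)
print(estimate)
```

Because the likelihood depends only on saliency, not on absolute pixel intensities, this style of observation model is what gives the tracker its robustness to illumination changes.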
References
Abrams, R.A., Christ, S.E.: Motion onset captures attention. Psychol. Sci. 14(5) (2003)
Treisman, A.M., Gelade, G.: A feature-integration theory of attention. Cogn. Psychol. 12, 97–136 (1980)
Baluja, S., Pomerleau, D.: Expectation based selective attention for visual monitoring and control of a robot vehicle. Robot. Auton. Syst. 22(3–4), 329–344 (1997)
Barth, E., Dorr, M., Böhme, M., Gegenfurtner, K.R., Martinetz, T.: Guiding the mind’s eye: improving communication and vision by external control of the scanpath. In: Proc. SPIE Human Vision and Electronic Imaging, San Jose, CA, USA, vol. 6057 (2006)
Boiman, O., Irani, M.: Detecting irregularities in images and in video. In: IEEE Intl Conf. on Computer Vision, pp. 1–8 (2005)
Cannon, M., Fullenkamp, S.: A model for inhibitory lateral interaction effects on perceived contrast. Vis. Res. 36(8), 1115–1125 (1996)
Chen, C., Wolf, W.: Background modeling and object tracking using multi-spectral sensors. In: 4th ACM International Workshop on Video Surveillance and Sensor Networks, pp. 27–34 (2006)
Comaniciu, D., Ramesh, V., Meer, P.: Kernel-based object tracking. IEEE Trans. Pattern Anal. Mach. Intell. 25(5), 564–577 (2003)
Engel, S., Zhang, X., Wandell, B.: Color tuning in visual cortex measured with functional magnetic resonance imaging. Nature 388(6637), 68–71 (1997)
Greenspan, H., Belongie, S., Goodman, R., Perona, P., Rakshit, S., Anderson, C.H.: Overcomplete steerable pyramid filters and rotation invariance. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 222–228 (1994)
Itti, L., Baldi, P.: Bayesian surprise attracts human attention. In: Neural Information Processing Systems (NIPS), pp. 1–8 (2005)
Itti, L., Baldi, P.: A principled approach to detecting surprising events in video. In: IEEE Intl. Conf. Computer Vision and Pattern Recognition, pp. 631–637 (2005)
Itti, L., Koch, C.: A saliency-based search mechanism for overt and covert shifts of visual attention. Vis. Res. 40, 1489–1506 (2000)
Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell. 20(11), 1254–1259 (1998)
Kadir, T., Brady, M.: Saliency, scale and image description. Int. J. Comput. Vis. 45(2), 85–105 (2001)
Koch, C., Ullman, S.: Shifts in selective visual attention: towards the underlying neural circuitry. Hum. Neurobiol. 4, 219–227 (1985)
Leventhal, A.: The Neural Basis of Visual Function. Vision and Visual Dysfunction, vol. 4. CRC Press, Boca Raton (1991)
Logan, G.: The CODE theory of visual attention: an integration of space-based and object-based attention. Psychol. Rev. 103, 603–649 (1996)
Lowe, D.G.: Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 60, 91–110 (2004)
Ma, Y., Lu, L., Zhang, H., Li, M.: A user attention model for video summarization. In: Proceedings of ACM Multimedia (2002)
Mahapatra, D., Sun, Y.: Nonrigid registration of dynamic renal MR images using a saliency based MRF model. In: Proc. MICCAI, pp. 771–779 (2008)
Mahapatra, D., Sun, Y.: Registration of dynamic renal MR images using neurobiological model of saliency. In: Proc. ISBI, pp. 1119–1122 (2008)
Mahapatra, D., Sun, Y.: Using saliency features for graphcut segmentation of perfusion kidney images. In: 13th International Conference on Biomedical Engineering, pp. 639–642 (2008)
Mahapatra, D., Sun, Y.: Joint registration and segmentation of dynamic cardiac perfusion images using MRFs. In: Proc. MICCAI, pp. 493–501 (2010)
Mahapatra, D., Sun, Y.: MRF-based intensity-invariant elastic registration of cardiac perfusion images using saliency information. IEEE Trans. Biomed. Eng. 58(4), 991–1000 (2011)
Mahapatra, D., Sun, Y.: Orientation histograms as shape priors for left ventricle segmentation using graph cuts. In: Proc. MICCAI, pp. 420–427 (2011)
Mahapatra, D., Sun, Y.: Integrating segmentation information for improved MRF-based elastic image registration. IEEE Trans. Image Process. 21(1), 170–183 (2012)
Mahapatra, D., Saini, M., Sun, Y.: Illumination invariant tracking in office environments using neurobiology-saliency based particle filter. In: IEEE ICME, pp. 953–956 (2008)
Mahapatra, D., Winkler, S., Yen, S.C.: Motion saliency outweighs other low-level features while watching videos. In: Proc. SPIE Human Vision and Electronic Imaging, San Jose, CA, vol. 6806 (2008)
Milanese, R., Gil, S., Pun, T.: Attentive mechanisms for dynamic and static scene analysis. Opt. Eng. 34(8), 2428–2434 (1995)
Mozer, M., Sitton, M.: Computational modeling of spatial attention. In: Pashle, H. (ed.) Attention, pp. 341–393. UCL Press, London (1998)
Niebur, E., Koch, C.: Computational architectures for attention. In: Parasuraman, R. (ed.) The Attentive Brain, pp. 163–186. MIT Press, Cambridge (1998)
Nie, Y., Ma, K.-K.: Adaptive rood pattern search for fast block-matching motion estimation. IEEE Trans. Image Process. 11(12), 1442–1448 (2002)
Nummiaro, K., Koller-Meier, E., Van Gool, L.: An adaptive color-based particle filter. Image Vis. Comput. 21(1), 99–110 (2003)
Olshausen, B., Anderson, C.H., van Essen, D.: A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. J. Neurosci. 13(11), 4700–4719 (1993)
Posner, M., Cohen, Y.: Components of visual orienting. In: Bouma, H., Bouwhuis, D. (eds.) Attention and Performance, pp. 531–556. Erlbaum, Hilldale (1984)
Robinson, D., Peterson, S.: The representation of visual salience in monkey parietal cortex. Nature 391(6666), 481–484 (1998)
Serre, T., Wolf, L., Poggio, T.: A new biologically motivated framework for robust object recognition. Technical report AI Memo 2004-026, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology (2004)
Simoncelli, E.P., Heeger, D.J.: A model of neuronal responses in visual area MT. Vis. Res. 38(5), 743–761 (1998)
Soto, D., Blanco, M.: Spatial attention and object-based attention: a comparison within a single task. Vis. Res. 44, 69–81 (2004)
Spengler, M., Schiele, B.: Towards robust multi-cue integration for visual tracking. Mach. Vis. Appl. 14(1), 50–58 (2003)
Tremoulet, P., Feldman, J.: Perception of animacy from the motion of a single object. Perception 29, 943–951 (2000)
Triesch, J., von der Malsburg, C.: Self-organized integration of adaptive visual cues for face tracking. In: International Conference on Automatic Face and Gesture Recognition, pp. 102–107 (2000)
Tsotsos, J., Culhane, S., Wai, W.Y.K., Lai, Y., Davis, N., Nuflo, F.: Modeling visual attention via selective tuning. Artif. Intell. 78(1), 507–545 (1995)
Wixson, L.: Detecting salient motion by accumulating directionally-consistent flow. IEEE Trans. Pattern Anal. Mach. Intell. 22(8), 774–780 (2000)
Yilmaz, A., Javed, O., Shah, M.: Object tracking: a survey. ACM Comput. Surv. 38(4) (2006)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
Cite this chapter
Mahapatra, D., Saini, M. (2013). A Particle Filter Framework for Object Tracking Using Visual-Saliency Information. In: Atrey, P., Kankanhalli, M., Cavallaro, A. (eds) Intelligent Multimedia Surveillance. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-41512-8_7
DOI: https://doi.org/10.1007/978-3-642-41512-8_7
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-41511-1
Online ISBN: 978-3-642-41512-8
eBook Packages: Computer Science, Computer Science (R0)