Abstract
To track targets effectively under partial occlusion and illumination variation, this paper proposes an improved target tracking method that combines sparse representation with particle filtering. We treat the candidate target particle set as a redundant dictionary and the target template as the observation signal, which reduces computational complexity and improves the real-time performance of tracking. To further improve robustness to illumination change and occlusion, the method also fuses density-histogram and local binary pattern (LBP) features and employs trivial templates and energy control parameters. Extensive simulation experiments under varied conditions show that the proposed method outperforms competing methods while greatly reducing average computation time.
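The dictionary role reversal the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function name `sparse_code_ista`, the regularization weight, and the synthetic particle data are all assumptions. Each candidate particle patch becomes one dictionary column, identity columns serve as the trivial templates that absorb occlusion, and the single target template is sparsely reconstructed over that dictionary via iterative soft-thresholding (ISTA); the particle with the largest coefficient is the best match.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding, the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code_ista(D, y, lam=0.05, n_iter=200):
    """Approximately solve min_x 0.5*||y - D x||^2 + lam*||x||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + D.T @ (y - D @ x) / L, lam / L)
    return x

# Role reversal described in the abstract: columns of D are candidate
# particle patches (plus trivial templates), y is the target template.
rng = np.random.default_rng(0)
d, n_particles = 64, 20
particles = rng.normal(size=(d, n_particles))
particles /= np.linalg.norm(particles, axis=0)          # unit-norm columns
template = particles[:, 3] + 0.05 * rng.normal(size=d)  # near particle 3
D = np.hstack([particles, np.eye(d)])                   # trivial templates
x = sparse_code_ista(D, template)
best = int(np.argmax(np.abs(x[:n_particles])))          # best-matching particle
```

Because a matching particle explains the template with one cheap coefficient while the trivial templates would need many, occlusion energy drains into the identity columns and the particle coefficients stay discriminative; a tracker would then score particles by reconstruction error rather than coefficient magnitude alone.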
Acknowledgments
This study was funded by the National Natural Science Foundation of China (Grant No. 91026005). We thank Dr. Zhang Shuang for contributing to the improvement of the paper.
Ethics declarations
Conflict of Interest
Gun Li, Zhong-yuan Liu, Hou-biao Li, and Peng Ren declare that they have no conflict of interest.
Informed Consent
All procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008. Additional informed consent was obtained from all patients for whom identifying information is included in this article.
Human and Animal Rights
This article does not contain any studies with human participants or animals performed by any of the authors.
Cite this article
Li, G., Liu, Zy., Li, Hb. et al. Target Tracking Based on Biological-Like Vision Identity via Improved Sparse Representation and Particle Filtering. Cogn Comput 8, 910–923 (2016). https://doi.org/10.1007/s12559-016-9410-z