Abstract
Object tracking is an important topic in computer vision, and many sophisticated appearance models have been proposed for it. Among them, trackers based on holistic appearance information provide a compact notion of the tracked object and are therefore robust to appearance variations under a small amount of noise. In practice, however, tracked objects are often corrupted by complex noise (e.g., partial occlusions, illumination variations), which makes the original appearance-based trackers less effective. This paper presents a correntropy-based robust holistic tracking algorithm to deal with various kinds of noise, and a half-quadratic algorithm is carefully employed to minimize the correntropy-based objective function. Based on the proposed information-theoretic algorithm, we design a simple and effective template update scheme for object tracking. Experimental results on publicly available videos demonstrate that the proposed tracker outperforms other popular tracking algorithms.
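The abstract pairs a correntropy (Welsch) loss with half-quadratic minimization. The paper's own formulation is not reproduced on this page, but the general technique can be sketched as iteratively reweighted least squares: the half-quadratic auxiliary variable of each residual becomes a per-pixel weight, so heavily corrupted pixels are smoothly ignored. The linear template model, function name, and parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def correntropy_fit(U, y, sigma=0.5, n_iters=30, tol=1e-8):
    """Fit coefficients c so that U @ c matches y under the correntropy
    (Welsch) loss sum_i (1 - exp(-(y_i - (U c)_i)^2 / (2 sigma^2))).

    Half-quadratic minimization reduces this to iteratively reweighted
    least squares with weights w_i = exp(-e_i^2 / (2 sigma^2)), so
    pixels with large residuals (e.g., occluded regions) contribute
    almost nothing to the fit.  Hypothetical sketch, not the paper's code.
    """
    n, k = U.shape
    c = np.linalg.lstsq(U, y, rcond=None)[0]      # ordinary LS start
    for _ in range(n_iters):
        e = y - U @ c
        w = np.exp(-e**2 / (2.0 * sigma**2))      # half-quadratic weights
        A = U.T @ (U * w[:, None])                # U^T diag(w) U
        b = U.T @ (w * y)                         # U^T diag(w) y
        c_new = np.linalg.solve(A + 1e-10 * np.eye(k), b)
        if np.linalg.norm(c_new - c) < tol:
            return c_new
        c = c_new
    return c
```

With a fraction of the pixels corrupted by a large additive error (a crude stand-in for partial occlusion), the weights on those entries decay toward zero and the clean solution is recovered, whereas ordinary least squares remains biased by the outliers.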
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Nos. 61702513, 61525306 and 61633021), the National Key Research and Development Program of China (No. 2016YFB1001000), the Capital Science and Technology Leading Talent Training Project (No. Z181100006318030), CAS-AIR, and the Shandong Provincial Key Research and Development Program (Major Scientific and Technological Innovation Project) (No. 2019JZZY010119).
Author information
Additional information
Recommended by Associate Editor Hui Yu
Wei-Ning Wang received the B.Eng. degree in automation from North China Electric Power University, China in 2015. She is currently a Ph.D. candidate at the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences (CASIA), China.
Her research interests include computer vision, pattern recognition and video analysis.
Qi Li received the B.Eng. degree in automation from the China University of Petroleum, China in 2011 and the Ph.D. degree in pattern recognition and intelligent systems from CASIA, China in 2016. He is currently an associate professor with the Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, China.
His research interests include face recognition, computer vision, and machine learning.
Liang Wang received the B.Eng. and M.Eng. degrees from Anhui University, China in 1997 and 2000, respectively, and the Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences (CASIA), China in 2004. From 2004 to 2010, he was a research assistant at Imperial College London, UK and Monash University, Australia, a research fellow at the University of Melbourne, Australia, and a lecturer at the University of Bath, UK. He is currently a full professor of the Hundred Talents Program at the National Lab of Pattern Recognition, CASIA, China, and is an IEEE Fellow and an IAPR Fellow.
His research interests include machine learning, pattern recognition, and computer vision.
About this article
Cite this article
Wang, WN., Li, Q. & Wang, L. Robust Object Tracking via Information Theoretic Measures. Int. J. Autom. Comput. 17, 652–666 (2020). https://doi.org/10.1007/s11633-020-1235-2