
Robust visual tracking based on scale invariance and deep learning

  • Research Article
  • Published in Frontiers of Computer Science

Abstract

Visual tracking is a popular research area in computer vision that is difficult to realize because of challenges such as changes in scale and illumination, rotation, fast motion, and occlusion. Consequently, research in this area focuses on making tracking algorithms adapt to these changes, so as to achieve stable and accurate visual tracking. This paper proposes a visual tracking algorithm that integrates the scale invariance of SURF features with deep learning to enhance tracking robustness when the size of the tracked object changes significantly. A particle filter is used for motion estimation, the confidence of each particle is computed via a deep neural network, and the result of the particle filter is verified and corrected by mean shift, owing to its computational efficiency and insensitivity to external interference. Both qualitative and quantitative evaluations on challenging benchmark sequences demonstrate that the proposed algorithm performs favorably against several state-of-the-art methods across the challenging factors in visual tracking, especially scale variation.
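The pipeline the abstract describes (particle-filter motion estimation weighted by a learned confidence score, then a mean-shift correction of the estimate) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the deep-network confidence is replaced by a hypothetical Gaussian score in position space, and the mean-shift density is built from synthetic samples rather than a colour-histogram back-projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence(particle, target):
    # Stand-in for the paper's deep-network confidence: a simple
    # Gaussian score on distance to the (here known) target position.
    d2 = np.sum((particle - target) ** 2)
    return np.exp(-d2 / (2 * 10.0 ** 2))

def particle_filter_step(particles, target, motion_std=5.0):
    # Propagate particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Weight each particle by its confidence and normalize.
    weights = np.array([confidence(p, target) for p in particles])
    weights /= weights.sum()
    # Weighted-mean state estimate, then resample by weight.
    estimate = weights @ particles
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate

def mean_shift_refine(estimate, target, bandwidth=8.0, iters=10):
    # Toy mean shift: iterate toward the mode of a sample density
    # centred on the target (stands in for the appearance-model
    # back-projection a real mean-shift tracker would use).
    samples = target + rng.normal(0.0, 2.0, (200, 2))
    x = estimate.copy()
    for _ in range(iters):
        w = np.exp(-np.sum((samples - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (w[:, None] * samples).sum(axis=0) / w.sum()
    return x

target = np.array([50.0, 50.0])
particles = rng.normal(40.0, 10.0, (300, 2))  # initial cloud, off-target
for _ in range(15):
    particles, est = particle_filter_step(particles, target)
refined = mean_shift_refine(est, target)
```

After a few filtering steps the particle cloud concentrates near the target, and the mean-shift pass pulls the weighted-mean estimate onto the density mode, which mirrors the verify-and-correct role it plays in the proposed algorithm.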



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 61320106006, 61532006, 61502042).

Author information

Corresponding author

Correspondence to Junping Du.

Additional information

Nan Ren is a graduate student in the School of Computer Science, Beijing University of Posts and Telecommunications, China. She received her bachelor’s degree in computer science and technology from the University of Science and Technology Beijing, China in 2014. Her research interests are mainly image processing and visual tracking.

Junping Du is a full professor and PhD supervisor in the School of Computer Science and Technology, Beijing University of Posts and Telecommunications, China. Her research interests include artificial intelligence, image processing, and pattern recognition.

Suguo Zhu received her MS degree in computer science and technology from Guangdong University of Technology, China in 2012. She is now a PhD candidate in computer science at Beijing University of Posts and Telecommunications, China. Her research interests include image processing, computer vision, and visual tracking.

Linghui Li received her BS degree in computer science and technology from Xi’an University of Posts and Telecommunications, China. She is now pursuing a master’s degree in computer science and technology at Beijing University of Posts and Telecommunications, China. Her research interests include image processing and computer vision.

Dan Fan received his BS degree in computer science and technology from Shandong University, China. He is currently pursuing a master’s degree. His research interests include image processing and computer vision.

JangMyung Lee received the BS and MS degrees in electronic engineering from Seoul National University, Korea in 1980 and 1982, respectively, and the PhD degree in computer engineering from the University of Southern California, Los Angeles, USA in 1990. Since 1992, he has been a professor with Pusan National University, Korea. He is a research group leader for Logistics and IT. His current research interests include intelligent robotic systems, ubiquitous port, and intelligent sensors. Prof. Lee is a former chairman of the Research Institute of Computer, Information, and Communication (RICIC). He is currently serving as the director of the Institute of Control Automation, and Systems Engineers (ICASE), Institute of Electronics Engineers, Korea (IEEK), Korea Robotics Society (KROS), and Society of Instrument and Control Engineers (SICE).

Electronic supplementary material


About this article

Cite this article

Ren, N., Du, J., Zhu, S. et al. Robust visual tracking based on scale invariance and deep learning. Front. Comput. Sci. 11, 230–242 (2017). https://doi.org/10.1007/s11704-016-6050-0
