
Collaborative model with adaptive selection scheme for visual tracking

  • Original Article
International Journal of Machine Learning and Cybernetics

Abstract

Visual tracking is a challenging task because it requires an effective appearance model that can handle the many factors affecting a target's appearance. In this paper, we propose a robust object tracking algorithm based on a collaborative model with an adaptive selection scheme. Specifically, using the discriminative features chosen by a feature selection scheme, we develop a sparse discriminative model (SDM) that incorporates a confidence measure strategy. In addition, we present a sparse generative model (SGM) that combines ℓ1 regularization with PCA reconstruction. In contrast to existing hybrid generative-discriminative tracking algorithms, we propose a novel adaptive selection scheme based on the Euclidean distance as the joint mechanism, which helps construct a more reasonable likelihood function for the collaborative model. Experimental results on several challenging image sequences demonstrate that the proposed tracking algorithm performs favorably against state-of-the-art methods.
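To make the abstract's pipeline concrete, the following is a minimal, illustrative sketch in Python (not the authors' implementation; every function name, parameter, and weighting choice here is an assumption) of how a sparse generative score based on ℓ1-regularized PCA reconstruction and a discriminative confidence might be blended through a Euclidean-distance-based selection weight:

```python
import numpy as np

def sgm_score(x, U, lam=0.1, n_iter=50):
    """Generative score (assumption): reconstruct candidate x with a PCA basis U
    plus an l1-regularized error term e, i.e. min_{z,e} ||x - U z - e||_2^2 + lam ||e||_1,
    solved by simple alternating updates with soft-thresholding (U assumed orthonormal)."""
    e = np.zeros_like(x, dtype=float)
    for _ in range(n_iter):
        z = U.T @ (x - e)                                      # coefficients given current error
        r = x - U @ z                                          # residual to be explained by e
        e = np.sign(r) * np.maximum(np.abs(r) - lam / 2, 0.0)  # soft-thresholding step
    return np.exp(-np.sum((x - U @ z - e) ** 2))               # higher = better reconstruction

def sdm_score(x, w, b, feat_idx):
    """Discriminative confidence (assumption): a linear classifier evaluated on the
    selected discriminative features, standing in for the paper's confidence measure."""
    return 1.0 / (1.0 + np.exp(-(w @ x[feat_idx] + b)))

def collaborative_likelihood(x, template, U, w, b, feat_idx, sigma=1.0):
    """Hypothetical adaptive selection: derive a weight from the Euclidean distance
    between the candidate and a template, then blend the two scores."""
    d = np.linalg.norm(x - template)
    alpha = np.exp(-d ** 2 / (2 * sigma ** 2))   # small distance -> lean on the generative model
    return alpha * sgm_score(x, U) + (1.0 - alpha) * sdm_score(x, w, b, feat_idx)
```

In a particle-filter setting, a likelihood of this form would be evaluated for each candidate sampled around the previous target state and the highest-scoring candidate taken as the tracking result; the paper's actual selection rule and score definitions may differ from this sketch.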




Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (61362030, 61201429), the Project Funded by China Postdoctoral Science Foundation (2015M581720, 2016M600360), the Project Funded by Jiangsu Postdoctoral Science Foundation (1601216C) and Technology Research Project of the Ministry of Public Security of China (2014JSYJB007).

Author information

Correspondence to Jun Kong.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (MP4 5176 KB)


About this article


Cite this article

Liu, T., Kong, J., Jiang, M. et al. Collaborative model with adaptive selection scheme for visual tracking. Int. J. Mach. Learn. & Cyber. 10, 215–228 (2019). https://doi.org/10.1007/s13042-017-0709-1

