
Coupled-layer based visual tracking via adaptive kernelized correlation filters

  • Original Article
  • Published in: The Visual Computer

Abstract

Part-based visual models are particularly useful when the target appearance undergoes partial occlusion or deformation. The existing reliable patch tracking (RPT) method has achieved good results by identifying and exploiting the patches that can be tracked reliably, yet it tends to fail in some challenging scenes because it completely ignores the holistic information of the target; in fact, the target’s holistic appearance provides more discriminative features than low-resolution local patches. Building on RPT and the kernelized correlation filter tracking method, in this paper we propose a coupled-layer visual-model-based tracker that combines the target’s global and local appearance in a coupled way. The global layer provides the holistic information and is treated as an approximation of the target. The local layer is composed of multiple small patches that are randomly initialized in the first frame. During tracking, the global tracker detects the target itself, and its detection result is used in the local layer to identify the reliable patches and to estimate the target position corresponding to each patch. The reliable patches are then used to estimate the target scale and to vote for the current target location. Finally, both the global and local models are updated with carefully designed update mechanisms. Experiments on 80 challenging benchmark sequences show that our tracker improves on the RPT tracker significantly in both overall and individual performance without an obvious speed cost, and that it outperforms the state-of-the-art trackers on the overall dataset as well as on eight individual datasets.
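
Below is a minimal Python sketch of the local-layer fusion step summarized in the abstract: selecting reliable patches, letting them vote for the target center with confidence weights, and estimating scale from pairwise patch distances. The per-patch positions and confidences are assumed to come from a correlation-filter tracker run on each patch; all function names, the confidence threshold, and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the local-layer fusion step: reliable-patch selection,
# confidence-weighted voting for the target center, and scale estimation
# from pairwise patch distances. Per-patch positions and confidences are
# assumed to be produced elsewhere (e.g., by per-patch correlation filters).
import numpy as np

def select_reliable(confidences, threshold=0.5):
    """Indices of patches whose tracking confidence exceeds the threshold."""
    confidences = np.asarray(confidences, dtype=float)
    return np.flatnonzero(confidences > threshold)

def vote_target_center(patch_positions, patch_offsets, weights):
    """Each reliable patch votes for the target center as its current
    position minus its stored offset from the center; votes are combined
    by a confidence-weighted average."""
    votes = np.asarray(patch_positions) - np.asarray(patch_offsets)
    w = np.asarray(weights, dtype=float)
    return (votes * w[:, None]).sum(axis=0) / w.sum()

def estimate_scale(patch_positions, initial_positions):
    """Relative scale as the median ratio of pairwise distances between
    reliable patches in the current frame and in the first frame."""
    cur = np.asarray(patch_positions, dtype=float)
    ini = np.asarray(initial_positions, dtype=float)
    ratios = []
    for i in range(len(cur)):
        for j in range(i + 1, len(cur)):
            d0 = np.linalg.norm(ini[i] - ini[j])
            if d0 > 1e-6:
                ratios.append(np.linalg.norm(cur[i] - cur[j]) / d0)
    return float(np.median(ratios)) if ratios else 1.0

if __name__ == "__main__":
    # Toy example: three patches around a target that moved by (5, 3) and
    # grew by 10%; the confidences mark the third patch as unreliable.
    init_pos = np.array([[10.0, 10.0], [30.0, 10.0], [20.0, 30.0]])
    offsets = init_pos - init_pos.mean(axis=0)   # offsets from the center
    cur_pos = init_pos * 1.1 + np.array([5.0, 3.0])
    conf = np.array([0.9, 0.8, 0.2])

    reliable = select_reliable(conf)
    center = vote_target_center(cur_pos[reliable], offsets[reliable],
                                conf[reliable])
    scale = estimate_scale(cur_pos[reliable], init_pos[reliable])
    print("estimated center:", center, "estimated scale:", scale)
```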

Acknowledgments

This work is supported by the Preliminary Research Foundation of National Defence Science and Technology under Grant 90406150007, and the Fundamental Research Funds for the Central Universities under Grant NSIY191414.

Author information

Corresponding author

Correspondence to Guixi Liu.

About this article

Cite this article

Zhang, H., Liu, G. Coupled-layer based visual tracking via adaptive kernelized correlation filters. Vis Comput 34, 41–54 (2018). https://doi.org/10.1007/s00371-016-1310-4
