Discriminative Context-Aware Correlation Filter Network for Visual Tracking

  • Conference paper

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 1250))

Abstract

In recent years, discriminative correlation filter (DCF) based trackers using convolutional features have received great attention due to their accuracy in online object tracking. However, the convolutional features of these DCF trackers are mostly extracted from networks trained for other vision tasks, such as object detection, which may limit tracking performance. Moreover, under the challenges of fast motion and motion blur, tracking performance usually degrades due to the lack of context information. In this paper, we present an end-to-end trainable discriminative context-aware correlation filter network, DCACFNet, which integrates the context-aware correlation filter (CACF) into a fully-convolutional Siamese network. First, the CACF is modeled as a differentiable layer in the DCACFNet architecture, so that the localization error can be back-propagated to the convolutional layers. Then, a novel channel attention module is embedded into the DCACFNet architecture to improve the target adaptation of the whole network. Finally, a novel high-confidence update strategy is proposed to avoid model corruption under the challenges of occlusion and out-of-view. Extensive experimental evaluations on two tracking benchmarks, OTB-2013 and OTB-2015, demonstrate that the proposed DCACFNet achieves competitive tracking performance.
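The channel attention module mentioned in the abstract reweights feature channels to emphasize those most discriminative for the current target. As a rough illustration of that idea (not the paper's exact design), the following is a minimal NumPy sketch of a squeeze-and-excitation-style gate; the function name, weight shapes, and reduction ratio are illustrative assumptions:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """SE-style channel attention sketch: squeeze each channel to a scalar
    by global average pooling, pass through a two-layer bottleneck, and
    rescale the feature map channel-wise with the resulting sigmoid gates."""
    z = features.mean(axis=(1, 2))            # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)               # excitation with ReLU: (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # sigmoid gates in (0, 1): (C,)
    return features * s[:, None, None]        # channel-wise reweighting

# Toy example: 4 channels, 8x8 spatial map, reduction ratio r = 2.
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4)) * 0.1        # bottleneck down-projection
w2 = rng.standard_normal((4, 2)) * 0.1        # bottleneck up-projection
out = channel_attention(feat, w1, w2)
```

Because every gate lies in (0, 1), the module can only attenuate channels, never amplify them; during end-to-end training the bottleneck weights learn which channels to suppress for the tracked target.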



Acknowledgments

This research is partially supported by the National Natural Science Foundation of China (Nos. 61876018 and 61976017).

Author information


Corresponding author

Correspondence to Weibin Liu.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Wang, X., Liu, W., Xing, W. (2021). Discriminative Context-Aware Correlation Filter Network for Visual Tracking. In: Arai, K., Kapoor, S., Bhatia, R. (eds) Intelligent Systems and Applications. IntelliSys 2020. Advances in Intelligent Systems and Computing, vol 1250. Springer, Cham. https://doi.org/10.1007/978-3-030-55180-3_55
