Visual node prediction for visual tracking

Regular Paper · Published in Multimedia Systems

Abstract

A novel visual tracking algorithm based on visual node (VN) prediction is proposed in this paper. First, we compute the distribution area and the gray levels of highest probability density within each VN. Then, the frequencies of all VNs are calculated; nodes with weak frequency gradients are filtered out, while those with strong frequency gradients are retained. Finally, the optimal object position is estimated by maximizing the likelihood of node clusters formed from the VNs. Extensive experiments show that the proposed approach adapts well to tracking targets with variable structure and outperforms state-of-the-art trackers.
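To make the three steps above concrete, the following is a minimal Python sketch of a VN-style tracking loop. It is an illustration only, not the authors' implementation: it assumes VNs are small gray-scale patches, approximates the "frequency gradient" by the total variation of each node's gray-level histogram, and uses a Bhattacharyya coefficient as the node-cluster likelihood. All function names (`node_histogram`, `strong_nodes`, `track`) and parameters are hypothetical.

```python
import numpy as np

def node_histogram(patch, bins=16):
    # Normalized gray-level histogram of one visual node (VN), used here
    # as a stand-in for the node's probability density over gray levels.
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def strong_nodes(template, offsets, size=8, grad_thresh=0.05, bins=16):
    # Filtering step (assumption): approximate each VN's "frequency
    # gradient" by the total variation of its histogram and keep only
    # nodes whose gradient exceeds grad_thresh; weak nodes are discarded.
    kept = []
    for (dy, dx) in offsets:
        patch = template[dy:dy + size, dx:dx + size]
        h = node_histogram(patch, bins)
        if np.abs(np.diff(h)).sum() > grad_thresh:
            kept.append(((dy, dx), h))
    return kept

def node_likelihood(h_a, h_b):
    # Bhattacharyya coefficient between two node histograms (assumption:
    # the paper defines its own likelihood; this is a common substitute).
    return float(np.sum(np.sqrt(h_a * h_b)))

def track(frame, template_nodes, candidates, size=8):
    # Prediction step: score each candidate object position by the joint
    # log-likelihood of its node cluster and return the maximizer.
    best_pos, best_score = None, -np.inf
    for (py, px) in candidates:
        score = 0.0
        for (dy, dx), h_tpl in template_nodes:
            patch = frame[py + dy:py + dy + size, px + dx:px + dx + size]
            if patch.shape != (size, size):
                continue  # node falls partially outside the frame
            score += np.log(node_likelihood(h_tpl, node_histogram(patch)) + 1e-8)
        if score > best_score:
            best_pos, best_score = (py, px), score
    return best_pos
```

A typical use would build `template_nodes` from the first-frame object region (with `offsets` on a regular grid over that region) and call `track` on each subsequent frame with candidate positions sampled around the previous estimate. The paper defines the VN frequencies and cluster likelihood analytically; consult the full text for the exact formulation.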



Acknowledgements

Heng Yuan, Wen-Tao Jiang, and Wan-Jun Liu are supported by the National Natural Science Foundation of China under Grant 61601213, the Natural Science Foundation of Liaoning Province under Grant 20170540426, and Liaoning Province Education Department projects under Grants LJ2017QL034 and LJYL049. Sheng-Chong Zhang is supported by the China People's Liberation Army weapons and equipment fund under Grant 61421070101162107002.

Author information

Corresponding author

Correspondence to Wen-Tao Jiang.

Additional information

Communicated by B. Huet.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (WMV 5349 KB)

Supplementary material 2 (WMV 2521 KB)

Supplementary material 3 (WMV 2942 KB)

Supplementary material 4 (WMV 5700 KB)

Supplementary material 5 (WMV 1200 KB)

Supplementary material 6 (WMV 6279 KB)


About this article

Cite this article

Yuan, H., Jiang, WT., Liu, WJ. et al. Visual node prediction for visual tracking. Multimedia Systems 25, 263–272 (2019). https://doi.org/10.1007/s00530-019-00603-1

