
Online convolution network tracking via spatio-temporal context

Published in Multimedia Tools and Applications

Abstract

The features abstracted by a convolutional neural network typically lack spatio-temporal information. To address this, an online visual tracking algorithm based on a convolutional neural network is proposed, which incorporates a spatio-temporal context model into the network's convolution filters. First, the initial target is preprocessed and the target spatial model is extracted, and the spatio-temporal context model is obtained from the spatio-temporal information. In the first layer, the input is convolved with the spatio-temporal context model to produce simple-layer features. In the second layer, a set of convolution filters derived from the spatio-temporal context model is convolved with the simple-layer features to extract abstract target features, and a deep representation of the target is then obtained by stacking the convolution results of the simple layer. Finally, tracking is realized by a sparse update method within a particle-filter tracking framework. Experiments show that the deep abstract features extracted by the online convolutional network combined with the spatio-temporal context model preserve spatio-temporal information, and improve both robustness to background clutter, illumination variation, low resolution, occlusion and scale variation, and tracking efficiency under complex backgrounds.
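The two-layer feature extraction described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the confidence-map form of the context model (a decaying exponential around the target center) and all parameter names (`alpha`, `beta`, `b`) are assumptions borrowed from the general spatio-temporal context tracking literature, and the second-layer filter bank is simply taken as given.

```python
import numpy as np
from scipy.signal import convolve2d

def stc_confidence(shape, center, alpha=2.25, beta=1.0, b=1.0):
    """Hypothetical spatio-temporal context confidence map of the form
    c(x) = b * exp(-(|x - x*| / alpha)^beta), peaked at the target center."""
    ys, xs = np.indices(shape)
    dist = np.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    return b * np.exp(-((dist / alpha) ** beta))

def extract_deep_features(frame, stc_filter, filter_bank):
    """Layer 1: convolve the input with the spatio-temporal context model
    to get simple-layer features. Layer 2: convolve those simple features
    with a bank of filters and stack the responses into a deep feature."""
    simple = convolve2d(frame, stc_filter, mode="same")   # simple-layer feature
    deep = np.stack([convolve2d(simple, f, mode="same")   # abstract features
                     for f in filter_bank])
    return deep

# Toy usage on a random frame with a 4-filter bank.
frame = np.random.rand(32, 32)
stc = stc_confidence((5, 5), (2, 2))
bank = [np.random.rand(3, 3) for _ in range(4)]
feats = extract_deep_features(frame, stc, bank)
# feats.shape == (4, 32, 32)
```

The stacked responses play the role of the "deep expression of the target" that the particle-filter framework would then score candidate regions against.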



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61605048) and the Fujian Provincial Natural Science Foundation (Grant No. 2016J01300). The authors would like to thank the reviewers for their valuable suggestions and comments.

Author information


Corresponding author

Correspondence to Yongzhao Du.


About this article


Cite this article

Wang, H., Liu, P., Du, Y. et al. Online convolution network tracking via spatio-temporal context. Multimed Tools Appl 78, 257–270 (2019). https://doi.org/10.1007/s11042-017-5533-9

