
Spatio-Context-Based Target Tracking with Adaptive Multi-Feature Fusion for Real-World Hazy Scenes

Published in: Cognitive Computation

Abstract

Severe air pollution such as haze and fog, together with other complex scenes, poses great challenges for target tracking in computer vision. Single-feature approaches exhibit unstable performance and therefore track poorly. Drawing on multi-feature fusion and the focus-of-attention (FOA) mechanism of the biological vision system, these problems in complex scenes can be addressed. Accordingly, a tracking algorithm based on multi-feature fusion and spatio-temporal context correlation is proposed. Fusing color, texture, and edge features, with the fusion weights adaptively updated by information entropy, greatly enhances adaptability to environmental variations. Combined with the spatio-temporal context algorithm, the target can then be located accurately. Compared with state-of-the-art tracking algorithms, the experimental results validate the effectiveness of our method in hazy scenes; moreover, our method also improves image visual quality. Specifically, the average center error is reduced to 0.9440 pixels, and the average overlap rate and FPS rise to 0.8700 and 4.4302, respectively. Additionally, the halo artifacts and color cast that popular dehazing algorithms introduce into restored images are avoided. The proposed tracking algorithm outperforms state-of-the-art methods in accuracy, robustness, and real-time performance, even in complex real-world hazy scenes.
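The entropy-driven adaptive weighting mentioned in the abstract can be illustrated with a minimal sketch. Everything below — the example histograms, the inverse-entropy weighting rule, and the function names — is an illustrative assumption for exposition, not the paper's actual formulation: the idea is simply that a feature channel whose response histogram is more peaked (lower Shannon entropy, hence more discriminative) receives a larger fusion weight.

```python
import numpy as np

def feature_entropy(hist):
    """Shannon entropy (bits) of a feature histogram; normalized internally."""
    p = hist / (hist.sum() + 1e-12)
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def adaptive_weights(histograms):
    """Assign larger weights to lower-entropy (more discriminative) features.

    Uses a simple inverse-entropy score, normalized to sum to 1.
    """
    entropies = np.array([feature_entropy(h) for h in histograms])
    scores = 1.0 / (entropies + 1e-12)
    return scores / scores.sum()

# Hypothetical response histograms of three feature channels for one patch:
color   = np.array([0.70, 0.20, 0.10, 0.00])   # peaked  -> low entropy
texture = np.array([0.25, 0.25, 0.25, 0.25])   # uniform -> high entropy
edge    = np.array([0.50, 0.30, 0.15, 0.05])

w = adaptive_weights([color, texture, edge])
# The peaked color histogram gets the largest weight, the flat texture
# histogram the smallest; a fused score would then be w @ [s_color, s_tex, s_edge].
```

In a tracker, these weights would be recomputed each frame so that whichever cue is currently most discriminative dominates the fused confidence map.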




Funding

This study was funded by the National Natural Science Foundation of China (Grant No. 91026005) and the Fundamental Research Funds for the Central Universities (Grant Nos. ZYGX2016J131, ZYGX2016J138).

Author information

Correspondence to Gun Li or Hou-biao Li.

Ethics declarations

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.


About this article


Cite this article

Li, G., Wang, Zy., Luo, J. et al. Spatio-Context-Based Target Tracking with Adaptive Multi-Feature Fusion for Real-World Hazy Scenes. Cogn Comput 10, 545–557 (2018). https://doi.org/10.1007/s12559-018-9550-4

