
A compressive tracking based on time-space Kalman fusion model


  • Research Paper
  • Published in: Science China Information Sciences

Abstract

The compressive tracking (CT) method is a simple yet efficient algorithm that compresses high-dimensional features into a low-dimensional space while preserving most of the salient information. This paper proposes a compressive time-space Kalman fusion tracking algorithm that extends the CT method to multi-sensor fusion tracking. Existing fusion trackers handle multi-sensor features individually and lack time-space adaptability; moreover, the significant information accumulated during the updating process has not been fully exploited, which makes temporal information extraction necessary. Unlike previous algorithms, the proposed fusion model operates in both the space and time domains. In addition, an extended Kalman filter is introduced to formulate an updating method that optimizes the fusion coefficients. Several experimental results demonstrate the accuracy and robustness of the proposed fusion tracking algorithm.
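For context, the feature compression that CT-style trackers rely on is typically a sparse random projection in the Achlioptas style, whose entries are drawn from {+√s, 0, −√s}. The sketch below is illustrative only, assuming generic dimensions; it is not the paper's actual construction, and all names (`sparse_projection_matrix`, `compress`) are hypothetical.

```python
import numpy as np

def sparse_projection_matrix(n_low, n_high, s=3, seed=0):
    """Achlioptas-style sparse random matrix: each entry is
    +sqrt(s) with prob 1/(2s), -sqrt(s) with prob 1/(2s),
    and 0 otherwise, so most entries are zero when s >= 3."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_low, n_high))
    R = np.zeros((n_low, n_high))
    R[u < 1 / (2 * s)] = np.sqrt(s)
    R[u > 1 - 1 / (2 * s)] = -np.sqrt(s)
    return R

def compress(R, x):
    """Project a high-dimensional feature vector x into the
    low-dimensional space spanned by the rows of R."""
    return R @ x

# Compress a 10000-dimensional feature vector to 50 dimensions.
R = sparse_projection_matrix(50, 10_000)
x = np.random.default_rng(1).random(10_000)
v = compress(R, x)
```

By the Johnson-Lindenstrauss lemma, such a projection approximately preserves pairwise distances, which is why most of the salient information survives the compression.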

Highlights

The proposed compressive tracking method based on a time-space Kalman fusion model extends the compressive tracking algorithm to the multi-sensor fusion tracking problem. Existing fusion tracking algorithms ignore the time-space applicability of multi-sensor feature fusion. Unlike existing methods, the proposed fusion model operates in both the time and space domains. In addition, an extended Kalman filter is introduced to provide an optimized update method for the fusion coefficients. Extensive experimental results demonstrate the accuracy and robustness of the proposed fusion tracking method.
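To illustrate the idea of Kalman-filtered coefficient updating (not the paper's actual filter, whose state and measurement models are not given in this excerpt), a minimal scalar sketch: a fusion weight is modeled as a random walk and refined from noisy per-frame estimates. All names and noise values (`Q`, `R_meas`) are assumptions for illustration.

```python
def kalman_update(w, P, z, Q=1e-3, R_meas=1e-2):
    """One predict/update step of a scalar Kalman filter tracking a
    slowly varying fusion weight w from a noisy measurement z."""
    # Predict: random-walk model, so w is unchanged and its
    # uncertainty P grows by the process noise Q.
    P = P + Q
    # Update: the Kalman gain K blends prediction and measurement.
    K = P / (P + R_meas)
    w = w + K * (z - w)
    P = (1 - K) * P
    return w, P

# Refine an initial weight guess from a few noisy estimates.
w, P = 0.5, 1.0
for z in [0.7, 0.68, 0.72, 0.69]:
    w, P = kalman_update(w, P, z)
```

The gain shrinks as the posterior variance P falls, so later measurements perturb the weight less; this is the usual mechanism by which a Kalman-style update exploits information accumulated over time.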



Author information


Corresponding author

Correspondence to Zhongliang Jing.


About this article


Cite this article

Yun, X., Jing, Z., Xiao, G. et al. A compressive tracking based on time-space Kalman fusion model. Sci. China Inf. Sci. 59, 1–15 (2016). https://doi.org/10.1007/s11432-015-5356-0

