Multi-attribute Based Fire Detection in Diverse Surveillance Videos

  • Conference paper
  • First Online:
MultiMedia Modeling (MMM 2017)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10132)

Abstract

Fire detection, as an immediate response to fire accidents that can avert great disasters, has attracted extensive research attention. However, existing methods do not effectively exploit the complementary attributes of fire and therefore fail to achieve satisfactory accuracy. In this paper, we design a multi-attribute based fire detection system that combines fire's color, geometric, and motion attributes to accurately detect fire in complicated surveillance videos. For the geometric attribute, we propose a shape-variation descriptor that combines contour moments with line detection. Furthermore, to exploit fire's instantaneous motion characteristics, we design a dense-optical-flow-based descriptor as the motion attribute. Finally, we build a fire detection video dataset as a benchmark, which contains 305 fire and non-fire videos, including 135 very challenging negative samples. Experimental results on this benchmark demonstrate that the proposed approach greatly outperforms the state-of-the-art method, achieving 92.30% accuracy with only 8.33% false positives.
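
To make the three attributes concrete, the sketch below shows one way such per-frame descriptors could be extracted with Python and OpenCV. It is an illustrative reconstruction, not the authors' implementation: the HSV color rule, all threshold values, and the use of Farnebäck dense optical flow are assumptions standing in for the paper's specific color model, its contour-moment/line-detection shape descriptor, and the optical flow estimator it builds on.

```python
import cv2
import numpy as np

# Hypothetical parameter values; the paper does not state its exact settings.
HUE_FIRE_MAX = 50        # reddish-yellow hues on OpenCV's 0-179 hue scale
SAT_MIN, VAL_MIN = 60, 150

def color_attribute(frame_bgr):
    """Rough fire-colored region mask from a generic HSV rule (assumed, not the paper's model)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    mask = ((h <= HUE_FIRE_MAX) & (s >= SAT_MIN) & (v >= VAL_MIN))
    return mask.astype(np.uint8) * 255

def geometric_attribute(mask):
    """Shape-variation cues: contour (Hu) moments plus straight-line evidence."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    hu = cv2.HuMoments(cv2.moments(largest)).flatten()   # contour-moment part
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    n_lines = 0 if lines is None else len(lines)          # flames yield few long straight lines
    return np.append(hu, n_lines)

def motion_attribute(prev_gray, curr_gray, mask):
    """Dense optical-flow statistics inside the candidate fire region."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    region = mask > 0
    if not region.any():
        return np.zeros(4)
    # Mean/std of magnitude and orientation capture the flickering,
    # disordered motion typical of flames.
    return np.array([mag[region].mean(), mag[region].std(),
                     ang[region].mean(), ang[region].std()])
```

In the actual system the three descriptors are fused to decide whether a candidate region is fire; the fusion and classification stage is omitted from this sketch.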

Acknowledgment

This work is supported by the Funds for Creative Research Groups of China (No. 61421061), the Beijing Training Project for the Leading Talents in S&T (ljrc 201502), the National Natural Science Foundation of China (No. 61602049, 61402048), the CCF-Tencent Open Research Fund (No. AGR20160113).

Author information

Corresponding author

Correspondence to Huadong Ma.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Li, S., Liu, W., Ma, H., Fu, H. (2017). Multi-attribute Based Fire Detection in Diverse Surveillance Videos. In: Amsaleg, L., Guðmundsson, G., Gurrin, C., Jónsson, B., Satoh, S. (eds) MultiMedia Modeling. MMM 2017. Lecture Notes in Computer Science, vol. 10132. Springer, Cham. https://doi.org/10.1007/978-3-319-51811-4_20

  • DOI: https://doi.org/10.1007/978-3-319-51811-4_20

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-51810-7

  • Online ISBN: 978-3-319-51811-4

  • eBook Packages: Computer Science, Computer Science (R0)
