Abstract
Fire detection is a vital task for social, economic, and environmental reasons. Early identification of fire outbreaks is crucial to limit the damage they cause. In open areas, this task is typically performed by humans, e.g., security guards, who are responsible for watching for possible occurrences. However, people may get distracted or may lack sufficient eyesight, which can delay the identification of a fire until considerable damage has already occurred. Thus, having machines automatically detect fires has long been considered an attractive possibility. Over the years, different approaches to fire detection have been developed using computer vision. Currently, the most promising ones are based on convolutional neural networks (CNNs). However, smoke and fire, the main visual indicators of wildfires, pose additional difficulties for most such learning systems: both have high intra-class variance, assuming different shapes, colors, and textures, which makes the learning process more complicated than for well-defined objects. This work proposes an automatic fire detection method based on both spatial (visual) and temporal patterns. This hybrid method works in two stages: (i) detection of probable fire events by a CNN based on visual patterns (spatial processing) and (ii) analysis of the dynamics of these events over time (temporal processing). Experiments performed on our surveillance video database show that cascading these two stages reduces the false positive rate with no significant impact on either the true positive rate or the processing time.
Data availability
The datasets that support the findings of this study are available in the GitHub repository: https://github.com/gaiasd/DFireDataset.
Notes
This is a circular buffer initially filled with False values.
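Such a buffer can be sketched with Python's `collections.deque`, which discards the oldest entry automatically once full. The window size and decision threshold below are arbitrary placeholders for illustration, not the values used in this work:

```python
from collections import deque

WINDOW = 30      # number of recent frames tracked (placeholder value)
THRESHOLD = 20   # detections required within the window (placeholder value)

# Circular buffer, initially filled with False values: no fire seen yet.
buffer = deque([False] * WINDOW, maxlen=WINDOW)

def update(fire_detected: bool) -> bool:
    """Push the latest per-frame detection verdict and report whether
    the event has persisted long enough to be confirmed."""
    buffer.append(fire_detected)  # the oldest entry is dropped automatically
    return sum(buffer) >= THRESHOLD
```

With these placeholder values, isolated single-frame detections never trigger an alarm; only an event persisting across most of the window does.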
All discussions about computational cost in this work are based on the YOLO networks designed for fire and smoke detection (\(C=2\) classes). However, we note that the computational cost increases as C increases.
The forward propagation of a 640×640 RGB image through a YOLOv4 network, not shown in Table 2, requires 141.00 BFLOPs.
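The dependence of the cost on C enters through the detection heads: in a standard YOLO head, each of the 3 anchor boxes per grid cell predicts 4 box coordinates, 1 objectness score, and one score per class, so the head's output channels grow linearly with C. A minimal sketch of this generic YOLO layout (not the authors' exact configuration):

```python
def head_channels(num_classes: int, anchors_per_scale: int = 3) -> int:
    """Output channels of a YOLO detection head: each anchor predicts
    4 box coordinates + 1 objectness score + one score per class."""
    return anchors_per_scale * (5 + num_classes)

# Fire/smoke detector (C = 2) versus a COCO-style detector (C = 80):
print(head_channels(2))   # 21
print(head_channels(80))  # 255
```

This is why a 2-class fire/smoke network is slightly cheaper than the same backbone configured for a general-purpose benchmark, although the heads account for only a small fraction of the total FLOPs.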
Acknowledgements
Financial support for this work was provided by CEMIG-ANEEL (R&D project D0619), by the National Council for Scientific and Technological Development (CNPq, Brazil) to Adriano Chaves Lisboa (Grant 304506/2020-6), by the Foundation for Research of the State of Minas Gerais (FAPEMIG, Brazil) to Adriano Vilela Barbosa (Grant APQ-03701-16), and by the Coordination for the Improvement of Higher Education Personnel (CAPES, Brazil).
Author information
Contributions
PVABdV: methodology, formal analysis, software, writing—original draft, visualization. RJC: writing—original draft, visualization. TMR: writing—original draft, visualization. ACL: formal analysis, writing—review and editing, validation, supervision. AVB: writing—review and editing, supervision.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Cite this article
de Venâncio, P.V.A.B., Campos, R.J., Rezende, T.M. et al. A hybrid method for fire detection based on spatial and temporal patterns. Neural Comput & Applic 35, 9349–9361 (2023). https://doi.org/10.1007/s00521-023-08260-2