Expert Systems with Applications

Volume 130, 15 September 2019, Pages 60-78

Fire detection for video surveillance applications using ICA K-medoids-based color model and efficient spatio-temporal visual features

https://doi.org/10.1016/j.eswa.2019.04.019

Highlights

  • A computer vision-based fire detection method is presented.

  • A robust color model is developed to reliably detect all candidate fire regions.

  • A motion-intensity-aware motion detection technique is used to analyze the motion.

  • Spatio-temporal features are used to distinguish real fire regions from non-fire ones.

  • Detection performance is higher than that of competing state-of-the-art methods.

Abstract

Automated detection of fire flames in videos shot from a surveillance camera is an active research topic, as fire detection must be accurate and fast. The present study proposes and evaluates an efficient fire detection method. The contributions of this method are threefold: (1) a robust ICA (Imperialist Competitive Algorithm) K-medoids-based color model is first developed to reliably detect all candidate fire regions in a scene; (2) a motion-intensity-aware motion detection technique is introduced to simply extract the regions containing movement together with the motion intensity rate of every moving pixel, which are then used to analyze the characteristics of the fire; (3) a set of new spatio-temporal features capturing the distinct characteristics of fire flames is extracted from the candidate fire regions and fed into a support vector machine classifier in order to distinguish real fire regions from non-fire ones. The experimental results for a set of benchmark fire video datasets and videos provided in this research confirm that the proposed method outperforms state-of-the-art fire detection approaches, providing high detection accuracy and a low false detection rate.

Introduction

Fire is a destructive and distressing natural or man-made disaster. Implementing fast and accurate fire detection systems is crucial to minimizing casualties and environmental and property damage. Various types of fire detection methods have been developed that use temperature, smoke, or photosensitive sensors (Liyang, Neng, & Xiaoqiao, 2005); however, many such methods require the sensor to be close to the source of the fire. This means that they may not work properly in outdoor environments or for large spaces. In addition, the conventional methods are not capable of providing supplemental information about the fire status and burning process (Luo and Su, 2007; Podržaj and Hashimoto, 2008). The development of fire detection methods using video surveillance systems is a suitable way of coping with such weaknesses. Larger spaces can be monitored using a single surveillance camera, and the fire can be detected more quickly using advanced image and video processing techniques.

Although explicit categorization of computer vision-based fire detection methods is not easy, the two main categories are color-based and motion-based methods. Color-based methods exploit the distinct color characteristics of a fire, such as the flame color lying in the red-yellow range (Borges & Izquierdo, 2010). A common shortcoming of these methods is that they are very sensitive to changes in illumination and to the presence of fire-like objects in the scene, which causes a high number of false detections due to the different tonalities of the red and yellow colors.
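
For illustration only, a minimal sketch of such a color rule is given below; the R > G > B heuristic and the red threshold are generic assumptions taken from the color-based literature, not the color model proposed in this paper.

```python
import numpy as np

def candidate_fire_mask(frame_bgr, red_threshold=150):
    """Flag pixels whose color lies roughly in the red-yellow range.

    Uses the common heuristic R > G > B with a sufficiently large red
    component. The threshold value is an illustrative assumption, not a
    value taken from the paper.
    """
    b = frame_bgr[..., 0].astype(int)
    g = frame_bgr[..., 1].astype(int)
    r = frame_bgr[..., 2].astype(int)
    return (r > red_threshold) & (r > g) & (g > b)
```

Fixed rules of this kind are precisely what makes purely color-based methods fragile under illumination changes, which motivates the clustering-based color model developed in this paper.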

The motion-based approach is based on the idea that the chaotic movement of a fire flame can help to better distinguish it from other moving objects in the scene (Foggia, Saggese, & Vento, 2015). Accordingly, spatio-temporal motion-based features such as motion orientation (Ko, Ham, & Nam, 2011), dynamic texture (Dimitropoulos, Barmpoutis, & Grammalidis, 2015) and optical flow (Mueller, Karasev, Kolesov, & Tannenbaum, 2013) have been used to detect flames. However, methods using only motion information also have limitations in some situations, especially when there are other moving objects in the scene.
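
As a point of reference, the sketch below extracts moving pixels by simple frame differencing; it is not the motion-intensity-aware technique proposed later in this paper, and the difference threshold is an assumed value.

```python
import cv2

def moving_pixel_mask(prev_gray, curr_gray, diff_threshold=20):
    """Crude motion mask from two consecutive grayscale frames.

    Pixels whose intensity change exceeds the (assumed) threshold are
    marked as moving; practical detectors refine this with background
    modeling, optical flow, or temporal filtering.
    """
    diff = cv2.absdiff(curr_gray, prev_gray)
    return diff > diff_threshold
```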

In order to enhance the performance of fire detection, most approaches have tried to combine both the color and motion features of fire (Foggia et al., 2015, Habiboğlu et al., 2012, Han et al., 2017, Kong et al., 2016, Xuan Truong and Kim, 2012). Usually, these methods first identify the candidate fire regions using color features; then, the motion-based features extracted from the candidate regions are evaluated to classify them into real fire regions and non-real ones.

Although some methods using both the color and motion characteristics of fire have achieved considerable success, each of them lacks sufficient robustness and has limited applications. These methods have two main limitations. The first is that the simple color thresholding techniques or color segmentation methods traditionally used to detect the candidate fire regions are not as robust as they should be; thus, a large number of false positives or false negatives is likely to propagate into the subsequent processes, which reduces the performance of the classification task in the second stage.

The second limitation is that the use of large numbers of uninformative and irregular motion features in most of the existing approaches leads to high-dimensional feature vectors for the classification algorithms, which is an important problem in machine learning. As shown in Vapnik (1999), using a high-dimensional feature vector significantly increases the amount of data required to train the classifiers in order to achieve the desired results and avoid overfitting.

In recent years, some fire flame detection methods based on convolutional neural networks (CNNs) have been proposed (Dunnings and Breckon, 2018; Muhammad et al., 2018). However, deep learning-based methods generally require more computation time and memory, restricting their implementation on conventional hardware such as FPGAs (field-programmable gate arrays) in real-world surveillance networks.

In this study, an efficient fire detection approach comprising four stages is proposed. In the first stage, a robust color-based flame detection method using the Imperialist Competitive Algorithm (ICA) (Atashpaz-Gargari & Lucas, 2007) and the K-medoids clustering method is applied in order to identify candidate fire regions as accurately as possible. The ICA algorithm is a successful computational method that is used to solve optimization problems of different types. The overall algorithm in this stage is an improved version of the recent color-based fire detection method proposed by Khatami, Mirghasemi, Khosravi, Lim, & Nahavandi (2017). It can detect all potential fire regions with a reasonable number of false positives. In the second stage, a motion detection technique is developed which is able to simply extract the movement-containing regions and additional information about the motion intensity of moving pixels. This motion intensity information allows extraction of useful features in subsequent stages.
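
To make the clustering idea concrete, the sketch below runs a plain K-medoids over sampled pixel colors; the paper additionally tunes this clustering with the ICA optimizer, which is omitted here, and the cluster count, iteration limit, and need for subsampling are assumptions of this sketch.

```python
import numpy as np

def k_medoids(colors, k=3, iters=20, seed=0):
    """Plain K-medoids (PAM-style) over pixel color vectors.

    `colors` is an (n, 3) array of sampled pixel colors; subsample the
    frame first, since the medoid update below is O(n^2) per cluster.
    Returns the medoid colors and the cluster label of each sample.
    """
    rng = np.random.default_rng(seed)
    medoids = colors[rng.choice(len(colors), k, replace=False)]
    labels = np.zeros(len(colors), dtype=int)
    for _ in range(iters):
        # Assign each sample to its nearest medoid.
        dists = np.linalg.norm(colors[:, None, :] - medoids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each medoid to the member minimizing total intra-cluster distance.
        for c in range(k):
            members = colors[labels == c]
            if len(members) == 0:
                continue
            within = np.linalg.norm(
                members[:, None, :] - members[None, :, :], axis=2
            ).sum(axis=1)
            medoids[c] = members[within.argmin()]
    return medoids, labels
```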

In the third stage, the unique features of the fire are considered and a set of features are extracted from the spatio-temporal characteristics of fire regions. These features are complementary in their nature as they are able to provide useful information for analysis of the problem based on the different characteristics of fire (color, movement and shape variation). Among the extracted features, four features are introduced for the first time in this paper. In the final stage, fire and non-fire regions are classified using the support vector machine (SVM) algorithm. The experimental results for a wide set of benchmark fire video datasets and video clips provided in this research show that the proposed method outperforms state-of-the-art fire detection approaches in terms of accuracy, providing high reliability and a low false detection rate in different environments.
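
As a minimal sketch of this final classification step, assuming the per-region spatio-temporal feature vectors have already been computed, an RBF-kernel SVM can be trained as follows; the kernel choice and hyperparameters are generic scikit-learn defaults, not the settings used in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_region_classifier(features, labels):
    """Fit an SVM that separates fire regions from non-fire ones.

    `features` is an (n_regions, n_features) array of spatio-temporal
    descriptors and `labels` holds 1 for fire and 0 for non-fire.
    Feature scaling is included because SVM margins are sensitive to
    feature magnitude.
    """
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(np.asarray(features), np.asarray(labels))
    return clf

# Hypothetical usage: train_region_classifier(X_train, y_train).predict(X_new)
```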

In brief, the contributions of this paper are three-fold:

  • (1) A robust ICA K-medoids-based color model is developed to reliably detect all candidate fire regions in a scene.

  • (2) A motion-intensity-aware motion detection technique is introduced to simply extract the regions containing movement along with the motion intensity rate of every moving pixel.

  • (3) A set of new spatio-temporal features capable of distinguishing real fire regions from non-fire ones is extracted.

The rest of the paper is organized as follows: Section 2 provides an overview of existing fire detection methods. Section 3 describes the proposed method. Section 4 discusses the experimental results. Section 5 draws the conclusions and gives ideas for future work.

Section snippets

Related work

Computer vision-based methods have shown successful results in various applications (Farajzadeh and Hashemzadeh, 2018, Farajzadeh et al., 2018, Hashemzadeh, 2018, Hashemzadeh and Farajzadeh, 2016a, Hashemzadeh and Farajzadeh, 2016b, Hashemzadeh et al., 2019, Hashemzadeh et al., 2014, Hashemzadeh et al., 2013). The recent advances in embedded processing capabilities of smart cameras have given rise to intelligent CCTV (Closed-Circuit Television) surveillance systems (Muhammad et al., 2018).

Overview

A sample frame from a video containing three separate fire regions is shown in Fig. 1. The goal of the proposed method is to detect every fire region within a given frame of the input video by analyzing the spatial and temporal features of fire. Spatial features can be extracted from the frame under process, but to analyze the temporal features, other frames of the input video should be used. In order to make the proposed system as simple as possible and to reduce the computational complexity

Experiments

In this section, the performance of the proposed method is evaluated using different video datasets. First, the datasets used in our experiments are introduced. Then, the fire detection results of the proposed method on each video dataset are presented and compared with those of nine state-of-the-art fire detection approaches (Foggia et al., 2015, Habiboğlu et al., 2012, Khatami et al., 2017, Ko et al., 2009, Kong et al., 2016, Muhammad et al., 2018, Muhammad et al., 2018, Borges and Izquierdo,

Conclusion and future work

Reliable and early detection of fire in surveillance videos ensures on-time reaction in case of fire. Many existing computer vision-based fire detection approaches have shown good detection accuracy, but often have unsatisfactorily high false detection rates. In order to improve the performance of fire detection, we proposed a fire detection system utilizing the efficient color-based and motion-based features of fire. A robust color-based flame detection scheme using ICA optimization technique

Conflict of interest

There are no conflicts of interest.

References (71)

  • Y. Guo et al., Deep learning for visual understanding: A review, Neurocomputing (2016)

  • M. Hashemzadeh, Hiding information in videos using motion clues of feature points, Computers & Electrical Engineering (2018)

  • M. Hashemzadeh et al., Content-aware image resizing: An improved and shadow-preserving seam carving method, Signal Processing (2019)

  • M. Hashemzadeh et al., Combining keypoint-based and segment-based features for counting people in crowded scenes, Information Sciences (2016)

  • B. Karasulu et al., Moving object detection and tracking by using annealed background subtraction method in videos: Performance optimization, Expert Systems with Applications (2012)

  • A. Khatami et al., A new PSO-based approach to fire flame detection using K-Medoids clustering, Expert Systems with Applications (2017)

  • B.C. Ko et al., Fire detection based on vision sensor and support vector machines, Fire Safety Journal (2009)

  • S.G. Kong et al., Fast fire flame detection in surveillance video using logistic regression and temporal smoothing, Fire Safety Journal (2016)

  • N.A. Mandellos et al., A background subtraction algorithm for detecting and tracking vehicles, Expert Systems with Applications (2011)

  • G. Marbach et al., An image processing technique for fire detection in video images, Fire Safety Journal (2006)

  • K. Muhammad et al., Early fire detection using convolutional neural networks during surveillance for effective disaster management, Neurocomputing (2018)

  • F. Ortega-Zamorano et al., Smart motion detection sensor based on video processing using self-organizing maps, Expert Systems with Applications (2016)

  • W. Phillips III et al., Flame recognition in video, Pattern Recognition Letters (2002)

  • J. Rong et al., Fire flame detection based on GICA and target tracking, Optics & Laser Technology (2013)

  • J. Son et al., Fast illumination-robust foreground detection using hierarchical distribution map for real-time video surveillance system, Expert Systems with Applications (2016)

  • B.U. Töreyin et al., Computer vision based method for real-time fire and flame detection, Pattern Recognition Letters (2006)

  • T. Toulouse et al., Computer vision for wildfire research: An evolving image dataset for processing and analysis, Fire Safety Journal (2017)

  • D.-C. Wang et al., Adaptive flame detection using randomness testing and robust features, Fire Safety Journal (2013)

  • Y. Xu et al., Background modeling methods in video analysis: A review and comparative evaluation, CAAI Transactions on Intelligence Technology (2016)

  • T. Xuan Truong et al., Fire flame detection in video sequences using multi-stage pattern recognition techniques, Engineering Applications of Artificial Intelligence (2012)

  • C. Yu et al., A real-time video fire flame and smoke detection algorithm, Procedia Engineering (2013)

  • W. Zhang et al., Deep convolutional neural networks for multi-modality isointense infant brain image segmentation, NeuroImage (2015)

  • E. Atashpaz-Gargari et al., Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition (2007)

  • P.V.K. Borges et al., A probabilistic approach for vision-based fire detection in videos, IEEE Transactions on Circuits and Systems for Video Technology (2010)

  • P.V.K. Borges et al., Efficient visual fire detection applied for video retrieval
