Fire detection for video surveillance applications using ICA K-medoids-based color model and efficient spatio-temporal visual features
Introduction
Fire is a destructive and distressing natural or man-made disaster. Implementing fast and accurate fire detection systems is crucial to minimizing casualties and environmental and property damage. Various fire detection methods have been developed that use temperature, smoke, or photosensitive sensors (Liyang, Neng, & Xiaoqiao, 2005); however, many of them require the sensor to be close to the source of the fire, so they may not work properly in outdoor environments or large spaces. In addition, conventional methods cannot provide supplemental information about the fire status and burning process (Luo and Su, 2007, Podržaj and Hashimoto, 2008). Fire detection based on video surveillance systems is a suitable way of coping with these weaknesses: larger spaces can be monitored with a single surveillance camera, and the fire can be detected more quickly using advanced image and video processing techniques.
Although an explicit categorization of computer vision-based fire detection methods is not easy, most fall into two main categories: color-based and motion-based. Color-based methods exploit the distinct color characteristics of fire, such as the flame color lying in the red-yellow range (Borges & Izquierdo, 2010). A common shortcoming of these methods is their high sensitivity to illumination changes and to fire-like objects in the scene, which causes a high number of false detections owing to the many tonalities of red and yellow.
Motion-based methods build on the idea that the chaotic movement of a fire flame helps distinguish it from other moving objects in the scene (Foggia, Saggese, & Vento, 2015). Accordingly, spatio-temporal motion features such as motion orientation (Ko, Ham, & Nam, 2011), dynamic texture (Dimitropoulos, Barmpoutis, & Grammalidis, 2015) and optical flow (Mueller, Karasev, Kolesov, & Tannenbaum, 2013) have been used to detect flames. However, methods using only motion information are also limited in some situations, especially when other moving objects are present in the scene.
In order to enhance the performance of fire detection, most approaches have tried to combine both the color and motion features of fire (Foggia et al., 2015, Habiboğlu et al., 2012, Han et al., 2017, Kong et al., 2016, Xuan Truong and Kim, 2012). Usually, these methods first identify candidate fire regions using color features; the motion-based features extracted from the candidate regions are then evaluated to classify them into fire and non-fire regions.
Although some methods using both the color and motion characteristics of fire have achieved considerable success, each of them lacks sufficient robustness and has limited applications. These methods have two main limitations. The first is that the simple color thresholding techniques or color segmentation methods traditionally used to detect the candidate fire regions are not as robust as they should be; thus, it is likely for a large number of false positives or false negatives to be injected into the subsequent processes. This reduces the performance of the classification task in the second stage.
The second limitation is that the large numbers of uninformative and irregular motion features employed in most existing approaches lead to high-dimensional feature vectors for the classification algorithms, a well-known problem in machine learning. As shown in Vapnik (1999), a high-dimensional feature vector significantly increases the amount of training data the classifiers need in order to achieve the desired results and avoid overfitting.
In recent years, some fire flame detection methods based on convolutional neural networks (CNNs) have been proposed (Dunnings and Breckon, 2018, Muhammad et al., 2018). However, deep learning-based methods generally require more computation time and memory, which restricts their implementation on conventional hardware such as FPGAs in real-world surveillance networks.
In this study, an efficient fire detection approach comprising four stages is proposed. In the first stage, a robust color-based flame detection method using the Imperialist Competitive Algorithm (ICA) (Atashpaz-Gargari & Lucas, 2007) and the K-medoids clustering method is applied in order to identify candidate fire regions as accurately as possible. The ICA algorithm is a successful computational method used to solve optimization problems of various types. The overall algorithm in this stage is an improved version of the recent color-based fire detection method proposed by Khatami, Mirghasemi, Khosravi, Lim, and Nahavandi (2017). It can detect all potential fire regions with a reasonable number of false positives. In the second stage, a motion detection technique is developed that extracts the regions containing movement, together with information about the motion intensity of the moving pixels. This motion intensity information allows useful features to be extracted in subsequent stages.
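As a rough illustration of the first stage, the sketch below clusters pixel colors with a plain K-medoids routine and keeps the clusters whose medoid satisfies a simple fire-color rule. Everything here is an assumption for illustration only: the paper optimizes the medoids with the ICA algorithm and uses its own color model, whereas this version uses a deterministic farthest-point initialization, alternating updates, and a heuristic R > G > B rule.

```python
import numpy as np

def _init_medoids(points, k):
    # Deterministic farthest-point initialization (illustrative only;
    # the paper searches for medoids with the ICA optimizer instead).
    medoids = [points[0]]
    for _ in range(k - 1):
        d = np.linalg.norm(points[:, None, :] - np.array(medoids)[None, :, :],
                           axis=2).min(axis=1)
        medoids.append(points[d.argmax()])
    return np.array(medoids, dtype=float)

def k_medoids(points, k, iters=10):
    medoids = _init_medoids(points, k)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - medoids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members) == 0:
                continue
            # Medoid = cluster member minimizing summed distance to the rest.
            dd = np.linalg.norm(members[:, None, :] - members[None, :, :], axis=2)
            medoids[j] = members[dd.sum(axis=1).argmin()]
    return medoids, labels

def candidate_fire_mask(rgb_image, k=2):
    # In practice the clustering would run on a subsample of pixels;
    # the R > G > B rule below is a common fire-color heuristic, not
    # the paper's exact color model.
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(float)
    medoids, labels = k_medoids(pixels, k)
    fire = [j for j, (r, g, b) in enumerate(medoids) if r > g > b and r > 120]
    return np.isin(labels, fire).reshape(h, w)
```

The resulting binary mask would then be passed to the motion-analysis stage as the set of candidate fire regions.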
In the third stage, the unique features of the fire are considered and a set of features are extracted from the spatio-temporal characteristics of fire regions. These features are complementary in their nature as they are able to provide useful information for analysis of the problem based on the different characteristics of fire (color, movement and shape variation). Among the extracted features, four features are introduced for the first time in this paper. In the final stage, fire and non-fire regions are classified using the support vector machine (SVM) algorithm. The experimental results for a wide set of benchmark fire video datasets and video clips provided in this research show that the proposed method outperforms state-of-the-art fire detection approaches in terms of accuracy, providing high reliability and a low false detection rate in different environments.
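The final classification step can be pictured with a toy example. The sketch below trains a minimal linear SVM with Pegasos-style sub-gradient updates on hypothetical two-dimensional feature vectors; the actual system feeds the full spatio-temporal feature set into a standard SVM, so the feature dimensionality, class encoding, and training scheme here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    # Pegasos-style stochastic sub-gradient descent on the hinge loss.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b, t = np.zeros(d), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            if y[i] * (X[i] @ w + b) < 1:         # sample inside the margin
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w
    return w, b

def predict(w, b, X):
    # +1 = fire region, -1 = non-fire region
    return np.where(X @ w + b >= 0, 1, -1)
```

With well-separated feature clusters (say, fire regions scoring high on both a flicker and a color feature), the learned hyperplane separates the two classes after a few hundred passes over the data.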
In brief, the contributions of this paper are three-fold:
- (1)
A robust ICA K-medoids-based color model is developed to reliably detect all candidate fire regions in a scene.
- (2)
A motion-intensity-aware motion detection technique is introduced that extracts the regions containing movement along with the motion intensity of every moving pixel.
- (3)
A set of new spatio-temporal features is extracted that is capable of distinguishing real fire regions from non-fire ones.
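Contribution (2) above can be illustrated with the simplest possible motion detector. The sketch below uses plain frame differencing: the absolute difference between consecutive grayscale frames serves as a per-pixel motion intensity, and thresholding it yields the moving-pixel mask. The paper's detector is more elaborate, so the differencing scheme and the threshold value here are assumptions made purely for illustration.

```python
import numpy as np

def motion_intensity(prev_gray, curr_gray, thresh=15):
    # Absolute difference of consecutive grayscale frames gives a
    # per-pixel motion intensity; thresholding it gives the binary
    # moving-pixel mask.
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return diff > thresh, diff.astype(np.uint8)
```

Statistics of the intensity map over each candidate region (e.g. its mean or variance) could then feed the feature-extraction stage.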
The rest of the paper is organized as follows: Section 2 provides an overview of existing fire detection methods. Section 3 describes the proposed method. Section 4 discusses the experimental results. Section 5 draws the conclusions and gives ideas for future work.
Related work
Computer vision-based methods have shown successful results in various applications (Farajzadeh and Hashemzadeh, 2018, Farajzadeh et al., 2018, Hashemzadeh, 2018, Hashemzadeh and Farajzadeh, 2016a, Hashemzadeh and Farajzadeh, 2016b, Hashemzadeh et al., 2019, Hashemzadeh et al., 2014, Hashemzadeh et al., 2013). The recent advances in embedded processing capabilities of smart cameras have given rise to intelligent CCTV (Closed-Circuit Television) surveillance systems (Muhammad et al., 2018).
Overview
A sample frame from a video containing three separate fire regions is shown in Fig. 1. The goal of the proposed method is to detect every fire region within a given frame of the input video by analyzing the spatial and temporal features of fire. Spatial features can be extracted from the frame under process, but to analyze the temporal features, other frames of the input video should be used. In order to make the proposed system as simple as possible and to reduce the computational complexity
Experiments
In this section, the performance of the proposed method is evaluated using different video datasets. First, the datasets used in our experiments are introduced. Then, the fire detection results of the proposed method on each video dataset are presented and compared with those of nine state-of-the-art fire detection approaches (Foggia et al., 2015, Habiboğlu et al., 2012, Khatami et al., 2017, Ko et al., 2009, Kong et al., 2016, Muhammad et al., 2018, Muhammad et al., 2018, Borges and Izquierdo,
Conclusion and future work
Reliable and early detection of fire in surveillance videos ensures on-time reaction in case of fire. Many existing computer vision-based fire detection approaches have shown good detection accuracy, but often have unsatisfactorily high false detection rates. In order to improve the performance of fire detection, we proposed a fire detection system utilizing the efficient color-based and motion-based features of fire. A robust color-based flame detection scheme using ICA optimization technique
Conflict of interest
There are no conflicts of interest.
References (71)
- et al. (2018). Object-oriented convolutional features for fine-grained image retrieval in large surveillance datasets. Future Generation Computer Systems.
- et al. (2009). Fire detection in video sequences using a generic color model. Fire Safety Journal.
- et al. (2007). Fire detection using statistical color model in video sequences. Journal of Visual Communication and Image Representation.
- et al. (2018). Intelligent video surveillance beyond robust background modeling. Expert Systems with Applications.
- et al. (2013). Video fire detection – Review. Digital Signal Processing.
- (1971). Picture processing grammar and its applications. Information Sciences.
- et al. (2007). Visual language implementation through standard compiler-compiler techniques. Journal of Visual Languages & Computing.
- et al. (2017). What is a visual language. Journal of Visual Languages & Computing.
- et al. (2018). Exemplar-based facial expression recognition. Information Sciences.
- et al. (2009). Video based wildfire detection at night. Fire Safety Journal.
- Deep learning for visual understanding: A review. Neurocomputing.
- Hiding information in videos using motion clues of feature points. Computers & Electrical Engineering.
- Content-aware image resizing: An improved and shadow-preserving seam carving method. Signal Processing.
- Combining keypoint-based and segment-based features for counting people in crowded scenes. Information Sciences.
- Moving object detection and tracking by using annealed background subtraction method in videos: Performance optimization. Expert Systems with Applications.
- A new PSO-based approach to fire flame detection using K-Medoids clustering. Expert Systems with Applications.
- Fire detection based on vision sensor and support vector machines. Fire Safety Journal.
- Fast fire flame detection in surveillance video using logistic regression and temporal smoothing. Fire Safety Journal.
- A background subtraction algorithm for detecting and tracking vehicles. Expert Systems with Applications.
- An image processing technique for fire detection in video images. Fire Safety Journal.
- Early fire detection using convolutional neural networks during surveillance for effective disaster management. Neurocomputing.
- Smart motion detection sensor based on video processing using self-organizing maps. Expert Systems with Applications.
- Flame recognition in video. Pattern Recognition Letters.
- Fire flame detection based on GICA and target tracking. Optics & Laser Technology.
- Fast illumination-robust foreground detection using hierarchical distribution map for real-time video surveillance system. Expert Systems with Applications.
- Computer vision based method for real-time fire and flame detection. Pattern Recognition Letters.
- Computer vision for wildfire research: An evolving image dataset for processing and analysis. Fire Safety Journal.
- Adaptive flame detection using randomness testing and robust features. Fire Safety Journal.
- Background modeling methods in video analysis: A review and comparative evaluation. CAAI Transactions on Intelligence Technology.
- Fire flame detection in video sequences using multi-stage pattern recognition techniques. Engineering Applications of Artificial Intelligence.
- A real-time video fire flame and smoke detection algorithm. Procedia Engineering.
- Deep convolutional neural networks for multi-modality isointense infant brain image segmentation. NeuroImage.
- Imperialist competitive algorithm: An algorithm for optimization inspired by imperialistic competition.
- A probabilistic approach for vision-based fire detection in videos. IEEE Transactions on Circuits and Systems for Video Technology.
- Efficient visual fire detection applied for video retrieval.