
Squeezed fire binary segmentation model using convolutional neural network for outdoor images on embedded device

  • Original Paper
  • Published in: Machine Vision and Applications

Abstract

Although image-based prediction of fire events is widely used, current predictive methods are difficult to deploy because of their low performance and high hardware requirements. To overcome these problems, we propose a deep-learning-based binary semantic segmentation model for fire images that can run on embedded devices such as the Jetson TX2. To reduce the number of parameters, and hence the model size, while maintaining performance, we replace regular convolution with depthwise separable convolution and \(1 \times 1 \) convolution. Moreover, we replace the addition operation in the long skip connection with concatenation so that information from the encoding phase is conveyed properly. In addition, we propose a confusion block that encourages the model to train more actively. With these approaches, we achieve a significantly small-sized network for fire segmentation with the highest performance. We compare the proposed method with various deep-learning-based binary segmentation networks and an image processing algorithm. Extensive experimental results on the FiSmo Dataset and the Corsican Fire Database demonstrate that the proposed network outperforms other models with fewer parameters and is suitable for application on embedded devices.
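The parameter savings from replacing a regular convolution with a depthwise separable convolution (a depthwise \(k \times k \) convolution followed by a pointwise \(1 \times 1 \) convolution) can be sketched by simple counting. The channel and kernel sizes below are illustrative only, not the ones used in the proposed network, and bias terms are ignored:

```python
def regular_conv_params(c_in, c_out, k):
    # A regular convolution learns one k x k filter
    # for every (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution mixing c_in channels into c_out.
    return c_in * k * k + c_in * c_out

# Illustrative layer: 64 input channels, 128 output channels, 3 x 3 kernel.
regular = regular_conv_params(64, 128, 3)            # 73,728 parameters
separable = depthwise_separable_params(64, 128, 3)   # 8,768 parameters
print(regular, separable, round(regular / separable, 1))
```

For this hypothetical layer the factored form needs roughly 8.4 times fewer parameters, which is the kind of reduction that makes the model small enough for an embedded device while the pointwise convolution preserves cross-channel mixing.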




Acknowledgements

Myungjoo Kang was supported by the National Research Foundation grant of Korea (2015R1A5A1009350, 2021R1A2C3010887) and the ICT R&D program of MSIT/IITP (No. 1711117093).

Author information

Corresponding author: Myungjoo Kang.


Kyungmin Song and Han-Soo Choi contributed equally to this work.

About this article

Cite this article

Song, K., Choi, HS. & Kang, M. Squeezed fire binary segmentation model using convolutional neural network for outdoor images on embedded device. Machine Vision and Applications 32, 120 (2021). https://doi.org/10.1007/s00138-021-01242-1

