Deep Learning Approach in Aerial Imagery for Supporting Land Search and Rescue Missions

International Journal of Computer Vision

Abstract

In this paper, we propose a novel approach to person detection in UAV aerial images for search and rescue tasks in Mediterranean and sub-Mediterranean landscapes. Person detection in very high spatial resolution images is challenging because the target objects are relatively small and often camouflaged within the environment. The proposed method first reduces the search space with a visual attention algorithm that detects the salient, i.e., most prominent, segments in the image. To discard non-relevant salient regions, we then select the regions most likely to contain a person using pre-trained and fine-tuned convolutional neural networks (CNNs). We compiled a dedicated database, HERIDAL, to train and test our model: it contains over 68,750 aerial image patches of wilderness for training and approximately 500 labelled full-size real-world images for testing. The proposed method achieved a detection rate of 88.9% and a precision of 34.8%, outperforming the mean-shift-segmentation-based system currently used by Croatian Mountain Search and Rescue (SAR) teams (IPSAR). We also used the HERIDAL database to train and test a state-of-the-art region proposal network, Faster R-CNN (Ren et al. in Faster R-CNN: towards real-time object detection with region proposal networks, 2015. CoRR arXiv:1506.01497), which achieved comparable but slightly worse results than our proposed method.
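
To make the two-stage idea concrete, the sketch below illustrates the flow of such a pipeline: a saliency detector proposes candidate regions in a full-size aerial image, and a pre-trained CNN with a two-class head keeps only the regions likely to contain a person. This is a minimal illustration under stated assumptions, not the authors' implementation: OpenCV's spectral-residual saliency (from opencv-contrib-python) stands in for the paper's visual attention stage, a torchvision ResNet-50 stands in for the fine-tuned classification CNN, and the input file name, patch size, and probability threshold are hypothetical.

```python
# Minimal sketch of a two-stage salient-region + CNN pipeline.
# Assumptions: OpenCV spectral-residual saliency (requires opencv-contrib-python),
# a torchvision ResNet-50 backbone, and illustrative thresholds/file names.
import cv2
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

def propose_salient_regions(image_bgr, min_area=100):
    """Stage 1: reduce the search space with a saliency map."""
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, saliency_map = saliency.computeSaliency(image_bgr)
    saliency_map = (saliency_map * 255).astype("uint8")
    _, mask = cv2.threshold(saliency_map, 0, 255,
                            cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep bounding boxes of sufficiently large salient blobs.
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]

def classify_regions(image_bgr, boxes, model, device, threshold=0.5):
    """Stage 2: keep only regions the CNN scores as 'person'."""
    preprocess = T.Compose([
        T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])
    detections = []
    model.eval()
    with torch.no_grad():
        for (x, y, w, h) in boxes:
            patch = cv2.cvtColor(image_bgr[y:y + h, x:x + w],
                                 cv2.COLOR_BGR2RGB)
            logits = model(preprocess(patch).unsqueeze(0).to(device))
            prob = torch.softmax(logits, dim=1)[0, 1].item()  # class 1 = person
            if prob >= threshold:
                detections.append(((x, y, w, h), prob))
    return detections

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # ImageNet-pretrained backbone with a two-class head (background/person);
    # in practice the head would be fine-tuned on labelled training patches.
    model = resnet50(weights="IMAGENET1K_V2")
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.to(device)

    image = cv2.imread("aerial_scene.jpg")  # hypothetical input image
    boxes = propose_salient_regions(image)
    hits = classify_regions(image, boxes, model, device)
    print(f"{len(hits)} candidate person regions out of {len(boxes)} proposals")
```

In a real setting, the classifier head would first be fine-tuned on the HERIDAL training patches before being applied to the proposals extracted from full-size test images.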

Notes

  1. The dataset has been published on the IPSAR website, http://ipsar.fesb.unist.hr, under the page “HERIDAL”; direct link: http://ipsar.fesb.unist.hr/HERIDAL%20database.html.

References

  • Angelova, A., Krizhevsky, A., Vanhoucke, V., Ogale, A., & Ferguson, D. (2015). Real-time pedestrian detection with deep network cascades. In Proceedings of BMVC 2015.

  • Gaszczak, A., Han, J., & Breckon, T. P. (2011). Real-time people and vehicle detection from UAV imagery. In Proceedings of SPIE (Vol. 7878). https://doi.org/10.1117/12.876663.

  • Borji, A., Cheng, M. M., Hou, Q., Jiang, H., & Li, J. (2014). Salient object detection: A survey. arXiv preprint arXiv:1411.5878.

  • Chen, C., Liu, M. Y., Tuzel, O., & Xiao, J. (2017). R-CNN for small object detection. In S. H. Lai, V. Lepetit, K. Nishino, & Y. Sato (Eds.), Computer Vision—ACCV 2016 (pp. 214–230). Cham: Springer.

  • Daubechies, I. (1992). Ten lectures on wavelets. Philadelphia, PA: Society for Industrial and Applied Mathematics.

  • Eggert, C., Brehm, S., Winschel, A., Zecha, D., & Lienhart, R. (2017). A closer look: Small object detection in faster R-CNN. In 2017 IEEE international conference on multimedia and expo (ICME) (pp. 421–426). https://doi.org/10.1109/ICME.2017.8019550.

  • Enzweiler, M., & Gavrila, D. M. (2009). Monocular pedestrian detection: Survey and experiments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(12), 2179–2195. https://doi.org/10.1109/TPAMI.2008.260.

  • Girshick, R. B. (2015). Fast R-CNN. CoRR arXiv:1504.08083.

  • Girshick, R. B., Donahue, J., Darrell, T., & Malik, J. (2013). Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR arXiv:1311.2524.

  • Gotovac, S., Papić, V., & Marušić, Ž. (2016). Analysis of saliency object detection algorithms for search and rescue operations. In 24th International conference on software, telecommunications and computer networks (SoftCOM) (pp. 1–6). https://doi.org/10.1109/SOFTCOM.2016.7772118.

  • He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition. CoRR arXiv:1512.03385.

  • Hosang, J., Omran, M., Benenson, R., & Schiele, B. (2015). Taking a deeper look at pedestrians. In IEEE conference on computer vision and pattern recognition (CVPR).

  • Imamoglu, N., Lin, W., & Fang, Y. (2013). A saliency detection model using low-level features based on wavelet transform. IEEE Transactions on Multimedia, 15(1), 96–105. https://doi.org/10.1109/TMM.2012.2225034.

  • Koch, C., & Ullman, S. (1987). Shifts in selective visual attention: Towards the underlying neural circuitry (pp. 115–141). Dordrecht: Springer. https://doi.org/10.1007/978-94-009-3833-5_5.

  • Koester, R. (2008). Lost person behavior: A search and rescue guide on where to look for land, air, and water. dbS Productions. https://books.google.hr/books?id=YQeSIAAACAAJ.

  • Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th international conference on neural information processing systems (NIPS’12) (Vol. 1, pp. 1097–1105). Curran Associates Inc. http://dl.acm.org/citation.cfm?id=2999134.2999257.

  • LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278–2324.

  • Leroy, J., Riche, N., Mancas, M., Gosselin, B., & Dutoit, T. (2014). SuperRare: An object-oriented saliency algorithm based on superpixels rarity.

  • Li, J., Levine, M. D., An, X., Xu, X., & He, H. (2016). Visual saliency based on scale-space analysis in the frequency domain. CoRR arXiv:1605.01999.

  • Musić, J., Orović, I., Marasović, T., Papić, V., & Stanković, S. (2016). Gradient compressive sensing for image data reduction in UAV based search and rescue in the wild. In Mathematical problems in engineering, 2016. https://doi.org/10.1155/2016/6827414.

  • Ren, S., He, K., Girshick, R. B., & Sun, J. (2015). Faster R-CNN: Towards real-time object detection with region proposal networks. CoRR arXiv:1506.01497.

  • Rudol, P., & Doherty, P. (2008). Human body detection and geolocalization for UAV search and rescue missions using color and thermal imagery. In 2008 IEEE aerospace conference (pp. 1–8). https://doi.org/10.1109/AERO.2008.4526559.

  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., et al. (2014). ImageNet large scale visual recognition challenge. CoRR arXiv:1409.0575.

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. CoRR arXiv:1409.1556.

  • Sokalski, J., Breckon, T. P., & Cowling, I. (2010). Automatic salient object detection in UAV imagery. In Proceedings of the 25th international unmanned air vehicle systems conference (pp. 1–12).

  • Syrotuck, W., & Syrotuck, J. (2000). Analysis of lost person behavior: An aid to search planning. Barkleigh Productions. https://books.google.hr/books?id=3rWDAAAACAAJ.

  • Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., et al. (2015). Going deeper with convolutions. In 2015 IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1–9). https://doi.org/10.1109/CVPR.2015.7298594.

  • Tian, Y., Luo, P., Wang, X., & Tang, X. (2015). Deep learning strong parts for pedestrian detection. In 2015 IEEE international conference on computer vision (ICCV) (pp. 1904–1912).

  • Treisman, A. M., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12(1), 97–136.

  • Turić, H., Dujmić, H., & Papić, V. (2010). Two-stage segmentation of aerial images for search and rescue. Information Technology and Control, 39, 138–145.

  • Viola, P., Jones, M. J., & Snow, D. (2003). Detecting pedestrians using patterns of motion and appearance. In Proceedings ninth IEEE international conference on computer vision (Vol. 2, pp. 734–741). https://doi.org/10.1109/ICCV.2003.1238422.

  • Yuan, P., Zhong, Y., & Yuan, Y. (2017). Faster R-CNN with region proposal refinement.

  • Zendel, O., Murschitz, M., Humenberger, M., & Herzner, W. (2017). How good is my test data? Introducing safety analysis for computer vision. International Journal of Computer Vision, 125(1–3), 95–109. https://doi.org/10.1007/s11263-017-1020-z.

  • Zhang, L., Lin, L., Liang, X., & He, K. (2016). Is Faster R-CNN doing well for pedestrian detection? In B. Leibe, J. Matas, N. Sebe, & M. Welling (Eds.), Computer Vision—ECCV 2016 (pp. 443–457). Cham: Springer.

Acknowledgements

This research was carried out in part within the framework of the IPSAR project, University of Split, Croatia. It was also partly supported by the Federal Ministry of Education and Science, Bosnia and Herzegovina, through Grant NG 05-39-2945-3/16 to the Faculty of Science and Education, University of Mostar. We thank NVIDIA Corporation for the donation of GPUs through the NVIDIA GPU Education Center program at the University of Mostar.

Author information

Corresponding author

Correspondence to Dunja Božić-Štulić.

Additional information

Communicated by Dr. Jason J. Corso.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A


About this article

Cite this article

Božić-Štulić, D., Marušić, Ž. & Gotovac, S. Deep Learning Approach in Aerial Imagery for Supporting Land Search and Rescue Missions. Int J Comput Vis 127, 1256–1278 (2019). https://doi.org/10.1007/s11263-019-01177-1
