
Knowledge discovery of suspicious objects using hybrid approach with video clips and UAV images in distributed environments: a novel approach

  • Original Paper
  • Published in Wireless Networks

Abstract

Current video surveillance systems that employ manual face detection and automatic face recognition in unmanned aerial vehicles (UAVs) have limited accuracy, typically below 90%, because they use only a small number of Eigenfaces in the principal component analysis (PCA) transformation. Detecting faces in cloud-based Internet of Things (IoT) video frames involves separating video/image windows into two classes: those containing faces (the foreground to be matched) and those containing only background (used to train the classifier). Face detection is further complicated by varying facial geometry, inconsistent image/video quality, and changing lighting conditions, as well as the possibility of partial occlusion and disguises. A fully automated face recognition and detection system based on iris images could prove useful in surveillance applications such as securing automated teller machine users, whereas an automated face recognition system using UAV video frames in a cloud-integrated, IoT-based distributed computing environment is better suited to mug-shot matching and the surveillance of suspicious objects, because mug shots are captured under controlled conditions. The proposed hybrid approach was rigorously tested, and the experimental results suggest that its real-world performance will be far more accurate than that of existing systems. Intelligent surveillance knowledge databases contain vast amounts of information on landmarks, terrain, events, activities, and entities that must be processed and disseminated efficiently and accurately; discovering the appropriate knowledge for detecting suspicious objects therefore plays a crucial role in subsequent analysis. The experimental findings show that the proposed hybrid approach achieves high accuracy, low overall and average error rates, and very high average recall on both benchmark and self-generated datasets, demonstrating the robustness, efficiency, and reliability of the authors' design choices. Although further improvements are possible, the proposed approach is sufficient for detecting suspicious objects.


Availability of data and materials

The data and materials used in this work are appropriately cited and described in the paper.

Code availability

The source code, custom code, and software applications will be provided upon request.


Funding

This research work was funded by B.I.T. Mesra and BISR.

Author information


Contributions

Author 1 wrote the paper under the guidance of Author 2.

Corresponding author

Correspondence to Kamta Nath Mishra.

Ethics declarations

Conflict of interest

As the corresponding author, I declare that there is no conflict of interest with any person or organization regarding this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Ahamad, R., Mishra, K.N. Knowledge discovery of suspicious objects using hybrid approach with video clips and UAV images in distributed environments: a novel approach. Wireless Netw 29, 3393–3416 (2023). https://doi.org/10.1007/s11276-023-03394-6
