
Goals and Stakeholder Involvement in XAI for Remote Sensing: A Structured Literature Review

  • Conference paper
  • In: Artificial Intelligence XL (SGAI 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14381)


Abstract

A currently emerging direction in research on explainable artificial intelligence (XAI) is the involvement of stakeholders to achieve human-centered explanations. This work conducts a structured literature review to assess the current state of stakeholder involvement when XAI methods are applied to remotely sensed image data. Additionally, it assesses which goals are pursued when integrating explainability. The results show that there is no intentional stakeholder involvement: the majority of the reviewed work focuses on improving model performance and gaining insight into a model's internal properties, which mostly benefits developers. Finally, future research directions that emerged from the results of this work are highlighted.
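The review examines how XAI methods are applied to remote-sensing imagery; many such applications rest on attribution techniques that highlight which image regions drive a model's prediction. As a minimal, hypothetical sketch (not taken from the paper), occlusion sensitivity with a toy scoring function illustrates the idea; `occlusion_saliency` and `toy_model` are illustrative names, not from any cited work.

```python
import numpy as np

def occlusion_saliency(model, image, patch=4, fill=0.0):
    """Occlusion sensitivity: slide a masking patch over the image and
    record how much the model's score drops at each location."""
    h, w = image.shape
    base = model(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # mask one patch
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

# Toy "model": scores an image by the mean brightness of its top-left
# quadrant, so the explanation should highlight exactly that region.
def toy_model(img):
    return img[:8, :8].mean()

img = np.ones((16, 16))
heat = occlusion_saliency(toy_model, img, patch=8)
# heat[0, 0] is large (occluding the top-left quadrant drops the score);
# the other cells are zero.
```

Gradient-based methods such as Grad-CAM and model-agnostic attribution methods such as SHAP, both common in the surveyed remote-sensing literature, produce heatmaps of the same kind but derive them from gradients or Shapley values rather than exhaustive masking.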



Acknowledgements

Funded by the German Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV) based on a resolution of the German Bundestag (Grant No. 67KI21014A).

Author information

Corresponding author

Correspondence to Carolin Leluschko.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Leluschko, C., Tholen, C. (2023). Goals and Stakeholder Involvement in XAI for Remote Sensing: A Structured Literature Review. In: Bramer, M., Stahl, F. (eds) Artificial Intelligence XL. SGAI 2023. Lecture Notes in Computer Science, vol 14381. Springer, Cham. https://doi.org/10.1007/978-3-031-47994-6_47


  • DOI: https://doi.org/10.1007/978-3-031-47994-6_47

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47993-9

  • Online ISBN: 978-3-031-47994-6

  • eBook Packages: Computer Science (R0)
