
Towards a Holistic Framework for Explainable Robot Navigation

  • Conference paper
Human-Friendly Robotics 2023 (HFR 2023)

Abstract

With the rising deployment of autonomous robots, their navigational decisions will increasingly affect the humans around them. Robot navigation should therefore be explainable, to mitigate the undesirable effects that navigation faults and unexpected behavior have on people. To foster compliance between humans and autonomous robots, we present HiXRoN (Hierarchical eXplainable Robot Navigation), a comprehensive hierarchical framework for explaining a robot's navigational choices. Beyond generating explanations of robot navigation, the framework also encompasses qualitative, quantitative, and temporal strategies for conveying those explanations. We further discuss its possibilities and limitations.



Author information


Corresponding author

Correspondence to Amar Halilovic.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Halilovic, A., Krivic, S. (2024). Towards a Holistic Framework for Explainable Robot Navigation. In: Piazza, C., Capsi-Morales, P., Figueredo, L., Keppler, M., Schütze, H. (eds) Human-Friendly Robotics 2023. HFR 2023. Springer Proceedings in Advanced Robotics, vol 29. Springer, Cham. https://doi.org/10.1007/978-3-031-55000-3_15
