Research Article · DOI: 10.1145/3543174.3546846

Human Centered Explainability for Intelligent Vehicles – A User Study

Published: 17 September 2022

Abstract

Advances in artificial intelligence (AI) are leading to an increased use of algorithm-generated user-adaptivity in everyday products. Explainable AI aims to make algorithmic decision-making more transparent to humans. As future vehicles become more intelligent and user-adaptive, explainability will play an important role in ensuring that drivers understand the AI system's functionalities and outputs. However, when integrating explainability into in-vehicle features, there is a lack of knowledge about user needs and requirements and how to address them. We conducted a study with 59 participants focusing on how end-users evaluate explainability in the context of user-adaptive comfort and infotainment features. Results show that explanations foster perceived understandability and transparency of the system, but that the need for explanation may vary between features. Additionally, we found that insufficiently designed explanations can decrease acceptance of the system. Our findings underline the need for a user-centered approach to explainable AI and indicate directions for future research.

Supplementary Material

Appendix: HMI concepts and questionnaires (Submission1090_SupplementaryFile.pdf)




      Information

      Published In

      AutomotiveUI '22: Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
      September 2022, 371 pages
      ISBN: 9781450394154
      DOI: 10.1145/3543174
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]


      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      Published: 17 September 2022


      Author Tags

      1. Human-AI interaction
      2. explainable AI
      3. intelligent vehicles
      4. user studies
      5. user-adaptive

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      AutomotiveUI '22

      Acceptance Rates

      Overall Acceptance Rate: 248 of 566 submissions, 44%


      Article Metrics

      • Downloads (Last 12 months): 154
      • Downloads (Last 6 weeks): 22
      Reflects downloads up to 20 Jan 2025


      Cited By

      • (2024) Move, Connect, Interact: Introducing a Design Space for Cross-Traffic Interaction. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1–40. DOI: 10.1145/3678580. Online publication date: 9-Sep-2024
      • (2024) TimelyTale: A Multimodal Dataset Approach to Assessing Passengers' Explanation Demands in Highly Automated Vehicles. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(3), 1–60. DOI: 10.1145/3678544. Online publication date: 9-Sep-2024
      • (2024) Toward Robust 3D Perception for Autonomous Vehicles: A Review of Adversarial Attacks and Countermeasures. IEEE Transactions on Intelligent Transportation Systems 25(12), 19176–19202. DOI: 10.1109/TITS.2024.3456293. Online publication date: Dec-2024
      • (2024) Enhancing the Multi-User Experience in Fully Autonomous Vehicles Through Explainable AI Voice Agents. International Journal of Human–Computer Interaction, 1–15. DOI: 10.1080/10447318.2024.2383034. Online publication date: 29-Jul-2024
      • (2024) Human Factors in Intelligent Vehicles. In Human-Machine Interaction (HMI) Design for Intelligent Vehicles, 23–58. DOI: 10.1007/978-981-97-7823-2_2. Online publication date: 25-Oct-2024
      • (2023) What and When to Explain? Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(3), 1–26. DOI: 10.1145/3610886. Online publication date: 27-Sep-2023
      • (2023) The Impact of Explanation Detail in Advanced Driver Assistance Systems: User Experience, Acceptance, and Age-Related Effects. Proceedings of Mensch und Computer 2023, 307–312. DOI: 10.1145/3603555.3608536. Online publication date: 3-Sep-2023
      • (2023) Evaluating the Potential of Interactivity in Explanations for User-Adaptive In-Vehicle Systems – Insights from a Real-World Driving Study. HCI International 2023 – Late Breaking Papers, 294–312. DOI: 10.1007/978-3-031-48047-8_19. Online publication date: 23-Jul-2023
