ABSTRACT
User understanding and confidence are critical for advanced driver assistance systems (ADAS) to ensure the desired response and to prevent manual countersteering during automated maneuvers. However, ADAS interventions can be unexpected and disruptive to drivers, especially when the reasons behind them are unclear. In our study, we investigated the effects of differently presented explanations provided by a driver assistance system. We presented participants with three scenarios from the driver's perspective and created two videos for each scenario, with explanations of varying detail. Participants answered two questionnaires after each video. The results show that more detailed explanations generally lead to a better user experience and higher confidence in the system's performance. We also discuss the possible influence of age and technology acceptance.
The Impact of Explanation Detail in Advanced Driver Assistance Systems: User Experience, Acceptance, and Age-Related Effects