Explainable Artificial Intelligence (XAI) User Interface Design for Solving a Rubik’s Cube

  • Conference paper
  • In: HCI International 2022 – Late Breaking Posters (HCII 2022)

Abstract

Explainable Artificial Intelligence (XAI) aims to bridge the gap between the decisions made by an AI system and the user’s understanding of those decisions. When the goal of the AI is to teach the user how to solve a problem, user-friendly explanations of the AI’s decisions must be provided so that the user can learn to replicate the process themselves. This paper describes the process of defining explanations in the context of a collaborative AI platform, ALLURE, which teaches the user how to solve a Rubik’s Cube. A macro-action in our collaborative AI algorithm refers to a set of moves that takes the cube from an initial state to a goal state - a process that was neither transparent nor accessible when the back-end logic was exposed directly to the front-end for user engagement. By providing macro-action explanations to the user through a chatbot, together with a visual representation of the moves being performed on a virtual Rubik’s Cube, we created an XAI interface that engages and guides the user through a subset of the solutions that can later be applied to the remaining solutions produced by the AI. After initial usability testing, our study offers several useful and practical implications for XAI user interface design.
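To make the macro-action idea concrete, the sketch below models one as a named sequence of face turns (Singmaster notation) paired with a plain-language explanation that a chatbot could present while the moves are animated on a virtual cube. This is a minimal illustration of the concept described above, assuming a hypothetical MacroAction class and example macro for exposition; it is not ALLURE’s actual data model or API.

    from dataclasses import dataclass
    from typing import List

    # Illustrative sketch only: a macro-action as a named move sequence plus a
    # user-facing explanation. Moves use Singmaster notation (U, D, L, R, F, B;
    # a trailing ' denotes a counter-clockwise quarter turn).
    @dataclass
    class MacroAction:
        name: str          # short label shown in the interface
        moves: List[str]   # face turns taking the cube from its current state
                           # to the macro's goal state
        explanation: str   # plain-language rationale for the chatbot to present

    # Hypothetical example: the "right-hand trigger" often taught for inserting
    # a first-layer corner.
    RIGHT_TRIGGER = MacroAction(
        name="Right-hand trigger",
        moves=["R", "U", "R'", "U'"],
        explanation=(
            "Temporarily lifts the target corner into the top layer and then "
            "reinserts it without disturbing the pieces already solved."
        ),
    )

    def narrate(macro: MacroAction) -> str:
        """Format a macro-action as a single chatbot message the user can follow."""
        return f"{macro.name}: perform {', '.join(macro.moves)}. {macro.explanation}"

    print(narrate(RIGHT_TRIGGER))
    # Right-hand trigger: perform R, U, R', U'. Temporarily lifts the target corner ...

Pairing the move list with a short rationale in this way mirrors the paper’s premise: the same macro-action drives both the visual demonstration on the virtual cube and the chatbot’s textual explanation of why the moves work.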

Acknowledgements

The authors would like to acknowledge the generous funding support from the ASPIRE II grant at the University of South Carolina (UofSC) and the partial funding support provided by UofSC Grant No. 80002838.

Author information

Correspondence to Dezhi Wu.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Bradley, C. et al. (2022). Explainable Artificial Intelligence (XAI) User Interface Design for Solving a Rubik’s Cube. In: Stephanidis, C., Antona, M., Ntoa, S., Salvendy, G. (eds) HCI International 2022 – Late Breaking Posters. HCII 2022. Communications in Computer and Information Science, vol 1655. Springer, Cham. https://doi.org/10.1007/978-3-031-19682-9_76

  • DOI: https://doi.org/10.1007/978-3-031-19682-9_76

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19681-2

  • Online ISBN: 978-3-031-19682-9

  • eBook Packages: Computer Science, Computer Science (R0)
