Getting Playful with Explainable AI: Games with a Purpose to Improve Human Understanding of AI

Published: 25 April 2020

Abstract

Explainable Artificial Intelligence (XAI) is an emerging topic in Machine Learning (ML) that aims to give humans visibility into how AI systems make decisions. XAI is increasingly important in bringing transparency to fields such as medicine and criminal justice where AI informs high consequence decisions. While many XAI techniques have been proposed, few have been evaluated beyond anecdotal evidence. Our research offers a novel approach to assess how humans interpret AI explanations; we explore this by integrating XAI with Games with a Purpose (GWAP). XAI requires human evaluation at scale, and GWAP can be used for XAI tasks which are presented through rounds of play. This paper outlines the benefits of GWAP for XAI, and demonstrates application through our creation of a multi-player GWAP that focuses on explaining deep learning models trained for image recognition. Through our game, we seek to understand how humans select and interpret explanations used in image recognition systems, and bring empirical evidence on the validity of GWAP designs for XAI.
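The explanations the game presents to players are of the kind produced by standard attribution techniques for image classifiers, such as gradient-based saliency maps. As a rough illustration of the idea (not taken from the paper), the sketch below computes a saliency map for a toy linear scorer, where the gradient of the class score with respect to each input pixel is simply the corresponding weight; the model, image, and pixel count are all assumptions for illustration.

```python
import numpy as np

# Toy sketch of a gradient-based saliency explanation.
# For a linear scorer s(x) = w . x, the gradient ds/dx_i = w_i,
# so |w_i| ranks how strongly each "pixel" influences the score.
rng = np.random.default_rng(0)
image = rng.random(16)           # a flattened 4x4 toy "image"
weights = rng.normal(size=16)    # a toy linear classifier's weights

score = float(weights @ image)   # class score s(x) = w . x
saliency = np.abs(weights)       # |ds/dx| for a linear model
top_pixels = np.argsort(saliency)[::-1][:3]  # 3 most influential pixels
```

In a deep image classifier the same quantity is obtained by backpropagating the class score to the input, yielding the saliency maps players are asked to interpret in the game.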



Published In

CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems
April 2020
4474 pages
ISBN:9781450368193
DOI:10.1145/3334480


Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. explainable AI
  2. games with a purpose
  3. interpretable machine learning
  4. visualization


Conference

CHI '20

Acceptance Rates

Overall Acceptance Rate 6,164 of 23,696 submissions, 26%




Cited By

  • (2024) Navigating the Job-Seeking Journey: Challenges and Opportunities for Digital Employment Support in Kashmir. Proceedings of the ACM on Human-Computer Interaction 8, CSCW1, 1-28. DOI: 10.1145/3637375. Online publication date: 26-Apr-2024.
  • (2024) Efficient and Compressed Deep Learning Model for Brain Tumour Classification With Explainable AI for Smart Healthcare and Information Communication Systems. Expert Systems 42, 2. DOI: 10.1111/exsy.13770. Online publication date: 27-Oct-2024.
  • (2024) An analysis of ensemble pruning methods under the explanation of Random Forest. Information Systems 120, C. DOI: 10.1016/j.is.2023.102310. Online publication date: 4-Mar-2024.
  • (2023) Sustainable Cities With Gamification. Intersecting Health, Livability, and Human Behavior in Urban Environments, 205-226. DOI: 10.4018/978-1-6684-6924-8.ch010. Online publication date: 3-May-2023.
  • (2023) Eye into AI: Evaluating the Interpretability of Explainable AI Techniques through a Game with a Purpose. Proceedings of the ACM on Human-Computer Interaction 7, CSCW2, 1-22. DOI: 10.1145/3610064. Online publication date: 4-Oct-2023.
  • (2023) Human Centricity in the Relationship Between Explainability and Trust in AI. IEEE Technology and Society Magazine 42, 4, 66-76. DOI: 10.1109/MTS.2023.3340238. Online publication date: Dec-2023.
  • (2023) Explainable Artificial Intelligence in Alzheimer's Disease Classification: A Systematic Review. Cognitive Computation 16, 1, 1-44. DOI: 10.1007/s12559-023-10192-x. Online publication date: 13-Nov-2023.
  • (2022) The Role of Human Knowledge in Explainable AI. Data 7, 7, 93. DOI: 10.3390/data7070093. Online publication date: 6-Jul-2022.
  • (2022) Novel Vision Transformer-Based Bi-LSTM Model for LU/LC Prediction—Javadi Hills, India. Applied Sciences 12, 13, 6387. DOI: 10.3390/app12136387. Online publication date: 23-Jun-2022.
  • (2022) Family as a Third Space for AI Literacies: How do children and parents learn about AI together? Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-17. DOI: 10.1145/3491102.3502031. Online publication date: 29-Apr-2022.
