DOI: 10.1145/3172944.3172946
Research Article
Public Access

Toward Foraging for Understanding of StarCraft Agents: An Empirical Study

Published: 05 March 2018

Abstract

Assessing and understanding intelligent agents is a difficult task for users who lack an AI background. A relatively new area, called "Explainable AI," is emerging to help address this problem, but little is known about how users would forage through the information an explanation system might offer. To inform the development of Explainable AI systems, we conducted a formative study -- using the lens of Information Foraging Theory -- into how experienced users foraged in the domain of StarCraft to assess an agent. Our results showed that participants faced difficult foraging problems. These foraging problems caused participants to entirely miss events that were important to them, reluctantly choose to ignore actions they did not want to ignore, and bear high cognitive, navigation, and information costs to access the information they needed.
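
For readers unfamiliar with the lens named above: Information Foraging Theory (Pirolli and Card) conventionally models a forager as maximizing the rate of valuable information gained per unit of foraging cost. As background from the IFT literature (not a result of this paper), the standard rate-of-gain formulation is

    R = G / (T_B + T_W)

where R is the rate of information gain, G is the total value of the information obtained, T_B is the time spent navigating between information patches, and T_W is the time spent foraging within patches. The cognitive, navigation, and information costs reported in the abstract can be read as contributions to these time terms.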




Information

Published In

IUI '18: Proceedings of the 23rd International Conference on Intelligent User Interfaces
March 2018
698 pages
ISBN: 9781450349451
DOI: 10.1145/3172944


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 March 2018


Author Tags

  1. content analysis
  2. explainable AI
  3. information foraging
  4. intelligent agents
  5. intelligibility
  6. StarCraft
  7. video games

Qualifiers

  • Research-article

Funding Sources

  • NSF
  • Defense Advanced Research Projects Agency

Conference

IUI '18

Acceptance Rates

IUI '18 Paper Acceptance Rate: 43 of 299 submissions, 14%
Overall Acceptance Rate: 746 of 2,811 submissions, 27%



Article Metrics

  • Downloads (last 12 months): 140
  • Downloads (last 6 weeks): 14
Reflects downloads up to 22 Jan 2025


Cited By

  • (2023) Integrating Players’ Perspectives in AI-Based Games: Case Studies of Player-AI Interaction Design. Proceedings of the 18th International Conference on the Foundations of Digital Games, 1-9. DOI: 10.1145/3582437.3582451. Online publication date: 12-Apr-2023
  • (2023) On Selective, Mutable and Dialogic XAI: a Review of What Users Say about Different Types of Interactive Explanations. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-21. DOI: 10.1145/3544548.3581314. Online publication date: 19-Apr-2023
  • (2023) Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion 99, 101805. DOI: 10.1016/j.inffus.2023.101805. Online publication date: Nov-2023
  • (2022) Contrastive Explanations of Plans through Model Restrictions. Journal of Artificial Intelligence Research 72, 533-612. DOI: 10.1613/jair.1.12813. Online publication date: 4-Jan-2022
  • (2022) "I Want To See How Smart This AI Really Is": Player Mental Model Development of an Adversarial AI Player. Proceedings of the ACM on Human-Computer Interaction 6:CHI PLAY, 1-26. DOI: 10.1145/3549482. Online publication date: 31-Oct-2022
  • (2022) How Do People Rank Multiple Mutant Agents? Proceedings of the 27th International Conference on Intelligent User Interfaces, 191-211. DOI: 10.1145/3490099.3511115. Online publication date: 22-Mar-2022
  • (2022) Finding AI’s Faults with AAR/AI: An Empirical Study. ACM Transactions on Interactive Intelligent Systems 12:1, 1-33. DOI: 10.1145/3487065. Online publication date: 4-Mar-2022
  • (2022) Recent Advances in Trustworthy Explainable Artificial Intelligence: Status, Challenges, and Perspectives. IEEE Transactions on Artificial Intelligence 3:6, 852-866. DOI: 10.1109/TAI.2021.3133846. Online publication date: Dec-2022
  • (2021) "What Happened Here!?" A Taxonomy for User Interaction with Spatio-Temporal Game Data Visualization. Proceedings of the ACM on Human-Computer Interaction 5:CHI PLAY, 1-27. DOI: 10.1145/3474687. Online publication date: 6-Oct-2021
  • (2021) Learn, Generate, Rank, Explain: A Case Study of Visual Explanation by Generative Machine Learning. ACM Transactions on Interactive Intelligent Systems 11:3-4, 1-34. DOI: 10.1145/3465407. Online publication date: 3-Sep-2021
