Abstract
Intelligent autonomous systems are quickly becoming part of everyday life. Efforts to design systems whose behaviors are transparent and explainable to users are hampered by increasingly complex and interdependent models, and compounded by an ever-expanding scope of autonomy that permits more autonomous decision making and action than ever before. Previous work on designing transparency in autonomous systems has focused largely on explaining algorithms for the benefit of programmers and back-end debugging. Less emphasis has been placed on modeling the information needs of end users, or on evaluating which features most affect end-user trust and foster positive user engagement in the context of human-machine teaming. This study investigated user information preferences and priorities directly by presenting users with an interaction scenario that depicted ambiguous, unexpected, and potentially unsafe system behaviors. We then elicited which features these users most desired from the system to resolve these interaction conflicts, i.e., what information users need in order to trust the system and continue using it in the described scenario. Using factor analysis, we built detailed user typologies that arranged and prioritized user information needs and communication strategies. This typology can be adapted as a user model to guide design decisions for autonomous systems. This mixed-methods approach to modeling user interactions with complex sociotechnical systems revealed design strategies with the potential to increase user understanding of system behaviors, which may in turn improve user trust in complex autonomous systems.
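The factor-analytic step the abstract describes follows the usual Q-methodology pattern: participants (not survey items) are correlated with one another, and factors group people with similar sorting patterns into typologies. A minimal sketch of that pattern, using entirely hypothetical data (the participant count, statement count, and random sorts below are illustrative assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 12 participants each rank-ordered 20 statements
# about desired system information, from -4 (least important) to +4.
n_items, n_participants = 20, 12
sorts = rng.integers(-4, 5, size=(n_items, n_participants)).astype(float)

# 1. Correlate participants with one another (person-by-person matrix).
corr = np.corrcoef(sorts, rowvar=False)          # shape (12, 12)

# 2. Extract principal components of that correlation matrix.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]                # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 3. Retain factors with eigenvalue > 1 (the Kaiser criterion,
#    one common cut-off; the paper may have used another rule).
keep = eigvals > 1.0
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])

# Each row is one participant's loading on each retained factor;
# participants loading heavily on the same factor share a "typology".
print(loadings.shape)
```

In practice the retained factors would also be rotated (e.g., varimax) and interpreted against the statements to produce the prioritized information-need profiles the abstract reports; this sketch stops at extraction.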
Copyright information
© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply
About this paper
Cite this paper
Vorm, E.S., Miller, A.D. (2020). Modeling User Information Needs to Enable Successful Human-Machine Teams: Designing Transparency for Autonomous Systems. In: Schmorrow, D., Fidopiastis, C. (eds) Augmented Cognition. Human Cognition and Behavior. HCII 2020. Lecture Notes in Computer Science(), vol 12197. Springer, Cham. https://doi.org/10.1007/978-3-030-50439-7_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-50438-0
Online ISBN: 978-3-030-50439-7