Modeling User Information Needs to Enable Successful Human-Machine Teams: Designing Transparency for Autonomous Systems

  • Conference paper
Augmented Cognition. Human Cognition and Behavior (HCII 2020)

Abstract

Intelligent autonomous systems are quickly becoming part of everyday life. Efforts to design systems whose behaviors are transparent and explainable to users are stymied by models that are increasingly complex and interdependent, and compounded by an ever-widening scope of autonomy that permits more autonomous decision making and action than ever before. Previous efforts to design transparency into autonomous systems have focused largely on explaining algorithms for the benefit of programmers and back-end debugging. Less emphasis has been placed on modeling the information needs of end users, or on evaluating which features most affect end-user trust and encourage positive user engagement in the context of human-machine teaming. This study investigated user information preferences and priorities directly by presenting users with an interaction scenario depicting ambiguous, unexpected, and potentially unsafe system behaviors. We then elicited which features these users most desired from the system to resolve those interaction conflicts (i.e., what information users need most in order to trust the system and continue using it in the described scenario). Using factor analysis, we built detailed user typologies that arrange and prioritize user information needs and communication strategies. These typologies can be adapted as a user model for autonomous system designs in order to guide design decisions. This mixed-methods approach to modeling user interactions with complex sociotechnical systems revealed design strategies with the potential to increase user understanding of system behaviors, which may in turn improve user trust in complex autonomous systems.
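The analytic core the abstract describes — eliciting ranked information preferences and factor-analyzing them into a small set of user types — can be sketched in a few lines. The sketch below is our illustration, not the authors' pipeline: it assumes a participants-by-statements matrix of Q-sort-style rankings, uses scikit-learn's FactorAnalysis with varimax rotation as a stand-in for the paper's factor analysis, and factors the transposed matrix so that participants (not statements) load onto factors; all names and the synthetic data are hypothetical.

```python
# Minimal sketch (illustrative, not the authors' method): derive user
# "typologies" by factor-analyzing ranked information preferences.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_participants, n_statements, n_types = 40, 25, 3

# Synthetic stand-in for elicited preference rankings
# (e.g., Q-sort values from -4 "least desired" to +4 "most desired").
rankings = rng.integers(-4, 5, size=(n_participants, n_statements)).astype(float)

# Q-methodology-style "by-person" analysis: treat statements as observations
# and participants as variables, i.e., factor the transposed matrix.
fa = FactorAnalysis(n_components=n_types, rotation="varimax")
fa.fit(rankings.T)  # shape: (n_statements, n_participants)

# components_ is (n_factors, n_participants); each factor is one user type.
loadings = fa.components_.T  # (n_participants, n_factors)

# Assign each participant to the factor they load on most strongly.
types = np.abs(loadings).argmax(axis=1)
for t in range(n_types):
    print(f"Type {t}: participants {np.flatnonzero(types == t).tolist()}")
```

With real data, `fa.transform(rankings.T)` would return a score per statement per factor; the highest- and lowest-scoring statements for each factor would characterize that user type's information priorities.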


Author information

Correspondence to Andrew D. Miller.

Copyright information

© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply

About this paper

Cite this paper

Vorm, E.S., Miller, A.D. (2020). Modeling User Information Needs to Enable Successful Human-Machine Teams: Designing Transparency for Autonomous Systems. In: Schmorrow, D., Fidopiastis, C. (eds) Augmented Cognition. Human Cognition and Behavior. HCII 2020. Lecture Notes in Computer Science, vol 12197. Springer, Cham. https://doi.org/10.1007/978-3-030-50439-7_31

  • DOI: https://doi.org/10.1007/978-3-030-50439-7_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50438-0

  • Online ISBN: 978-3-030-50439-7

  • eBook Packages: Computer Science (R0)
