AI in the Human Loop: The Impact of Differences in Digital Assistant Roles on the Personal Values of Users

  • Conference paper
  • First Online:
Human-Computer Interaction – INTERACT 2023 (INTERACT 2023)

Abstract

As AI systems become increasingly prevalent in our daily lives and work, it is essential to consider their social role and how they interact with us. While functionality, and increasingly explainability and trustworthiness, are often the primary focus in designing AI systems, little consideration is given to their social role and its effects on human-AI interaction. In this paper, we advocate for paying attention to social roles in AI design. We focus on an AI healthcare application and present three possible social roles of the AI system within it to explore the relationship between the AI system and the user, and its implications for designers and practitioners. Our findings emphasise the need to think beyond functionality and highlight the importance of considering the social role of AI systems in shaping meaningful human-AI interactions.


Notes

  1.

    Interface was designed based on the research of the Positive Health Institute (https://www.iph.nl).

  2.

    ATLAS.ti Scientific Software Development GmbH [ATLAS.ti 22 Windows]. (2022). Retrieved from https://atlasti.com.



Author information

Correspondence to Shakila Shayan.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Shayan, S. et al. (2023). AI in the Human Loop: The Impact of Differences in Digital Assistant Roles on the Personal Values of Users. In: Abdelnour Nocera, J., Kristín Lárusdóttir, M., Petrie, H., Piccinno, A., Winckler, M. (eds) Human-Computer Interaction – INTERACT 2023. INTERACT 2023. Lecture Notes in Computer Science, vol 14144. Springer, Cham. https://doi.org/10.1007/978-3-031-42286-7_13


  • DOI: https://doi.org/10.1007/978-3-031-42286-7_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42285-0

  • Online ISBN: 978-3-031-42286-7

  • eBook Packages: Computer Science (R0)
