Research article
DOI: 10.1145/3543758.3543774

The Influence of Unequal Chatbot Treatment on Users in Group Chat

Published: 15 September 2022

ABSTRACT

Unfair treatment by artificial intelligences in human-AI interaction has received increasing attention in recent years. However, research in this area tends to target one-on-one interaction. Experiments that focus on perceived unfairness in group settings involving an AI are largely nonexistent. This work aims to provide insight into such settings through a comparative study that, in a cooking scenario, exposes groups of people to an AI which treats some participants differently from others. Our results show significant differences not only for participants who were treated unfairly by the AI, but also for those not directly affected by the unfair treatment; the latter also thought worse of the AI if they felt that a group partner had been treated unfairly. We discuss these results and theorize about possible reasons for them.


Published in
MuC '22: Proceedings of Mensch und Computer 2022
September 2022, 624 pages
Copyright © 2022 ACM
© 2022 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.
Publisher: Association for Computing Machinery, New York, NY, United States


