
Effects of Agents’ Transparency on Teamwork

  • Conference paper
  • In: Explainable, Transparent Autonomous Agents and Multi-Agent Systems (EXTRAAMAS 2019)

Abstract

Transparency in the field of human-machine interaction and artificial intelligence has seen growing interest in recent years. Nonetheless, there are still few experimental studies on how transparency affects teamwork, particularly in collaborative situations where the strategies of others, including agents, may seem obscure.

We explored this problem using a collaborative game scenario with a mixed human-agent team. We investigated the role of transparency in the agents' decisions by having agents reveal and explain the strategies they adopt in the game, making their decisions transparent to the other team members. The game embodies a social dilemma in which a human player can choose to contribute to the goal of the team (cooperate) or act selfishly in the interest of his or her individual goal (defect). We designed a between-subjects experimental study with different conditions manipulating transparency within the team. The results showed an interaction effect between the agents' strategy and transparency on trust, group identification and human-likeness. Our results suggest that transparency has a positive effect on people's perception of trust, group identification and human-likeness when the agents use a tit-for-tat or a more individualistic strategy. By contrast, adding transparent behaviour to an unconditional cooperator negatively affects the measured dimensions.
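The abstract refers to agents that follow a tit-for-tat, unconditionally cooperative, or individualistic strategy and verbalise their choices to the team. As a purely illustrative aid, the minimal Python sketch below shows what such strategies and a transparency message could look like in an iterated cooperate/defect setting; the function names, game structure, and message wording are assumptions for illustration and are not taken from the paper.

```python
# Hypothetical sketch (not the paper's implementation): three agent strategies
# of the kind described in the abstract, plus a "transparency" message in which
# the agent explains the rule behind its current move.

from typing import List

COOPERATE, DEFECT = "C", "D"

def tit_for_tat(partner_history: List[str]) -> str:
    """Cooperate on the first round, then mirror the partner's previous move."""
    return COOPERATE if not partner_history else partner_history[-1]

def unconditional_cooperator(partner_history: List[str]) -> str:
    """Always contribute to the team goal, regardless of the partner's moves."""
    return COOPERATE

def individualistic(partner_history: List[str]) -> str:
    """Always pursue the individual goal (defect)."""
    return DEFECT

def transparent_message(strategy_name: str, move: str) -> str:
    """Illustrative transparency: the agent verbalises the rule behind its move."""
    action = "contribute to the team goal" if move == COOPERATE else "play for my own goal"
    return f"I follow a {strategy_name} strategy, so this round I {action}."

# Example round: the agent reacts to the human's last move and explains itself.
human_history = [COOPERATE, DEFECT]
move = tit_for_tat(human_history)
print(transparent_message("tit-for-tat", move))
```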

This work was supported by national funds through Fundação para a Ciência e a Tecnologia (FCT-UID/CEC/50021/2019). Silvia Tulli acknowledges funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 765955 (ANIMATAS project). Filipa Correia also acknowledges an FCT grant (Ref. SFRH/BD/118031/2016).



Author information

Correspondence to Silvia Tulli.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Tulli, S., Correia, F., Mascarenhas, S., Gomes, S., Melo, F.S., Paiva, A. (2019). Effects of Agents’ Transparency on Teamwork. In: Calvaresi, D., Najjar, A., Schumacher, M., Främling, K. (eds) Explainable, Transparent Autonomous Agents and Multi-Agent Systems. EXTRAAMAS 2019. Lecture Notes in Computer Science, vol 11763. Springer, Cham. https://doi.org/10.1007/978-3-030-30391-4_2


  • DOI: https://doi.org/10.1007/978-3-030-30391-4_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30390-7

  • Online ISBN: 978-3-030-30391-4

  • eBook Packages: Computer Science; Computer Science (R0)
