
Exploring the Effect of Communication Patterns and Transparency on the Attitudes Towards Robots

  • Conference paper
  • In: Advances in Human Factors and Simulation (AHFE 2019)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 958)

Abstract

Robots’ increasing autonomy necessitates effective human-robot communication. Examining both the pattern and the content of that communication informs the development of such robots and our understanding of how they can interact with human teammates. This study compares two communication patterns and two approaches to transparent interaction, examining their effects on human team members’ attitudes towards the robots with which they communicate. Participants rated robots using a bidirectional communication pattern as more animate, likeable, and intelligent than robots using a unidirectional communication pattern.



Acknowledgments

This research was funded by the U.S. Army Research Laboratory’s Human-Robot Interaction program. The authors would like to thank Jason English, Christopher Miller, Thomas Pring, and Dr. Jessie Chen for their contributions to this project.

Author information

Corresponding author

Correspondence to Shan G. Lakhmani.


Copyright information

© 2020 This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply

About this paper


Cite this paper

Lakhmani, S.G., Wright, J.L., Schwartz, M., Barber, D. (2020). Exploring the Effect of Communication Patterns and Transparency on the Attitudes Towards Robots. In: Cassenti, D. (eds) Advances in Human Factors and Simulation. AHFE 2019. Advances in Intelligent Systems and Computing, vol 958. Springer, Cham. https://doi.org/10.1007/978-3-030-20148-7_3
