Abstract

Artificially intelligent agents increasingly collaborate with humans in human-agent teams. Timely, proactive sharing of relevant information within the team contributes to overall team performance. This paper presents a machine learning approach to proactive communication in AI agents based on contextual factors. Proactive communication was learned in two consecutive experimental steps: (a) multi-agent team simulations to learn effective communicative behaviors, and (b) human-agent team experiments to refine that communication for a human team member. The results consist of proactive communication policies for sharing both beliefs and goals within human-agent teams. In simulation, agents learned to improve team performance with minimal communication, while in the human-agent team experiment they learned more specific, socially desirable behaviors.
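At a high level, the approach can be framed as a contextual reinforcement-learning problem: given contextual factors, the agent chooses whether to stay silent or to proactively share a belief or goal, and receives feedback tied to team performance. The sketch below illustrates one way such a communication policy could be learned; the tabular Q-learning formulation, the context features, the action names, and the reward shaping are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

# Minimal sketch (assumed, not the authors' method): a tabular Q-learner that
# decides, per context, whether to proactively communicate a belief or a goal.
ACTIONS = ["stay_silent", "share_belief", "share_goal"]

class ProactiveCommPolicy:
    """Learns when proactive communication pays off for the team."""

    def __init__(self, epsilon=0.1, alpha=0.2, gamma=0.9):
        self.q = defaultdict(float)   # Q[(context, action)] -> estimated value
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor

    def act(self, context):
        # Epsilon-greedy choice of a communication action for this context.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(context, a)])

    def update(self, context, action, reward, next_context):
        # One temporal-difference update from observed team feedback.
        best_next = max(self.q[(next_context, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(context, action)] += self.alpha * (target - self.q[(context, action)])


# Illustrative use: contexts are tuples of discretised contextual factors
# (the feature names below are hypothetical); the reward could combine task
# progress with a small per-message cost.
policy = ProactiveCommPolicy()
ctx = ("teammate_idle", "info_new")
action = policy.act(ctx)
policy.update(ctx, action, reward=0.5, next_context=("teammate_busy", "info_known"))
```

A small per-message cost in the reward is one simple way to push such a learner toward the kind of minimal, high-value communication reported for the simulation step.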



Author information

Corresponding author

Correspondence to Emma M. van Zoelen.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

van Zoelen, E.M., Cremers, A., Dignum, F.P.M., van Diggelen, J., Peeters, M.M. (2020). Learning to Communicate Proactively in Human-Agent Teaming. In: De La Prieta, F., et al. Highlights in Practical Applications of Agents, Multi-Agent Systems, and Trust-worthiness. The PAAMS Collection. PAAMS 2020. Communications in Computer and Information Science, vol 1233. Springer, Cham. https://doi.org/10.1007/978-3-030-51999-5_20


  • DOI: https://doi.org/10.1007/978-3-030-51999-5_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-51998-8

  • Online ISBN: 978-3-030-51999-5

  • eBook Packages: Computer Science; Computer Science (R0)
