Learning to trust in the competence and commitment of agents

Abstract

For agents to collaborate in open multi-agent systems, each agent must trust in the other agents’ ability to complete tasks and willingness to cooperate. Agents need to decide between cooperative and opportunistic behavior based on their assessment of another agent’s trustworthiness. In particular, an agent can hold two beliefs about a potential partner that tend to indicate trustworthiness: that the partner is competent and that the partner expects to engage in future interactions. This paper explores an approach that models competence as an agent’s probability of successfully performing an action, and models belief in future interactions as a discount factor. We evaluate the underlying decision framework’s performance given accurate knowledge of the model’s parameters in an evolutionary game setting. We then introduce a game-theoretic framework in which an agent can learn a model of another agent online, using the Harsanyi transformation. The learning agents evaluate a set of competing hypotheses about another agent during the simulated play of an indefinitely repeated game. The Harsanyi strategy is shown to demonstrate robust and successful online play against a variety of static, classic, and learning strategies in a variable-payoff Iterated Prisoner’s Dilemma setting.
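
To make the model described above concrete, here is a minimal, illustrative sketch (not the authors’ code) of the two ideas in the abstract: a Bayesian update over competing hypotheses about a partner, in the spirit of the Harsanyi transformation, and a cooperation rule in which the partner’s competence enters as a success probability and a discount factor gamma stands for belief in future interactions. The hypothesis set, the Prisoner’s Dilemma payoffs (T=5, R=3, P=1, S=0), and the small noise term are assumptions made for illustration only.

    # Illustrative sketch only: hypothesis types, payoffs, and noise term are
    # assumptions, not taken from the paper.
    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        """A candidate model (type) of the partner."""
        name: str
        competence: float   # probability an intended action succeeds
        cooperates: bool    # whether this type intends to cooperate

    # Competing hypotheses about the partner, with uniform prior beliefs.
    hypotheses = [
        Hypothesis("competent cooperator", competence=0.9, cooperates=True),
        Hypothesis("incompetent cooperator", competence=0.5, cooperates=True),
        Hypothesis("defector", competence=0.9, cooperates=False),
    ]
    belief = {h.name: 1.0 / len(hypotheses) for h in hypotheses}

    def p_coop(h: Hypothesis) -> float:
        """Probability this type actually plays Cooperate in a round:
        intended cooperation succeeds with probability 'competence';
        a defector cooperates only with a small noise probability."""
        return h.competence if h.cooperates else 0.05

    def update(observed_cooperation: bool) -> None:
        """Bayesian update of the belief over hypotheses after one observation."""
        for h in hypotheses:
            likelihood = p_coop(h) if observed_cooperation else 1.0 - p_coop(h)
            belief[h.name] *= likelihood
        total = sum(belief.values())
        for name in belief:
            belief[name] /= total

    def choose_action(gamma: float) -> str:
        """Cooperate when the discounted value of sustained mutual cooperation
        outweighs the one-shot temptation to defect (rough rule of thumb with
        payoffs T=5, R=3, P=1, S=0; gamma encodes belief in future interactions)."""
        p = sum(belief[h.name] * p_coop(h) for h in hypotheses)
        value_cooperate = (p * 3 + (1 - p) * 0) / (1 - gamma)  # discounted stream of R payoffs
        value_defect = p * 5 + (1 - p) * 1                     # one-shot temptation, trust forfeited
        return "C" if value_cooperate > value_defect else "D"

    # Example: observe a few partner moves, then decide.
    for move in (True, True, False, True):
        update(observed_cooperation=move)
    print(belief)
    print(choose_action(gamma=0.9))

The paper’s own hypothesis space, payoffs, and update rule are richer than this, but the sketch shows how beliefs over partner types and a discount factor can jointly drive the cooperate-or-defect decision.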

Author information

Corresponding author

Correspondence to Michael J. Smith.

Cite this article

Smith, M.J., desJardins, M. Learning to trust in the competence and commitment of agents. Auton Agent Multi-Agent Syst 18, 36–82 (2009). https://doi.org/10.1007/s10458-008-9055-8
