Experiments in Building Experiential Trust in a Society of Objective-Trust Based Agents

  • Conference paper
Trust in Cyber-societies

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2246)

Abstract

In this paper we develop a notion of “objective trust” for software Agents: trust of, or between, Agents that is based on the actual experiences between those Agents. Experiential objective trust allows an Agent to decide how to select other Agents when a choice must be made. We define a mechanism for such an “objective Trust-Based Agent” (oTB-Agent) and present experimental results from a simulated trading environment based on an Intelligent Networks (IN) scenario. The trust one Agent places in another is dynamic, updated after each experience. We use this mechanism to investigate three questions about trust in Multi-Agent Systems (MAS): first, how trust affects the formation of trading partnerships; second, whether trust developed over a period can equate to “loyalty”; and third, whether a less than scrupulous Agent can exploit the individual nature of trust to its advantage.
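The abstract's core idea, per-partner trust that is updated after each experience and then drives partner selection, can be sketched in a few lines. This is a minimal illustration, not the paper's actual mechanism: the class name, the exponential-moving-average update rule, and the `learning_rate` parameter are all assumptions made for the sketch.

```python
class OTBAgent:
    """Toy "objective Trust-Based Agent": tracks trust per trading partner.

    Hypothetical sketch only; the paper defines its own update mechanism.
    Here we assume outcomes in [0, 1] and an exponential moving average.
    """

    def __init__(self, initial_trust=0.5, learning_rate=0.3):
        self.initial_trust = initial_trust  # trust assigned to an unknown partner
        self.learning_rate = learning_rate  # weight given to the newest experience
        self.trust = {}                     # partner id -> current trust in [0, 1]

    def record_experience(self, partner, outcome):
        """Update trust in `partner` after an interaction.

        `outcome` is 1.0 for a fully satisfactory trade, 0.0 for a
        complete failure; intermediate values are allowed.
        """
        old = self.trust.get(partner, self.initial_trust)
        self.trust[partner] = (1 - self.learning_rate) * old + self.learning_rate * outcome

    def select_partner(self, candidates):
        """Choose the candidate in which this agent currently places most trust."""
        return max(candidates, key=lambda p: self.trust.get(p, self.initial_trust))


agent = OTBAgent()
agent.record_experience("seller_a", 1.0)  # good trade raises trust above the default
agent.record_experience("seller_b", 0.0)  # bad trade lowers trust below the default
best = agent.select_partner(["seller_a", "seller_b", "seller_c"])
```

Because trust here is individual to each agent's own experience history, two agents can legitimately hold different trust values for the same partner, which is what makes the paper's third question, exploitation of trust's individual nature, possible.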




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Witkowski, M., Artikis, A., Pitt, J. (2001). Experiments in Building Experiential Trust in a Society of Objective-Trust Based Agents. In: Falcone, R., Singh, M., Tan, YH. (eds) Trust in Cyber-societies. Lecture Notes in Computer Science(), vol 2246. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45547-7_7

  • DOI: https://doi.org/10.1007/3-540-45547-7_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43069-8

  • Online ISBN: 978-3-540-45547-9

  • eBook Packages: Springer Book Archive
