
Honesty and trust revisited: the advantages of being neutral about other’s cognitive models


Abstract

Open distributed systems pose a challenge to trust modelling due to the dynamic nature of these systems (e.g., electronic auctions) and the unreliability of self-interested agents. The majority of trust models implicitly assume a shared cognitive model for all the agents participating in a society, and thus treat the discrepancy between information and experience as a source of distrust: if an agent states a given quality of service, and another agent experiences a different quality for that service, the discrepancy is typically taken to indicate dishonesty, and trust is reduced accordingly. Herein, we propose a trust model that does not assume a concrete cognitive model for other agents, but instead uses the discrepancy between the information provided about other agents and the agent's own experience to better predict the behavior of others. This neutrality about other agents' cognitive models allows an agent to obtain utility from liars or from agents holding a different model of the world. The experiments performed suggest that this model improves the performance of an agent in dynamic scenarios under certain conditions, such as those found in market-like evolving environments.
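To make the mechanism concrete, the following is a minimal sketch, not the authors' actual model, of the idea described above: instead of treating the gap between advertised and experienced quality as evidence of dishonesty, the agent learns that gap per provider and uses it to correct future predictions. The class and method names (`BiasAwareTruster`, `update`, `predict`) and the exponential-smoothing update are illustrative assumptions.

```python
from collections import defaultdict


class BiasAwareTruster:
    """Keeps a smoothed estimate of each provider's advertised-vs-observed gap.

    Illustrative sketch only: the bias estimate plays the role of a neutral
    correction, so a consistently over- or under-stating provider remains
    useful instead of simply being distrusted.
    """

    def __init__(self, learning_rate: float = 0.3):
        self.learning_rate = learning_rate
        # Estimated bias per provider: observed quality minus advertised quality.
        self.bias = defaultdict(float)

    def update(self, provider: str, advertised: float, observed: float) -> None:
        """Blend the newly observed discrepancy into the provider's bias estimate."""
        discrepancy = observed - advertised
        self.bias[provider] += self.learning_rate * (discrepancy - self.bias[provider])

    def predict(self, provider: str, advertised: float) -> float:
        """Predict the quality actually delivered, correcting for the learned bias."""
        return advertised + self.bias[provider]


# Usage: a provider that consistently overstates quality by about 0.2 is still
# a useful source, because its announcements remain informative once corrected.
truster = BiasAwareTruster()
for advertised, observed in [(0.9, 0.7), (0.8, 0.6), (0.85, 0.65)]:
    truster.update("seller-A", advertised, observed)
print(truster.predict("seller-A", 0.9))  # about 0.77, not the announced 0.9
```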



Author information

Corresponding author

Correspondence to Mario Gómez.


About this article

Cite this article

Gómez, M., Carbó, J. & Earle, C.B. Honesty and trust revisited: the advantages of being neutral about other’s cognitive models. Auton Agent Multi-Agent Syst 15, 313–335 (2007). https://doi.org/10.1007/s10458-007-9015-8

