
Rational Trust Modeling

Conference paper in: Decision and Game Theory for Security (GameSec 2018)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11199)


Abstract

Trust models are widely used across computer science. The primary purpose of a trust model is to continuously measure the trustworthiness of a set of entities based on their behaviors. In this article, we introduce the novel notion of rational trust modeling by bridging trust management and game theory. Trust models and reputation systems have long been used within game theory (e.g., in repeated games); however, game theory has not previously been utilized in the construction of trust models, and this is the novelty of our approach. In our proposed setting, the designer of a trust model assumes that the players who utilize the model are rational and selfish, i.e., they decide to become trustworthy or untrustworthy based on the utility they can gain. In other words, the players are incentivized (or penalized) by the model itself to act properly. The problem of trust management can then be approached through game-theoretic analyses and solution concepts such as Nash equilibrium. Although rationality might be built into some existing trust models, we intend to formalize the notion of rational trust modeling from the designer's perspective. This approach yields two noteworthy outcomes. First, the designer of a trust model can incentivize trustworthiness in the first place by incorporating proper parameters into the trust function, which can later be utilized among selfish players in strategic trust-based interactions (e.g., e-commerce scenarios). Second, a rational trust model can prevent many well-known attacks on trust models. These two properties also allow us to predict the behavior of the players in subsequent steps through game-theoretic analysis.
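The abstract's core idea, namely that the trust function itself carries parameters which reward trustworthy behavior and penalize untrustworthy behavior so that selfish players maximize their utility by cooperating, can be illustrated with a small sketch. The Python snippet below is not the paper's actual trust function; the update rule, the 0.5 trading threshold, and all payoff values are illustrative assumptions chosen only to show how a designer can make long-run trustworthiness the rational strategy.

    # A minimal sketch (not the paper's trust function) of a trust-update rule
    # whose parameters make trustworthiness the utility-maximizing strategy.
    # All names, thresholds, and payoff values here are illustrative assumptions.

    def trust_update(trust, cooperated, alpha=0.1, beta=0.2):
        """Update a player's trust value in [0, 1].

        Cooperation is rewarded and defection penalized; the penalty rate (beta)
        is deliberately larger than the reward rate (alpha), so a selfish player
        who plans to keep trading cannot profit from defection.
        """
        if cooperated:
            trust = trust + alpha * (1.0 - trust)   # diminishing reward near 1
        else:
            trust = trust - beta * trust            # proportional penalty
        return min(max(trust, 0.0), 1.0)

    def long_run_utility(strategy, rounds=20, per_round_gain=1.0, cheat_bonus=0.5):
        """Utility of always cooperating vs. always defecting, assuming partners
        only trade with a player whose trust exceeds 0.5 (an assumed threshold)."""
        trust, utility = 0.5, 0.0
        for _ in range(rounds):
            if trust < 0.5:          # untrusted players are excluded from trades
                break
            cooperate = (strategy == "trustworthy")
            utility += per_round_gain + (0.0 if cooperate else cheat_bonus)
            trust = trust_update(trust, cooperate)
        return utility

    if __name__ == "__main__":
        # Under these assumed parameters, trustworthiness dominates:
        print("always trustworthy:", long_run_utility("trustworthy"))
        print("always untrustworthy:", long_run_utility("untrustworthy"))

Running the sketch, the always-trustworthy strategy accumulates utility over all rounds, while the untrustworthy strategy is quickly excluded from trades once its trust drops below the threshold; a rational player comparing the two payoffs would therefore choose to behave trustworthily, which is the kind of equilibrium analysis the abstract describes.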

This research was supported by the Department of Defense (DoD) Research and Education Program, grant 72498-RT-REP.



Author information


Correspondence to Mehrdad Nojoumian.



Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Nojoumian, M. (2018). Rational Trust Modeling. In: Bushnell, L., Poovendran, R., Başar, T. (eds) Decision and Game Theory for Security. GameSec 2018. Lecture Notes in Computer Science, vol 11199. Springer, Cham. https://doi.org/10.1007/978-3-030-01554-1_24

  • DOI: https://doi.org/10.1007/978-3-030-01554-1_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-01553-4

  • Online ISBN: 978-3-030-01554-1

  • eBook Packages: Computer Science, Computer Science (R0)
