Autonomous agents and human cultures in the trust–revenge game

Published in: Autonomous Agents and Multi-Agent Systems

Abstract

Autonomous agents developed by experts are embedded with the capability to interact well with people from different cultures. When designing expert agents intended to interact with autonomous agents developed by non-game-theory experts (NGTE), it is beneficial to obtain insights into the behavior of these NGTE agents. Is the behavior of NGTE agents similar to the behavior of humans from different cultures? This is an important question, as such a quality would allow an expert agent interacting with NGTE agents to model them using the same methods that are used to model humans from different cultures. To study this point, we evaluated NGTE agents' behavior using the Trust–Revenge game, which is known in social science for capturing different human tendencies. The Trust–Revenge game has a unique subgame-perfect equilibrium strategy profile; however, people very rarely follow it. We compared the behavior of autonomous agents to the actions of several human demographic groups, one of which is similar to the designers of the autonomous agents. We claim that autonomous agents are similar to human players from various cultures. This enables approaches developed for handling cultural diversity among humans to be applied to interaction with NGTE agents. The paper also analyzes additional aspects of autonomous agents' behavior and examines whether composing autonomous agents affects human behavior.

[Figures 1–9 not reproduced.]


Notes

  1. All human subjects played all five settings as Player A exactly once, in random order, and played five more games as Player B. Unfortunately, due to matching difficulties, we could not ensure that all human subjects played each of the five settings as Player B.

  2. We also considered a different criterion, which tests whether the difference between group \(A\) and \(\cup \mathcal{B}\) is statistically significant. However, such a criterion would be sensitive to the size of the data set since, theoretically, any two groups with different averages will differ statistically significantly if enough data is gathered. Furthermore, we do not claim that group \(A\)’s distribution or average is similar to that of \(\cup \mathcal{B}\), but simply that it falls within the diversity of the groups in \(\mathcal{B}\). We therefore did not apply this criterion to our data.
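The sample-size sensitivity mentioned in Note 2 can be illustrated with a quick simulation (a sketch for intuition only, not part of the original study): for a fixed, small difference in population means, the two-sample z statistic grows roughly with the square root of the sample size, so any fixed difference eventually becomes "significant".

```python
import math
import random
import statistics

def z_statistic(a, b):
    """Two-sample z statistic for the difference in sample means."""
    se = math.sqrt(statistics.pvariance(a) / len(a) +
                   statistics.pvariance(b) / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

random.seed(0)
# Two populations whose means differ by a small, fixed amount (0.1 SD units).
# The z statistic increases as more data is gathered, even though the
# underlying difference stays the same.
for n in (100, 10000):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(0.1, 1.0) for _ in range(n)]
    print(n, round(abs(z_statistic(a, b)), 2))
```

This is why the authors instead ask whether group \(A\) falls within the diversity of the groups in \(\mathcal{B}\), rather than testing for a significant difference from \(\cup \mathcal{B}\).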

References

  1. Guttman, R. H., Moukas, A. G., & Maes, P. (1998). Agent-mediated electronic commerce: A survey. The Knowledge Engineering Review, 13(2), 147–159.

  2. Heydon, A., & Najork, M. (1999). Mercator: A scalable, extensible web crawler. World Wide Web, 2(4), 219–229.

  3. Markoff, J. (2010). Google cars drive themselves, in traffic. The New York Times, 10, A1.

  4. Bemelmans, R., Gelderblom, G. J., Jonker, P., & De Witte, L. (2012). Socially assistive robots in elderly care: A systematic review into effects and effectiveness. Journal of The American Medical Directors Association, 13(2), 114–120.

  5. Robins, B., Dickerson, P., Stribling, P., & Dautenhahn, K. (2004). Robot-mediated joint attention in children with autism: A case study in robot-human interaction. Interaction Studies, 5(2), 161–198.

  6. Riley, P., & Veloso, M. (2002). Planning for distributed execution through use of probabilistic opponent models. In AIPS (pp. 72–82).

  7. Wooldridge, M. (2009). An introduction to multiagent systems. New York: Wiley.

  8. Gal, Y., & Pfeffer, A. (2007). Modeling reciprocal behavior in human bilateral negotiation. In Proceedings of the national conference on artificial intelligence (Vol. 22, p. 815). Menlo Park, CA: AAAI Press; Cambridge, MA: MIT Press.

  9. Hindriks, K., & Tykhonov, D. (2008). Opponent modelling in automated multi-issue negotiation using Bayesian learning. In Proceedings of the 7th international joint conference on autonomous agents and multiagent systems (Vol. 1, pp. 331–338). International Foundation for Autonomous Agents and Multiagent Systems.

  10. Oshrat, Y., Lin, R., & Kraus, S. (2009). Facing the challenge of human-agent negotiations via effective general opponent modeling. In Proceedings of the 8th international conference on autonomous agents and multiagent systems (Vol. 1, pp. 377–384). International Foundation for Autonomous Agents and Multiagent Systems.

  11. Rosenfeld, A., & Kraus, S. (2011). Using aspiration adaptation theory to improve learning. In The 10th international conference on autonomous agents and multiagent systems (Vol. 1, pp. 423–430). International Foundation for Autonomous Agents and Multiagent Systems.

  12. Peled, N., Gal, Y., & Kraus, S. (2015). A study of computational and human strategies in revelation games. Autonomous Agents and Multi-Agent Systems, 29(1), 73–97.

  13. Azaria, A., Gal, Y., Kraus, S., & Goldman, C. (2015). Strategic advice provision in repeated human-agent interactions. Autonomous Agents and Multi-Agent Systems, 1–26.

  14. Azaria, A., Aumann, Y., & Kraus, S. (2012). Automated strategies for determining rewards for human work. In AAAI.

  15. Azaria, A., Rabinovich, Z., Kraus, S., Goldman, C. V., & Tsimhoni, O. (2012). Giving advice to people in path selection problems. In AAMAS.

  16. Azaria, A., Kraus, S., Goldman, C., & Tsimhoni, O. (2014). Advice provision for energy saving in automobile climate control systems. In Proceedings of the 2014 international conference on autonomous agents and multi-agent systems (pp. 1391–1392). International Foundation for Autonomous Agents and Multiagent Systems.

  17. Azaria, A., Hassidim, A., Kraus, S., Eshkol, A., Weintraub, O., & Netanely, I. (2013). Movie recommender system for profit maximization. In RecSys (pp. 121–128). ACM.

  18. Azaria, A., Aumann, Y., & Kraus, S. (2014). Automated agents for reward determination for human work in crowdsourcing applications. Autonomous Agents and Multi-Agent Systems, 28(6), 934–955.

  19. Rosenfeld, A., & Kraus, S. (2009). Modeling agents through bounded rationality theories. In IJCAI (Vol. 9, pp. 264–271).

  20. Elmalech, A., & Sarne, D. (2012). Evaluating the applicability of peer-designed agents in mechanisms evaluation. In Proceedings of the 2012 IEEE/WIC/ACM international joint conferences on web intelligence and intelligent agent technology (Vol. 2, pp. 374–381). IEEE Computer Society.

  21. Grosz, B. J., Kraus, S., Talman, S., Stossel, B., & Havlin, M. (2004). The influence of social dependencies on decision-making: Initial investigations with a new game. In Proceedings of the third international joint conference on autonomous agents and multiagent systems (Vol. 2, pp. 782–789). IEEE Computer Society.

  22. Gal, Y., Kraus, S., Gelfand, M., Khashan, H., & Salmon, E. (2011). An adaptive agent for negotiating with people in different cultures. ACM Transactions on Intelligent Systems and Technology, 3(1), 8.

  23. Haim, G., Gal, Y. K., Gelfand, M., & Kraus, S. (2012). A cultural sensitive agent for human-computer negotiation. In Proceedings of the 11th international conference on autonomous agents and multiagent systems (Vol. 1, pp. 451–458). International Foundation for Autonomous Agents and Multiagent Systems.

  24. Rosenfeld, A., Zuckerman, I., Azaria, A., & Kraus, S. (2012). Combining psychological models with machine learning to better predict people’s decisions. Synthese, 189(1), 81–93.

  25. Berg, J., Dickhaut, J., & McCabe, K. (1995). Trust, reciprocity, and social history. Games and Economic Behavior, 10(1), 122–142.

  26. Johnson, N. D., & Mislin, A. A. (2011). Trust games: A meta-analysis. Journal of Economic Psychology, 32(5), 865–889.

  27. Gneezy, A., & Ariely, D. (2010). Don’t get mad, get even: On consumers’ revenge. Manuscript, Duke University.

  28. Carmel, D., & Markovitch, S. (1996). Opponent modeling in multi-agent systems. In Adaption and learning in multi-agent systems (pp. 40–52). Springer.

  29. McCracken, P., & Bowling, M. (2004). Safe strategies for agent modelling in games. In AAAI fall symposium on artificial multi-agent learning.

  30. Lazaric, A., Quaresimale, M., & Restelli, M. (2008). On the usefulness of opponent modeling: The Kuhn poker case study. In Proceedings of the 7th international joint conference on autonomous agents and multiagent systems (Vol. 3, pp. 1345–1348). International Foundation for Autonomous Agents and Multiagent Systems.

  31. Chalamish, M., Sarne, D., & Kraus, S. (2008). Programming agents as a means of capturing self-strategy. In AAMAS (pp. 1161–1168).

  32. Elmalech, A., & Sarne, D. (2012). Evaluating the applicability of peer-designed agents in mechanisms evaluation. In Proceedings of the 2012 IEEE/WIC/ACM international joint conferences on web intelligence and intelligent agent technology (Vol. 2, pp. 374–381). WI-IAT ’12.

  33. Chalamish, M., Sarne, D., & Lin, R. Enhancing parking simulations using peer-designed agents. IEEE Transactions on Intelligent Transportation Systems, (1), 492–498.

  34. Mash, M., Lin, R., & Sarne, D. (2014). Peer-design agents for reliably evaluating distribution of outcomes in environments involving people. In Proceedings of the 2014 international conference on autonomous agents and multi-agent systems (pp. 949–956). AAMAS ’14.

  35. Lin, R., Kraus, S., Oshrat, Y. & Gal, Y. K. (2010). Facilitating the evaluation of automated negotiators using peer designed agents.

  36. Manistersky, E., Lin, R., & Kraus, S. (2013). The development of the strategic behavior of peer designed agents. Lecture Notes in Computer Science 8001(I), 180–196.

  37. Chalamish, M., Sarne, D., & Lin, R. (2012). The effectiveness of peer-designed agents in agent-based simulations. Multiagent and Grid Systems, 8(4), 349–372.

  38. Willinger, M., Keser, C., Lohmann, C., & Usunier, J.-C. (2003). A comparison of trust and reciprocity between France and Germany: Experimental investigation based on the investment game. Journal of Economic Psychology, 24(4), 447–466.

  39. Gächter, S., & Herrmann, B. (2009). Reciprocity, culture and human cooperation: Previous insights and a new cross-cultural experiment. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1518), 791–806.

  40. Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14(4), 583–610.

  41. Oosterbeek, H., Sloof, R., & Van De Kuilen, G. (2004). Cultural differences in ultimatum game experiments: Evidence from a meta-analysis. Experimental Economics, 7(2), 171–188.

  42. Hofstede, G., Hofstede, G. J., & Minkov, M. (1991). Cultures and organisations: Software of the mind: Intercultural cooperation and its importance for survival. New York, NY: McGraw-Hill.

  43. De Dreu, C. K., & Van Lange, P. A. (1995). The impact of social value orientations on negotiator cognition and behavior. Personality and Social Psychology Bulletin, 21(11), 1178–1188.

  44. Mascarenhas, S., Prada, R., Paiva, A., & Hofstede, G. J. (2013). Social importance dynamics: A model for culturally-adaptive agents. In Intelligent virtual agents (pp. 325–338). Springer.

  45. Kistler, F., Endrass, B., Damian, I., Dang, C. T., & André, E. (2012). Natural interaction with culturally adaptive virtual characters. Journal on Multimodal User Interfaces, 6(1–2), 39–47.

  46. Elmalech, A., Sarne, D., & Agmon, N. (2014). Can agent development affect developer’s strategy? In Proceedings of the twenty-eighth AAAI conference on artificial intelligence (pp. 923–929).

  47. Shin, J., & Ariely, D. (2004). Keeping doors open: The effect of unavailability on incentives to keep options viable. Management Science, 575–586.

  48. Amazon, Mechanical Turk Services. (2013). http://www.mturk.com/.

  49. Hajaj, C., Hazon, N., & Sarne, D. (2014). Ordering effects and belief adjustment in the use of comparison shopping agents. In Proceedings of the twenty-eighth AAAI conference on artificial intelligence.

  50. Selten, R., & Stoecker, R. (1986). End behavior in sequences of finite prisoner’s dilemma supergames: A learning theory approach. Journal of Economic Behavior & Organization, 7(1), 47–70.

  51. Camerer, C., & Weigelt, K. (1988). Experimental tests of a sequential equilibrium reputation model. Econometrica, 56(1), 1–36.

  52. Azaria, A., Richardson, A., Elmalech, A., & Rosenfeld, A. (2014). Automated agents’ behavior in the trust-revenge game in comparison to other cultures. In Proceedings of the 2014 international conference on autonomous agents and multi-agent systems (pp. 1389–1390). International Foundation for Autonomous Agents and Multiagent Systems.


Acknowledgments

We thank Shira Abuhatzera for her help. A very preliminary version of this paper appears in Azaria et al. [52].

Author information

Corresponding author

Correspondence to Amos Azaria.


Cite this article

Azaria, A., Richardson, A. & Rosenfeld, A. Autonomous agents and human cultures in the trust–revenge game. Auton Agent Multi-Agent Syst 30, 486–505 (2016). https://doi.org/10.1007/s10458-015-9297-1
