Abstract
We present an interaction strategy based on reinforcement learning that promotes mutual cooperation among agents in complex networks. Networked computerized systems consisting of many agents that act as delegates of social entities, such as companies and organizations, are increasingly being deployed owing to advances in networking and computer technologies. Because the relationships among agents reflect the interaction structures of the corresponding social entities in the real world, social dilemma situations such as the prisoner’s dilemma often arise. Agents must therefore learn appropriate behaviors from a long-term viewpoint to function properly in this virtual society. The proposed interaction strategy, called the enhanced expectation-of-cooperation (EEoC) strategy, extends our previously proposed strategy to improve robustness against defecting agents and to prevent exploitation by them. Experiments demonstrated that agents using the EEoC strategy can effectively distinguish cooperative neighbors from all-defecting (AllD) agents and thus can spread cooperation among EEoC agents while avoiding exploitation by AllD agents. An examination of robustness against probabilistically defecting (ProbD) agents showed that EEoC agents can spread and maintain mutual cooperation as long as the number of ProbD agents is not large. The EEoC strategy is thus simple and useful for actual computerized systems.
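To make the idea in the abstract concrete, the following is a minimal illustrative sketch, not the paper's actual EEoC algorithm: an agent keeps a per-neighbor expectation of cooperation in a repeated prisoner's dilemma and cooperates only when that expectation exceeds a threshold. The payoff values, the threshold, and the simple running-average update are all assumptions for illustration; the real EEoC update rules and labeling method are defined in the paper.

```python
# Standard prisoner's dilemma payoffs (T > R > P > S); values are illustrative.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class ExpectationAgent:
    """Illustrative agent: tracks a per-neighbor expectation of cooperation
    and cooperates only when that expectation exceeds a threshold.
    (The actual EEoC strategy uses different update rules; see the paper.)"""

    def __init__(self, threshold=0.5, lr=0.2):
        self.threshold = threshold   # hypothetical cooperation threshold
        self.lr = lr                 # hypothetical learning rate
        self.expectation = {}        # neighbor id -> estimated P(cooperate)

    def act(self, neighbor):
        # Cooperate with neighbors expected to cooperate; start optimistic.
        e = self.expectation.get(neighbor, 1.0)
        return "C" if e >= self.threshold else "D"

    def observe(self, neighbor, their_action):
        # Move the expectation toward 1 after cooperation, toward 0 after defection.
        e = self.expectation.get(neighbor, 1.0)
        target = 1.0 if their_action == "C" else 0.0
        self.expectation[neighbor] = e + self.lr * (target - e)

agent = ExpectationAgent()
# Against an all-defecting (AllD) neighbor, the expectation decays and the
# agent eventually stops cooperating, i.e., it avoids being exploited.
for _ in range(10):
    agent.observe("alld", "D")
print(agent.act("alld"))  # D
```

With this toy update, the expectation for the AllD neighbor decays geometrically (0.8^10 ≈ 0.11), falling below the threshold, while a new or cooperative neighbor is still met with cooperation.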
Notes
- 1.
Agent i selects the best action learned so far with probability \(1-\varepsilon \); otherwise, it selects an action at random.
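The footnote describes standard \(\varepsilon \)-greedy action selection from reinforcement learning. A minimal sketch, assuming Q-values are kept in a dict keyed by action (the paper does not specify this representation):

```python
import random

def epsilon_greedy(q_values, epsilon=0.1, rng=random):
    """Select an action epsilon-greedily from a dict of action -> Q-value.

    With probability 1 - epsilon, return the best action learned so far
    (exploitation); otherwise return a uniformly random action (exploration).
    """
    actions = list(q_values)
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=q_values.get)

q = {"cooperate": 0.8, "defect": 0.3}
# With epsilon = 0, the agent always exploits the best-known action.
print(epsilon_greedy(q, epsilon=0.0))  # cooperate
```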
Acknowledgements
This work was partly supported by KAKENHI (17KT0044).
Copyright information
© 2018 Springer International Publishing AG
Cite this paper
Otsuka, T., Sugawara, T. (2018). Promotion of Robust Cooperation Among Agents in Complex Networks by Enhanced Expectation-of-Cooperation Strategy. In: Cherifi, C., Cherifi, H., Karsai, M., Musolesi, M. (eds) Complex Networks & Their Applications VI. COMPLEX NETWORKS 2017. Studies in Computational Intelligence, vol 689. Springer, Cham. https://doi.org/10.1007/978-3-319-72150-7_66
DOI: https://doi.org/10.1007/978-3-319-72150-7_66
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-72149-1
Online ISBN: 978-3-319-72150-7
eBook Packages: Engineering (R0)