Abstract
Human behavior can be analyzed from a moral perspective when considering strategies for cooperation in evolutionary games. Assuming a multiagent task performed by self-centered agents, artificial moral behavior could bring about the emergence of cooperation as a consequence of the computational model itself. Herein we present results from our MultiA computational architecture, derived from a biologically inspired model and designed to simulate moral behavior through an Empathy module. Our testbed is a multiagent game previously defined in the literature, in which a lack of cooperation may cause a cascading failure effect ("bankruptcy") that impacts the global network topology via local neighborhood interactions. Starting with sensory information originating from the environment, MultiA transforms it into basic and social artificial emotions and feelings. Its own emotions are then employed to estimate the current state of other agents through the Empathy module. Finally, the artificial feelings of MultiA provide a measure (called well-being) of its performance in response to the environment. Using that measure and reinforcement learning techniques, MultiA learns a mapping from emotions to actions. Results indicate that strategies relying on the simulation of moral behavior may indeed decrease the internal reward obtained from selfish action selection, thus favoring cooperation as an emergent property of multiagent systems.
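The learning loop the abstract describes (sensory input mapped to artificial emotions, a well-being measure serving as reward, and reinforcement learning mapping emotions to actions) can be illustrated with a minimal tabular Q-learning sketch. This is an assumption-laden toy, not the paper's actual MultiA model: the emotion labels, the `well_being` function, and the fixed neighborhood cooperation level are all hypothetical stand-ins chosen only to show the shape of the mechanism.

```python
import random

# Hypothetical sketch of an emotion-to-action learning loop: states are
# coarse artificial-emotion labels, the reward is a toy "well-being"
# measure, and tabular Q-learning maps emotions to actions. All names
# and the environment are illustrative, not the paper's actual model.

ACTIONS = ["cooperate", "defect"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def sense_emotions(neighbor_cooperation):
    """Map raw sensory input to a coarse emotional state label."""
    return "social_joy" if neighbor_cooperation > 0.5 else "distress"

def well_being(own_action, neighbor_cooperation):
    """Toy well-being: the selfish payoff shrinks as neighbors cooperate."""
    if own_action == "cooperate":
        return neighbor_cooperation            # mutual cooperation pays
    return 0.6 * (1.0 - neighbor_cooperation)  # selfish reward is discounted

q = {}  # Q-table: (emotion_state, action) -> estimated value

def choose(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q.get((state, a), 0.0))

random.seed(0)
coop_level = 0.8  # fraction of cooperating neighbors in this toy setting
for _ in range(2000):
    state = sense_emotions(coop_level)
    action = choose(state)
    reward = well_being(action, coop_level)
    next_state = sense_emotions(coop_level)
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

greedy = max(ACTIONS, key=lambda a: q.get((sense_emotions(coop_level), a), 0.0))
```

In this toy setting the well-being reward for cooperating in a cooperative neighborhood exceeds the discounted selfish payoff, so the learned greedy policy favors cooperation, mirroring the abstract's claim that a moral reward signal can dampen selfish action selection.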
Acknowledgment
The authors thank CNPq and FAPESP for the financial support.
© 2015 Springer International Publishing Switzerland
Cite this paper
Eliott, F.M., Ribeiro, C.H.C. (2015). Emergence of Cooperation Through Simulation of Moral Behavior. In: Onieva, E., Santos, I., Osaba, E., Quintián, H., Corchado, E. (eds) Hybrid Artificial Intelligent Systems. HAIS 2015. Lecture Notes in Computer Science, vol. 9121. Springer, Cham. https://doi.org/10.1007/978-3-319-19644-2_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-19643-5
Online ISBN: 978-3-319-19644-2