Abstract
Individuals in a society can drive organizational change by modifying their own behavior. These changes can be guided by the outcome of the actions of every individual in the society: should that outcome be worse than expected, individuals innovate, searching for a better solution that automatically adapts the society to the new situation.
Following these ideas, this paper proposes a novel social agent model based on emotions and social welfare. A learning algorithm built on this model is also presented, together with a case study that tests its validity.
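The mechanism the abstract describes — agents that keep acting as before while society's outcome meets their expectation, and "innovate" when it falls short — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the class name, the exponential-average expectation update, and the random re-selection of an action are all assumptions made for illustration.

```python
import random

class SocialAgent:
    """Illustrative sketch (not the paper's model): an agent that keeps its
    preferred action while observed social welfare meets its expectation,
    and innovates (tries a new action) when welfare falls below it."""

    def __init__(self, actions, learning_rate=0.1):
        self.actions = list(actions)
        self.alpha = learning_rate          # step size for the expectation update
        self.expected_welfare = 0.0         # running estimate of social welfare
        self.preferred = random.choice(self.actions)

    def act(self, observed_welfare):
        # Innovate: society did worse than expected, so try a different action.
        if observed_welfare < self.expected_welfare:
            self.preferred = random.choice(self.actions)
        # Update the expectation as an exponential moving average.
        self.expected_welfare += self.alpha * (observed_welfare - self.expected_welfare)
        return self.preferred
```

In this reading, the shared welfare signal plays the role of the reward in the reinforcement-learning references below, while the expectation threshold decides between exploiting the current behavior and exploring a new one.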
© 2011 Springer-Verlag Berlin Heidelberg
García-Pardo, J.A., Carrascosa, C. (2011). Social Welfare for Automatic Innovation. In: Klügl, F., Ossowski, S. (eds) Multiagent System Technologies. MATES 2011. Lecture Notes in Computer Science, vol 6973. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24603-6_5
DOI: https://doi.org/10.1007/978-3-642-24603-6_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-24602-9
Online ISBN: 978-3-642-24603-6