Abstract
In a competitive environment, a satisfactory multi-agent learning algorithm should, at a minimum, be rational and convergent. Exploiter-PHC (written as Exploiter below) can beat many fair opponents, but it is neither rational against stationary opponents nor convergent in self-play; it can even be beaten by some fair opponents in a lower league. This paper proposes an improved algorithm named ExploiterWT (Exploiter With Testing), based on Exploiter. The basic idea of ExploiterWT is to add a testing period in which the Nash equilibrium policy is estimated. ExploiterWT satisfies the properties mentioned above, and unlike Exploiter it does not need the Nash equilibrium as a priori knowledge before it begins exploiting. ExploiterWT can also avoid being beaten by some fair opponents in a lower league. This paper first introduces the ideas behind the algorithm, and then presents experimental results obtained against other algorithms in the game of Matching Pennies.
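The full paper is behind a paywall, so the details of ExploiterWT's testing period are not shown here. As a minimal illustrative sketch (not the paper's algorithm), the snippet below shows the general idea in Matching Pennies: during a testing period an agent can estimate an opponent's mixed policy from the empirical frequency of its actions, and then compare it against the game's unique Nash equilibrium, which plays Heads and Tails with probability 0.5 each. The function names and the stationary 70%-Heads opponent are invented for this example.

```python
import random

def estimate_opponent_policy(opponent_actions):
    """Empirical P(Heads) observed during a testing period."""
    if not opponent_actions:
        return 0.5  # no data: assume the Nash equilibrium mix
    return sum(1 for a in opponent_actions if a == "H") / len(opponent_actions)

def best_response(p_heads, player="matcher"):
    """Best response in Matching Pennies given the opponent's P(Heads).

    The matcher wins on a match; the mismatcher wins on a mismatch.
    """
    if player == "matcher":
        return "H" if p_heads >= 0.5 else "T"
    return "T" if p_heads >= 0.5 else "H"

# A stationary opponent that plays Heads 70% of the time.
random.seed(0)
opponent = ["H" if random.random() < 0.7 else "T" for _ in range(1000)]

p = estimate_opponent_policy(opponent)
print(p, best_response(p))  # against a biased opponent, the matcher exploits toward "H"
```

Against a stationary opponent, playing this best response is the rational behavior the abstract asks for; when the estimate is close to 0.5 (e.g. against another equilibrium player), no pure action is exploitable and the equilibrium mix itself is the sensible fallback.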
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Wang, Lm., Bai, Y. (2006). Exploiting Based Pre-testing in Competition Environment. In: Shi, ZZ., Sadananda, R. (eds) Agent Computing and Multi-Agent Systems. PRIMA 2006. Lecture Notes in Computer Science, vol 4088. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11802372_27
DOI: https://doi.org/10.1007/11802372_27
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-36707-9
Online ISBN: 978-3-540-36860-1