Abstract
Reinforcement learning has achieved remarkable results in areas such as Go, video games, and autonomous vehicles, and is attracting attention as one of the most promising neural-network-based techniques. Despite this capability, reinforcement learning suffers from a greedy nature that causes short-sighted behavior: the agent learns in such a way that each action seeks an optimal immediate reward. However, pursuing the maximum reward at every step does not guarantee maximization of the total reward, and this is an obstacle to improving the overall performance of reinforcement learning. This paper gives a concise illustration of Action Sequence, a method that encourages the trained model to seek broader strategies that generate more desirable results, especially in highly complex problems that plain single actions can hardly handle. The model was tested in a video game environment to verify its improved proficiency in choosing suitable actions.
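The gap between per-step greedy choices and sequence-level choices can be illustrated with a toy example. The two-step environment below is hypothetical (not from the paper): one action pays a small immediate reward, the other pays nothing now but sets up a larger reward on the next step. A greedy agent that maximizes each step's reward earns less than an agent that evaluates whole action sequences, which is the intuition behind Action Sequence.

```python
from itertools import product

# Hypothetical toy environment, for illustration only.
# Action 0 pays +1 immediately; action 1 pays nothing now but moves the
# environment into a "setup" state, where action 1 then pays +10.
def step(state, action):
    """Return (reward, next_state) for one step."""
    if action == 0:
        return 1, "plain"
    return (10, "plain") if state == "setup" else (0, "setup")

def rollout(actions):
    """Total reward of a full action sequence from the initial state."""
    state, total = "plain", 0
    for a in actions:
        reward, state = step(state, a)
        total += reward
    return total

# Greedy per-action choice: always take the best immediate reward.
greedy, state = [], "plain"
for _ in range(2):
    best = max((0, 1), key=lambda a: step(state, a)[0])
    greedy.append(best)
    _, state = step(state, best)

# Sequence-level choice: evaluate every two-step action sequence.
best_seq = max(product((0, 1), repeat=2), key=rollout)

print(rollout(greedy))    # 2  (greedy takes +1 twice)
print(rollout(best_seq))  # 10 (sequence sacrifices step 1 for the setup)
```

The greedy agent collects 2 while the best sequence collects 10, showing that maximizing each action's reward in isolation can forfeit the larger total.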
This work was supported by a GIST Research Institute (GRI) grant funded by GIST in 2019.
Copyright information
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Kim, MJ., Kim, J.S., Lee, D., Ahn, C.W. (2020). Genetic Action Sequence for Integration of Agent Actions. In: Pan, L., Liang, J., Qu, B. (eds) Bio-inspired Computing: Theories and Applications. BIC-TA 2019. Communications in Computer and Information Science, vol 1159. Springer, Singapore. https://doi.org/10.1007/978-981-15-3425-6_54
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-3424-9
Online ISBN: 978-981-15-3425-6