Genetic Action Sequence for Integration of Agent Actions

  • Conference paper
  • First Online:
Bio-inspired Computing: Theories and Applications (BIC-TA 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1159)

Abstract

Reinforcement learning has achieved remarkable results in areas such as Go, video games, and autonomous vehicles, and is attracting attention as one of the most promising neural network-based techniques. Despite this capability, reinforcement learning suffers from its greedy nature, which leads to narrowly selective behavior: the agent learns so that each individual action seeks an optimal reward. However, maximizing the reward of each action does not guarantee maximizing the total reward, and this is an obstacle to improving the overall performance of reinforcement learning. This paper presents a concise illustration of Action Sequence, a method that encourages the trained model to pursue broader strategies that produce more desirable results, aimed especially at highly complex problems that single actions can hardly handle. The model was tested in a video game environment to verify its improved proficiency in choosing suitable actions.
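
The core idea, searching over whole action sequences with a genetic algorithm and scoring them by their cumulative reward rather than greedily picking one action at a time, can be illustrated with a minimal sketch. The action set, sequence length, GA parameters, and the rollout stub below are assumptions for illustration only, not the authors' actual environment or implementation.

```python
import random

# Minimal illustrative sketch, not the paper's implementation: a genetic
# search over fixed-length action sequences, where fitness is the total
# reward of executing the whole sequence rather than the reward of any
# single action. ACTIONS, SEQ_LEN, the GA parameters, and rollout() are
# hypothetical placeholders.

ACTIONS = ["left", "right", "jump", "attack"]  # assumed discrete action set
SEQ_LEN = 8           # assumed length of one action sequence
POP_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.1

def rollout(sequence):
    """Return the total reward of executing the whole sequence.
    Placeholder: a real version would step the game environment."""
    return sum(random.random() for _ in sequence)

def crossover(a, b):
    """One-point crossover of two parent sequences."""
    cut = random.randrange(1, SEQ_LEN)
    return a[:cut] + b[cut:]

def mutate(seq):
    """Replace each action with a random one at a small probability."""
    return [random.choice(ACTIONS) if random.random() < MUTATION_RATE else act
            for act in seq]

def evolve():
    """Evolve a population of action sequences toward higher total reward."""
    population = [[random.choice(ACTIONS) for _ in range(SEQ_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Rank sequences by episode-level reward, not per-step reward.
        ranked = sorted(population, key=rollout, reverse=True)
        parents = ranked[:POP_SIZE // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=rollout)

if __name__ == "__main__":
    print(evolve())
```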

This work was supported by a GIST Research Institute (GRI) grant funded by GIST in 2019.

Author information

Corresponding author

Correspondence to Chang Wook Ahn.

Copyright information

© 2020 Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Kim, M.J., Kim, J.S., Lee, D., Ahn, C.W. (2020). Genetic Action Sequence for Integration of Agent Actions. In: Pan, L., Liang, J., Qu, B. (eds) Bio-inspired Computing: Theories and Applications. BIC-TA 2019. Communications in Computer and Information Science, vol 1159. Springer, Singapore. https://doi.org/10.1007/978-981-15-3425-6_54

  • DOI: https://doi.org/10.1007/978-981-15-3425-6_54

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-15-3424-9

  • Online ISBN: 978-981-15-3425-6

  • eBook Packages: Computer Science, Computer Science (R0)
