
A Comparison of Hybrid Incremental Reuse Strategies for Reinforcement Learning in Genetic Programming

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3103)

Abstract

Easy missions is an approach to machine learning that seeks to synthesize solutions for complex tasks from those for simpler ones. ISLES (Incrementally Staged Learning from Easier Subtasks) [1] is a genetic programming (GP) technique that achieves this by using identified goals and fitness functions for subproblems of the overall problem. Solutions evolved for these subproblems are then reused to speed up learning, either as automatically defined functions (ADFs) or by seeding a new GP population. Previous positive results using both approaches for learning in multi-agent systems (MAS) showed that incremental reuse with easy missions achieves overall fitness comparable to or better than that of single-layered GP. A key unresolved issue was hybrid reuse: combining ADFs with easy missions. Results in the keep-away soccer (KAS) [2] domain, a test bed for MAS learning, were also inconclusive on whether compactness-inducing reuse helped or hurt overall agent performance. In this paper, we compare reuse using single-layered GP (with and without ADFs) and easy-missions GP to two new types of GP learning systems with incremental reuse.
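The staged reuse described in the abstract (evolve solutions for an easier subtask, then carry them into learning on the full task) can be sketched in a few lines. The following is a minimal illustration of the population-seeding variant in plain Python; the individuals, evolutionary loop, and the easy_fitness/hard_fitness functions are assumed toy stand-ins, not the authors' ISLES implementation.

import random

random.seed(0)

def evolve(population, fitness, generations=50, sigma=0.1):
    # Generic truncation-selection loop: keep the fitter half, refill
    # with Gaussian-mutated copies of the survivors.
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: len(ranked) // 2]
        children = [[g + random.gauss(0.0, sigma) for g in p] for p in parents]
        population = parents + children
    return sorted(population, key=fitness, reverse=True)

def random_individual(n=5):
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

# Assumed toy fitness functions: the "easy mission" rewards only the
# first gene; the full task depends on all genes.
def easy_fitness(ind):
    return -abs(ind[0] - 1.0)

def hard_fitness(ind):
    return -sum(abs(g - 1.0) for g in ind)

# Stage 1: evolve a population on the easier subtask.
stage1 = evolve([random_individual() for _ in range(20)], easy_fitness)

# Stage 2: incremental reuse by seeding: the new population starts
# from stage-1 champions plus fresh random individuals for diversity.
seeded = stage1[:10] + [random_individual() for _ in range(10)]
stage2 = evolve(seeded, hard_fitness)

print("best fitness on full task after reuse:", hard_fitness(stage2[0]))

In the ADF variant of reuse, the stage-1 champions would instead be exposed to stage-2 individuals as callable subroutines rather than copied into the initial population.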


References

  1. Hsu, W.H., Gustafson, S.M.: Genetic programming and multi-agent layered learning by reinforcements. In: GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, New York, pp. 764–771. Morgan Kaufmann Publishers, San Francisco (2002)


  2. McAllester, D., Stone, P.: Keeping the ball from CMUnited-99. In: Stone, P., Balch, T., Kraetzschmar, G.K. (eds.) RoboCup 2000. LNCS (LNAI), vol. 2019, p. 333. Springer, Heidelberg (2001)


  3. Luke, S.: Issues in Scaling Genetic Programming: Breeding Strategies, Tree Generation, and Code Bloat. PhD thesis, Department of Computer Science, University of Maryland, College Park, MD (2000)





Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Harmon, S., Rodríguez, E., Zhong, C., Hsu, W. (2004). A Comparison of Hybrid Incremental Reuse Strategies for Reinforcement Learning in Genetic Programming. In: Deb, K. (eds) Genetic and Evolutionary Computation – GECCO 2004. Lecture Notes in Computer Science, vol 3103. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24855-2_79


  • DOI: https://doi.org/10.1007/978-3-540-24855-2_79

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-22343-6

  • Online ISBN: 978-3-540-24855-2

