DOI: 10.1145/3520304.3529012 · GECCO Conference Proceedings
poster

Reduction of genetic drift in population-based incremental learning via entropy regularization

Published: 19 July 2022

ABSTRACT

Population-Based Incremental Learning (PBIL) is a Bernoulli-distribution-based evolutionary algorithm for binary black-box optimization. PBIL updates the distribution parameters according to samples generated from the current distribution and their rankings. When some distribution parameters are repeatedly updated in a random direction, PBIL can converge without sufficient exploration. This behavior, called genetic drift, increases the number of function evaluations and induces convergence to local optima. In particular, a large update strength accelerates the search but promotes genetic drift. Existing remedies, such as decreasing the update strength, are limited and involve a trade-off between search efficiency and stability. This paper proposes a method to reduce genetic drift in PBIL based on the entropy regularization widely used in reinforcement learning, introducing it into PBIL either as a penalty term or as a constraint. Experimental results on well-known benchmark problems show that the proposed entropy regularization efficiently suppresses genetic drift, decreases the number of function evaluations, and improves stability.
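The mechanism described in the abstract can be illustrated with a minimal sketch: a standard PBIL loop on OneMax, with an added entropy-penalty step that nudges each Bernoulli parameter back toward 0.5 via the gradient of the Bernoulli entropy. This is an illustrative reconstruction, not the authors' exact formulation (the paper also considers a constraint form); the function name, hyperparameters, and clipping bounds here are assumptions.

```python
import numpy as np

def pbil_entropy(fitness, n_bits=30, pop_size=20, lr=0.1,
                 ent_coef=0.02, iters=300, seed=0):
    """PBIL with an entropy-regularization penalty (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)          # Bernoulli distribution parameters
    best = None
    for _ in range(iters):
        # sample a population from the current distribution
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(x) for x in pop])
        elite = pop[np.argmax(fits)]
        if best is None or fitness(elite) > fitness(best):
            best = elite.copy()
        # standard PBIL update: move parameters toward the elite sample
        p = (1 - lr) * p + lr * elite
        # entropy penalty: gradient of the Bernoulli entropy,
        # dH/dp_i = log((1 - p_i) / p_i), pulls p_i back toward 0.5
        # and so counteracts premature convergence (genetic drift)
        p = p + ent_coef * np.log((1 - p) / p)
        p = np.clip(p, 0.05, 0.95)    # keep parameters off the borders
    return best

# usage on OneMax (maximize the number of ones)
best = pbil_entropy(lambda x: x.sum())
```

Without the entropy step, a large `lr` tends to lock parameters at 0 or 1 after a few noisy updates; the penalty trades a slightly slower final convergence for sustained exploration.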


Published in

GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference Companion
July 2022, 2395 pages
ISBN: 9781450392686
DOI: 10.1145/3520304
    Copyright © 2022 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery, New York, NY, United States


Acceptance Rates

Overall acceptance rate: 1,669 of 4,410 submissions, 38%
