
Applied Soft Computing

Volume 11, Issue 2, March 2011, Pages 2017-2034

Genetic Algorithm with adaptive elitist-population strategies for multimodal function optimization

https://doi.org/10.1016/j.asoc.2010.06.017

Abstract

This paper introduces a new technique called the adaptive elitist-population search method. This technique allows unimodal function optimization methods to be extended to explore multiple optima of multimodal problems efficiently. It is based on the concept of adaptively adjusting the population size according to the individuals’ dissimilarity, together with a set of novel direction-dependent elitist genetic operators. Incorporating the new multimodal technique into any known evolutionary algorithm yields a multimodal version of that algorithm. As a case study, we have integrated the new technique into Genetic Algorithms (GAs), yielding an Adaptive Elitist-population based Genetic Algorithm (AEGA). AEGA proves very efficient and effective in finding multiple solutions of complicated benchmark and real-world multimodal optimization problems. We demonstrate this by applying it to a set of test problems, including rough and stepwise multimodal functions. Empirical results are also compared with other multimodal evolutionary algorithms from the literature, showing that AEGA generally outperforms existing approaches.

Introduction

Genetic algorithms (GAs) have proven useful in solving a variety of search and optimization problems [2], [8], [9], [10], [22], [24], [39]. Many real-world problems require an optimization algorithm that is able to explore multiple optima in its search space. In this respect, GAs have demonstrated great potential for finding the optimal solutions because they are population-based search approaches with strong global optimization capabilities. However, in the standard GA for maximization problems, all individuals, which may be located on different peaks at the beginning of the search process, eventually converge to a single peak. Thus, the algorithm usually ends up with only one solution. If this solution is a local optimum, the phenomenon is called premature convergence. Premature convergence is even more serious in GAs with an elitist strategy, which is a widely adopted method to improve GAs’ convergence [16].

Over the years, various population diversity enhancement mechanisms have been proposed that enable GAs to maintain a diverse population throughout the search, avoid convergence to a single peak, and identify multiple optima in a multimodal function landscape. However, the current population diversity enhancement mechanisms have not proved as efficient as expected. These efficiency problems are, in essence, related to two fundamental dilemmas in GA implementation. We believe any attempt to improve the efficiency of GAs has to strike a compromise within these two dilemmas:

  • The elitist search versus diversity maintenance dilemma:

    GAs are expected to be global optimizers, with a global search capability that encourages exploration toward the globally optimal solutions. The elitist strategy is therefore widely adopted in GA search processes to improve the chance of finding the global optimum. Unfortunately, the elitist strategy concentrates on a few “super” individuals and reduces the diversity of the population, which in turn leads to premature convergence. Conversely, GAs need to maintain the diversity of the population during the search in order to find multiple optimal solutions. Balancing elitist search against diversity maintenance is therefore essential for constructing an efficient multimodal GA. Some researchers have attempted to handle this dilemma, e.g., Mahfoud’s Deterministic Crowding method [34], Petrowski’s Clearing Procedure [41] and Li’s Species Conserving Genetic Algorithm (SCGA) [32].

  • The algorithm effectiveness versus population redundancy dilemma:

    For many GAs, a large population size can be used to improve the chance of obtaining the global and multiple optima of an optimization problem. However, a large population notably increases the computational complexity of the algorithm and generates many redundant individuals, thereby decreasing the efficiency of GAs.

Our idea in this study is to strike a tactical balance within these two dilemmas. We propose a new adaptive elitist-population search technique to identify and search for multiple optima efficiently in a multimodal function landscape. The technique is based on an elitist population with a dynamically adapting size and on a series of new GA mechanisms: a specific definition of an elitist individual for the multimodal function landscape, a new principle for measuring individuals’ dissimilarity, and a new set of direction-dependent elitist genetic operators. Combining this technique with a GA, we propose a novel multimodal GA, the adaptive elitist-population based genetic algorithm (AEGA). AEGA was first proposed by the authors in [31]; this paper describes an improved version. Using multiple test functions, we demonstrate empirically that our proposed approach generally outperforms the existing multimodal evolutionary algorithms reported in the literature.

To illustrate our technique, we use unconstrained optimization problems of real-valued functions defined over arrays of real numbers. Where no confusion can arise, we denote the objective function by f(x). AEGA, as presented in this paper, makes no distinction between genotypes and phenotypes; genetic operators are therefore applied directly to individuals represented by arrays of real numbers. None of the above restrictions is required for our technique to be applicable; they are imposed only to simplify the presentation.
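
As a concrete illustration of this setting, the following minimal sketch (in Python; it is not code from the paper, and the sample objective function is our own placeholder, not one of the paper's benchmarks) represents an individual as a plain array of real numbers that is evaluated directly by f(x), with no genotype-phenotype decoding step:

    import numpy as np

    def f(x: np.ndarray) -> float:
        # Placeholder multimodal objective (not one of the paper's benchmarks):
        # a smooth function with several peaks over [-10, 10]^n.
        return float(np.sum(np.cos(x)) - 0.01 * np.sum(x ** 2))

    def random_individual(dim, lower, upper, rng):
        # An individual is just an array of reals; no genotype/phenotype decoding.
        return rng.uniform(lower, upper, size=dim)

    rng = np.random.default_rng(0)
    population = [random_individual(2, -10.0, 10.0, rng) for _ in range(10)]
    fitnesses = [f(ind) for ind in population]  # operators act on the arrays directly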

The remainder of this paper is organized as follows. The next section reviews work related to our proposed technique. Section 3 introduces the adaptive elitist-population search technique and describes the implementation of the algorithm. Section 4 presents the results of a series of experiments on a set of test functions and compares our results with those of other multimodal evolutionary algorithms. Sections 5 and 6 analyze the choice of parameters in AEGA. Section 7 presents the conclusion and some directions for future research.

Section snippets

Related work

When applying GAs to multimodal optimization problems, it is very important to satisfy two apparently contradictory requirements: preserving promising individuals from one generation to the next and maintaining the diversity of the population [32]. This section briefly reviews the existing methods developed to address these issues: elitism, niche formation methods, and other parallel subpopulation search methods.

A new adaptive elitist-population search technique

The new technique for multimodal function maximization presented in this paper achieves adaptive elitist-population searching by exploiting the notion of the relative ascending directions of two individuals (for a minimization problem, the corresponding notion is the relative descending directions).

For a high-dimensional maximization problem, every individual can generally have many possible ascending directions. But along the line, which is uniquely defined by each pair of individuals (e.g., x1 and x2, x3
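
Although the snippet above is truncated, the underlying idea can be illustrated with the following sketch, which is our own reading rather than the paper's exact operators. It assumes that the relative ascending direction of an individual along the line to its partner can be estimated by a small finite-difference step toward that partner, and that a pair is judged to lie on different peaks only when the two directions point back to back, i.e., both individuals ascend away from each other; the helper names and this decision rule are illustrative assumptions:

    import numpy as np

    def ascends_toward(f, x_from, x_to, eps=1e-3):
        # Estimate the relative ascending direction of x_from along the line
        # defined by the pair: step slightly toward x_to and check whether
        # the fitness increases.
        d = x_to - x_from
        norm = np.linalg.norm(d)
        if norm == 0.0:
            return False
        return f(x_from + eps * d / norm) > f(x_from)

    def likely_same_peak(f, x1, x2):
        # If both individuals ascend away from each other ("back to back"),
        # treat them as lying on different peaks; otherwise treat them as
        # occupying the same peak.
        return ascends_toward(f, x1, x2) or ascends_toward(f, x2, x1)

In the paper, this directional information is combined with the distance parameter σs (studied in Section 5) to judge the dissimilarity of two individuals; the exact combination is not visible in the snippet above.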

Evaluation of AEGA

In this section, the performance of AEGA is studied. Section 4.1 describes the experimental methodology. Sections 4.2, 4.3 and 4.4 report the experimental results comparing AEGA with other multimodal evolutionary algorithms on different kinds of optimization problems.

The effect of the distance parameter

In this section, we study how the distance parameter σs affects the performance of AEGA. Sections 5.1 and 5.2 report the experimental results for AEGA with different values of σs on the Roots function f3 and the four-dimensional negative Shubert function f9−4, respectively.

Effect of different parameters α, β, λ, μ in the population control constraints

In our proposed AEGA, we have introduced four parameters, α, β, λ and μ, into the two population control constraints. These constraints prevent AEGA from wasting computational power on obtaining low-fitness optima. In this section, we study how these parameters affect the performance of AEGA.

In the first population control constraint of AEGA, if a newly generated individual has evolved for α generations and its fitness is still lower than a fitness threshold, β × the best-so-far solution (e.g., 0.5 × the
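
The first constraint can be sketched as follows; since the text is truncated here, we assume for illustration that an individual failing the test is simply removed from the population. The field names, the removal mechanism, and the default value of α are placeholders (β = 0.5 merely echoes the example above), and the second constraint governed by λ and μ is not shown:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Individual:
        x: np.ndarray     # real-coded chromosome
        fitness: float
        age: int = 0      # generations survived since the individual was created

    def apply_first_constraint(population, best_so_far, alpha=5, beta=0.5):
        # Keep an individual if it is still young (age < alpha) or if its
        # fitness has reached the threshold beta * best_so_far; maximization
        # with positive fitness values is assumed in this sketch.
        return [ind for ind in population
                if ind.age < alpha or ind.fitness >= beta * best_so_far]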

Conclusion

In this paper we have presented the adaptive elitist-population search method, a new technique for evolving parallel elitist individuals for multimodal function optimization. The technique is based on the concept of adaptively adjusting the population size according to the individuals’ dissimilarity, using direction-dependent elitist genetic operators.

The adaptive elitist-population search technique can be implemented with any combination of standard genetic operators. To use it, we just need

Acknowledgments

This research was partially supported by RGC Research Grant GRF 414708 of Hong Kong SAR and by the Macau Science and Technology Development Fund (Grant No. 021/2008/A) of Macau SAR. The authors would like to thank the anonymous reviewers for their constructive comments and suggestions, which have significantly improved this paper.

References (42)

  • L. Costa et al., An adaptive sharing elitist evolution strategy for multiobjective optimization, Evolutionary Computation (2003).
  • K. Deb et al., An investigation of niche and species formation in genetic function optimization.
  • K.A. De Jong, An analysis of the behavior of a class of genetic adaptive systems, Doctoral dissertation, University of...
  • D. Dumitrescu et al., Evolutionary Computation (2000).
  • S. Elo, A parallel genetic algorithm on the CM-2 for multi-modal optimization.
  • L.J. Eshelman, The CHC adaptive search algorithm: how to have safe search when engaging in nontraditional genetic...
  • L.J. Eshelman, J.D. Schaffer, Real-coded genetic algorithms and interval-schemata, in: L.D. Whitley (Ed.), Foundation...
  • J. Gan et al., Dynamic Niche Clustering: a fuzzy variable radius niching technique for multimodal optimisation.
  • M. Gen et al., Genetic Algorithms and Engineering Design (1997).
  • D.E. Goldberg et al., Genetic algorithms with sharing for multimodal function optimization.
  • D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning (1989).