Computational investigation of simple memetic approaches for continuous global optimization

https://doi.org/10.1016/j.cor.2016.01.015

Highlights

  • Introduction of three different variants of the Memetic Differential Evolution method (see [20]).

  • Computational investigation of these variants and of their relation to the funnel properties of the functions to be optimized.

  • Comparison with the existing literature about methods based on multiple local searches (in particular, Monotonic Basin Hopping).

  • Comparison with the existing literature about memetic approaches.

  • Experimental analysis of the proposed approaches to gain more insight into their behavior.

Abstract

In Locatelli et al. (2014) [20] a memetic approach for the solution of continuous global optimization problems, called MDE (Memetic Differential Evolution), was introduced and shown to be quite efficient in spite of its simplicity. In this paper we computationally investigate some variants of MDE. The investigation reveals that the best tested variant outperforms the original MDE itself, but also that which variant is best depends on some properties of the function to be optimized. In particular, a greedy variant of MDE turns out to perform very well over functions with a single-funnel landscape, while another variant, based on a diversity measure applied to the members of the population, works better over functions with a multi-funnel landscape. A hybrid approach is also proposed which combines the two previous variants in order to obtain an overall performance which is good over all functions.

Introduction

The task of globally minimizing a multimodal objective function f over some compact domain $X \subset \mathbb{R}^n$ is a very difficult one. Different approaches exist, depending on the dimension of the search space and on the properties of the objective function and of the feasible domain. The approaches range from exact (usually branch-and-bound) ones, suitable for highly structured problems (e.g., with a quadratic objective function and a polyhedral feasible domain) when the dimension n is not too large (say, a few hundred variables), to heuristic ones where only a few function evaluations are performed, suitable for problems where a single function evaluation is a rather costly operation. While we refer to [19] for a thorough discussion of all the different approaches, here we focus our attention on approaches which are suitable for problems where local searches are a relatively cheap task but, at the same time, the huge number of local minimizers rules out the simplest approach based on multiple local searches, namely Multistart, where local searches are performed from different points randomly generated within the feasible region. In the algorithms we are going to discuss, the points observed at each iteration are always local minimizers. To be more precise, the observed points are always the output of local search procedures, which are usually only guaranteed to return stationary points but, in fact, typically return local minimizers. In what follows we will always refer to these points as local minimizers, keeping in mind the clarification we have just made. Of course, observing local minimizers has a cost, since for each observation we need to perform a local search, which requires some function (and gradient) evaluations. However, this cost is often largely compensated for. Indeed, when the landscape of the objective function is rugged, with high barriers between local minimizers, even points very close to the global minimizer may have large function values. If, as is often the case, the function value is employed to evaluate the quality of a point and to decide whether to keep or to discard it, such points, though close to the global minimizer, will be discarded from further consideration. Local searches, by driving a point towards a local minimizer, remove the negative effect of high barriers between local minimizers.
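As a concrete reference point, the following is a minimal sketch of Multistart, assuming a box-shaped feasible region and using scipy.optimize.minimize as the local solver; it illustrates the textbook method, not the implementation used in the paper, and the function name and parameter values are ours.

```python
import numpy as np
from scipy.optimize import minimize


def multistart(f, bounds, n_starts=100, seed=0):
    """Multistart (textbook sketch): local searches from random points in
    the box `bounds` (a list of (low, high) pairs); keep the best local
    minimizer found. Not the paper's implementation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best_x, best_f = None, np.inf
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)          # random start in the feasible box
        res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
        if res.fun < best_f:              # each start costs one local search
            best_x, best_f = res.x, res.fun
    return best_x, best_f


if __name__ == "__main__":
    # Rastrigin: a highly multimodal landscape where plain Multistart
    # wastes most of its local searches, as discussed above.
    rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
    x, fx = multistart(rastrigin, bounds=[(-5.12, 5.12)] * 5, n_starts=200)
    print(x, fx)
```

The budget of local searches spent this way is exactly what the methods discussed below try to spend more carefully.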

In this paper we discuss three variants of the Memetic Differential Evolution (MDE) approach introduced in [20]. In that paper it was shown that MDE is very competitive, in terms of local searches needed to reach the global minimizer, with respect to other methods based on multiple local searches and, in particular, with respect to the Monotonic Basin Hopping method, which will be discussed in Section 2.1. The primary aim of the paper is to show, through an extensive computational investigation, that each proposed variant improves upon, again in terms of local searches needed to reach the global minimizer, the original MDE approach under some given funnel properties of the objective function to be minimized. More precisely, the greedy variant (G-MDE) improves the performance of MDE when applied to single-funnel functions (but, with some local search procedures, also when applied to multi-funnel functions); the distance variant (D-MDE), by preserving a higher degree of diversity within the population, improves upon the original MDE approach over multi-funnel functions; finally, the hybrid approach (H-MDE), where the greedy and distance strategies are mixed, improves upon the original MDE over all the tested functions. We remark that none of the proposed approaches dominates all the others, but this is consistent with the “no free lunch” theorem [43]. As a secondary aim we would like to show that, in spite of its extreme simplicity, one of the proposed variants, namely G-MDE, is competitive with some state-of-the-art evolutionary approaches in terms of the quality of the solutions returned within a prefixed budget of function evaluations. The paper is structured as follows. In Section 2 we introduce a general scheme of a global optimization approach based on local searches and we briefly discuss some existing approaches which fit into this scheme, namely Multistart and Monotonic Basin Hopping (Section 2.1), memetic algorithms (Section 2.2), and MDE (Section 2.3). In Section 3 we propose the three previously mentioned simple variants of MDE. In Section 4 we present the set of test problems on which we compare the different variants and we discuss the results of the computational experiments. We also perform an experimental analysis of the proposed approaches in order to better understand their behavior. Finally, in Section 5 we draw some conclusions and we discuss some possible future developments.
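Since Monotonic Basin Hopping is the main term of comparison throughout the paper, we recall its classical form right away as a short sketch. This is a hedged illustration of the standard method under a box constraint, not code from [20]; in particular the perturbation radius step is an illustrative parameter of ours.

```python
import numpy as np
from scipy.optimize import minimize


def mbh(f, x0, bounds, max_iter=1000, step=0.3, seed=0):
    """Monotonic Basin Hopping, classical form: perturb the current local
    minimizer, re-optimize locally, and accept the new point only if it
    improves the function value (hence 'monotonic')."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    res = minimize(f, x0, method="L-BFGS-B", bounds=bounds)
    x, fx = res.x, res.fun
    for _ in range(max_iter):
        # Random perturbation of the current minimizer, clipped to the box.
        y0 = np.clip(x + rng.uniform(-step, step, size=x.shape), lo, hi)
        res = minimize(f, y0, method="L-BFGS-B", bounds=bounds)
        if res.fun < fx:                  # monotonic acceptance rule
            x, fx = res.x, res.fun
    return x, fx
```

The monotonic acceptance makes MBH follow one funnel at a time: it is very effective on single-funnel landscapes, while on multi-funnel ones it typically needs restarts, which is one motivation for the population-based schemes considered here.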

Section snippets

Global optimization based on local searches

A general scheme for a global optimization approach based on local searches is displayed in Algorithm 1, where L is a local solver whose input consists of the objective function f, the feasible domain X, and a starting point x (a minimal Python reading of this scheme is sketched after the list below). In this scheme:

Algorithm 1

Generic model for a global optimization algorithm based on local searches.

  • $P \in \mathbb{R}^{n \times k} = \{p_1, \ldots, p_k\}$ is the population matrix containing the k population members as column vectors;

  • $Q_i \in \mathbb{R}^{n \times h} = \{q_1, \ldots, q_h\}$ is the candidate matrix containing the h candidate points
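Since Algorithm 1 is only excerpted above, the following skeleton gives one plausible Python reading of the generic scheme; it is our reconstruction, not the paper's pseudocode. A population of k local minimizers is maintained; at each iteration, h candidate points are produced from it, each candidate is driven to a local minimizer, and an update rule selects the next population. The callables generate and update are placeholders for the two choices that distinguish the concrete methods.

```python
import numpy as np
from scipy.optimize import minimize


def ls_scheme(f, bounds, generate, update, k=20, h=20, max_iter=100, seed=0):
    """Generic local-search-based scheme (cf. Algorithm 1), reconstructed:
    the population holds (x, f(x)) pairs where every x is a local minimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T

    def L(x0):                            # the local solver L(f, X, x)
        res = minimize(f, np.clip(x0, lo, hi), method="L-BFGS-B", bounds=bounds)
        return res.x, res.fun

    P = [L(rng.uniform(lo, hi)) for _ in range(k)]   # k initial local minimizers
    for _ in range(max_iter):
        Q = [L(q) for q in generate(P, h, rng)]      # h candidates, all refined
        P = update(P, Q)                             # select the next population
    return min(P, key=lambda t: t[1])


def de_generate(P, h, rng, F=0.7, CR=0.9):
    """DE/rand/1/bin candidate generator (standard DE operators; the values
    of F and CR are common defaults, not necessarily the paper's settings)."""
    X = np.array([x for x, _ in P])
    n_pop, dim = X.shape
    out = []
    for _ in range(h):
        a, b, c = rng.choice(n_pop, size=3, replace=False)
        v = X[a] + F * (X[b] - X[c])      # differential mutation
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True    # at least one mutated coordinate
        i = rng.integers(n_pop)           # target member for crossover
        out.append(np.where(mask, v, X[i]))
    return out
```

Multistart is recovered by a generate that ignores P and an update that simply keeps the k best points, while MDE plugs DE mutation and crossover, as in de_generate above, into the same skeleton.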

Some variants of MDE

In this section we discuss three simple variants of MDE, namely the greedy variant (G-MDE), the distance variant (D-MDE), and the hybrid variant (H-MDE). We emphasize that we choose not to investigate any sophisticated variant of the basic MDE scheme. Our aim in this paper is to show that even simple and easy-to-implement approaches are able to return very good results.
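The precise rules are specified later in the section; as a hedged illustration of the two underlying ideas only (our minimal reading, with the threshold min_dist invented for the example), a greedy rule keeps the k best points among parents and children, while a distance-based rule admits an improving candidate only if it is not too close to the surviving members, thus preserving diversity within the population.

```python
import numpy as np


def greedy_update(P, Q):
    """Greedy selection (illustrative): keep the k best of parents + children.
    Converges quickly; in the paper's terms, suited to single-funnel landscapes."""
    return sorted(P + Q, key=lambda t: t[1])[:len(P)]


def distance_update(P, Q, min_dist=0.5):
    """Distance-based selection (illustrative): a candidate replaces the worst
    member only if it improves on it AND keeps a minimum distance from the
    other members, in the spirit of the diversity measure used by D-MDE."""
    P = sorted(P, key=lambda t: t[1])
    for qx, qf in sorted(Q, key=lambda t: t[1]):
        if qf >= P[-1][1]:
            continue                      # not better than the worst member
        if any(np.linalg.norm(qx - px) < min_dist for px, _ in P[:-1]):
            continue                      # too close: would reduce diversity
        P[-1] = (qx, qf)                  # replace the worst member
        P.sort(key=lambda t: t[1])
    return P
```

A hybrid rule in the spirit of H-MDE can mix the two strategies; we stress that these snippets convey the flavor of the variants, not their precise definitions.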

Computational experiments

In this section we make a detailed comparison of the three variants of MDE with the original MDE itself. We recall that MDE, in spite of its simplicity, has already proved to be quite effective (see [20]), outperforming or, at least, being competitive with MBH. We will first introduce the set of test problems over which the comparison will be performed. These are modifications of some well-known, highly multimodal global optimization test problems (Section 4.1). Next, we will present and discuss the

Conclusions and future developments

In this paper we have computationally investigated some simple variants of MDE, a memetic approach for continuous global optimization which proved to be quite efficient in [20]. The analysis revealed that the best of such variants often outperforms the original MDE, but at the same time that the best variant is not always the same and is strictly problem-dependent. In particular, the quickly convergent greedy variant is the best one for single-funnel functions,

References (44)

  • Baldwin JM. A new factor in evolution. In: Evolving populations: models and algorithms; 1996. p....
  • A. Cassioli et al.

    Dissimilarity measures for population-based global optimization algorithms

    Comput Optim Appl

    (2010)
  • A. Duarte et al.

    Hybrid scatter tabu search for unconstrained global optimization

    Ann Oper Res

    (2011)
  • Á.E. Eiben et al.

    Parameter control in evolutionary algorithms

    IEEE Trans Evol Comput

    (1999)
  • S. Elfwing et al.

    Evolutionary development of hierarchical learning structures

    IEEE Trans Evol Comput

    (2007)
  • Englander JA, Englander AC. Tuning monotonic basin hopping: improving the efficiency of stochastic search as applied to...
  • Finck S, Hansen N, Ros R, Auger A. Real-parameter black-box optimization benchmarking 2010: noiseless functions...
  • Goldman BW, Punch WF. Parameter-less population pyramid. In: Proceedings GECCO 2014; 2014. p....
  • A. Grosso et al.

    A population based approach for hard global optimization problems based on dissimilarity measures

    Math Program

    (2007)
  • Hart WE. Adaptive global optimization with local search [Ph.D. thesis]. San Diego: University of California;...
  • B. Hartke

    Global cluster geometry optimization by a phenotype algorithm with niches: location of elusive minima, and low-order scaling with cluster size

    J Comput Chem

    (1999)
  • N. Krasnogor et al.

    A tutorial for competent memetic algorithms: model, taxonomy, and design issues

    IEEE Trans Evol Comput

    (2005)