Research paper
A hybrid of Bayesian approach based global search with clustering aided local refinement

https://doi.org/10.1016/j.cnsns.2019.104857

Highlights

  • Problems of continuous non-convex optimization are considered.

  • An algorithm is proposed that hybridizes a Bayesian-approach-based global search with the clustering-aided identification of candidate global minima and their local refinement.

  • The advantages of the mathematical approach and of soft computing are merged by hybridizing the respective algorithms.

  • Numerical experiments with a set of randomly generated objective functions show that the proposed hybrid outperforms the original global search.

Abstract

We propose a global optimization algorithm in which the P-algorithm is hybridized with a clustering-aided local search. The proposed algorithm is free from the computational burden typical of known implementations of Bayesian algorithms, i.e. the inversion of high-dimensional covariance matrices and the solution of inner non-convex optimization problems. Clustering-aided switching between the global and local search rationally combines the advantages of both strategies: the basins of potential global minima indicated by the clustering are excluded from the global search, and the potential minima are refined by a local search method. We conclude the paper with numerical experiments that illustrate the achieved performance.

Introduction

Optimization problems in engineering design are frequently continuous and non-convex. Although the characteristics of those problems are quite diverse, two classes of problems can be distinguished. The first class includes problems whose objective functions and feasible regions are described by mathematical formulas. The second class consists of so-called black-box problems, whose objective functions are available only as computer algorithms. Different approaches apply to these two classes. The development of algorithms for problems of the first class is based on the analysis of favorable mathematical properties of the targeted problems; see, e.g., [1], [2], [3]. The metaheuristic approach prevails in the development of methods for black-box problems. However, another approach is needed for the subclass referred to as expensive black-box problems, where uncertainty of properties and computational complexity are inherent in the objective functions. For such problems, an approach based on the ideas of rational decision making under uncertainty seems most appropriate: the objective functions are considered realizations of a statistical model, e.g. of a Gaussian random field (GRF). In various publications, this research direction is described by the terms statistical-model-based global optimization, the Bayesian approach, the information-theory-based approach, and surrogate- and kriging-based algorithms [4], [5], [6]. The increased interest in these algorithms can be explained by a growing number of potential applications; see, e.g., [7], [8] and the references therein. However, the inherent computational burden of such algorithms limits their application area.
In the present paper we propose coping with this challenge by hybridizing a Bayesian algorithm, implemented using a computationally favorable statistical model, with the clustering-aided identification of potential global minima and their local refinement.

We briefly recall the basic ideas of the Bayesian approach to global optimization and present the P-algorithm in this context; for a recent review of related methods see [6]. The optimal (on average) method proposed in [9] was called Bayesian, where optimality means the minimum average error with respect to a random field chosen as a statistical model of the objective functions. The high complexity of the optimal Bayesian algorithm prompted various simplifications, e.g. the replacement of the minimum-average-error criterion with one-step Bayesian optimality [9], [10]. However, at the time of the original publication, the applicability of the one-step Bayesian algorithm was quite limited, since its computational complexity was high relative to the performance of contemporary computers. Later, with the increase of computing power, interest in this algorithm was renewed, and its various modifications were considered under the names EGO, the kriging-based algorithm, and (possibly the most appropriate name) the maximum average improvement algorithm [11]. The P-algorithm was proposed in [12] about a decade earlier than the maximum average improvement algorithm. The P-algorithm was originally defined as a sequence of steps maximizing the improvement probability, and was later substantiated by the axioms of rational decision making under uncertainty [13]. The P-algorithm and the maximum average improvement algorithm are similar in interpreting the objective function as a sample function of a GRF. However, from the point of view of the theory of rational decision making, they differ in the utility function that evaluates the progress at the current iteration. The maximum average improvement algorithm implements optimal planning one step ahead, whereas the planning horizon of the P-algorithm can be controlled by the improvement threshold. For further comments on the comparison of these algorithms we refer to [14].
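
To make the selection criterion concrete, here is a minimal sketch of the improvement probability. We assume, purely as an illustration and not as the paper's concrete model, that the conditional distribution of the GRF at a candidate point is Gaussian with posterior mean `mu` and standard deviation `sigma`; the P-algorithm then prefers the candidate maximizing the probability of improving the current record `y_best` by at least the threshold `eps`:

```python
from math import erf, sqrt

def standard_normal_cdf(z):
    # Phi(z) expressed via the error function; avoids external dependencies
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def improvement_probability(mu, sigma, y_best, eps):
    """P-algorithm criterion: probability that the (Gaussian) posterior
    value at a candidate point falls below the threshold y_best - eps."""
    if sigma <= 0.0:
        return 1.0 if mu < y_best - eps else 0.0
    return standard_normal_cdf((y_best - eps - mu) / sigma)

# Hypothetical candidates given as (posterior mean, posterior std) pairs:
candidates = [(0.2, 0.5), (1.1, 0.9), (-0.3, 0.1)]
y_best, eps = 0.0, 0.1
best = max(candidates, key=lambda c: improvement_probability(c[0], c[1], y_best, eps))
```

Note how the threshold controls the planning horizon mentioned above: a larger `eps` shifts preference toward candidates with large posterior uncertainty, i.e. toward a more exploratory search.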

A possible way to reduce the computational burden of Bayesian global optimization algorithms is a partitioning-based implementation. The version of the P-algorithm considered here is implemented using a hyper-rectangular partitioning of the feasible region. The abbreviation RPP (Rectangular Partition version of the P-algorithm) will be used for this algorithm in the subsequent text. Note that the idea of partitioning-based implementation has proved successful in the development of numerous global optimization algorithms [3], [15], [16], [17], [18], [19]. We note two main advantages of RPP. First, the computational complexity of a single iteration of RPP is considerably lower than that of an iteration of the traditional implementations of Bayesian algorithms. Second, the asymptotic convergence rate of RPP is relatively high [20]. However, the favorable asymptotics is not necessarily reached within an acceptable number of iterations, especially for problems with expensive objective functions. In such a case, it is more rational to refine the rough approximations of potential minimizers found by RPP with a local optimization algorithm than to continue the global search in the neighbourhoods of these approximations. The advantages of hybridizing global and local search have been demonstrated on various global optimization problems, e.g. in [21], [22], [23], [24]. In the present paper we propose the hybridization of RPP with clustering-aided local descent: RPP is interrupted when the basin of a potential global minimizer is indicated by a clustering algorithm, and the indicated potential solution is refined by a local minimization algorithm. The indicated sub-region is then excluded from further global search.
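
To illustrate the partitioning-based idea in general terms (a generic sketch, not the paper's RPP implementation), the current hyper-rectangles can be kept in a priority queue ordered by a selection criterion, and the most promising rectangle is repeatedly bisected along its longest edge; `criterion` below is a hypothetical placeholder for the P-algorithm criterion:

```python
import heapq

def bisect_longest_edge(lo, hi):
    # Split the hyper-rectangle [lo, hi] into two halves along its longest edge.
    i = max(range(len(lo)), key=lambda k: hi[k] - lo[k])
    mid = 0.5 * (lo[i] + hi[i])
    left_hi = list(hi); left_hi[i] = mid
    right_lo = list(lo); right_lo[i] = mid
    return (list(lo), left_hi), (right_lo, list(hi))

def partition_search(f, d, n_iter, criterion):
    """Generic partitioning-based global search on A = [0, 1]^d: pop the
    rectangle with the smallest criterion value, bisect it, and evaluate
    f at the centers of the two halves."""
    lo, hi = [0.0] * d, [1.0] * d
    center = lambda a, b: [0.5 * (u + v) for u, v in zip(a, b)]
    c = center(lo, hi)
    y = f(c)
    best = (y, c)
    heap = [(criterion(y, lo, hi), (lo, hi))]
    for _ in range(n_iter):
        _, (lo, hi) = heapq.heappop(heap)
        for sub_lo, sub_hi in bisect_longest_edge(lo, hi):
            c = center(sub_lo, sub_hi)
            y = f(c)
            if y < best[0]:
                best = (y, c)
            heapq.heappush(heap, (criterion(y, sub_lo, sub_hi), (sub_lo, sub_hi)))
    return best
```

The low per-iteration cost of such an implementation comes from the fact that selecting and splitting a rectangle involves no covariance-matrix inversion and no inner non-convex optimization, only a heap operation and a few function evaluations.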

Section snippets

The hybridization of RPP with local refinement

A black-box bound-constrained optimization problem is considered: min_{x ∈ A} f(x), where for simplicity we assume that A = [0, 1]^d. The algorithm operates by repeatedly switching between two modes of operation, the global search and the local refinement, until a stopping condition is satisfied. The abbreviation HPL (Hybrid of the P-algorithm with Local refinement) is used later where appropriate.
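
The switching scheme can be sketched as follows; `global_step`, `find_basin`, and `local_refine` are hypothetical placeholders for the components named in the text (a single RPP iteration, the clustering-aided basin detection, and the local descent, respectively):

```python
def hpl(f, budget, global_step, find_basin, local_refine):
    """Schematic main loop of HPL: alternate the global search with
    clustering-aided local refinement until the evaluation budget is spent."""
    evaluations = []   # database of (x, f(x)) pairs
    excluded = []      # basins removed from the global search
    minima = []        # locally refined candidate minima, as (x, f(x)) pairs
    while len(evaluations) < budget:
        x = global_step(evaluations, excluded)      # global mode
        evaluations.append((x, f(x)))
        basin = find_basin(evaluations, excluded)   # does clustering indicate a basin?
        if basin is not None:                       # local mode
            minima.append(local_refine(f, basin))
            excluded.append(basin)                  # exclude the basin afterwards
    return min(minima + evaluations, key=lambda p: p[1])
```

The essential design choice is that each detected basin is handled exactly once: after its local refinement it no longer attracts global-search effort.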

Numerical experiments

This section presents numerical experiments demonstrating the performance of the proposed hybrid algorithm. The results were produced using our C++ implementations of HPL and HPL-a, the counterpart of which is the RPP algorithm. The clustering procedure CURE and the local search algorithm of Hooke and Jeeves were also implemented for incorporation into the proposed algorithm. In our implementation the function evaluation database (a nested collection of self-balancing AVL binary trees [25], [32])
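
For reference, here is a minimal textbook sketch of the Hooke-Jeeves pattern search used for the local refinement (a generic version with assumed default parameters, not the paper's C++ implementation):

```python
def hooke_jeeves(f, x0, step=0.25, shrink=0.5, tol=1e-6, max_iter=10000):
    """Pattern search: exploratory moves along each coordinate, followed
    by a pattern (extrapolation) move; the step is shrunk on failure."""
    def explore(base, f_base, h):
        x, fx = list(base), f_base
        for i in range(len(x)):
            for d in (h, -h):  # try +/- h along coordinate i
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    x, fx = list(x0), f(x0)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        y, fy = explore(x, fx, step)
        if fy < fx:
            # pattern move: extrapolate along the successful direction, then re-explore
            p = [2 * yi - xi for xi, yi in zip(x, y)]
            z, fz = explore(p, f(p), step)
            (x, fx) = (z, fz) if fz < fy else (y, fy)
        else:
            step *= shrink  # no improvement at this scale: refine the step
    return x, fx
```

The method is derivative-free, which matches the black-box setting: only function values at trial points are required.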

Conclusions

A global optimization algorithm aimed at expensive black-box functions is proposed, hybridizing the P-algorithm with clustering and local minimization algorithms. The implementation of the P-algorithm circumvents the typical computational challenges of Bayesian algorithms by using a statistical model which is based on the rectangular partition of the feasible region. The basins of potential minima, which are indicated by a clustering method, are excluded from the global search, and the potential

Acknowledgment

This work was supported by the Research Council of Lithuania under Grant No. P-MIP-17-61. We thank the associate editor and two anonymous referees for valuable critical remarks and comments.

References (37)

  • A. Žilinskas et al.

    Stochastic global optimization: a review on the occasion of 25 years of Informatica

    Informatica

    (2016)
  • Z. Han et al.

    Surrogate-based aerodynamic shape optimization of a wing-body transport aircraft configuration

  • X. Liu et al.

    Kriging-based surrogate model combined with weighted expected improvement for ship hull form optimization

    ASME 37th international conference on ocean, offshore and arctic engineering

    (2018)
  • J. Mockus

    On Bayesian methods for seeking an extremum

    Avtom Vychislitelnaja Tech

    (1972)
  • A. Žilinskas

    One-step Bayesian method for the search of the optimum of one-variable functions

    Cybernetics

    (1975)
  • D.R. Jones et al.

    Efficient global optimization of expensive black-box functions

    J Glob Optim

    (1998)
  • H. Kushner

    A versatile stochastic model of a function of unknown and time-varying form

    J Math Anal Appl

    (1962)
  • A. Žilinskas

    Axiomatic characterization of global optimization algorithm and investigation of its search strategy

    OR Lett

    (1985)