A new asynchronous parallel global optimization method based on simulated annealing and differential evolution
Introduction
Global optimization problems arise in almost every field of scientific research, engineering, chemistry, economy, etc. Many real-world problems can be formulated as global optimization problems of the following form:
$$f(x^*) = \min_{x} f(x), \quad x_i^{\mathrm{L}} \le x_i \le x_i^{\mathrm{U}}, \quad i = 1, \dots, n, \tag{1}$$
where $f$ is the so-called cost function (CF), $x$ is an $n$-dimensional vector of optimization variables, and $x_i^{\mathrm{L}}$ and $x_i^{\mathrm{U}}$ are the lower and the upper bound for the $i$th variable, respectively. $f(x^*)$ denotes the global minimum of the CF. In this paper we only consider problems with simple box constraints.
In practical applications the actual shape of the CF is usually unknown. Often the CF values are the result of expensive and time-consuming measurements or simulations. In such cases problem (1) cannot be solved analytically. Many different classes of optimization methods have been developed to solve the problem numerically. Gradient methods are the fastest, but they require the derivatives of the CF and work only on differentiable functions. They are also extremely local by nature and sensitive to noise, which reduces their suitability for many practical applications. An alternative to gradient methods is offered by direct search methods, which do not require gradients of the CF and can handle noisy and multimodal functions. Optimization methods can also be classified as local or global. The former are designed to find a minimum as fast as possible, even though it may not be the true global minimum. The latter are usually slower but can find the true global minimum with high probability. There are also many hybrid methods that try to exploit the fast convergence of the local methods and the global search capabilities of the global methods [1], [2], [3], [4], [5], [6].
The basis of the method presented in this paper is a hybrid method (DESA) [7] that combines simulated annealing (SA) and differential evolution (DE). The random sampling and the Metropolis criterion from SA [8] are combined with the population of points and the sampling mechanism from DE [9] to balance global and local search. We extend DESA so that it can be run in parallel on a cluster of computers. The new method is referred to as Parallel Simulated Annealing Differential Evolution (PSADE).
The paper is organized as follows. In Sections 2 and 3 the simple SA and the basic DE algorithms are summarized. Section 4 presents a brief classification of parallel optimization approaches. In Section 5 PSADE is presented in more detail. Section 6 compares PSADE with the original DE and simple SA on a set of 23 well-known mathematical test functions. In Section 7 PSADE is applied to the problem of device sizing in analog integrated circuit (IC) design. Section 8 gives the concluding remarks.
We use $\mathrm{randi}\{a, \dots, b\}$ to denote a uniformly distributed random integer from $\{a, \dots, b\}$, and $\mathrm{rand}(0,1)$ to denote a uniformly distributed random number from the $(0,1)$ interval. Superscripts denote different vectors, while subscripts denote vector components. Parentheses are used to denote iteration numbers. $x_n^i(k)$, for example, denotes the $n$th component of the $i$th vector in the $k$th iteration.
Simulated annealing
SA is a very popular stochastic sequential global optimization algorithm that performs random sampling of the search space [8]. Its main feature is the so-called Metropolis criterion that occasionally allows the acceptance of inferior solutions. The probability of making the transition from the current point $x^{\mathrm{c}}$ to a trial point $x^{\mathrm{t}}$, generated by randomly perturbing $x^{\mathrm{c}}$, is defined as
$$P = \min\left\{1,\; e^{-\left(f(x^{\mathrm{t}}) - f(x^{\mathrm{c}})\right)/T}\right\},$$
where $f(x^{\mathrm{c}})$ and $f(x^{\mathrm{t}})$ denote the CF values at $x^{\mathrm{c}}$ and $x^{\mathrm{t}}$, respectively, and $T$ is the temperature parameter. SA always accepts trial points with lower CF values, while inferior trial points are accepted with probability $P$, which decreases as the temperature is lowered.
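A minimal sketch of the Metropolis acceptance rule described above, assuming minimization (the function name is illustrative, not from the paper):

```python
import math
import random

def metropolis_accept(f_current, f_trial, temperature):
    """Metropolis criterion: always accept an improvement; accept a
    deterioration with probability exp(-(f_trial - f_current) / T)."""
    if f_trial <= f_current:
        return True
    return random.random() < math.exp(-(f_trial - f_current) / temperature)
```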
Differential evolution
DE is another very popular optimization method. Unlike the sequential SA, DE uses a population of points to guide the search process [9]. In its various forms DE has been applied to many real-world problems (e.g. [11], [12], [13], [14]). In our experiments we used the scheme classified as DE/rand/bin [9]. Algorithm 1 outlines the DE variant used.
Algorithm 1 Differential evolution
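The algorithm listing is not reproduced in this excerpt. As a sketch, the core of the DE/rand/bin scheme builds a trial vector from three randomly chosen, mutually distinct population members (the default values for the weight and crossover probability below are illustrative assumptions):

```python
import random

def de_rand_bin_trial(pop, i, f=0.7, cr=0.9):
    """Construct one DE/rand/bin trial vector for population member i.
    pop is a list of equal-length parameter vectors (lists of floats)."""
    n = len(pop[i])
    # three distinct members, all different from the target member i
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    jrand = random.randrange(n)  # ensures at least one mutated component
    trial = []
    for j in range(n):
        if random.random() < cr or j == jrand:
            # mutation: base vector plus scaled difference vector
            trial.append(pop[r1][j] + f * (pop[r2][j] - pop[r3][j]))
        else:
            # crossover: keep the component of the target member
            trial.append(pop[i][j])
    return trial
```

The trial vector replaces the target member only if its CF value is not worse, which gives DE its greedy selection step.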
Parallel optimization methods
In many practical applications the computationally most expensive part of the optimization is the CF evaluation. Since the number of CF evaluations (CFE) required to obtain high-quality solutions is often very large, the entire optimization can take a very long time. Parallel methods are capable of distributing the workload among several processing units and can achieve considerable speedups when compared to sequential methods. There are two major approaches to parallelization: synchronous and asynchronous.
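The asynchronous approach can be illustrated with a generic sketch (not the PSADE implementation): a pool of workers evaluates trial points, and as soon as any evaluation finishes its result is consumed and a new point is dispatched, so fast workers never idle waiting for slow ones.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def sphere(x):
    """Illustrative cost function: sum of squares."""
    return sum(xi * xi for xi in x)

def async_evaluate(points, cost, workers=4):
    """Asynchronously evaluate cost at all points; a worker slot is
    refilled as soon as any single evaluation completes."""
    results = []
    with ThreadPoolExecutor(max_workers=workers) as ex:
        pending = {ex.submit(cost, p): p for p in points[:workers]}
        queue = list(points[workers:])
        while pending:
            done, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in done:
                results.append((pending.pop(fut), fut.result()))
                if queue:
                    p = queue.pop(0)
                    pending[ex.submit(cost, p)] = p
    return results
```

In a synchronous scheme, by contrast, the master waits for an entire generation of evaluations before proceeding, so the slowest worker dictates the pace.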
Parallel Simulated Annealing Differential Evolution – PSADE
With PSADE we wish to improve the random sampling of SA with some kind of memory that allows more efficient sampling of the search space. We replace the sequential random search of SA with a population of points and augment the random sampling with a mechanism similar to the original DE. To avoid the difficulties regarding the selection of problem-dependent DE parameters (weight factor $f$ and crossover probability), PSADE assigns different parameter values to every population member.
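The per-member parameter idea can be sketched as follows; the field names and sampling ranges are illustrative assumptions, not the values used by PSADE:

```python
import random

def init_population(pop_size, dim, lo, hi):
    """Create a population where each member carries its own control
    parameters in addition to its position in the search space."""
    pop = []
    for _ in range(pop_size):
        pop.append({
            "x": [random.uniform(lo[j], hi[j]) for j in range(dim)],
            "f": random.uniform(0.1, 1.0),   # per-member DE weight factor
            "cr": random.uniform(0.0, 1.0),  # per-member crossover probability
            "T": random.uniform(0.01, 1.0),  # per-member SA temperature
        })
    return pop
```

Because each member samples the search space with its own parameter combination, successful combinations can be propagated while unsuccessful ones are replaced, sidestepping a single global parameter choice.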
Optimization of mathematical functions
To examine the performance of PSADE, a set of 23 well-known mathematical test functions was used. The definitions of the test functions can be found in [18]. The set contains unimodal and multimodal functions with dimensionality ranging from 2 to 30.
Table 1 shows the optimization results for SA, DE, and PSADE with a limited number of CF evaluations. Every function was optimized 10 times with different random seeds. For every test function the table lists the number of variables $n$ along with the results obtained by each method.
Optimization of analog integrated circuits
Analog IC design is a very difficult and time-consuming task. It consists of two major steps. The first step is the selection of the circuit topology, which depends mostly on the knowledge and the experience of the designer. In the second step, referred to as parametric optimization, the device parameters (transistor dimensions, capacitances, resistances, etc.) must be determined so that the final circuit satisfies the design requirements. In this paper we are only concerned with the parametric optimization step.
Conclusion
A new hybrid asynchronous parallel global optimization method (PSADE) was presented. It combines features from simulated annealing and differential evolution to efficiently sample the parameter space. The method was designed as an asynchronous parallel algorithm that allows simultaneous evaluation of several trial solutions. This can greatly reduce the time needed for the optimization, especially in applications where the CF evaluation times are long and vary with time. Optimization of 23 well-known mathematical test functions and a real-world analog IC sizing problem demonstrated the performance of the method.
Acknowledgement
The research has been supported by the Ministry of Higher Education, Science and Technology of the Republic of Slovenia within programme P2-0246 – Algorithms and optimization methods in telecommunications.
References (32)
- et al., Towards hybrid evolutionary algorithms, International Transactions in Operational Research (1999)
- et al., A combined heuristic optimization technique, Advances in Engineering Software (2005)
- et al., A distributed PSO-SVM hybrid system with feature selection and parameter optimization, Applied Soft Computing (2008)
- et al., Differential evolution as a viable tool for satellite image registration, Applied Soft Computing (2008)
- et al., Differential evolution approach for optimal reactive power dispatch, Applied Soft Computing (2008)
- et al., Digital self-learning calibration system for smart sensors, Sensors and Actuators A: Physical (2008)
- et al., Automated robust design and optimization of integrated circuits by means of penalty functions, AEU-International Journal of Electronics and Communications (2003)
- et al., Functionality fault model: a basis for technology-specific test generation, Microelectronics Reliability (1998)
- et al., A combined global & local search (CGLS) approach to global optimization, Journal of Global Optimization (2006)
- et al., A modified ant colony optimization algorithm modeled on tabu-search methods, IEEE Transactions on Magnetics (2006)
- J. Olenšek et al., DESA: a new hybrid global optimization method and its application to analog integrated circuit sizing, Journal of Global Optimization
- S. Kirkpatrick et al., Optimization by simulated annealing, Science
- R. Storn, K. Price, Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces, Journal of Global Optimization
- Convergence of the simulated annealing algorithm for continuous global optimization, Journal of Optimization Theory and Applications
- B-spline neural network design using improved differential evolution for identification of an experimental nonlinear process, Applied Soft Computing