Investigating a hybrid simulated annealing and local search algorithm for constrained optimization

https://doi.org/10.1016/j.ejor.2006.06.050

Abstract

Constrained Optimization Problems (COPs) arise in many practical applications such as kinematics, chemical process optimization, and power systems. These problems are challenging in terms of identifying feasible solutions when constraints are non-linear and non-convex. Consequently, locating the global optimum of a non-convex COP is more difficult than solving a non-convex bound-constrained global optimization problem. This paper proposes a Hybrid Simulated Annealing method (HSA) for solving the general COP. HSA has features that address both feasibility and optimality issues, and here it is supported by a local search procedure, Feasible Sequential Quadratic Programming (FSQP). We develop two versions of HSA. The first version (HSAP) incorporates penalty methods for constraint handling, and the second (HSAD) eliminates the need for imposing penalties in the objective function by tracing feasible and infeasible solution sequences independently. Numerical experiments show that the second version is more reliable in worst-case performance.

Introduction

Many important real world problems can be expressed in terms of a set of nonlinear constraints that restrict the domain over which a given performance criterion is optimized (Floudas and Pardalos, 1990). This study is concerned with the Constrained Optimization Problem (COP) that is expressed as: minimize f(x), x = (x1, …, xn)^T ∈ F ⊆ R^n, where F is the feasible domain. F is defined by k inequalities, gi(x) ≤ 0, i = 1, …, k, and (m − k) equalities, hi(x) = 0, i = k + 1, …, m, and domain lower and upper bounds (LBx ≤ x ≤ UBx). The expressions g(x) and h(x) may involve nonlinear and linear relations. The objective function, f(x), is minimized by an optimum solution vector x* = (x1*, …, xn*)^T ∈ F ⊆ R^n such that f(x*) ≤ f(x) for all x ∈ F.
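Restating the formulation above in standard form (the same problem, merely displayed; no new assumptions):

```latex
\begin{aligned}
\min_{x \in F \subseteq \mathbb{R}^n} \; & f(x), \qquad x = (x_1, \ldots, x_n)^{T}, \\
\text{subject to} \quad & g_i(x) \le 0, \qquad i = 1, \ldots, k, \\
& h_i(x) = 0, \qquad i = k+1, \ldots, m, \\
& LB_x \le x \le UB_x,
\end{aligned}
```

so that the optimum x* ∈ F satisfies f(x*) ≤ f(x) for all x ∈ F.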

Finding feasible solutions, x ∈ F, in the general non-convex COP is an NP-hard problem. Derivative-based methods that attempt to solve this problem may become trapped in infeasible sub-spaces if the combined topology of the constraints is too rugged. The same problem arises in the discovery of global optima in non-convex bound-constrained global optimization problems. The COP has added complexity compared to bound-constrained problems due to the restrictions imposed by highly non-linear relationships among variables.

Here, we adopt the Simulated Annealing (SA) approach to solve the COP. SA is a black-box stochastic algorithm that generates a sequence of random solutions converging to a global optimum. SA employs a slow annealing process that accepts worse solutions more easily in the beginning stages of the search than in later phases (Kirkpatrick et al., 1983). Using this feature, SA escapes from local optima and overcomes the difficulties encountered by derivative-based numerical methods. A convergence proof for SA in the real domain is provided by Dekkers and Aarts (1991). Various SA implementations exist in the literature for bound-constrained global optimization problems (e.g., Corana et al., 1987, Ingber, 1996, Hedar and Fukushima, 2002, Hedar and Fukushima, 2004). Özdamar and Demirhan (2000) provide an extensive computational survey that reflects the performance of stochastic approaches, including different SA algorithms and clustering methods, on a large number of bound-constrained test functions.
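The annealing behavior described above can be sketched as a minimal Metropolis-style loop; the geometric cooling schedule, step size, and parameter values below are illustrative placeholders, not the authors' actual settings:

```python
import math
import random

def sa_minimize(f, x0, neighbor, t0=1.0, alpha=0.95, iters_per_temp=50, t_min=1e-4):
    """Minimal simulated annealing loop with a geometric cooling schedule."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    while t > t_min:
        for _ in range(iters_per_temp):
            y = neighbor(x)
            fy = f(y)
            # Metropolis criterion: always accept improvements; accept a
            # worse solution with probability exp(-(fy - fx) / t), which
            # shrinks as the temperature t is lowered.
            if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
        t *= alpha  # slow geometric cooling
    return best, fbest

random.seed(0)
x, fx = sa_minimize(
    f=lambda x: (x - 2.0) ** 2,        # toy bound-free objective with optimum at x = 2
    x0=10.0,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
)
```

Early in the run (t near t0) even sizeable deteriorations are accepted, which is what lets the chain leave local basins; late in the run the loop behaves almost greedily.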

As Hedar and Fukushima (2005), who combine a filter-based SA with local search, also note in their report, publications concerning the implementation of SA in the COP are quite scarce. Some successful special-case SA applications for constrained engineering problems exist in the literature (e.g., structural optimization problems, Bennage and Dhingra, 1995, Leite and Topping, 1999; SA combined with genetic algorithms in economic dispatch, Wong and Fung, 1993; in power generator scheduling, Wong and Wong, 1995, Wong and Wong, 1996, Wong and Wong, 1997; in thermoelastic scaling behavior, Wong et al., 2000). Yet, general constraint handling methods have not been discussed and tested as extensively as they have been in the Genetic Algorithms (GA) field.

Various penalty-based constraint handling methods, such as static, dynamic, or adaptive penalties, have been proposed and discussed more frequently in the GA literature (e.g., Michalewicz and Nazhiyath, 1995, Smith and Coit, 1997, Deb, 2000). Penalty methods convert the COP into an unconstrained problem in which a penalty term reflecting the degree of infeasibility of the solution is added to the objective function. In static penalty functions (Morales and Quezada, 1998, Coello, 2002), the penalty parameter is constant throughout the search, whereas in dynamic ones this parameter changes with the run time of the search (Joines and Houck, 1994, Kazarlis and Petridis, 1998). Static penalty functions suffer from the difficulty of determining the optimal magnitude of the penalty parameter: if it is too large, it may prevent the search from exploring infeasible regions, while too small a parameter may result in failure to identify feasible solutions. Dynamic penalty functions overcome this obstacle, though they too are sensitive to certain parameters related to run time (Michalewicz, 1995). A review of different penalty methods used in GAs and a discussion of their advantages and disadvantages can be found in Yeniay (2005).
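The static/dynamic distinction can be sketched as follows; the quadratic violation measure, the weight r = 1e3, and the (c·t)^α growth form are illustrative choices (the latter in the spirit of Joines and Houck, 1994), not the exact functions benchmarked in this paper:

```python
def static_penalty(f, gs, hs, r=1e3):
    """Static penalty: a fixed weight r multiplies the total squared
    constraint violation (r = 1e3 is an arbitrary illustrative choice)."""
    def augmented(x):
        viol = sum(max(0.0, g(x)) ** 2 for g in gs)   # g(x) <= 0 violated when g(x) > 0
        viol += sum(h(x) ** 2 for h in hs)            # h(x) = 0 violated when h(x) != 0
        return f(x) + r * viol
    return augmented

def dynamic_penalty(f, gs, hs, c=0.5, alpha=2.0):
    """Dynamic penalty in a (c*t)**alpha form: the weight grows with the
    iteration count t, so infeasibility is tolerated early and punished late."""
    def augmented(x, t):
        viol = sum(max(0.0, g(x)) ** 2 for g in gs)
        viol += sum(h(x) ** 2 for h in hs)
        return f(x) + (c * t) ** alpha * viol
    return augmented

# Toy COP: minimize x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
aug = static_penalty(lambda x: x * x, [lambda x: 1.0 - x], [])
dyn = dynamic_penalty(lambda x: x * x, [lambda x: 1.0 - x], [])
```

At the feasible point x = 1 the penalty vanishes and `aug(1.0)` is just f(1) = 1, while at the infeasible point x = 0 the static weight dominates the comparison.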

Wah and Wang (1999, 2000) avoid parametric problems in penalty functions by introducing a penalty method in which pure SA is applied in its standard algorithmic form with features derived from discrete Lagrangian theory. This method is called Constrained SA (CSA). The novelty of this work is that although the authors work with a penalty-augmented objective function, they utilize SA to perturb the Lagrangian parameters as well as the solution coordinates. Their algorithm adopts an ascending approach for the Lagrangian parameters, where increased penalty parameters are accepted with probability one while decreased ones are accepted with an annealing probability. In this manner, they apply descending exploration for solution coordinates and ascending exploration for penalty parameters to achieve feasibility of constraints.

However, when penalty methods are adopted in SA, a key problem arises. SA generates a sequence of solutions where each solution is derived by perturbing the previous one. Since the probability of accepting worse solutions typically depends on the difference between the objective function values of two consecutive solutions, a feasible solution that is an immediate successor to an infeasible one might be accepted right away, because it carries no penalty term in its assessment criterion. Even if a worse feasible solution has a larger objective function value than its infeasible predecessor, the predecessor's penalty term can dominate the comparison, so the feasible solution is accepted. Hence, a feasible solution might be accepted even if it does not fare well against the range of feasible solutions obtained so far. Similarly, if an infeasible solution succeeds a feasible one, it is less likely to be accepted, because its acceptance probability might be too small due to the penalty term. Such situations, encountered while generating a sequence of convergent solutions, may cause the search to become trapped in feasible regions that contain only local stationary points. Though CSA eliminates parametric issues in penalties, it still uses the augmented objective function and therefore still suffers from this drawback.
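The asymmetry just described can be illustrated numerically (all values below are hypothetical): a feasible successor with a worse objective is accepted outright because the predecessor's penalty inflates the comparison, while an infeasible successor with a better objective faces a near-zero acceptance probability.

```python
import math

def accept_prob(aug_prev, aug_cand, t):
    """Metropolis acceptance probability computed on the penalty-augmented
    objective values of two consecutive solutions."""
    if aug_cand <= aug_prev:
        return 1.0
    return math.exp(-(aug_cand - aug_prev) / t)

t = 1.0
# Infeasible predecessor: objective 5 plus penalty 10 gives augmented value 15.
# A feasible successor with a *worse* objective of 9 (no penalty) is still
# accepted with probability one, because 9 < 15.
p_feasible_after_infeasible = accept_prob(5.0 + 10.0, 9.0, t)
# Feasible predecessor with objective 9; an infeasible successor with a
# *better* objective of 5 carries penalty 10, so acceptance is very unlikely.
p_infeasible_after_feasible = accept_prob(9.0, 5.0 + 10.0, t)
```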

In a very recent paper, Hedar and Fukushima (2005) apply diversified multi-start SA to achieve better exploration of both the feasible and infeasible regions. They propose keeping a set of starting solutions (the diversification set) as initiators of multi-start SA. A ranking procedure is applied to the elements of this set using Pareto optimality concepts related to feasibility and optimality. This ranking strategy differentiates solutions with regard to non-dominance in constraint violation and in objective function value. A trial solution is compared with the best-ranked solution in this set and accepted according to its deterioration in f or total infeasibility, whichever is the maximum. Thus the augmented objective function becomes a dual-criteria function consisting of infeasibility and f. The diversification set is updated with new non-dominated solutions encountered during the exploration process. The authors call this algorithm Filter SA (FSA).

In this paper, we adopt a simple, easy-to-use scheme to deal with the feasibility issue in constrained optimization. First, we adapt several penalty functions proposed for GAs and embed them in a Hybrid SA framework, HSA. HSA permits both diversification (exploration of infeasible regions) and intensification (hill-climbing in the immediate neighborhood of a worse solution) during the search. Furthermore, we integrate HSA with the Feasible Sequential Quadratic Programming (FSQP) local search method (Zhou and Tits, 1996, Lawrence et al., 1997) as a supportive exploration tool.

Next, we introduce a non-penalty version of HSA to eliminate the disadvantages of penalty methods discussed above. In this version (dual-sequence HSA, HSAD), we propose a new performance assessment method that removes the requirement of a penalty in the assessment of infeasible solutions. HSAD treats the sequence of solutions generated by HSA as a dual sequence in which the infeasible and feasible sequences are traced separately. In each iteration, a feasible candidate neighbor is compared with the last feasible solution obtained in the feasible sequence, and, similarly, an infeasible one is compared with the last infeasible solution. In this manner, the problems encountered by penalty methods are avoided.
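The dual-sequence bookkeeping can be sketched as follows. The scalar `merit` stands for whatever comparison value a candidate carries (the objective for feasible points; for infeasible ones some infeasibility-based measure is assumed here for illustration; the paper's exact rule is not reproduced):

```python
import math
import random

def dual_sequence_accept(merit, feasible, state, t):
    """Sketch of dual-sequence acceptance: a feasible candidate is compared
    only with the last accepted solution of the feasible sequence, an
    infeasible candidate only with the last of the infeasible sequence,
    so no penalty term ever enters either comparison."""
    key = "feasible" if feasible else "infeasible"
    last = state[key]
    if last is None or merit <= last or random.random() < math.exp(-(merit - last) / t):
        state[key] = merit
        return True
    return False

state = {"feasible": None, "infeasible": None}
ok_f = dual_sequence_accept(3.0, True, state, t=1.0)   # starts the feasible sequence
ok_i = dual_sequence_accept(7.5, False, state, t=1.0)  # infeasible sequence is independent
```

Because the two sequences never compare against each other, a feasible candidate can no longer ride on its infeasible predecessor's penalty, and an infeasible candidate is no longer crushed by its own.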

This idea might seem similar to that of FSA; however, in FSA a single sequence composed of feasible and infeasible solutions is assessed with a double-criteria annealing probability. FSA diversifies the search by applying multi-start SA from the best non-dominated solution obtained so far, but each new sequence started in this way is assessed individually. In HSAD, by contrast, there are two sequences of solutions (feasible and infeasible) that are assessed simultaneously, and a simpler diversification scheme is implemented.

We conduct numerical experiments that compare the performance of different HSAP versions (with various penalty functions) with HSAD, CSA, and HFSA. HFSA is a derivative of HSAP in which the idea of non-dominance-based acceptance of candidate solutions is utilized to deal with a single mixed sequence of feasible and infeasible solutions.

Section snippets

HSA: Hybrid SA

HSA is a hybrid hill-climbing/SA approach that is more stringent in accepting worse solutions. HSA is allowed to accept a non-improving solution probabilistically only if a number of consecutive moves have already resulted in non-improving solutions.
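This stricter acceptance rule can be sketched as follows; `climb_limit` is a placeholder threshold, not the paper's actual parameter value:

```python
import math
import random

def hsa_accept(f_prev, f_cand, t, state, climb_limit=5):
    """Sketch of HSA's stricter acceptance: worse neighbors are rejected
    outright (pure hill-climbing) until `climb_limit` consecutive moves
    have been non-improving; only then is SA's probabilistic acceptance
    applied."""
    if f_cand <= f_prev:
        state["non_improving"] = 0      # improvement resets the local counter
        return True
    state["non_improving"] += 1
    if state["non_improving"] < climb_limit:
        return False                    # hill-climb phase: reject the worse move
    # counter exhausted: fall back to the Metropolis criterion
    return random.random() < math.exp(-(f_cand - f_prev) / t)

state = {"non_improving": 0}
decisions = [hsa_accept(1.0, 2.0, t=1.0, state=state) for _ in range(4)]
```

The first few worse candidates are all rejected; only once the local counter reaches the limit does the usual annealing probability come into play.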

This strategy is also linked to the activation of the local search FSQP. HSA activates local search probabilistically whenever a feasible solution better than the last feasible solution is found. FSQP is also activated at the later stages of the search

Comparison between HSAP and HSAD

We test the performance of HSAP (with five penalty functions) and HSAD on 32 COPs collected from different sources in the literature. These are listed in the Appendix with their characteristics (number of nonlinear/linear equalities and inequalities as well as expression types), the optimal or best results reported, and their source references. A few of the test problems are used in revised versions. Most of the problems involve polynomial constraints and objective functions with nine exceptions where

Conclusion

We propose a Hybrid Simulated Annealing method, HSA, for solving the general Constrained Optimization Problem, COP. HSA has a global diversification counter that detects stagnation in a sequence of solutions. This global counter activates a new sequence when the stagnation limit is reached. HSA also has a local counter that executes a hill-climbing approach that rejects worse candidate neighbors. The hill-climbing approach alternates with SA's probabilistic acceptance approach with a local loop

Acknowledgements

We wish to thank Professor Andre Tits (Electrical Engineering and the Institute for Systems Research, University of Maryland, USA) for providing the source code of CFSQP.

References (41)

  • A. Dekkers et al., Global optimization and simulated annealing, Mathematical Programming (1991)
  • T.G. Epperly, Global optimization of nonconvex nonlinear programs using parallel branch and bound, Ph.D... (1995)
  • C.A. Floudas et al., A Collection of Test Problems for Constrained Global Optimization Algorithms (1990)
  • L. Ingber, Adaptive simulated annealing (ASA): Lessons learned, Control and Cybernetics (1996)
  • J. Joines et al., On the use of non-stationary penalty functions to solve non-linear constrained optimization problems with GAs
  • P. Hansen et al., An analytical approach to global optimization, Mathematical Programming (1991)
  • A.R. Hedar et al., Hybrid simulated annealing and direct search method for nonlinear unconstrained global optimization, Optimization Methods and Software (2002)
  • A.R. Hedar et al., Heuristic pattern search and its hybridization with simulated annealing for nonlinear global optimization, Optimization Methods and Software (2004)
  • A.R. Hedar, M. Fukushima, Derivative-free filter simulated annealing method for constrained continuous global... (2005)
  • S. Kazarlis et al., Varying fitness functions in genetic algorithms: Studying the rate of increase in the dynamic penalty terms