
Information Sciences

Volume 566, August 2021, Pages 80-102

Evolutionary continuous constrained optimization using random direction repair

https://doi.org/10.1016/j.ins.2021.02.055

Abstract

To solve constrained optimization problems (COPs), it is crucial to guide infeasible solutions toward feasible regions. Gradient-based repair (GR) is a successful repair strategy, in which the forward difference is often used to estimate the gradient. However, GR has two major deficiencies. First, it has difficulty with individuals that fall into local optima. Second, a large number of fitness evaluations is required to estimate the gradient. In this paper, we propose a new repair strategy, random direction repair (RDR). RDR generates a set of random directions and calculates the repair direction and repair step size of an infeasible individual so as to reduce its constraint violation. Owing to the introduced randomness, RDR can handle individuals that fall into local optima. Furthermore, RDR requires only a few fitness evaluations. To demonstrate its performance, RDR was embedded into two state-of-the-art evolutionary continuous constrained optimization algorithms and tested on the Congress on Evolutionary Computation (CEC) 2017 constrained real-parameter optimization benchmark. Experimental results demonstrate that RDR combined with evolutionary algorithms is highly competitive.

Introduction

Constrained optimization problems (COPs) are important because many real-world optimization problems are limited by constraints, such as scheduling [26], the knapsack problem [14], optimal power flow [46], and antenna design [23]. When the search space is continuous, the problem is called a continuous COP (CCOP). In a CCOP, the constraints divide the search space into feasible and infeasible regions, and the goal of the optimization algorithm is to find the optimal solution in the feasible regions.

Evolutionary algorithms (EAs) [20], [2], [17] and other meta-heuristic algorithms have been widely applied to CCOPs, including differential evolution (DE) [35], [1], [19], evolution strategies (ES) [18], [32], [24], and particle swarm optimization (PSO) [30]. Furthermore, hybrid algorithms are often considered for their complementary capabilities. Examples include a hybrid of PSO and a genetic algorithm (GA) proposed by Takahama et al. [37], a hybrid of an EA and sequential quadratic programming (SQP) proposed by Deb et al. [10], a hybrid of an EA and local search (LS) proposed by Datta et al. [8], a hybrid of the gravitational search algorithm (GSA) and a GA proposed by Garg [13], and a hybrid of a GA and the gradient descent method proposed by D'Angelo and Palmieri [11].

To solve CCOPs, apart from the evolutionary operators, the constraint handling technique is a core component for finding feasible regions. Common constraint handling techniques were divided into three categories in [33]: ranking/selection, problem reformulation, and parent selection/recombination. Ranking/selection redefines the comparison rules used to select offspring, such as the superiority of feasible solutions (SF) [9], the ε-constraint (EC) [36], and the individual-dependent feasibility rule (IDFR) [44]. Problem reformulation defines a new fitness function formed from the constraint and objective functions; a widely used method is the penalty function [15], which adds a penalty term to the objective function to reflect the constraint violation. Parent selection/recombination selects suitable parents to generate competitive offspring; for example, in [31], a portion of infeasible individuals was retained for recombination to generate promising offspring.
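As a concrete illustration of the penalty-function reformulation, the sketch below adds a quadratic penalty for violated inequality constraints. The quadratic form and the weight `mu` are illustrative assumptions, not the specific formulation of [15]:

```python
def penalized_fitness(f, constraints, x, mu=1e3):
    """Quadratic exterior penalty: f(x) plus mu times the sum of squared
    violations of inequality constraints g_i(x) <= 0."""
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return f(x) + mu * violation

# Minimize f(x) = x^2 subject to x >= 1, written as g(x) = 1 - x <= 0.
f = lambda x: x * x
g = lambda x: 1.0 - x
print(penalized_fitness(f, [g], 2.0))  # feasible point: no penalty added
print(penalized_fitness(f, [g], 0.0))  # infeasible point: penalized
```

A feasible point keeps its original objective value, while an infeasible one is dominated by the penalty term, steering selection toward the feasible region.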

Constrained optimization is a hot topic in the field of evolutionary computation, and much work has been accomplished recently. In particular, at the Congress on Evolutionary Computation (CEC) 2017 and 2018 competitions on constrained real-parameter optimization, many competitive algorithms were proposed. Several variants of success-history-based adaptive DE with linear population size reduction (L-SHADE) [38], [39], [29] achieved competitive results. For example, Zamuda proposed an L-SHADE with adaptive constraint handling (CAL-SHADE) [47]. Polakova applied L-SHADE44 [28] to CCOPs. Tvrdik and Polakova proposed a single framework combining L-SHADE44 and IDE (DE with an individual-dependent mechanism) [42]. Fan et al. proposed an LSHADE44 with an improved EC (LSHADE44-IEpsilon) [12]. Moreover, a unified DE (UDE) was proposed in [40], drawing on the advantages of several DE variants, such as self-adaptive DE (SaDE) and composite DE (CoDE). An improved version, IUDE [41], was proposed the following year; it improved the parameter adaptation and offspring selection and won first place at the CEC 2018 competition. Apart from DE, a variant of the matrix adaptation evolution strategy (MA-ES), MAg-ES, was proposed by Hellwig et al. [16]; it combines EC and a gradient-based repair with MA-ES and won second place.

The repair strategy is a proactive technique for finding feasible solutions: it guides infeasible solutions toward promising regions. It is often used in combinatorial optimization problems [27], [6], [21]. For CCOPs, some work has also been done from the perspective of repair strategies. In [25], Michalewicz and Nazhiyath proposed a repair strategy that repairs an infeasible individual along the direction of a feasible reference point. In [7], Chootinan and Chen proposed gradient-based repair (GR), which uses the gradient information derived from the constraint set to reduce the constraint violation. Afterward, Koch et al. proposed a repair method (RI-2) [22] that uses the gradients of the constraints violated by an infeasible individual to form a parallelepiped and then randomly searches for feasible solutions within it. Recently, Spettel and Beyer developed a repair method for nonlinear constraints [34], and repair strategies have also been designed for specific applications [5]. Existing work [36], [4] has demonstrated that GR is effective for solving CCOPs. However, GR has two major drawbacks. First, it repairs an individual by moving it only along the gradient, so it has difficulty with individuals trapped in local optima. Second, each repair requires D fitness evaluations to estimate the gradient, where D is the problem dimension; hence, GR becomes expensive in high dimensions.
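For reference, the gradient-based repair idea can be sketched as follows: the Jacobian of the constraint-violation vector is estimated by forward differences (costing one evaluation per dimension, the D-evaluation cost noted above), and a Newton-like step through the Moore-Penrose pseudoinverse moves the point toward feasibility. The function names and the fixed iteration cap are illustrative assumptions, not the exact procedure of [7]:

```python
import numpy as np

def gradient_repair(x, constraints, eps=1e-6, max_iters=5):
    """Sketch of gradient-based repair: estimate the Jacobian of the
    violation vector by forward differences (one extra evaluation per
    dimension), then take a Newton-like pseudoinverse step."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_iters):
        dC = np.array([max(0.0, g(x)) for g in constraints])
        if not dC.any():
            break  # already feasible
        J = np.zeros((len(constraints), x.size))
        for j in range(x.size):  # forward-difference column j of the Jacobian
            xp = x.copy()
            xp[j] += eps
            J[:, j] = [(max(0.0, g(xp)) - dC[i]) / eps
                       for i, g in enumerate(constraints)]
        x = x - np.linalg.pinv(J) @ dC  # x <- x - J^+ * dC
    return x

# Repair the origin against g(x) = 1 - x0 - x1 <= 0.
g = lambda x: 1.0 - x[0] - x[1]
x_rep = gradient_repair([0.0, 0.0], [g])
print(x_rep)  # lands on the constraint boundary
```

The nested loop over dimensions is exactly where the D fitness evaluations per repair step are spent, which motivates the cheaper directional estimate used by RDR.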

In this paper, a new repair strategy, random direction repair (RDR), is proposed to deal with infeasible individuals and to find feasible regions. Overall, the contributions of this paper are listed as follows:

  1.

    The main idea of RDR is that an infeasible individual is repaired along a random direction. RDR can reduce the constraint violation of the infeasible individual within a certain range, based on the fact that a function with a non-zero directional derivative is monotonic in a local interval along that direction. The primary focus of this work is to determine the repair direction and to calculate the repair step size, and the details of their derivation are provided. An infeasible individual repaired by RDR has more opportunities to escape local optima, since randomness is introduced in RDR to generate an uncertain repair direction. Moreover, the function evaluation cost of RDR is low, because only one fitness evaluation is required to estimate each directional derivative.

  2.

    RDR can be embedded into various evolutionary constrained optimization algorithms, such as the two state-of-the-art EAs IUDE and MAg-ES. In this paper, RDR is first embedded into IUDE to generate offspring. Then, RDR is embedded into MAg-ES as an alternative to GR. The two RDR-based EAs are tested on the CEC 2017 constrained real-parameter optimization benchmark, which was also used in the CEC 2017 and 2018 competitions on constrained real-parameter optimization. The experimental results demonstrate that the performance of RDR is competitive.

The rest of this paper is organized as follows. Related work is introduced in Section 2. Section 3 describes the derivation and implementation of RDR. In Section 4, RDR is separately embedded into IUDE and MAg-ES. Section 5 shows experimental results and analysis. Then, the performance of RDR and GR is discussed in Section 6. Finally, Section 7 concludes this work and presents future considerations.


Related work

In this section, related work is discussed. First, the definition of a CCOP is shown. Then, the typical repair strategy, GR, and two popular constraint handling techniques are introduced. Finally, some related algorithms, including DE, IUDE, and MAg-ES, are reviewed. In this paper, we always assume that the objective of a CCOP is to find the feasible solution with the minimum value of the fitness function.

Random direction repair

In this section, a novel repair strategy based on the random direction, random direction repair (RDR), is proposed. We first derive the use of single and multiple random directions, then provide an implementation.
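A minimal single-direction sketch of the idea is given below: the directional derivative of the total constraint violation V along one random unit direction is estimated with a single extra evaluation, and the step size is the first-order zero crossing. The names and the step-size formula are assumptions for illustration; the paper's full derivation uses multiple directions and Eq. (26):

```python
import numpy as np

def rdr_step(V, x, eps=1e-6, rng=None):
    """Single-direction sketch of RDR: one extra evaluation estimates the
    directional derivative of the violation V along a random unit direction,
    then a first-order step is taken toward V = 0."""
    rng = rng or np.random.default_rng()
    d = rng.standard_normal(x.size)
    d /= np.linalg.norm(d)                # random unit direction
    v0 = V(x)
    dv = (V(x + eps * d) - v0) / eps      # one-evaluation derivative estimate
    if abs(dv) < 1e-12:
        return x                          # derivative ~ 0: skip this direction
    return x + (-v0 / dv) * d             # first-order zero-crossing step

# Linear violation V(x) = max(0, 1 - x0 - x1): one step repairs it exactly.
V = lambda x: max(0.0, 1.0 - x[0] - x[1])
x_new = rdr_step(V, np.array([0.0, 0.0]), rng=np.random.default_rng(0))
print(V(x_new))  # ~0
```

Compared with the D evaluations per step of a forward-difference gradient, this costs one evaluation per direction, which is the efficiency argument made for RDR.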

EAs with RDR

In this section, RDR is embedded into IUDE and MAg-ES in two different ways. On the one hand, RDR is used to generate trial vectors in IUDE. On the other hand, RDR is applied in MAg-ES as a repair strategy replacing GR.

Algorithm 2 RDR
1: function [x,evals]=RDR(x)
2: evals=0;
3: Calculate ΔV(x) by Eq. (3);
4: Generate q columns of the random direction matrix M;
5: Calculate Ψ in the direction matrix M;
6: Calculate S with Eq. (26);
7: Δx=MS;
8: Clamp the range of Δx;
9: x=x+Δx;
10: Calculate the

Benchmark problems

In this paper, we use the 28 CCOPs from the CEC 2017 constrained real-parameter optimization competition as benchmark problems [45]. For each CCOP, the maximum number of function evaluations is set to 20,000·D, where D ∈ {10, 30, 50, 100} is the problem dimension.

Algorithm 4 RDR-MA-ES
1: if mod(g,D)=0 and rand < 0.2
2:  h=1;
3:  while h ≤ τ and ϕ(xl(g)) > 0
4:   [xl(g),used_evals]=RDR(xl(g));
5:   evals=evals+used_evals;
6:   h=h+1;
7:  end while
8: end if
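The gating loop of Algorithm 4 can be sketched in Python as follows. The callables `rdr` and `phi` and the parameter names mirror the listing; the helper function itself is an illustrative assumption:

```python
import random

def maybe_repair(x, g, D, phi, rdr, tau=3, p=0.2, rng=None):
    """Gate from Algorithm 4: every D generations, with probability p,
    apply RDR to x at most tau times while phi(x) > 0 (x infeasible).
    Returns the (possibly) repaired point and the evaluations consumed."""
    rng = rng or random.Random()
    evals = 0
    if g % D == 0 and rng.random() < p:
        h = 1
        while h <= tau and phi(x) > 0:
            x, used = rdr(x)  # rdr returns (repaired x, evals it used)
            evals += used
            h += 1
    return x, evals

# Toy check: a mock "repair" that halves a scalar violation each call.
x, evals = maybe_repair(8.0, g=10, D=5, phi=lambda v: v,
                        rdr=lambda v: (v / 2, 1), tau=3, p=1.0)
print(x, evals)  # 1.0 3
```

The cap `tau` bounds the extra fitness evaluations spent on repair, and the probabilistic gate keeps repair from dominating the evolutionary search.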

The algorithm runs 25 times on each problem. The mean of the objective function

Discussions

In this section, RDR and GR are compared and analyzed according to the experimental results. Then, the Ackley function is used to illustrate the difference between RDR and GR.

The comparison results of RDR-MA-ES and MAg-ES are shown in Section 5.3, demonstrating that RDR is superior to GR. In fact, on the 10D problems, RDR has similar performance to GR but consumes fewer evaluations. On the other problems, the constraint space is more complicated, and RDR is more advantageous in dealing with the

Conclusion

In this paper, we propose RDR to repair infeasible solutions. RDR gradually reduces the constraint violation of an infeasible solution along a random direction. On the one hand, RDR introduces randomness so that individuals are less likely to become trapped in local optima. On the other hand, RDR requires only a few fitness evaluations to calculate the step size for each repair. Then, RDR is embedded into two state-of-the-art EAs, modified from MAg-ES and IUDE as RDR-MA-ES and RDR-IUDE,

CRediT authorship contribution statement

Peilan Xu: Methodology, Software, Writing - original draft. Wenjian Luo: Methodology, Writing - original draft, Funding acquisition, Project administration. Xin Lin: Software, Writing - review & editing. Yingying Qiao: Software, Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

References (47)

  • M. Asafuddoula et al.

    An adaptive constraint handling approach embedded MOEA/D

  • C. Bu et al.

    Continuous dynamic constrained optimization with ensemble of locating and tracking feasible regions strategies

    IEEE Trans. Evol. Comput.

    (2016)
  • C. Bu et al.

    Differential evolution with a species-based repair strategy for constrained optimization

  • Q. Chen et al.

    Evolutionary optimization under uncertainty: the strategies to handle varied constraints for fluid catalytic cracking operation

    IEEE Trans. Cybern.

    (2020)
  • R. Datta et al.

    Individual penalty based constraint handling using a hybrid bi-objective and penalty function approach

  • K. Deb et al.

    A hybrid evolutionary multi-objective and SQP based procedure for constrained optimization

  • G. D’Angelo, F. Palmieri, GGA: a modified genetic algorithm with gradient-based local search for solving constrained...
  • Z. Fan et al.

LSHADE44 with an improved ε constraint-handling method for solving constrained single-objective optimization problems

  • J. Gottlieb

    On the effectivity of evolutionary algorithms for the multidimensional knapsack problem

  • S.B. Hamida et al.

    An adaptive algorithm for constrained optimization problems

  • P. Hingston et al.

    Multi-level ranking for constrained multi-objective evolutionary optimisation

  • Huang, F.Z., Wang, L., He, Q., 2007. An effective co-evolutionary differential evolution for constrained optimization....
  • A. Isaacs, T. Ray, W. Smith, Blessings of maintaining infeasible solutions for constrained multi-objective optimization...
This work is supported by the National Natural Science Foundation of China (No. 61573327).