Cooperative differential evolution with fast variable interdependence learning and cross-cluster mutation
Introduction
Optimization problems arise in diverse fields such as mechanical engineering, compressed sensing, natural language processing, structure control, and bio-computing [1], [2], [3], [4], [5]. Researchers have to determine a set of model parameters or state variables that yield the minimum or maximum value of a predefined cost or objective function [6]. With the advent of the Internet of Things (IoT) [7], many optimization problems are becoming more difficult, i.e., they are characterized by more variables with complicated interactions. Research on optimization problems has attracted wide attention, and many algorithms have been proposed. Although existing optimizers have been shown to be successful on moderate scale problems, many of them still suffer from the “curse of dimensionality”: their performance deteriorates as the dimensionality of the problem increases [8], [9]. Effective and efficient algorithms for large scale optimization have therefore become essential. In this paper, we aim at solving large scale optimization problems and providing tools for scientists and engineers solving real world problems in the involved disciplines.
Generally, the natural way to address the “curse of dimensionality” is to apply cooperative optimization, which can be regarded as an automatic approach to implement the divide-and-conquer strategy. A typical cooperative optimization algorithm can be summarized as follows [10]:
- 1. Problem decomposition: decompose a large scale problem into smaller scale subproblems.
- 2. Subproblem optimization: optimize each subproblem by means of a separate optimizer.
- 3. Cooperative coordination: combine the subsolutions to obtain an entire solution.
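The three-step loop above can be sketched in Python. The perturbation-based subproblem optimizer, the grouping, and all parameter values here are illustrative assumptions, not the method proposed in this paper:

```python
import random

def cooperative_optimize(f, groups, bounds, iters=100):
    """Divide-and-conquer loop: each variable group (subproblem) is optimized
    in turn while the rest of the context vector stays fixed."""
    # Context vector: the current best complete solution.
    context = [random.uniform(lo, hi) for lo, hi in bounds]
    for _ in range(iters):
        for group in groups:
            # 2. Subproblem optimization: here a simple Gaussian perturbation
            # of the group's variables stands in for a real sub-optimizer.
            trial = context[:]
            for i in group:
                lo, hi = bounds[i]
                step = random.gauss(0, 0.1 * (hi - lo))
                trial[i] = min(hi, max(lo, context[i] + step))
            # 3. Cooperative coordination: keep the subsolution if the
            # combined full solution improves the objective.
            if f(trial) < f(context):
                context = trial
    return context

sphere = lambda x: sum(v * v for v in x)
best = cooperative_optimize(sphere, groups=[[0, 1], [2, 3]], bounds=[(-5, 5)] * 4)
```

Each subproblem only sees its own variables; the shared context vector is what turns the independent sub-optimizations into a solution of the full problem.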
A key issue in cooperative optimization is problem decomposition. An appropriate decomposition algorithm should group interacting variables together so that the interdependencies among different subproblems are minimized. Based on whether variable interdependencies are considered, decomposition algorithms can be classified into two categories. Algorithms that ignore variable interdependencies are simple and effective for separable problems, but have difficulty solving nonseparable problems [10], [11], [12], [13], [14], [15], [16]. On the other hand, algorithms that account for variable interdependencies offer opportunities to solve large scale nonseparable problems [17], [18], [19], [20], [21], [22], [23], [24]. However, many of them either add extra computational burden to the algorithm or lack extensive variable interdependence learning.
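A common way to test whether two variables interact, in the spirit of learning-based decomposition (though not necessarily the operator proposed in this paper), is to check whether the effect of perturbing one variable depends on the value of another:

```python
def interacts(f, x, i, j, delta=1.0, eps=1e-9):
    """Pairwise interaction test: variables i and j interact if the change
    in f caused by perturbing x[i] depends on the value of x[j]."""
    def perturb(v, k, d):
        w = v[:]
        w[k] += d
        return w
    # Effect of perturbing x[i] at the base point.
    d1 = f(perturb(x, i, delta)) - f(x)
    # Effect of the same perturbation after x[j] has been shifted.
    xj = perturb(x, j, delta)
    d2 = f(perturb(xj, i, delta)) - f(xj)
    return abs(d1 - d2) > eps

base = [0.0, 0.0, 0.0]
f_sep = lambda x: x[0] ** 2 + x[1] ** 2 + x[2] ** 2  # fully separable
f_non = lambda x: (x[0] * x[1]) ** 2 + x[2] ** 2     # x0 and x1 interact
```

Applying this test to all pairs costs O(n^2) objective evaluations, which is exactly the computational burden that faster schemes, such as the recursive block-based learning proposed here, try to avoid.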
Another key issue in cooperative optimization is the optimization of the subproblems. The widely used optimizers are inspired by natural phenomena and include the genetic algorithm (GA) [25], evolutionary programming (EP) [26], [27], evolution strategies (ES) [28], [29], differential evolution (DE) [6], [30], ant colony optimization (ACO) [31], particle swarm optimization (PSO) [32], [33], [34], [35], [36], [37], bacterial foraging optimization (BFO) [38], simulated annealing (SA) [39], tabu search (TS) [40], harmony search (HS) [35], [36], [40], etc. These optimizers have facilitated research into the optimization of the subproblems. However, many of them still suffer from premature convergence on complex multi-modal, rugged, and nonseparable problems.
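As background for the DE-based optimizer used later, the classic DE/rand/1 mutation and binomial crossover can be sketched as follows. Parameter names F and CR follow common DE notation; the values here are illustrative defaults, not the configuration used in this paper:

```python
import random

def de_rand_1(pop, F=0.5):
    """DE/rand/1 mutation: mutant = x_r1 + F * (x_r2 - x_r3),
    built from three distinct randomly chosen individuals."""
    r1, r2, r3 = random.sample(range(len(pop)), 3)
    return [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]

def binomial_crossover(target, mutant, CR=0.9):
    """Mix target and mutant genes; j_rand guarantees that at least
    one gene is inherited from the mutant."""
    j_rand = random.randrange(len(target))
    return [m if (random.random() < CR or j == j_rand) else t
            for j, (t, m) in enumerate(zip(target, mutant))]

pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
mutant = de_rand_1(pop)
trial = binomial_crossover(pop[0], mutant)
```

In the full DE loop, the trial vector replaces the target if and only if it has better fitness; which individuals serve as r1, r2, r3 is exactly where mutation strategies differ in their exploration/exploitation bias.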
As can be seen from the above, as far as cooperative optimization algorithms are concerned, there is still considerable room to improve their performance through deeper study. In this paper, we propose a variant of the cooperative optimization algorithm that concentrates on the two issues above. For problem decomposition, we propose a fast variable interdependence searching operator, which recursively partitions the decision variables into blocks and identifies the interdependencies among different blocks. We then decompose a large scale problem into small scale subproblems based on the obtained interdependencies. For subproblem optimization, we propose a cross-cluster mutation strategy, in which each operator is identified as exploration-biased or exploitation-biased. The population is divided into several clusters: exploration-biased operators are applied to individuals from different clusters, while exploitation-biased operators are applied to individuals within each cluster. By favoring search in the vicinity of each cluster as well as in the regions between clusters, this strategy promotes efficient exploration as well as efficient exploitation. We incorporate the proposed strategies into the original differential evolution algorithm to perform optimization. We adopt differential evolution as the base optimizer because it has been widely adopted and its variants have achieved top ranks in various competitions [30].
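The cross-cluster idea described above can be illustrated with a minimal sketch. The clustering rule, the choice of donor individuals, and all parameter values below are our own simplified assumptions for illustration, not the paper's exact operators:

```python
import random

def cluster_population(pop, k):
    """Crude clustering: sort indices by the first coordinate and split into
    k equal chunks (a stand-in for a real clustering step)."""
    order = sorted(range(len(pop)), key=lambda i: pop[i][0])
    size = len(pop) // k
    return [order[i * size:(i + 1) * size] for i in range(k)]

def cross_cluster_mutant(pop, clusters, F=0.8):
    """Exploration-biased: the difference vector spans two different clusters,
    pushing the mutant into the region between them."""
    ca, cb = random.sample(range(len(clusters)), 2)
    base = pop[random.choice(clusters[ca])]
    b = pop[random.choice(clusters[cb])]
    c = pop[random.choice(clusters[ca])]
    return [x + F * (y - z) for x, y, z in zip(base, b, c)]

def within_cluster_mutant(pop, cluster, fitness, F=0.5):
    """Exploitation-biased: perturb around the best individual of one cluster."""
    best = min(cluster, key=lambda i: fitness[i])
    r1, r2 = random.sample(cluster, 2)
    return [x + F * (y - z) for x, y, z in zip(pop[best], pop[r1], pop[r2])]

random.seed(1)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(8)]
fitness = [sum(v * v for v in x) for x in pop]
clusters = cluster_population(pop, 2)
explore = cross_cluster_mutant(pop, clusters)
exploit = within_cluster_mutant(pop, clusters[0], fitness)
```

The design intent is visible even in this toy version: cross-cluster differences are large and steer the search into unexplored regions, while within-cluster differences are small and refine the neighborhood of each cluster's best individual.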
The remainder of this paper is organized as follows. Section 2 reviews related work on cooperative optimization and differential evolution. Section 3 describes the proposed algorithm, including the fast variable interdependence learning method and the cross-cluster mutation strategy. Section 4 presents the experimental results, followed by concluding remarks in Section 5.
Related works
This work is closely related to cooperative optimization and differential evolution. In this section, we first review the prevailing algorithms for cooperative optimization, and then review state-of-the-art differential evolution algorithms.
Problem decomposition
In this section, we aim at solving the task of problem decomposition. Details are provided on variable interdependence, the fast variable interdependence learning algorithm, and the problem decomposition method.
Differential evolution with cross-cluster mutation
In this section, we aim at solving the task of subproblem optimization. The exploration and exploitation tendencies of DE mutation operators are studied, followed by the cross-cluster mutation strategies.
Test functions
In this section, experiments are conducted to test the performance of the proposed algorithm. The CEC2008, CEC2010, and CEC2013 test suites are selected as benchmarks; they consist of shifted, rotated, expanded, and combined variants of basic functions. The CEC2008 suite has seven functions, two of which are unimodal and the rest multimodal. The CEC2010 suite has 20 functions. Among the functions, three of them are separable, 15 of them are partially separable, and two of
Conclusions
To solve large scale optimization problems, one of the most common ways is to adopt the cooperative optimization strategies. The decomposition of the problem and the optimization of the subproblems are critical issues with regard to cooperative optimization. The main contributions of this paper included the following two aspects. Firstly, to perform the problem decomposition tasks, a fast learning strategy was proposed to capture the interdependencies among different variables, with which a
Acknowledgements
The authors would like to thank the support of the National Natural Science Foundation of China (61103146, 61272207, 61402076, 61202306), Startup Fund for the Doctoral Program of Liaoning Province (20141023), the Open Project of Shanghai Key Laboratory of Trustworthy Computing (07dz22304201301), the Fundamental Research Funds for the Central Universities (DUT12RC(3)72, DUT14QY06), the Open Project of the Key Laboratory of Ministry of Education (2014ACOCP02), and the Research Promotion
References
- Large scale evolutionary optimization using cooperative coevolution, Inf. Sci. (2008)
- A cooperative particle swarm optimizer with statistical variable interdependence learning, Inf. Sci. (2012)
- Convergence results for the (1, λ)-SA-ES using the theory of φ-irreducible Markov chains, Theor. Comput. Sci. (2005)
- How the (1 + 1) ES using isotropic mutations minimizes positive definite quadratic forms, Theor. Comput. Sci. (2006)
- Dynamic multi-swarm particle swarm optimizer with harmony search, Expert Syst. Appl. (2011)
- A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization, Inf. Sci. (2012)
- A study on scale factor in distributed differential evolution, Inf. Sci. (2011)
- Global Optimization (1989)
- Differential evolution algorithm with ensemble of parameters and mutation strategies, Appl. Soft Comput. (2011)
- Large scale global optimization using self-adaptive differential evolution algorithm
- MA-SW-chains: memetic algorithm based on local search chains for large scale continuous global optimization
- Self-adaptive differential evolution with multi-trajectory search for large scale optimization, Soft Comput.
- Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput.
- A flocking-based paradigm for hierarchical cyber-physical smart grid modeling and control, IEEE Trans. Smart Grid
- A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput.
- A survey on unsupervised outlier detection in high-dimensional numerical data, Stat. Anal. Data Min.
- Cooperative coevolution: an architecture for evolving coadapted subcomponents, Evol. Comput.
- A cooperative coevolutionary approach to function optimization
- The Design and Analysis of a Computational Model of Cooperative Coevolution (PhD thesis)
- Scaling up fast evolutionary programming with cooperative coevolution
- Cooperative co-evolutionary differential evolution for function optimization
- Large scale global optimization using differential evolution with self-adaptation and cooperative co-evolution
- A blended population approach to cooperative coevolution for decomposition of complex problems
- Multilevel cooperative coevolution for large scale optimization
- Tackling high dimensional nonseparable optimization problems by cooperatively coevolving particle swarms
- On the improvement of coevolutionary optimizers by learning variable interdependencies
- A cooperative coevolutionary algorithm with correlation based adaptive variable partitioning
- A surrogate-assisted memetic co-evolutionary algorithm for expensive constrained optimization problems