Elsevier

Applied Soft Computing

Volume 36, November 2015, Pages 300-314
Cooperative differential evolution with fast variable interdependence learning and cross-cluster mutation

https://doi.org/10.1016/j.asoc.2015.07.016

Abstract

Cooperative optimization algorithms have been applied successfully to many optimization problems. However, many of them lose their effectiveness and advantages when solving large scale and complex problems, e.g., those with interacting variables. A key issue in cooperative optimization is problem decomposition. In this paper, a fast search operator is proposed to capture the interdependencies among variables; problem decomposition is then performed based on the obtained interdependencies. Another key issue is the optimization of the subproblems. A cross-cluster mutation strategy is proposed to further enhance both exploitation and exploration. More specifically, each operator is identified as exploitation-biased or exploration-biased. The population is divided into several clusters: for individuals within each cluster, the exploitation-biased operators are applied; for individuals from different clusters, the exploration-biased operators are applied. The proposed operators are incorporated into the original differential evolution algorithm. Experiments were carried out on the CEC2008, CEC2010, and CEC2013 benchmarks. For comparison, six algorithms that achieved top-ranked results in the CEC competitions were selected. The comparison results demonstrate that the proposed algorithm is robust and comprehensive for large scale optimization problems.

Introduction

Optimization problems are rife in diverse fields such as mechanical engineering, compressed sensing, natural language processing, structure control, and bio-computing [1], [2], [3], [4], [5]. Researchers have to determine a set of model parameters or state variables that provide the minimum or maximum value of a predefined cost or objective function [6]. With the advent of the Internet of Things (IoT) [7], many optimization problems are becoming more difficult, i.e., they are characterized by more variables with complicated interactions. Research on optimization problems has attracted much attention, and many algorithms have been proposed. Although existing optimizers have been shown to be successful in solving moderate scale problems, many of them still suffer from the "curse of dimensionality": their performance deteriorates as the dimensionality of the problem increases [8], [9]. Effective and efficient algorithms for large scale optimization have thus become essential. In this paper, we aim at solving large scale optimization problems and providing tools for scientists and engineers solving real world problems in the involved disciplines.

Generally, the natural way to address the “curse of dimensionality” is to apply cooperative optimization, which can be regarded as an automatic approach to implement the divide-and-conquer strategy. A typical cooperative optimization algorithm can be summarized as follows [10]:

  • 1. Problem decomposition: decompose a large scale problem into smaller scale subproblems.

  • 2. Subproblem optimization: optimize each subproblem by means of a separate optimizer.

  • 3. Cooperative coordination: combine the subsolutions to obtain an entire solution.
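The three steps above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the fixed-size grouping, the greedy random local search used as sub-optimizer, and the `sphere` objective are all placeholder assumptions standing in for the learned decomposition and the DE-based sub-optimizer described later.

```python
import random

def sphere(x):
    """Separable placeholder objective: sum of squares."""
    return sum(v * v for v in x)

def decompose(dim, group_size):
    """Step 1: split variable indices into fixed-size groups (placeholder)."""
    idx = list(range(dim))
    return [idx[i:i + group_size] for i in range(0, dim, group_size)]

def optimize_subproblem(f, solution, group, iters=200):
    """Step 2: greedy random search on one group; other variables stay frozen."""
    best = list(solution)
    for _ in range(iters):
        cand = list(best)
        for j in group:
            cand[j] += random.gauss(0, 0.1)
        if f(cand) < f(best):
            best = cand
    return best

def cooperative_optimize(f, dim=8, group_size=2, cycles=10):
    """Step 3: subsolutions cooperate through one shared context vector."""
    solution = [random.uniform(-5, 5) for _ in range(dim)]
    for _ in range(cycles):
        for g in decompose(dim, group_size):
            solution = optimize_subproblem(f, solution, g)
    return solution

best = cooperative_optimize(sphere)
print(sphere(best))  # typically close to 0 for this separable function
```

Because `sphere` is fully separable, any grouping works; the decomposition question only becomes hard once variables interact, which is the subject of the next sections.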

A key issue with regard to cooperative optimization is the task of problem decomposition. An appropriate decomposition algorithm should group interacting variables together so that the interdependencies among different subproblems are minimized. Based on whether variable interdependencies are considered, decomposition algorithms can be classified into two categories. Algorithms that ignore variable interdependencies are simple and effective for separable problems, but have difficulty solving nonseparable problems [10], [11], [12], [13], [14], [15], [16]. On the other hand, algorithms that consider variable interdependencies provide opportunities to solve large scale nonseparable problems [17], [18], [19], [20], [21], [22], [23], [24]. However, many of them either add extra computational burden to the algorithm or lack extensive variable interdependence learning.

Another key issue with regard to cooperative optimization is the optimization of the subproblems. The widely used optimizers are inspired by natural phenomena and include the genetic algorithm (GA) [25], evolutionary programming (EP) [26], [27], evolution strategies (ES) [28], [29], differential evolution (DE) [6], [30], ant colony optimization (ACO) [31], particle swarm optimization (PSO) [32], [33], [34], [35], [36], [37], bacterial foraging optimization (BFO) [38], simulated annealing (SA) [39], tabu search (TS) [40], harmony search (HS) [35], [36], [40], etc. These optimizers have facilitated research into the optimization of the subproblems. However, many of them are still not free from premature convergence on complex multimodal, rugged, and nonseparable problems.
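For reference, the classic DE scheme that later sections build on can be sketched as DE/rand/1/bin: each target vector is perturbed by the scaled difference of two random population members, mixed with the target by binomial crossover, and replaced only if the trial is no worse. The parameter values and the sphere objective below are conventional defaults, not taken from the paper.

```python
import random

def de_rand_1_bin(f, dim=5, pop_size=20, F=0.5, CR=0.9, gens=100, bounds=(-5, 5)):
    """Classic DE with rand/1 mutation and binomial crossover (sketch)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # rand/1 mutation: base vector plus one scaled difference vector
            r1, r2, r3 = random.sample([k for k in range(pop_size) if k != i], 3)
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            # binomial crossover; jrand guarantees at least one mutant gene
            jrand = random.randrange(dim)
            trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # greedy one-to-one selection
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i] = trial, ft
    return min(zip(fit, pop))

best_fit, best_x = de_rand_1_bin(lambda x: sum(v * v for v in x))
```

The greedy one-to-one selection is what makes DE elitist at the individual level; premature convergence arises when the difference vectors collapse before the basin of the global optimum is found.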

As can be seen from the above, as far as cooperative optimization algorithms are concerned, there is still considerable room to improve their performance through deeper study. In this paper, we propose a variant of the cooperative optimization algorithm that concentrates on the two issues above. To address problem decomposition, we propose a fast variable interdependence search operator, which recursively partitions the decision variables into blocks and identifies the interdependencies among different blocks. A large scale problem is then decomposed into small scale subproblems based on the obtained interdependencies. To address subproblem optimization, we propose a cross-cluster mutation strategy, in which each operator is identified as exploration-biased or exploitation-biased. The population is divided into several clusters. For individuals from different clusters, the exploration-biased operators are applied; for individuals within each cluster, the exploitation-biased operators are applied. By favoring search in the vicinity of each cluster as well as in the regions between clusters, this strategy promotes efficient exploration as well as efficient exploitation. We further incorporate the proposed strategies into the original differential evolution algorithm to perform optimization. We adopt differential evolution as the base optimizer because it has been widely adopted and its variants have achieved top ranks in various competitions [30].

The remainder of this paper is organized as follows. Section 2 reviews related work on cooperative optimization and differential evolution. Section 3 describes the proposed algorithm, including the fast variable interdependence learning method and the cross-cluster mutation strategy. Section 4 presents the experimental results, followed by concluding remarks in Section 5.

Section snippets

Related works

This work is closely related to cooperative optimization and differential evolution. In this section, we first review the prevailing algorithms for cooperative optimization, and then review the state-of-the-art algorithms for differential evolution.

Problem decomposition

In this section, we aim at solving the task of problem decomposition. Details regarding variable interdependence, the fast variable interdependence learning algorithm, and the problem decomposition method are provided.
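The core idea behind interdependence learning can be illustrated with a simple pairwise check (in the spirit of differential-grouping-style detection, not the paper's recursive block-partitioning operator, which tests whole blocks of variables at once): two variables interact if perturbing one changes the effect that perturbing the other has on the objective. The function names and the `delta`/`eps` thresholds below are illustrative assumptions.

```python
def interacts(f, x, i, j, delta=1.0, eps=1e-9):
    """Return True if variables i and j appear interdependent at point x."""
    x1 = list(x)
    x2 = list(x); x2[i] += delta
    d1 = f(x2) - f(x1)            # effect of moving x_i
    x1[j] += delta; x2[j] += delta
    d2 = f(x2) - f(x1)            # same move, after shifting x_j
    return abs(d1 - d2) > eps

# A separable and a nonseparable toy objective:
sep = lambda x: x[0] ** 2 + x[1] ** 2
nonsep = lambda x: (x[0] + x[1]) ** 2

print(interacts(sep, [0.0, 0.0], 0, 1))     # False: x_0 and x_1 are independent
print(interacts(nonsep, [0.0, 0.0], 0, 1))  # True: the cross term couples them
```

Checking every pair this way costs O(n^2) objective evaluations, which is exactly the overhead that a recursive block-level search aims to avoid by testing groups of variables together.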

Differential evolution with cross-cluster mutation

In this section, we aim at solving the task of subproblem optimization. The exploration and exploitation tendencies of DE mutation operators are studied, followed by the cross-cluster mutation strategies.
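One way the described strategy could be organized is sketched below. Everything here is an illustrative assumption rather than the paper's actual operators: the nearest-exemplar assignment is a crude stand-in for a proper clustering step, and the two mutation helpers merely caricature "exploitation-biased within a cluster" (small differential step among cluster mates) versus "exploration-biased across clusters" (step toward a member of another cluster).

```python
import random

def cluster(pop, k=3):
    """Assign each individual to the nearest of k random exemplars
    (a crude stand-in for a proper clustering algorithm)."""
    centers = random.sample(pop, k)
    groups = [[] for _ in range(k)]
    for x in pop:
        ci = min(range(k),
                 key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centers[c])))
        groups[ci].append(x)
    return groups

def mutate_within(x, mates, F=0.5):
    """Exploitation-biased move: small differential step using cluster mates."""
    a, b = random.sample(mates, 2) if len(mates) >= 2 else (x, x)
    return [xi + F * (ai - bi) for xi, ai, bi in zip(x, a, b)]

def mutate_across(x, other_cluster, F=0.8):
    """Exploration-biased move: differential step toward another cluster."""
    a = random.choice(other_cluster)
    return [xi + F * (ai - xi) for xi, ai in zip(x, a)]

pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(12)]
groups = [g for g in cluster(pop) if g]          # drop empty clusters
inside = mutate_within(groups[0][0], groups[0])  # search near one cluster
across = mutate_across(groups[0][0], groups[-1]) # search between clusters
```

The within-cluster step stays small because cluster mates are close together, while the cross-cluster step can jump between basins; mixing the two is what balances exploitation against exploration.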

Test functions

In this section, experiments are conducted to test the performance of the proposed algorithm. In the experiments, the CEC2008, CEC2010, and CEC2013 suites are selected as benchmarks; their functions are shifted, rotated, expanded, and combined variants of the basic functions. CEC2008 has seven functions: two of them are unimodal and the others are multimodal. CEC2010 has 20 functions: three of them are separable, 15 of them are partially separable, and two of them are fully nonseparable.

Conclusions

One of the most common ways to solve large scale optimization problems is to adopt cooperative optimization strategies. The decomposition of the problem and the optimization of the subproblems are critical issues with regard to cooperative optimization. The main contributions of this paper include the following two aspects. Firstly, to perform the problem decomposition task, a fast learning strategy was proposed to capture the interdependencies among different variables, with which a large scale problem was decomposed into small scale subproblems.

Acknowledgements

The authors would like to thank the support of the National Natural Science Foundation of China (61103146, 61272207, 61402076, 61202306), Startup Fund for the Doctoral Program of Liaoning Province (20141023), the Open Project of Shanghai Key Laboratory of Trustworthy Computing (07dz22304201301), the Fundamental Research Funds for the Central Universities (DUT12RC(3)72, DUT14QY06), the Open Project of the Key Laboratory of Ministry of Education (2014ACOCP02), and the Research Promotion

References (57)

  • D. Molina et al., MA-SW-chains: memetic algorithm based on local search chains for large scale continuous global optimization

  • S.Z. Zhao et al., Self-adaptive differential evolution with multi-trajectory search for large scale optimization, Soft Comput. (2011)

  • S. Das et al., Differential evolution: a survey of the state-of-the-art, IEEE Trans. Evol. Comput. (2011)

  • W. Jin et al., A flocking-based paradigm for hierarchical cyber-physical smart grid modeling and control, IEEE Trans. Smart Grid (2014)

  • F. van den Bergh et al., A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput. (2004)

  • A. Zimek et al., A survey on unsupervised outlier detection in high-dimensional numerical data, Stat. Anal. Data Min. (2012)

  • M. Potter et al., Cooperative coevolution: an architecture for evolving coadapted subcomponents, Evol. Comput. (2000)

  • M. Potter et al., A cooperative coevolutionary approach to function optimization

  • M. Potter, The Design and Analysis of a Computational Model of Cooperative Coevolution (PhD thesis) (1997)

  • Y. Liu et al., Scaling up fast evolutionary programming with cooperative coevolution

  • Y. Shi et al., Cooperative co-evolutionary differential evolution for function optimization

  • A. Zamuda et al., Large scale global optimization using differential evolution with self-adaptation and cooperative co-evolution

  • D. Sofge et al., A blended population approach to cooperative coevolution for decomposition of complex problems

  • Z. Yang et al., Multilevel cooperative coevolution for large scale optimization

  • X. Li et al., Tackling high dimensional nonseparable optimization problems by cooperatively coevolving particle swarms

  • K. Weicker et al., On the improvement of coevolutionary optimizers by learning variable interdependencies

  • T. Ray et al., A cooperative coevolutionary algorithm with correlation based adaptive variable partitioning

  • C.K. Goh et al., A surrogate-assisted memetic co-evolutionary algorithm for expensive constrained optimization problems
