Dynamic group optimization algorithm with a mean–variance search framework

https://doi.org/10.1016/j.eswa.2021.115434

Highlights

  • The proportioned mean solution generator avoids fast shrinkage of the search space.

  • Two significantly different individuals are used as perturbation vectors to provide more useful information.

  • Population diversity is increased.

  • A lifespan selection operator is used to enhance the ability to escape local optima.

  • Comparative results on the welded beam design engineering problem show the promise of our algorithm for real-world applications.

Abstract

Dynamic group optimization has recently appeared as a novel algorithm developed to mimic animal and human socialising behaviours. Although the algorithm lends itself strongly to exploration and exploitation, it has two main drawbacks. The first is that the greedy strategy used in the dynamic group optimization algorithm guarantees that each generation of solutions is no worse than the previous one, but it decreases population diversity and limits searching ability. The second is that most of the information for updating the population is obtained from companions within each group, which leads to premature convergence and weakens the mutation operators. To overcome these two drawbacks, dynamic group optimization with a mean–variance search framework is proposed: an improved algorithm with a proportioned mean solution generator and a mean–variance Gaussian mutation. Solutions produced by the new proportioned mean solution generator consider not only their own group but are also affected by the current solution and the global situation. The mean–variance Gaussian mutation takes advantage of information from all group heads, rather than concentrating solely on the best solution or a single group. Experimental results on public benchmark test suites show that the proposed algorithm is effective and efficient. In addition, comparative results on the welded beam design engineering problem show the promise of our algorithm for real-world applications.

Introduction

Metaheuristic optimization algorithms (Gomes & de Almeida, 2020) are a class of stochastic global optimization algorithms. Many real-world problems are difficult to solve; for NP-hard problems in particular, optimal solutions cannot be found within an acceptable running time. To tackle such problems, simple and effective heuristic algorithms were proposed early on, such as the greedy algorithm (Cormen, Leiserson, Rivest, & Stein, 2001) and local search (Hutter, Hoos, & Stützle, 2007). The greedy algorithm makes the best choice available in the current state at each step, expecting thereby to obtain the globally best result in the end. Local search starts from an initial solution and then searches its neighbourhood; if a better solution exists, it moves to that solution, otherwise it retains the current one. Heuristic algorithms can find feasible solutions to a specific problem in acceptable time and space but cannot guarantee optimality. Because they always search in a local region of the search space, they often fail to obtain the global optimum and to meet real-world applications' precision requirements. Metaheuristic algorithms improve on heuristics by combining them with stochastic strategies. They usually employ random strategies in the search process to escape local optima and find the global optimum, such as the crossover operator in the genetic algorithm (GA) (Goldberg, 1989) and the Metropolis criterion in simulated annealing (SA) (Van Laarhoven & Aarts, 1987). GA, one of the best-known metaheuristic algorithms, is an optimization model simulating Darwinian evolution, first proposed by J. Holland in 1975. It is used in a wide variety of applications, including economics, medicine, and engineering. It was originally developed to mimic phenomena in evolutionary biology, including crossover, mutation, and natural selection, by which solutions are driven to evolve in a better direction: crossover and mutation generate new candidates, while selection keeps good solutions and discards bad ones, keeping the whole population healthy. Similarly, SA uses the Metropolis criterion to sample the states of a thermodynamic system and to decide whether a new solution is accepted. Consequently, even when the inputs of a metaheuristic algorithm are fixed, its outputs are not. Metaheuristic optimization algorithms can generally be divided into three groups. The first group is evolutionary algorithms, including GA (Goldberg, 1989) and differential evolution (DE) (Storn & Price, 1997). Inspired by the evolutionary process, these algorithms optimise their targets through repeated reproduction and selection. DE, proposed by Storn and Price in 1997, comprises initialization, mutation, crossover, and selection operators; unlike other algorithms, DE perturbs its search agents with differential information drawn from multiple search agents. The algorithm keeps the solution with the best objective function value and creates new solutions from existing solutions and operators.
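As a concrete illustration of the DE operators just described, the following minimal Python sketch runs one generation of the classical "DE/rand/1/bin" scheme; the function name and the parameter values F = 0.5 and CR = 0.9 are our illustrative choices, not settings taken from this paper.

```python
import numpy as np

def de_rand_1_bin(pop, fitness, f_obj, F=0.5, CR=0.9, rng=None):
    """One generation of classical DE ("DE/rand/1/bin")."""
    if rng is None:
        rng = np.random.default_rng()
    n, dim = pop.shape
    for i in range(n):
        # Mutation: perturb a random base vector with the scaled
        # difference of two other randomly chosen agents.
        a, b, c = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[a] + F * (pop[b] - pop[c])
        # Binomial crossover: mix the mutant with the current solution.
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True   # keep at least one mutant component
        trial = np.where(mask, mutant, pop[i])
        # Greedy selection: keep the trial only if it is no worse.
        f_trial = f_obj(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```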
The parameter settings of DE are challenging because different settings can lead to different, sometimes worse, results. Many improved DE algorithms have been proposed to address this problem. Self-adaptive DE (jDE) (Brest, Greiner, Boskovic, Mernik, & Zumer, 2006) replaces fixed control parameters with dynamic ones and is more widely applicable than classical DE. Strategy-adaptation DE (SaDE) (Qin, Huang, & Suganthan, 2009) builds on jDE by generating trial vectors and parameters learned from previous experience to match different phases of the search process. Adaptive DE with an optional external archive (JADE) (Zhang & Sanderson, 2009) adaptively updates control parameters via an external archive and the "DE/current-to-p-best" operator; automatic parameter updating improves robustness and removes the need for users' prior knowledge. The second group is swarm intelligence algorithms (Cui et al., 2017), currently in the spotlight in the optimization field. Particle swarm optimization (PSO) (Kennedy, 2011) is a well-known swarm intelligence algorithm that mimics animal behaviour to search for the global optimum. PSO is a population-based algorithm that moves search agents towards "better areas" based on their adaptation to the environment; it is a typical swarm intelligence algorithm in that it takes both an agent's own movement and the state of the entire population into account when the agent moves. The development of this class of algorithms has greatly advanced optimization research. The wolf search algorithm (WSA) (Tang, Fong, Yang, & Deb, 2012) is inspired by the way wolves search for food and survive by avoiding their enemies; comparison results show that WSA performs well on classical benchmark functions. The jellyfish algorithm (Chou & Truong, 2021) is another population-based algorithm; it has only two control parameters, which greatly reduces its complexity. The tunicate swarm algorithm (TSA) (Kaur, Awasthi, Sangal, & Dhiman, 2020) imitates the jet propulsion and swarm behaviours of tunicates during navigation and foraging. The sooty tern optimization algorithm (STOA) (Dhiman & Kaur, 2019) uses two-step actions to balance exploitation and exploration and has shown excellent benchmark results. These are all well-known swarm intelligence algorithms (de Vasconcelos Segundo et al., 2019, Pierezan et al., 2019). The third group comprises other algorithms, including the harmony search algorithm (Mahdavi, Fesanghary, & Damangir, 2007) and the dynamic group optimization (DGO) algorithm (Tang, Fong, Deb, & Wong, 2017). The harmony search algorithm first generates a group of initial solutions into the harmony memory, then searches for new solutions within the harmony memory with probability r and outside it with probability 1−r (a minimal sketch of this improvisation step follows).
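To make the r / (1 − r) rule of harmony search concrete, here is a minimal sketch of its improvisation step; the pitch-adjustment rate and bandwidth bw follow common harmony search conventions and are our assumptions rather than details taken from this paper.

```python
import numpy as np

def improvise(harmony_memory, r=0.9, pitch_rate=0.3, bw=0.05,
              bounds=(-1.0, 1.0), rng=None):
    """Build one new harmony from the memory, per the r / (1 - r) rule."""
    if rng is None:
        rng = np.random.default_rng()
    hms, dim = harmony_memory.shape
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < r:
            # With probability r: take the component from the memory ...
            new[d] = harmony_memory[rng.integers(hms), d]
            # ... and occasionally adjust it slightly (pitch adjustment).
            if rng.random() < pitch_rate:
                new[d] += bw * rng.uniform(-1.0, 1.0)
        else:
            # With probability 1 - r: sample outside the memory at random.
            new[d] = rng.uniform(*bounds)
    return np.clip(new, *bounds)
```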
Thanks to the openness of the open-source community and academia, more and more open-source packages and content are available, further facilitating the development of metaheuristic optimization algorithms. PySwarms (Miranda, 2018) is a well-known Python PSO package that provides a high-level declarative interface for implementing PSO and its applications. MetaheuristicAlgorithmPython is a Python package offering a suite of metaheuristic algorithms, including the harmony search algorithm, PSO, SA, and the firefly algorithm (Yang, 2010). Metaheuristic is a novel evolutionary-computation Python framework for rapid prototyping and testing. It can work with multiprocessing; however, it supports only five classical algorithms, such as harmony search and GA. Of these open-source packages, PySwarms is the most complete, providing full benchmark testing and the ability to produce relevant data visualizations.

DGO is a semi-evolutionary and semi-swarm intelligence algorithm that simulates animal behaviours and human societies. In DGO, each solution is a member of one of several groups, and the best solution in a group is recorded as its head. The worst member can dynamically transfer to a better group, that is, one whose head has a better objective function value; accordingly, the number of members in each group varies dynamically as generations pass. DGO achieves enhanced searching ability by combining swarm and evolutionary mechanisms. In DGO, three actions and one strategy are available: 1) intra-group cooperation, 2) inter-group communication, 3) group variation, and 4) the greedy strategy. Intra-group cooperation centres on local exploitation, where two mutations are used to search for a new solution around the group head. A solution is first generated by a mutation operator using the head of the current group as the base vector, with the current global best solution and a stochastically selected head as the perturbation vectors. Second, a new solution is produced by mutating the current solution using two stochastically selected solutions (both mutations are sketched below). In inter-group communication, the Lévy flight random walk is employed to produce new solutions from the current heads, and group variation addresses the loss of population diversity. In each iteration, the greedy strategy keeps a new solution only if it is better than the existing one in terms of fitness. DGO accelerates convergence and enhances searching ability by creating new communication channels among agents. In DGO, the swarm and evolutionary mechanisms are employed in the exploration and exploitation phases, respectively. Intra-group cooperation enhances local exploitation ability but can also quickly decrease population diversity, so premature convergence and inefficient searches may occur. Moreover, if the difference between the two perturbation vectors in a mutation is insignificant, the searching ability of DGO deteriorates. The greedy strategy also limits population diversity.
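Read as DE-style difference mutations, the two intra-group operators can be sketched as follows; the scale factor F and the exact combination are our assumptions, since the original DGO paper may weight the perturbation vectors differently.

```python
import numpy as np

def intra_group_mutations(x, head, global_best, heads, group, F=0.5, rng=None):
    """Sketch of DGO's two intra-group mutations described above."""
    if rng is None:
        rng = np.random.default_rng()
    # First mutation: the group head is the base vector; the global best
    # and a randomly selected head act as the perturbation vectors.
    rand_head = heads[rng.integers(len(heads))]
    trial1 = head + F * (global_best - rand_head)
    # Second mutation: the current solution is perturbed by the difference
    # of two randomly selected companions.
    a, b = rng.choice(len(group), 2, replace=False)
    trial2 = x + F * (group[a] - group[b])
    return trial1, trial2
```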

In this paper, to solve the premature convergence and inefficient search problems, we propose DGO with a mean–variance search framework (MvDGO), an improved algorithm with a proportioned mean solution generator (PM) and a mean–variance Gaussian mutation (MVG). PM is used in intra-group cooperation. Put simply, PM can be seen as a variant of a crossover operator: the base vector is computed as a random proportion of the current solution's corresponding head and the currently obtained global best solution, instead of the current solution as in a normal crossover, and the two parent vectors are the current solution and the mean value of all heads, instead of two vectors from the same group. The two vectors used in PM differ more significantly than two vectors selected from the same group. The new solution therefore not only considers its own group but is also affected by the current solution and the global situation. MVG replaces the mutations in DGO: a new solution is generated from a Gaussian distribution built over the group heads. MVG takes advantage of the information of all group heads rather than concentrating merely on the information from the best solution or a single group. A new lifespan selection operator is also used in the proposed MvDGO. The basic idea of lifespan is that if an individual cannot improve itself within a given number of trials, the search agent will 'decease' and be re-generated; otherwise, it continues the routine search. The MvDGO algorithm is compared with classical DGO and three suites of evolutionary algorithms through extensive experiments on benchmark datasets. Recently, more advanced metaheuristic algorithms have been proposed; therefore, to investigate our algorithm's performance in more detail, MvDGO is also compared with the four latest proposed algorithms on state-of-the-art benchmark functions. The results demonstrate that the proposed MvDGO improves optimization in terms of effectiveness and efficiency.
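The following sketch shows how we read the textual description of PM and MVG; the precise formulas appear in Section 3, so the scale factor F and the exact mixing below should be treated as illustrative assumptions.

```python
import numpy as np

def pm_operator(x, head, global_best, heads, F=0.5, rng=None):
    """Proportioned mean (PM) solution generator, sketched from the text."""
    if rng is None:
        rng = np.random.default_rng()
    # Base vector: a random proportion of the current solution's head
    # and the currently obtained global best solution.
    p = rng.random()
    base = p * head + (1.0 - p) * global_best
    # Parent vectors: the current solution and the mean of all heads.
    return base + F * (x - np.mean(heads, axis=0))

def mvg_operator(heads, rng=None):
    """Mean-variance Gaussian (MVG) mutation, sketched from the text."""
    if rng is None:
        rng = np.random.default_rng()
    mu = np.mean(heads, axis=0)      # centre: mean of all group heads
    sigma = np.std(heads, axis=0)    # spread: variance information of heads
    # Draw a new solution from a Gaussian built over all heads, so it uses
    # population-wide information rather than a single group's.
    return rng.normal(mu, sigma + 1e-12)
```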

Although DGO is well suited to exploitation and exploration, its local exploitation strategy relies mainly on two mutation operators concentrating on group members. The greedy strategy can evolve each generation of solutions over the previous generation; however, it can decrease population diversity and limit searching ability. Most information is obtained from companions within each group, which leads to premature convergence and weakened mutation operators. Therefore, PM and MVG obtain information not only from their own group but also from others, which avoids shrinking the search space rapidly.

Although metaheuristic algorithms can handle optimization problems, it remains to be shown whether they can handle real-world applications. Therefore, this study also considers the classical welded beam design problem to verify whether the proposed algorithm can solve real-world problems.
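For reference, the welded beam design problem minimizes the fabrication cost below over the weld and bar dimensions, subject to constraints on shear stress, bending stress, buckling load, end deflection, and geometry; the sketch gives only the standard cost function from the literature.

```python
def welded_beam_cost(h, l, t, b):
    """Fabrication cost of the classical welded beam design problem.

    h: weld thickness, l: weld length, t: bar height, b: bar thickness
    (all in inches, per the standard formulation in the literature).
    """
    # Cost = weld material term + bar stock term.
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)
```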

This work proffers five main contributions: 1) PM avoids shrinking the search space rapidly; 2) MVG uses two significantly different individuals as perturbation vectors to obtain more information and enhance searching ability; 3) population diversity is increased; 4) the algorithm's efficiency is improved; 5) a lifespan selection operator is adopted to escape local optima.

The rest of this paper is structured as follows. Section 2 describes the DGO and analyses its mechanisms. Section 3 explains the MvDGO algorithm in detail. Experimental verification of the MvDGO is given in Section 4. Section 5 concludes the paper.

Section snippets

Dynamic group optimization algorithm and mechanism analysis

The DGO was initially designed to solve optimization problems, inspired by natural group structures. In real life, the members of any working group, whether human social groups or groups of insects joining forces to achieve a common task, emerge, merge, expand, leave, or abort dynamically. DGO has three main actions. The first is intra-group cooperation, in which members mutate twice using the heads as the base vectors; this action strengthens the local exploitation ability. The second is

Dynamic group optimization algorithm with a mean–variance search framework

To address the premature diversity loss problem, PM, MVG and a lifespan mechanism are proposed to enhance DGO’s searching ability. Further details are given in the following subsections.
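Below is a minimal sketch of the lifespan idea, under our own naming (a per-agent stall counter and a max_stall budget); the exact trigger condition used in MvDGO is detailed in the subsections.

```python
import numpy as np

def lifespan_select(position, stall, improved, max_stall, lower, upper, rng=None):
    """Re-generate a search agent that has stagnated for too long."""
    if rng is None:
        rng = np.random.default_rng()
    stall = 0 if improved else stall + 1   # count consecutive failed trials
    if stall > max_stall:
        # The agent 'deceases' and is re-born at a random position,
        # which helps the population escape local optima.
        position = rng.uniform(lower, upper, size=np.shape(position))
        stall = 0
    return position, stall
```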

Experimental verification

Three up-to-date test suites were used to evaluate MvDGO's performance. These three test suites consisted of 53 benchmark functions in total (Liang et al., 2014, Liang et al., 2013). The test suites were extracted from CEC 2013, CEC 2015, and CEC 2019. The first test suite, f1–f28, consists of 28 shifted and rotated functions, where f1–f5 are unimodal, f6–f20 are multimodal, and f21–f28 are hybrid functions. The functions were originally designed for testing the optimization algorithms in

Discussion

The classical DGO algorithm has strong exploration and exploitation performance; however, it has two main drawbacks. The first is the greedy strategy. Although the greedy strategy can guarantee evolving a generation of solutions that are no worse than the previous generation, it can decrease the population diversity and consequently limit the searching ability. From our comparison, it is clear that the population diversity is limited in DGO due to intra-group cooperation focusing on the current

Conclusion

Metaheuristic optimization algorithms have attracted much attention and been actively researched in recent years. DGO is a recent metaheuristic algorithm that has shown its superiority; however, it has two main drawbacks: the greedy strategy decreases population diversity, and the intra-group cooperation action leads to premature convergence. In this paper, we propose a new metaheuristic MvDGO algorithm, which employs PM, MVG, and lifespan mechanisms to control the process to overcome drawbacks of

CRediT authorship contribution statement

Rui Tang: Conceptualization, Methodology, Software, Writing - original draft. Jie Yang: Investigation, Writing - original draft, Writing - review & editing. Simon Fong: Funding acquisition, Supervision. Raymond Wong: Writing - review & editing. Athanasios V. Vasilakos: Writing - review & editing. Yu Chen: Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

The authors are thankful for the financial support from the following research grants: 1) MYRG2016-00069, titled 'Nature-Inspired Computing and Metaheuristics Algorithms for Optimizing Data Mining Performance', offered by RDAO/FST, University of Macau and the Macau SAR government; 2) FDCT/126/2014/A3, titled 'A Scalable Data Stream Mining Methodology: Stream-based Holistic Analytics and Reasoning in Parallel', offered by FDCT of the Macau SAR government; 3) 71461016, Research on dimensions, influential factors and

References (37)

  • T.H. Cormen et al. Greedy algorithms. Introduction to Algorithms (2001)

  • Z. Cui et al. An improved PSO with time-varying accelerator coefficients

  • E.H. de Vasconcelos Segundo et al. Metaheuristic inspired on owls behavior applied to heat exchangers design. Thermal Science and Engineering Progress (2019)

  • L. dos Santos Coelho et al. Population's variance-based adaptive differential evolution for real parameter optimization

  • D. Goldberg. Genetic algorithms in search, optimization, and machine learning (1989)

  • G.F. Gomes et al. Tuning metaheuristic algorithms using mixture design: Application of sunflower optimization for structural damage identification. Advances in Engineering Software (2020)

  • Y.-J. Gong et al. Genetic learning particle swarm optimization. IEEE Transactions on Cybernetics (2016)

  • S. He et al. Group search optimizer: An optimization algorithm inspired by animal searching behavior. IEEE Transactions on Evolutionary Computation (2009)

1 Contributed equally to this work.

2 ORCID: 0000-0002-4466-316X.
