Group teaching optimization algorithm: A novel metaheuristic method for solving global optimization problems
Introduction
Optimization problems arise in almost all engineering fields, so the development of optimization techniques is essential for engineering applications and remains an active research direction. Most conventional optimization techniques require gradient information and hence cannot be applied to non-differentiable functions (Rao, Savsani & Vakharia, 2012). Moreover, such techniques usually get trapped in a local optimum when solving complex optimization problems with many local optima (Eskandar, Sadollah, Bahreininejad & Hamdi, 2012; Mirjalili, 2016). Many real-world engineering optimization problems are exactly of this kind: their objective functions usually have more than one local optimum (Cheng & Prayogo, 2014; Zhang, Jin & Chen, 2019b). These drawbacks of conventional optimization techniques have encouraged researchers to develop better optimization methods for real-world engineering optimization problems.
Interest in metaheuristic methods has grown steadily over the last two decades. Research worldwide in the metaheuristic field has produced optimization approaches that outperform conventional ones (Cheng & Prayogo, 2014; Rao et al., 2012). At present, metaheuristic methods have been applied successfully to many engineering optimization problems, such as multi-robot path planning (Nazarahari, Khanmirza & Doostie, 2019), unmanned aerial vehicle navigation (Kuroki, Young & Haupt, 2010), opinion leader detection in online social networks (Jain, Katarya & Sachdeva, 2020), identification of influential users in social networks (Zareie, Sheikhahmadi & Jalili, 2020), deployment of unmanned aerial vehicles (Wang, Ru, Wang & Huang, 2019), data collection in the Internet of Things (Huang, Wang, Wang & Yang, 2019) and localization in wireless sensor networks (Liao, Kao & Li, 2011). Metaheuristic methods commonly operate by combining defined rules with randomness to simulate natural phenomena (Eskandar et al., 2012; Lee & Geem, 2005). Although every metaheuristic algorithm has its own characteristics, all of them share three common features: they are nature-inspired, they involve randomness and they have adjustable parameters. To the best of our knowledge, most research on the classification of metaheuristic algorithms focuses on inspiration sources (Eskandar et al., 2012; Kaveh & Bakhshpoori, 2016; Mirjalili, 2016; Zhang, Xiao, Gao & Pan, 2018a), and no attempt to classify metaheuristic algorithms by the types of their adjustable parameters has been reported in the literature. We therefore next classify metaheuristic algorithms according to their adjustable parameter types.
The adjustable parameters of metaheuristic algorithms generally comprise common parameters and special parameters. Common parameters are essential for every metaheuristic algorithm and usually include the population size and the stopping criterion (e.g. the maximum number of function evaluations or the maximum number of iterations); we call these Type I parameters. Special parameters reflect the features of the algorithms themselves and fall into two categories. Some must be set in the initialization phase of an algorithm, such as the crossover and mutation rates in differential evolution (Storn & Price, 1997) and the inertia weight in particle swarm optimization (Shi & Eberhart, 1998); we call these Type II parameters. Others are tied to the Type I parameters, like the control parameter α in the grey wolf optimizer (Mirjalili, Mirjalili & Lewis, 2014), which decreases with the iteration counter; we call these Type III parameters. Based on these parameter types, metaheuristic algorithms can be divided as follows: (1) Type I parameter-based algorithms, such as teaching-learning-based optimization (TLBO) (Rao, Savsani & Vakharia, 2011) and the neural network algorithm (NNA) (Sadollah, Sayyaadi & Yadav, 2018); (2) Type I and II parameter-based algorithms, such as particle swarm optimization (PSO) (Shi & Eberhart, 1998), differential evolution (DE) (Storn & Price, 1997), harmony search (Lee & Geem, 2005), biogeography-based optimization (Simon, 2008) and cuckoo search (CS) (Yang & Deb, 2009); (3) Type I and III parameter-based algorithms, such as the grey wolf optimizer (GWO) (Mirjalili et al., 2014), the whale optimization algorithm (WOA) (Mirjalili & Lewis, 2016), the sine cosine algorithm (SCA) (Mirjalili, 2016) and the salp swarm algorithm (SSA) (Mirjalili et al., 2017); (4) Type I, II and III parameter-based algorithms, such as the water cycle algorithm (WCA) (Eskandar et al., 2012).
Although many metaheuristic algorithms have been used successfully to solve different types of optimization problems, we believe it is necessary to develop simpler and more efficient Type I parameter-based metaheuristic algorithms, for the following three reasons.
Firstly, most existing metaheuristic algorithms involve Type II or III parameters. Although well-chosen Type II and III parameters benefit the optimization performance of these algorithms, an ideal metaheuristic algorithm should avoid such parameters because of their drawbacks. The major disadvantages of metaheuristic algorithms with Type II parameters can be summarized as follows: (1) for an unknown optimization problem, determining the optimal values of these parameters is very hard; (2) different optimization problems have different characteristics and usually require different optimal parameter values. For instance, the discovery probability in CS is a Type II parameter employed to balance exploration and exploitation. Its authors set it to 0.25 (Yang & Deb, 2014), which means exploration and exploitation take about 0.75 and 0.25 of the total search time, respectively. However, variants of CS with an adaptive discovery probability (Mlakar, Fister & Fister, 2016; Rakhshani & Rahati, 2017; Valian, Tavakoli, Mohanna & Haghi, 2013; Wang & Zhou, 2016) have proven more effective than the original CS on some practical problems.
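The adaptive CS variants cited above replace the fixed discovery probability with a schedule that changes over the run. As a hedged illustration (the linear schedule and the bounds pa_max = 0.5 and pa_min = 0.01 are assumptions made for this sketch, not taken from any single cited variant), a fixed Type II parameter can be turned into an evaluation-dependent one:

```python
def adaptive_pa(t, t_max, pa_max=0.5, pa_min=0.01):
    """Linearly decrease the cuckoo-search discovery probability.

    A large pa abandons more nests (more exploration); shrinking it over
    the run shifts the search toward exploitation of the best nests found.
    Note: pa_max and pa_min are illustrative values, not from the paper.
    """
    return pa_max - (pa_max - pa_min) * t / t_max
```

Tying the discovery probability to the evaluation counter effectively converts a Type II parameter into a Type III one in the classification above, at the cost of introducing new schedule parameters of its own.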
Secondly, few existing metaheuristic algorithms are Type I parameter-based. TLBO and NNA are, but both have their own drawbacks. TLBO, inspired by traditional classroom teaching, converges fast but easily falls into a local optimum on complex optimization problems (Chen, Zou, Li, Wang & Li, 2015; Ouyang, Gao, Kong, Zou & Li, 2015). NNA, one of the newest metaheuristic algorithms, has strong global search ability thanks to the unique structure of artificial neural networks, but has been shown to converge slowly (Sadollah et al., 2018).
Thirdly, the No Free Lunch (NFL) theorem (Wolpert & Macready, 1997), proposed in 1997, offers important guidance for the development of optimization algorithms. According to NFL, no metaheuristic is best suited for solving all optimization problems: an algorithm may produce very promising results on one set of problems yet perform poorly on another. NFL thus continues to motivate the rapid development of new metaheuristic algorithms.
Considering the above reasons, a novel metaheuristic optimization algorithm called the group teaching optimization algorithm (GTOA) is proposed for solving global optimization problems. The proposed method is inspired by the group teaching mechanism. Group teaching is a common teaching model that can be briefly described as follows: the students are first divided into groups according to defined rules; then, taking the characteristics of each group into account, the teacher uses a group-specific teaching method to improve that group's knowledge. Like TLBO and NNA, the proposed GTOA is a Type I parameter-based metaheuristic algorithm. Moreover, GTOA has a simple framework and is easy to implement. The main contributions of this paper are as follows:
- (1) A new classification method for existing metaheuristic algorithms is presented based on parameter types.
- (2) A novel optimization approach called GTOA, inspired by group teaching, is proposed for global optimization.
- (3) A group teaching model is built to adapt group teaching for use as an optimization technique.
- (4) Twenty-eight well-known unconstrained benchmark test functions and four constrained engineering design problems are employed to evaluate the performance of the proposed method.
The remainder of this paper is organized as follows. Section 2 introduces the basic idea and framework of the proposed GTOA. Section 3 describes the implementation of the proposed method. Section 4 examines the proposed method on a wide set of test problems and discusses the results. Finally, Section 5 gives the conclusions.
The proposed GTOA
In this section, the basic idea of GTOA is first introduced. Then the framework of the proposed method is presented in detail.
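Since only the outline of the method appears here, the loop below is a minimal sketch of a GTOA-style iteration. The four phase names follow the paper (ability grouping, teacher allocation, teacher phase, student phase), but the specific update rules are simplified placeholders invented for illustration, not the paper's equations:

```python
import numpy as np

def gtoa_sketch(f, lb, ub, n=20, t_max=4000, seed=0):
    """Hedged sketch of a GTOA-style loop; update rules are placeholders."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    pop = lb + rng.random((n, d)) * (ub - lb)   # the class of students
    fit = np.array([f(x) for x in pop])
    evals = n
    while evals + n <= t_max:
        order = np.argsort(fit)                 # ability grouping phase
        teacher = pop[order[0]].copy()          # teacher allocation phase
        mean = pop.mean(axis=0)
        for i in range(n):
            # Teacher phase (placeholder rule): move toward the teacher
            # and away from the class mean.
            cand = pop[i] + rng.random(d) * (teacher - pop[i]) \
                          + rng.random(d) * (teacher - mean)
            # Student phase (placeholder rule): learn from a random classmate,
            # approaching better students and retreating from worse ones.
            j = int(rng.integers(n))
            step = rng.random(d) * (pop[j] - pop[i])
            cand = cand + (step if fit[j] < fit[i] else -step)
            cand = np.clip(cand, lb, ub)
            fc = f(cand)
            evals += 1
            if fc < fit[i]:                     # keep the better knowledge
                pop[i], fit[i] = cand, fc
    b = int(np.argmin(fit))
    return pop[b], fit[b]
```

On a 5-dimensional sphere function, `gtoa_sketch(lambda x: float(np.sum(x * x)), [-10] * 5, [10] * 5)` steadily improves the best student under the evaluation budget; note that only the two Type I parameters (population size `n` and budget `t_max`) need to be set.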
Implementation of the proposed GTOA for optimization
In the following, the step-wise procedure for the implementation of GTOA is given and GTOA is explained with the aid of the flowchart in Fig. 3.
Step 1: Initialization information
(1.1) Initialization parameters.
These parameters include the maximum number of function evaluations Tmax, the current number of function evaluations Tcurrent (initialized as Tcurrent = 0), the population size N, the lower bounds l and upper bounds u of the design variables, the dimension of the problem D, and the fitness function f(·).
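A minimal NumPy sketch of this initialization step follows; the variable names mirror the notation above, but the helper itself is ours, not code from the paper:

```python
import numpy as np

def initialize(f, l, u, N, seed=0):
    """Step 1: build a random class of N students inside the bounds [l, u]."""
    rng = np.random.default_rng(seed)
    l, u = np.asarray(l, float), np.asarray(u, float)
    D = l.size                                # dimension of the problem
    X = l + rng.random((N, D)) * (u - l)      # students spread over [l, u]
    fitness = np.array([f(x) for x in X])     # evaluate every student once
    T_current = N                             # N function evaluations spent
    return X, fitness, T_current
```

Counting the N initial evaluations into T_current keeps the stopping criterion consistent when the budget Tmax is expressed in function evaluations.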
Experimental studies
This section validates the proposed GTOA on global optimization problems and is divided into two parts. Twenty-eight complex unconstrained benchmark problems are tested in Section 4.1, with the results compared against nine other metaheuristic algorithms. Four constrained engineering design problems are examined in Section 4.2, and the obtained results are compared with other reported solutions.
Conclusions
This study presents a novel population-based optimization algorithm for solving global optimization problems, inspired by the group teaching approach. The proposed algorithm, called the group teaching optimization algorithm (GTOA), consists of a teacher allocation phase, a teacher phase, a student phase and an ability grouping phase. In addition, unlike most existing metaheuristic algorithms, GTOA needs no extra control parameters beyond the two essential Type I parameters (population size and stopping criterion).
CRediT authorship contribution statement
Yiying Zhang: Conceptualization, Methodology, Software, Validation, Writing - original draft. Zhigang Jin: Supervision, Writing - review & editing.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
References (63)
- A modification of tree-seed algorithm using Deb's rules for constrained optimization, Applied Soft Computing (2018)
- An improved teaching–learning-based optimization algorithm for solving global optimization problem, Information Sciences (2015)
- Symbiotic organisms search: A new metaheuristic optimization algorithm, Computers & Structures (2014)
- Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems, Expert Systems with Applications (2010)
- Use of a self-adaptive penalty approach for engineering optimization problems, Computers in Industry (2000)
- A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm and Evolutionary Computation (2011)
- Water cycle algorithm – A novel metaheuristic optimization method for solving constrained engineering optimization problems, Computers & Structures (2012)
- Improved grasshopper optimization algorithm using opposition-based learning, Expert Systems with Applications (2018)
- A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied Mathematics and Computation (2007)
- An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Engineering Applications of Artificial Intelligence (2007)
- Chaotic opposition-based grey-wolf optimization algorithm based on differential evolution and disruption operator for global optimization, Expert Systems with Applications
- Opinion leader detection using whale optimization algorithm in online social network, Expert Systems with Applications (2020)
- Water evaporation optimization: A novel physically inspired optimization algorithm, Computers & Structures (2016)
- UAV navigation by an expert system for contaminant mapping with a genetic algorithm, Expert Systems with Applications (2010)
- A new meta-heuristic algorithm for continuous engineering optimization: Harmony search theory and practice, Computer Methods in Applied Mechanics and Engineering (2005)
- A sensor deployment approach using glowworm swarm optimization algorithm in wireless sensor networks, Expert Systems with Applications (2011)
- Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization, Applied Soft Computing
- Solving high-dimensional global optimization problems using an improved sine cosine algorithm, Expert Systems with Applications
- Grey wolf optimizer with cellular topological structure, Expert Systems with Applications
- Binary dragonfly optimization for feature selection using time-varying transfer functions, Knowledge-Based Systems
- Immune generalized differential evolution for dynamic multi-objective environments: An empirical study, Knowledge-Based Systems
- SCA: A sine cosine algorithm for solving optimization problems, Knowledge-Based Systems (2016)
- Salp swarm algorithm: A bio-inspired optimizer for engineering design problems, Advances in Engineering Software (2017)
- The whale optimization algorithm, Advances in Engineering Software (2016)
- Grey wolf optimizer, Advances in Engineering Software (2014)
- Hybrid self-adaptive cuckoo search for global optimization, Swarm and Evolutionary Computation (2016)
- Multi-objective multi-robot path planning in continuous environment using an enhanced genetic algorithm, Expert Systems with Applications (2019)
- Teaching-learning based optimization with global crossover for global optimization problems, Applied Mathematics and Computation (2015)
- A new unconstrained global optimization method based on clustering and parabolic approximation, Expert Systems with Applications
- Snap-drift cuckoo search: A novel cuckoo search optimization algorithm, Applied Soft Computing (2017)
- Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems, Computer-Aided Design (2011)