Information Sciences, Volume 273, 20 July 2014, Pages 112–131

Teaching–learning-based optimization with dynamic group strategy for global optimization

https://doi.org/10.1016/j.ins.2014.03.038

Abstract

Global optimization remains one of the most challenging tasks for evolutionary computation and swarm intelligence. In recent years, there have been some significant developments in these areas regarding the solution of global optimization problems. In this paper, we propose an improved teaching–learning-based optimization (TLBO) algorithm with a dynamic group strategy (DGS) for global optimization problems. Unlike the original TLBO algorithm, DGSTLBO enables each learner to learn from the mean of its corresponding group, rather than the mean of the class, in the teacher phase. Furthermore, each learner employs either the random learning strategy or the quantum-behaved learning strategy within its corresponding group in the learner phase. Regrouping occurs dynamically after a certain number of generations, helping to maintain the diversity of the population and discourage premature convergence. To verify the feasibility and effectiveness of the proposed algorithm, experiments are conducted on 18 numerical benchmark functions in 10, 30, and 50 dimensions. The results show that the proposed DGSTLBO algorithm is an effective method for global optimization problems.

Introduction

Real-world optimization problems occur frequently in different fields of engineering, social sciences, and physical sciences. Effective and efficient optimization algorithms are thus required to tackle increasingly complex real-world problems. This poses a severe challenge to classical derivative-based techniques, whose major weakness is that they may become trapped in the local optima of a problem containing multiple optima. This has led researchers to develop various optimization techniques based on evolutionary computation and swarm intelligence. These algorithms (e.g. the Genetic Algorithm (GA) [9], Particle Swarm Optimization (PSO) [11], Differential Evolution (DE) [37], Ant Colony Optimization (ACO) [3], Biogeography-Based Optimization (BBO) [36], Harmony Search (HS) [7], and the Artificial Bee Colony (ABC) algorithm [1]) have different design philosophies and characteristics, and have been shown to be successful in dealing with many optimization problems. To avoid local optima during convergence and to improve performance on complex global optimization problems, numerous variants of these algorithms have also been developed by introducing auxiliary techniques such as crowding [40], fitness sharing [8], [40], clustering [45], clearing [21], restricted tournament selection [10], [43], and speciation [14], [16], [17].

Among existing meta-heuristics, DE is a simple yet powerful global optimization technique that has been successfully applied in various areas. Thomsen [40] proposed Crowding DE, which limits competition to the nearest (in Euclidean distance) members to maintain diversity, and also integrated the fitness-sharing concept with DE to form Sharing DE. Li [17] developed species-based DE (SDE), which forms species based on Euclidean distance, and Das et al. [5] reported a DE variant with a neighbourhood-based mutation operator. Epitropakis et al. [6] proposed a proximity-based mutation operator, which selects the vectors for the mutation operation using a distance-related probability. Brest et al. [2] presented a new version of the DE algorithm, describing an efficient technique for adapting the control parameter settings associated with DE. Qin et al. [24] proposed a self-adaptive DE (SaDE) algorithm, in which both the trial vector generation strategies and their associated control parameter values are gradually self-adapted by learning from previous experience in generating promising solutions. Zhang and Sanderson [46] implemented a new mutation strategy, ‘DE/current-to-pbest’, with an optional external archive and control parameters that are updated in an adaptive manner to improve optimization performance. Qu et al. [25] proposed a neighbourhood mutation strategy and integrated it with various DE algorithms to solve multimodal optimization problems.

PSO is currently popular because of its intelligent search and optimization abilities. To improve the performance of PSO on complex multimodal problems, many variants have been developed. Kennedy and Mendes [12] claimed that PSO with a small neighbourhood might perform better on complex problems, while PSO with a large neighbourhood would be better suited to simple problems. Peram et al. [23] proposed the fitness-distance-ratio-based PSO (FDR-PSO); when updating each velocity dimension, FDR-PSO selects one particle that has a higher fitness value and is nearer to the particle being updated. Mendes et al. [18] introduced a fully informed PSO that, instead of using the pbest and gbest positions as in the standard algorithm, uses the neighbours of each particle to update the velocity; the influence of each particle on its neighbours is weighted according to its fitness value and the neighbourhood size. Parsopoulos and Vrahatis [22] combined global and local PSO to construct a unified version (UPSO). Liang et al. [13] presented the comprehensive learning particle swarm optimizer (CLPSO), which uses a novel learning strategy whereby the historical best information of all particles is used to update the particle velocities. Nasir et al. [19] presented dynamic neighbourhood-learning PSO (DNLPSO); to achieve a better balance between the explorative and exploitative behaviour of CLPSO, they incorporated novel strategies for selecting the exemplar particles that contribute to the velocity updates of the other learning particles in the swarm.

The teaching–learning-based optimization (TLBO) algorithm [28], [29], which simulates the teaching–learning process in a classroom, is a recently proposed population-based algorithm. TLBO has emerged as one of the simplest and most efficient techniques, as it has been empirically shown to perform well on many optimization problems. It has been extended to function optimization, engineering optimization, multi-objective optimization, clustering, and other fields. Rao et al. proposed TLBO for the optimization of mechanical design problems [28], and then applied TLBO to find global solutions to large-scale non-linear optimization problems [29]. In [30], Rao et al. utilized TLBO to solve continuous unconstrained and constrained optimization problems. Togan [41] presented a design procedure that employed TLBO for the discrete optimization of planar steel frames. Rao and Kalyankar [26] applied TLBO to optimize the process parameters in modern machining processes. In [31], an elitism concept was introduced to the TLBO algorithm, and its effect on the performance of the algorithm was investigated. Degertekin and Hayalioglu [4] applied TLBO in the optimization of four truss structures. Rao and Patel [32] proposed an improved TLBO by introducing the concepts of the number of teachers, adaptive teaching factor, tutorial training, and self-motivated learning. Rao and Kalyankar [27] applied TLBO to the optimization of parameters such as cutting speed, feed rate, depth of cut, and number of passes in a multi-pass turning operation. Rao and Patel [33] then applied TLBO to the determination of the optimum operating conditions of combined Brayton and inverse Brayton cycles, where the thermal efficiency and the specific work of the system are taken as the objective functions to be maximized and are treated simultaneously for multi-objective optimization. Niknam et al. [20] presented a θ-multi-objective TLBO algorithm to solve the dynamic economic emission dispatch problem, where the optimization procedure considers phase angles attributed to the real value of the design parameters, instead of the design parameters themselves. Rao and Patel [34] proposed a modified version of the TLBO algorithm for the multi-objective optimization of heat exchangers by introducing the concept of the number of teachers and an adaptive teaching factor, and then developed a modified version of the TLBO algorithm for the multi-objective optimization of a two-stage thermoelectric cooler by introducing self-motivated learning [35]. Details of the conceptual basis of TLBO were given by Waghmare [42]. For more details on various benchmark functions and real-life applications, readers may refer to https://sites.google.com/site/tlborao/.

As a stochastic search scheme, TLBO is characterized by simple computation and rapid convergence. TLBO is a parameter-free evolutionary technique, and is gaining popularity because of its ability to achieve better results comparatively faster than GA [9], PSO [11], and the ABC algorithm [1]. However, in evolutionary computation research, there is always scope to improve on existing findings. It should be noted that in TLBO all learners learn from the same class mean, and so the algorithm may easily become trapped in a local optimum when solving complex problems containing multiple locally optimal solutions. In this paper, we present an improved variant of the TLBO algorithm called Dynamic Group Strategy TLBO (DGSTLBO), which enhances conventional TLBO for complex global optimization problems. The proposed algorithm is tested on a set of benchmark functions, and the results are compared with those of other algorithms.

The remainder of this paper is organized as follows. Section 2 briefly introduces the TLBO algorithm, before Section 3 describes the proposed DGSTLBO algorithm. Section 4 presents the test functions and experimental setting for each algorithm, and discusses the numerical results. Our conclusions are given in Section 5.

Teaching–learning-based optimization

Inspired by the philosophy of teaching and learning, Rao et al. [28], [29] first developed the concept of TLBO. The main idea behind TLBO is the simulation of a classical learning process consisting of a teacher phase and a learner phase. In the teacher phase, the teacher distributes his knowledge to all students (i.e. students learn from the teacher), whereas in the learner phase, students learn with the help of fellow students (i.e. students learn through interaction with other students).
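
To make the two phases concrete, the following is a minimal sketch of one TLBO generation in Python. It is an illustrative reconstruction rather than the authors' implementation: the array layout, the bound handling via clipping, and all function and variable names are assumptions.

import numpy as np

def tlbo_generation(pop, fitness, f, lb, ub, rng):
    """One generation of basic TLBO: teacher phase followed by learner phase
    (minimization; pop is an (NP, D) array of learners)."""
    NP, D = pop.shape
    teacher = pop[np.argmin(fitness)]            # best learner acts as the teacher

    # Teacher phase: shift each learner towards the teacher, guided by the class mean.
    class_mean = pop.mean(axis=0)
    for i in range(NP):
        TF = rng.integers(1, 3)                  # teaching factor, randomly 1 or 2
        candidate = np.clip(pop[i] + rng.random(D) * (teacher - TF * class_mean), lb, ub)
        f_cand = f(candidate)
        if f_cand < fitness[i]:                  # greedy selection
            pop[i], fitness[i] = candidate, f_cand

    # Learner phase: interact with a randomly chosen fellow learner.
    for i in range(NP):
        j = rng.choice([k for k in range(NP) if k != i])
        if fitness[i] < fitness[j]:              # i is better: move away from j
            step = pop[i] - pop[j]
        else:                                    # j is better: move towards j
            step = pop[j] - pop[i]
        candidate = np.clip(pop[i] + rng.random(D) * step, lb, ub)
        f_cand = f(candidate)
        if f_cand < fitness[i]:
            pop[i], fitness[i] = candidate, f_cand
    return pop, fitness

Apart from the population size and the stopping criterion, no algorithm-specific control parameters appear in these updates, which is the parameter-free property of TLBO noted above.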

Description of the proposed algorithm

A good optimization algorithm requires a balance between exploration and exploitation. Exploration refers to the ability of the algorithm to search different regions of the feasible search space, whereas exploitation means the ability of all individuals to converge to optimal solutions as fast as possible. Excessive exploration will lead to a purely random search, whereas excessive exploitation will produce a purely local search.

The purpose of the teacher component is to lead learners to the …
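
Building on the description summarized in the abstract, the following is a hedged sketch of the dynamic group strategy in the teacher phase: learners are partitioned into groups, each learner is guided by the mean of its own group rather than the class mean, and the population is regrouped every G generations. The random grouping rule, the group count, and the regrouping period shown here are illustrative assumptions, and the group-level learner phase (random or quantum-behaved learning) is only indicated by a comment.

import numpy as np

def regroup(pop, n_groups, rng):
    """Partition learner indices into groups (a random partition is used here for
    illustration; the paper forms the groups in the search space)."""
    return np.array_split(rng.permutation(len(pop)), n_groups)

def dgs_teacher_phase(pop, fitness, f, lb, ub, groups, rng):
    """Teacher phase with the group mean used in place of the class mean."""
    teacher = pop[np.argmin(fitness)]
    D = pop.shape[1]
    for group in groups:
        group_mean = pop[group].mean(axis=0)     # mean of the learner's own group
        for i in group:
            TF = rng.integers(1, 3)              # teaching factor, randomly 1 or 2
            candidate = np.clip(pop[i] + rng.random(D) * (teacher - TF * group_mean), lb, ub)
            f_cand = f(candidate)
            if f_cand < fitness[i]:              # greedy selection
                pop[i], fitness[i] = candidate, f_cand
    return pop, fitness

def dgstlbo(f, lb, ub, D, NP=50, n_groups=5, G=5, max_gen=1000, seed=0):
    """Skeleton of the DGSTLBO loop; NP, n_groups, G, and max_gen are placeholder settings."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, size=(NP, D))
    fitness = np.array([f(x) for x in pop])
    groups = regroup(pop, n_groups, rng)
    for gen in range(max_gen):
        if gen % G == 0:                         # dynamic regrouping maintains diversity
            groups = regroup(pop, n_groups, rng)
        pop, fitness = dgs_teacher_phase(pop, fitness, f, lb, ub, groups, rng)
        # The group-level learner phase (random or quantum-behaved learning)
        # would follow here; it is omitted in this sketch.
    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

As a usage sketch, dgstlbo(lambda x: float(np.sum(x**2)), -100.0, 100.0, D=30) minimizes the Sphere function; the settings above are illustrative placeholders rather than the values used in the experiments of Section 4.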

Experimental results

We used 18 benchmark functions to test the efficiency of DGSTLBO. To evaluate its performance against other methods, we also ran jDE [2], SaDE [24], PSO-cf-Local [12], FDR-PSO [23], and TLBO [28], [29], and now present a comparison of their results.

Conclusion

In this paper, we have presented the DGSTLBO algorithm as an extension of conventional TLBO. The proposed algorithm is based on dynamic groups, and uses group information and the mean values of the respective groups in the search space to solve both unimodal and multimodal problems.

From our analysis and experiments, we have shown that the dynamic group strategy enables DGSTLBO to utilize local information more effectively, generating better quality solutions more frequently. By comparing …

Acknowledgements

We are grateful to the anonymous referees for their constructive comments, which helped to improve the paper. This research was partially supported by the National Natural Science Foundation of China (Nos. 61100173, 61100009, 61272283, and 61304082), the Natural Science Foundation of Anhui Province (Grant No. 1308085MF82), and the Doctoral Innovation Foundation of Xi’an University of Technology (207-002J1305).

References (47)

  • J. Brest et al., Self-adapting control parameters in differential evolution: a comparative study on numerical benchmark problems, IEEE Trans. Evol. Comput. (2006)
  • M. Dorigo, Optimization, Learning and Natural Algorithms, Ph.D. Thesis, Politecnico di Milano, Italy, ...
  • S. Das et al., Differential evolution with a neighborhood based mutation operator: a comparative study, IEEE Trans. Evol. Comput. (2009)
  • M.G. Epitropakis et al., Enhancing differential evolution utilizing proximity-based mutation operators, IEEE Trans. Evol. Comput. (2011)
  • Z.W. Geem et al., A new heuristic optimization algorithm: harmony search, Simulation (2001)
  • D.E. Goldberg, J. Richardson, Genetic algorithms with sharing for multimodal function optimization, in: Proceedings of ...
  • J. Holland, Adaptation in Natural and Artificial Systems (1975)
  • G.R. Harik, Finding multimodal solutions using restricted tournament selection, in: Proceedings of the 6th ...
  • J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. of IEEE International Conference on Neural Networks, ...
  • J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: Proc. of IEEE International Conference ...
  • J. Liang et al., Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. (2006)
  • J.P. Li et al., A species conserving genetic algorithm for multimodal function optimization, Evol. Comput. (2002)
  • M.Q. Li et al., The Foundational Theory and Application of Genetic Algorithms (in Chinese) (2003)