Elsevier

Neurocomputing

Volume 266, 29 November 2017, Pages 579-594

A learning and niching based backtracking search optimisation algorithm and its applications in global optimisation and ANN training

https://doi.org/10.1016/j.neucom.2017.05.076

Abstract

A backtracking search optimisation algorithm that uses historical population information for learning was proposed recently for solving optimisation problems. However, the learning ability and the robustness of this algorithm remain relatively poor. To improve the performance of the backtracking search algorithm (BSA), a modified backtracking search optimisation algorithm (MBSA), based on learning and niching strategies, is presented in this paper. Three main strategies are incorporated into the proposed MBSA: a learning strategy, a niching strategy, and a mutation strategy. Learning from the best individual in the current generation and from the best position achieved so far is used to improve the convergence speed, while the niching and mutation strategies are used to improve the population diversity of the MBSA. Finally, a set of benchmark functions and three neural-network-based chaotic time series prediction problems are simulated to test the effectiveness of the MBSA, and the results are compared with those obtained using several other evolutionary algorithms (EAs). The simulation results indicate that the MBSA outperforms the other EAs on most functions and chaotic time series.

Introduction

The backtracking search algorithm (BSA) is a population-based optimisation method that was proposed by Civicioglu in 2013 [1]. The algorithm has a simple structure and only one control parameter that must be determined during the update process, and it has been successfully applied to a range of optimisation problems. Studies of the BSA can be classified into two groups: studies in the first group aim at extending the applicability of the algorithm, while studies in the second group aim at improving its performance. As examples of the first group, the problem of constructing an optimal array of concentric circular antennae was solved using the BSA [2]; the BSA was used to find an optimal power flow (OPF) in a high-voltage direct current (HVDC) system [3]; the non-convex economic dispatch problem was solved using the BSA [4]; the BSA was applied to multi-objective problems [5]; the oppositional backtracking search algorithm (OBSA) [6] was introduced to identify the parameters of hyper-chaotic systems; the BSA was combined with three constraint-handling methods [7] to solve constrained optimisation problems; and multi-type distributed generators were optimised using the BSA [8]. Compared with the first group, the number of studies in the second group is relatively small. The BSA was combined with hp-adaptive Gauss pseudo-spectral methods (hpGPMs) to solve a nonlinear optimal control (NOC) problem with complex dynamic constraints (CDC) [9], with simulated annealing (SA) to solve the permutation flow-shop scheduling problem [10], and with differential evolution (DE) to solve unconstrained optimisation problems [11]. An adaptive BSA [12] with varied crossover and mutation probabilities was used for the optimisation of an induction magnetometer. This body of work indicates that the BSA is an important tool for solving optimisation problems.
Yet, we can identify at least two disadvantages of the original BSA. The first is that the algorithm's ability to learn from the population is weak: historical information is mainly used for updating the positions of all individuals, and the information about the best individuals is not fully used during the evolution process. The second is that population diversity is difficult to restore once it is lost during the evolution process, so the algorithm may become stuck in local optima. Increasing the learning ability and maintaining the population diversity are therefore likely to improve the global performance of the BSA.

As is well known, learning from the best individual is an effective way to improve the convergence speed of EAs. The particle swarm optimisation (PSO) algorithm [13] is a representative intelligent optimisation algorithm with a learning strategy; it is inspired by the social behaviour of bird flocks. To improve the global performance of the PSO, several variants, such as the FDR-PSO [14], the FIPSO [15], and the CLPSO [16], have been proposed; interested readers can refer to the detailed surveys in references [17], [18]. In the PSO, all individuals update their positions according to the best position of the current population and the best solutions achieved so far by the individuals. An individual learns from another individual that has a better position, which helps improve its convergence speed [19]; however, the diversity of the population may be lost quickly as the convergence speed increases.
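The PSO update described here follows the canonical velocity/position rule. A minimal sketch is given below; the inertia and acceleration coefficients `w`, `c1`, `c2` are common illustrative defaults, not values taken from this paper:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update: each particle is pulled toward its own best
    position so far (pbest) and the best position of the swarm (gbest)."""
    for i in range(len(positions)):
        for j in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            velocities[i][j] = (w * velocities[i][j]
                                + c1 * r1 * (pbest[i][j] - positions[i][j])
                                + c2 * r2 * (gbest[j] - positions[i][j]))
            positions[i][j] += velocities[i][j]
    return positions, velocities
```

Because every particle is attracted toward the same `gbest`, convergence is fast but the swarm can collapse onto one region — exactly the diversity loss the paragraph above points out.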

To balance the increase in convergence speed against the loss of population diversity, the proposed MBSA uses learning and niching methods simultaneously. The main contributions of this work are twofold. First, learning from the best individual of the current population and from the best position achieved so far is introduced into the mutation process according to the values in the mapping matrix of the BSA; this strategy improves the convergence speed of the original BSA. Second, a niching strategy is used to remove some similar individuals from the current population, and a novel mutation strategy is designed to generate new individuals to maintain the scale and diversity of the population; this method preserves population diversity and decreases the probability of premature local convergence.
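The niching idea described above — dropping near-duplicate individuals and refilling the population so its size is preserved — can be sketched as follows. The Euclidean distance threshold and the purely random refill are illustrative assumptions, not the paper's exact operators:

```python
import random

def niching_refresh(pop, radius, bounds, dim):
    """Keep only individuals farther than `radius` from every already-kept
    individual, then refill the population with random newcomers so the
    population size is unchanged. Illustrative sketch only."""
    low, up = bounds
    kept = []
    for x in pop:
        # Euclidean distance to every survivor must exceed the niche radius
        if all(sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5 > radius
               for y in kept):
            kept.append(x)
    # Refill with fresh random individuals to restore the population scale
    while len(kept) < len(pop):
        kept.append([random.uniform(low, up) for _ in range(dim)])
    return kept
```

Removing clustered individuals and injecting newcomers keeps the population size constant while restoring diversity, which is the stated purpose of the niching and mutation strategies.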

The remainder of the paper is organised as follows. The original BSA is introduced in Section 2. The MBSA is presented in Section 3. Some simulation experiments are shown in Section 4. In Section 5, some conclusions and suggestions for future work are presented.

Section snippets

The original backtracking search algorithm

As a population-based intelligence optimisation algorithm, the BSA has a simple structure, and history-related information plays a very important role in updating the positions of individuals. The basic BSA consists of five steps: initialisation, selection-I, mutation, crossover, and selection-II. In addition, the BSA maintains two populations, the evolution population and the trial population; some historical individuals are randomly selected to build the trial population.
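The five steps can be sketched as a minimal minimiser, assuming the standard published form of the BSA; the map generation is simplified to a per-bit coin flip, so this is an illustration rather than the authors' reference implementation:

```python
import random

def bsa_minimise(f, dim, bounds, pop_size=20, iters=200, mix_rate=1.0):
    """Minimal sketch of the five BSA steps: initialisation, selection-I,
    mutation, crossover, and selection-II (illustrative, simplified)."""
    low, up = bounds
    # Initialisation: evolution population P and historical population oldP
    P = [[random.uniform(low, up) for _ in range(dim)] for _ in range(pop_size)]
    oldP = [[random.uniform(low, up) for _ in range(dim)] for _ in range(pop_size)]
    fit = [f(x) for x in P]
    for _ in range(iters):
        # Selection-I: occasionally refresh the historical population, then shuffle it
        if random.random() < random.random():
            oldP = [x[:] for x in P]
        random.shuffle(oldP)
        F = 3.0 * random.gauss(0.0, 1.0)  # amplitude of the search direction
        for i in range(pop_size):
            # Mutation: move along the difference between history and the present
            mutant = [P[i][j] + F * (oldP[i][j] - P[i][j]) for j in range(dim)]
            # Crossover: a binary map decides which dimensions keep P's values
            trial = []
            for j in range(dim):
                keep = random.random() < mix_rate * random.random()
                v = P[i][j] if keep else mutant[j]
                trial.append(min(max(v, low), up))  # boundary control
            # Selection-II: greedy replacement of the parent by a better trial
            tf = f(trial)
            if tf < fit[i]:
                P[i], fit[i] = trial, tf
    best = min(range(pop_size), key=lambda i: fit[i])
    return P[best], fit[best]
```

Note that new positions depend only on the difference `oldP - P`, which is the point the motivation section below takes up: no information about the best individual enters the update.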

The main motivation

As was shown above, the crossover and mutation mechanisms in the BSA differ from those in other EAs, and the history information plays an important role in generating new individuals. Eq. (6) shows that the jth bit of the ith individual is not changed if mapi, j = 1, so some bits cannot be changed in each iteration. Moreover, only the difference between the old population and the current population is used for generating the new position; the information about the best individuals is not fully exploited.
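The behaviour of Eq. (6) — bits with map value 1 keep the current individual's value, while the rest take the mutant's value — can be illustrated directly. The map row below is hand-made for the example, not generated by the BSA's map-construction rule:

```python
def bsa_crossover(P_i, mutant_i, map_i):
    """BSA-style crossover for one individual: positions where the map is 1
    keep the current value; positions where the map is 0 take the mutant."""
    return [p if m == 1 else q for p, q, m in zip(P_i, mutant_i, map_i)]

P_i = [1.0, 2.0, 3.0, 4.0]       # current individual
mutant_i = [9.0, 8.0, 7.0, 6.0]  # its mutant
map_i = [1, 0, 1, 0]             # hand-made map row for illustration
trial = bsa_crossover(P_i, mutant_i, map_i)
# trial == [1.0, 8.0, 3.0, 6.0]: the bits with map == 1 are left unchanged
```

This is exactly the limitation the paragraph identifies: wherever the map is 1, the trial individual cannot move in that dimension during the current iteration.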

Simulation experiments

In this section, the efficiency of the MBSA is tested on 25 benchmark functions and three time series prediction problems. To compare the performance of the MBSA with that of other methods, several related algorithms [1], [14], [15], [16], [26], [27], [28], [29], [30] were also simulated in the experiments.

Conclusions

In this paper, the BSA has been extended to the MBSA, which uses modified mutation and crossover operators to improve the convergence speed, as well as a niching strategy that eliminates duplicate individuals so as to maintain the diversity of the population. In the modified mutation and crossover operators of the MBSA, all bits in the individuals are mutated regardless of the value in the mapping matrix (0 or 1), and the niching strategy removes some similar individuals that would otherwise reduce the diversity of the population.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 61572224 and 41475017) and the National Science Fund for Distinguished Young Scholars (Grant No. 61425009). This work was also partially supported by the Major Project of Natural Science Research in Anhui Province (Grant No. KJ2015ZD36), the Anhui Provincial Natural Science Foundation (Grant No. 1708085MF140), and the Natural Science Foundation in Colleges and Universities of Anhui Province (Grant No

Debao Chen received the Ph.D. degree in the School of Computer Science from NanJing University of Science and Technology, Nanjing, China, in 2008. Currently, he is a full professor in Huaibei Normal University, Huaibei, China. His current research interests include evolutionary computation, global optimization, multiobjective optimization, neural network, etc.

References (35)

  • K. Guney et al., Backtracking search optimization algorithm for synthesis of concentric circular antenna arrays, Int. J. Antennas Propag. (2014)
  • M.D. Mostafa et al., Solving non-convex economic dispatch problem via backtracking search algorithm, Energy (2014)
  • M.D. Mostafa et al., Multi-objective backtracking search algorithm for economic emission dispatch problem, Appl. Soft Comput. (2016)
  • J. Lin, Oppositional backtracking search optimization algorithm for parameter identification of hyperchaotic systems, Nonlinear Dyn. (2015)
  • E.F. Attia, Optimal allocation of multi-type distributed generators using backtracking search optimization algorithm, Electr. Power Energy Syst. (2015)
  • L. Wang, Y. Zhong, Y.Y. Zhao, W. Wang, et al., A hybrid backtracking search optimization algorithm with differential...
  • H.B. Duan et al., Adaptive backtracking search algorithm for induction magnetometer optimization, IEEE Trans. Magn. (2014)


    Renquan Lu (M’08, IEEE) received the Ph.D. degree from the Department of Control Science and Engineering, Zhejiang University, Hangzhou, China, in 2004. He is currently a full Professor with the Guangdong Key Laboratory of IoT Information Processing, Guangdong University of Technology, Guangzhou, China. From June to December 2008, he served as a Visiting Professor in the Department of Electrical and Computer Engineering, the University of Newcastle, Australia.

    He has published more than 60 journal papers in the fields of robust control and networked control systems. His research interests include robust control, singular systems, and networked control systems.

    Feng Zou received the Ph.D. degree in the School of Computer Science and Engineering from Xi'an University of Technology, Xi'an, China, in 2015. Currently, he is an associate professor in Huaibei Normal University, Huaibei, China. His research interests mainly include evolutionary algorithms, swarm intelligence and multi-objective optimization.

    Suwen Li received the Ph.D. degree in Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China, in 2008. She is currently a full professor in Huaibei Normal University, Huaibei, China. Her current research interests include algorithm optimization and optoelectronic information science and engineering, etc.

    Peng Wang is a master degree candidate in Physicals and Information College of Huaibei Normal University, Huaibei, China. His research interests include evolutionary computation, global optimization, neural network, etc.
