Elsevier

Neurocomputing

Volume 291, 24 May 2018, Pages 175-186

Optimization of quantum-inspired neural network using memetic algorithm for function approximation and chaotic time series prediction

https://doi.org/10.1016/j.neucom.2018.02.074

Highlights

  • A novel memetic algorithm combining a genetic algorithm with gradient descent is proposed.

  • We develop a new and efficient quantum-inspired neural network model.

  • The accuracy of the approach is investigated for function approximation and time series prediction problems.

  • Numerical experiments show the excellent effectiveness and efficiency of the proposed approach.

Abstract

Heuristic and deterministic optimization methods are extensively applied to the training of artificial neural networks. Both families of methods have their own advantages and disadvantages. Heuristic stochastic optimization methods such as the genetic algorithm perform a global search, but they suffer from a slow convergence rate near the global optimum. On the other hand, deterministic methods such as gradient descent exhibit a fast convergence rate around the global optimum but may get stuck in a local optimum. Motivated by these problems, a hybrid learning algorithm combining the genetic algorithm (GA) with gradient descent (GD), called HGAGD, is proposed in this paper. The new algorithm combines the global exploration ability of the GA with the accurate local exploitation ability of GD to achieve faster convergence and a more accurate final solution. The HGAGD is then employed as a new training method to optimize the parameters of a quantum-inspired neural network (QINN) for two different applications. First, two benchmark functions are chosen to demonstrate the potential of the proposed QINN with the HGAGD algorithm for function approximation problems. Next, the performance of the proposed method in forecasting the Mackey–Glass time series and the Lorenz attractor is studied. The results of these studies show the superiority of the introduced approach over other published approaches.

Introduction

A variety of learning algorithms have been developed for artificial neural networks (ANNs). Stochastic and deterministic optimization techniques are among the most popular learning algorithms for ANNs [1], [2], [3], [4], [5]. Both techniques have their own advantages and disadvantages. Stochastic search algorithms such as the genetic algorithm (GA) [1], [2], the shuffled frog-leaping algorithm (SFLA) [3] and particle swarm optimization (PSO) [4] perform a global search, but they suffer from a slow convergence rate near the global optimum. On the other hand, deterministic search algorithms such as gradient descent [5] exhibit a fast convergence rate around the global optimum but may get stuck in a local optimum. Furthermore, the weight initialization of a back-propagation neural network can significantly affect the speed of convergence, the probability of convergence and the generalization ability. Regarding convergence speed, the number of iterations of the training algorithm and the convergence time vary depending on the weight initialization. If the initial weights are close to the solution, the algorithm needs only a small number of iterations to reach an acceptable solution; if they are far from it, searching for the best solution takes a very long time during training.

The memetic algorithm (MA) is a combination of global and local search algorithms that exploits the advantages of both [6]. MAs are also called genetic local search, Lamarckian evolutionary algorithms, cultural algorithms, and hybrid evolutionary algorithms. In recent years, memetic algorithms have proven powerful for solving complex optimization problems across a wide range of engineering fields [7], [8], [9], [10], [11]. For example, in [7] the computation of global structural balance was formulated as an optimization problem and solved with a memetic algorithm that combines a genetic algorithm with a greedy strategy as the local search procedure. A memetic algorithm based on a genetic algorithm with two different local search strategies, simulated annealing (SA) and tightness greedy optimization (TGO), was proposed in [8] to maximize the modularity density. A fractional particle swarm optimization-based memetic algorithm (FPSOMA), built on concepts from fractional calculus, was introduced in [9]; FPSOMA performs a global search over the whole search space with PSO, while the local search is carried out by PSO with a fractional-order velocity that alters the memory of the particles' best locations. To optimize urban transit networks, a memetic algorithm was proposed in [10] in which four types of local search operators were embedded in the classical genetic algorithm to improve computational performance. In [11] a comprehensive learning particle swarm optimization (CLPSO)-based memetic algorithm (CLPSO-MA) was developed for short-term load forecasting; CLPSO explores the solution space, while a problem-specific local search conducts individual learning.

MAs have recently gained much importance for the training of artificial neural networks because they offer advantages over the conventional learning algorithms used individually. Many researchers, such as the authors of [12], [13], [14], [15], have applied MAs to the training of ANNs. In [12] an automatic search methodology for optimizing the parameters and performance of neural networks was proposed, combining evolution strategies, particle swarm optimization, and a genetic algorithm into a hybrid method capable of finding near-optimal or even optimal neural networks. A hybrid of PSO and the gravitational search algorithm (GSA) was introduced as a new training method for feed-forward neural networks in [13]; the objective was to investigate how effectively GSA and PSOGSA reduce the problems of trapping in local minima and the slow convergence rate of existing evolutionary learning algorithms. In [14] a combined genetic algorithm and gradient descent was proposed for determining the complex permittivity of arbitrarily shaped materials; the final solution obtained by the GA served as the starting point for the GD. A two-step learning scheme for radial basis function (RBF) neural networks, combining the genetic algorithm (GA) with a hybrid learning algorithm (HLA), was proposed in [15]: the parameters of an RBF neural network were first optimized by a genetic algorithm, and then a hybrid learning process combining the gradient paradigm with the linear least squares (LLS) paradigm was used to modify the weights and continue adjusting the parameters.

In recent years, several studies have sought better ways to design neural networks for chaotic time series prediction [16], [17]. A new methodology to model and predict chaotic time series based on a recurrent predictor neural network (RPNN) was proposed in [16], realizing long-term prediction through accurate multistep predictions. In [17], by combining the advantages of the Bayesian framework with echo state mechanisms, a novel echo state network (ESN) was proposed for chaotic time series prediction.

The quantum-inspired neural network (QINN) model combines quantum computing with the properties of neural networks and can be implemented on a classical computer. Quantum computing, based on the quantum mechanical nature of physics [18], is a good candidate for enhancing the computational efficiency of classical neural networks. It should be noted that quantum-inspired neural networks are mistakenly referred to as quantum neural networks (QNNs) in some of the literature; more complete details on this issue can be found in [19], [20]. Several previous studies have applied QINN models to problems such as image compression, approximation, time series prediction, identification and system control [21], [22], [23], [24], [25]. The performance of a large quantum-inspired neural network on image compression problems was evaluated in [21]; the results showed that the proposed model processes images as efficiently as, or more efficiently than, a conventional neural network. In [22] a universal approximation theorem was proved for quantum-based artificial neural networks, showing that they can uniformly approximate continuous functions. A complex quantum neuron was developed to conduct time series predictions in [23], with all network parameters trained by the Levenberg–Marquardt algorithm. In [24] a QINN model was developed and used as an identifier to estimate ammonia in humid air; the results showed that the identification performed by the presented model was quite accurate and overcame the interference caused by humidity. In [25] a direct neural network controller using a multi-layer quantum-inspired neural network was proposed, with a real-coded genetic algorithm used for training; computational experiments were conducted on controlling a discrete-time nonlinear system and a two-wheeled robot.

Based on these observations, this work combines the genetic algorithm (GA) with gradient descent (GD) to form a hybrid learning algorithm, called HGAGD. The proposed HGAGD algorithm comprises two major phases: global search and local search. In the first phase, a global search is performed by the GA, starting from an initial population. In the second phase, the algorithm switches to GD, which acts as a local search around the globally best individuals (elites) found by the GA in each generation. The HGAGD method thus makes full use of the strong global search ability of the GA and the strong local search ability of GD, so the two can remedy each other's weaknesses. The resulting hybrid algorithm is then used to train the parameters of a quantum-inspired neural network (QINN). Two application studies are presented in this paper: function approximation and chaotic time series prediction. In the first, the proposed approach is used to approximate functions of one and two variables; the results confirm the potential and effectiveness of the proposed HGAGD compared to other optimization algorithms. The second confirms the superiority of the proposed approach in predicting the Mackey–Glass chaotic time series and the Lorenz attractor compared to other prediction approaches reported in the literature.
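The two-phase scheme described above can be sketched in a few lines of Python. This is only an illustrative sketch of a GA loop whose elites are refined by a few GD steps each generation, not the paper's implementation; all hyper-parameter values, the arithmetic crossover, and the Gaussian mutation are assumptions.

```python
import random

def hgagd(fitness, grad, dim, pop_size=30, generations=50,
          n_elites=3, gd_steps=10, lr=0.01):
    """Hybrid GA + gradient descent (HGAGD) sketch for minimising `fitness`.

    `grad` is the gradient of `fitness`; hyper-parameter values are
    illustrative, not taken from the paper.
    """
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # rank population by cost
        # local search phase: refine the elites with a few GD steps
        for e in range(n_elites):
            x = pop[e]
            for _ in range(gd_steps):
                g = grad(x)
                x = [xi - lr * gi for xi, gi in zip(x, g)]
            pop[e] = x
        # global search phase: selection + arithmetic crossover + mutation
        children = []
        while len(children) < pop_size - n_elites:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)
            a = random.random()
            child = [a * u + (1 - a) * v for u, v in zip(p1, p2)]
            if random.random() < 0.1:              # small mutation probability
                i = random.randrange(dim)
                child[i] += random.gauss(0, 0.1)
            children.append(child)
        pop = pop[:n_elites] + children            # elitism: keep refined elites
    return min(pop, key=fitness)
```

For instance, minimising the sphere function `sum(x_i^2)` with `grad(x) = [2*x_i]` drives the best individual rapidly toward the origin, since each generation's elites receive deterministic GD refinement on top of the GA's exploration.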

In summary, the novel contributions of this paper are as follows:

  • A novel memetic algorithm combining a genetic algorithm with gradient descent is proposed.

  • We develop a new and efficient quantum-inspired neural network model.

  • The accuracy of the approach is investigated for function approximation and time series prediction problems.

  • Numerical experiments show the excellent effectiveness and efficiency of the proposed approach.

The rest of the paper is organized as follows. Section 2 introduces the architecture of the quantum-inspired neural network. The gradient descent method for the QINN is described in Section 3. The proposed hybrid learning algorithm based on the genetic algorithm and gradient descent is discussed in Section 4. In Section 5, experimental results and discussion are presented. Finally, conclusions and future work are given in Section 6.

Section snippets

Quantum-inspired neural network

The quantum-inspired neural network (QINN) model combines quantum computing (QC) [26] with the properties of neural networks. Quantum computing is a good candidate for enhancing the computational efficiency of neural networks because of its novel computational characteristics. In QINN models, quantum neural computing is generally based on the real assumption, i.e. the quantum probability amplitudes are all real. The majority of proposals for QINN models [27], [28], [29]
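This excerpt does not show the full neuron model, but a common quantum-inspired construction encodes each input as a phase angle and produces a real-valued output through a sin² activation, consistent with the real-amplitude assumption above. The sketch below is purely illustrative: the function name, the phase encoding, and the roles given to the delta/lambda parameters merely echo the paper's notation and are not its exact equations.

```python
import math

def qubit_neuron(inputs, thetas, delta, lam):
    """One quantum-inspired neuron (illustrative, not the paper's exact model).

    Each input in [0, 1] is mapped to a phase, rotated by a learnable
    angle theta_l, and the aggregated phase is squashed through sin^2,
    so the output behaves like a squared real probability amplitude.
    `delta` and `lam` are stand-ins for the paper's per-neuron parameters.
    """
    # encode each input in [0, 1] as a phase in [0, pi/2]
    phases = [(math.pi / 2) * x for x in inputs]
    # rotate by the learnable angles and aggregate
    u = sum(math.sin(p + t) for p, t in zip(phases, thetas))
    # phase-shift-like parameters, then sin^2 activation -> output in [0, 1]
    return math.sin(lam * u + delta) ** 2
```

Because the activation is a squared sine, the output is automatically confined to [0, 1], mirroring the interpretation of a real quantum probability amplitude.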

Gradient descent method for the QINN

In this section, the gradient descent method is applied to train the QINN parameters. The goal is to minimize the cost function J(k) = (1/2)[y_d(k) − y_N(k)]^2 = (1/2)e^2(k), where y_d(k) is the desired value and y_N(k) is the output of the network at sampling time k. By using the GD method, the parameters of the QINN are adjusted so that the error is minimized after a given number of iterations.
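As a concrete illustration of one GD update on J(k), the following sketch descends the squared error for a generic parameterized model. The finite-difference gradient and all names here are assumptions for illustration; the paper derives exact analytical partial derivatives for each QINN parameter.

```python
def gd_step(params, x, y_d, model, eta=0.05, h=1e-6):
    """One gradient-descent step on J = 0.5 * (y_d - y_N)^2.

    `model(params, x)` stands in for the QINN forward pass; the gradient
    is estimated here by finite differences, whereas the paper uses
    exact derivatives of each network parameter.
    """
    y_n = model(params, x)
    e = y_d - y_n                          # e(k) = y_d(k) - y_N(k)
    new = list(params)
    for i in range(len(params)):
        bumped = list(params)
        bumped[i] += h
        dy = (model(bumped, x) - y_n) / h  # numerical dy_N/dp_i
        # dJ/dp_i = -e * dy_N/dp_i, so the descent direction is +e * dy
        new[i] = params[i] + eta * e * dy
    return new
```

Iterating this step on a simple model (e.g. a linear one) drives the network output toward the desired value, which is exactly the behaviour the analytical update rules below achieve for the QINN parameters.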

The update rules for δ_O, λ_O, w_m, δ_m, λ_m and w_ml can be derived as follows: δ_O(k+1) = δ_O

Hybrid learning algorithm for the QINN

In this section, the proposed HGAGD is used to search for the optimal values of the QINN parameters. The HGAGD is an optimization algorithm combining the GA with the GD. The GA is a global algorithm with a strong ability to find the global solution in a complex search space, but it suffers from a slow convergence rate near the global optimum. On the other hand, the GD algorithm has strong local search ability, but its ability to find the global solution is weak. By using the HGAGD method, a full use of the

Experimental result and discussion

This section discusses four numerical examples that evaluate the QINN model with the HGAGD learning method. The first example involves approximating a benchmark piecewise nonlinear function, the second approximating a two-dimensional function, the third predicting the Mackey–Glass chaotic time series, and the fourth predicting the Lorenz attractor.
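For reference, the Mackey–Glass benchmark series used in such prediction studies is typically generated from the delay differential equation dx/dt = 0.2·x(t−τ)/(1 + x(t−τ)^10) − 0.1·x(t) with the chaotic setting τ = 17. The sketch below uses simple Euler integration; the step size and initial condition are the usual benchmark choices and are not taken from this excerpt.

```python
def mackey_glass(n, tau=17, dt=1.0, x0=1.2, a=0.2, b=0.1):
    """Generate n samples of the Mackey-Glass benchmark series.

    Euler integration of dx/dt = a*x(t-tau)/(1 + x(t-tau)^10) - b*x(t),
    with a constant history x(t) = x0 for t <= 0.
    """
    hist = int(tau / dt)
    x = [x0] * (hist + 1)              # constant history before t = 0
    for _ in range(n):
        x_tau = x[-hist - 1]           # delayed state x(t - tau)
        x.append(x[-1] + dt * (a * x_tau / (1 + x_tau ** 10) - b * x[-1]))
    return x[hist + 1:]
```

A predictor is then typically trained on sliding windows of this series to forecast one or more steps ahead, which is the setting of the third example.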

Example 1

In the following, a benchmark piecewise nonlinear function is used to illustrate the

Conclusions and future work

In this paper, a novel memetic algorithm is proposed to overcome the drawbacks of heuristic and deterministic learning approaches. The proposed algorithm, called HGAGD, combines the genetic algorithm with the gradient descent algorithm. The HGAGD approach makes full use of the strong global search ability of the GA and the strong local search ability of the GD algorithm, so the two can remedy each other's weaknesses. Furthermore, using the HGAGD both the


References (49)

  • S.A. Mirjalili et al., Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm, Appl. Math. Comput. (2012)
  • Z.Q. Zhao et al., A mended hybrid learning algorithm for radial basis function neural networks to improve generalization capability, Appl. Math. Model. (2007)
  • A.J. da Silva et al., Comments on "Quantum artificial neural networks with applications", Inf. Sci. (2016)
  • A.J. da Silva et al., Quantum perceptron over a field and neural network architecture selection in a quantum computer, Neural Netw. (2016)
  • H. Cao et al., Quantum artificial neural networks with applications, Inf. Sci. (2015)
  • Y. Cui et al., Complex rotation quantum dynamic neural networks (CRQDNN) using complex quantum neuron (CQN): applications to time series prediction, Neural Netw. (2015)
  • C.Y. Shen et al., Ammonia identification using shear horizontal surface acoustic wave sensor and quantum neural network model, Sens. Actuators A: Phys. (2008)
  • K. Takahashi et al., Multi-layer quantum neural network controller trained by real-coded genetic algorithm, Neurocomputing (2014)
  • P. Li et al., A hybrid quantum-inspired neural networks with sequence inputs, Neurocomputing (2013)
  • Y. Ma et al., Research and application of quantum-inspired double parallel feed-forward neural network, Knowl.-Based Syst. (2017)
  • C.Y. Liu et al., Single-hidden-layer feed-forward quantum neural network based on Grover learning, Neural Netw. (2013)
  • Z. Gao et al., Deep quantum inspired neural network with application to aircraft fuel system fault diagnosis, Neurocomputing (2017)
  • S. Tzeng, Design of fuzzy wavelet neural networks using GA approach for function approximation and system identification, Fuzzy Sets Syst. (2010)
  • A. Ebadat et al., New fuzzy wavelet network for modeling and control: the modeling approach, Commun. Nonlinear Sci. Numer. Simul. (2011)

    Soheil Ganjefar was born in Iran, in 1971. He received his B.S. degree from the Ferdoowsi University, Mashhad, Iran, in 1994, and his M.S. and Ph.D. degrees from the Tarbiat Modares University, Tehran, Iran, in 1997 and 2003, respectively, all in Electrical Engineering. He is currently a Professor in the department of Electrical Engineering, Bu-Ali Sina University, Hamedan, Iran. His current research interests include Teleoperation Systems control, Optimal Control, Neural network, Renewable Energy and Singular perturbation systems.

    Morteza Tofighi was born in Iran, in 1981. He received his M.S. degree in electrical engineering from the University of Bu Ali Sina, Hamedan, Iran, in 2011. He is currently pursuing Ph.D. degree in control engineering at the University of Bu Ali Sina. His research interests include power system stability and control, identification, neural network, fuzzy logic and optimization.
