
Neural Networks

Volume 78, June 2016, Pages 88-96

2016 Special Issue
Least square neural network model of the crude oil blending process

https://doi.org/10.1016/j.neunet.2016.02.006

Highlights

  • The recursive least square algorithm is employed for the big data learning of a neural network.

  • Some important characteristics of the least square algorithm, such as stability and local minimum avoidance, are analyzed.

  • The proposed approach is utilized for the modeling of the crude oil blending process.

Abstract

In this paper, the recursive least square algorithm is designed for the big data learning of a feedforward neural network. The proposed method, as the combination of the recursive least square algorithm and the feedforward neural network, obtains four advantages over either algorithm alone: it requires fewer regressors, it is fast, it has learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied for the modeling of the crude oil blending process.

Introduction

In recent years, the recursive least square algorithm has been widely utilized in big data learning, evolving intelligent systems, and stable intelligent systems.

Big data learning is the learning ability to solve real-world big data problems; some interesting works on this topic are mentioned in Kangin, Angelov, Iglesias, and Sanchis (2015), Kasabov (2014), Luitel and Venayagamoorthy (2014), Mackin, Roy, and Wallenius (2011), Molina, Venayagamoorthy, Liang, and Harley (2013), Roy (2015a), Roy, Mackin, and Mukhopadhyay (2013), Roy (2015b), Schliebs and Kasabov (2013), Xu et al. (2014), and Yuen, King, and Leung (2015). Evolving intelligent systems are learning methods whose structure is flexible enough to adapt to the environment; some interesting investigations of this issue are detailed in Ballini and Yager (2014), Bouchachia (2014), Bouchachia and Vanaret (2014), Leite, Costa, and Gomide (2013), Lughofer and Sayed-Mouchaweh (2015), Maciel, Gomide, and Ballini (2014), Ordoñez, Iglesias, de Toledo, and Ledezma (2013), Pratama, Anavatti, Er, and Lughofer (2015), Pratama, Anavatti, and Lu (2015), and Toubakh and Sayed-Mouchaweh (2016). The recursive least square algorithm forms a linear combination of regressors which are nonlinear functions of the input variables; usually, a large number of regressors must be employed to sufficiently cover the space of plant dynamics. This is a problem because the regression matrix becomes ill-conditioned due to strong correlation among the regressors.
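To make the ill-conditioning claim concrete (an illustration added here, not material from the paper): when many regressors are strongly correlated, the condition number of the regression matrix grows without bound. A minimal Python sketch, with all names and values chosen purely for illustration:

```python
import numpy as np

# Illustration only: many nearly collinear regressors tanh(w_j * x) of a
# single input x, built with almost identical weights w_j.
np.random.seed(0)
x = np.random.uniform(-1.0, 1.0, size=200)   # 200 input samples
weights = np.linspace(0.9, 1.1, 20)          # 20 nearly identical weights
Phi = np.tanh(np.outer(x, weights))          # 200 x 20 regression matrix

# The normal-equations matrix Phi^T Phi is severely ill-conditioned because
# its columns are strongly correlated; the printed condition number is huge.
print(np.linalg.cond(Phi.T @ Phi))
```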

The backpropagation algorithm, also known as the gradient descent algorithm, considered in Luitel and Venayagamoorthy (2014), Molina et al. (2013), Ortega-Zamorano, Jerez, Urda Muñoz, Luque-Baena, and Franco (2015), Xu et al. (2014), and Yuen et al. (2015), is often utilized for the learning of a feedforward neural network; it suffers from slow convergence because it uses a constant scalar gain as its learning speed.

In this research, the recursive least square algorithm is employed for the big data learning of a feedforward neural network. The combination of the recursive least square algorithm with the feedforward neural network has four advantages as a solution to the two aforementioned problems: (1) the proposed algorithm avoids the regression matrix problem because it only requires the number of regressors utilized by the neural network; (2) the introduced method is faster than backpropagation because the former uses a time-varying matrix gain as its learning speed, while the latter utilizes a constant scalar gain; (3) the proposed combination has learning ability due to the neural network; (4) the suggested strategy is more compact than the standard feedforward neural network because the former uses a vector in the hidden layer, while the latter utilizes a matrix.

Moreover, stable intelligent systems are characterized as systems in which stability is guaranteed and the parameters are bounded; some interesting works on this theme are included in Ahn (2012), Ahn and Lim (2013), Cheng Lv, Yi, and Li (2015), Li and Rakkiyappan (2013), Lughofer (2011), Orozco-Tupacyupanqui, Nakano-Miyatake, and Perez-Meana (2015), Rakkiyappan, Chandrasekar, Lakshmanan, and Park (2014), Rakkiyappan, Zhu, and Chandrasekar (2014), Rubio, Angelov, and Pacheco (2011), Zhang, Zhu, and Zheng (2015), and Zhang, Zhu, and Zheng (2016). The stability of the recursive least square algorithm should be analyzed to avoid the unboundedness of some parameters, known as overfitting.

The backpropagation with variable learning steps, mentioned in Cheng Lv et al. (2015), Li and Rakkiyappan (2013), Lughofer (2011), Orozco-Tupacyupanqui et al. (2015), and Rubio et al. (2011), is also utilized for the learning of a feedforward neural network; even though it is an efficient algorithm, it would be interesting to modify it to improve its performance by changing the time-varying scalar gain that serves as its learning speed.

In this study, the stability of the recursive least square algorithm for the big data learning of a feedforward neural network is analyzed. The proposed stable algorithm has two advantages with respect to the two above-mentioned characteristics: (1) the introduced strategy avoids overfitting because stability, convergence, boundedness of the parameters, and local minimum avoidance are guaranteed; (2) the suggested algorithm is faster than the backpropagation with variable learning steps because the former uses a time-varying matrix gain as its learning speed, while the latter utilizes a time-varying scalar gain.

The paper is organized as follows. In Section 2, the feedforward neural network is presented. In Section 3, the feedforward neural network is linearized. In Section 4, the recursive least square algorithm of a feedforward neural network is designed. In Section 5, the stability, convergence, boundedness of the parameters, and local minimum avoidance of the aforementioned algorithm are assured. In Section 6, the proposed technique is summarized. In Section 7, the suggested method is applied for the modeling of the crude oil blending process. Section 8 presents conclusions and suggests future research directions.

Section snippets

Feedforward neural network

Consider the following unknown discrete-time nonlinear system: $y(k)=f[x(k)]$, where $x(k)=[x_{1}(k),\ldots,x_{i}(k),\ldots,x_{N}(k)]^{T}=[y(k-1),\ldots,y(k-n),u(k-1),\ldots,u(k-m)]^{T}\in\Re^{N\times 1}$ $(N=n+m)$ is the input vector, $u(k-1)$ is the process input, $y(k)$ is the process output, and $f$ is an unknown nonlinear function, $f\in C^{\infty}$. The output of the feedforward neural network with one hidden layer can be expressed as follows: $$\hat{y}(k)=\hat{v}(k)\phi(k)=\sum_{j=1}^{M}\hat{v}_{j}(k)\phi_{j}(k),\qquad \phi(k)=[\phi_{1}(k),\ldots,\phi_{j}(k),\ldots,\phi_{M}(k)]^{T},\qquad \phi_{j}(k)=\tanh\Big(\hat{w}_{j}(k)\sum_{i=1}^{N}x_{i}(k)\Big),$$ where $i=1,\ldots,N$, $j=1,\ldots,M$, x(k…
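A minimal Python sketch of the network output above (variable names are mine, not the paper's); the structural point is that the hidden layer uses a vector of scalar weights $\hat{w}_{j}$ acting on the sum of the inputs, not a weight matrix:

```python
import numpy as np

def nn_output(x, w_hat, v_hat):
    """Feedforward network output of Eq. (2):
    y_hat(k) = sum_j v_hat_j(k) * tanh(w_hat_j(k) * sum_i x_i(k)).

    x     : input vector x(k), shape (N,)
    w_hat : hidden-layer weights (a vector, not a matrix), shape (M,)
    v_hat : output-layer weights, shape (M,)
    """
    s = np.sum(x)                 # sum_i x_i(k)
    phi = np.tanh(w_hat * s)      # phi_j(k) = tanh(w_hat_j(k) * s)
    return float(v_hat @ phi)     # y_hat(k) = v_hat(k) phi(k)
```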

Linearization of the neural network

The linearization of the feedforward neural network is required for the recursive least square algorithm design and for the stability analysis.

According to the Stone–Weierstrass theorem, the unknown nonlinear function $f$ of (1) is approximated as follows: $$y(k)=v^{*}\phi^{*}(k)+\epsilon_{f}=\sum_{j=1}^{M}v_{j}^{*}\phi_{j}^{*}(k)+\epsilon_{f},\qquad \phi^{*}(k)=[\phi_{1}^{*}(k),\ldots,\phi_{j}^{*}(k),\ldots,\phi_{M}^{*}(k)]^{T},\qquad \phi_{j}^{*}(k)=\tanh\Big(w_{j}^{*}\sum_{i=1}^{N}x_{i}(k)\Big),$$ where $\phi^{*}(k)\in\Re^{M\times 1}$, $\epsilon_{f}=y(k)-v^{*}\phi^{*}(k)$ is the modeling error, and $v_{j}^{*}$ and $w_{j}^{*}$ are the optimal parameters that can minimize the

Recursive least square algorithm design

In this section, the recursive least square algorithm is designed for the big data learning of a feedforward neural network.

Theorem 1

The recursive least square algorithm which is the updating function of the feedforward neural network (2) for the big data learning of the nonlinear system (1) is given as follows: $$\hat{\theta}(k+1)=\hat{\theta}(k)-\frac{1}{q(k)}P(k+1)b(k)e(k),\qquad P(k+1)=P(k)-\frac{1}{r(k)}P(k)b(k)b^{T}(k)P(k),$$ where $q(k)=r^{2}+b^{T}(k)P(k)b(k)$, $r(k)=q(k)+b^{T}(k)P(k)b(k)$, $0<r^{2}$, $b(k)$ and $\theta(k)$ are given in (11), and $e(k)$ is the learning error of
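A hedged Python sketch of one update of Theorem 1; the construction of $b(k)$ and $\hat{\theta}(k)$ comes from Eq. (11), which is not shown in this snippet, so they are simply taken here as given arrays:

```python
import numpy as np

def rls_step(theta_hat, P, b, e, r2):
    """One step of the recursive least square update of Theorem 1.

    theta_hat : parameter estimate theta_hat(k), shape (p,)
    P         : gain matrix P(k), shape (p, p)
    b         : regressor vector b(k) of Eq. (11) (not shown in the snippet)
    e         : learning error e(k), scalar
    r2        : stability parameter r^2 > 0
    """
    bPb = b @ P @ b                                  # b^T(k) P(k) b(k)
    q = r2 + bPb                                     # q(k) = r^2 + b^T P b
    r = q + bPb                                      # r(k) = q(k) + b^T P b
    P_next = P - np.outer(P @ b, b @ P) / r          # P(k+1)
    theta_next = theta_hat - (P_next @ b) * (e / q)  # theta_hat(k+1)
    return theta_next, P_next
```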

Stability of the recursive least square

In this section, the stability, convergence, and local minimum avoidance of the recursive least square algorithm are analyzed. The following theorem gives the stability and convergence of the aforementioned algorithm. The Lyapunov method is selected because it can be used for the stability analysis of nonlinear systems.

Theorem 2

The learning error of the recursive least square algorithm (12), which is the updating function of the feedforward neural network (2) for the big data learning of the

The proposed algorithm

The proposed algorithm is as follows:

  • (1)

    Obtain the output of the nonlinear system y(k) with Eq. (1). Note that the nonlinear system may have the structure represented by Eq. (1); the parameter N is selected according to this nonlinear system.

  • (2)

    Select the following parameters: select the weights v̂_j(1) and ŵ_j(1) for (2) as random numbers between 0 and 1; select the number of hidden layer neurons M for (2) as an integer; select the initial algorithm gain c and the stability parameter r² for
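Putting the visible steps together, a minimal training-loop sketch reusing rls_step from the sketch above. The snippet is truncated, so several choices here are assumptions rather than the paper's Eq. (11): θ̂ stacks [v̂; ŵ], the gain matrix is initialized as P(1) = cI, b(k) is the gradient of ŷ(k) with respect to θ̂, and e(k) = ŷ(k) − y(k):

```python
import numpy as np

def train(u, y, M=10, c=100.0, r2=0.01, n=2, m=2):
    """Sketch of the proposed least square neural network learning.

    u, y : process input and output sequences (1-D arrays)
    M    : number of hidden neurons; c : initial gain; r2 : stability parameter
    n, m : numbers of past outputs and inputs in x(k), N = n + m
    """
    v_hat = np.random.rand(M)          # step 2: random initial weights in (0, 1)
    w_hat = np.random.rand(M)
    P = c * np.eye(2 * M)              # assumed initialization P(1) = c I

    for k in range(max(n, m), len(y)):
        # x(k) = [y(k-1),...,y(k-n), u(k-1),...,u(k-m)]^T, Eq. (1)
        x = np.concatenate([y[k - n:k][::-1], u[k - m:k][::-1]])
        s = np.sum(x)
        phi = np.tanh(w_hat * s)                   # hidden layer, Eq. (2)
        y_hat = v_hat @ phi                        # network output
        e = y_hat - y[k]                           # assumed learning error

        # assumed b(k): gradient of y_hat with respect to [v_hat; w_hat]
        b = np.concatenate([phi, v_hat * (1.0 - phi ** 2) * s])

        theta = np.concatenate([v_hat, w_hat])
        theta, P = rls_step(theta, P, b, e, r2)    # update of Theorem 1
        v_hat, w_hat = theta[:M], theta[M:]

    return v_hat, w_hat
```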

Experimental results

In this section, three experiments are considered in which the recursive least square algorithm is applied for the big data learning of the blending process. In all cases, the proposed algorithm, called Least Square, is compared with the non-divergence algorithm of Cheng Lv et al. (2015), called Non-Divergence, and with the backpropagation system of Ortega-Zamorano et al. (2015), called Backpropagation. It is important to note that in the backpropagation algorithm a constant scalar learning speed is

Conclusion

In this paper, the recursive least square algorithm was designed for the big data learning of a feedforward neural network; the stability, convergence, boundedness of the parameters, and local minimum avoidance of the proposed algorithm were guaranteed. From the results, it was shown that the recursive least square algorithm achieves better accuracy than both the non-divergence and backpropagation systems for the modeling of nonlinear systems. The proposed least square algorithm could be

Acknowledgments

The author is grateful to the editors and the reviewers for their valuable comments. The author thanks the Secretaría de Investigación y Posgrado, the Comisión de Operación y Fomento de Actividades Académicas del IPN, and Consejo Nacional de Ciencia y Tecnología for their help in this research.

References (35)

  • A. Roy et al., Methods for pattern selection, class-specific feature selection and classification for automated learning, Neural Networks (2013)

  • H. Toubakh et al., Hybrid dynamic classifier for drift-like fault diagnosis in a class of hybrid dynamic systems: Application to wind turbine converters, Neurocomputing (2016)

  • B. Xu et al., Graphical lasso quadratic discriminant function and its application to character recognition, Neurocomputing (2014)

  • C.K. Ahn, An error passivation approach to filtering for switched neural networks with noise disturbance, Neural Computing and Applications (2012)

  • C.K. Ahn et al., Model predictive stabilizer for T–S fuzzy recurrent multilayer neural network models with general terminal weighting matrix, Neural Computing and Applications (2013)

  • R. Ballini et al., OWA filters and forecasting models applied to electric power load time series, Evolving Systems (2014)

  • A. Bouchachia et al., GT2FC: An online growing interval type-2 self-learning fuzzy classifier, IEEE Transactions on Fuzzy Systems (2014)