2016 Special Issue
Least square neural network model of the crude oil blending process
Introduction
In recent years, the recursive least square algorithm has been widely applied in big data learning, evolving intelligent systems, and stable intelligent systems.
Big data learning is the ability to learn solutions to real-world big data problems; some interesting works on this topic are Kangin, Angelov, Iglesias, and Sanchis (2015), Kasabov (2014), Luitel and Venayagamoorthy (2014), Mackin, Roy, and Wallenius (2011), Molina, Venayagamoorthy, Liang, and Harley (2013), Roy (2015a), Roy, Mackin, and Mukhopadhyay (2013), Roy (2015b), Schliebs and Kasabov (2013), Xu et al. (2014), and Yuen, King, and Leung (2015). Evolving intelligent systems are learning methods whose structure is flexible enough to adapt to the environment; some interesting investigations of this issue are detailed in Ballini and Yager (2014), Bouchachia (2014), Bouchachia and Vanaret (2014), Leite, Costa, and Gomide (2013), Lughofer and Sayed-Mouchaweh (2015), Maciel, Gomide, and Ballini (2014), Ordoñez, Iglesias, de Toledo, and Ledezma (2013), Pratama, Anavatti, Er, and Lughofer (2015), Pratama, Anavatti, and Lu (2015), and Toubakh and Sayed-Mouchaweh (2016). The recursive least square algorithm forms a linear combination of regressors that are nonlinear functions of the input variables; usually, a large number of regressors must be employed to sufficiently cover the space of plant dynamics. This is a problem because the regression matrix then becomes ill-conditioned due to strong correlation among the regressors.
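The regressor-based recursive least square update discussed above can be sketched as follows. This is a generic textbook formulation, not the paper's exact equations; the function name, the forgetting factor `lam`, and the variable names are illustrative assumptions.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least square update (generic formulation).

    theta : current parameter estimate, shape (n,)
    P     : covariance-like gain matrix, shape (n, n)
    phi   : regressor vector, shape (n,)
    y     : measured output (scalar)
    lam   : forgetting factor (1.0 = ordinary RLS)
    """
    denom = lam + phi @ P @ phi            # scalar normalization
    K = (P @ phi) / denom                  # time-varying gain vector
    e = y - phi @ theta                    # a priori prediction error
    theta = theta + K * e                  # parameter update
    P = (P - np.outer(K, phi @ P)) / lam   # gain-matrix update
    return theta, P
```

On a noiseless linear plant this update recovers the true parameters after a handful of samples, which is the fast-convergence property the paper contrasts against constant-gain backpropagation.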
The backpropagation algorithm, also known as the gradient algorithm, considered in Luitel and Venayagamoorthy (2014), Molina et al. (2013), Ortega-Zamorano, Jerez, Urda Muñoz, Luque-Baena, and Franco (2015), Xu et al. (2014), and Yuen et al. (2015), is often used for the learning of a feedforward neural network; it suffers from slow convergence because it uses a constant scalar gain as its learning speed.
In this research, the recursive least square algorithm is employed for the big data learning of a feedforward neural network. The combination of the recursive least square algorithm with the feedforward neural network has four advantages as a solution to the two aforementioned problems: (1) the proposed algorithm avoids the regression matrix problem because it only requires the number of regressors used by the neural network; (2) the introduced method is faster than backpropagation because the former uses a time-varying matrix gain as its learning speed, while the latter uses a constant scalar gain; (3) the proposed combination has learning ability due to the neural network; (4) the suggested strategy is more compact than the feedforward neural network because the former uses a vector in the hidden layer, while the latter uses a matrix.
Moreover, stable intelligent systems are systems in which stability is guaranteed and the parameters remain bounded; some interesting works on this theme are included in Ahn (2012), Ahn and Lim (2013), Cheng Lv, Yi, and Li (2015), Li and Rakkiyappan (2013), Lughofer (2011), Orozco-Tupacyupanqui, Nakano-Miyatake, and Perez-Meana (2015), Rakkiyappan, Chandrasekar, Lakshmanan, and Park (2014), Rakkiyappan, Zhu, and Chandrasekar (2014), Rubio, Angelov, and Pacheco (2011), Zhang, Zhu, and Zheng (2015), and Zhang, Zhu, and Zheng (2016). The stability of the recursive least square algorithm should be analyzed to avoid the unboundedness of some parameters, known as overfitting.
The backpropagation algorithm with variable learning steps, mentioned in Cheng Lv et al. (2015), Li and Rakkiyappan (2013), Lughofer (2011), Orozco-Tupacyupanqui et al. (2015), and Rubio et al. (2011), is also used for the learning of a feedforward neural network; although it is an efficient algorithm, it would be interesting to modify it to improve its performance by replacing the time-varying scalar gain it uses as its learning speed.
In this study, the stability of the recursive least square algorithm for the big data learning of a feedforward neural network is analyzed. The proposed stable algorithm has two advantages with respect to the two above-mentioned characteristics: (1) the introduced strategy avoids overfitting because stability, convergence, boundedness of the parameters, and local minimum avoidance are guaranteed; (2) the suggested algorithm is faster than backpropagation with variable learning steps because the former uses a time-varying matrix gain as its learning speed, while the latter uses a time-varying scalar gain.
The paper is organized as follows. In Section 2, the feedforward neural network is presented. In Section 3, the feedforward neural network is linearized. In Section 4, the recursive least square algorithm for a feedforward neural network is designed. In Section 5, the stability, convergence, boundedness of parameters, and local minimum avoidance of the aforementioned algorithm are assured. In Section 6, the proposed technique is summarized. In Section 7, the suggested method is applied to the modeling of the crude oil blending process. Section 8 presents conclusions and suggests future research directions.
Section snippets
Feedforward neural network
Consider the following unknown discrete-time nonlinear system: y(k) = f(x(k)), where x(k) is the input vector, u(k) is the process input, y(k) is the process output, and f(·) is an unknown nonlinear function. The output of the feedforward neural network with one hidden layer can be expressed as a weighted combination of the hidden-layer activations of x(k).
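The displayed equations are omitted from this snippet; a minimal sketch of a one-hidden-layer feedforward network output, in generic notation (the hidden weights V, output weights w, and the tanh activation are assumptions, not the paper's exact symbols), is:

```python
import numpy as np

def ffnn(x, V, w):
    """Scalar output of a one-hidden-layer feedforward network.

    x : input vector, shape (n,)
    V : hidden-layer weight matrix, shape (m, n)
    w : output-layer weight vector, shape (m,)
    """
    z = np.tanh(V @ x)   # hidden-layer output, shape (m,)
    return w @ z         # scalar network output
```

Note that the output layer is a vector w rather than a matrix, which is the compactness advantage claimed in the introduction for a single-output model.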
Linearization of the neural network
The linearization of the feedforward neural network is required for the recursive least square algorithm design and for the stability analysis.
According to the Stone–Weierstrass theorem, the unknown nonlinear function of (1) is approximated as the neural network output plus a modeling error term, where the optimal parameters are those that minimize the modeling error.
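The displayed equations are omitted from this snippet; under the usual first-order Taylor linearization around the optimal weights, written in generic notation that is an assumption here (not the paper's exact symbols), the approximation takes a form like:

```latex
% Stone--Weierstrass approximation of the unknown function:
y(k) = {w^{*}}^{\top}\tanh\!\big(V^{*}x(k)\big) + \epsilon(k),
\qquad \epsilon(k) \text{ the modeling error.}
% First-order Taylor expansion in the parameter error
% \widetilde{\theta}(k) = \theta(k) - \theta^{*} gives the linear-in-parameters form
\widetilde{y}(k) \;\approx\; \phi^{\top}(k)\,\widetilde{\theta}(k) + \epsilon(k),
\qquad
\phi(k) = \left.\frac{\partial \hat{y}(k)}{\partial \theta}\right|_{\theta(k)},
```

where ỹ(k) = ŷ(k) − y(k) is the learning error. This linear-in-parameters form is what makes a recursive least square update applicable to the nonlinear network.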
Recursive least square algorithm design
In this section, the recursive least square algorithm is designed for the big data learning of a feedforward neural network. Theorem 1. The recursive least square algorithm, which is the updating function of the feedforward neural network (2) for the big data learning of the nonlinear system (1), is given by the update equations whose terms are given in (11), where the learning error is the difference between the neural network output and the process output.
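The update equations are omitted from this snippet. The regressor vector that such an algorithm uses is the gradient of the network output with respect to all weights; a sketch in generic notation (tanh hidden layer, hidden weights V, output weights w, and the flattening order are all hypothetical choices, not the paper's):

```python
import numpy as np

def regressor(x, V, w):
    """Gradient of w @ tanh(V @ x) with respect to all parameters,
    stacked into one vector: [d/dw, d/dV (row-major)]."""
    z = np.tanh(V @ x)
    dz = 1.0 - z ** 2               # derivative of tanh at V @ x
    grad_w = z                      # d(output)/dw
    grad_V = np.outer(w * dz, x)    # d(output)/dV, shape (m, n)
    return np.concatenate([grad_w, grad_V.ravel()])
```

This vector plays the role of phi in a recursive least square step, so the number of regressors equals the number of network parameters, which is the point made in advantage (1) of the introduction.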
Stability of the recursive least square
In this section, the stability, convergence, and local minimum avoidance of the recursive least square algorithm are analyzed. The following theorem gives the stability and convergence of the aforementioned algorithm. The Lyapunov method is selected because it can be used for the stability analysis of nonlinear systems. Theorem 2. The learning error of the recursive least square algorithm (12), which is the updating function of the feedforward neural network (2) for the big data learning of the nonlinear system (1), is guaranteed to be bounded and convergent.
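The theorem statement is truncated in this snippet. A standard Lyapunov argument for recursive least square learning, sketched here in generic notation as an assumption rather than the paper's exact derivation, uses a candidate of the form:

```latex
% Lyapunov candidate on the parameter error
% \widetilde{\theta}(k) = \theta(k) - \theta^{*}:
V(k) = \widetilde{\theta}^{\top}(k)\,P^{-1}(k)\,\widetilde{\theta}(k),
% whose increment along the update satisfies a bound such as
\Delta V(k) = V(k) - V(k-1)
\;\le\; -\frac{e^{2}(k)}{1 + \phi^{\top}(k)\,P(k-1)\,\phi(k)} + \epsilon^{2}(k),
```

so that V(k) decreases whenever the learning error e(k) dominates the modeling error, which yields boundedness of the parameters and convergence of the learning error to a neighborhood of zero.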
The proposed algorithm
The proposed algorithm is as follows:
- (1)
Obtain the output of the nonlinear system with Eq. (1). Note that the nonlinear system may have the structure represented by Eq. (1); the parameters are selected according to this nonlinear system.
- (2)
Select the following parameters: the weights of (2) as random numbers between 0 and 1; the number of hidden-layer neurons of (2) as an integer; and the initial algorithm gain and the stability parameter of the updating algorithm.
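The steps above can be sketched as a single training loop. This is a generic reconstruction under stated assumptions: the tanh activation, the Gauss-Newton-style regressor, and the initial gain `p0` are illustrative choices, and the paper's stability parameter is not modeled here.

```python
import numpy as np

def train_lsnn(X, y, m=8, p0=1e3, seed=0):
    """Sketch of a least square training loop for a one-hidden-layer
    network (generic notation, not the paper's exact algorithm).

    X : (N, n) inputs, y : (N,) outputs, m : hidden neurons,
    p0 : initial algorithm gain, seed : weight initialization seed.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    V = rng.random((m, n))              # hidden weights in (0, 1)
    w = rng.random(m)                   # output weights in (0, 1)
    theta = np.concatenate([w, V.ravel()])
    P = p0 * np.eye(theta.size)         # initial algorithm gain matrix
    for xk, yk in zip(X, y):
        z = np.tanh(V @ xk)
        yhat = w @ z                    # network output
        dz = 1.0 - z ** 2
        phi = np.concatenate([z, np.outer(w * dz, xk).ravel()])
        K = P @ phi / (1.0 + phi @ P @ phi)   # time-varying matrix gain
        theta = theta + K * (yk - yhat)       # least square update
        P = P - np.outer(K, phi @ P)
        w, V = theta[:m], theta[m:].reshape(m, n)
    return w, V
```

A single pass over the data is enough to reduce the modeling error substantially on a target that the network class can represent.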
Experimental results
In this section, three experiments are considered in which the recursive least square algorithm is applied for the big data learning of the blending process. In all cases, the proposed algorithm, called Least Square, is compared with the non-divergence algorithm of Cheng Lv et al. (2015), called Non-Divergence, and with the backpropagation system of Ortega-Zamorano et al. (2015), called Backpropagation. It is important to note that the backpropagation algorithm employs a constant scalar learning speed.
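The essential difference between the compared update rules can be illustrated on a linear-in-parameters model; this sketch uses hypothetical function names and a generic formulation rather than the paper's equations, and the gain `eta` is an illustrative value:

```python
import numpy as np

def backprop_step(theta, phi, e, eta=0.02):
    """Gradient-style update with a constant scalar gain eta."""
    return theta + eta * e * phi

def least_square_step(theta, P, phi, e):
    """Least square update with a time-varying matrix gain K."""
    K = P @ phi / (1.0 + phi @ P @ phi)
    return theta + K * e, P - np.outer(K, phi @ P)
```

Run on the same data stream, the matrix-gain update drives the parameter error down far faster than the constant-scalar-gain update, which is the speed comparison reported in the experiments.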
Conclusion
In this paper, the recursive least square algorithm was designed for the big data learning of a feedforward neural network; the stability, convergence, bounded parameters, and local minimum avoidance of the proposed algorithm were guaranteed. From the results, it was shown that the recursive least square algorithm achieves better accuracy than both the non-divergence and backpropagation systems for the modeling of nonlinear systems. The proposed least square algorithm could be applied to the modeling of other nonlinear processes in future work.
Acknowledgments
The author is grateful to the editors and the reviewers for their valuable comments. The author thanks the Secretaría de Investigación y Posgrado, the Comisión de Operación y Fomento de Actividades Académicas del IPN, and Consejo Nacional de Ciencia y Tecnología for their help in this research.
References (35)
- Online data processing. Neurocomputing (2014).
- Evolving classifier TEDAClass for big data. Procedia Computer Science (2015).
- NeuCube: A spiking neural network architecture for mapping, learning and understanding of spatio-temporal brain data. Neural Networks (2014).
- Evolving granular neural networks from fuzzy data streams. Neural Networks (2013).
- Autonomous data stream clustering implementing split-and-merge concepts—Towards a plug-and-play approach. Information Sciences (2015).
- Cellular computational networks—A scalable architecture for learning the dynamics of large networked systems. Neural Networks (2014).
- Online activity recognition using evolving classifiers. Expert Systems with Applications (2013).
- Exponential stability of Markovian jumping stochastic Cohen–Grossberg neural networks with mode-dependent probabilistic time-varying delays and impulses. Neurocomputing (2014).
- Stability of stochastic neural networks of neutral type with Markovian jumping parameters: A delay-fractioning approach. Journal of the Franklin Institute (2014).
- A classification algorithm for high-dimensional data. Procedia Computer Science (2015).