LCA based RBF training algorithm for the concurrent fault situation
Introduction
The radial basis function (RBF) network model [1], [2], [3], [4] has been successfully applied in many domains [5], [6], [7], [8], [9]. One of the important issues in constructing an RBF network is selecting appropriate RBF centers. The simplest way is to use all the training samples as the RBF centers [10]. However, with this approach the trained network is very large and may suffer from overfitting.
In the last two decades, several center selection approaches have been proposed. Typical approaches are random selection [11], clustering algorithms [5], the orthogonal least squares (OLS) approach [1], [12], and support vector regression (SVR) [13]. However, few approaches take fault tolerance into consideration. It has been found that the performance of a trained network can be significantly degraded when weight failure exists [14], [15], [16], [17], [18], [19].
Weight failure, i.e., the imperfect implementation of a trained weight, may appear in different forms. For example, in either digital or analog implementation, the implemented weights suffer from finite precision. This precision problem can be modelled by the multiplicative weight noise model [16], [18], [19], [20], [21], [22]. Also, physical damage to an implemented weight or neuron may block the signal transmission from the output of an RBF node to the output node. Such physical damage can be modelled by the open weight fault model [23], [24]. Although many training methods [14], [20], [17], [25], [26], [27], [15] have been developed, most of them tackle only one particular kind of weight failure. In a real situation, different kinds of weight failure could occur in the same RBF network. Although some results related to this concurrent fault situation [20], [28], [29] have been reported, they need a separate procedure to select the RBF centers.
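The two failure modes described above are easy to simulate. The sketch below (an illustration of the fault models, not the paper's experimental setup; the network, noise level, and fault location are made up for the example) injects multiplicative weight noise and an open weight fault into a toy Gaussian RBF network:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_output(x, centers, weights, width):
    """Output of a Gaussian RBF network at input x."""
    phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / width)
    return phi @ weights

# Toy network: 5 centers in 1-D with hand-picked output weights.
centers = np.linspace(-2.0, 2.0, 5).reshape(-1, 1)
weights = np.array([0.5, -1.0, 2.0, -1.0, 0.5])

# Multiplicative weight noise: each weight becomes w_i * (1 + b_i),
# with b_i zero-mean noise (models finite-precision implementation).
noisy_w = weights * (1.0 + 0.05 * rng.standard_normal(weights.shape))

# Open weight fault: a faulty weight is stuck at zero (models a broken
# connection from an RBF node to the output node).
open_w = weights.copy()
open_w[2] = 0.0

x = np.array([0.3])
print(rbf_output(x, centers, weights, 1.0))  # fault-free output
print(rbf_output(x, centers, noisy_w, 1.0))  # under multiplicative noise
print(rbf_output(x, centers, open_w, 1.0))   # under an open weight fault
```

In a concurrent fault situation, both perturbations would be applied to the same weight vector at once.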
In sparse approximation [30], [31], [32], an unknown sparse signal is measured by a number of random-like basis functions. The aim of sparse approximation is to recover the unknown sparse signal from the measurements. Many numerical methods, such as the greedy matching pursuit (MP) algorithm [33] and interior-point methods [31], have been developed. In [34], an analog method, namely the local competition algorithm (LCA), was developed; it is inspired by biological neural systems. The convergence of the LCA was addressed in [35], [36]. Although the LCA can be used to select RBF centers during RBF network training, it handles the fault-free situation only.
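To make the LCA concept concrete, the following sketch Euler-integrates the standard LCA dynamics from [34] for the l1-regularized least squares problem (the dictionary, step sizes, and threshold here are illustrative choices, not values from the paper):

```python
import numpy as np

def soft_threshold(u, lam):
    """LCA activation: zero inside [-lam, lam], shrunk toward zero outside."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam=0.05, tau=10.0, steps=2000, dt=1.0):
    """Euler-integrated LCA dynamics for min_a 0.5*||y - Phi a||^2 + lam*||a||_1."""
    u = np.zeros(Phi.shape[1])               # internal node states
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    b = Phi.T @ y                            # constant driving input
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += (dt / tau) * (b - u - G @ a)    # node state dynamics
    return soft_threshold(u, lam)

rng = np.random.default_rng(1)
Phi = rng.standard_normal((30, 60)) / np.sqrt(30)   # random dictionary
a_true = np.zeros(60)
a_true[[5, 17, 42]] = [1.0, -2.0, 1.5]              # sparse ground truth
y = Phi @ a_true
a_hat = lca(Phi, y)
print(np.flatnonzero(np.abs(a_hat) > 1e-3))  # indices of active nodes
```

Only nodes whose internal state exceeds the threshold produce nonzero outputs, which is what makes the final coefficient vector sparse.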
The contribution of this paper is to apply the LCA concept to training fault tolerant RBF networks and selecting the RBF centers. First, we assume that all the training samples are used for constructing a fault tolerant RBF network [20], [28], [29]. We then add an l1 norm regularizer to the objective function. Owing to the nature of the l1 norm regularizer, some trained weights are set to zero automatically during training; that is, some unnecessary RBF nodes are removed. Based on the LCA concept, we propose an analog method, namely the fault tolerant LCA (FTLCA), to minimize the fault tolerant objective function. In our formulation, the objective function has a unique optimal solution. We then show that the proposed FTLCA converges to this global optimal solution.
The rest of this paper is organized as follows. Section 2 briefly describes the background of sparse approximation. Section 3 describes the RBF network model under the concurrent weight fault situation. Section 4 then presents the FTLCA. Section 5 presents the stability and convergence of the FTLCA. Section 6 shows the simulation results. Section 7 concludes the paper.
Section snippets
Background
This section first gives a brief review of the subdifferential concept. Afterwards, we introduce sparse approximation and the LCA.
RBF networks
This paper assumes that there is a training set {(x_j, y_j): j = 1, ..., N}, where x_j and y_j are the training input and training output of the jth sample, respectively. Furthermore, the training output is generated by y_j = f(x_j) + e_j, where f(·) is the unknown function to be approximated by an RBF network, and the e_j's are noise components of the unknown function [40], [41].
In the RBF approach, the unknown function is approximated by a weighted sum of radial basis functions.
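The expansion itself is omitted from this snippet. A common Gaussian form, with centers c_i, output weights w_i, and a width parameter s (the paper's exact parameterization is not shown here), is:

```latex
\hat{f}(\mathbf{x}) \;=\; \sum_{i=1}^{M} w_i \,
  \exp\!\Bigl(-\frac{\lVert \mathbf{x} - \mathbf{c}_i \rVert^{2}}{s}\Bigr)
```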
FTLCA for fault tolerant RBF networks
In (32), we use all the N training samples as the RBF centers for training a fault tolerant RBF network, so the size of the resultant network may be very large. To limit the network size, we add an l1 term to the objective function, giving (33). In (33), the first term deals with the training set error of faulty networks, while the l1 term forces some w_i's to zero during training, i.e., unimportant RBF nodes are removed.
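The generic shape of such a regularized objective is shown below; the exact closed form of the fault-tolerant error term depends on the fault statistics (multiplicative noise variance and open-fault rate) and is not shown in this snippet:

```latex
\min_{\mathbf{w}} \;
  \underbrace{\bar{\mathcal{E}}(\mathbf{w})}_{\text{average training error over faulty networks}}
  \;+\; \lambda \sum_{i=1}^{N} \lvert w_i \rvert
```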
Property of the dynamics
In [34], [35], [36], the authors studied the convergence properties of the original LCA approach. However, those results apply to the fault-free objective function only. In this section, we discuss the dynamic behavior of the FTLCA. Sections 5.1–5.3 present some basic background properties of the FTLCA. Section 5.4 shows that the FTLCA converges to the unique optimal solution of the objective function.
Simulation result
In this section, we compare the FTLCA with two existing algorithms: orthogonal least squares (OLS) [1] and support vector regression (SVR) [13]. The mean square error (MSE) and the number of selected RBF nodes (network size) are used as performance indicators.
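Evaluating a trained network under concurrent faults requires averaging the test error over random fault realizations. The sketch below (an illustration only; the fault parameters, network, and data are hypothetical, not the paper's simulation setup) estimates this average MSE by Monte Carlo sampling of multiplicative noise and open faults:

```python
import numpy as np

rng = np.random.default_rng(2)

def faulty_mse(Phi, w, y, sigma_b=0.01, p_open=0.05, trials=2000):
    """Average MSE over random concurrent-fault realizations:
    multiplicative weight noise plus open (stuck-at-zero) weight faults."""
    total = 0.0
    for _ in range(trials):
        noise = 1.0 + np.sqrt(sigma_b) * rng.standard_normal(w.shape)
        alive = (rng.random(w.shape) > p_open).astype(float)  # surviving weights
        w_fault = w * noise * alive
        total += np.mean((y - Phi @ w_fault) ** 2)
    return total / trials

# Toy check: a network that fits the data exactly in the fault-free case
# still shows a nonzero average MSE once faults are injected.
Phi = rng.standard_normal((50, 10))   # hidden-node outputs on a test set
w = rng.standard_normal(10)           # trained output weights
y = Phi @ w                           # targets matched exactly
mse_clean = np.mean((y - Phi @ w) ** 2)   # zero by construction
mse_fault = faulty_mse(Phi, w, y)
print(mse_clean, mse_fault)
```

A fault-tolerant training objective trades a little fault-free accuracy for a much smaller gap between these two numbers.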
Conclusion
This paper addressed center selection for the fault tolerant RBF model under the concurrent weight failure situation. To remove unnecessary RBF nodes, we add an l1 regularization term to the objective function. With the l1 regularization term, the training problem can be considered as a sparse approximation problem. We then proposed the FTLCA for solving the problem. The FTLCA is able to select RBF nodes during training. In our formulation, the objective function has a unique optimal solution, and the FTLCA converges to it.
Acknowledgment
The work presented in this paper is supported by a research grant (CityU 115612) from the Research Grants Council of the Government of the Hong Kong Special Administrative Region.
Rui-Bin Feng received the Bachelor׳s degree in software engineering from Tianjin University in 2012. He is currently a Ph.D. student in the Department of Electronic Engineering, City University of Hong Kong. His research interests include machine learning, computer graphics and GPU computing.
References (44)
- et al., Robustness of radial basis functions, Neurocomputing (2007).
- et al., Efficient incremental construction of RBF networks using quasi-gradient method, Neurocomputing (2015).
- et al., An algorithm to generate radial basis function RBF-like nets for classification problems, Neural Netw. (1995).
- et al., A hybrid multiobjective RBF-PSO method for mitigating DoS attacks in named data networking, Neurocomputing (2015).
- et al., Fault tolerant machine learning for nanoscale cognitive radio, Neurocomputing (2011).
- et al., Improving the tolerance of multilayer perceptrons by minimizing the statistical sensitivity to weight deviations, Neurocomputing (2000).
- et al., Training neural networks to be insensitive to weight random variations, Neural Netw. (2000).
- et al., Sensitivity study of binary feedforward neural networks, Neurocomputing (2014).
- et al., Online training and its convergence for faulty networks with multiplicative weight noise, Neurocomputing (2015).
- et al., A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks, Neural Netw. (1999).
- Optimal design of regularization term and regularization parameter by subspace information criterion, Neural Netw.
- Orthogonal least squares learning algorithm for radial basis function networks, IEEE Trans. Neural Netw.
- An incremental design of radial basis function networks, IEEE Trans. Neural Netw. Learn. Syst.
- Nonlinear time series modelling and prediction using Gaussian RBF networks with enhanced clustering and RLS learning, Electron. Lett.
- Dynamic structure neural networks for stable adaptive control of nonlinear systems, IEEE Trans. Neural Netw.
- RBF-based technique for statistical demodulation of pathological tremor, IEEE Trans. Neural Netw. Learn. Syst.
- Networks for approximation and learning, Proc. IEEE.
- Neural Networks: A Comprehensive Foundation.
- Selecting radial basis function network centers with recursive orthogonal least squares training, IEEE Trans. Neural Netw.
- The Nature of Statistical Learning Theory.
- A fault-tolerant regularizer for RBF networks, IEEE Trans. Neural Netw.
Cited by (9)
- Partially-coupled nonlinear parameter optimization algorithm for a class of multivariate hybrid models, Applied Mathematics and Computation, 2022.
- Parameter estimation for a class of radial basis function-based nonlinear time-series models with moving average noises, Journal of the Franklin Institute, 2021.
- Orthogonal least squares based center selection for fault-tolerant RBF networks, Neurocomputing, 2019. Citation excerpt: "Therefore, it is interesting to develop center selection methods targeting the regularized objective function of the RBF network under the concurrent fault situation. A fault-tolerant local competition algorithm (FTLCA) focusing on the concurrent fault situation of RBF networks was proposed in [10]. This method selects RBF centers and trains the network weights by adding an ℓ1-norm regularizer to the training objective function of the fault-tolerant RBF network [17]."
- Properties and learning algorithms for faulty RBF networks with coexistence of weight and node failures, Neurocomputing, 2017. Citation excerpt: "In [3,4], the algorithms were designed for handling the open weight fault. There are some results related to the concurrent weight failure, where weight noise and weight fault could happen in a single RBF network [18,23,24]. However, in reality, weight and node failure may co-exist in a network."
- Aitken-based Acceleration Estimation Algorithms for a Nonlinear Model with Exponential Terms by Using the Decomposition, International Journal of Control, Automation and Systems, 2021.
- Constrained PSO Based Center Selection for RBF Networks Under Concurrent Fault Situation, Neural Processing Letters, 2020.
Chi-Sing Leung received the B.Sci. degree in electronics, the M.Phil. degree in information engineering, and the Ph.D. degree in computer science from the Chinese University of Hong Kong in 1989, 1991, and 1995, respectively. He is currently a Professor in the Department of Electronic Engineering, City University of Hong Kong. His research interests include neural computing, data mining, and computer graphics. In 2005, he received the 2005 IEEE Transactions on Multimedia Prize Paper Award for his paper titled "The Plenoptic Illumination Function", published in 2002. He was a member of the Organizing Committee of ICONIP2006, and the Program Chair of ICONIP2009 and ICONIP2012. He has served as guest editor of several journals, including Neurocomputing, Neural Computing and Applications, and Neural Processing Letters. He is a governing board member of the Asia Pacific Neural Network Assembly (APNNA) and Vice President of APNNA.
A.G. Constantinides is Professor of Signal Processing and head of the Communications and Signal Processing Group of the Department of Electrical and Electronic Engineering. He has been actively involved in research on various aspects of digital filter design, digital signal processing, and communications for more than 30 years. Professor Constantinides' research spans a wide range of digital signal processing and communications, from both the theoretical and the practical points of view. His recent work has been directed toward the demanding signal processing problems arising in various areas of mobile telecommunication. This work is supported by research grants and contracts from various government and industrial organizations.
Professor Constantinides has published several books and over 250 papers in learned journals in the area of digital signal processing and its applications. He served as the first President of the European Association for Signal Processing (EURASIP) and contributed in this capacity to the establishment of the European Journal for Signal Processing. He has served, and currently serves, on many technical program committees of the IEEE, the IEE, and other international conferences. He organized the first ever international series of meetings on digital signal processing, in London initially in 1967, and in Florence (with Vito Cappellini) since 1972. In 1985 he was awarded the honor of Chevalier, Palmes Académiques, by the French government, and in 1996 he was promoted to Officer, Palmes Académiques. He holds honorary doctorates from European and Far Eastern universities, several Visiting Professorships, Distinguished Lectureships, Fellowships, and other honors around the world.