Elsevier

Neurocomputing

Volume 191, 26 May 2016, Pages 341-351

LCA based RBF training algorithm for the concurrent fault situation

https://doi.org/10.1016/j.neucom.2016.01.047

Abstract

In the construction of a radial basis function (RBF) network, one of the most important issues is the selection of RBF centers. However, many selection methods are designed for the fault-free situation only. This paper first assumes that all the training samples are used for constructing a fault tolerant RBF network. We then add an $\ell_1$-norm regularizer into the fault tolerant objective function. Owing to the nature of the $\ell_1$-norm regularizer, some unnecessary RBF nodes are removed automatically during training. Based on the local competition algorithm (LCA) concept, we propose an analog method, namely the fault tolerant LCA (FTLCA), to minimize the fault tolerant objective function. We prove that the proposed fault tolerant objective function has a unique optimal solution, and that the FTLCA converges to the global optimal solution. Simulation results show that the FTLCA is better than the orthogonal least squares approach and the support vector regression approach.

Introduction

The radial basis function (RBF) network model [1], [2], [3], [4] has been successfully applied in many domains [5], [6], [7], [8], [9]. One of the important issues in constructing an RBF network is to select appropriate RBF centers. The simplest way is to use all the training samples as the RBF centers [10]. However, with this approach, the resulting network is very large and may suffer from overfitting.

In the last two decades, several center selection approaches have been proposed. Typical approaches are random selection [11], clustering algorithms [5], the orthogonal least squares (OLS) approach [1], [12], and support vector regression (SVR) [13]. However, few approaches take fault tolerance into consideration. It was found that the performance of a trained network can be significantly degraded when weight failure exists [14], [15], [16], [17], [18], [19].

Weight failure, i.e., the imperfect implementation of trained weights, may appear in different forms. For example, in both digital and analog implementations, the stored weights have finite precision. We can use the multiplicative weight noise model [16], [18], [19], [20], [21], [22] to describe this precision problem. Also, physical damage to an implemented weight or neuron may block the signal transmission from the output of an RBF node to the output node. Such physical damage can be modelled by the open weight fault model [23], [24]. Although many training methods [14], [20], [17], [25], [26], [27], [15] have been developed, most of them tackle only one particular kind of weight failure. In a real situation, different kinds of weight failure can happen in an RBF network. Although some results related to the concurrent fault situation [20], [28], [29] have been reported, they need a separate procedure to select the RBF centers.

In sparse approximation [30], [31], [32], an unknown sparse signal is measured by a number of random-like basis functions. The aim of sparse approximation is to recover the unknown sparse signal from the measurements. Many numerical methods, such as the greedy matching pursuit (MP) algorithm [33] and interior point-type methods [31], have been developed. In [34], an analog method, namely the local competition algorithm (LCA), was developed. This method is inspired by biological neural systems. The convergence of the LCA was addressed in [35], [36]. Although the LCA can be used to select RBF centers during RBF network training, it handles the fault-free situation only.
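As a concrete illustration of how the LCA solves a sparse approximation problem, the sketch below Euler-integrates the standard LCA node dynamics with a soft-threshold activation on a toy random dictionary. The dictionary size, step size, and threshold are illustrative choices, not values from the paper.

```python
import numpy as np

def soft_threshold(u, lam):
    # LCA activation: sub-threshold states output zero; the rest are shrunk
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(Phi, y, lam, tau=1.0, dt=0.05, steps=3000):
    """Euler-discretized LCA for min_a 0.5*||y - Phi a||_2^2 + lam*||a||_1."""
    M = Phi.shape[1]
    u = np.zeros(M)              # internal membrane states
    drive = Phi.T @ y            # constant input drive b = Phi^T y
    L = Phi.T @ Phi - np.eye(M)  # lateral inhibition between active nodes
    for _ in range(steps):
        a = soft_threshold(u, lam)          # sparse output activations
        u += (dt / tau) * (drive - u - L @ a)
    return soft_threshold(u, lam)

rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 50))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm dictionary atoms
a_true = np.zeros(50)
a_true[[3, 17, 41]] = [1.0, -0.8, 0.5]      # 3-sparse ground truth
y = Phi @ a_true                            # noiseless measurements
a_hat = lca(Phi, y, lam=0.05)
print(np.count_nonzero(np.abs(a_hat) > 1e-3))
```

Only a handful of atoms remain active at the fixed point; as the threshold shrinks toward zero, the dynamics approach an ordinary least-squares fit.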

The contribution of this paper is to apply the LCA concept to training fault tolerant RBF networks and selecting the RBF centers. First, we assume that all the training samples are used for constructing a fault tolerant RBF network [20], [28], [29]. We then add an $\ell_1$-norm regularizer into the objective function. Owing to the nature of the $\ell_1$-norm regularizer, some trained weights are set to zero automatically during training. That is, unnecessary RBF nodes are removed. Based on the LCA concept, we propose an analog method, namely the fault tolerant LCA (FTLCA), to minimize the fault tolerant objective function. In our formulation, the objective function has a unique optimal solution. We then show that the proposed FTLCA converges to the global optimal solution.

The rest of this paper is organized as follows. Section 2 briefly describes the background of sparse approximation. Section 3 presents the RBF network model under the concurrent weight fault situation. Section 4 then presents the FTLCA. Section 5 establishes the stability and convergence of the FTLCA. Section 6 shows the simulation results. Section 7 concludes the paper.


Background

This section first gives a brief review of the subdifferential concept. Afterwards, we give an introduction to sparse approximation and the LCA.

RBF networks

This paper assumes that there is a training set $S_t=\{(x_j,y_j): x_j\in\mathbb{R}^L,\ y_j\in\mathbb{R},\ j=1,\ldots,N\}$, where $x_j$ and $y_j$ are the training input and training output of the $j$th sample, respectively. Furthermore, the training output is generated by $y_j=\varphi(x_j)+e_j$, where $\varphi(\cdot)$ is the unknown function to be approximated by an RBF network, and the $e_j$'s are noise components of the unknown function [40], [41].

In the RBF approach, we use a number of RBFs to approximate the unknown function, given by $\hat{\varphi}(x)=\sum_{i=1}^{M} w_i \phi(\|x-c_i\|)$, where the $c_i$'s are the RBF centers and the $w_i$'s are the output weights.
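To make the expansion concrete, the following sketch builds a Gaussian RBF design matrix with every training sample serving as a center, then fits the output weights by regularized least squares. The Gaussian basis, the width value, and the small ridge term are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Phi[j, i] = exp(-||x_j - c_i||^2 / width): Gaussian RBF responses."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / width)

# All N training samples double as centers, as in the paper's starting point.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(40, 1))            # N = 40 inputs, L = 1
y = np.sinc(4.0 * X[:, 0]) + 0.05 * rng.standard_normal(40)
Phi = rbf_design_matrix(X, X, width=0.1)            # N x N design matrix
w = np.linalg.solve(Phi + 1e-3 * np.eye(40), y)     # ridge-regularized fit
print(Phi.shape)
```

The resulting $\Phi$ is square ($N \times N$) with unit diagonal, which is exactly the large-network situation that motivates the center selection problem.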

FTLCA for fault tolerant RBF networks

In (32), we use all the $N$ training samples as the RBF centers for training a fault tolerant RBF network. The size of the resultant network may therefore be very large. To limit the network size, we can add an $\ell_1$-norm term into the objective function, given by $L(w)=\frac{1}{2}\left(\|y-\Phi w\|_2^2+w^T[(p_\beta+\sigma_b^2)G-p_\beta H]w\right)+\lambda\|w\|_1$. In (33), the term $\|y-\Phi w\|_2^2+w^T[(p_\beta+\sigma_b^2)G-p_\beta H]w$ deals with the training set error of faulty networks. The term $\lambda\|w\|_1$ aims at forcing some $w_i$'s to zero during training, i.e., unimportant RBF nodes are removed automatically.
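The objective in (33) is straightforward to evaluate once the fault parameters are fixed. In the sketch below, $G$ is taken as $\Phi^T\Phi$ and $H$ as its diagonal part; these are stand-in assumptions for illustration and may differ from the paper's exact definitions of $G$ and $H$.

```python
import numpy as np

def ft_objective(w, Phi, y, p_beta, sigma_b2, lam):
    """Evaluate the fault-tolerant objective of (33):
    0.5*(||y - Phi w||^2 + w^T [(p_beta + sigma_b2) G - p_beta H] w)
    + lam * ||w||_1, with G = Phi^T Phi and H = diag(G) assumed here."""
    G = Phi.T @ Phi
    H = np.diag(np.diag(G))
    resid = y - Phi @ w
    quad = w @ (((p_beta + sigma_b2) * G - p_beta * H) @ w)
    return 0.5 * (resid @ resid + quad) + lam * np.sum(np.abs(w))

rng = np.random.default_rng(2)
Phi = rng.standard_normal((30, 30))
y = rng.standard_normal(30)
w0 = np.zeros(30)
print(ft_objective(w0, Phi, y, p_beta=0.05, sigma_b2=0.01, lam=0.5))
```

At $w=0$ the fault and sparsity terms vanish and the objective reduces to $\frac{1}{2}\|y\|_2^2$, a handy sanity check.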

Property of the dynamics

In [34], [35], [36], the authors studied the convergence properties of the original LCA approach. However, those results are limited to the fault-free objective function $\frac{1}{2}\|y-\Phi w\|_2^2$ only. In this section, we discuss the dynamic behavior of the FTLCA. Sections 5.1–5.3 present some basic background properties of the FTLCA. Section 5.4 shows that the FTLCA converges to the unique optimal solution of the objective function.
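The convergence claim can be probed numerically: discretizing the FTLCA-style dynamics (here, a plain Euler scheme, with $G=\Phi^T\Phi$ and $H=\mathrm{diag}(G)$ assumed as stand-ins for the paper's definitions) and tracking the objective along the trajectory should show a steady decrease toward a fixed point. All sizes and parameters below are illustrative.

```python
import numpy as np

def soft_threshold(u, lam):
    # LCA activation: sub-threshold states give zero output
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(3)
Phi = rng.standard_normal((40, 25))       # 40 samples, 25 candidate nodes
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm columns
y = rng.standard_normal(40)
p_beta, sigma_b2, lam = 0.05, 0.01, 0.2

G = Phi.T @ Phi
H = np.diag(np.diag(G))                   # assumed stand-in for the paper's H
M = (p_beta + sigma_b2) * G - p_beta * H  # fault-tolerant quadratic term

def objective(w):
    r = y - Phi @ w
    return 0.5 * (r @ r + w @ (M @ w)) + lam * np.sum(np.abs(w))

# Euler-discretized dynamics: u' = Phi^T y - u - (G + M - I) a, a = T_lam(u)
b = Phi.T @ y
A = G + M - np.eye(25)
u = np.zeros(25)
dt = 0.02
history = []
for _ in range(4000):
    a = soft_threshold(u, lam)
    history.append(objective(a))
    u += dt * (b - u - A @ a)
a_final = soft_threshold(u, lam)
print(history[0], objective(a_final))
```

At a fixed point, $u - a \in \lambda\,\partial\|a\|_1$ with $u = b - (G+M-I)a$, which is exactly the optimality condition of the $\ell_1$-regularized fault-tolerant objective.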

Simulation result

In this section, we compare our FTLCA with two baseline algorithms: orthogonal least squares (OLS) [1] and support vector regression (SVR) [13]. The mean square error (MSE) and the number of selected RBF nodes (network size) are chosen as performance indicators.

Conclusion

This paper addressed center selection for the fault tolerant RBF model under the concurrent weight failure situation. To remove unnecessary RBF nodes, we add an $\ell_1$ regularization term into the objective function. With the $\ell_1$ regularization term, the training problem can be considered a sparse approximation problem. We then proposed the FTLCA for solving this problem. The FTLCA is able to select RBF nodes during training. In our formulation, the objective function has a unique optimal solution, and the FTLCA converges to the global optimal solution.

Acknowledgment

The work presented in this paper is supported by a research grant (CityU 115612) from the Research Grants Council of the Government of the Hong Kong Special Administrative Region.

Rui-Bin Feng received the Bachelor's degree in software engineering from Tianjin University in 2012. He is currently a Ph.D. student in the Department of Electronic Engineering, City University of Hong Kong. His research interests include machine learning, computer graphics and GPU computing.

References (44)

  • M. Sugiyama et al., Optimal design of regularization term and regularization parameter by subspace information criterion, Neural Netw. (2002)
  • S. Chen et al., Orthogonal least squares learning algorithm for radial basis function networks, IEEE Trans. Neural Netw. (1991)
  • H. Yu et al., An incremental design of radial basis function networks, IEEE Trans. Neural Netw. Learn. Syst. (2014)
  • S. Chen, Nonlinear time series modelling and prediction using Gaussian RBF networks with enhanced clustering and RLS learning, Electron. Lett. (1995)
  • S. Fabri et al., Dynamic structure neural networks for stable adaptive control of nonlinear systems, IEEE Trans. Neural Netw. (1996)
  • F. Gianfelici, RBF-based technique for statistical demodulation of pathological tremor, IEEE Trans. Neural Netw. Learn. Syst. (2013)
  • T. Poggio et al., Networks for approximation and learning, Proc. IEEE (1990)
  • S. Haykin, Neural Networks: A Comprehensive Foundation (1998)
  • J. Gomm et al., Selecting radial basis function network centers with recursive orthogonal least squares training, IEEE Trans. Neural Netw. (2000)
  • V.N. Vapnik, The Nature of Statistical Learning Theory (1995)
  • C.S. Leung et al., A fault-tolerant regularizer for RBF networks, IEEE Trans. Neural Netw. (2008)
  • J. Burr, Digital neural network implementations, in: Neural Networks, Concepts, Applications, and Implementations, vol....


Chi-Sing Leung received the B.Sc. degree in electronics, the M.Phil. degree in information engineering, and the Ph.D. degree in computer science from the Chinese University of Hong Kong in 1989, 1991, and 1995, respectively. He is currently a Professor in the Department of Electronic Engineering, City University of Hong Kong. His research interests include neural computing, data mining, and computer graphics. In 2005, he received the IEEE Transactions on Multimedia Prize Paper Award for his 2002 paper "The Plenoptic Illumination Function". He was a member of the Organizing Committee of ICONIP2006 and was the Program Chair of ICONIP2009 and ICONIP2012. He is or has been a guest editor of several journals, including Neurocomputing, Neural Computing and Applications, and Neural Processing Letters. He is a governing board member of the Asian Pacific Neural Network Assembly (APNNA) and Vice President of APNNA.

A.G. Constantinides is Professor of Signal Processing and head of the Communications and Signal Processing Group of the Department of Electrical and Electronic Engineering. He has been actively involved in research on various aspects of digital filter design, digital signal processing, and communications for more than 30 years. His research spans a wide range of digital signal processing and communications, from both the theoretical and the practical points of view. His recent work has been directed toward the demanding signal processing problems arising from various areas of mobile telecommunication. This work is supported by research grants and contracts from various government and industrial organizations.

Professor Constantinides has published several books and over 250 papers in learned journals in the area of digital signal processing and its applications. He served as the first President of the European Association for Signal Processing (EURASIP) and contributed in this capacity to the establishment of the European Journal for Signal Processing. He has served on, and currently serves on, many technical program committees of the IEEE, the IEE and other international conferences. He organized the first international series of meetings on Digital Signal Processing, in London in 1967 and in Florence (with Vito Cappellini) since 1972. In 1985 he was awarded the honor of Chevalier, Palmes Academiques, by the French government, and in 1996 was promoted to Officer, Palmes Academiques. He holds honorary doctorates from European and Far Eastern universities, several Visiting Professorships, Distinguished Lectureships, Fellowships and other honors around the world.
