Neurocomputing

Volume 118, 22 October 2013, Pages 21-32

A self-organizing neuro-fuzzy network based on first order effect sensitivity analysis

https://doi.org/10.1016/j.neucom.2013.02.009

Abstract

As an effective method for quantifying the influence of inputs on the variation of an output, variance-based sensitivity analysis is widely used to determine the structure of neural networks. In the past, global sensitivity analysis for the total effect has been used for the structure learning of neural networks, and various growing and pruning algorithms have been developed. In this paper, we find that neuro-fuzzy networks have the characteristics of additive models, for which the first order effect index provides the same comprehensive information as the total effect index; thus we only need to analyze the first order effects of the inputs on the output layer. Based on this observation, many low-cost methods for first order effect global sensitivity analysis can be used to develop self-organizing neuro-fuzzy networks. Specifically, the Random Balance Designs method is employed here for sensitivity analysis. In addition, we introduce the concept of the systemic fluctuation of neuro-fuzzy networks to determine whether adjustment of the network is needed. This concept allows us to build a new learning procedure for self-organizing neuro-fuzzy networks and to accelerate convergence in learning and self-organization. Simulation examples demonstrate that the proposed method performs better than other existing procedures for self-organizing neuro-fuzzy networks, especially in learning the network structure.

Introduction

Fuzzy control systems have been applied widely and successfully in the area of machine control. Such systems are based on fuzzy logic and always operate with a certain number of fuzzy rules; hence, a fuzzy control system can be well understood by human operators. In order to produce reasonable fuzzy rules, expert knowledge is used as prior knowledge to determine both the number of fuzzy rules and the parameters of each rule. However, when expert knowledge cannot cover the actual system precisely, the production of fuzzy rules requires other approaches, such as fuzzy neural networks.

A neuro-fuzzy network is a hybrid method that interprets a fuzzy control system as a special kind of neural network with learning capability [1], [2], [3]. To learn reasonable parameters for each fuzzy rule, the neural network can use supervised or unsupervised learning algorithms [4], [5], [6]. Learning the number of fuzzy rules, however, requires a self-organizing neural network that grows and prunes nodes online [7]. Hence, the learning of neuro-fuzzy networks can be seen as hierarchical [8], comprising coarse-grained and fine-grained learning. Fine-grained learning concerns the parameters of the network, including the parameters of the neurons and the weights; coarse-grained learning concerns the structure of the network. Since selecting an appropriate number of rules is difficult even for designers and domain experts, structure learning is a hot topic in neuro-fuzzy networks.

The self-organizing structure learning of neuro-fuzzy networks, also called the growing and pruning of the network, can be grouped into two categories: clustering approaches and sensitivity analysis approaches.

A clustering approach determines the number of rules by partitioning the input–output space according to a clustering criterion. In [9], the criterion is the firing strength of each incoming pattern with respect to the existing rule neurons: if the maximum firing strength of a pattern does not exceed a critical value, SONFIN uses this pattern to produce a new fuzzy rule. Similarly, [10], [11] adopt the viewpoint, called ε-completeness, that a fuzzy system should contain at least one fuzzy rule whose match degree (or firing strength) is no less than a threshold; hence, GD-FNN decides whether to add a new rule neuron based on the firing strength of the incoming pattern. In [8], [12], [13], the clustering criterion is the distance between the incoming pattern and the centers of the existing rule neurons: when this distance exceeds a threshold, a new rule neuron is added. All of these criteria resemble the distance criterion of clustering; when the criterion is not satisfied by a newly arriving pattern, a new cluster is built. Clustering approaches are mainly used for growing the network, but they can also be used for pruning, e.g., by merging the closest neurons.
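To make the common pattern behind these criteria concrete, the sketch below shows a generic firing-strength growth check of this kind; the Gaussian membership form, the array layout and the threshold value are illustrative assumptions of ours, not the exact tests of the cited methods.

    import numpy as np

    def needs_new_rule(x, centers, widths, min_firing=0.5):
        # Generic clustering-style growth check (a sketch, not the cited algorithms):
        # x is a d-dimensional pattern, centers is an (m, d) array of rule centers,
        # widths is an (m,) array of rule widths, min_firing is an assumed threshold.
        if len(centers) == 0:
            return True                                  # no rule yet: create the first one
        dist2 = np.sum((x - centers) ** 2, axis=1)       # squared distance to each rule center
        firing = np.exp(-dist2 / (2.0 * widths ** 2))    # Gaussian firing strength of pattern x
        return bool(firing.max() < min_firing)           # no rule covers x well enough: grow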

Sensitivity analysis techniques quantify the relevance of a network component, which can be an input neuron, a hidden neuron or a weight, as the influence that perturbations of that component have on a performance function [14]. The performance function can be the functional error of the system [15], [16], [17] or the contribution of uncertain input factors to the variance of the output [18], [19]. These sensitivity analysis approaches differ markedly from clustering approaches in that they consider the influence of rules on the output; hence, sensitivity analysis is mainly used for pruning the network. The parameter learning of existing self-organizing neuro-fuzzy networks based on sensitivity analysis already takes the output error as its guide. Using the system error as the criterion for structure learning as well therefore only pursues a low approximation error and misses other available information about the network, such as the contribution of each hidden neuron to the output, so the generalization error may not be ideal. Since structure selection of neuro-fuzzy networks is the coarse-grained learning, variance-based sensitivity analysis, which measures the influence of each rule on the variance of the output, can choose a more reasonable network and indeed provides a new way forward. It was first successfully used for the learning of neural networks in [14]. That method is based on a first-order Taylor expansion of the output function of the neural network, which is a local sensitivity analysis method [20]. In [19], the authors use the Extended Fourier Amplitude Sensitivity Test (EFAST) [21], one of the fastest global sensitivity analysis methods for the total effect, to perform variance-based sensitivity analysis of a neural network with a single hidden layer and prove its effectiveness. In [18], EFAST is used for the structure learning of self-organizing neuro-fuzzy networks, and the theoretical convergence of the fine-grained learning is given.

Given these advantages of variance-based sensitivity analysis, we use it to determine the structure of our self-organizing neuro-fuzzy network and improve the approach in two ways. One way is to use global sensitivity analysis for the first order effect, which simplifies the analysis compared with global sensitivity analysis for the total effect. The other way is to build a new learning procedure that improves the overall performance of our self-organizing neuro-fuzzy network. We introduce the concept of the systemic fluctuation of neuro-fuzzy networks to determine whether sensitivity analysis and a modification of the network structure are needed. Together, these two improvements allow us to find a reasonable structure of the neuro-fuzzy network faster than other variance-based sensitivity analysis methods. In addition, the parameter learning advantages of other self-organizing neural networks are adopted to realize our self-organizing neuro-fuzzy network based on first order effect sensitivity analysis (NFN-FOESA). This paper is organized as follows. Section 2 introduces the concept of variance-based sensitivity analysis. Section 3 shows that the mathematical model of self-organizing neuro-fuzzy networks is a kind of additive model, so that their sensitivity analysis can be based on global sensitivity analysis for the first order effect. Section 4 presents the Random Balance Designs (RBD) approach [22], which is used for the sensitivity analysis of our neuro-fuzzy networks, and its advantages. Section 5 details the whole learning procedure of NFN-FOESA and its characteristics. Section 6 gives simulations that compare NFN-FOESA with other self-organizing neuro-fuzzy networks and illustrate its advantages, together with a discussion of our method. Finally, Section 7 concludes the paper.

Section snippets

Variance-based sensitivity analysis

Consider a system represented by the mathematical or computational model
y = f(x_1, x_2, \ldots, x_k),
where x_1, x_2, \ldots, x_k are the inputs or factors of the system and y is its output. For the purpose of sensitivity analysis, the function can be decomposed into Sobol's high dimensional model representation (HDMR) [23], i.e.,
f(x_1, x_2, \ldots, x_k) = f_0 + \sum_{i} f_i + \sum_{i} \sum_{j>i} f_{ij} + \cdots + f_{12\ldots k},
where each individual term is a function only of the factors in its index. This decomposition is not a series expansion, since it has exactly 2^k terms.
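As a concrete illustration of the first order index S_i = \mathrm{Var}_{x_i}(E[y \mid x_i]) / \mathrm{Var}(y) that this decomposition leads to, the sketch below estimates S_i by brute-force conditioning; the independent uniform inputs and the additive test function are assumptions made only for illustration.

    import numpy as np

    def first_order_index(f, k, i, n_outer=200, n_inner=200, seed=None):
        # Brute-force Monte Carlo estimate of S_i = Var(E[y | x_i]) / Var(y)
        # for y = f(x_1, ..., x_k) with inputs assumed i.i.d. uniform on [0, 1].
        rng = np.random.default_rng(seed)
        cond_means = np.empty(n_outer)
        for j in range(n_outer):
            xi = rng.random()                        # fix one value of factor x_i
            X = rng.random((n_inner, k))             # resample all the other factors
            X[:, i] = xi
            cond_means[j] = f(X).mean()              # estimate of E[y | x_i = xi]
        X_full = rng.random((n_outer * n_inner, k))  # independent sample for Var(y)
        return cond_means.var() / f(X_full).var()    # small positive bias of order 1/n_inner

    # Additive test function (illustrative assumption): each term depends on one factor only.
    f = lambda X: np.sin(X[:, 0]) + 2.0 * np.sin(X[:, 1]) ** 2 + 0.5 * X[:, 2]
    print([round(first_order_index(f, 3, i, seed=0), 2) for i in range(3)])

Because this test function is additive, the three estimates sum to approximately one, which anticipates the property exploited in the next section.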

Additive model of self-organizing fuzzy neural network

In variance-based sensitivity analysis, if the model of a system is an additive model, the first order sensitivity indices provide enough information to analyse the influence of the factors on the variation of the output. Hence, the cost of the sensitivity analysis of additive models can be reduced significantly. The additive model [25] takes the form
y = \beta_0 + \sum_{i=1}^{k} f_i(x_i) + \varepsilon,
where x_1, \ldots, x_k are the factors of the model, k is the number of factors, y is the output of the model, \beta_0 is a constant, E(\varepsilon) = 0 and \mathrm{Var}(\varepsilon) = \sigma^2.
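For such a model with independent inputs (and leaving the noise term aside), all interaction terms of the HDMR vanish, so the output variance decomposes as shown below; this is the property that lets the first order index replace the total effect index.

    \operatorname{Var}(y) = \sum_{i=1}^{k} \operatorname{Var}\bigl(f_i(x_i)\bigr),
    \qquad
    S_i = \frac{\operatorname{Var}_{x_i}\bigl(E[y \mid x_i]\bigr)}{\operatorname{Var}(y)}
        = \frac{\operatorname{Var}\bigl(f_i(x_i)\bigr)}{\operatorname{Var}(y)}
        = S_{T_i},
    \qquad
    \sum_{i=1}^{k} S_i = 1.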

Global sensitivity analysis for first order effect

Without treating neuro-fuzzy networks as additive models, their sensitivity analysis (SA) has to rely on global sensitivity analysis for the total effect, such as the Extended Fourier Amplitude Sensitivity Test (EFAST) [18], [19]. By treating neuro-fuzzy networks as additive models, as we do in this paper, the Random Balance Designs (RBD) approach [22] can be used for the SA of neuro-fuzzy networks.
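The sketch below outlines an RBD estimator of the first order indices, assuming k independent inputs that are uniform on [0, 1] and a vectorized model function; the sampling map onto a single frequency and the harmonic cutoff M follow the usual RBD recipe [22], while the interface and the default values are our assumptions.

    import numpy as np

    def rbd_first_order(model, k, N=1001, M=6, seed=None):
        # Random Balance Designs estimate of the first order indices S_1, ..., S_k
        # for a model mapping an (N, k) array of inputs in [0, 1] to N outputs.
        rng = np.random.default_rng(seed)
        s0 = np.linspace(-np.pi, np.pi, N, endpoint=False)            # common curve parameter
        S = np.column_stack([s0[rng.permutation(N)] for _ in range(k)])
        X = 0.5 + np.arcsin(np.sin(S)) / np.pi                        # triangular map onto [0, 1]
        y = model(X)                                                  # one single set of N model runs
        V = y.var()
        indices = np.empty(k)
        for i in range(k):
            z = y[np.argsort(S[:, i])] - y.mean()                     # reorder outputs along factor i
            Z = np.fft.rfft(z)
            Vi = 2.0 * (np.abs(Z[1:M + 1]) ** 2).sum() / N ** 2       # variance at the first M harmonics
            indices[i] = Vi / V
        return indices

A single design of N runs yields all k first order indices, whereas an EFAST-style total effect analysis needs a separate sample set per factor; this is the cost saving that the additive-model property makes available.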

Learning algorithm of NFN-FOESA

The learning process of the proposed self-organizing fuzzy neural network has two phases: structure learning and parameter learning. In the structure learning phase, with the introduction of a criterion based on the systemic performance fluctuation, a growing-and-pruning algorithm based on global sensitivity analysis for the first order effect is proposed. The methods used in the parameter learning phase are then detailed.
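The snippet below gives a schematic view of such a growing-and-pruning loop; the fluctuation measure, the thresholds and the network interface (forward, update_parameters, add_rule, remove_rule, rule_sensitivities) are illustrative placeholders of ours, not the paper's exact formulas.

    import numpy as np

    def train_nfn_foesa(net, data, epochs=50, window=100,
                        fluct_tol=0.05, grow_tol=0.1, prune_tol=0.01):
        # Schematic growing-and-pruning loop in the spirit of NFN-FOESA
        # (a sketch under assumed interfaces, not the authors' exact procedure).
        errors = []
        for _ in range(epochs):
            for x, t in data:
                y = net.forward(x)
                errors.append(abs(t - y))
                net.update_parameters(x, t)           # fine-grained (parameter) learning step

                if len(errors) < window:
                    continue
                recent = np.array(errors[-window:])
                fluctuation = recent.std() / (recent.mean() + 1e-12)
                if fluctuation <= fluct_tol:          # system judged stable: keep the structure
                    continue                          # and skip the costly sensitivity analysis

                if errors[-1] > grow_tol:             # coarse-grained learning: add a rule
                    net.add_rule(center=x)            # centered on the poorly covered pattern

                S = net.rule_sensitivities()          # first order indices of the rule neurons,
                                                      # e.g. estimated with the RBD sketch above
                for j in sorted(np.where(S < prune_tol)[0], reverse=True):
                    net.remove_rule(j)                # prune rules with negligible first order effect
        return net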

Simulations

Three examples are discussed to demonstrate the effectiveness of the proposed algorithm: nonlinear system identification, the Mackey–Glass chaotic time-series prediction problem and a traffic flow forecasting problem. The results of the first two problems are compared with GP-FNN [18], DFNN [8], FAOSPFNN [13] and GD-FNN [10]. The results of the traffic flow forecasting problem, which is a real-world problem, are compared with the Bayesian approach [27] and the ELM approach.
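For reference, the Mackey–Glass benchmark series can be generated as in the sketch below, using the standard parameters a = 0.2, b = 0.1 and tau = 17; the simple Euler integration, the step size and the initial condition are our choices and not necessarily those used in the paper.

    import numpy as np

    def mackey_glass(n, tau=17.0, a=0.2, b=0.1, dt=0.1, x0=1.2):
        # Euler integration of dx/dt = a*x(t - tau) / (1 + x(t - tau)^10) - b*x(t)
        # (a coarse sketch of the standard chaotic benchmark generator).
        hist = int(round(tau / dt))                   # number of delayed steps
        x = np.full(n + hist, x0)
        for t in range(hist, n + hist - 1):
            x_tau = x[t - hist]
            x[t + 1] = x[t] + dt * (a * x_tau / (1.0 + x_tau ** 10) - b * x[t])
        return x[hist:]

    series = mackey_glass(n=12000)[::10]              # e.g. sample once per unit of time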

Conclusion

In this paper, self-organizing neuro-fuzzy networks are represented as an additive model. We prove that global sensitivity analysis for the first order effect can substitute for global sensitivity analysis for the total effect in self-organizing neuro-fuzzy networks. Hence, a new self-organizing neuro-fuzzy network, called NFN-FOESA, is introduced. NFN-FOESA can learn appropriate rules without initializing the number of neurons in the hidden layer or their parameters.

Acknowledgments

This work was supported by NSFC Grants 71232006 and 61233001. We would like to express our appreciation to the other students who contributed to this work at the State Key Laboratory of Management and Control for Complex Systems, Chinese Academy of Sciences.


References (31)

  • H.X. Li et al., A probabilistic neural-fuzzy learning system for stochastic modeling, IEEE Trans. Fuzzy Syst. (2008)
  • F. Palmieri et al., Self-association and Hebbian learning in linear neural networks, IEEE Trans. Neural Networks (1995)
  • D.C. Park, Centroid neural network for unsupervised competitive learning, IEEE Trans. Neural Networks (2000)
  • N.Y. Liang et al., A fast and accurate online sequential learning algorithm for feedforward networks, IEEE Trans. Neural Networks (2006)
  • S. Wu et al., Dynamic fuzzy neural networks—a novel approach to function approximation, IEEE Trans. Syst. Man Cybern. B Cybern. (2000)
Cited by (14)

  • Modeling of nonlinear systems using the self-organizing fuzzy neural network with adaptive gradient algorithm, 2017, Neurocomputing

    Citation excerpt: "Moreover, another disadvantage of these methods is their heavy computational burden since the majority of the training time is spent on training process which are larger than necessary. Recently, Chen et al. proposed a self-organizing neuro-fuzzy network based on first order effect sensitivity analysis (NFN-FOESA) in [18]. The results show that the NFN-FOESA can find a suitable structure faster than the other methods."

  • Research progress of parallel control and management, 2020, IEEE/CAA Journal of Automatica Sinica

Cheng Chen received the B.Eng. degree in Measuring & Control Technology and Instrumentations from HuaZhong University of Science and Technology, Wuhan, China, in 2008. He is currently working toward the Ph.D. degree in Control Theory and Control Engineering at the State Key Laboratory of Management and Control for Complex Systems, Chinese Academy of Sciences, Beijing, China. His research interests include neuro-fuzzy networks, multi-agent systems, distributed artificial intelligence and its applications.

Fei-Yue Wang received the Ph.D. degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute, Troy, NY, in 1990. He joined the University of Arizona, Tucson, AZ, in 1990 and became a Professor and the Director of the Robotics and Automation Laboratory (RAL) and the Program in Advanced Research for Complex Systems (PARCS). In 1999, he founded and directed the Intelligent Control and Systems Engineering Center at the Chinese Academy of Sciences (CAS), Beijing, China, under the support of the Outstanding Overseas Chinese Talents Program from the State Planning and Development Council. From 2002 to 2011, he was the Director of the Key Laboratory on Complex Systems and Intelligence Science, CAS. From 2006 to 2010, he was the Vice President of the Institute of Automation, CAS. Since 2011, he has been the founding Director of the State Key Lab of Management and Control for Complex Systems. His research interests include social computing, web science, intelligent control, and complex systems. He was the recipient of the National Prize in Natural Sciences of China in 2007. He was the Editor-in-Chief of the International Journal of Intelligent Control and Systems (1995–2000), the World Scientific Series in Intelligent Control and Intelligent Automation (1997–2003), and IEEE Intelligent Systems (2009–2012). Since 1997, he has served as General or Program Chair of more than 20 IEEE, ACM, and INFORMS conferences. He was the President of the IEEE Intelligent Transportation Systems Society from 2005 to 2007; the Chinese Association for Science and Technology, USA, in 2005; and the American Zhu Kezhen Education Foundation from 2007 to 2008. Since 2008, he has been the Vice President and Secretary-General of the Chinese Association of Automation. He is currently the Editor-in-Chief of IEEE Transactions on Intelligent Transportation Systems and Acta Automatica Sinica. Wang is a member of Sigma Xi, an ACM Outstanding Scientist, and an elected Fellow of IEEE, ASME, AAAS, the International Council on Systems Engineering (INCOSE), and the International Federation of Automatic Control (IFAC).
