Neural Networks

Volume 24, Issue 7, September 2011, Pages 717-725

An efficient self-organizing RBF neural network for water quality prediction

https://doi.org/10.1016/j.neunet.2011.04.006

Abstract

This paper presents a flexible structure Radial Basis Function (RBF) neural network (FS-RBFNN) and its application to water quality prediction. The FS-RBFNN can vary its structure dynamically in order to maintain prediction accuracy. Hidden neurons can be added to or removed from the network online, based on neuron activity and mutual information (MI), to achieve an appropriate network complexity and maintain overall computational efficiency. The convergence of the algorithm is analyzed both during ordinary training and in the phase following a modification of the structure. The proposed FS-RBFNN has been tested and compared with other algorithms on the problem of identifying a nonlinear dynamic system. Experimental results show that the FS-RBFNN yields an RBF structure with fewer hidden neurons and a much shorter training time. The algorithm is then applied to predicting water quality in the wastewater treatment process, and the results demonstrate its effectiveness.

Introduction

Predicting water quality in the wastewater treatment process can provide a basis for water treatment plant management decisions that minimize microbial risks and optimize the treatment operation (Ge & Frick, 2009). In many practical situations, however, it is difficult to predict water quality accurately, due to a lack of knowledge of the parameters used in the process or the presence of disturbances in the system. The predictor should therefore counteract the disturbances to which the system is subjected and be able to adjust itself to the changing dynamics of the system. Radial basis function (RBF) neural networks have been successfully applied to dynamic system problems because they can predict the behavior directly from input/output data (Ferrari et al., 2010, Lee and Ko, 2009, Wang and Yu, 2008). However, the number of hidden neurons in these RBF networks is often assumed to be constant. If the number of hidden neurons is too large, the computational load is heavy and the performance is, in general, poor; if the number is too small, the learning performance may not be good enough to achieve the desired accuracy. For this reason, it is crucial to optimize the structure of RBF networks to improve performance. A brief review of the existing algorithms is given below.

The resource-allocating network (RAN), proposed by Platt (1991), was the first dynamic structure RBF neural network model. During training, new Gaussian neurons can be inserted into the hidden layer. Although this growing strategy usually results in a smaller network than a fixed-size neural network, the RAN may still become extremely large because insignificant hidden neurons are never pruned. Yingwei et al. introduced a strategy to prune hidden neurons whose contribution is relatively small, and incorporated it into the RAN in combination with an extended Kalman filter (EKF) (Li, Sundararajan, & Saratchandran, 1997). This network is referred to as the minimal resource-allocating network (MRAN); its applications are described in Li, Sundararajan, and Saratchandran (1998). MRAN is a popular tool for designing optimal RBF structures (Panchapakesan, Palaniswami, Ralph, & Manzie, 2002). However, the initial parameters of the newly inserted hidden neurons were not considered, and the convergence of MRAN was not discussed (Lian, Lee, Sudhoff, & Stanislaw, 2008).

The self-organizing design of RBF neural networks has been discussed by several authors. Huang, Saratchandran, and Sundararajan (2004) proposed a simple sequential learning algorithm for RBF neural networks, referred to as the RBF growing and pruning algorithm (GAP-RBF). The original GAP-RBF design was later enhanced into a more advanced model known as GGAP-RBF (Huang, Saratchandran, & Sundararajan, 2005). Both GAP-RBF and GGAP-RBF use a pruning and growing strategy based on the "significance" of a neuron, which links it to the learning accuracy. The resulting RBF structure is simple and the computational time is less than that of a conventional RBF network. In practice, however, GAP-RBF and GGAP-RBF require a complete set of training samples for the training process, and it is generally not possible for designers and technical domain experts to have a priori knowledge of the training samples prior to implementation. Recently, methods based on genetic algorithms (GAs) have been used to change the number of hidden neurons (Gonzalez et al., 2003, Wu and Chow, 2007). The great theoretical advantage of a GA is its ability to perform a global search, but this comes at the cost of increased computational requirements. Feng (2006) proposed a self-organizing RBF neural network model based on particle swarm optimization (PSO), which attempts to address the training time issue: PSO is used to construct the structure of the RBF neural network in order to simplify and speed up optimization. However, because PSO is a population-based evolutionary computation technique, the training time is still too long.

In this paper, a new algorithm, called the flexible structure RBF neural network (FS-RBFNN), is presented. This algorithm has several advantages. Firstly, a neuron's average firing rate is used to determine whether new neurons should be inserted. The firing rate used in the FS-RBFNN is similar to the spiking frequency of a presynaptic neuron in the biological neural system (Neves, Cooke, & Bliss, 2008). When a hidden neuron's firing rate exceeds a given threshold, new neurons are inserted into the hidden layer, as sketched below.
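This excerpt does not reproduce the paper's exact growing rule, so the following Python sketch only illustrates how a firing-rate criterion of this kind can be implemented: the average activation of each Gaussian hidden unit over a window of recent samples serves as its firing rate, and a new unit is inserted near the most active one when that rate exceeds a threshold. The function names, the threshold value, and the placement heuristic for the new center are assumptions made for illustration.

```python
import numpy as np

def gaussian_activations(X, centers, widths):
    """Outputs of the Gaussian hidden units for a batch X (n_samples x n_inputs)."""
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * widths[None, :] ** 2))

def grow_if_active(X_recent, centers, widths, threshold=0.8):
    """Insert a new neuron when the most active unit's average firing rate
    exceeds `threshold`. The placement heuristic is an illustrative assumption."""
    phi = gaussian_activations(X_recent, centers, widths)
    rates = phi.mean(axis=0)            # average firing rate of each hidden neuron
    j = int(np.argmax(rates))
    if rates[j] > threshold:
        # center the new unit on the sample that excites neuron j the most
        new_center = X_recent[int(np.argmax(phi[:, j]))]
        centers = np.vstack([centers, new_center])
        widths = np.append(widths, widths[j])
        return centers, widths, True
    return centers, widths, False
```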

Secondly, the connectivity of hidden neurons is estimated using an information-theoretic methodology: the connectivity between hidden neurons is obtained by measuring the mutual information (MI) between their outputs during the training process (Krivov, Ulanowicz, & Dahiya, 2003), as sketched below.
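Again as an illustration rather than the paper's own estimator: MI between two hidden-unit output sequences can be approximated with a simple histogram estimate, and the pair of neurons with the highest MI are natural candidates for merging or pruning. The bin count and the exhaustive pairwise search are illustrative choices.

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of the MI (in nats) between two activation sequences."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def most_redundant_pair(phi):
    """phi: n_samples x n_hidden activation matrix. Returns the pair of hidden
    neurons with the highest mutual information -- candidates for merging."""
    n = phi.shape[1]
    best, pair = -np.inf, (0, 1)
    for i in range(n):
        for j in range(i + 1, n):
            mi = mutual_information(phi[:, i], phi[:, j])
            if mi > best:
                best, pair = mi, (i, j)
    return pair, best
```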

Thirdly, the convergence of the FS-RBFNN is analyzed both theoretically and experimentally. Most self-organizing methods for RBF structure design (Alexandridis et al., 2003, Bortman and Aladjem, 2009, Shi et al., 2005) analyze the convergence of the learning process only experimentally. For the FS-RBFNN to work in practice, it is essential that its training process be convergent, and the FS-RBFNN has been specifically designed with this in mind.

Fourthly, the FS-RBFNN is particularly focused on reducing the number of retraining epochs after the structure has been modified by growing and pruning. It is well known that when a design algorithm adds hidden neurons to (or prunes them from) an existing RBF structure, it must retrain the modified structure to adapt its connection weights. The error required (ER) method is used to determine the initial values of the neurons inserted into the FS-RBFNN; a sketch of this idea is given below.
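A minimal sketch of one plausible reading of this error-driven initialization, not the paper's exact ER equations: the inserted neuron is centered on the current input, given a width equal to the mean of the existing widths (a heuristic assumed here), and assigned the output weight that cancels the residual error at that input, so that little retraining is needed.

```python
import numpy as np

def insert_neuron_er(x, y_target, y_pred, centers, widths, weights):
    """Error-driven initialization of an inserted neuron (illustrative reading
    of the 'error required' idea): center the new unit on the current input and
    choose its output weight so the residual error at that input is cancelled."""
    e = y_target - y_pred                               # current prediction error at x
    centers = np.vstack([centers, np.asarray(x, dtype=float)])
    widths = np.append(widths, float(widths.mean()))    # width heuristic (assumption)
    weights = np.append(weights, e)                     # a Gaussian unit outputs 1 at its own center
    return centers, widths, weights
```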

The outline of this paper is as follows. Section 2 describes how the MI and the average firing rate are used to design the RBF; it also introduces the FS-RBFNN. Section 3 discusses and analyzes the algorithm. Section 4 presents experimental results which compare the performance of the algorithm with other similar algorithms. Section 5 concludes the paper.


Flexible structure radial basis function neural network (FS-RBFNN)

An RBF neural network has a simple structure in terms of the direction of information flow. Since the performance of an RBF neural network depends heavily on its architecture, research has focused on self-organizing methods that can be used to design the architecture of three-layered RBF neural networks. A minimal network of this kind is sketched below.
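For concreteness, here is a minimal three-layer RBF network of the kind discussed: an input layer, a Gaussian hidden layer, and a linear output layer (written in Python/NumPy for illustration; the paper's simulations were in Matlab).

```python
import numpy as np

class RBFNet:
    """Minimal three-layer RBF network: inputs -> Gaussian hidden units -> linear output."""
    def __init__(self, centers, widths, weights, bias=0.0):
        self.c = np.asarray(centers, dtype=float)   # n_hidden x n_inputs
        self.s = np.asarray(widths, dtype=float)    # n_hidden
        self.w = np.asarray(weights, dtype=float)   # n_hidden
        self.b = float(bias)

    def hidden(self, X):
        """Gaussian activations phi_j(x) = exp(-||x - c_j||^2 / (2 * sigma_j^2))."""
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.s[None, :] ** 2))

    def predict(self, X):
        """Network output: a weighted sum of the hidden activations plus a bias."""
        return self.hidden(X) @ self.w + self.b
```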

In order to design the structure of the RBF neural network automatically, a dynamic tuning strategy is used in the FS-RBFNN. This strategy changes the topology of the RBF neural network during training.

Convergence discussion

For the proposed FS-RBFNN, the convergence of the algorithm with respect to the topology adjustment is an important issue and needs careful investigation; it is crucial for successful applications. First, the convergence property of the case without structure changes is established. Secondly, the convergence in the structure-changing phase, i.e., of the constructive training algorithm, is considered. Furthermore, through this analysis one obtains a better understanding of the learning behavior of the network.
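The excerpt cuts off before the analysis itself, so the following is only a generic illustration of the standard Lyapunov-style argument for the fixed-structure phase of a gradient-trained RBF network, not the paper's actual proof. With a linear output y(k) = w(k)^T phi(k), error e(k) = d(k) - y(k), and the gradient update w(k+1) = w(k) + eta e(k) phi(k), the squared error on the current sample decreases whenever the learning rate is suitably bounded:

```latex
% Generic one-step Lyapunov argument (illustration only, not the paper's proof).
\[
  e(k{+}1) = e(k)\left(1 - \eta\,\lVert\phi(k)\rVert^{2}\right),
\]
\[
  \Delta E(k) = \tfrac{1}{2}e^{2}(k{+}1) - \tfrac{1}{2}e^{2}(k)
  = -\,\eta\,\lVert\phi(k)\rVert^{2}
    \left(1 - \tfrac{\eta\,\lVert\phi(k)\rVert^{2}}{2}\right) e^{2}(k) < 0
  \quad \text{for } 0 < \eta < \frac{2}{\lVert\phi(k)\rVert^{2}}.
\]
```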

Experimental studies

The performance of the FS-RBFNN was verified by applying it to a nonlinear dynamic system: predicting water quality in the wastewater treatment process. The performance of the algorithm was evaluated by comparing its results with those of other similar self-organizing RBFNNs. All simulations were programmed in Matlab version 7.0.1 and run on a Pentium 4 with a clock speed of 2.6 GHz and 1 GB of RAM, under Microsoft Windows XP.
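The excerpt does not show the paper's performance criteria; the root-mean-square error below is simply the customary accuracy measure in such comparisons, included here for concreteness (in Python, although the original simulations were in Matlab).

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between measured and predicted outputs."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```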

In the following simulations the learning parameters

Conclusion

In this paper, a new algorithm is proposed for creating a self-organizing RBFNN whose architecture is automatically adapted based on neuron activity and mutual information (MI). The advantages of the proposed approach are that it simplifies and accelerates the structure optimization process of the RBFNN, and that it can solve the practical problem of predicting water quality in the wastewater treatment process. The effectiveness and performance of the method are firstly demonstrated by using an example of nonlinear dynamic system identification, and then by predicting water quality in the wastewater treatment process.

Acknowledgments

The authors would like to thank Dr. R. Dale-Jones for reading the manuscript and providing valuable comments. The authors would also like to thank the anonymous reviewers for their valuable comments and suggestions, which helped improve this paper greatly. This work was supported by the National 863 Scheme Foundation of China under Grants 2009AA04Z155 and 2007AA04Z160, the National Science Foundation of China under Grants 61034008 and 60873043, and the Ph.D. Program Foundation from the Chinese Ministry of Education.

References (27)

  • Bortman, M., & Aladjem, M. (2009). A growing and pruning method for radial basis function networks. IEEE Transactions on Neural Networks.
  • Brydon, D. A., et al. (2001). A model-based approach to predicting BOD5 in settled sewage. Water Science and Technology.
  • Ferrari, S., et al. (2010). A hierarchical RBF online learning algorithm for real-time 3-D scanner. IEEE Transactions on Neural Networks.