Automatica

Volume 48, Issue 8, August 2012, Pages 1729-1734

Brief paper
Identification and modeling of nonlinear dynamical systems using a novel self-organizing RBF-based approach

https://doi.org/10.1016/j.automatica.2012.05.034

Abstract

In this paper, a novel self-organizing radial basis function (SORBF) neural network is proposed for nonlinear identification and modeling. The proposed SORBF combines network construction and parameter optimization in a single process, and offers two important advantages. First, hidden neurons in the SORBF neural network can be added or removed, based on neuron activity and mutual information (MI), to achieve an appropriate network complexity and maintain overall computational efficiency for identification and modeling. Second, model performance can be significantly improved through parameter optimization. The proposed optimization algorithm, which uses a forward-only computation (FOC) instead of the traditional forward-and-backward computation, simplifies neural network training and thereby significantly reduces computational complexity. Additionally, the convergence of the SORBF is analyzed both during the structure-organizing phase and after structural modification. Lastly, the proposed approach is applied to the modeling and identification of nonlinear dynamical systems. Simulation results demonstrate its effectiveness.

Introduction

Recent years have witnessed great progress in the identification and modeling of nonlinear dynamical systems, driven by the demands of controller design (Martensson and Hjalmarsson, 2011, Mayne et al., 2009). In many practical situations, however, it is infeasible to obtain an accurate mathematical model of the system because some of its parameters are unknown. To make the controller perform well, the main objective of dynamical system identification is to construct a model from experimental observations that reproduces the dynamics of the underlying system as faithfully as possible. The significance and challenges of modeling nonlinear systems are widely recognized (Anderson & Kadirkamanathan, 2007). Broadly, nonlinear models fall into several classes, including black-box models such as block-structured models (Schon, Wills, & Ninness, 2011), neural networks (Cheng, Hou, & Tan, 2009) and fuzzy models (Chien, Wang, Leu, & Lee, 2011).

Due to their simple topological structure and universal approximation ability, radial basis function (RBF) neural networks have been widely used in nonlinear system modeling and control (Fu and Chai, 2007, Rossomando et al., 2011). For a RBF neural network, the capabilities of the final network are determined by the parameter optimization algorithms and the structure size.
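To fix notation, the RBF architecture referred to here can be sketched as a Gaussian-kernel network with a linear output layer. The following is an illustrative implementation (all function and variable names are ours, not the paper's):

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a Gaussian RBF network.

    x       : (n,) input vector
    centers : (K, n) hidden-neuron centers
    widths  : (K,) hidden-neuron widths (sigma)
    weights : (K, Q) output weights for Q outputs

    Returns the (Q,) network output: a weighted sum of
    Gaussian basis functions centered at `centers`.
    """
    dists = np.linalg.norm(x - centers, axis=1)      # distance to each center
    phi = np.exp(-dists**2 / (2.0 * widths**2))      # Gaussian hidden activations
    return phi @ weights                             # linear output layer

# Tiny usage example: 2 hidden neurons, 1-D input, 1 output.
centers = np.array([[0.0], [1.0]])
widths = np.array([0.5, 0.5])
weights = np.array([[1.0], [-1.0]])
y = rbf_forward(np.array([0.0]), centers, widths, weights)
```

The simple topology is visible here: the only trainable quantities are the centers, widths and output weights, which is what makes structure and parameter optimization the two decisive design issues.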

In order to obtain suitable parameters, some researchers use unsupervised or supervised learning methods to optimize the centers, widths and output weights of the hidden neurons (Buzzi et al., 2001, Schwenker et al., 2001). Among these, the gradient-based backpropagation (BP) training algorithms and the recursive least squares (RLS) training algorithms are perhaps the most popular. Compared to BP algorithms, RLS algorithms converge faster; however, they involve more complicated mathematical operations and require more computational resources (Al-Batah, Isa, Zamli, & Azizli, 2010). To address these problems, faster and more advanced training algorithms have been developed. Li, Peng, and Irwin (2005) proposed a fast recursive algorithm (FRA) for nonlinear dynamic system identification using RBF neural network models, and later developed a modified Gram-Schmidt (MGS) algorithm and a continuous forward algorithm (CFA) for both network construction and parameter optimization (Li et al., 2006, Peng et al., 2007). Although the FRA, MGS and CFA algorithms achieve better modeling performance and lower memory and computational costs than conventional optimization methods, they focus more on selecting the network size than on adjusting parameters. Moreover, little has been done to examine their convergence.
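For concreteness, a single gradient-based (BP-style) update of the centers, widths and output weights, the kind of parameter optimization surveyed above, might look like the following sketch (the function name and learning rate are illustrative assumptions, not from the paper):

```python
import numpy as np

def bp_step(x, y_target, centers, widths, weights, lr=0.05):
    """One gradient-descent step on the squared error 0.5*||y - y_target||^2
    for a Gaussian RBF network; returns the updated parameters."""
    # Forward pass
    d = x - centers                            # (K, n) offsets to each center
    sq = np.sum(d**2, axis=1)                  # (K,) squared distances
    phi = np.exp(-sq / (2.0 * widths**2))      # (K,) hidden activations
    e = phi @ weights - y_target               # (Q,) output error

    # Gradients of the cost w.r.t. each parameter group
    grad_w = np.outer(phi, e)                  # (K, Q) dE/dW
    dEdphi = weights @ e                       # (K,) error signal per neuron
    grad_c = (dEdphi * phi / widths**2)[:, None] * d   # dphi/dc = phi*(x-c)/s^2
    grad_s = dEdphi * phi * sq / widths**3             # dphi/ds = phi*||x-c||^2/s^3

    return centers - lr * grad_c, widths - lr * grad_s, weights - lr * grad_w
```

Iterating this step on training samples is the conventional forward-and-backward scheme; the faster algorithms cited above aim to avoid the backward pass and its cost.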

In RBF neural identification and modeling, another important issue is the effect of the network structure on computational load and generalization (Qiao & Han, 2010). It is therefore crucial to optimize the structure of RBF networks to improve performance. Huang, Saratchandran, and Sundararajan (2004) proposed a sequential learning algorithm for RBF networks, referred to as the growing and pruning RBF algorithm (GAP-RBF). The original GAP-RBF design was later enhanced into a more advanced model known as the GGAP-RBF (Huang, Saratchandran, & Sundararajan, 2005). The structures of the GAP-RBF and GGAP-RBF are simpler, and their computational time is less than that of a conventional RBF. However, GAP-RBF and GGAP-RBF require a complete set of training samples (Kamalabady & Salahshoor, 2009). More recently, genetic algorithms (GAs) have been used to change the number of hidden neurons (Gonzalez et al., 2003, Wu and Chow, 2007). In theory, the great advantage of a GA is its ability to search globally, but this comes at the cost of increased computation. To reduce training time, Feng (2006) put forward a self-organizing RBF neural network that uses particle swarm optimization (PSO) to construct the network structure and speed up optimization. However, because PSO is a population-based evolutionary computation technique, the training time remains long (Huang & Du, 2008).
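As a rough illustration of the activity-based pruning idea behind GAP-RBF-style schemes (a simplified stand-in, not any paper's exact criterion), hidden neurons whose recent activations stay below a threshold can simply be deleted:

```python
import numpy as np

def prune_inactive(activations, centers, widths, weights, threshold=0.01):
    """Remove hidden neurons whose mean activation over a data window
    falls below `threshold`.

    activations : (T, K) hidden-layer outputs over T recent samples
    Returns the surviving centers, widths and output weights.
    """
    activity = activations.mean(axis=0)   # mean firing level per neuron
    keep = activity >= threshold          # boolean mask of survivors
    return centers[keep], widths[keep], weights[keep]
```

Growing works symmetrically: when no existing neuron responds strongly enough to a new sample, a neuron is added with its center at that sample. The threshold trades model compactness against approximation accuracy.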

In this paper, a new approach, called a novel self-organizing RBF (SORBF) neural network, is presented for nonlinear identification and modeling. A forward-only computation (FOC) algorithm, which reduces memory usage and computational complexity, is presented for parameter optimization. Meanwhile, the structure of the SORBF neural network is self-organized according to neuron activity and mutual information (MI), to achieve an appropriate network complexity and maintain overall computational efficiency for identification and modeling. The outline of this paper is as follows. Section 2 introduces RBF neural modeling for nonlinear dynamical systems. Section 3 describes neural identification and modeling using the SORBF, which consists of the FOC and structure self-organizing algorithms. Section 4 analyzes the convergence of the SORBF both during the dynamic process phase and after the structure is modified. Section 5 experimentally compares the performance of the proposed approach with that of similar methods. Finally, Section 6 concludes the paper.
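The MI criterion mentioned above requires estimating the mutual information between hidden-neuron output sequences; a high MI flags a redundant neuron. A simple histogram-based estimator (an illustrative stand-in, not the paper's estimator) is:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram (plug-in) estimate of the mutual information I(A;B),
    in nats, between two scalar signals a and b of equal length."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()                 # empirical joint distribution
    p_a = p_ab.sum(axis=1, keepdims=True)      # marginal of A (bins, 1)
    p_b = p_ab.sum(axis=0, keepdims=True)      # marginal of B (1, bins)
    mask = p_ab > 0                            # avoid log(0)
    # I(A;B) = sum p(a,b) * log( p(a,b) / (p(a) p(b)) )
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])))
```

A neuron whose output shares high MI with another neuron's output carries little extra information, which is the intuition behind using MI to decide which neurons to merge or remove.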

Problem formulation

The nonlinear continuous-time dynamical systems considered in this paper are described by the following differential equation (multi-input and multi-output, MIMO): $\dot{y}(t) = f(y(t), x(t)), \quad y(t_0) = y(0)$, where $y(t) = [y_1, y_2, \ldots, y_Q]^T$ and $x(t) \in \mathbb{R}^n$ are the output and input of the dynamical system at time $t$, respectively, $Q$ is the number of outputs, and the function $f(\cdot,\cdot)$ is unknown.
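In discrete time, such a system can be simulated by forward-Euler integration to generate the input-output data used for identification. The following sketch (the function name, step size and example system are ours) illustrates this:

```python
import numpy as np

def simulate(f, y0, x_seq, dt=0.01):
    """Integrate y_dot = f(y, x) with forward Euler.

    f     : callable f(y, x) returning dy/dt
    y0    : initial output vector y(t0)
    x_seq : sequence of input samples, one per step
    Returns the trajectory as an array of shape (len(x_seq)+1, Q).
    """
    ys = [np.asarray(y0, dtype=float)]
    for x in x_seq:
        y = ys[-1]
        ys.append(y + dt * f(y, x))   # Euler step: y_{k+1} = y_k + dt*f(y_k, x_k)
    return np.array(ys)

# Example: a simple stable scalar system y_dot = -y + x under a constant input.
traj = simulate(lambda y, x: -y + x, y0=[0.0], x_seq=[1.0] * 100)
```

The identification task is then to fit an RBF network that maps $(y(t), x(t))$ to the observed dynamics, without ever knowing $f$ explicitly.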

In order to facilitate the discussion of convergence, we then express Eq. (1) in the following form: $\dot{y}(t) = -y(t) + g(y(t), x(t)), \quad y(t_0) = y_0$,

Novel self-organizing radial basis function (SORBF) neural identification and modeling

For our proposed SORBF neural identification and modeling approach, the principle of the process is depicted in Fig. 1. Since the performance of an RBF neural network depends heavily on its structure and parameters, our efforts focus on these two issues.

Convergence analysis

For the proposed SORBF neural identification and modeling approach, the convergence of the SORBF with respect to parameter and topology adjustment is an important issue that calls for careful investigation; it is also crucial to successful applications. First, the convergence of the FOC algorithm is established for the case without structure changes. Second, convergence is investigated in the structure changing phase — both in the neuron splitting and

Experimental studies

The performance of the SORBF was verified by applying it to nonlinear dynamic systems, and was evaluated by comparing its results with those of similar algorithms — the HFA (Peng et al., 2007), the GAP-RBF (Huang et al., 2004) and the MGAP-RBF (Kamalabady & Salahshoor, 2009).

The performance of the network is measured using the root mean-square-error (RMSE) function, defined as $\mathrm{RMSE} = \sqrt{\frac{1}{2N} \sum_{t=1}^{N} (y(t) - \hat{y}(t))^T (y(t) - \hat{y}(t))}$, where $N$ is the number of data
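Reading the definition as the factor $1/(2N)$ applied inside the square root (the "root" in RMSE), this criterion can be computed directly:

```python
import numpy as np

def rmse(y_true, y_pred):
    """RMSE over N samples of Q-dimensional outputs:
    sqrt( (1/(2N)) * sum_t e(t)^T e(t) ), with e(t) = y(t) - y_hat(t)."""
    e = y_true - y_pred                       # (N, Q) residual matrix
    return float(np.sqrt(np.sum(e * e) / (2.0 * len(y_true))))
```

The factor of 2 in the denominator is a common convention inherited from the squared-error cost 0.5*||e||^2; it rescales but does not change the ranking of competing models.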

Conclusion

Increasing attention is being paid to identifying and modeling suitable dynamical models for nonlinear systems. Once a nonlinear dynamical model of a system has been identified, a variety of nonlinear control design approaches can be applied, of which adaptive model reference control is the most common. If the nonlinear dynamical model can be identified for any nonlinear process, a nonlinear dynamical model-based control design approach, such as adaptive model

Acknowledgments

The authors would like to thank Dr. Frank Allgower, Dr. Guo Jiusheng and Dr. R. Dale-Jones for reading the manuscript and providing valuable comments.

Jun-Fei Qiao received B.E. and M.E. degrees in control engineering from Liaoning Technical University, Fu’xin, China, in 1992 and 1995, respectively; and the Ph.D. degree from Northeast University, Shenyang, China, in 1998.

From 1998 to 2000, he was a Postdoctoral Fellow with the School of Automatics, Tianjin University, Tianjin, China. He joined Beijing University of Technology, Beijing, China, where he is currently a Professor. He is the director of the Intelligence Systems Lab. His research interests include neural networks, intelligent systems, self-adaptive/learning systems, and process control systems.

Hong-Gui Han received the B.S. degree in automation from Civil Aviation University of China, Tianjin, China, in 2005 and the M.E. and Ph.D. degrees from the Beijing University of Technology, Beijing, China, in 2007 and 2011, respectively.

His current research interests include neural networks, intelligent systems, and modeling and control of complex processes.

Mr. Han is a member of the IEEE Computational Intelligence Society. He is currently a reviewer of Control Engineering Practice, IEEE Transactions on Neural Networks and Learning Systems, and IEEE Transactions on Fuzzy Systems.

This work was supported by the National Science Foundation of China under Grant 61034008, the Beijing Municipal Natural Science Foundation under Grant 4122006, the Beijing Municipal Education Commission Science and Technology Development Program under Grant KZ201010005005, the Funding Project for Academic Human Resources Development under Grant PHR(IHLB)201006103, and the New Century Excellent Talents Program from the Ministry of Education under Grant NCET-08-0616. The material in this paper was not presented at any conference. This paper was recommended for publication in revised form by Associate Editor Michael A. Henson under the direction of Editor Frank Allgöwer.
