A novel hybrid algorithm for function approximation

https://doi.org/10.1016/j.eswa.2006.09.006

Abstract

This paper introduces a novel hybrid algorithm for function approximation, built on Takagi and Sugeno’s fuzzy model. A coarse tuning phase based on Takagi and Sugeno’s fuzzy model identifies the fuzzy structure, and a fuzzy cluster validity index determines the optimal number of clusters. To obtain a more precise model, a genetic algorithm (GA) and particle swarm optimization (PSO) then fine-tune the parameter sets of the premise and consequent parts of the fuzzy model. The proposed algorithm is successfully applied to three test examples and, compared with other existing approaches in the literature, proves effective for modeling function approximation.

Introduction

Fuzzy modeling is one of the most important techniques in fuzzy logic, offering an excellent capability to deal with complex systems (Bezdek, 1981, Chuang et al., 2001, Yager and Filev, 1994). In building a fuzzy model, the central issues are structure identification and parameter estimation. Structure identification concerns the appropriate number of fuzzy rules, while parameter estimation concerns the calculation of parameters that represent the system reliably. Among the various fuzzy modeling techniques, one of the most prominent models was proposed by Takagi and Sugeno (1985). This fuzzy model can describe a complex system given sufficient rules and training data (Chuang et al., 2001, Dave and Krishnapurum, 1997, Nozaki et al., 1997). However, its fuzzy subspaces must be defined in advance, after which the consequent parameters are obtained through a Kalman-filter type of estimation (Chuang et al., 2001). Various alternative approaches for modeling Takagi and Sugeno’s fuzzy rules have been proposed in the literature (Dickerson and Kosko, 1996, Frigui and Krishnapuram, 1999, Jang et al., 1997, Klawonn and Kruse, 1997, Kroll, 1996). In general, the input space is decomposed into fuzzy subspaces according to the input data, and the system or function is then approximated in each subspace by a simple linear regression function. The main drawback of these approaches is that they do not account for the interaction between input and output variables (Chuang et al., 2001). Recently, some authors have employed two phases of learning for fuzzy models: a coarse learning phase and a fine-tuning phase (Chuang et al., 2001; Kim et al., 1997–1998; Tsekouras et al., 2005). The idea is to formulate the fuzzy structure from the input and output variables of the system, so that the parameter sets of the premise and consequent parts are first determined approximately. These parameter sets are then adjusted precisely in the fine-tuning phase to improve the modeling accuracy. However, the coarse tuning phase of these methods does not incorporate a validity index, and the number of fuzzy rules is not determined by optimization algorithms. In the fine-tuning phase, a gradient descent approach is used to adjust the parameter sets of the premise and consequent parts, and owing to the nature of gradient descent, the resulting parameter set may be trapped in local optima (Jang et al., 1997, Lin and Lee, 1996, Michalewicz, 1994).

In this paper, a novel hybrid algorithm for function approximation is proposed, with the aim of providing a systematic approach to the problem. The proposed algorithm likewise consists of two phases: a coarse tuning phase and a fine tuning phase. The coarse tuning phase comprises structure identification and parameter estimation. Structure identification uses the well-known Xie-Beni (XB) fuzzy cluster validity index to determine the optimal number of fuzzy rules; the XB index assesses the relative merits of a partitioned structure (Hanifi and Abdulkadir, 2006, Pakhira et al., 2005). The corresponding parameter sets of the premise and consequent parts are then roughly obtained in the coarse tuning phase. In the fine tuning phase, these parameter sets are precisely adjusted by a hybrid of a genetic algorithm (GA) and particle swarm optimization (PSO) to minimize the root mean square error (RMSE) over the data set.
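To make the role of the XB index concrete, the following is a minimal sketch of its standard form (compactness divided by separation), assuming a fuzzy partition matrix such as one produced by fuzzy c-means and a fuzzifier of m = 2; the paper's exact formulation may differ in detail, and the function name and NumPy implementation are illustrative only.

```python
import numpy as np

def xie_beni_index(X, centers, U, m=2.0):
    """Xie-Beni (XB) cluster validity index: compactness / separation.

    X       : (N, d) data matrix
    centers : (c, d) cluster centres
    U       : (c, N) fuzzy membership matrix (memberships of each sample sum to 1)
    m       : fuzzifier exponent (m = 2 is a common default, assumed here)
    Lower XB values indicate a better-partitioned structure.
    """
    N = X.shape[0]
    # Compactness: fuzzy-weighted squared distances of samples to their centres.
    sq_dist = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) ** 2   # (c, N)
    compactness = np.sum((U ** m) * sq_dist)
    # Separation: smallest squared distance between any two distinct centres.
    centre_dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2) ** 2
    np.fill_diagonal(centre_dist, np.inf)
    separation = centre_dist.min()
    return compactness / (N * separation)
```

In the coarse tuning phase, one would run fuzzy clustering for each candidate number of clusters and keep the count that minimizes this index, which then fixes the number of fuzzy rules.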

The rest of this paper is organized as follows. Section 2 describes the concept of Takagi and Sugeno’s fuzzy model used in this paper. Section 3 reviews the basic GA and PSO concepts. In Section 4, the structure of the proposed algorithm is discussed in detail. Section 5 reports the application of the proposed algorithm to three modeling examples and illustrates its effectiveness; the simulation results show that it outperforms other existing algorithms. Concluding remarks are presented in Section 6.

Section snippets

The concept of Takagi and Sugeno’s fuzzy model

The fuzzy model proposed by Takagi and Sugeno is one of the most efficient fuzzy models for representing a general class of nonlinear systems or functions (Tsekouras et al., 2005). It is based on the observation that the input-output relation can be viewed as an expansion over a piecewise linear partition. Typically, Takagi and Sugeno’s fuzzy model consists of IF–THEN rules of the following form:

$$R_i:\ \text{If } x_1 \text{ is } A_1^i(\Omega_1^i) \text{ and } x_2 \text{ is } A_2^i(\Omega_2^i),\ \ldots,\ x_m \text{ is } A_m^i(\Omega_m^i)\ \text{then } y^i = f_i(x_1, x_2, \ldots, x_m; \mathbf{a}^i) = a_0^i + a_1^i x_1 + \cdots + a_m^i x_m$$
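As an illustration of how such a rule base produces an output, the sketch below evaluates a Takagi-Sugeno model using Gaussian membership functions and a firing-strength-weighted average of the linear consequents; the Gaussian shape for the Ω parameters is an assumption made here for concreteness, not a detail given in the snippet above.

```python
import numpy as np

def ts_model_output(x, centers, sigmas, a):
    """Evaluate a Takagi-Sugeno fuzzy model at input vector x.

    x       : (m,) input vector
    centers : (R, m) Gaussian membership centres (premise parameters)
    sigmas  : (R, m) Gaussian membership widths  (premise parameters)
    a       : (R, m + 1) consequent coefficients [a0, a1, ..., am] per rule
    Gaussian membership functions are assumed; the paper's Omega parameters
    may describe a different membership shape.
    """
    # Firing strength of each rule: product of per-dimension memberships.
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)     # (R, m)
    w = np.prod(mu, axis=1)                               # (R,)
    # Linear consequent of each rule: y_i = a0 + a1*x1 + ... + am*xm.
    y_rule = a[:, 0] + a[:, 1:] @ x                       # (R,)
    # Defuzzified output: firing-strength-weighted average of rule outputs.
    return float(np.dot(w, y_rule) / (np.sum(w) + 1e-12))
```

The premise parameters (centres and widths) and the consequent coefficients together form the parameter set that the coarse and fine tuning phases estimate.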

The introduction of genetic algorithm and particle swarm optimization

The proposed algorithm relies on a genetic algorithm (GA) and particle swarm optimization (PSO) to fine-tune the parameter set obtained from the coarse tuning phase. The general idea is to combine the advantages of PSO and GA, namely their abilities to exploit and to explore the search space. In this section, the basic concepts of GA and PSO are introduced.
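For reference, a minimal sketch of the core PSO update is given below, with c1 and c2 as the cognitive and social acceleration coefficients and an inertia weight w that is commonly decreased over iterations; this is the textbook update only, not a reproduction of the paper's specific GA-PSO hybrid.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w, c1=1.0, c2=1.0, rng=np.random):
    """One PSO velocity/position update for a swarm of candidate parameter sets.

    pos, vel : (P, D) current particle positions and velocities
    pbest    : (P, D) best position found by each particle so far
    gbest    : (D,)   best position found by the whole swarm
    w        : inertia weight (often decreased linearly, e.g. from 0.9 to 0.4)
    c1, c2   : cognitive and social acceleration coefficients
    """
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    return pos + vel, vel
```

In a hybrid scheme, GA operators such as crossover and mutation would act on the same population between PSO updates, trading off exploration and exploitation.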

The proposed algorithm

Function approximation aims to obtain a model $\hat{f}$ from a set of observations $\{(\mathbf{x}^{(1)}, y_1), (\mathbf{x}^{(2)}, y_2), \ldots, (\mathbf{x}^{(N)}, y_N)\}$ with $\mathbf{x}^{(i)} \in \mathbb{R}^m$ and $y_i \in \mathbb{R}$, where $N$ is the number of training data, $\mathbf{x}^{(i)} = [x_1^{(i)}, x_2^{(i)}, \ldots, x_m^{(i)}]$ is the $i$th input vector, and $y_i$ is the desired output for the input $\mathbf{x}^{(i)}$. Basically, we want to construct a model $\hat{f}$ that represents the real model in terms of its input–output relationship. In the proposed algorithm, the fuzzy model is first generated by the coarse tuning phase. Then, a
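As a minimal sketch of this setup, assuming NumPy arrays for the observation set, the root mean square error over the training data, which serves as the fitness that the fine-tuning phase minimizes, can be written as follows; the function and argument names are illustrative only.

```python
import numpy as np

def rmse(model, X, y):
    """Root mean square error of a candidate model f_hat on the training set.

    model : callable mapping an (m,) input vector to a scalar prediction
    X     : (N, m) input vectors x^(1), ..., x^(N)
    y     : (N,)   desired outputs y_1, ..., y_N
    """
    preds = np.array([model(x) for x in X])
    return float(np.sqrt(np.mean((preds - y) ** 2)))
```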

Simulation results

In the simulations, a set of parameter values must first be specified. The initial population size for both GA and PSO is 20. The crossover probability is $P_c = 0.5$, the mutation probability $P_m = 0.02$, MAXGEN = 5000, $c_1 = 1$, $c_2 = 1$, $k_1 = 0.9$, $k_2 = 0.4$ and $l = 10$. The values of $\min_r$ and $\max_r$ for the GA are set to the minimum and maximum values of the input variables. Three examples are tested to verify the validity of the proposed algorithm. The first example is a nonlinear function (F1) taken from (Kim et al., 1997, Kim et al., 1997–1998
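For readability, these settings can be collected in a configuration object such as the sketch below; the comments marked "assumed" are interpretations of symbols (e.g. $k_1$/$k_2$ as inertia-weight bounds, $l$ as a coding length) that the snippet above does not define.

```python
from dataclasses import dataclass

@dataclass
class TuningConfig:
    """Simulation settings for the GA/PSO fine-tuning phase (Section 5)."""
    pop_size: int = 20      # initial population size shared by GA and PSO
    pc: float = 0.5         # GA crossover probability
    pm: float = 0.02        # GA mutation probability
    max_gen: int = 5000     # maximum number of generations / iterations
    c1: float = 1.0         # PSO cognitive acceleration coefficient
    c2: float = 1.0         # PSO social acceleration coefficient
    k1: float = 0.9         # assumed: upper bound of the PSO inertia weight
    k2: float = 0.4         # assumed: lower bound of the PSO inertia weight
    l: int = 10             # assumed: per-variable coding length of GA individuals
    # min_r / max_r: per-variable search bounds, set from the input data range.
```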

Conclusion

In this paper, a novel hybrid algorithm for function approximation is proposed that simultaneously defines the fuzzy rules and finds the parameters of the premise and consequent parts. The optimal number of fuzzy rules is determined using the XB validity index. A fine-tuning algorithm based on GA and PSO is further employed to obtain a more precise model. The proposed algorithm is tested on various examples and indeed showed

References (28)

  • J.A. Dickerson et al.

    Fuzzy function approximation with ellipsoidal rules

    IEEE Transactions on Systems, Man, and Cybernetics

    (1996)
  • H. Frigui et al.

    A robust competitive clustering algorithm with applications in computer vision

    IEEE Transactions on Pattern Analysis and Machine Intelligence

    (1999)
  • M. Gen et al.

    Genetic algorithms and engineering design

    (1997)
  • D.E. Goldberg

    Genetic algorithms in search, optimization, and machine learning

    (1989)