Genetic optimization-driven multi-layer hybrid fuzzy neural networks

https://doi.org/10.1016/j.simpat.2005.10.009

Abstract

In this study, we introduce a new architecture of genetically optimized hybrid fuzzy neural networks (gHFNNs) and offer a comprehensive design methodology that supports their development. The gHFNN rule-based architecture results from a synergistic combination of Fuzzy Neural Networks (FNNs) and Polynomial Neural Networks (PNNs). The FNN contributes the premise part of the overall gHFNN, while the consequence part is designed using PNNs. The optimization of the FNN is realized with the aid of standard back-propagation learning combined with genetic optimization. The development of the PNN builds on the extended Group Method of Data Handling (GMDH) and Genetic Algorithms (GAs). Through this consecutive process of structural and parametric optimization, an optimized topology of the PNN is generated dynamically. The performance of the gHFNN is evaluated through a series of numeric experiments. A comparative analysis shows that the proposed gHFNN exhibits higher accuracy and stronger predictive capabilities than other neurofuzzy models reported in the literature.
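
As a rough illustration of this two-stage composition, the following minimal Python sketch passes an input through a fuzzy premise stage (Gaussian membership functions with a product T-norm) feeding a single GMDH-style quadratic consequence node. All names, dimensions, and the choice of membership function are assumptions made for the sketch, not the paper's exact design.

    import numpy as np

    def fnn_premise(x, centers, sigmas):
        """Fuzzy premise: normalized rule firing strengths from Gaussian grades."""
        grades = np.exp(-((x - centers) ** 2) / (2.0 * sigmas ** 2))
        firing = grades.prod(axis=1)          # product T-norm across inputs
        return firing / firing.sum()          # normalize over the rules

    def pnn_node(z1, z2, coeffs):
        """One GMDH-style polynomial node: quadratic form of two inputs."""
        basis = np.array([1.0, z1, z2, z1 * z2, z1 ** 2, z2 ** 2])
        return coeffs @ basis

    # Toy run: 2 inputs, 3 rules, one consequence node on two premise outputs.
    rng = np.random.default_rng(0)
    x = np.array([0.2, -0.5])
    centers, sigmas = rng.normal(size=(3, 2)), np.ones((3, 2))
    strengths = fnn_premise(x, centers, sigmas)
    print(pnn_node(strengths[0], strengths[1], rng.normal(size=6)))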

Section snippets

Introductory remarks

With the continuously growing demand for models capable of coping with complex systems that are inherently associated with nonlinearity, high-order dynamics, time-varying behavior, and imprecise measurements, there is a need for a suitable modeling environment. Efficient modeling techniques should allow for a selection of pertinent variables and a formation of highly representative datasets. The models should be able to take advantage of existing domain knowledge (such as prior experience of …

Conventional hybrid fuzzy neural networks

The architectures of conventional hybrid fuzzy neural networks (HFNNs) [12], [13] result from a synergy between two general constructs, the FNN and the PNN. Based on the different PNN topologies, we distinguish two kinds of HFNN architectures (see Table 1), referred to as the basic and the modified one. Moreover, for each of these architectures we identify two cases. Considering the connection point, if the number of input variables of the PNN used in the …

The architecture and the development of genetically optimized HFNNs (gHFNNs)

The genetically optimized HFNN (gHFNN) emerges as a genetically optimized multi-layer perceptron architecture built on a fuzzy set-based FNN together with GAs and the GMDH method. In this sense, these networks result from a synergy between FNNs [14], [15] and PNNs [10].
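
One way to picture the GA/GMDH synergy is a chromosome that encodes, for each polynomial node of a PNN layer, which two inputs it consumes and which polynomial type it uses; the GA then recombines and mutates these structural genes. The encoding below is an assumed, simplified stand-in (the crossover and mutation rates are those reported in the experimental setup later in the paper).

    import random

    N_INPUTS = 6                     # candidate inputs entering the layer
    N_NODES = 4                      # polynomial nodes placed in the layer
    POLY_TYPES = ["linear", "quadratic", "modified quadratic"]

    def random_gene():
        """(pair of input indices, polynomial type) for one PNN node."""
        return (tuple(random.sample(range(N_INPUTS), 2)),
                random.choice(POLY_TYPES))

    def random_chromosome():
        return [random_gene() for _ in range(N_NODES)]

    def crossover(a, b, rate=0.75):
        """Single-point crossover over the list of node genes."""
        if random.random() > rate:
            return a[:], b[:]
        cut = random.randrange(1, N_NODES)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]

    def mutate(chrom, rate=0.065):
        """Re-draw individual node genes with a small probability."""
        return [random_gene() if random.random() < rate else g for g in chrom]

    c1, c2 = crossover(random_chromosome(), random_chromosome())
    print(mutate(c1))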

An overall design procedure of the gHFNNs

We now concentrate on the optimization flow of the algorithm and link it to the detailed architectural development of the network.
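
A compact, self-contained skeleton of such a flow is sketched below: an outer genetic loop searches over binary structure strings, and every candidate is scored only after a short inner gradient ("BP-like") refinement of its parameters. The fitness function and the toy quadratic loss are placeholders invented for the sketch; only the alternation of structural and parametric stages mirrors the text.

    import random

    def inner_bp(params, steps=50, lr=0.1):
        """Toy parametric stage: gradient descent on a sum-of-squares loss."""
        for _ in range(steps):
            params = [p - lr * 2.0 * p for p in params]   # d/dp (p^2) = 2p
        return params

    def fitness(structure):
        """Score a structure by the loss reached after inner learning."""
        params = inner_bp([float(bit) for bit in structure])
        return -sum(p * p for p in params)

    def ga_design(n_bits=8, pop=20, gens=15, cx=0.75, mut=0.065):
        population = [[random.randint(0, 1) for _ in range(n_bits)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=fitness, reverse=True)
            survivors = population[: pop // 2]            # elitist selection
            children = []
            while len(children) < pop - len(survivors):
                a, b = random.sample(survivors, 2)
                cut = random.randrange(1, n_bits) if random.random() < cx else 0
                child = [1 - g if random.random() < mut else g
                         for g in a[:cut] + b[cut:]]
                children.append(child)
            population = survivors + children
        return max(population, key=fitness)

    print(ga_design())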

Experimental studies

The performance of the gHFNN is illustrated with the gas furnace time series of Box and Jenkins [16]. This time series (296 input–output pairs) has been studied intensively in the literature [1], [2], [3], [4], [5], [6], [12], [13], [14], [15], [16]. The delayed terms of the methane gas flow rate, u(t), and the carbon dioxide density, y(t), are used as system input variables. The crossover rate of the GA is set to 0.75 and the probability of mutation to 0.065. We use two types of …
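
The delayed terms can be arranged into regression pairs as sketched below. The particular lags u(t-3) and y(t-1) are one choice commonly used with this series; the snippet does not state which lags the paper selects, and the synthetic arrays merely stand in for the 296 measured pairs, which are not reproduced here.

    import numpy as np

    def make_delayed_pairs(u, y, u_lag=3, y_lag=1):
        """Form inputs [u(t-u_lag), y(t-y_lag)] paired with target y(t)."""
        start = max(u_lag, y_lag)
        X = np.column_stack([u[start - u_lag:len(u) - u_lag],
                             y[start - y_lag:len(y) - y_lag]])
        return X, y[start:]

    rng = np.random.default_rng(1)
    u = rng.normal(size=296)      # stand-in for methane gas flow rate
    y = rng.normal(size=296)      # stand-in for carbon dioxide density
    X, target = make_delayed_pairs(u, y)
    print(X.shape, target.shape)  # (293, 2) (293,)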

Concluding remarks

This comprehensive design methodology has resulted in parametrically as well as structurally optimized networks. Regarding the premise structure of the gHFNN, the optimization of the rule-based FNN hinges on genetic optimization combined with back-propagation (BP) learning. The GA auto-tunes the parameters of the membership functions, while BP produces the optimal parameters of the consequent polynomials of the rules. The gPNN that forms the consequent structure of the gHFNN …
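
This division of labor can be illustrated on a toy one-dimensional system: a hill-climbing perturbation (standing in for the GA) adjusts the Gaussian membership parameters of two rules, while gradient descent (standing in for BP) fits the linear rule consequents. Everything below is an illustrative assumption, not the paper's training procedure.

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(-1.0, 1.0, 50)
    y = np.sin(np.pi * x)                            # toy target series

    def firing(x, centers, sigmas):
        """Normalized Gaussian rule activations, shape (N, n_rules)."""
        g = np.exp(-((x[:, None] - centers) ** 2) / (2.0 * sigmas ** 2))
        return g / g.sum(axis=1, keepdims=True)

    def predict(x, centers, sigmas, pq):
        """Blend linear consequents p_i + q_i * x by firing strengths."""
        return (firing(x, centers, sigmas)
                * (pq[:, 0] + pq[:, 1] * x[:, None])).sum(axis=1)

    centers, sigmas = np.array([-0.5, 0.5]), np.array([0.4, 0.4])
    pq = np.zeros((2, 2))
    for _ in range(30):
        for _ in range(20):                          # "BP" stage: consequents
            w = firing(x, centers, sigmas)
            err = predict(x, centers, sigmas, pq) - y
            pq[:, 0] -= 0.1 * (err[:, None] * w).mean(axis=0)
            pq[:, 1] -= 0.1 * (err[:, None] * w * x[:, None]).mean(axis=0)
        cand_c = centers + rng.normal(scale=0.05, size=2)   # "GA" stage: premise
        cand_s = np.abs(sigmas + rng.normal(scale=0.05, size=2))
        if (np.mean((predict(x, cand_c, cand_s, pq) - y) ** 2)
                < np.mean((predict(x, centers, sigmas, pq) - y) ** 2)):
            centers, sigmas = cand_c, cand_s
    print(np.mean((predict(x, centers, sigmas, pq) - y) ** 2))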

Acknowledgement

This work was supported by Korea Research Foundation Grant (KRF-2004-002-D00257).

References (21)

There are more references available in the full text version of this article.

Cited by (18)

  • Neural network design and model reduction approach for black box nonlinear system identification with reduced number of parameters

    2013, Neurocomputing
    Citation Excerpt:

    However, these methods still suffer from slow convergence [31]. More recently, evolutionary techniques such as genetic algorithms (GAs) and particle swarm optimization (PSO) have been employed to optimize weights, layers, number of input–output nodes, neurons and derive optimal neural network structures [31,34]. Ge et al. [35] proposed a learning algorithm centered on a dissimilation particle swarm optimization, serving to compute the optimal synaptic weights, the learning rate, and the architecture evolution (optimal number of neurons in the hidden layer).

  • Genetic algorithms based logic-driven fuzzy neural networks for stability assessment of rubble-mound breakwaters

    2012, Applied Ocean Research
    Citation Excerpt:

    They concluded that the presented type of FNNs needed to improve its computational performance, since the maximum value of the correlation coefficient between predicted stability numbers and measurements from the hydraulic model tests [19] was limited to 0.788. One common way to overcome this problem in such networks is considering a hybrid learning scheme in which structural and parametric optimisation are implemented separately [20,21]. Structural optimisation generally aims to construct a blueprint of the network by discovering the most meaningful connections that shape up an underlying architecture which reflects the logical nature of the data.

  • A PSO-based adaptive fuzzy PID-controllers

    2012, Simulation Modelling Practice and Theory