Abstract
Local model interpretability is an important issue for neurofuzzy local linear models applied to nonlinear state estimation, process modelling, and control. This paper proposes a new fuzzy membership function with desirable properties for improving the interpretability of neurofuzzy models. A learning algorithm for constructing neurofuzzy models is also derived, based on the new membership function and a hybrid objective function that aims to achieve an optimal balance between global model accuracy and local model interpretability. Experimental results show that the proposed approach is simple and effective in improving the interpretability of Takagi-Sugeno fuzzy models while keeping model accuracy at a satisfactory level.
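To make the trade-off mentioned in the abstract concrete, the following is a minimal sketch of a *hybrid* objective for a one-dimensional Takagi-Sugeno model: a weighted sum of a global fit term and a local fit term that penalizes each rule's linear model for deviating from the data it covers. This is a generic illustration, not the paper's method: the Gaussian membership function, the specific local-error term, and the trade-off weight `lam` are all assumptions for the sketch (the paper proposes its own membership function and objective).

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    # Gaussian membership function, used here as a stand-in
    # (the paper proposes a different, purpose-built MF).
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def hybrid_objective(x, y, centers, sigmas, coeffs, lam=0.5):
    """Weighted sum of global and local fit errors for a 1-D TS model.

    x, y        : (N,) inputs and targets
    centers,
    sigmas      : (R,) per-rule membership-function parameters
    coeffs      : (R, 2) local linear models y_i = a_i * x + b_i
    lam         : hypothetical trade-off weight between the two terms
    """
    mu = np.stack([gaussian_mf(x, c, s)
                   for c, s in zip(centers, sigmas)])          # (R, N) firing strengths
    w = mu / mu.sum(axis=0)                                     # normalized weights
    local_out = coeffs[:, 0:1] * x[None, :] + coeffs[:, 1:2]    # (R, N) local outputs
    y_hat = (w * local_out).sum(axis=0)                         # global TS output
    j_global = np.mean((y - y_hat) ** 2)                        # global accuracy term
    # Local interpretability term: each rule's linear model should
    # fit the data in its own region, weighted by membership degree.
    j_local = np.mean(mu * (y[None, :] - local_out) ** 2)
    return lam * j_global + (1.0 - lam) * j_local
```

With `lam = 1` this reduces to a purely global least-squares criterion; decreasing `lam` forces each local model toward a valid linearization of the data in its own fuzzy region, which is the interpretability property the abstract refers to.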
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Gan, J.Q., Zhou, SM. (2006). A New Fuzzy Membership Function with Applications in Interpretability Improvement of Neurofuzzy Models. In: Huang, DS., Li, K., Irwin, G.W. (eds) Computational Intelligence. ICIC 2006. Lecture Notes in Computer Science(), vol 4114. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-37275-2_25
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-37274-5
Online ISBN: 978-3-540-37275-2
eBook Packages: Computer Science (R0)