Abstract
In this paper we present a new learning algorithm for pattern classification. We propose a scheme to address the problem that arises in incremental learning when the network structure becomes overly complex due to noise patterns in the training data set. Our approach applies a pruning method that terminates the learning process according to a predefined criterion. An iterative model with a three-layer feedforward structure is then derived from the incremental model by appropriate manipulation; note that this network is not fully connected between the upper and lower layers. To verify the effectiveness of the pruning method, the network is retrained by error back-propagation (EBP). We evaluate the algorithm by comparing the number of nodes in the network against system performance, and the system is shown to be effective.
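The paper's exact pruning criterion is not given in the abstract, so the following is only an illustrative sketch of the general prune-then-retrain workflow it describes: train an oversized three-layer feedforward network, remove the hidden units with the smallest outgoing-weight magnitude (an assumed saliency measure, not necessarily the authors'), and retrain the resulting partially connected network by EBP. All names and hyperparameters here are hypothetical.

```python
import numpy as np

# Sketch of prune-then-retrain for a 3-layer feedforward network.
# Assumption: hidden units with the smallest outgoing-weight magnitude
# are pruned; the sparsified network is then retrained by plain EBP.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W1, W2, active):
    # `active` zeroes the outputs of pruned hidden units, so the network
    # is no longer fully connected between the two layers.
    H = sigmoid(X @ W1) * active
    Y = sigmoid(H @ W2)
    return H, Y

def train_ebp(X, T, W1, W2, active, lr=0.7, epochs=4000):
    for _ in range(epochs):
        H, Y = forward(X, W1, W2, active)
        dY = (Y - T) * Y * (1 - Y)          # output-layer delta
        dH = (dY @ W2.T) * H * (1 - H)      # hidden-layer delta (0 for pruned units)
        W2 -= lr * H.T @ dY                 # pruned rows of H are 0, so stay frozen
        W1 -= lr * X.T @ dH                 # pruned columns get zero gradient
    return W1, W2

# Toy XOR problem with a deliberately oversized hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

n_hidden = 6
W1 = rng.normal(0, 1, (2, n_hidden))
W2 = rng.normal(0, 1, (n_hidden, 1))
active = np.ones(n_hidden)

W1, W2 = train_ebp(X, T, W1, W2, active)

# Prune the two hidden units with the smallest outgoing-weight magnitude,
# then retrain the smaller network by EBP.
saliency = np.abs(W2).sum(axis=1)
active[np.argsort(saliency)[:2]] = 0.0
W1, W2 = train_ebp(X, T, W1, W2, active)

_, Y = forward(X, W1, W2, active)
print((Y > 0.5).astype(int).ravel())  # ideally recovers the XOR targets
```

Because pruned units output zero, their deltas vanish and both weight matrices freeze automatically on the pruned connections; no explicit weight mask is needed during retraining.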
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Lee, J.C., Lee, W.D., Han, M.S. (1999). An Algorithm to Find the Optimized Network Structure in an Incremental Learning. In: Zhong, N., Skowron, A., Ohsuga, S. (eds.) New Directions in Rough Sets, Data Mining, and Granular-Soft Computing. RSFDGrC 1999. Lecture Notes in Computer Science, vol. 1711. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-48061-7_61
DOI: https://doi.org/10.1007/978-3-540-48061-7_61
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-66645-5
Online ISBN: 978-3-540-48061-7
eBook Packages: Springer Book Archive