Abstract
Learning vector quantization (LVQ) is a neural network model used for complex pattern classification tasks in which typical feedforward networks do not perform well. Fault tolerance is an important property of neural networks when they are used in critical applications. Many methods for enhancing the fault tolerance of neural networks have been proposed, but most of them target feedforward networks; scarcely any address the fault tolerance of LVQ networks. In this paper, I propose a dependability measure for LVQ neural networks and present two ideas, border emphasis and the encouragement of coupling, that improve the learning algorithm so as to increase dependability. Experimental results show that the proposed algorithm trains networks that achieve high dependability.
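For readers unfamiliar with the base model, the following is a minimal sketch of standard LVQ1 training (Kohonen's original rule), not the fault-tolerant variant proposed in the paper; the function names and hyperparameter values are illustrative assumptions.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Standard LVQ1: for each sample, move the nearest prototype
    toward the sample if their labels match, away otherwise."""
    P = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            # Winner = prototype with smallest Euclidean distance.
            w = np.argmin(np.linalg.norm(P - x, axis=1))
            if proto_labels[w] == label:
                P[w] += lr * (x - P[w])   # attract toward the sample
            else:
                P[w] -= lr * (x - P[w])   # repel away from the sample
    return P

def lvq_classify(x, P, proto_labels):
    """Assign the label of the nearest prototype."""
    return proto_labels[np.argmin(np.linalg.norm(P - x, axis=1))]
```

A fault-tolerance-oriented training method, such as the one the paper proposes, would modify where the prototypes settle (e.g., near class borders or coupled in groups) so that the failure of individual units degrades classification accuracy gracefully.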
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Minohara, T. (2006). Fault Tolerant Training of Neural Networks for Learning Vector Quantization. In: King, I., Wang, J., Chan, LW., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4233. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893257_87
DOI: https://doi.org/10.1007/11893257_87
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-46481-5
Online ISBN: 978-3-540-46482-2
eBook Packages: Computer Science (R0)