
Fault Tolerant Training of Neural Networks for Learning Vector Quantization

  • Conference paper
Neural Information Processing (ICONIP 2006)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4233)


Abstract

Learning vector quantization (LVQ) is a neural-network model used for complex pattern-classification tasks in which typical feedforward networks do not perform well. Fault tolerance is an important property of neural networks when they are used in critical applications. Many methods for enhancing the fault tolerance of neural networks have been proposed, but most of them target feedforward networks, and there are scarcely any methods for the fault tolerance of LVQ networks. In this paper, I propose a dependability measure for LVQ neural networks and present two ideas, border emphasis and encouragement of coupling, that improve the learning algorithm so as to increase dependability. Experimental results show that the proposed algorithm trains networks that achieve high dependability.
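The paper builds on the standard LVQ training rule, in which labeled prototype vectors compete for each input and the winner is attracted toward same-class samples and repelled from different-class ones. As background, here is a minimal sketch of plain LVQ1 (not the paper's fault-tolerant variant) in Python with numpy; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Plain LVQ1: for each sample, move the nearest prototype
    toward it if their labels match, away from it otherwise."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = int(np.argmin(np.linalg.norm(W - x, axis=1)))  # winning prototype
            if proto_labels[j] == label:
                W[j] += lr * (x - W[j])   # attract toward same-class sample
            else:
                W[j] -= lr * (x - W[j])   # repel from different-class sample
    return W

def lvq_classify(x, W, proto_labels):
    """Assign the label of the nearest prototype."""
    return proto_labels[int(np.argmin(np.linalg.norm(W - x, axis=1)))]

# Toy usage: two well-separated 2-D classes, one prototype each.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.0]])
y = np.array([0, 0, 1, 1])
W = lvq1_train(X, y, np.array([[0.2, 0.2], [0.8, 0.8]]), np.array([0, 1]))
```

The paper's contribution modifies how such training proceeds (emphasizing class borders and encouraging coupling between prototypes) so that the trained network degrades gracefully under faults; those modifications are not reproduced in this sketch.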




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Minohara, T. (2006). Fault Tolerant Training of Neural Networks for Learning Vector Quantization. In: King, I., Wang, J., Chan, LW., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4233. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893257_87


  • DOI: https://doi.org/10.1007/11893257_87

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46481-5

  • Online ISBN: 978-3-540-46482-2

  • eBook Packages: Computer Science, Computer Science (R0)
