
Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3697)

Included in the following conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

It is widely believed in the pattern recognition field that the number of examples needed to achieve an acceptable level of generalization depends on the number of independent parameters required to specify the network configuration. This paper presents a neural network for the classification of high-dimensional patterns. The proposed architecture uses a layer that extracts the global features of patterns; this layer contains neurons whose weights are induced by a neural subnetwork. The method thereby reduces the number of independent parameters describing the layer to the number of parameters describing the inducing subnetwork.
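The paper defines the exact architecture; as a rough illustration of the idea only, the sketch below shows one way such an induced layer can be built. It is a minimal NumPy sketch with hypothetical dimensions (28×28 input patterns, 10 induced neurons) and a toy two-layer inducing subnetwork that maps each weight's coordinates (input row, input column, neuron index) to the weight's value.

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet_forward(params, coords):
    # Tiny two-layer MLP: maps a weight's coordinates to its value.
    W1, b1, W2, b2 = params
    h = np.tanh(coords @ W1 + b1)
    return h @ W2 + b2

# Hypothetical sizes: 28x28 input patterns, 10 neurons in the induced layer.
H, W, n_out = 28, 28, 10
coord_dim, hidden = 3, 16

# Parameters of the inducing subnetwork: 3*16 + 16 + 16*1 + 1 = 81 in total.
sub_params = (
    rng.normal(scale=0.3, size=(coord_dim, hidden)),
    np.zeros(hidden),
    rng.normal(scale=0.3, size=(hidden, 1)),
    np.zeros(1),
)

# One coordinate triple (input row, input column, neuron index) per induced
# weight, scaled to [-1, 1].
r, c, u = np.meshgrid(
    np.linspace(-1.0, 1.0, H),
    np.linspace(-1.0, 1.0, W),
    np.linspace(-1.0, 1.0, n_out),
    indexing="ij",
)
coords = np.stack([r, c, u], axis=-1).reshape(-1, coord_dim)

# Induce the full 784x10 weight matrix of the classification layer.
W_induced = subnet_forward(sub_params, coords).reshape(H * W, n_out)

x = rng.normal(size=(1, H * W))   # one flattened input pattern
scores = x @ W_induced            # response of the induced layer
print(scores.shape)               # -> (1, 10)
```

In this sketch the induced layer has 7,840 weights, yet only the subnetwork's 81 parameters are free: training would update those 81 values by backpropagating through the subnetwork, which is the kind of parameter reduction the abstract refers to.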

An erratum to this chapter can be found at http://dx.doi.org/10.1007/11550907_163.

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Golak, S. (2005). Induced Weights Artificial Neural Network. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds) Artificial Neural Networks: Formal Models and Their Applications – ICANN 2005. ICANN 2005. Lecture Notes in Computer Science, vol 3697. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550907_47

  • DOI: https://doi.org/10.1007/11550907_47

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28755-1

  • Online ISBN: 978-3-540-28756-8

  • eBook Packages: Computer Science (R0)
