A Modular Neural Network Architecture with Additional Generalization Abilities for Large Input Vectors

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

This paper proposes a two-layer modular neural system. The basic building blocks of the architecture are multilayer perceptrons trained with the backpropagation algorithm. Because of the modular structure, the number of weight connections is smaller than in a fully connected multilayer perceptron. The modular network is designed to combine two different approaches to generalization known from connectionist and logical neural networks, which enhances its generalization abilities. The architecture introduced here is especially useful for problems with a large number of input attributes.
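
The abstract does not give the exact module sizes, input grouping, or training details, so the sketch below only illustrates the general idea: a large input vector is split into groups, each group is handled by its own small backpropagation-trained multilayer perceptron, and a second-layer perceptron combines the module outputs. All names and dimensions here (n_groups, group_size, the expert and combiner widths) are illustrative assumptions, not the authors' configuration; the closing weight count simply shows why such a modular layout needs fewer connections than a monolithic fully connected network.

# Minimal sketch (not the paper's exact design): a two-layer modular network
# built from small MLP modules. All sizes below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(layer_sizes):
    # Random weights and zero biases for a fully connected MLP.
    return [(0.1 * rng.standard_normal((n_in, n_out)), np.zeros(n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def mlp_forward(params, x):
    # Plain sigmoid forward pass; backpropagation training is omitted.
    for w, b in params:
        x = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return x

def count_weights(params):
    return sum(w.size + b.size for w, b in params)

# Hypothetical problem size: a 256-element input split into 16 groups of 16.
n_inputs, n_groups = 256, 16
group_size = n_inputs // n_groups

# First layer of modules: one small MLP per input group.
experts = [mlp_params([group_size, 8, 4]) for _ in range(n_groups)]
# Second layer: a combining MLP over the concatenated module outputs.
combiner = mlp_params([n_groups * 4, 16, 1])

def modular_forward(x):
    parts = [mlp_forward(experts[k], x[k * group_size:(k + 1) * group_size])
             for k in range(n_groups)]
    return mlp_forward(combiner, np.concatenate(parts))

x = rng.standard_normal(n_inputs)
print("modular output:", modular_forward(x))

# Connection count versus a monolithic fully connected MLP of similar depth.
modular = sum(count_weights(e) for e in experts) + count_weights(combiner)
monolithic = count_weights(mlp_params([n_inputs, 128, 16, 1]))
print("weights - modular:", modular, "fully connected:", monolithic)

Each module (and the combining network) would still be trained with standard backpropagation, as stated in the abstract; training code is omitted to keep the sketch short.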

Copyright information

© 1998 Springer-Verlag Wien

About this paper

Cite this paper

Schmidt, A., Bandar, Z. (1998). A Modular Neural Network Architecture with Additional Generalization Abilities for Large Input Vectors. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6492-1_8

  • DOI: https://doi.org/10.1007/978-3-7091-6492-1_8

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83087-1

  • Online ISBN: 978-3-7091-6492-1

  • eBook Packages: Springer Book Archive
