
Classification with Synaptic Radial Basis Units

  • Conference paper
  • In: Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence (IWANN 2001)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2084)

Abstract

A new type of multilayer network, including a certain class of Radial Basis Units (RBUs) whose kernels are implemented at the synaptic level, is compared through simulations with the Multi-Layer Perceptron (MLP) on a classification problem with high interference between class distributions. The simulations show that the new network yields classification error rates close to those of the Optimum Bayesian Classifier (OBC), whereas the MLP shows an inherent weakness on these classification tasks.
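
As an illustration of the idea summarized above, the sketch below gives one plausible reading of a radial basis unit with synaptic kernels: each synapse holds its own centre and width, applies a one-dimensional Gaussian to its input component, and the unit combines the per-synapse responses by product. The class name SynapticRBU, the Gaussian kernel form, and the product combination rule are assumptions made for this sketch, not the model defined in the paper.

```python
import numpy as np

class SynapticRBU:
    """Sketch of a radial basis unit whose kernel is applied per synapse.

    Assumption for illustration: every synapse i has its own centre c[i] and
    width s[i], responds with a 1-D Gaussian to input component x[i], and the
    unit multiplies the synaptic responses together (equivalent to a
    diagonal-covariance multidimensional Gaussian kernel).
    """

    def __init__(self, centers, widths):
        self.c = np.asarray(centers, dtype=float)  # one centre per synapse
        self.s = np.asarray(widths, dtype=float)   # one width per synapse

    def activate(self, x):
        x = np.asarray(x, dtype=float)
        # per-synapse Gaussian responses
        g = np.exp(-0.5 * ((x - self.c) / self.s) ** 2)
        # combine the synaptic kernels; the product rule is assumed here
        return float(np.prod(g))

# Example: a two-input unit centred at (0, 1) responds strongly near its
# centre and weakly far from it.
unit = SynapticRBU(centers=[0.0, 1.0], widths=[0.5, 0.5])
print(unit.activate([0.1, 0.9]))   # close to the centre -> near 1
print(unit.activate([2.0, -1.0]))  # far from the centre -> near 0
```

A hidden layer of such units feeding a linear or softmax output layer gives an RBF-style classifier that could be compared against an MLP and against the Bayes-optimal decision rule on the same overlapping class distributions, which is the kind of comparison the abstract describes.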

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Buldain, J.D. (2001). Classification with Synaptic Radial Basis Units. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_26

  • DOI: https://doi.org/10.1007/3-540-45720-8_26

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42235-8

  • Online ISBN: 978-3-540-45720-6

  • eBook Packages: Springer Book Archive
