
Comparison of kernel estimators, perceptrons and radial-basis functions for OCR and speech classification

Published in Neural Computing & Applications

Abstract

We compare kernel estimators, single- and multi-layer perceptrons, and radial-basis functions on two classification problems: handwritten digits and speech phonemes. By taking two different applications and employing many techniques, we report a two-dimensional study that makes a domain-independent assessment of these learning methods possible. We consider a feed-forward network with one hidden layer. As examples of local methods, we use kernel estimators such as k-nearest neighbour (k-nn), Parzen windows, generalised k-nn, and Grow and Learn (Condensed Nearest Neighbour); we also consider fuzzy k-nn because of its similarity to these methods. As distributed networks, we use the linear perceptron, the pairwise-separating linear perceptron, and multi-layer perceptrons with sigmoidal hidden units. We also test the radial-basis function network, which combines local and distributed approaches. Four criteria are used for comparison: correct classification of the test set, network size, learning time, and operational complexity. We find that perceptrons, when the architecture is suitable, generalise better than the local, memory-based kernel estimators, but require longer training and more precise computation. Local networks are simple and learn quickly and acceptably, but use more memory.
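As an illustration of the abstract's distinction between local and distributed methods, the sketch below (not the paper's code; dataset and parameter values are made up for the example) implements the simplest local, memory-based kernel estimator the study compares, a k-nearest-neighbour classifier. "Training" is just storing the samples, which is why such methods learn almost instantly but consume memory proportional to the training set; all computation is deferred to query time.

```python
# Minimal k-nearest-neighbour classifier: a local, memory-based method.
# Training = storing samples; classification = majority vote among the
# k stored samples closest to the query point.
from collections import Counter
import math

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest stored samples."""
    # Distance from every stored sample to the query, paired with its label.
    dists = sorted((math.dist(x, query), y) for x, y in zip(train_x, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy two-class problem (hypothetical data): one cluster near the origin
# (class 0), one near (1, 1) (class 1).
train_x = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),
           (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
train_y = [0, 0, 0, 1, 1, 1]

print(knn_predict(train_x, train_y, (0.15, 0.1)))   # near origin -> 0
print(knn_predict(train_x, train_y, (1.05, 0.95)))  # far cluster -> 1
```

A distributed network such as a perceptron instead compresses the training set into a fixed set of weights, which is the trade-off the abstract reports: better generalisation and smaller operational memory, at the cost of longer, more computation-intensive training.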




Corresponding author

Correspondence to Ethem Alpaydin.


Cite this article

Alpaydin, E., Gürgen, F. Comparison of kernel estimators, perceptrons and radial-basis functions for OCR and speech classification. Neural Comput & Applic 3, 38–49 (1995). https://doi.org/10.1007/BF01414175
