Pattern Recognition and Neural Networks

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2049)

Abstract

Pattern Recognition (PR) is a fast-growing field with applications in many diverse areas, such as optical character recognition (OCR), computer-aided diagnosis, and speech recognition, to name but a few.
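To give a concrete flavour of what a pattern-recognition classifier looks like in code, the sketch below implements the classical perceptron learning rule, one of the earliest neural-network classifiers, on synthetic two-dimensional data. The toy data, learning rate, and number of epochs are illustrative assumptions of this sketch and are not taken from the chapter.

    # Minimal sketch of the classical perceptron learning rule on toy 2-D data.
    # Data, learning rate, and epoch count are illustrative choices, not from the chapter.
    import numpy as np

    def train_perceptron(X, y, lr=0.1, epochs=50):
        # Learn weights w and bias b so that sign(w . x + b) reproduces labels y in {-1, +1}.
        w = np.zeros(X.shape[1])
        b = 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                # Perceptron rule: update only when a sample is misclassified.
                if yi * (np.dot(w, xi) + b) <= 0:
                    w += lr * yi * xi
                    b += lr * yi
        return w, b

    # Toy, linearly separable "patterns": two Gaussian clusters labelled -1 and +1.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1.0, 0.3, (20, 2)), rng.normal(1.0, 0.3, (20, 2))])
    y = np.array([-1] * 20 + [1] * 20)

    w, b = train_perceptron(X, y)
    print("training accuracy:", np.mean(np.sign(X @ w + b) == y))

On linearly separable data such as this, the rule converges to a separating hyperplane; real applications like OCR or speech recognition require richer feature representations and multi-layer networks of the kind the chapter surveys.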

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Theodoridis, S., Koutroumbas, K. (2001). Pattern Recognition and Neural Networks. In: Paliouras, G., Karkaletsis, V., Spyropoulos, C.D. (eds) Machine Learning and Its Applications. ACAI 1999. Lecture Notes in Computer Science (LNAI), vol. 2049. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44673-7_8

  • DOI: https://doi.org/10.1007/3-540-44673-7_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42490-1

  • Online ISBN: 978-3-540-44673-6
