Abstract
A modified training algorithm for the probabilistic neural network (PNN) is proposed. The standard PNN, though requiring a very short training time, exhibits two drawbacks when implemented in hardware: it is costly in terms of classification time, and it requires an unbounded number of units. The proposed modification overcomes the latter drawback by introducing an elimination criterion that avoids storing unnecessary patterns. The distortion this criterion introduces into the density estimate is compensated for by a cross-validation procedure that adapts the network parameters. The present paper deals with a specific real-world application, namely handwritten character classification. The proposed algorithm makes it possible to realise the PNN in hardware and, at the same time, compensates for some inadequacies in the theoretical basis of the PNN, which does not perform well with small training sets.
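To make the pruning idea concrete, below is a minimal sketch in Python of a Parzen-window PNN with one possible elimination rule: a training pattern is stored only if the kernels accumulated so far misclassify it. The `PrunedPNN` class and its coverage-based rule are illustrative assumptions, not the paper's exact criterion; the smoothing parameter `sigma` would subsequently be tuned by cross-validation, as the abstract describes, to compensate for the distortion that pruning introduces into the density estimate.

```python
import numpy as np

class PrunedPNN:
    """Sketch of a PNN with a pattern-elimination criterion.

    Kernel: isotropic Gaussian Parzen window with smoothing parameter sigma.
    Elimination rule (hypothetical stand-in for the paper's criterion): a
    training pattern is stored only if the current network misclassifies it;
    patterns already covered by stored kernels are discarded, so the number
    of units stays bounded in practice.
    """

    def __init__(self, sigma=0.5):
        self.sigma = sigma
        self.patterns = []   # stored pattern vectors (the network's units)
        self.labels = []     # class label of each stored pattern

    def _class_scores(self, x):
        # Parzen density estimate per class, summed over stored patterns
        scores = {}
        for p, c in zip(self.patterns, self.labels):
            d2 = np.sum((x - p) ** 2)
            scores[c] = scores.get(c, 0.0) + np.exp(-d2 / (2 * self.sigma ** 2))
        return scores

    def predict(self, x):
        scores = self._class_scores(np.asarray(x, dtype=float))
        return max(scores, key=scores.get) if scores else None

    def fit(self, X, y):
        # Single pass over the training set: store a pattern only when
        # the network built so far gets it wrong.
        for x, c in zip(X, y):
            if self.predict(x) != c:
                self.patterns.append(np.asarray(x, dtype=float))
                self.labels.append(c)
        return self
```

In this sketch, `sigma` plays the role of the network parameter adapted by cross-validation: one would train several pruned networks over a grid of `sigma` values on training folds and keep the value with the lowest validation error.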
Cite this article
Ancona, F., Colla, A.M., Rovetta, S. et al. Implementing probabilistic Neural Networks. Neural Computing & Applications 5, 152–159 (1997). https://doi.org/10.1007/BF01413860