Abstract
The EM algorithm is a general procedure for obtaining maximum likelihood estimates when part of the observations on the variables of a network is missing. In this paper a stochastic version of the algorithm is adapted to probabilistic neural networks that describe the associative dependency of variables. These networks have a probability distribution which is a special case of the distribution generated by probabilistic inference networks. Hence both types of networks can be combined, allowing probabilistic rules as well as unspecified associations to be integrated in a sound way. The resulting network may have a number of interesting features, including cycles of probabilistic rules and hidden ‘unobservable’ variables.
This work was supported by the German Federal Department of Research and Technology, grant ITW8900A7
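As a concrete illustration of the generic stochastic EM scheme the abstract refers to, the following is a minimal sketch for a toy two-component Gaussian mixture, where the component labels play the role of the missing observations. The mixture model, the variable names, and the update rules are illustrative assumptions, not the network formulation developed in the paper: the stochastic E-step fills in the missing values by sampling from their conditional distribution given the current parameters, and the M-step then maximizes the resulting complete-data likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 1-D mixture of two Gaussians; the component labels are treated
# as the "missing" part of the observations.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(2.0, 1.0, 700)])

# Initial parameter guesses (mixing weight of component one, means, std. devs.)
pi = 0.5
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])

for _ in range(200):
    # Stochastic E-step: sample the missing labels from their conditional
    # distribution given the observed data and the current parameters.
    dens0 = pi * np.exp(-0.5 * ((x - mu[0]) / sigma[0]) ** 2) / sigma[0]
    dens1 = (1.0 - pi) * np.exp(-0.5 * ((x - mu[1]) / sigma[1]) ** 2) / sigma[1]
    p1 = dens1 / (dens0 + dens1)
    z = rng.random(x.size) < p1      # True = component two, False = component one

    # M-step: maximize the complete-data likelihood with the sampled labels,
    # i.e. ordinary proportions, means and standard deviations per component.
    pi = 1.0 - z.mean()
    mu = np.array([x[~z].mean(), x[z].mean()])
    sigma = np.array([x[~z].std(ddof=1), x[z].std(ddof=1)])

print(pi, mu, sigma)
```

In the structured probabilistic networks treated in the paper, the sampled quantities would instead be the values of unobserved or hidden nodes and the M-step would update the network parameters; the toy mixture only shows the shape of a single stochastic EM iteration.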
Copyright information
© 1990 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Paaß, G. (1990). A Stochastic EM Learning Algorithm for Structured Probabilistic Neural Networks. In: Dorffner, G. (eds) Konnektionismus in Artificial Intelligence und Kognitionsforschung. Informatik-Fachberichte, vol 252. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-76070-9_22
DOI: https://doi.org/10.1007/978-3-642-76070-9_22
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-53131-9
Online ISBN: 978-3-642-76070-9