Abstract
A powerful approach to probabilistic modelling involves supplementing a set of observed variables with additional latent, or hidden, variables. By defining a joint distribution over visible and latent variables, the corresponding distribution of the observed variables is then obtained by marginalization. This allows relatively complex distributions to be expressed in terms of more tractable joint distributions over the expanded variable space. One well-known example of a hidden variable model is the mixture distribution in which the hidden variable is the discrete component label. In the case of continuous latent variables we obtain models such as factor analysis. The structure of such probabilistic models can be made particularly transparent by giving them a graphical representation, usually in terms of a directed acyclic graph, or Bayesian network. In this chapter we provide an overview of latent variable models for representing continuous variables. We show how a particular form of linear latent variable model can be used to provide a probabilistic formulation of the well-known technique of principal components analysis (PCA). By extending this technique to mixtures, and hierarchical mixtures, of probabilistic PCA models we are led to a powerful interactive algorithm for data visualization. We also show how the probabilistic PCA approach can be generalized to non-linear latent variable models leading to the Generative Topographic Mapping algorithm (GTM). Finally, we show how GTM can itself be extended to model temporal data.
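The probabilistic formulation of PCA mentioned above admits a closed-form maximum-likelihood solution: the noise variance is the average of the discarded eigenvalues of the sample covariance, and the weight matrix spans the leading eigenvectors scaled by the excess variance. The following sketch illustrates this on synthetic data; the variable names (`W`, `sigma2`, the dimensions `d`, `q`) are illustrative choices, not taken from the chapter itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from a linear latent variable model:
# x = W z + mu + eps, with z ~ N(0, I_q) and eps ~ N(0, sigma^2 I_d).
d, q, n = 5, 2, 2000
W_true = rng.normal(size=(d, q))
mu_true = rng.normal(size=d)
Z = rng.normal(size=(n, q))
X = Z @ W_true.T + mu_true + 0.1 * rng.normal(size=(n, d))

# Closed-form maximum-likelihood fit of probabilistic PCA:
# eigendecompose the sample covariance of the observed data.
mu = X.mean(axis=0)
S = np.cov(X - mu, rowvar=False)
evals, evecs = np.linalg.eigh(S)            # ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]  # reorder to descending

# The ML noise variance is the average variance in the d - q
# discarded directions.
sigma2 = evals[q:].mean()

# W_ML spans the principal subspace, with each direction scaled
# by the signal variance in excess of the noise floor.
W = evecs[:, :q] @ np.diag(np.sqrt(evals[:q] - sigma2))

print(sigma2)  # close to the true noise variance 0.1**2 = 0.01
print(W.shape)
```

With the fitted `W`, `mu`, and `sigma2`, new points can be scored under the Gaussian marginal N(mu, W Wᵀ + sigma² I), which is what makes mixtures and hierarchies of such models tractable.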
Copyright information
© 1998 Springer Science+Business Media Dordrecht
Cite this chapter
Bishop, C.M. (1998). Latent Variable Models. In: Jordan, M.I. (eds) Learning in Graphical Models. NATO ASI Series, vol 89. Springer, Dordrecht. https://doi.org/10.1007/978-94-011-5014-9_13
DOI: https://doi.org/10.1007/978-94-011-5014-9_13
Publisher Name: Springer, Dordrecht
Print ISBN: 978-94-010-6104-9
Online ISBN: 978-94-011-5014-9
eBook Packages: Springer Book Archive