The Latent Variable Data Model for Exploratory Data Analysis and Visualisation: A Generalisation of the Nonlinear Infomax Algorithm

Abstract

This paper presents a generalisation of the nonlinear 'Infomax' algorithm, derived from the linear latent variable model of factor analysis. The algorithm rests on an information-theoretic projection pursuit index that defines linear projections of observed data onto subspaces of lower dimension. It is applied to the visualisation and interpretation of complex high-dimensional data and is compared empirically with the recently developed Generative Topographic Mapping.
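
For orientation, the sketch below shows the baseline that the paper generalises: the standard nonlinear 'Infomax' learning rule of Bell and Sejnowski, written here with the natural-gradient step of Amari and colleagues and applied to PCA-whitened data. It is a minimal illustration only, not the latent variable generalisation developed in the paper; the function names whiten and infomax, the logistic nonlinearity, and all parameter values are assumptions made for the example.

    import numpy as np

    def whiten(X, n_components):
        # Centre the data and PCA-whiten it down to n_components dimensions.
        Xc = X - X.mean(axis=0)
        eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(eigval)[::-1][:n_components]
        return Xc @ (eigvec[:, order] / np.sqrt(eigval[order]))

    def infomax(Z, lr=0.01, n_iter=500, seed=0):
        # Z: whitened data, shape (n_samples, d). Returns a d-by-d unmixing
        # matrix whose rows define the learned linear projections
        # (illustrative baseline, not the paper's latent variable algorithm).
        rng = np.random.default_rng(seed)
        n, d = Z.shape
        W = np.eye(d) + 0.01 * rng.standard_normal((d, d))
        for _ in range(n_iter):
            U = Z @ W.T                    # candidate latent projections
            Y = 1.0 / (1.0 + np.exp(-U))   # logistic squashing nonlinearity
            # Natural-gradient form of the Infomax update:
            #   dW = lr * (I + (1 - 2Y)^T U / n) W
            W += lr * (np.eye(d) + (1.0 - 2.0 * Y).T @ U / n) @ W
        return W

For visualisation of the kind discussed in the abstract, one would typically whiten to two or three components and plot the projected data Z @ W.T; the paper's contribution is to recast such projections within the factor analysis latent variable framework rather than the baseline rule shown here.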

Cite this article

Girolami, M. The Latent Variable Data Model for Exploratory Data Analysis and Visualisation: A Generalisation of the Nonlinear Infomax Algorithm. Neural Processing Letters 8, 27–39 (1998). https://doi.org/10.1023/A:1009613012282
