Projection techniques for nonlinear principal component analysis

Abstract

Principal Components Analysis (PCA) is traditionally a linear technique for projecting multidimensional data onto lower dimensional subspaces with minimal loss of variance. However, there are several applications where the data lie in a lower dimensional subspace that is not linear; in these cases linear PCA is not the optimal method to recover this subspace and thus account for the largest proportion of variance in the data.
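To make the linear baseline concrete, the following sketch (NumPy only; the toy data and variable names are illustrative and not taken from the paper) projects centred data onto the leading eigenvectors of the sample covariance matrix and reports the proportion of variance retained together with the reconstruction error.

```python
# Minimal linear PCA sketch: eigendecompose the sample covariance, project
# onto the leading eigenvectors, and measure retained variance.
# The anisotropic toy data below are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.2])  # anisotropic cloud

Xc = X - X.mean(axis=0)                      # centre the data
cov = np.cov(Xc, rowvar=False)               # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)       # eigh returns ascending order
order = np.argsort(eigvals)[::-1]            # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 2
scores = Xc @ eigvecs[:, :k]                 # projection onto first k PCs
X_hat = scores @ eigvecs[:, :k].T            # linear reconstruction

retained = eigvals[:k].sum() / eigvals.sum()
print(f"proportion of variance retained by {k} components: {retained:.3f}")
print(f"mean squared reconstruction error: {np.mean((Xc - X_hat) ** 2):.4f}")
```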

Nonlinear PCA addresses the nonlinearity problem by relaxing the linear restrictions on standard PCA. We investigate linear and nonlinear approaches to PCA, both separately and in combination. In particular, we introduce a combination of projection pursuit and nonlinear regression for nonlinear PCA. We compare the success of PCA techniques in variance recovery by applying linear, nonlinear and hybrid methods to simulated and real data sets.
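The following is a minimal sketch of the general idea of coupling a one-dimensional linear projection with a nonlinear regression: candidate unit directions are scored by how well a smooth regression of each coordinate on the projected score reconstructs the data, and the best direction is kept. The projection index (reconstruction error), the cubic-polynomial smoother, the grid search over directions and the toy parabolic data are assumptions made for illustration; they are not necessarily the choices made in the paper.

```python
# Sketch: projection-pursuit-style search for a 1-D linear projection,
# scored by the quality of a nonlinear (cubic polynomial) reconstruction.
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(-2.0, 2.0, size=400)
X = np.column_stack([t, t ** 2]) + rng.normal(scale=0.05, size=(400, 2))  # noisy parabola
Xc = X - X.mean(axis=0)

def reconstruction_error(direction, deg=3):
    """Project onto `direction`, regress each coordinate on the score with a
    cubic polynomial, and return the mean squared reconstruction error."""
    s = Xc @ direction
    X_hat = np.column_stack([
        np.polyval(np.polyfit(s, Xc[:, j], deg), s) for j in range(Xc.shape[1])
    ])
    return np.mean((Xc - X_hat) ** 2)

# Crude grid search over unit directions in the plane (a stand-in for a
# proper projection pursuit optimiser).
angles = np.linspace(0.0, np.pi, 180, endpoint=False)
directions = np.column_stack([np.cos(angles), np.sin(angles)])
errors = np.array([reconstruction_error(d) for d in directions])
best = directions[errors.argmin()]
print("best projection direction:", best)
print("nonlinear reconstruction error:", errors.min())
```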

We show that the best linear projection that captures the structure in the data (in the sense that the original data can be reconstructed from the projection) is not necessarily a (linear) principal component. We also show that the ability of certain nonlinear projections to capture data structure is affected by the choice of constraint in the eigendecomposition of a nonlinear transform of the data. Similar success in recovering data structure was observed for both linear and nonlinear projections.
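A toy illustration of the first point (not taken from the paper): for data on the parabola y = x² with x ranging over [-3, 3], the direction of largest variance is the y-axis, yet the data cannot be reconstructed from that projection because the sign of x is lost, whereas projecting onto the second principal component (the x-axis) allows an essentially perfect nonlinear reconstruction.

```python
# Toy example: the best linear projection for (nonlinear) reconstruction
# need not be the first principal component.
import numpy as np

x = np.linspace(-3.0, 3.0, 401)
X = np.column_stack([x, x ** 2])            # points on a parabola
Xc = X - X.mean(axis=0)

# Principal components of the sample covariance (eigh returns ascending order).
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
pc1, pc2 = eigvecs[:, -1], eigvecs[:, -2]   # first and second PCs

def reconstruction_error(direction, deg=5):
    """Project onto `direction`, then regress each coordinate on the score
    with a polynomial smoother and return the mean squared error."""
    s = Xc @ direction
    X_hat = np.column_stack([
        np.polyval(np.polyfit(s, Xc[:, j], deg), s) for j in range(Xc.shape[1])
    ])
    return np.mean((Xc - X_hat) ** 2)

print("error reconstructing from PC1:", reconstruction_error(pc1))
print("error reconstructing from PC2:", reconstruction_error(pc2))
# On [-3, 3], var(x^2) = 7.2 > var(x) = 3, so PC1 is (essentially) the y-axis.
# Reconstruction from PC1 loses the sign of x; PC2 gives the far smaller error.
```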

Cite this article

Bolton, R.J., Hand, D.J. & Webb, A.R. Projection techniques for nonlinear principal component analysis. Statistics and Computing 13, 267–276 (2003). https://doi.org/10.1023/A:1024274801715
