
Local Adaptive Subspace Regression


Abstract

Incremental learning of sensorimotor transformations in high-dimensional spaces is one of the basic prerequisites for the success of autonomous robot devices as well as biological movement systems. So far, due to the sparsity of data in high-dimensional spaces, learning in such settings has required a significant amount of prior knowledge about the learning task, usually provided by a human expert. In this paper, we suggest a partial revision of this view. Based on empirical studies, we observed that, despite being globally high dimensional and sparse, data distributions from physical movement systems are locally low dimensional and dense. Under this assumption, we derive a learning algorithm, Local Adaptive Subspace Regression, that exploits this property by combining a dynamically growing local dimensionality reduction technique, as a preprocessing step, with a nonparametric learning technique, locally weighted regression, that also learns the region of validity of the regression. The usefulness of the algorithm and the validity of its assumptions are illustrated on a synthetic data set and on data from the inverse dynamics of human arm movements and of an actual seven-degree-of-freedom anthropomorphic robot arm.
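
To make the abstract's recipe concrete, the sketch below illustrates the division of labor it describes: each local model performs a dimensionality reduction of its inputs (here, a locally weighted PCA) and then fits a linear regression in the reduced subspace, with a Gaussian receptive field marking the model's region of validity. This is a minimal illustration under stated assumptions, not the authors' algorithm: the class names and toy data are invented, the kernel width is fixed rather than learned, and the PCA is batch rather than dynamically growing.

```python
import numpy as np

class LocalSubspaceModel:
    """One local model: a Gaussian receptive field, a local PCA
    projection, and a weighted linear fit in the reduced subspace."""

    def __init__(self, center, width, n_components):
        self.c = center        # receptive-field center in input space
        self.width = width     # isotropic kernel width (fixed here; the
                               # paper learns the region of validity)
        self.k = n_components  # dimensionality of the local subspace

    def weight(self, x):
        # Gaussian activation: how strongly this model "owns" the query x
        d = x - self.c
        return np.exp(-0.5 * (d @ d) / self.width ** 2)

    def fit(self, X, y):
        w = np.array([self.weight(x) for x in X])
        Xc = X - self.c
        # Locally weighted PCA: top-k eigenvectors of the weighted
        # covariance (batch, for brevity; an online variant would use an
        # incremental rule instead)
        cov = (Xc * w[:, None]).T @ Xc / w.sum()
        _, vecs = np.linalg.eigh(cov)       # eigenvalues ascending
        self.U = vecs[:, -self.k:]          # k leading local directions
        Z = np.hstack([Xc @ self.U, np.ones((len(X), 1))])  # + intercept
        # Weighted least squares in the k-dimensional subspace
        A = Z.T @ (w[:, None] * Z) + 1e-8 * np.eye(self.k + 1)
        self.beta = np.linalg.solve(A, Z.T @ (w * y))

    def predict(self, x):
        z = np.append((x - self.c) @ self.U, 1.0)
        return z @ self.beta

def blended_prediction(models, x):
    # Normalized, receptive-field-weighted average of local predictions
    ws = np.array([m.weight(x) for m in models])
    ys = np.array([m.predict(x) for m in models])
    return ws @ ys / ws.sum()

# Toy data matching the abstract's assumption: inputs are globally 5-D,
# but samples lie near a 1-D curve, i.e., locally low dimensional and dense.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.stack([np.cos(t), np.sin(t), 0.1 * t,
              np.zeros_like(t), np.zeros_like(t)], axis=1)
X += 0.01 * rng.standard_normal(X.shape)
y = np.sin(2.0 * t)

centers = X[rng.choice(len(X), 20, replace=False)]
models = [LocalSubspaceModel(c, width=0.5, n_components=2) for c in centers]
for m in models:
    m.fit(X, y)

x_query = np.array([np.cos(1.0), np.sin(1.0), 0.1, 0.0, 0.0])
print(blended_prediction(models, x_query), "target:", np.sin(2.0))
```

A full implementation along the paper's lines would also grow the number of local models and the dimensionality of each subspace with the incoming data; the sketch only shows why a locally low-dimensional, dense distribution makes the local projection-plus-regression step well posed even when the ambient space is high dimensional and sparse.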




Author information

Correspondence to Sethu Vijayakumar.


About this article

Cite this article

Vijayakumar, S., Schaal, S. Local Adaptive Subspace Regression. Neural Processing Letters 7, 139–149 (1998). https://doi.org/10.1023/A:1009696221209
