Fast Approximation Method for Gaussian Process Regression Using Hash Function for Non-uniformly Distributed Data

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2013 (ICANN 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8131)

Abstract

Gaussian process regression (GPR) readily handles non-linear regression, but its computational cost grows quickly with the sample size. In this paper, we propose a fast approximation method for GPR that combines locality-sensitive hashing with product-of-experts models. To investigate the performance of our method, we apply it to two regression problems: artificial data and actual hand-motion data. The results indicate that our method performs accurate and fast approximation of GPR even when the dataset is non-uniformly distributed.
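The combination described in the abstract, locality-sensitive hashing to partition the training set into buckets, with a small exact GP per bucket and predictions fused by a product of experts, can be illustrated with a minimal sketch. This is not the authors' implementation: the p-stable hash parameters, the squared-exponential kernel, and the precision-weighted fusion rule below are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's implementation) of LSH-partitioned GPR
# with product-of-experts fusion. All parameters are illustrative.
import numpy as np

def lsh_keys(X, W, b, w=2.0):
    """p-stable LSH: map each row of X to an integer bucket key."""
    return [tuple(k) for k in np.floor((X @ W + b) / w).astype(int)]

def rbf(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(Xtr, ytr, xq, noise=1e-2):
    """Exact GP posterior mean and variance at a single query point."""
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    ks = rbf(Xtr, xq[None, :])  # (n, 1) cross-covariances
    mu = float(ks.T @ np.linalg.solve(K, ytr))
    var = float(rbf(xq[None, :], xq[None, :]) - ks.T @ np.linalg.solve(K, ks))
    return mu, max(var + noise, 1e-9)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(400)

# Hash the training set once; each non-trivial bucket is one local GP expert.
W, b = rng.standard_normal((1, 2)), rng.uniform(0, 2.0, 2)
buckets = {}
for i, key in enumerate(lsh_keys(X, W, b)):
    buckets.setdefault(key, []).append(i)

def predict(xq):
    """Fuse the query bucket's local GP(s) by a product of experts:
    Gaussian predictions combine by adding precisions (inverse variances)."""
    prec = mean_times_prec = 0.0
    for key in set(lsh_keys(xq[None, :], W, b)):
        idx = buckets.get(key, [])
        if len(idx) < 2:
            continue
        mu, var = gp_predict(X[idx], y[idx], xq)
        prec += 1.0 / var
        mean_times_prec += mu / var
    return mean_times_prec / prec if prec > 0 else 0.0

print(predict(np.array([0.5])))
```

Because each expert is trained only on its bucket, the cubic GP cost applies to bucket sizes rather than the full sample size, which is what makes the approximation fast; the product-of-experts step then smooths over bucket boundaries when a query falls near several experts.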


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Okadome, Y., Nakamura, Y., Shikauchi, Y., Ishii, S., Ishiguro, H. (2013). Fast Approximation Method for Gaussian Process Regression Using Hash Function for Non-uniformly Distributed Data. In: Mladenov, V., Koprinkova-Hristova, P., Palm, G., Villa, A.E.P., Appollini, B., Kasabov, N. (eds) Artificial Neural Networks and Machine Learning – ICANN 2013. ICANN 2013. Lecture Notes in Computer Science, vol 8131. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-40728-4_3

  • DOI: https://doi.org/10.1007/978-3-642-40728-4_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-40727-7

  • Online ISBN: 978-3-642-40728-4

  • eBook Packages: Computer Science (R0)
