A Connection between Extreme Learning Machine and Neural Network Kernel

  • Conference paper
In: Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K 2010)

Abstract

We study a connection between the extreme learning machine (ELM) and the neural network kernel (NNK). The NNK is derived from a neural network with an infinite number of hidden units, and we interpret ELM as an approximation to this infinite network. We show that ELM and NNK can, to a certain extent, replace each other: ELM can be used to form a kernel, and the NNK can be decomposed into feature vectors to be used in the hidden layer of ELM. The connection reveals the possible importance of the weight variance as a parameter of ELM. Based on our experiments, we recommend that model selection for ELM consider not only the number of hidden units, as is the current practice, but also the variance of the weights. We also study the interaction between the variance and the number of hidden units, and discuss some properties of ELM that may have been interpreted too strongly in previous work.
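To make the two directions of this connection concrete, here is a minimal numerical sketch. It is not the authors' code: the function names, the choice of erf hidden units, and the i.i.d. Gaussian weight prior are our own illustrative assumptions. The sketch builds the Monte Carlo kernel induced by an ELM's random hidden layer, compares it with the closed-form NNK of an infinite erf network with Gaussian weights (the arcsine kernel of Williams, 1998), and, in the opposite direction, decomposes the NNK Gram matrix into feature vectors that can serve as an ELM hidden layer. The weight variance appears explicitly as the hyperparameter the abstract highlights.

```python
# Illustrative sketch only (not the paper's implementation).
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

def elm_kernel(X, Z, m, weight_var):
    """Monte Carlo kernel induced by an ELM hidden layer:
    K(x, z) ~= (1/m) sum_i erf(w_i . x~) erf(w_i . z~), where x~ = (1, x)
    and the input weights (bias included) are i.i.d. N(0, weight_var)."""
    d = X.shape[1]
    W = rng.normal(0.0, np.sqrt(weight_var), size=(d + 1, m))
    HX = erf(np.hstack([np.ones((len(X), 1)), X]) @ W)
    HZ = erf(np.hstack([np.ones((len(Z), 1)), Z]) @ W)
    return (HX @ HZ.T) / m

def nnk_erf(X, Z, weight_var):
    """Closed-form NNK of an infinite erf network (Williams, 1998):
    k(x, z) = (2/pi) arcsin( 2 s x~.z~ / sqrt((1 + 2 s |x~|^2)(1 + 2 s |z~|^2)) ),
    with s = weight_var and x~ = (1, x) the bias-augmented input."""
    Xt = np.hstack([np.ones((len(X), 1)), X])
    Zt = np.hstack([np.ones((len(Z), 1)), Z])
    num = 2.0 * weight_var * (Xt @ Zt.T)
    dX = 1.0 + 2.0 * weight_var * np.sum(Xt * Xt, axis=1)
    dZ = 1.0 + 2.0 * weight_var * np.sum(Zt * Zt, axis=1)
    return (2.0 / np.pi) * np.arcsin(num / np.sqrt(np.outer(dX, dZ)))

def nnk_feature_elm(X, y, weight_var):
    """Opposite direction: factor the NNK Gram matrix K = Phi Phi^T via its
    eigendecomposition and use the rows of Phi as an ELM-style hidden layer,
    solving the output weights by the Moore-Penrose pseudoinverse."""
    K = nnk_erf(X, X, weight_var)
    lam, V = np.linalg.eigh(K)
    lam = np.clip(lam, 0.0, None)   # guard against tiny negative eigenvalues
    Phi = V * np.sqrt(lam)          # scaled so that Phi @ Phi.T == K
    beta = np.linalg.pinv(Phi) @ y  # least-squares output weights
    return Phi, beta

# The ELM kernel converges to the closed-form NNK as the number of
# hidden units m grows (Monte Carlo error shrinks roughly as 1/sqrt(m)).
X = rng.normal(size=(50, 3))
for m in (100, 10_000):
    gap = np.abs(elm_kernel(X, X, m, weight_var=1.0)
                 - nnk_erf(X, X, weight_var=1.0)).max()
    print(f"m = {m:6d}: max |K_ELM - K_NNK| = {gap:.3f}")
```

Under these assumptions both the weight variance and the number of hidden units enter the quality of the approximation, which is one way to read the abstract's recommendation to select the two jointly rather than tuning the number of hidden units alone.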

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Parviainen, E., Riihimäki, J. (2013). A Connection between Extreme Learning Machine and Neural Network Kernel. In: Fred, A., Dietz, J.L.G., Liu, K., Filipe, J. (eds) Knowledge Discovery, Knowledge Engineering and Knowledge Management. IC3K 2010. Communications in Computer and Information Science, vol 272. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-29764-9_8

  • DOI: https://doi.org/10.1007/978-3-642-29764-9_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-29763-2

  • Online ISBN: 978-3-642-29764-9

  • eBook Packages: Computer Science (R0)
