
Smoothing Regularized Extreme Learning Machine

  • Conference paper

Engineering Applications of Neural Networks (EANN 2018)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 893)

Abstract

Extreme learning machines (ELMs) have been applied successfully to many real-world applications because of their fast training speed and good generalization performance. However, to guarantee convergence, the ELM algorithm initially requires a large number of hidden nodes. In addition, extreme learning machines suffer from two drawbacks: over-fitting and the sensitivity of accuracy to the number of hidden nodes. The aim of this paper is to propose a new smoothing \(L_{1/2}\) regularized extreme learning machine to overcome these two drawbacks. The main advantage of the proposed approach is that it drives weights toward smaller values during training, so that nodes with sufficiently small weights can be removed after training to obtain a suitable network size. Numerical experiments have been carried out on approximation and multi-class classification problems, and preliminary results show that the proposed approach works well.
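As a rough illustration of the idea summarized above, the following Python sketch trains the output weights of a single-hidden-layer ELM by gradient descent with a smoothed \(L_{1/2}\) penalty and then prunes hidden nodes whose outgoing weights end up small. This is a minimal sketch, not the authors' implementation: the sigmoid activation, the piecewise-polynomial smoothing of the absolute value, the learning rate, the penalty weight, and the pruning threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_abs(w, a=0.05):
    # Piecewise-polynomial smoothing of |w| near zero (an assumed form;
    # the paper's exact smoothing function may differ). Equals |w| for
    # |w| >= a and is smooth and strictly positive on (-a, a).
    return np.where(np.abs(w) >= a,
                    np.abs(w),
                    -w**4 / (8 * a**3) + 3 * w**2 / (4 * a) + 3 * a / 8)

def smoothed_abs_grad(w, a=0.05):
    # Derivative of smoothed_abs.
    return np.where(np.abs(w) >= a,
                    np.sign(w),
                    -w**3 / (2 * a**3) + 3 * w / (2 * a))

def train_smoothing_l12_elm(X, T, n_hidden=100, lam=1e-3, lr=1e-2,
                            epochs=2000, prune_tol=1e-2):
    # Random, fixed hidden-layer parameters, as in a standard ELM.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden outputs

    # Output weights trained by gradient descent on
    #   0.5 * mean squared error + lam * sum sqrt(smoothed |beta|),
    # i.e. a smoothed L_{1/2} penalty on the output weights.
    beta = 0.01 * rng.standard_normal((n_hidden, T.shape[1]))
    n = X.shape[0]
    for _ in range(epochs):
        err = H @ beta - T
        s = smoothed_abs(beta)                        # strictly positive
        grad = H.T @ err / n + lam * 0.5 * s**(-0.5) * smoothed_abs_grad(beta)
        beta -= lr * grad

    # Prune hidden nodes whose outgoing weights were driven close to zero.
    keep = np.linalg.norm(beta, axis=1) > prune_tol
    return W[:, keep], b[keep], beta[keep]

# Toy usage: approximate a one-dimensional function.
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
T = np.sin(3.0 * X)
W, b, beta = train_smoothing_l12_elm(X, T)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
print("hidden nodes kept:", W.shape[1],
      "RMSE:", np.sqrt(np.mean((H @ beta - T) ** 2)))
```

Because the smoothed absolute value is strictly positive, the penalty gradient is defined everywhere, and the pruning step at the end removes exactly those hidden nodes whose output weights the penalty has pushed below the (assumed) threshold, which mirrors the network-size reduction described in the abstract.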

Q.-W. Fan: This work was supported by the National Natural Science Foundation of China (No. 11171367).

Notes

  1. https://archive.ics.uci.edu/ml/datasets.html.

Acknowledgement

This work was supported by the National Natural Science Foundation of China (No. 11171367).

Author information

Corresponding author

Correspondence to Qin-Wei Fan.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Fan, QW., He, XS., Yang, XS. (2018). Smoothing Regularized Extreme Learning Machine. In: Pimenidis, E., Jayne, C. (eds) Engineering Applications of Neural Networks. EANN 2018. Communications in Computer and Information Science, vol 893. Springer, Cham. https://doi.org/10.1007/978-3-319-98204-5_7

  • DOI: https://doi.org/10.1007/978-3-319-98204-5_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98203-8

  • Online ISBN: 978-3-319-98204-5

  • eBook Packages: Computer Science, Computer Science (R0)
