
Design of Extreme Learning Machine with Smoothed ℓ0 Regularization

Published in: Mobile Networks and Applications

Abstract

In the extreme learning machine (ELM), the hidden layer is generated randomly, so a large number of hidden nodes are typically required. To improve network compactness, this paper studies the ELM with a smoothed ℓ0 regularizer (ELM-SL0 for short). First, an ℓ0 regularization penalty term is introduced into the conventional error function, so that unimportant output weights are gradually driven to zero. Second, the batch gradient method is combined with the smoothed ℓ0 regularizer to train and prune the ELM. Furthermore, both the weak convergence and the strong convergence of ELM-SL0 are established. Compared with other existing ELMs, the proposed algorithm achieves better performance in terms of estimation accuracy and network sparsity.
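The idea in the abstract can be sketched in a few lines: fix a random hidden layer, then minimize the squared error plus a smoothed ℓ0 penalty on the output weights by batch gradient descent. The sketch below is illustrative only, since the full paper is paywalled here; the Gaussian smoothing kernel exp(-β²/(2σ²)) is the standard smoothed-ℓ0 approximation from the compressed-sensing literature, and all hyperparameter values (`n_hidden`, `lam`, `sigma`, `lr`, the pruning threshold) are assumptions, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_sl0_train(X, T, n_hidden=40, lam=1e-3, sigma=0.5,
                  lr=1e-2, epochs=2000):
    """Batch-gradient ELM with a smoothed l0 penalty on the output weights.

    Minimizes  0.5*||H beta - T||^2 / n  +  lam * sum(1 - exp(-beta^2/(2*sigma^2))),
    where the second term is a smooth surrogate for ||beta||_0.
    """
    n_in = X.shape[1]
    # Randomly generated hidden layer, fixed during training (the ELM idea)
    W_in = rng.standard_normal((n_in, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W_in + b)                    # hidden-layer output matrix
    beta = 0.01 * rng.standard_normal((n_hidden, T.shape[1]))
    for _ in range(epochs):
        err = H @ beta - T
        # Gradient of the smoothed-l0 term: (beta/sigma^2) * exp(-beta^2/(2 sigma^2));
        # it pushes small weights toward zero but vanishes for large weights.
        g_pen = (beta / sigma**2) * np.exp(-beta**2 / (2 * sigma**2))
        beta -= lr * (H.T @ err / len(X) + lam * g_pen)
    return W_in, b, beta

# Toy regression: learn sin(x) on [-pi, pi]
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W_in, b, beta = elm_sl0_train(X, T)
pred = np.tanh(X @ W_in + b) @ beta
mse = float(np.mean((pred - T) ** 2))
# Fraction of output weights small enough to prune (threshold is illustrative)
sparsity = float(np.mean(np.abs(beta) < 1e-2))
```

After training, weights with magnitude below the threshold can be removed together with their hidden nodes, which is how the penalty yields a more compact network.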



Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61973010, 61533002 and 61890930, the Beijing Municipal Natural Science Foundation under Grant 4202006, the Major Science and Technology Program for Water Pollution Control and Treatment of China (2018ZX07111005), and the National Key Research and Development Project under Grants 2018YFC1900800-5.

Author information

Corresponding author

Correspondence to Junfei Qiao.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Yang, C., Nie, K., Qiao, J. et al. Design of Extreme Learning Machine with Smoothed ℓ0 Regularization. Mobile Netw Appl 25, 2434–2446 (2020). https://doi.org/10.1007/s11036-020-01587-3
