A pruning ensemble model of extreme learning machine with \(L_{1/2}\) regularizer

Published in Multidimensional Systems and Signal Processing.

Abstract

Extreme learning machine (ELM), an emerging branch of machine learning, achieves good generalization performance at a very fast learning speed. Nevertheless, the original ELM and its later variants neither provide an optimal solution for the parameters between the hidden and output layers nor determine a suitable number of hidden nodes automatically. In this paper, a pruning ensemble model of ELM with an \(L_{1/2}\) regularizer (PE-ELMR) is proposed to address both problems. The method involves two stages. First, by combining ELM with the \(L_{1/2}\) regularizer, we recast the computation of the output parameters as a minimum squared-error problem with a sparse solution. Second, to find the smallest number of hidden nodes that still yields good performance, we prune the hidden layer with an ensemble model, which searches effectively for a reasonable set of hidden nodes. Experimental results on a variety of benchmark datasets show that PE-ELMR performs well on regression and classification problems compared with ELM, OP-ELM, and PE-ELMR (L1).
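The core idea of the abstract's first stage can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the ensemble step (training several such models and pruning hidden nodes by consensus) is omitted, the half-thresholding operator follows the iterative \(L_{1/2}\) scheme of Xu et al. (2012), and the function names (`half_threshold`, `elm_l12`) and all parameter values are illustrative.

```python
import numpy as np

def half_threshold(t, lam):
    # Componentwise half-thresholding operator for the L_{1/2} penalty
    # (iterative half-thresholding scheme of Xu et al., 2012).
    out = np.zeros_like(t)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    mask = np.abs(t) > thresh
    tm = t[mask]
    phi = np.arccos((lam / 8.0) * (np.abs(tm) / 3.0) ** (-1.5))
    out[mask] = (2.0 / 3.0) * tm * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

def elm_l12(X, y, n_hidden=40, lam=0.05, n_iter=1000, seed=0):
    # Standard ELM hidden layer: random, fixed input weights and biases.
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid activations
    # Output weights: minimise ||H beta - y||^2 + lam * ||beta||_{1/2}^{1/2}
    # by gradient steps on the squared error followed by half-thresholding.
    mu = 0.9 / np.linalg.norm(H, 2) ** 2     # step size below 1 / ||H||^2
    beta = np.zeros(n_hidden)
    for _ in range(n_iter):
        beta = half_threshold(beta + mu * (H.T @ (y - H @ beta)), mu * lam)
    # Hidden nodes whose output weight is driven to zero are prunable.
    active = np.flatnonzero(beta)
    return W, b, beta, active
```

The sparsity of the resulting output-weight vector is what makes the pruning stage possible: nodes outside `active` contribute nothing to the prediction and can be removed.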


References

  • Akaike, H. (1998). Information theory and an extension of the maximum likelihood principle. In Selected papers of Hirotugu Akaike. New York: Springer.


  • Argyriou, A., Baldassarre, L., Micchelli, C. A., & Pontil, M. (2013). On sparsity inducing regularization methods for machine learning. In Empirical inference. Berlin, Heidelberg: Springer.


  • Asuncion, A., & Newman, D. (2007). UCI machine learning repository. http://archive.ics.uci.edu/ml/.

  • Berkin, B., Fan, A. P., & Polimeni, J. R. (2014). Fast quantitative susceptibility mapping with L1 regularization and automatic parameter selection. Magnetic Resonance in Medicine, 72, 1444–1459.


  • Cao, J., Zhao, Y., & Lai, X. (2015). Landmark recognition with sparse representation classification and extreme learning machine. Journal of the Franklin Institute, 352(10), 4528–4545.


  • Cao, J., Chen, T., & Fan, J. (2016). Landmark recognition with compact BoW histogram and ensemble ELM. Multimedia Tools and Applications, 75(5), 2839–2857.


  • Cao, J., & Lin, Z. (2015). Extreme learning machines on high dimensional and large data applications: A survey. Mathematical Problems in Engineering, 2015, 1–12.


  • Chan, T. F., & Esedoglu, S. (2005). Aspects of total variation regularized L1 function approximation. SIAM Journal on Applied Mathematics, 65, 1817–1837.


  • Han, B., He, B., Sun, T., et al. (2014). HSR: \(L_{1/2}\)-regularized sparse representation for fast face recognition using hierarchical feature selection. Neural Computing and Applications, 27(2), 1–16.


  • Hanke, M., & Hansen, P. C. (1993). Regularization methods for large-scale problems. Survey on Mathematics for Industry, 3, 253–315.


  • Hornik, K., Stinchcombe, M., & White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2(5), 359–366.


  • Huang, G. B., Zhu, Q. Y., & Siew, C. K. (2004). Extreme learning machine: A new learning scheme of feedforward neural networks. Proceedings of International Joint Conference on Neural Networks, 70, 25–29.


  • Huang, G. B., Zhu, Q. Y., & Siew, C. K. (2006). Extreme learning machine: Theory and applications. Neurocomputing, 70, 489–501.


  • Huang, G. B., Zhou, H., Ding, X., & Zhang, R. (2012). Extreme learning machine for regression and multiclass classification. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 42, 513–528.


  • Koh, K., Kim, S. J., & Boyd, S. l1_ls: Simple matlab solver for L1-regularized least squares problems. http://stanford.edu/~boyd/l1_ls/.

  • Kukreja, S. L., Lofberg, J., & Brenner, M. J. (2006). A least absolute shrinkage and selection operator (LASSO) for nonlinear system identification. IFAC Proceedings Volumes, 39(1), 814–819.


  • Liu, N., & Wang, H. (2010). Ensemble based extreme learning machine. IEEE Signal Processing Letters, 17(8), 754–757.


  • Martinez, A., & Benavente, R. (1998). The AR face database. In CVC technical report.

  • Miche, Y., Sorjamaa, A., Bas, P., & Simula, O. (2010). OP-ELM: Optimally pruned extreme learning machine. IEEE Transactions on Neural Networks, 21, 158–162.


  • Rong, H. J., Ong, Y. S., Tan, A. H., & Zhu, Z. (2008). A fast pruned-extreme learning machine for classification problem. Neurocomputing, 72, 359–366.


  • Saunders, C., Gammerman, A., & Vovk, V. (1998). Ridge regression learning algorithm in dual variables. In (ICML-1998) Proceedings of the 15th international conference on machine learning. Morgan Kaufmann.

  • Schmidt, M., Fung, G., & Rosales, R. (2007). Fast optimization methods for l1 regularization: A comparative study and two new approaches. Machine Learning: ECML 2007, 4701, 286–297.


  • Sun, Z. L., & Choi, T. M. (2008). Sales forecasting using extreme learning machine with applications in fashion retailing. Decision Support Systems, 46, 411–419.


  • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1), 267–288.


  • Xu, Z. B., Guo, H. L., Wang, Y., & Zhang, H. (2012). Representative of \(L_{1/2}\) regularization among \(L_q\ (0 < q \le 1)\) regularizations: An experimental study based on phase diagram. Acta Automatica Sinica, 38, 1225–1228.


  • Xu, Z., Chang, X., Xu, F., & Zhang, H. (2012). \(L_{1/2}\) regularization: A thresholding representation theory and a fast solver. IEEE Transactions on Neural Networks and Learning Systems, 23(7), 1013–1027.


  • Xue, X., Yao, M., Wu, Z., & Yang, J. (2014). Genetic ensemble of extreme learning machine. Neurocomputing, 129, 175–184.


  • Yang, M., & Zhang, L. (2010). Gabor feature based sparse representation for face recognition with Gabor occlusion dictionary. In Computer Vision–ECCV 2010 (pp. 448–461). Berlin, Heidelberg: Springer.

  • Zeng, J., Lin, S., Wang, Y., & Xu, Z. (2014). \(L_{1/2}\) regularization: Convergence of iterative half thresholding algorithm. IEEE Transactions on Signal Processing, 62(9), 2317–2329.


  • Zhou, Z. H., Wu, J. X., & Jiang, Y. (2001). Genetic algorithm based selective neural network ensemble. In Proceedings of the 17th international joint conference on artificial intelligence (Vol. 2).

  • Zhou, Z. H., & Chen, S. F. (2002). Neural network ensemble. Chinese Journal of Computers, 25, 1–8.



Acknowledgments

This work is partially supported by the Natural Science Foundation of China (41176076, 51075377, 51379198), the High Technology Research and Development Program of China (2006AA09Z231, 2014AA093410).

Author information

Correspondence to Bo He or Tianhong Yan.


Cite this article

He, B., Sun, T., Yan, T. et al. A pruning ensemble model of extreme learning machine with \(L_{1/2}\) regularizer. Multidim Syst Sign Process 28, 1051–1069 (2017). https://doi.org/10.1007/s11045-016-0437-9
