
Robust Extreme Learning Machines with Different Loss Functions

Published in: Neural Processing Letters

Abstract

Extreme learning machine (ELM) has shown great potential in machine learning owing to its simplicity, speed, and good generalization performance. However, the traditional ELM is sensitive to noise and outliers because it relies on the least-squares loss function. In this paper, we present a new mixed loss function that combines the pinball loss and the least-squares loss. We then propose three robust ELM frameworks, based on the rescaled hinge loss, the pinball loss, and the mixed loss respectively, to enhance robustness to noise. To train the proposed ELM with the rescaled hinge loss, whose objective is nonconvex, we use the half-quadratic optimization algorithm and prove that the resulting algorithm converges. Furthermore, the proposed methods are applied to a variety of classification and regression datasets contaminated with different types of noise, including feature noise and target noise. Experimental results on UCI benchmark datasets show that, compared with traditional methods, the proposed methods are less sensitive to noise and achieve better classification and regression performance.
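The loss functions named in the abstract can be sketched as follows. This is an illustrative outline, not the paper's implementation: the mixed-loss weighting `lam` and the pinball parameter `tau` are assumed here for illustration, and the rescaled hinge follows the standard bounded form from the robust-SVM literature, L(u) = β(1 − exp(−η·max(0, 1 − u))) with β = 1/(1 − exp(−η)).

```python
import numpy as np

def square_loss(r):
    # classical least-squares loss used by the standard ELM
    return r ** 2

def pinball_loss(r, tau=0.5):
    # quantile (pinball) loss: tau * r for r >= 0, (tau - 1) * r otherwise
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def mixed_loss(r, tau=0.5, lam=0.5):
    # hypothetical convex combination of pinball and least-squares losses;
    # the paper's exact weighting scheme is not reproduced here
    return lam * pinball_loss(r, tau) + (1 - lam) * square_loss(r)

def rescaled_hinge_loss(u, eta=0.5):
    # bounded, nonconvex rescaling of the hinge loss:
    # L(u) = beta * (1 - exp(-eta * max(0, 1 - u))), beta = 1 / (1 - exp(-eta))
    beta = 1.0 / (1.0 - np.exp(-eta))
    hinge = np.maximum(0.0, 1.0 - u)
    return beta * (1.0 - np.exp(-eta * hinge))
```

The boundedness of the rescaled hinge loss (it never exceeds β) is what limits the influence of outliers, at the cost of convexity, which is why a half-quadratic scheme is needed for training.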



Acknowledgements

This work was supported by the National Natural Science Foundation of China (11471010) and the Chinese Universities Scientific Fund.

Author information

Correspondence to Liming Yang.


Cite this article

Ren, Z., Yang, L. Robust Extreme Learning Machines with Different Loss Functions. Neural Process Lett 49, 1543–1565 (2019). https://doi.org/10.1007/s11063-018-9890-9
