Abstract
The extreme learning machine (ELM) has demonstrated great potential in machine learning owing to its simplicity, speed, and good generalization performance. However, the traditional ELM is sensitive to noise and outliers because it uses the least-squares loss function. In this paper, we present a new mixed loss function that combines the pinball loss and the least-squares loss. We then propose three robust ELM frameworks, based on the rescaled hinge loss, the pinball loss, and the mixed loss respectively, to enhance robustness to noise. To train the proposed ELM with the rescaled hinge loss, a half-quadratic optimization algorithm is used to handle the nonconvexity, and we demonstrate the convergence of the resulting algorithm. The proposed methods are applied to both classification and regression datasets corrupted by different types of noise, including feature noise and target noise. Experimental results on UCI benchmark datasets show that, compared with traditional methods, the proposed methods are less sensitive to noise and achieve better performance in classification and regression applications.
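To make the loss functions discussed above concrete, the following is a minimal illustrative sketch in Python. The exact parameterizations used in the paper (the mixing weight, the pinball parameter τ, and the rescaling parameter η of the rescaled hinge loss) are assumptions for illustration, not the authors' definitive formulation; `lam`, `tau`, and `eta` are hypothetical names.

```python
import numpy as np

def least_squares_loss(r):
    # Traditional squared loss: quadratic growth makes it
    # sensitive to large residuals (outliers).
    return r ** 2

def pinball_loss(r, tau=0.5):
    # Pinball (quantile) loss with parameter tau in (0, 1];
    # linear growth bounds the influence of large residuals.
    return np.where(r >= 0, tau * r, (tau - 1) * r)

def mixed_loss(r, tau=0.5, lam=0.5):
    # Assumed convex combination of pinball and least-squares
    # losses; the paper's exact weighting scheme may differ.
    return lam * pinball_loss(r, tau) + (1 - lam) * least_squares_loss(r)

def rescaled_hinge_loss(margin_loss, eta=0.5):
    # Rescaled hinge loss in the style of Xu et al. (2017):
    # bounded above, hence robust to outliers, but nonconvex
    # (motivating the half-quadratic optimization mentioned above).
    # beta normalizes the loss so its supremum is 1.
    beta = 1.0 / (1.0 - np.exp(-eta))
    return beta * (1.0 - np.exp(-eta * np.maximum(0.0, margin_loss)))
```

Note how the squared loss assigns a residual of 2 a penalty of 4, while the pinball loss (τ = 0.5) assigns it only 1, and the rescaled hinge loss stays below its bound 1/(1 − e^{−η}) no matter how large the input grows; this bounded influence is what the robustness claims rest on.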
Acknowledgements
This work is supported by the National Natural Science Foundation of China (Grant No. 11471010) and the Chinese Universities Scientific Fund.
Cite this article
Ren, Z., Yang, L. Robust Extreme Learning Machines with Different Loss Functions. Neural Process Lett 49, 1543–1565 (2019). https://doi.org/10.1007/s11063-018-9890-9