
C-Loss-Based Doubly Regularized Extreme Learning Machine


Abstract

Extreme learning machine (ELM) has become a significant learning methodology owing to its training efficiency. However, ELM is highly sensitive to outliers, which can cause overfitting. In this paper, a novel variant, the C-loss-based doubly regularized extreme learning machine, is presented to handle the dimensionality-reduction and overfitting problems. The proposed algorithm benefits from both the L1 norm and the L2 norm, and it replaces the square loss function with a C-loss function. It carries out feature selection and training simultaneously, and it suppresses noise and irrelevant information in the data to reduce dimensionality. To demonstrate its efficiency in dimension reduction, we test it on the Swiss Roll dataset, where it shows high efficiency and stable performance. Experimental results on several types of artificial and benchmark datasets show that the proposed method achieves better regression accuracy and faster training than the compared methods. Performance analysis further shows that it significantly decreases training time, alleviates overfitting, and improves generalization ability.
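As a concrete illustration of the recipe the abstract describes, the sketch below trains an ELM whose square loss is replaced by the bounded, correntropy-induced C-loss and whose output weights carry both an L1 and an L2 penalty, so feature selection happens during training. This is a minimal sketch under stated assumptions, not the authors' implementation: the function names (fit_cdr_elm, c_loss), the half-quadratic reweighting solver, and all hyperparameter values are illustrative choices.

    import numpy as np

    def c_loss(e, sigma=1.0):
        # Correntropy-induced (C-) loss: bounded above, so gross outliers
        # contribute almost nothing to the objective, unlike the square loss.
        beta = 1.0 / (1.0 - np.exp(-1.0 / (2.0 * sigma ** 2)))
        return beta * (1.0 - np.exp(-e ** 2 / (2.0 * sigma ** 2)))

    def soft_threshold(z, t):
        # Proximal operator of the L1 penalty; drives small weights to exactly 0.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def fit_cdr_elm(X, y, n_hidden=50, sigma=1.0, lam1=1e-3, lam2=1e-3,
                    n_iter=30, seed=0):
        rng = np.random.default_rng(seed)
        # Random, untrained hidden layer -- the defining trait of an ELM.
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = np.tanh(X @ W + b)                  # hidden-layer output matrix
        beta_out = np.zeros(n_hidden)
        for _ in range(n_iter):
            # Half-quadratic view of the C-loss: a weighted square loss whose
            # weights decay to 0 as the residual grows (outlier suppression).
            e = y - H @ beta_out
            w = np.exp(-e ** 2 / (2.0 * sigma ** 2))
            # Weighted ridge solve covers the square loss + L2 term...
            A = H.T @ (w[:, None] * H) + lam2 * np.eye(n_hidden)
            beta_out = np.linalg.solve(A, H.T @ (w * y))
            # ...then soft-thresholding applies the L1 term (feature selection).
            beta_out = soft_threshold(beta_out, lam1)
        return W, b, beta_out

    def predict(X, W, b, beta_out):
        return np.tanh(X @ W + b) @ beta_out

    # Toy usage: sinc regression with a few gross outliers injected.
    rng = np.random.default_rng(1)
    X = rng.uniform(-5.0, 5.0, size=(200, 1))
    y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(200)
    y[:5] += 5.0
    W, b, beta_out = fit_cdr_elm(X, y)

In this sketch the soft-thresholding step zeroes out weak output weights (the L1, feature-selection part), the ridge term keeps each weighted least-squares solve well conditioned (the L2 part), and the exponential weights shrink the influence of outlying residuals, which is the effect a bounded C-loss is meant to achieve.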





Acknowledgements

The authors thank the anonymous reviewers for their constructive comments and suggestions. This work was supported in part by the National Natural Science Foundation of China under Grant 51875457, the Key Research Project of Shaanxi Province (2022GY-050), the Natural Science Foundation of Shaanxi Province of China (2022JQ-636, 2021JQ-701), and the Special Scientific Research Plan Project of Shaanxi Province Education Department (21JK0905).

Author information


Corresponding author

Correspondence to Yan-Lin Fu.

Ethics declarations

Informed Consent

Informed consent was not required as no human beings or animals were involved.

Human and Animal Rights

This article does not contain any studies with human or animal subjects performed by any of the authors.

Conflict of Interest

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Wu, Q., Fu, Y., Cui, D. et al. C-Loss-Based Doubly Regularized Extreme Learning Machine. Cogn Comput 15, 496–519 (2023). https://doi.org/10.1007/s12559-022-10050-2

