
A derived least square fast learning network model


Abstract

The extreme learning machine (ELM) requires a large number of hidden layer nodes during training, so the number of random parameters increases sharply and undermines network stability. Moreover, relying on a single activation function limits the generalization capability of the network. This paper proposes a derived least square fast learning network (DLSFLN) to address these problems. DLSFLN derives a family of activation functions from a base function through successive differentiation. This increases the variety of activation functions and enhances the mapping capability of the hidden layer neurons while keeping the dimension of the random parameters unchanged. DLSFLN randomly generates the input weights and hidden layer thresholds and then uses the least square method to determine both the connection weights between the input layer and the output layer and those between the hidden layer and the output layer. Regression and classification experiments show that, compared with other neural network algorithms such as the fast learning network (FLN), DLSFLN trains faster and achieves better training accuracy, generalization capability, and stability.
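As a rough illustration of the training procedure described above, the sketch below implements an FLN-style network in Python: input weights and hidden thresholds are drawn at random, each hidden node applies one member of a family of activation functions obtained by successively differentiating a base function, and a single least-squares solve via the Moore-Penrose pseudoinverse determines both the hidden-to-output and the direct input-to-output weights. The sigmoid base function, the cyclic assignment of derivatives to hidden nodes, and the class name DLSFLNSketch are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a DLSFLN-style model. Assumptions: sigmoid as the base
# function, its successive derivatives as the "derived" activations, and an
# FLN-style direct input-to-output path solved jointly by least squares.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dsigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s)                      # first derivative of sigmoid

def d2sigmoid(z):
    s = sigmoid(z)
    return s * (1.0 - s) * (1.0 - 2.0 * s)    # second derivative of sigmoid

class DLSFLNSketch:
    def __init__(self, n_hidden, activations=(sigmoid, dsigmoid, d2sigmoid), seed=0):
        self.n_hidden = n_hidden
        self.activations = activations
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        Z = X @ self.W + self.b
        # Assign each hidden node one activation from the derived family,
        # cycling through the list: more activation variety at the same
        # random-parameter dimension.
        cols = [self.activations[j % len(self.activations)](Z[:, j])
                for j in range(self.n_hidden)]
        return np.column_stack(cols)

    def fit(self, X, Y):
        # Input weights and hidden thresholds are random and stay fixed.
        self.W = self.rng.uniform(-1.0, 1.0, (X.shape[1], self.n_hidden))
        self.b = self.rng.uniform(-1.0, 1.0, self.n_hidden)
        # Design matrix stacks hidden outputs, the raw inputs (direct
        # input-output connection, as in FLN), and a bias column; one
        # pseudoinverse solve yields all output weights at once.
        G = np.hstack([self._hidden(X), X, np.ones((X.shape[0], 1))])
        self.beta = np.linalg.pinv(G) @ Y
        return self

    def predict(self, X):
        G = np.hstack([self._hidden(X), X, np.ones((X.shape[0], 1))])
        return G @ self.beta

# Toy regression demo on synthetic data.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 3))
Y = np.sin(X.sum(axis=1, keepdims=True))
model = DLSFLNSketch(n_hidden=30).fit(X, Y)
print("train MSE:", np.mean((model.predict(X) - Y) ** 2))
```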


Acknowledgements

This work was supported by the following grants: Major Program of the National Natural Science Foundation of China (No. 11790282); National Natural Science Foundation of China (Nos. 11702179 and 51605315); Young Top-Notch Talents Program of Higher School in Hebei Province (No. BJ2019035); the Independent Project of the State Key Laboratory of Mechanical Behavior and System Safety of Traffic Engineering Structures (No. ZZ2020-42); Preferred Hebei Postdoctoral Research Project (No. B2019003017); S&T Program of Hebei (No. 20310803D); and the Postgraduate Innovation Funding Project of Shijiazhuang Tiedao University (No. YC2020030).

Author information


Corresponding author

Correspondence to Enli Chen.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, M., Jia, S., Chen, E. et al. A derived least square fast learning network model. Appl Intell 50, 4176–4194 (2020). https://doi.org/10.1007/s10489-020-01773-6

