
An initial study on the rank of input matrix for extreme learning machine

Original Article
International Journal of Machine Learning and Cybernetics

Abstract

It has been observed that the rank of the input data matrix has a critical impact on the performance of a trained classifier. This paper studies the rank of the input data matrix in the context of the extreme learning machine (ELM), a single-hidden-layer feed-forward neural network with non-iterative training. We experimentally investigate how model accuracy changes as the rank of the input data matrix increases, and examine the relationship between the input matrix rank and the complexity of the classification problem. The analysis and experiments yield several meaningful observations.
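
For context, the sketch below illustrates the kind of model the study is built on: a standard ELM classifier written in NumPy. The function names (`train_elm`, `predict_elm`) and all parameter choices are our own illustrative assumptions, not the authors' experimental code. Hidden-layer weights are drawn at random and fixed, output weights are obtained in closed form via the Moore-Penrose pseudo-inverse, and `np.linalg.matrix_rank(X)` gives the rank of the input data matrix whose effect on accuracy the paper investigates.

```python
import numpy as np

def train_elm(X, T, n_hidden=100, seed=0):
    """Fit a basic ELM: random, fixed hidden layer; closed-form output weights."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Hidden-layer weights and biases are random and never updated (non-iterative training).
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)              # hidden-layer output matrix
    # Output weights via the Moore-Penrose pseudo-inverse of H.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy example: 200 samples, 10 features, 3 classes (one-hot targets).
X = np.random.default_rng(1).standard_normal((200, 10))
labels = np.random.default_rng(2).integers(0, 3, size=200)
T = np.eye(3)[labels]
W, b, beta = train_elm(X, T, n_hidden=50)
pred = predict_elm(X, W, b, beta).argmax(axis=1)
print("input matrix rank:", np.linalg.matrix_rank(X))   # the quantity studied in the paper
print("training accuracy:", (pred == labels).mean())
```

This is a minimal sketch under the usual ELM formulation; the paper's experiments vary the rank of X and observe the resulting classification accuracy.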



Acknowledgements

The first author, Xingmin Zhao, would like to thank her supervisor, Professor Xizhao Wang, for his guidance and suggestions for improving this paper. This study was supported by the Basic Research Project of the Knowledge Innovation Program in Shenzhen (JCYJ20150324140036825) and the National Natural Science Foundation of China (71371063, 61672358).

Author information


Corresponding author

Correspondence to Xingmin Zhao.


About this article


Cite this article

Zhao, X., Cao, W., Zhu, H. et al. An initial study on the rank of input matrix for extreme learning machine. Int. J. Mach. Learn. & Cyber. 9, 867–879 (2018). https://doi.org/10.1007/s13042-016-0615-y
