A study on the relationship between the rank of input data and the performance of random weight neural network

  • Original Article
  • Neural Computing and Applications

Abstract

Random feature mapping (RFM) is the core operation in a random weight neural network (RWNN), and its quality has a significant impact on the performance of an RWNN model. However, there has been no good way to evaluate the quality of RFM. In this paper, we introduce a new concept called the dispersion degree of matrix information distribution (DDMID), which can be used to measure the quality of RFM. We used DDMID in our experiments to explain the relationship between the rank of the input data and the performance of the RWNN model, and obtained some interesting results: (1) once the rank of the input data reaches a certain threshold, the model’s performance increases as the rank increases; (2) the impact of the rank on model performance is insensitive to the type of activation function and the number of hidden nodes; and (3) if the DDMID of an RFM matrix is very small, the first \(\mathbf{k}\) singular values in the singular value matrix of the RFM matrix carry too much of the information, which usually has a negative impact on the final closed-form solution of the RWNN model. In addition, we used DDMID to verify the improvement that the intrinsic plasticity (IP) algorithm brings to RFM. The experimental results showed that DDMID allows researchers to evaluate the mapping quality of data features before training, and thus to predict the effect of data preprocessing or network initialization without training a model. We believe that our findings can provide useful guidance for constructing and analyzing RWNN models.
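To make the abstract's quantities concrete, the sketch below illustrates (in Python with NumPy) a standard RWNN/ELM-style random feature mapping, an entropy-style dispersion measure over the normalized singular values of the mapping matrix, and the ridge-regularized closed-form output solution. This is a minimal sketch under assumptions: the function names are illustrative, and `ddmid` here is only a plausible stand-in for the paper's DDMID, whose exact definition is given in the full text.

```python
import numpy as np

def random_feature_mapping(X, n_hidden, rng, activation=np.tanh):
    """RWNN/ELM-style mapping H = g(X W + b) with fixed random weights."""
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=(1, n_hidden))
    return activation(X @ W + b)

def ddmid(H, eps=1e-12):
    """Assumed dispersion measure (not the paper's exact formula):
    normalized entropy of the singular-value spectrum of H.
    Near 1: information is spread over many singular values.
    Near 0: the first few singular values carry almost everything."""
    s = np.linalg.svd(H, compute_uv=False)
    p = s / (s.sum() + eps)
    entropy = -(p * np.log(p + eps)).sum()
    return entropy / np.log(len(s))

def output_weights(H, T, ridge=1e-6):
    """Closed-form (ridge-regularized least-squares) RWNN output weights."""
    return np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ T)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))   # input data; its rank is the quantity of interest
T = rng.standard_normal((200, 1))    # regression targets
H = random_feature_mapping(X, n_hidden=50, rng=rng)
print("rank(X) =", np.linalg.matrix_rank(X), "dispersion(H) =", round(ddmid(H), 3))
beta = output_weights(H, T)
```

In this toy setup, artificially lowering the rank of X (for example, by replacing some columns with copies of others) tends to concentrate the singular values of H and lower the dispersion score, which is the kind of relationship between input rank, mapping quality, and the closed-form solution that the paper investigates.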


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Grants 61672358 and 61836005) and the Guangdong Science and Technology Department (Grant 2018B010107004).

Author information

Corresponding author

Correspondence to Weipeng Cao.

Ethics declarations

Conflict of interest

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Cao, W., Hu, L., Gao, J. et al. A study on the relationship between the rank of input data and the performance of random weight neural network. Neural Comput & Applic 32, 12685–12696 (2020). https://doi.org/10.1007/s00521-020-04719-8
