Abstract
Random feature mapping (RFM) is the core operation in random weight neural networks (RWNNs), and its quality has a significant impact on model performance. However, there has been no effective way to evaluate the quality of an RFM. In this paper, we introduce a new concept, the dispersion degree of matrix information distribution (DDMID), which can be used to measure RFM quality. We used DDMID in our experiments to explain the relationship between the rank of the input data and the performance of an RWNN model, and obtained several interesting results: (1) once the rank of the input data reaches a certain threshold, model performance increases as the rank increases; (2) the impact of the rank on model performance is insensitive to the type of activation function and the number of hidden nodes; and (3) a very small DDMID of an RFM matrix implies that the first \(k\) singular values in the singular value matrix of the RFM matrix carry too much of its information, which usually has a negative impact on the final closed-form solution of the RWNN model. In addition, we used DDMID to verify the improvement that the intrinsic plasticity (IP) algorithm brings to RFM. The experimental results showed that DDMID allows researchers to evaluate the mapping quality of data features before model training, and thus to predict the effect of data preprocessing or network initialization without training a model. We believe that our findings provide useful guidance for constructing and analyzing RWNN models.
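The ideas in the abstract can be made concrete with a small sketch. The exact definition of DDMID is given in the paper body; the entropy-based measure below is only an illustrative, hypothetical proxy for how evenly "information" is spread across the singular values of an RFM matrix. The function names (`random_feature_map`, `singular_value_dispersion`) are this sketch's own, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_feature_map(X, n_hidden=50, activation=np.tanh):
    """RWNN/ELM-style mapping H = g(XW + b) with random, untrained weights."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    return activation(X @ W + b)

def singular_value_dispersion(H):
    """Entropy (in nats) of the normalized singular values of H.

    A value near log(rank) means the spectrum is spread evenly; a value
    near 0 means a few leading singular values dominate. This is only a
    stand-in for the paper's DDMID, not its actual definition.
    """
    s = np.linalg.svd(H, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Compare a full-rank input matrix with a rank-deficient one
# (the same two columns duplicated five times, so rank 2).
X_full = rng.standard_normal((200, 10))
X_low = np.hstack([X_full[:, :2]] * 5)

H_full = random_feature_map(X_full)
H_low = random_feature_map(X_low)

print("rank:", np.linalg.matrix_rank(X_full), np.linalg.matrix_rank(X_low))
print("dispersion:", singular_value_dispersion(H_full),
      singular_value_dispersion(H_low))
```

Under this proxy, a low-rank input tends to concentrate the RFM matrix's spectrum in its first few singular values, which is the situation the paper associates with a degraded closed-form solution.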
Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (Grants 61672358 and 61836005) and the Guangdong Science and Technology Department (Grant 2018B010107004).
Ethics declarations
Conflict of interest
The authors declared no potential conflict of interest with respect to the research, authorship, and/or publication of this article.
Cite this article
Cao, W., Hu, L., Gao, J. et al. A study on the relationship between the rank of input data and the performance of random weight neural network. Neural Comput & Applic 32, 12685–12696 (2020). https://doi.org/10.1007/s00521-020-04719-8