
Minimum Variance Embedded Random Vector Functional Link Network

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1333))

Abstract

In this paper, we propose two improved randomization-based feed-forward neural networks: the total variance minimization based random vector functional link network (Total-Var-RVFL) and the intraclass variance minimization based random vector functional link network (Class-Var-RVFL). Total-Var-RVFL exploits the dispersion of the training data by minimizing its total variance, whereas Class-Var-RVFL minimizes the intraclass variance of the training data. The proposed classification models are evaluated on 18 UCI datasets. The experimental analysis shows that the proposed models achieve better generalization performance than the given baseline models.
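The RVFL architecture the paper builds on trains only the output layer in closed form: random (untrained) hidden weights produce enhancement features, which are concatenated with the raw inputs (direct links), and the output weights are obtained by ridge regression. Below is a minimal sketch of this idea, assuming a tanh enhancement layer and one-hot targets. The `class_var_reg` term, which adds the within-class scatter of the concatenated features to the regularizer, is an illustrative stand-in for the intraclass-variance penalty of Class-Var-RVFL, not the authors' exact formulation.

```python
import numpy as np

def one_hot(y, n_classes):
    T = np.zeros((len(y), n_classes))
    T[np.arange(len(y)), y] = 1.0
    return T

def rvfl_fit(X, y, n_hidden=50, lam=1e-2, class_var_reg=0.0, seed=0):
    """Closed-form RVFL training. With class_var_reg > 0, the output
    weights are additionally penalized by the within-class scatter of
    the features (a sketch of the intraclass-variance idea)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Random, untrained input-to-enhancement weights and biases.
    W = rng.uniform(-1.0, 1.0, size=(d, n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    H = np.tanh(X @ W + b)            # enhancement features
    D = np.hstack([X, H])             # direct links + enhancement features
    T = one_hot(y, int(y.max()) + 1)
    # Within-class scatter of the concatenated features.
    S = np.zeros((D.shape[1], D.shape[1]))
    for c in np.unique(y):
        Dc = D[y == c] - D[y == c].mean(axis=0)
        S += Dc.T @ Dc
    # Ridge solution, optionally augmented with the scatter penalty.
    A = D.T @ D + lam * np.eye(D.shape[1]) + class_var_reg * S
    beta = np.linalg.solve(A, D.T @ T)
    return W, b, beta

def rvfl_predict(model, X):
    W, b, beta = model
    D = np.hstack([X, np.tanh(X @ W + b)])
    return np.argmax(D @ beta, axis=1)
```

Because the only learned parameters solve a linear system, training is non-iterative: no backpropagation, and a single matrix factorization suffices regardless of the regularizer used.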



Acknowledgement

This work was supported by the Science & Engineering Research Board (SERB) under Ramanujan Fellowship Grant No. SB/S2/RJN-001/2016 and Early Career Research Award Grant No. ECR/2017/000053. It is also supported by the Council of Scientific & Industrial Research (CSIR), New Delhi, India, under the Extra Mural Research (EMR) Scheme, Grant No. 22(0751)/17/EMR-II. We gratefully acknowledge the Indian Institute of Technology Indore for providing facilities and support.

Author information


Corresponding author

Correspondence to M. Tanveer.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Ganaie, M.A., Tanveer, M., Suganthan, P.N. (2020). Minimum Variance Embedded Random Vector Functional Link Network. In: Yang, H., Pasupa, K., Leung, A.CS., Kwok, J.T., Chan, J.H., King, I. (eds) Neural Information Processing. ICONIP 2020. Communications in Computer and Information Science, vol 1333. Springer, Cham. https://doi.org/10.1007/978-3-030-63823-8_48


  • DOI: https://doi.org/10.1007/978-3-030-63823-8_48


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-63822-1

  • Online ISBN: 978-3-030-63823-8

  • eBook Packages: Computer Science (R0)
