
Incremental learning paradigm with privileged information for random vector functional-link networks: IRVFL+

  • Original Article
  • Published in Neural Computing and Applications

Abstract

The learning using privileged information (LUPI) paradigm, which pioneered the teacher–student interaction mechanism, allows learning models to exploit additional information during the training stage. This paper is the first to propose an incremental learning algorithm with the LUPI paradigm for random vector functional-link (RVFL) networks, named IRVFL+. This novel algorithm leverages privileged information in incremental RVFL (IRVFL) networks during training, providing a new constructive method for training IRVFL networks. To address two scenarios, one requiring fast modeling at modest accuracy and the other requiring high accuracy at the cost of slower modeling, two algorithmic implementations of IRVFL+, based respectively on local and global update strategies, are presented for data classification and regression problems. Specifically, the first algorithm, named IRVFL-I+, calculates only the output weights of the newly added hidden nodes, while the input and output parameters of all existing hidden nodes remain fixed. In contrast to IRVFL-I+, the second algorithm, named IRVFL-II+, updates all parameters of both the existing and the newly added hidden nodes. Moreover, the convergence of both implementations is studied. Finally, experimental results indicate that IRVFL+ indeed performs favorably.
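The local-update strategy attributed to IRVFL-I+ can be illustrated with a minimal sketch (my own construction for illustration, not the authors' code): a constructive RVFL-style regressor in which each newly added hidden node receives random input parameters, and its output weight is fit by least squares to the current residual while all previously added nodes stay frozen. The privileged-information term and the direct input–output links of a full RVFL network are omitted here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update_sketch(X, y, n_nodes=200):
    """Constructive training with a local update: each new hidden
    node's output weight is fit to the current residual; the
    parameters of previously added nodes are never revisited."""
    n, d = X.shape
    residual = y.copy()
    weights, biases, betas = [], [], []
    for _ in range(n_nodes):
        w = rng.normal(size=d)               # random input weights (fixed once drawn)
        b = rng.normal()                     # random bias
        h = np.tanh(X @ w + b)               # new node's activations on the training set
        beta = (h @ residual) / (h @ h)      # 1-D least-squares fit to the residual
        residual = residual - beta * h       # residual norm is non-increasing
        weights.append(w); biases.append(b); betas.append(beta)
    return np.array(weights), np.array(biases), np.array(betas)

def predict(X, weights, biases, betas):
    H = np.tanh(X @ weights.T + biases)
    return H @ betas

# Toy regression: the training error shrinks as nodes are added.
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
W, b, beta = local_update_sketch(X, y, n_nodes=200)
pred = predict(X, W, b, beta)
mse = float(np.mean((y - pred) ** 2))
print(f"train MSE with 200 nodes: {mse:.4f}")
```

Because each step solves a one-dimensional least-squares problem against the residual, the training error can never increase as nodes are added, which is the intuition behind the convergence analysis of such constructive schemes. A global-update variant in the spirit of IRVFL-II+ would instead re-solve for all output weights jointly after each node is added.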



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 61973306, the Natural Science Foundation of Jiangsu Province under Grant BK20200086, the Open Project Foundation of State Key Laboratory of Synthetical Automation for Process Industries under Grant 2020-KF-21-10 and the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant KYCX21_2254.

Author information


Corresponding author

Correspondence to Wei Dai.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Dai, W., Ao, Y., Zhou, L. et al. Incremental learning paradigm with privileged information for random vector functional-link networks: IRVFL+. Neural Comput & Applic 34, 6847–6859 (2022). https://doi.org/10.1007/s00521-021-06793-y
