
A Generalized I-ELM Algorithm for Handling Node Noise in Single-Hidden Layer Feedforward Networks

  • Conference paper
  • In: Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10634)

Abstract

The incremental extreme learning machine (I-ELM) algorithm provides a training mechanism with low computational complexity for single-hidden-layer feedforward networks (SLFNs). However, the original I-ELM algorithm does not consider the node noise situation, and node noise may greatly degrade the performance of a trained SLFN. This paper presents a generalized node noise resistant I-ELM (GNNR-I-ELM) for SLFNs. We first define a noise-resistant training objective function for SLFNs. Afterwards, we develop the GNNR-I-ELM algorithm, which adds \(\tau \) nodes to the network at each iteration. The GNNR-I-ELM algorithm estimates the output weights of the newly added nodes and does not change any of the previously trained output weights. Its noise tolerance is much better than that of the original I-ELM. Moreover, we prove that the GNNR-I-ELM algorithm converges in terms of the training-set mean squared error of noisy networks.
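The abstract does not spell out the update rule, but the mechanism it describes (add \(\tau \) random hidden nodes per iteration, solve only for their output weights against the current residual, and leave all earlier weights untouched) can be sketched. The following Python sketch is a minimal illustration, not the paper's exact method: it assumes multiplicative node noise of variance sigma2 on the hidden-node outputs, which turns each batch solve into a diagonally loaded least-squares problem; the function names, the tanh activation, and the noise model are all assumptions for illustration.

```python
import numpy as np

def gnnr_ielm_sketch(X, y, max_nodes=100, tau=5, sigma2=0.01, seed=0):
    """Hypothetical sketch of a noise-resistant incremental ELM.

    Assumption: multiplicative node noise with variance sigma2 on the
    hidden outputs, so E[H_noisy^T H_noisy] = H^T H + sigma2 * diag(H^T H).
    Minimising the expected squared residual over this noise gives a
    diagonally loaded normal equation for each batch of tau new weights.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    residual = np.asarray(y, dtype=float).copy()  # current residual e
    model = []                                    # (W, b, beta) per batch
    for _ in range(max_nodes // tau):
        # Random input weights and biases for the tau new hidden nodes.
        W = rng.standard_normal((d, tau))
        b = rng.standard_normal(tau)
        H = np.tanh(X @ W + b)                    # n x tau hidden outputs
        # Noise-aware Gram matrix under the assumed multiplicative noise.
        G = H.T @ H
        G_noisy = G + sigma2 * np.diag(np.diag(G))
        # Output weights for the new nodes only; earlier weights are fixed.
        beta = np.linalg.solve(G_noisy, H.T @ residual)
        residual -= H @ beta
        model.append((W, b, beta))
    return model

def predict(model, X):
    """Sum the contributions of all incrementally added node batches."""
    return sum(np.tanh(X @ W + b) @ beta for W, b, beta in model)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 3))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    model = gnnr_ielm_sketch(X, y, max_nodes=50, tau=5, sigma2=0.01)
    print("train MSE:", np.mean((y - predict(model, X)) ** 2))
```

Note that because each iteration only solves for the new batch's weights against the current residual, the training-set error is non-increasing at every step, which is the intuition behind the convergence property the paper claims to prove.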

Acknowledgment

The work was supported by a research grant from the Government of the Hong Kong Special Administrative Region (CityU 11259516).

Author information

Correspondence to Chi-Sing Leung.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Wong, H.T., Leung, C.S., Kwong, S. (2017). A Generalized I-ELM Algorithm for Handling Node Noise in Single-Hidden Layer Feedforward Networks. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_45

  • DOI: https://doi.org/10.1007/978-3-319-70087-8_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8

  • eBook Packages: Computer Science, Computer Science (R0)
