
Echo State Network Based on L0 Norm Regularization for Chaotic Time Series Prediction

  • Conference paper
Green, Pervasive, and Cloud Computing (GPC 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12398)


Abstract

The echo state network replaces the hidden layer of the traditional recurrent neural network with a large, sparse reservoir, which avoids the gradient-based training problems that affect most recurrent neural networks. However, computing the output weights by the least-squares method may be an ill-posed problem. In this paper, we propose an echo state network based on L0 norm regularization. The main idea is to limit the number of output connections by removing unimportant ones, so that the output weights can be computed effectively. Simulation results on chaotic time series prediction show the effectiveness of the proposed model.
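
As a rough illustration of this idea, the sketch below trains a small echo state network on a toy chaotic series and computes a sparse readout with a greedy L0-style selection (orthogonal matching pursuit), keeping only a fixed number of reservoir-to-output connections. This is a minimal sketch under assumed hyperparameters (reservoir size, spectral radius, number of retained connections), not the paper's exact algorithm.

```python
# Minimal sketch (assumed sizes and hyperparameters, not the paper's exact algorithm):
# an echo state network whose readout is made sparse by a greedy L0-style selection
# (orthogonal matching pursuit), keeping only a few reservoir-to-output connections.
import numpy as np

rng = np.random.default_rng(0)

# Toy chaotic series (logistic map) used as a stand-in for a benchmark such as Mackey-Glass.
T = 2000
u = np.empty(T)
u[0] = 0.4
for t in range(T - 1):
    u[t + 1] = 3.9 * u[t] * (1.0 - u[t])

# Reservoir: random input weights and a random recurrent matrix rescaled to
# spectral radius < 1 so that the echo state property (loosely) holds.
n_res = 300
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Drive the reservoir with a 1-D sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u_t in inputs:
        x = np.tanh(W_in * u_t + W @ x)
        states.append(x.copy())
    return np.asarray(states)

X = run_reservoir(u[:-1])   # reservoir states, shape (T-1, n_res)
y = u[1:]                   # one-step-ahead targets
washout = 100               # discard the initial transient
X, y = X[washout:], y[washout:]

def omp_readout(X, y, k):
    """Greedy L0-style readout: select k reservoir units, refitting least squares each step."""
    col_norm = np.linalg.norm(X, axis=0) + 1e-12
    support, residual = [], y.copy()
    for _ in range(k):
        corr = np.abs(X.T @ residual) / col_norm
        corr[support] = -np.inf          # never reselect an already chosen unit
        support.append(int(np.argmax(corr)))
        w_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ w_s
    w = np.zeros(X.shape[1])
    w[support] = w_s
    return w

w_out = omp_readout(X, y, k=30)          # only 30 of 300 output connections kept
rmse = np.sqrt(np.mean((X @ w_out - y) ** 2))
print(f"training RMSE with 30/300 connections: {rmse:.4e}")
```

In a full training procedure the number of retained connections (or an equivalent L0 penalty weight) would typically be chosen by validation error rather than fixed in advance.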



Author information


Correspondence to Fangwan Huang.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, L., Huang, F., Yu, Z. (2020). Echo State Network Based on L0 Norm Regularization for Chaotic Time Series Prediction. In: Yu, Z., Becker, C., Xing, G. (eds) Green, Pervasive, and Cloud Computing. GPC 2020. Lecture Notes in Computer Science, vol 12398. Springer, Cham. https://doi.org/10.1007/978-3-030-64243-3_12


  • DOI: https://doi.org/10.1007/978-3-030-64243-3_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-64242-6

  • Online ISBN: 978-3-030-64243-3

  • eBook Packages: Computer Science, Computer Science (R0)
