
Performance Improvement of FORCE Learning for Chaotic Echo State Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 13109)

Abstract

An echo state network (ESN) is a kind of recurrent neural network (RNN) built around a randomly generated, large-scale, and sparsely connected RNN coined a reservoir, in which only the readout weights are trained. First-order reduced and controlled error (FORCE) learning is an effective online training approach for chaotic RNNs. This paper proposes a composite FORCE learning approach enhanced by memory regressor extension to train chaotic ESNs efficiently. In the proposed approach, a generalized prediction error is obtained by applying regressor extension and linear filtering operators with memory to retain past excitation information, and this generalized prediction error serves as additional feedback for updating the readout weights, so that partial parameter convergence can be achieved rapidly even under weak partial excitation. Simulation results on a dynamics modeling problem indicate that the proposed approach considerably improves parameter convergence speed and parameter trajectory smoothness compared with the original FORCE learning.



Acknowledgments

This work was supported in part by the Guangdong Pearl River Talent Program of China under Grant No. 2019QN01X154.

Author information

Correspondence to Yongping Pan.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Wu, R., Nakajima, K., Pan, Y. (2021). Performance Improvement of FORCE Learning for Chaotic Echo State Networks. In: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (eds) Neural Information Processing. ICONIP 2021. Lecture Notes in Computer Science, vol. 13109. Springer, Cham. https://doi.org/10.1007/978-3-030-92270-2_23

  • DOI: https://doi.org/10.1007/978-3-030-92270-2_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-92269-6

  • Online ISBN: 978-3-030-92270-2

  • eBook Packages: Computer Science, Computer Science (R0)
