
Two-Step FORCE Learning Algorithm for Fast Convergence in Reservoir Computing

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2020 (ICANN 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12397)

Abstract

Reservoir computing devices are promising as energy-efficient machine learning hardware for real-time information processing. However, some online learning algorithms for reservoir computing are too complex for direct hardware implementation. In this study, we focus on the first-order reduced and controlled error (FORCE) algorithm for online learning with reservoir computing models. We propose a two-step FORCE algorithm that simplifies the operations of the original FORCE algorithm and thereby reduces the required memory. We show analytically and numerically that the proposed algorithm can converge faster than the original FORCE algorithm.
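
For orientation, the following is a minimal Python sketch of the original FORCE readout update (a recursive-least-squares rule applied to a reservoir with output feedback), which the paper takes as its starting point. The proposed two-step simplification is not reproduced here, and all network sizes, parameters, and the placeholder teacher signal are illustrative assumptions rather than the paper's settings.

    # Sketch of the original FORCE (RLS-based) readout update for a
    # rate-based reservoir with output feedback (cf. Sussillo & Abbott, 2009).
    # All sizes and constants below are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    N, dt, T = 300, 0.001, 5.0          # reservoir size, step width, duration (assumed)
    g = 1.5                             # recurrent gain (assumed)

    W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # recurrent weights
    w_fb = rng.uniform(-1.0, 1.0, N)                   # feedback weights
    w_out = np.zeros(N)                                # trained readout weights
    P = np.eye(N)                                      # inverse correlation matrix estimate

    x = 0.5 * rng.standard_normal(N)
    r = np.tanh(x)
    z = 0.0

    def teacher(t):
        return np.sin(2.0 * np.pi * t)  # placeholder target (cf. Eqs. (23)-(25))

    for n in range(int(T / dt)):
        t = n * dt
        # reservoir state driven by its own fed-back output z
        x = x + dt * (-x + W @ r + w_fb * z)
        r = np.tanh(x)
        z = w_out @ r

        # FORCE / recursive-least-squares update of the readout weights
        e = z - teacher(t)              # error before the weight update
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)         # gain vector
        P = P - np.outer(k, Pr)         # rank-1 update of P
        w_out = w_out - e * k           # error-driven correction of the readout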

This work was partially supported by JSPS KAKENHI Grant Numbers JP20J13556 (HT), JP20K11882 (GT), and JST CREST Grant Number JPMJCR19K2, Japan.



Acknowledgements

The authors thank Dr. K. Fujiwara for stimulating discussions.

Author information

Corresponding author

Correspondence to Hiroto Tamura.

Appendix

We used the parameter settings shown in Table 1 in all the simulations. \(\varDelta t\) denotes the time step width used when converting continuous-time systems into discrete-time systems (i.e., the discrete time n corresponds to the continuous time \(n\varDelta t\)).

In Fig. 2, we used the following equation for the periodic teacher signal d(t):

$$\begin{aligned} d(t) = 0.2 \cdot \left[ \sin \left( \frac{2\pi t }{1.0}\right) + \sin \left( \frac{2\pi t}{2.0}\right) + \sin \left( \frac{2\pi t}{4.0}\right) \right] + 1.5. \end{aligned}$$
(23)

In Fig. 3, we used the following equation for the periodic teacher signal d(t):

$$\begin{aligned} d(t) = 0.5 \cdot \left[ \sin \left( \frac{2\pi t}{2.0}\right) \right] + 1.5. \end{aligned}$$
(24)

In Fig. 4, we used the following equation for the periodic teacher signal d(t) with the variable period T:

$$\begin{aligned} d(t) = 0.5 \cdot \left[ \sin \left( \frac{2\pi t}{T}\right) \right] + 1.5, \end{aligned}$$
(25)

and we searched for the minimum appropriate \(T_0\) in increments of 0.1 s.
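
As a concrete illustration, the teacher signals of Eqs. (23)-(25) can be sampled on the discrete grid \(t = n\varDelta t\) as in the following Python sketch. The value of \(\varDelta t\) used below is only a placeholder, since the actual value is specified in Table 1.

    # Hedged sketch of the teacher signals in Eqs. (23)-(25), sampled at
    # the discrete times t = n * dt; dt below is a placeholder value.
    import numpy as np

    def d_fig2(t):
        # Eq. (23): sum of three sinusoids with periods 1.0, 2.0 and 4.0 s
        return 0.2 * (np.sin(2*np.pi*t/1.0) + np.sin(2*np.pi*t/2.0)
                      + np.sin(2*np.pi*t/4.0)) + 1.5

    def d_fig3(t):
        # Eq. (24): single sinusoid with period 2.0 s
        return 0.5 * np.sin(2*np.pi*t/2.0) + 1.5

    def d_fig4(t, T):
        # Eq. (25): single sinusoid with variable period T
        return 0.5 * np.sin(2*np.pi*t/T) + 1.5

    dt = 0.001                      # placeholder step width (actual value: Table 1)
    t = np.arange(0.0, 8.0, dt)     # discrete times t = n * dt
    d = d_fig2(t)                   # e.g. the teacher signal used in Fig. 2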

Table 1. Model parameters common to all the simulations.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Tamura, H., Tanaka, G. (2020). Two-Step FORCE Learning Algorithm for Fast Convergence in Reservoir Computing. In: Farkaš, I., Masulli, P., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2020. ICANN 2020. Lecture Notes in Computer Science, vol 12397. Springer, Cham. https://doi.org/10.1007/978-3-030-61616-8_37

  • DOI: https://doi.org/10.1007/978-3-030-61616-8_37

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61615-1

  • Online ISBN: 978-3-030-61616-8

  • eBook Packages: Computer Science (R0)
