
Solving Number Series with Simple Recurrent Networks

  • Conference paper
Natural and Artificial Models in Computation and Biology (IWINAC 2013)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7930)


Abstract

Number series tests are a popular task in intelligence testing, used to measure a person’s ability in numerical reasoning. The function represented by a number series can be learned by artificial neural networks. In contrast to earlier research based on feedforward networks, we apply simple recurrent networks to the task of number series prediction. We systematically vary the number of input and hidden units to determine the optimal network configuration for the task. While feedforward networks could solve only 18 of 20 test series, a very small simple recurrent network found a solution for all of them. This underlines the importance of recurrence in such systems, a concept that is also fundamental to human cognition.
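
The published chapter does not include source code. As a rough illustration of the kind of architecture the abstract describes, the sketch below implements an Elman-style simple recurrent network that learns to predict the next element of a number series. It is a minimal sketch only: the example series, the scaling, the hidden-layer size, the use of PyTorch, and the training settings are all illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch, NOT the authors' implementation: an Elman-style simple
# recurrent network (SRN) that learns to predict the next element of a
# number series. The series, scaling, hidden-layer size, and training
# settings below are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleRecurrentNet(nn.Module):
    def __init__(self, hidden_size=4):
        super().__init__()
        # nn.RNN with tanh units corresponds architecturally to an Elman
        # network: the hidden state at time t-1 is fed back as context at t.
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x):
        h, _ = self.rnn(x)   # h: (batch, seq_len, hidden)
        return self.out(h)   # one next-value prediction per time step

# Example series 2, 4, 6, ..., 20, scaled into the range of the tanh units.
series = torch.arange(2.0, 22.0, 2.0) / 20.0
inputs = series[:-1].view(1, -1, 1)   # x_1 .. x_{n-1}
targets = series[1:].view(1, -1, 1)   # x_2 .. x_n

model = SimpleRecurrentNet(hidden_size=4)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# Plain backpropagation through time; the paper's training scheme may differ.
for epoch in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

# Feed the whole observed series and read the output after its last element,
# i.e. the network's guess for the continuation of the series.
with torch.no_grad():
    pred = model(series.view(1, -1, 1))[0, -1, 0] * 20.0
print(f"predicted next element: {pred.item():.2f}")   # ideally close to 22
```

Feeding one value per time step lets the recurrent hidden state carry the context that a feedforward network would have to receive through a wider input window; the paper's own experiments systematically vary both the number of input and hidden units.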

Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Glüge, S., Wendemuth, A. (2013). Solving Number Series with Simple Recurrent Networks. In: Ferrández Vicente, J.M., Álvarez Sánchez, J.R., de la Paz López, F., Toledo Moreo, F.J. (eds) Natural and Artificial Models in Computation and Biology. IWINAC 2013. Lecture Notes in Computer Science, vol 7930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-38637-4_43

  • DOI: https://doi.org/10.1007/978-3-642-38637-4_43

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-38636-7

  • Online ISBN: 978-3-642-38637-4

  • eBook Packages: Computer Science, Computer Science (R0)
