
Stabilize Sequential Data Representation via Attraction Module

  • Conference paper
  • Brain Informatics (BI 2023)

Abstract

Artificial intelligence systems operating in the sequential decision-making paradigm inevitably have to perform effective spatio-temporal processing. Memory models for such systems are expected not only to memorize the observed data stream, but also to encode it so that dissimilar sequences can be separated and similar ones consolidated. Moreover, for solving complex problems it is advantageous to be able to treat sequences as unit abstractions, which imposes restrictions on the topology of the representation space and on the information contained in the representations themselves. In this paper, we propose a method for encoding sequences that allows efficient memorization while retaining the degree of similarity between sequences. Our approach combines a biologically inspired temporal memory with a spatial attractor that stabilizes the temporal code. Experiments on synthetic data confirm the efficiency of the encoding and point to promising directions for further development of the method.
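The abstract describes a two-stage pipeline: a temporal memory first encodes an observed sequence into a sparse representation, and an attractor module then pulls noisy or drifting codes toward stable fixed points, so that similar sequences consolidate while dissimilar ones stay separated. The sketch below illustrates that general idea only; it is not the authors' implementation, and every specific in it is an assumption: random k-of-n sparse symbol codes, a leaky-integration fold with k-winners-take-all as a crude stand-in for the temporal memory, and a Hebbian recurrent network settled with k-winners-take-all dynamics as the attraction module.

import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 20  # code width and number of active bits (sparsity) -- illustrative values

def sparse_code():
    """Random K-of-N sparse binary code for one input symbol."""
    code = np.zeros(N)
    code[rng.choice(N, size=K, replace=False)] = 1.0
    return code

symbols = {c: sparse_code() for c in "abcd"}  # toy alphabet

def encode_sequence(seq, decay=0.5):
    """Fold a symbol sequence into one K-sparse state: leaky integration
    followed by K-winners-take-all (a crude temporal-memory stand-in;
    the decayed sum makes the code order-sensitive)."""
    state = np.zeros(N)
    for c in seq:
        state = decay * state + symbols[c]
    out = np.zeros(N)
    out[np.argsort(state)[-K:]] = 1.0  # keep the K most active bits
    return out

def attractor_store(patterns):
    """Hebbian weight matrix over the stored binary codes."""
    P = np.array(patterns)
    W = P.T @ P
    np.fill_diagonal(W, 0.0)  # no self-excitation
    return W

def attractor_settle(W, code, steps=10):
    """Iterate recurrent dynamics with K-winners-take-all until a fixed
    point: a noisy code is pulled toward the nearest stored pattern."""
    x = code.copy()
    for _ in range(steps):
        winners = np.argsort(W @ x)[-K:]
        x_new = np.zeros(N)
        x_new[winners] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

codes = [encode_sequence(s) for s in ("abab", "abcd", "dcba")]
W = attractor_store(codes)

noisy = codes[1].copy()
flip = rng.choice(N, size=10, replace=False)
noisy[flip] = 1.0 - noisy[flip]  # corrupt 10 bits
restored = attractor_settle(W, noisy)
print("bit errors before:", int((noisy != codes[1]).sum()),
      "after:", int((restored != codes[1]).sum()))

Under these toy assumptions, the ten injected bit flips are typically removed within one or two settling steps, which is the stabilization property the paper's attraction module is meant to provide for temporally encoded sequences.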




Author information


Correspondence to Petr Kuderov.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kuderov, P., Dzhivelikian, E., Panov, A.I. (2023). Stabilize Sequential Data Representation via Attraction Module. In: Liu, F., Zhang, Y., Kuai, H., Stephen, E.P., Wang, H. (eds) Brain Informatics. BI 2023. Lecture Notes in Computer Science, vol 13974. Springer, Cham. https://doi.org/10.1007/978-3-031-43075-6_8


  • DOI: https://doi.org/10.1007/978-3-031-43075-6_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43074-9

  • Online ISBN: 978-3-031-43075-6

  • eBook Packages: Computer Science, Computer Science (R0)
