Temporal Coding of Neural Stimuli

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11731)

Abstract

Contemporary artificial neural networks code input data using various metrics and, unlike biological neural systems, usually do not use temporal coding. Real neural systems operate in time and use time to code external stimuli of various kinds, producing a uniform internal data representation suitable for further neural computations. This paper shows how the same can be achieved using special receptors and neurons that use time to code both external data and the internal results of computations. When neural processes take different amounts of time, the activation times of neurons can code the results of those computations. Such neurons automatically find the data associated with given inputs, so the most similar objects represented by the network can be retrieved and used for recognition and classification tasks. The conducted research and its results show that time space, temporal coding, and temporal neurons can be used in place of a data feature space, the direct use of input data, and classic artificial neurons. Time and temporal coding may prove an important direction for the development of future artificial neural networks inspired by biological neurons.
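To make the mechanism concrete, below is a minimal Python sketch of the idea described above, not the authors' actual model: each stored object is represented by a temporal neuron whose activation is delayed in proportion to its dissimilarity to the stimulus, so the earliest-firing neuron identifies the most similar stored object. The class name TemporalMemory, the time-scale parameter tau, Euclidean distance as the dissimilarity measure, and the sample data are all illustrative assumptions.

import numpy as np

class TemporalMemory:
    """Toy temporal-coding memory: activation time carries the result."""

    def __init__(self, tau=1.0):
        self.patterns = []  # stored feature vectors (one per remembered object)
        self.labels = []    # label attached to each stored object
        self.tau = tau      # assumed time units per unit of distance

    def store(self, pattern, label):
        # "Learning" here is simply memorizing the object and its label.
        self.patterns.append(np.asarray(pattern, dtype=float))
        self.labels.append(label)

    def activation_times(self, stimulus):
        # Receptor stage: dissimilarity is encoded as a firing delay, so the
        # time axis, not an output value, represents the computation's result.
        x = np.asarray(stimulus, dtype=float)
        return [self.tau * float(np.linalg.norm(x - p)) for p in self.patterns]

    def recall(self, stimulus):
        # The first neuron to fire wins: a temporal nearest-neighbour readout.
        times = self.activation_times(stimulus)
        winner = int(np.argmin(times))
        return self.labels[winner], times[winner]

# Usage: memorize two objects, then classify a stimulus by its first spike.
memory = TemporalMemory()
memory.store([5.1, 3.5, 1.4, 0.2], "setosa")
memory.store([7.0, 3.2, 4.7, 1.4], "versicolor")
label, t = memory.recall([5.0, 3.4, 1.5, 0.2])
print(f"earliest activation at t={t:.2f} -> {label}")

Because the winner is simply the earliest spike, recognition reduces to reading out an arrival time; racing many such neurons in parallel is what lets activation time stand in for an explicit comparison in feature space.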

Acknowledgement

This work was supported by grant DEC-2016/21/B/ST7/02220 from the National Science Centre, Poland, and by AGH grant 11.11.120.612.

Author information

Corresponding author

Correspondence to Adrian Horzyk.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Horzyk, A., Gołdon, K., Starzyk, J.A. (2019). Temporal Coding of Neural Stimuli. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions. ICANN 2019. Lecture Notes in Computer Science, vol. 11731. Springer, Cham. https://doi.org/10.1007/978-3-030-30493-5_56

  • DOI: https://doi.org/10.1007/978-3-030-30493-5_56

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30492-8

  • Online ISBN: 978-3-030-30493-5

  • eBook Packages: Computer Science, Computer Science (R0)
