Abstract
Contemporary artificial neural networks code input data using various metrics and, unlike biological neural systems, rarely use temporal coding. Real neural systems operate in time and use time to code external stimuli of various kinds, producing a uniform internal data representation that can be used for further neural computations. This paper shows how this can be done using special receptors and neurons that use time to code both external data and the internal results of computations. When neural processes take different amounts of time, the activation times of neurons can be used to code the results of those computations. Such neurons can automatically find data associated with given inputs; in this way, the most similar objects represented by the network can be found and used for recognition or classification tasks. The conducted research and its results show that time space, temporal coding, and temporal neurons can be used in place of data feature space, direct use of input data, and classic artificial neurons. Time and temporal coding may become an important branch in the development of future artificial neural networks inspired by biological neurons.
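The core idea of the abstract can be illustrated with a minimal sketch, which is not the authors' implementation: feature values are coded as spike times (a stronger stimulus spikes earlier), and the stored pattern whose expected spike times best match the input activates first, identifying the most similar object. All function names, the encoding range, and the timing rule below are illustrative assumptions.

```python
def encode(values, v_min=0.0, v_max=10.0, t_max=1.0):
    """Map each feature value to a spike time in [0, t_max].
    Larger values spike earlier (stronger stimulus -> shorter latency)."""
    return [t_max * (1.0 - (v - v_min) / (v_max - v_min)) for v in values]

def activation_time(input_times, pattern_times):
    """A pattern neuron's activation time grows with the total timing
    mismatch between the input spikes and its expected spike times."""
    return sum(abs(a - b) for a, b in zip(input_times, pattern_times))

def most_similar(input_values, stored_objects):
    """Return the label of the stored object whose neuron activates first."""
    t_in = encode(input_values)
    return min(stored_objects,
               key=lambda label: activation_time(t_in, encode(stored_objects[label])))

# Hypothetical stored patterns (Iris-like feature vectors for illustration).
patterns = {"setosa": [5.1, 3.5, 1.4, 0.2],
            "versicolor": [5.9, 3.0, 4.2, 1.5]}
print(most_similar([5.0, 3.4, 1.5, 0.2], patterns))
```

Here the activation-time comparison plays the role the abstract assigns to temporal neurons: the result of the computation is read off from which neuron fires first, not from a numeric output value.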
Acknowledgement
This work was supported by grant DEC-2016/21/B/ST7/02220 from the National Science Centre, Poland, and by AGH grant 11.11.120.612.
© 2019 Springer Nature Switzerland AG
Cite this paper
Horzyk, A., Gołdon, K., Starzyk, J.A. (2019). Temporal Coding of Neural Stimuli. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Workshop and Special Sessions. ICANN 2019. Lecture Notes in Computer Science, vol. 11731. Springer, Cham. https://doi.org/10.1007/978-3-030-30493-5_56
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-30492-8
Online ISBN: 978-3-030-30493-5