Abstract
Supervised learning algorithms require the definition of an error function to drive the parameter adjustment process. Building this function requires the input feature vectors and their respective desired outputs. In the context of neural networks, the network outputs are compared with the desired outputs to compute the error function. For standard networks, such as the Perceptron and Adaline, the desired output is simply a class label or target value that is used directly in the error calculation. Bio-inspired networks, such as those built from Leaky Integrate-and-Fire (LIF) neurons, instead produce electrical impulses (spikes) as outputs. A spike carries a built-in temporal dependence that does not occur for Perceptron neurons, which makes it challenging to define the desired outputs (spike trains) for Spike Neural Networks (SNN). The purpose of this paper is to define an analytical solution to build the desired spikes for each category in a classification problem for an SNN. We also discuss the computational challenge of representing the dynamics of spike generation in bio-inspired neurons, which has a direct impact on the objective of the proposed solution.
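For context, the LIF dynamics mentioned above can be sketched with a simple Euler integration of the membrane equation τ dV/dt = −(V − V_rest) + R·I, with a threshold-and-reset rule producing the spikes. This is only an illustrative sketch with made-up parameter values, not the authors' implementation:

```python
import numpy as np

def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_reset=-0.065, v_thresh=-0.050, r_m=10e6):
    """Euler integration of a Leaky Integrate-and-Fire neuron.

    current: input current (A) at each time step (illustrative values).
    Returns the membrane-potential trace and the spike times (s).
    """
    v = v_rest
    trace, spikes = [], []
    for i, i_in in enumerate(current):
        # Leaky integration step: tau * dV/dt = -(V - V_rest) + R * I
        v += dt / tau * (-(v - v_rest) + r_m * i_in)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(i * dt)  # spike time carries the temporal information
            v = v_reset            # hard reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold current yields a regular spike train,
# i.e. the output is a sequence of spike times rather than a single value.
v_trace, spike_times = simulate_lif(np.full(10000, 2e-9))
```

The point of the sketch is that the neuron's output is the list `spike_times`, not a scalar, which is why a desired output for an SNN must itself be a spike train.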
Supported by FAPESP, CNPq and MackPesquisa.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Menescal, D.D., de Castro, L.N. (2023). On the Generation of Desired Outputs for Spike Neural Networks (SNN). In: Omatu, S., Mehmood, R., Sitek, P., Cicerone, S., Rodríguez, S. (eds) Distributed Computing and Artificial Intelligence, 19th International Conference. DCAI 2022. Lecture Notes in Networks and Systems, vol 583. Springer, Cham. https://doi.org/10.1007/978-3-031-20859-1_11
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-20858-4
Online ISBN: 978-3-031-20859-1
eBook Packages: Intelligent Technologies and Robotics