
On the Generation of Desired Outputs for Spike Neural Networks (SNN)

  • Conference paper
  • First Online:
Distributed Computing and Artificial Intelligence, 19th International Conference (DCAI 2022)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 583)


Abstract

In supervised learning algorithms, it is necessary to define an error function for the parameter adjustment process to take place. Generating this function requires the input feature vectors and their respective desired outputs. In the context of neural networks, the network outputs are compared with the desired outputs to compute the error function. For standard networks, such as the Perceptron, Adaline and others, the desired output is essentially a class label or output value that is used directly to calculate the network error. In bioinspired networks, such as those built from Leaky Integrate-and-Fire (LIF) neurons, the outputs are electrical impulses (spikes). These impulses carry a built-in temporal dependence that does not occur for Perceptron neurons, which makes calculating the desired output values (spikes) for Spike Neural Networks (SNN) a challenge. The purpose of this paper is to define an analytical solution for building the desired spikes for each category of a classification problem solved by an SNN. The computational challenges of representing the dynamics of spike generation in bioinspired neurons, which directly affect the objective of the proposed solution, are also discussed.
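
To illustrate the temporal nature of a LIF neuron's output, the sketch below simulates a single LIF neuron under a constant input current and collects its spike times. It is a minimal illustration only: the function name (lif_spike_train) and all parameter values are assumptions chosen for readability, not the model configuration used in the paper.

    import numpy as np

    # Minimal leaky integrate-and-fire (LIF) neuron, Euler integration.
    # All parameter values below are illustrative defaults, not taken from the paper.
    def lif_spike_train(i_input, dt=1e-3, tau_m=20e-3, r_m=10e6,
                        v_rest=-70e-3, v_thresh=-54e-3, v_reset=-80e-3):
        """Integrate dv/dt = (-(v - v_rest) + r_m * i) / tau_m and return spike times."""
        v = v_rest
        spikes = []
        for step, i in enumerate(i_input):
            v += dt * (-(v - v_rest) + r_m * i) / tau_m
            if v >= v_thresh:            # threshold crossing -> emit a spike
                spikes.append(step * dt)
                v = v_reset              # reset the membrane potential
        return spikes

    # A constant 2 nA current for 200 ms yields a regular spike train: the desired
    # output for a class is a set of spike times, not a single scalar label.
    spike_times = lif_spike_train(np.full(200, 2e-9))
    print(spike_times)
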

Supported by FAPESP, CNPq and MackPesquisa.



Author information

Corresponding author

Correspondence to Diego Duarte Menescal.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Menescal, D.D., de Castro, L.N. (2023). On the Generation of Desired Outputs for Spike Neural Networks (SNN). In: Omatu, S., Mehmood, R., Sitek, P., Cicerone, S., Rodríguez, S. (eds) Distributed Computing and Artificial Intelligence, 19th International Conference. DCAI 2022. Lecture Notes in Networks and Systems, vol 583. Springer, Cham. https://doi.org/10.1007/978-3-031-20859-1_11
