Abstract
Spiking neural networks (SNNs) have gained attention as models of the sparse, event-driven communication of biological neurons, and as such have shown increasing promise for energy-efficient applications on neuromorphic hardware. As with classical artificial neural networks (ANNs), predictive uncertainties are important for decision making in high-stakes applications, such as autonomous vehicles, medical diagnosis, and high-frequency trading. Yet, discussion of uncertainty estimation in SNNs is limited, and approaches for uncertainty estimation in ANNs are not directly applicable to SNNs. Here, we propose an efficient Monte Carlo (MC) dropout based approach for uncertainty estimation in SNNs. Our approach exploits the time-step mechanism of SNNs to enable MC-dropout in a computationally efficient manner, without introducing significant overhead during training and inference, while demonstrating high accuracy and uncertainty quality.
Notes
- 1.
For LTS-SNNs, dropout is not enabled at inference time as this leads to notably weak performance for LTS-SNNs, similar to that of ANNs.
References
Brier, G.W., et al.: Verification of forecasts expressed in terms of probability. Mon. Weather Rev. 78(1), 1–3 (1950)
Damianou, A., Lawrence, N.D.: Deep Gaussian processes. In: Artificial Intelligence and Statistics, pp. 207–215. PMLR (2013)
Fang, W., Yu, Z., Chen, Y., Masquelier, T., Huang, T., Tian, Y.: Incorporating learnable membrane time constant to enhance learning of spiking neural networks. In: CVPR, pp. 2661–2671 (2021)
Gal, Y.: Uncertainty in Deep Learning. Ph.D. thesis, Department of Engineering, University of Cambridge, Cambridge (2016)
Gal, Y., Ghahramani, Z.: Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In: ICML, pp. 1050–1059. PMLR (2016)
Gawlikowski, J., et al.: A survey of uncertainty in deep neural networks. arXiv preprint arXiv:2107.03342 (2021)
Gerstner, W., Kistler, W.M.: Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, Cambridge (2002)
Gneiting, T., Raftery, A.E.: Strictly proper scoring rules, prediction, and estimation. J. Am. Stat. Assoc. 102(477), 359–378 (2007)
Graves, A.: Practical variational inference for neural networks. In: NIPS, vol. 24 (2011)
Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On calibration of modern neural networks. In: ICML, pp. 1321–1330. PMLR (2017)
Hendrycks, D., Dietterich, T.: Benchmarking neural network robustness to common corruptions and perturbations. In: ICLR (2019). https://openreview.net/forum?id=HJz6tiCqYm
Jang, H., Simeone, O.: Multisample online learning for probabilistic spiking neural networks. IEEE Trans. Neural Netw. Learn. Syst. 33(5), 2034–2044 (2022)
Lakshminarayanan, B., Pritzel, A., Blundell, C.: Simple and scalable predictive uncertainty estimation using deep ensembles. In: NIPS, vol. 30 (2017)
Mackay, D.J.C.: Bayesian methods for adaptive models. Ph.D. thesis, California Institute of Technology (1992)
Naeini, M.P., Cooper, G., Hauskrecht, M.: Obtaining well calibrated probabilities using Bayesian binning. In: AAAI (2015)
Neftci, E.O., Mostafa, H., Zenke, F.: Surrogate gradient learning in spiking neural networks: bringing the power of gradient-based optimization to spiking neural networks. IEEE Sig. Process. Mag. 36(6), 51–63 (2019)
Ovadia, Y., et al.: Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift. In: NIPS, vol. 32 (2019)
Pouget, A., Beck, J.M., Ma, W.J., Latham, P.E.: Probabilistic brains: knowns and unknowns. Nat. Neurosci. 16(9), 1170–1178 (2013)
Rathi, N., Srinivasan, G., Panda, P., Roy, K.: Enabling deep spiking neural networks with hybrid conversion and spike timing dependent backpropagation. In: ICML (2020). https://openreview.net/forum?id=B1xSperKvH
Savin, C., Deneve, S.: Spatio-temporal representations of uncertainty in spiking neural networks. Adv. Neural Inf. Process. Syst. (2014)
Schuman, C.D., Kulkarni, S.R., Parsa, M., Mitchell, J.P., Date, P., Kay, B.: Opportunities for neuromorphic computing algorithms and applications. Nat. Comput. Sci. 2(1), 10–19 (2022)
Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
Wan, L., Zeiler, M., Zhang, S., Le Cun, Y., Fergus, R.: Regularization of neural networks using dropconnect. In: ICML, pp. 1058–1066. PMLR (2013)
Wilson, A.G., Izmailov, P.: Bayesian deep learning and a probabilistic perspective of generalization. NIPS 33, 4697–4708 (2020)
Yin, B., Corradi, F., Bohté, S.M.: Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks. Nat. Mach. Intell. 3(10), 905–913 (2021)
Yin, B., Corradi, F., Bohté, S.M.: Accurate online training of dynamical spiking neural networks through forward propagation through time. Nat. Mach. Intell. (2023)
Yue, Y., et al.: Hybrid spiking neural network fine-tuning for hippocampus segmentation. arXiv preprint arXiv:2302.07328 (2023)
Acknowledgments
TS is supported by NWO-NWA grant NWA.1292.19.298. SB is supported by the European Union (grant agreement 7202070 “HBP”).
Appendix
Proper Scoring Rules. A scoring rule \(S(\textbf{p},y)\) assigns a numerical value to a predictive distribution \(\textbf{p}\) and a realized label y. The associated scoring function \(s(\textbf{p},\textbf{q})\) is defined as the expected score of \(S(\textbf{p},y)\) under the distribution \(\textbf{q}\), i.e., \(s(\textbf{p},\textbf{q}) = \mathbb {E}_{y \sim \textbf{q}}[S(\textbf{p},y)]\).
If a scoring rule satisfies \(s(\textbf{p},\textbf{q}) \le s(\textbf{q},\textbf{q})\) for all \(\textbf{p}\) and \(\textbf{q}\), it is called a proper scoring rule. If, in addition, \(s(\textbf{p},\textbf{q}) = s(\textbf{q},\textbf{q})\) implies \(\textbf{q}=\textbf{p}\), the scoring rule is strictly proper. When evaluating the quality of predicted probabilities, an optimal score under a proper scoring rule indicates a perfect prediction [17]. In contrast, trivial solutions can achieve optimal values under an improper scoring rule [8, 17].
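Propriety can be checked numerically for a concrete rule. The following is an illustrative NumPy sketch (not part of the paper's code): it computes the scoring function \(s(\textbf{p},\textbf{q})\) for the Brier score and verifies that forecasting \(\textbf{q}\) itself is optimal. Note that the Brier score is negatively oriented (lower is better), so the propriety inequality reverses direction here.

```python
import numpy as np

def expected_brier(p, q):
    """Scoring function s(p, q): the expected Brier score of forecast p
    when the true label is drawn from distribution q."""
    score = 0.0
    for y, qy in enumerate(q):
        onehot = np.zeros_like(p)
        onehot[y] = 1.0
        score += qy * np.sum((p - onehot) ** 2)
    return score

q = np.array([0.6, 0.3, 0.1])          # true label distribution
p = np.array([1 / 3, 1 / 3, 1 / 3])    # a different (uniform) forecast

# Forecasting q itself attains the lowest expected Brier score,
# consistent with the Brier score being a (strictly) proper rule.
print(expected_brier(q, q))  # 0.54
print(expected_brier(p, q))  # ~0.667, strictly worse
```

Algebraically, \(s(\textbf{p},\textbf{q}) = \Vert \textbf{p}\Vert ^2 - 2\,\textbf{p}\cdot \textbf{q} + 1\), which is minimized exactly at \(\textbf{p}=\textbf{q}\) over the simplex.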
The two most commonly used proper scoring rules are the Brier score [1] and NLL. The Brier score is the squared \(L_2\) norm of the difference between \(\textbf{p}\) and the one-hot encoding of the true label y. NLL is defined as \( S(\textbf{p}, y) = -\textrm{log}\, p(y\vert \textbf{x})\), with y being the true label of the sample \(\textbf{x}\). Of the two, the Brier score is generally preferable, because NLL can unacceptably over-emphasize small differences between small probabilities [17]. Note that proper scoring rules are often used as loss functions to train neural networks [8, 13].
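The two rules above can be stated directly in code. This is a minimal NumPy sketch of both scores for a single prediction, following the definitions in the text:

```python
import numpy as np

def brier_score(p, y):
    """Squared L2 distance between predictive distribution p
    and the one-hot encoding of the true label y."""
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    return np.sum((p - onehot) ** 2)

def nll(p, y):
    """Negative log-likelihood of the true label y under p."""
    return -np.log(p[y])

p = np.array([0.7, 0.2, 0.1])  # predictive distribution over 3 classes
print(brier_score(p, 0))  # 0.3^2 + 0.2^2 + 0.1^2 = 0.14
print(nll(p, 0))          # -log(0.7) ≈ 0.357
```

For both rules, lower is better; the optimum (zero Brier score, zero NLL) is attained only by a one-hot prediction on the correct class.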
ECE. The ECE is a scalar summary statistic of calibration that approximates miscalibration [10, 15]. To compute the ECE, the predicted labels \(\hat{y}_n = \textrm{argmax}_y\, \textbf{p}(y\vert \mathbf {x_n})\) and the corresponding predicted probabilities (confidences) \(\hat{p}_n = \textrm{max}_y\, \textbf{p}(y\vert \mathbf {x_n})\) of the test instances are grouped into M equal-interval bins. The ECE is then defined as
\(\textrm{ECE} = \sum _{m=1}^{M} f_m \vert o_m - e_m \vert \),
where \(o_m\) is the fraction of correctly classified instances in the \(m^{th}\) bin, \(e_m\) is the average of the predicted probabilities in the \(m^{th}\) bin, and \(f_m\) is the fraction of all test instances falling into the \(m^{th}\) bin. The ECE is not a proper scoring rule, and thus optimal ECEs can result from trivial solutions.
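The ECE computation described above can be sketched as follows; this is an illustrative NumPy implementation (the bin count and array layout are assumptions, not from the paper):

```python
import numpy as np

def ece(probs, labels, n_bins=10):
    """Expected Calibration Error over M equal-interval confidence bins.

    probs:  (N, C) array of predictive distributions
    labels: (N,)   array of true class indices
    """
    conf = probs.max(axis=1)                  # predicted probability p̂_n
    pred = probs.argmax(axis=1)               # predicted label ŷ_n
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        f_m = in_bin.mean()                   # fraction of instances in bin m
        if f_m > 0:
            o_m = correct[in_bin].mean()      # accuracy in bin m
            e_m = conf[in_bin].mean()         # average confidence in bin m
            total += f_m * abs(o_m - e_m)
    return total

# Confidence 0.75 but only 50% accuracy → ECE = |0.5 - 0.75| = 0.25
probs = np.array([[0.75, 0.25]] * 4)
print(ece(probs, np.array([0, 0, 1, 1])))  # 0.25
```

A perfectly calibrated model (confidence matching accuracy in every bin) yields an ECE of zero; as noted above, zero ECE alone does not imply a good model, since a trivially underconfident predictor can also achieve it.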
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Sun, T., Yin, B., Bohté, S. (2023). Efficient Uncertainty Estimation in Spiking Neural Networks via MC-dropout. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14254. Springer, Cham. https://doi.org/10.1007/978-3-031-44207-0_33
Print ISBN: 978-3-031-44206-3
Online ISBN: 978-3-031-44207-0