A Bayesian-Optimized Convolutional Neural Network to Decode Reach-to-Grasp from Macaque Dorsomedial Visual Stream

  • Conference paper
  • First Online:
Machine Learning, Optimization, and Data Science (LOD 2022)

Abstract

Neural decoding is crucial to translate neural activity into commands for Brain-Computer Interfaces (BCIs) and provides information on how external variables (e.g., movement) are represented and encoded in the nervous system. Convolutional neural networks (CNNs) are emerging as neural decoders owing to their high predictive power and are widely applied to electroencephalographic signals; by automatically learning the most class-discriminative features, these algorithms improve decoding performance over classic decoders based on handcrafted features. However, applications of CNNs to single-neuron decoding are still scarce and require further validation. In this study, a CNN architecture was designed via Bayesian optimization and applied to decode different grip types from the activity of single neurons of the macaque posterior parietal cortex (area V6A). The Bayesian-optimized CNN significantly outperformed a naïve Bayes classifier, commonly used for neural decoding, and proved robust to reductions in the number of cells and of training trials. Adopting a sliding-window decoding approach with high temporal resolution (5 ms), the CNN captured grip-discriminative features early after cuing the animal, i.e., when the animal was only attending to the object to be grasped, further supporting that grip-related neural signatures are already strongly encoded in V6A during movement preparation. The proposed approach may have practical implications for invasive BCIs, enabling accurate and robust decoders, and may be combined with explanation techniques to build a general tool for neural decoding and analysis, boosting our comprehension of neural encoding.
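As an illustration of the approach outlined above, the sketch below shows how the hyperparameters of a small 1-D CNN classifying grip type from binned single-neuron activity can be selected with Bayesian optimization (here Optuna's default TPE sampler) and scored on held-out trials. Everything in the sketch is an assumption for illustration only: the synthetic spike counts, the architecture, the search space and the training schedule do not reproduce the authors' V6A recordings, their design space, or their training pipeline.

```python
# Minimal, self-contained sketch (NOT the authors' code or data): tune a small
# 1-D CNN that classifies grip type from binned single-neuron spike counts,
# using Bayesian optimization (Optuna's default TPE sampler). All sizes,
# ranges and the training schedule below are illustrative assumptions.
import numpy as np
import optuna
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_trials_data, n_neurons, n_bins, n_grips = 300, 80, 100, 5   # assumed sizes
y = rng.integers(0, n_grips, size=n_trials_data)              # grip label per trial
rates = 0.05 + 0.05 * rng.random((n_grips, n_neurons, 1))     # class-dependent firing
X = rng.poisson(rates[y], size=(n_trials_data, n_neurons, n_bins)).astype(np.float32)

# Fixed train/validation split of trials (the paper's cross-validation is not reproduced).
idx = rng.permutation(n_trials_data)
Xtr, ytr = torch.tensor(X[idx[:240]]), torch.tensor(y[idx[:240]])
Xva, yva = torch.tensor(X[idx[240:]]), torch.tensor(y[idx[240:]])

def build_cnn(trial):
    # Hyperparameters proposed by the Bayesian optimizer at each iteration.
    n_filters = trial.suggest_int("n_filters", 8, 64)
    kernel = trial.suggest_int("kernel", 3, 15, step=2)
    dropout = trial.suggest_float("dropout", 0.1, 0.5)
    return nn.Sequential(
        nn.Conv1d(n_neurons, n_filters, kernel, padding=kernel // 2),
        nn.BatchNorm1d(n_filters),
        nn.ELU(),
        nn.Dropout(dropout),
        nn.AdaptiveAvgPool1d(1),   # average over time bins
        nn.Flatten(),
        nn.Linear(n_filters, n_grips),
    )

def objective(trial):
    model = build_cnn(trial)
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(30):            # short full-batch training, illustrative only
        opt.zero_grad()
        loss_fn(model(Xtr), ytr).backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        acc = (model(Xva).argmax(dim=1) == yva).float().mean().item()
    return acc                     # validation accuracy, maximized by the study

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params, round(study.best_value, 3))
```

In the paper the optimized CNN is then evaluated with a sliding window at 5-ms resolution across the trial; in a sketch like this, that would amount to restricting the input to each successive window of time bins and repeating the evaluation per window.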

Funding

This study was supported by PRIN 2017 – Prot. 2017KZNZLN and by the MAIA project. The MAIA project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 951910. This article reflects only the authors’ view, and the Agency is not responsible for any use that may be made of the information it contains.

Author information

Corresponding author

Correspondence to Davide Borra.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Borra, D., Filippini, M., Ursino, M., Fattori, P., Magosso, E. (2023). A Bayesian-Optimized Convolutional Neural Network to Decode Reach-to-Grasp from Macaque Dorsomedial Visual Stream. In: Nicosia, G., et al. Machine Learning, Optimization, and Data Science. LOD 2022. Lecture Notes in Computer Science, vol 13811. Springer, Cham. https://doi.org/10.1007/978-3-031-25891-6_36

  • DOI: https://doi.org/10.1007/978-3-031-25891-6_36

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25890-9

  • Online ISBN: 978-3-031-25891-6

  • eBook Packages: Computer Science, Computer Science (R0)
