
An Experimental Study on the Relationships Among Neural Codes and the Computational Properties of Neural Networks

  • Conference paper
  • In: From Bioinspired Systems and Biomedical Applications to Machine Learning (IWINAC 2019)

Abstract

Biological neural networks (BNNs) have inspired the creation of artificial neural networks (ANNs) [19]. One of the properties of BNNs is computational robustness, but this property is often overlooked in computer science because ANNs are usually virtualizations executed on a physical machine that lacks computational robustness. However, it was recently proposed that computational robustness could be a key feature driving the selection of the computational model in the evolution of animals [20]. Until now, only energetic cost and processing time had been considered the features that drove the evolution of the nervous system. This new standpoint leads us to ask whether computational robustness could have driven, through natural selection, the evolution not only of the computational model but also of other traits of the animal nervous system. Because an important feature of an animal’s nervous system is its neural code, we tested the relationship between the computational properties of feed-forward neural networks and their neural codes through in silico experiments. We found two main results: there is a relationship between the number of epochs needed to train a feed-forward neural network using back-propagation and its neural code, and there is a relationship between the computational robustness of a feed-forward neural network and its neural code. The first result is relevant to ANNs and the second to BNNs.
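The kind of in silico experiment described above can be illustrated with a toy sketch. This is not the paper's actual setup: the architecture, task, learning rate, output codes, and lesion model below are all illustrative assumptions. A tiny feed-forward network is trained by back-propagation to classify four input patterns under two different output neural codes (one-hot and binary); we record how many epochs each code needs to converge, and how accurate each network remains after single-weight lesions as a crude proxy for computational robustness:

```python
# Toy sketch: how an output neural code can affect both back-propagation
# training time and robustness to damage. Pure-Python, single hidden layer.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_net(n_in, n_hid, n_out):
    r = lambda: random.uniform(-0.5, 0.5)
    return {"w1": [[r() for _ in range(n_in)] for _ in range(n_hid)],
            "b1": [r() for _ in range(n_hid)],
            "w2": [[r() for _ in range(n_hid)] for _ in range(n_out)],
            "b2": [r() for _ in range(n_out)]}

def forward(net, x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(net["w1"], net["b1"])]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b)
         for row, b in zip(net["w2"], net["b2"])]
    return h, o

def train(net, data, lr=0.5, max_epochs=20000, tol=0.1):
    """On-line back-propagation; returns epochs until every output is within tol."""
    for epoch in range(1, max_epochs + 1):
        worst = 0.0
        for x, t in data:
            h, o = forward(net, x)
            do = [(ti - oi) * oi * (1 - oi) for oi, ti in zip(o, t)]   # output deltas
            dh = [hi * (1 - hi) * sum(do[k] * net["w2"][k][j] for k in range(len(do)))
                  for j, hi in enumerate(h)]                           # hidden deltas
            for k in range(len(o)):
                for j in range(len(h)):
                    net["w2"][k][j] += lr * do[k] * h[j]
                net["b2"][k] += lr * do[k]
            for j in range(len(h)):
                for i in range(len(x)):
                    net["w1"][j][i] += lr * dh[j] * x[i]
                net["b1"][j] += lr * dh[j]
            worst = max(worst, max(abs(ti - oi) for oi, ti in zip(o, t)))
        if worst < tol:
            return epoch
    return max_epochs

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]  # class c = index of the pattern
onehot = [(x, [1.0 if k == c else 0.0 for k in range(4)]) for c, x in enumerate(inputs)]
binary = [(x, [float(c >> 1 & 1), float(c & 1)]) for c, x in enumerate(inputs)]

def decode_onehot(o):
    return max(range(len(o)), key=lambda k: o[k])

def decode_binary(o):
    return sum(1 << (len(o) - 1 - k) for k, ok in enumerate(o) if ok > 0.5)

def robustness(net, decode):
    """Mean accuracy after zeroing each output weight in turn (crude lesion model)."""
    accs = []
    for k in range(len(net["w2"])):
        for j in range(len(net["w2"][k])):
            saved, net["w2"][k][j] = net["w2"][k][j], 0.0
            accs.append(sum(decode(forward(net, x)[1]) == c
                            for c, x in enumerate(inputs)) / len(inputs))
            net["w2"][k][j] = saved
    return sum(accs) / len(accs)

net_oh, net_bin = make_net(2, 4, 4), make_net(2, 4, 2)
e_oh, e_bin = train(net_oh, onehot), train(net_bin, binary)
r_oh, r_bin = robustness(net_oh, decode_onehot), robustness(net_bin, decode_binary)
print(f"one-hot code: {e_oh} epochs, lesion accuracy {r_oh:.2f}")
print(f"binary code : {e_bin} epochs, lesion accuracy {r_bin:.2f}")
```

The interesting point is not the specific numbers but that two codes for the same task can differ on both metrics, which is the kind of relationship the paper's experiments probe at scale.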

This research was supported by Universidad Isabel I. I would like to thank Lori-Ann Tuscan for proofreading this paper.


References

  1. Aggarwal, C.: Neural Networks and Deep Learning: A Textbook. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94463-0

  2. Albert, R., et al.: Error and attack tolerance of complex networks. Nature 406, 378–382 (2000)

  3. Antonopoulos, C., et al.: Evaluating performance of neural codes in model neural communication networks. Neural Netw. 109, 90–102 (2019)

  4. Beaulieu-Laroche, L., et al.: Enhanced dendritic compartmentalization in human cortical neurons. Cell 175(3), 643–651 (2018)

  5. Bullmore, E., Sporns, O.: The economy of brain network organization. Nat. Rev. Neurosci. 12, 336–349 (2012)

  6. Cherniak, C.: Component placement optimization in the brain. J. Neurosci. 14(4), 2418–2427 (1994)

  7. Ghosh, A., Pal, N., Pal, S.: Modeling of component failure in neural networks for robustness evaluation: an application to object extraction. IEEE Trans. Neural Netw. 6(3), 648–656 (1995)

  8. Ghosh, A., Tanaka, H.: On making neural network based learning systems robust. IETE J. Res. 44(4–5), 219–225 (1998)

  9. Guerguiev, J., et al.: Towards deep learning with segregated dendrites. eLife 6, e22901 (2017)

  10. Gulyás, A., et al.: Navigable networks as Nash equilibria of navigation games. Nat. Commun. 6(7651), 1–10 (2015)

  11. Kalampokis, A., et al.: Robustness in biological neural networks. Physica A: Stat. Mech. Appl. 317(3–4), 581–590 (2003)

  12. Kazantsev, V.B., et al.: Self-referential phase reset based on inferior olive oscillator dynamics. Proc. Nat. Acad. Sci. 101(52), 18183–18188 (2004)

  13. Kong, Q., et al.: Efficient coding matters in the organization of the early visual system. Neural Netw. 105, 218–226 (2018)

  14. Laughlin, S.B., Sejnowski, T.J.: Communication in neural networks. Science 301(5641), 1870–1874 (2003)

  15. Yu, L., Yu, Y.: Energy-efficient neural information processing in individual neurons and neuronal networks. J. Neurosci. Res. 95(11), 2253–2266 (2017)

  16. Lucal, H.M.: Arithmetic operations for digital computers using a modified reflected binary code. IRE Trans. Electron. Comput. EC-8(4), 449–458 (1959)

  17. Makarenko, V., Llinás, R.: Experimentally determined chaotic phase synchronization in a neuronal system. Proc. Nat. Acad. Sci. 95(26), 15747–15752 (1998)

  18. Manin, Y.I.: Error-correcting codes and neural networks. Sel. Math. 24(1), 521–530 (2018)

  19. McCulloch, W., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5, 115–133 (1943)

  20. Miguel-Tomé, S.: The influence of computational traits on the natural selection of the nervous system. Nat. Comput. 17(2), 403–425 (2018)

  21. Moreno, H., et al.: Synaptic transmission block by presynaptic injection of oligomeric amyloid beta. Proc. Nat. Acad. Sci. 106(14), 5901–5906 (2009)

  22. Pryluk, R., et al.: A tradeoff in the neural code across regions and species. Cell 176(3), 597–609.e18 (2019)

  23. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning internal representations by error propagation. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, pp. 318–362. MIT Press (1986)

  24. Werbos, P.: Beyond regression: new tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Harvard University (1974)

  25. Yeung, R.: Information Theory and Network Coding. Springer, Boston (2008). https://doi.org/10.1007/978-0-387-79234-7

  26. Yuste, R.: From the neuron doctrine to neural networks. Nat. Rev. Neurosci. 16, 487–497 (2015)

Author information

Correspondence to Sergio Miguel-Tomé.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Miguel-Tomé, S. (2019). An Experimental Study on the Relationships Among Neural Codes and the Computational Properties of Neural Networks. In: Ferrández Vicente, J., Álvarez-Sánchez, J., de la Paz López, F., Toledo Moreo, J., Adeli, H. (eds) From Bioinspired Systems and Biomedical Applications to Machine Learning. IWINAC 2019. Lecture Notes in Computer Science(), vol 11487. Springer, Cham. https://doi.org/10.1007/978-3-030-19651-6_5

  • DOI: https://doi.org/10.1007/978-3-030-19651-6_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-19650-9

  • Online ISBN: 978-3-030-19651-6
