Never before has work in Computational Neuroscience and Artificial Intelligence been recognized as clearly as last month, when John Hopfield and Geoffrey Hinton were awarded the Nobel Prize in Physics, and David Baker, Demis Hassabis and John Jumper were awarded the Nobel Prize in Chemistry. As editors-in-chief of Biological Cybernetics, we must point out that some of Hopfield's seminal work was in fact published in this journal: the demonstration that neural networks can solve notoriously difficult optimization problems such as the Travelling Salesman/Salesperson Problem (Hopfield and Tank 1985), and their use in understanding oscillatory and dynamical firing patterns (Li and Hopfield 1989). These publications were directly followed up by many authors (Bizzarri 1991; Braham and Hamblen 1988; Breston et al. 2021; Collins 2019; Daucé et al. 2002; Gershman 2024; Ghosh et al. 1991; Greve et al. 2009; Jayadeva and Bhaumik 1992; Kamgar-Parsi et al. 1990; Kamgar-Parsi and Kamgar-Parsi 1990; Kawato and Cortese 2021; Kononenko 1989; Kubat et al. 1994; Kunstmann et al. 1994; Kunz 1991; Lei 1990; Li and Hopfield 1989; Linhares 1998; Mandziuk 1995; Mandziuk and Macukow 1992; Mitra and Sapolsky 2009; Neelakanta et al. 1991; Ozawa et al. 1998; Porat 1989; Samardzija 1990; Sterne 2012; Trianni and Dorigo 2006; Van den Bout and Miller 1989; Van Hulle 1991; Wacholder et al. 1989; Wilson and Pawley 1988; Yang and França 2003; Yuille 1989; Zak 1990; Zheng et al. 2010; Destexhe and Sejnowski 2009; Suri and Sejnowski 2002; Sejnowski 1976a, b; Ermentrout and Cowan 1979; Ramirez-Moreno and Sejnowski 2012). Artificial Intelligence also drew the attention of many of our authors (Bardal and Chalmers 2023; Bermudez-Contreras 2021; Collins 2019; Gershman 2024; Kawato and Cortese 2021; Kubat et al. 1994; Linhares 1998; Porat 1989; Trianni and Dorigo 2006; Zak 1990).
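As a reminder of the core idea behind this line of work, here is a minimal sketch of Hopfield-style dynamics: asynchronous updates of binary units under symmetric weights monotonically decrease an energy function, which is what lets such networks settle into (possibly suboptimal) solutions of optimization problems, as in Hopfield and Tank (1985). The random weights and network size below are illustrative assumptions, not an encoding of a specific problem such as the TSP.

```python
import numpy as np

# Minimal sketch of discrete Hopfield dynamics: asynchronous updates of
# binary units never increase the energy E(s) = -0.5 * s^T W s, so the
# network settles into a local minimum of E.
rng = np.random.default_rng(1)

n = 12
W = rng.normal(size=(n, n))
W = (W + W.T) / 2                 # symmetric weights, required for descent
np.fill_diagonal(W, 0.0)          # no self-connections

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=n)   # random initial state
for _ in range(100):
    i = rng.integers(n)               # pick one unit at random (asynchronous)
    s[i] = 1.0 if W[i] @ s >= 0 else -1.0
print(f"final energy: {energy(s):.2f} (a local minimum of E)")
```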
Nobel recognition is a double-edged sword. While it validates a large body of established work, there is a risk that it could decrease researchers' motivation to do more. On the other hand, it may draw the scientific community's attention to the contributions made so far and encourage others to leverage this work in their future research. We are firmly convinced that more can and will be done. We outline below a few directions we believe hold great promise, and we would of course welcome such contributions to our journal.
1- The data centers and computing facilities providing the physical infrastructure for modern machine learning and artificial intelligence require vast amounts of energy, water, and other resources, a demand that may prove unsustainable. In contrast, human and animal intelligence require only the resources of a living individual. What lessons remain to be learned about resource-efficient computation in living organisms that can inspire practical innovations leading to sustainable industrial computing? Furthermore, what are the impacts of climate change (e.g. chronic changes in temperature or humidity) on the neural mechanisms of perception, action or cognition? Can they be modeled and predicted?
2- Mimicking certain neural computations with artificial neural networks has been tremendously successful over the last 15 years. But what about incorporating synapses more faithfully, in particular synaptic dynamics? In a biological neural network, firing at high frequencies becomes ineffective if synapses are depressed and no longer transmit information (see the sketch below). Moreover, memory is stored in synapses, not neurons. Synaptic transmission has far richer and more diverse time scales than neural or dendritic computations, and the potential of neuromodulation (e.g. via dopamine or norepinephrine) to influence brain computation is at least as important at the synaptic level as it is at the neural level. Neuromodulation is known to endow computations with flexibility and complexity in ways we are just beginning to understand. And there are many more synapses than neurons or glial cells. Could the next step beyond neural computation and artificial neural networks be synaptic computation and artificial synaptic networks? What would be the impact of synaptic computations on modern brain-inspired AI algorithms, which by and large reduce each synapse to a single number?
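To make the cost of reducing a synapse to a single number concrete, here is a minimal sketch of short-term synaptic depression in the spirit of the Tsodyks-Markram model (depression only). The parameter values (U, tau_rec) and the 40 Hz test train are illustrative assumptions, not fits to data.

```python
import numpy as np

# Minimal sketch of short-term synaptic depression (Tsodyks-Markram style).
# Each spike consumes a fraction U of the available synaptic resources x,
# which then recover exponentially with time constant tau_rec.
def depressed_psp_amplitudes(spike_times, U=0.5, tau_rec=0.8):
    """Return the relative PSP amplitude produced by each presynaptic spike.

    spike_times : sorted 1-D array of spike times in seconds.
    U           : fraction of available resources used per spike (assumed).
    tau_rec     : recovery time constant of synaptic resources, s (assumed).
    """
    x = 1.0               # fraction of available synaptic resources
    last_t = None
    amplitudes = []
    for t in np.asarray(spike_times, dtype=float):
        if last_t is not None:
            # resources recover exponentially toward 1 between spikes
            x = 1.0 - (1.0 - x) * np.exp(-(t - last_t) / tau_rec)
        amplitudes.append(U * x)   # transmitted "efficacy" of this spike
        x -= U * x                 # each spike consumes a fraction U of x
        last_t = t
    return np.array(amplitudes)

# A 40 Hz train: successive PSPs shrink, so high-frequency firing
# transmits progressively less information, as argued above.
train = np.arange(0, 0.25, 1 / 40)
print(depressed_psp_amplitudes(train).round(3))
```

A static weight, by contrast, would transmit every spike in this train with equal efficacy; the dynamics above are exactly what a single scalar cannot capture.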
3- Much effort over the past few decades has gone toward understanding cognitive processes such as perception, decision-making or spatial navigation. But what about emotion? While emotion plays an undeniable role in adaptation, homeostasis, and efficiency, investigations in robots and machines have treated it as an 'add-on': 'emotional modules' or 'mechanisms' bolted onto classical cognitive architectures, typically as an afterthought. There are no dedicated emotional centers in the brain that one can lesion or stimulate to causally prevent or trigger a specific emotion, only centers that can bias toward them. We argue that it is time to rethink emotional processing from the ground up and build a new generation of neurally inspired perceptual, decision-making and navigational algorithms that use emotional processing intrinsically.
4- One of the major challenges facing AI is the (often accurate) perception that it is a black box. Algorithms are relatively easy to implement, but because their outputs rest on massive computing power and massive training datasets, they are too complex for a single human being to understand. For this reason, AI has faced, and may continue to face, skepticism from users and developers alike. It may be time to redesign current approaches to include explainability and trustworthiness intrinsically.
5- Many current AI tools and algorithms, such as transformers, deep learning networks or reinforcement learning approaches, are loosely inspired by neurobiology. They are, however, greatly simplified. For example, transformers are generally feed-forward networks, where learning adjusts only scalar synaptic weights between neurons. The brain has found solutions (perhaps suboptimal ones, but solutions nonetheless) to all the major questions AI is trying to answer. Can AI algorithms and architectures use the deeper insights that are still being obtained from the brains of insects or mammals? Isn't there more AI can do, or do better, if it were more closely inspired by the brain and its many forms of computation? Reciprocally, many modern neural network architectures were arrived at seemingly in an ad hoc fashion, because 'they work better that way' (e.g. faster convergence, greater robustness, closer to human performance). This essentially trial-and-error engineering approach has produced very successful algorithms, such as 'context' processing and 'attention' in transformers, and the use of a reinforcement variable in reinforcement learning. Could these algorithms in fact be the ones used by the brain? For example, could dopamine be one of the reinforcement variables postulated by reinforcement learning algorithms (see the sketch below)? Could the principles of engineering be similar to those of evolution, albeit operating on a different time scale? Should experimentalists pay more attention to the details of AI algorithms?
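To illustrate the kind of correspondence we have in mind, here is a minimal sketch of temporal-difference (TD) learning on a short chain of states; the reward prediction error delta computed at each step is the quantity that has been compared to phasic dopamine responses. The task, learning rate, and discount factor are illustrative assumptions, not drawn from any particular study.

```python
import numpy as np

# Minimal sketch of TD(0) learning: a chain of 5 states with a reward at
# the end. The reward prediction error delta is the putative
# 'dopamine-like' reinforcement variable.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)            # learned value of each state

for episode in range(500):
    for s in range(n_states - 1):
        s_next = s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward at the end
        # reward prediction error: positive when outcomes beat expectations
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta

print(V.round(2))   # values propagate backward from the rewarded state
```

With learning, delta shifts from the time of reward to the earliest predictive state, mirroring the well-known backward transfer of phasic dopamine responses to reward-predicting cues.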
6- The brain is a massively parallel device that tolerates a remarkably large amount of apparent noise. Yet its computations can be exquisitely precise and reliable (as in the auditory system, for example). It is clear that noise and stochasticity are features of the system, not merely bugs to be compensated for by redundancy (the sketch below gives a classic illustration). A fundamental rethinking of computational and AI models may be called for, with new AI architectures built to make use of stochasticity and to leverage its benefits. Conversely, experimentalists may gain more insight into their data if they minimized averaging and smoothing and focused on 'outliers' as a possible source of discovery. New computational methods in AI (e.g. stochastic AI: AI algorithms with intrinsic noise) and neural data analyses (e.g. systematic trial-by-trial analysis methods) may be required.
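One well-known example of noise as a feature is stochastic resonance: a subthreshold signal becomes detectable by a simple threshold unit only when a moderate amount of noise is added. The sketch below demonstrates this with an illustrative sine-wave input; the threshold, amplitude, and noise levels are assumptions chosen for demonstration.

```python
import numpy as np

# Minimal sketch of stochastic resonance: a subthreshold periodic signal
# is carried by a threshold unit's output only at intermediate noise levels.
rng = np.random.default_rng(42)

t = np.linspace(0, 10, 5000)
signal = 0.8 * np.sin(2 * np.pi * 1.0 * t)    # peak 0.8 < threshold: subthreshold
threshold = 1.0

def detection_score(noise_sd):
    """Correlation between the signal and the unit's threshold crossings."""
    spikes = (signal + rng.normal(0, noise_sd, t.size)) > threshold
    if spikes.std() == 0:          # no crossings at all -> nothing detected
        return 0.0
    return np.corrcoef(signal, spikes.astype(float))[0, 1]

for sd in (0.0, 0.3, 3.0):
    print(f"noise sd {sd}: detection {detection_score(sd):.2f}")
# Zero noise: no threshold crossings at all; too much noise: crossings are
# essentially random. A moderate noise level maximizes the signal carried
# by the spikes -- noise here is a computational resource, not a bug.
```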
We stand firmly convinced that the recognition of the outstanding contributions of the Nobel laureates, and of many others not cited here, is the beginning, not the end, of a long journey. The opportunity for fundamental, paradigm-shifting advances in Computational Neuroscience and Artificial Intelligence together, hand in hand, is clear. This synergy will undoubtedly revolutionize other fields such as Robotics, Computer Science, and Biology. This is an exciting time, and Biological Cybernetics stands ready to support this progress by providing a powerful venue for publishing impactful research.
Data availability
No datasets were generated or analysed during the current study.
References
Bardal M, Chalmers E (2023) Four attributes of intelligence, a thousand questions. Biol Cybern 117(6):407–409. https://doi.org/10.1007/s00422-023-00979-4
Bermudez-Contreras E (2021) Deep reinforcement learning to study spatial navigation, learning and memory in artificial and biological agents. Biol Cybern 115(2):131–134. https://doi.org/10.1007/s00422-021-00862-0
Bizzarri AR (1991) Convergence properties of a modified Hopfield-Tank model. Biol Cybern 64(4):293–300. https://doi.org/10.1007/BF00199592
Braham R, Hamblen JO (1988) On the behavior of some associative neural networks. Biol Cybern 60(2):145–151. https://doi.org/10.1007/BF00202902
Breston L, Leonardis EJ, Quinn LK, Tolston M, Wiles J, Chiba AA (2021) Convergent cross sorting for estimating dynamic coupling. Sci Rep 11(1):20374. https://doi.org/10.1038/s41598-021-98864-2
Collins LT (2019) The case for emulating insect brains using anatomical wiring diagrams equipped with biophysical models of neuronal activity. Biol Cybern 113(5–6):465–474. https://doi.org/10.1007/s00422-019-00810-z
Daucé E, Quoy M, Doyon B (2002) Resonant spatiotemporal learning in large random recurrent networks. Biol Cybern 87(3):185–198. https://doi.org/10.1007/s00422-002-0315-4
Destexhe A, Sejnowski TJ (2009) The Wilson-Cowan model, 36 years later. Biol Cybern 101(1):1–2. https://doi.org/10.1007/s00422-009-0328-3
Ermentrout GB, Cowan JD (1979) A mathematical theory of visual hallucination patterns. Biol Cybern 34(3):137–150. https://doi.org/10.1007/BF00336965
Gershman SJ (2024) What have we learned about artificial intelligence from studying the brain? Biol Cybern 118(1–2):1–5. https://doi.org/10.1007/s00422-024-00983-2
Ghosh A, Pal NR, Pal SK (1991) Image segmentation using a neural network. Biol Cybern 66(2):151–158. https://doi.org/10.1007/BF00243290
Greve A, Sterratt DC, Donaldson DI, Willshaw DJ, van Rossum MCW (2009) Optimal learning rules for familiarity detection. Biol Cybern 100(1):11–19. https://doi.org/10.1007/s00422-008-0275-4
Hopfield JJ, Tank DW (1985) Neural computation of decisions in optimization problems. Biol Cybern 52(3):141–152. https://doi.org/10.1007/BF00339943
Jayadeva, Bhaumik B (1992) Optimization with neural networks: a recipe for improving convergence and solution quality. Biol Cybern 67(5):445–449. https://doi.org/10.1007/BF00200988
Kamgar-Parsi B, Kamgar-Parsi B (1990) On problem-solving with Hopfield neural networks. Biol Cybern 62(5):415–423. https://doi.org/10.1007/BF00197648
Kamgar-Parsi B, Gualtieri JA, Devaney JE, Kamgar-Parsi B (1990) Clustering with neural networks. Biol Cybern 63(3):201–208. https://doi.org/10.1007/BF00195859
Kawato M, Cortese A (2021) From internal models toward metacognitive AI. Biol Cybern 115(5):415–430. https://doi.org/10.1007/s00422-021-00904-7
Kononenko I (1989) Bayesian neural networks. Biol Cybern 61(5):361–370. https://doi.org/10.1007/BF00200801
Kubat M, Pfurtscheller G, Flotzinger D (1994) AI-based approach to automatic sleep classification. Biol Cybern 70(5):443–448. https://doi.org/10.1007/s004220050047
Kunstmann N, Hillermeier C, Rabus B, Tavan P (1994) An associative memory that can form hypotheses: a phase-coded neural network. Biol Cybern 72(2):119–132. https://doi.org/10.1007/s004220050117
Kunz D (1991) Suboptimum solutions obtained by the Hopfield-Tank neural network algorithm. Biol Cybern 65(2):129–133. https://doi.org/10.1007/BF00202388
Lei G (1990) A neuron model with fluid properties for solving labyrinthian puzzle. Biol Cybern 64(1):61–67. https://doi.org/10.1007/BF00203631
Li Z, Hopfield JJ (1989) Modeling the olfactory bulb and its neural oscillatory processings. Biol Cybern 61(5):379–392. https://doi.org/10.1007/BF00200803
Linhares A (1998) State-space search strategies gleaned from animal behavior: a traveling salesman experiment. Biol Cybern 78(3):167–173. https://doi.org/10.1007/s004220050423
Mandziuk J (1995) Solving the N-Queens problem with a binary Hopfield-type network: synchronous and asynchronous model. Biol Cybern 72(5):439–445. https://doi.org/10.1007/s004220050146
Mandziuk J, Macukow B (1992) A neural network designed to solve the N-Queens problem. Biol Cybern 66(4):375–379. https://doi.org/10.1007/BF00203674
Mitra R, Sapolsky RM (2009) Effects of enrichment predominate over those of chronic stress on fear-related behavior in male rats. Stress 12(4):305–312. https://doi.org/10.1080/10253890802379955
Neelakanta PS, Sudhakar R, Degroff D (1991) Langevin machine: a neural network based on stochastically justifiable sigmoidal function. Biol Cybern 65(5):331–338. https://doi.org/10.1007/BF00216966
Ozawa S, Tsutsumi K, Baba N (1998) An artificial modular neural network and its basic dynamical characteristics. Biol Cybern 78(1):19–36. https://doi.org/10.1007/s004220050409
Porat S (1989) Stability and looping in connectionist models with asymmetric weights. Biol Cybern 60(5):335–344
Ramirez-Moreno DF, Sejnowski TJ (2012) A computational model for the modulation of the prepulse inhibition of the acoustic startle reflex. Biol Cybern 106(3):169–176. https://doi.org/10.1007/s00422-012-0485-7
Samardzija N (1990) Information-storage matrices in neural networks. Biol Cybern 63(2):81–89. https://doi.org/10.1007/BF00203029
Sejnowski TJ (1976a) On global properties of neuronal interaction. Biol Cybern 22(2):85–95. https://doi.org/10.1007/BF00320133
Sejnowski TJ (1976b) On the stochastic dynamics of neuronal interaction. Biol Cybern 22(4):203–211. https://doi.org/10.1007/BF00365086
Sterne P (2012) Efficient and robust associative memory from a generalized Bloom filter. Biol Cybern 106(4–5):271–281. https://doi.org/10.1007/s00422-012-0494-6
Suri RE, Sejnowski TJ (2002) Spike propagation synchronized by temporally asymmetric hebbian learning. Biol Cybern 87(5–6):440–445. https://doi.org/10.1007/s00422-002-0355-9
Trianni V, Dorigo M (2006) Self-organisation and communication in groups of simulated and physical robots. Biol Cybern 95(3):213–231. https://doi.org/10.1007/s00422-006-0080-x
Van den Bout DE, Miller TK (1989) Improving the performance of the Hopfield-Tank neural network through normalization and annealing. Biol Cybern 62(2):129–139. https://doi.org/10.1007/BF00203001
Van Hulle MM (1991) A goal programming network for linear programming. Biol Cybern 65(4):243–252. https://doi.org/10.1007/BF00206222
Wacholder E, Han J, Mann RC (1989) A neural network algorithm for the multiple traveling salesmen problem. Biol Cybern 61(1):11–19
Wilson GV, Pawley GS (1988) On the stability of the traveling salesman problem algorithm of Hopfield and Tank. Biol Cybern 58(1):63–70. https://doi.org/10.1007/BF00363956
Yang ZJ, França FMG (2003) A generalized locomotion CPG architecture based on oscillatory building blocks. Biol Cybern 89(1):34–42. https://doi.org/10.1007/s00422-003-0409-7
Yuille AL (1989) Energy functions for early vision and analog networks. Biol Cybern 61(2):115–123
Zak M (1990) Creative dynamics approach to neural intelligence. Biol Cybern 64(1):15–23. https://doi.org/10.1007/BF00203626
Zheng PS, Zhang JX, Tang WS (2010) Analysis and design of asymmetric Hopfield networks with discrete-time dynamics. Biol Cybern 103(1):79–85. https://doi.org/10.1007/s00422-010-0391-9
Author information
Contributions
All authors contributed equally.
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Communicated by Benjamin Lindner.