Never before has work in Computational Neuroscience and Artificial Intelligence been recognized as clearly as last month, when John Hopfield and Geoffrey Hinton were awarded the Nobel Prize in Physics, and David Baker, Demis Hassabis and John Jumper were awarded the Nobel Prize in Chemistry. As editors-in-chief of Biological Cybernetics we must point out that some of the seminal work by Hopfield, demonstrating the usefulness of neural networks for solving notoriously difficult optimization problems such as the Travelling Salesman/Salesperson Problem (Hopfield and Tank 1985), and for understanding oscillatory and dynamical firing patterns (Li and Hopfield 1989), was in fact published in this journal. These publications were directly followed up by many authors (Bizzarri 1991; Braham and Hamblen 1988; Breston et al. 2021; Collins 2019; Daucé et al. 2002; Gershman 2024; Ghosh et al. 1991; Greve et al. 2009; Jayadeva and Bhaumik 1992; Kamgarparsi et al. 1990; Kamgarparsi and Kamgarparsi 1990; Kawato and Cortese 2021; Kononenko 1989; Kubat et al. 1994; Kunstmann et al. 1994; Kunz 1991; Lei 1990; Li and Hopfield 1989; Linhares 1998; Mandziuk 1995; Mandziuk and Macukow 1992; Mitra and Sapolsky 2009; Neelakanta et al. 1991; Ozawa et al. 1998; Porat 1989; Samardzija 1990; Sterne 2012; Trianni and Dorigo 2006; Vandenbout and Miller 1989; Vanhulle 1991; Wacholder et al. 1989; Wilson and Pawley 1988; Yang and França 2003; Yuille 1989; Zak 1990; Zheng et al. 2010; Destexhe and Sejnowski 2009; Suri and Sejnowski 2002; Sejnowski 1976a, b; Ermentrout and Cowan 1979; Ramirez-Moreno and Sejnowski 2012). Artificial Intelligence also drew the attention of many of our authors (Bardal and Chalmers 2023; Bermudez-Contreras 2021; Collins 2019; Gershman 2024; Kawato and Cortese 2021; Kubat et al. 1994; Linhares 1998; Porat 1989; Trianni and Dorigo 2006; Zak 1990).

Nobel recognition is a double-edged sword. While it validates a large body of established work, there is the risk that it could decrease the motivation for researchers to do more. On the other hand, it may attract the attention of the scientific community to contributions made so far and perhaps encourage others to leverage this work and use it in their future research. We stand firmly convinced that more can and will be done. We outline below a few directions we believe show great promise, and we, of course, would welcome such contributions to our journal.

1- The data centers and computing facilities providing the physical infrastructure for modern machine learning and artificial intelligence require vast amounts of energy, water, and other resources that may prove unsustainable. In contrast, human and animal intelligence require only the resources of a living individual. What lessons remain to be learned about resource-efficient computation in living organisms that can inspire practical innovations leading to sustainable industrial computing? Furthermore, what are the impacts of climate change (e.g. chronic changes in temperature or humidity) on the neural mechanisms of perception, action or cognition? Can they be modeled and predicted?

2- Mimicking certain neural computations with artificial neural networks has been tremendously successful over the last 15 years. However, what about incorporating synapses more faithfully, in particular their dynamics? In a biological neural network, firing at high frequencies becomes ineffective if synapses are depressed and do not transmit information. Moreover, memory is stored in synapses, not neurons. Synaptic transmission has far richer and more diverse time scales than neural or dendritic computations, and the potential of neuromodulation to influence brain computation (e.g. via dopamine or norepinephrine) is at least as important at the synaptic level as it is at the neural level. Neuromodulation is known to endow computations with flexibility and complexity in ways we are just beginning to understand. And there are many more synapses than neurons or glial cells. Could the next step beyond neural computation and artificial neural networks be synaptic computation and artificial synaptic networks? What would be the impact of synaptic computations on modern brain-inspired AI algorithms, which by and large reduce synapses to a single number?
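As one minimal illustration of why synaptic dynamics matter, the sketch below assumes a textbook Tsodyks-Markram-style short-term depression model with a fixed utilization fraction and no facilitation; all parameter values are illustrative, not taken from any particular study. It shows how repeated high-frequency spikes deplete synaptic resources, so that later spikes transmit far less than the first.

```python
import numpy as np

def depressing_synapse(spike_times, tau_rec=0.8, use_frac=0.5, dt=1e-3, t_max=2.0):
    """Tsodyks-Markram-style short-term depression (minimal sketch).

    x is the fraction of available synaptic resources (1 = fully recovered).
    Each presynaptic spike releases use_frac * x as the effective drive,
    and resources recover toward 1 with time constant tau_rec (seconds)."""
    steps = int(t_max / dt)
    spikes = np.zeros(steps, dtype=bool)
    spikes[(np.asarray(spike_times) / dt).astype(int)] = True
    x = 1.0
    efficacy = np.zeros(steps)          # amount transmitted per spike
    for t in range(steps):
        x += dt * (1.0 - x) / tau_rec   # slow recovery of resources
        if spikes[t]:
            efficacy[t] = use_frac * x  # transmitted synaptic drive
            x -= use_frac * x           # resources consumed by the spike
    return efficacy

# A 100 Hz spike train: successive spikes transmit progressively less,
# so high-frequency firing becomes largely ineffective.
high_rate = depressing_synapse(spike_times=np.arange(0.1, 0.5, 0.01))
print(np.round(high_rate[high_rate > 0][:5], 3))
```

Even this one-variable model already gives each synapse a state and a time scale of its own, which is precisely what a single static weight in a standard artificial network cannot express.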

3- Much effort over the past few decades has gone towards understanding cognitive processes such as perception, decision-making or spatial navigation. But what about emotion? While emotion plays an undeniable role in adaptation, homeostasis, and efficiency, investigations in robots and machines have typically treated it as an afterthought, bolting 'emotional modules' or 'mechanisms' onto classical cognitive architectures as add-ons. There are no dedicated emotional centers in the brain that one can lesion or stimulate to causally prevent or trigger a specific emotion, only centers that can bias towards them. We argue that it is time to rethink emotional processing from the ground up and build a new generation of neurally-inspired perceptual, decision-making and navigational algorithms that use emotional processing intrinsically.

4- One of the major challenges facing AI is the (often accurate) perception that it is a black box. Algorithms are relatively easy to implement but their outputs, because they are based on massive computing power and massive training datasets, are too complex for a single human being to understand. For this reason, AI has faced and may continue to face skepticism from users and developers alike. It may be time to redesign the current approaches to intrinsically include explainability and trustworthiness.

5- Many current AI tools and algorithms such as transformers, deep learning networks or reinforcement learning approaches are loosely inspired by neurobiology. They are however largely simplified. For example, transformers are generally feed-forward networks, where learning only involves single synaptic weights between neurons. The brain has found solutions (perhaps suboptimal ones, but solutions nonetheless) to all the major questions AI is trying to answer. Can AI algorithms and architectures use the deeper insights that are still being obtained from the brains of insects or mammals? Isn't there more AI can do, or do better, if it were more closely inspired by the brain and its many forms of computation? Reciprocally, many modern neural network architectures were obtained seemingly in an ad hoc fashion, because 'they work better that way' (e.g. faster convergence, greater robustness, closer to human performance). This essentially trial-and-error engineering approach has produced very successful algorithms, such as 'context' processing and 'attention' in transformers, and the use of a reinforcement variable in reinforcement learning. Could these algorithms in fact be those used by the brain? For example, could dopamine in fact be one of the reinforcement variables postulated by reinforcement learning algorithms? Could the principles of engineering be similar to those of evolution, albeit operating on a different time scale? Should experimentalists pay more attention to the details of AI algorithms?
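To make the 'reinforcement variable' concrete, here is a minimal tabular TD(0) sketch, assuming a fixed sequence of states with a reward delivered only at the end; the learning rate, discount factor and state layout are illustrative. The prediction error it computes on each step is the quantity that phasic dopamine responses have been hypothesized to resemble.

```python
import numpy as np

def td_prediction_errors(rewards, gamma=0.95, alpha=0.1, n_trials=200):
    """Tabular TD(0) value learning over a fixed chain of states.

    delta = r + gamma * V(next) - V(current) is the reward prediction error,
    the 'reinforcement variable' that phasic dopamine is hypothesized to echo."""
    n_states = len(rewards)
    V = np.zeros(n_states)
    deltas = []
    for _ in range(n_trials):
        trial_deltas = []
        for s in range(n_states):
            v_next = V[s + 1] if s + 1 < n_states else 0.0
            delta = rewards[s] + gamma * v_next - V[s]   # prediction error
            V[s] += alpha * delta                        # value update
            trial_deltas.append(delta)
        deltas.append(trial_deltas)
    return V, np.array(deltas)

# Reward only at the last state: early in learning the error peaks at reward
# delivery; once the reward is predicted, the error there largely vanishes,
# mirroring how dopamine responses to fully predicted rewards diminish.
rewards = np.zeros(10); rewards[-1] = 1.0
V, deltas = td_prediction_errors(rewards)
print(np.round(deltas[0], 2))    # first trial: error at reward delivery
print(np.round(deltas[-1], 2))   # late trial: errors near zero
```

Whether biological circuits implement anything this literal is exactly the kind of question that closer exchange between experimentalists and AI practitioners could settle.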

6- The brain is a massively parallel device, tolerating vast amounts of apparent noise. Yet, its computations can be exquisitely precise and reliable (as in the auditory system, for example). It is clear that noise and stochasticity are features of the system, not merely bugs to be compensated for by redundancy. A fundamental rethinking of computational and AI models may be called for, and new AI architectures built to make use of stochasticity and to leverage its benefits. Conversely, experimentalists may gain more insights into their data if they minimized the amount of averaging and smoothing, and focused on those 'outliers' as a possible source of discovery. New computational methods in AI (e.g. stochastic AI: AI algorithms with intrinsic noise) and neural data analyses (e.g. systematic trial-by-trial analysis methods) may be required.
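As one possible reading of 'stochastic AI', the sketch below assumes a toy feed-forward layer with Gaussian noise injected on every pass, at inference as well as during training, together with a trial-by-trial readout in place of a single averaged response. The layer sizes, noise level and function names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_linear(x, W, b, noise_std=0.1):
    """A layer whose computation is intrinsically stochastic: noise is part
    of every forward pass, so downstream computation must exploit variability
    rather than merely average it away."""
    return np.maximum(0.0, x @ W + b + noise_std * rng.standard_normal(b.shape))

def trial_by_trial(x, W, b, n_trials=100):
    """Keep the full distribution of per-trial responses, the analogue of
    un-averaged, trial-by-trial neural data, instead of only the mean."""
    responses = np.stack([noisy_linear(x, W, b) for _ in range(n_trials)])
    return responses.mean(axis=0), responses.std(axis=0), responses

# Hypothetical toy dimensions, for illustration only.
W = rng.standard_normal((5, 3)); b = np.zeros(3); x = rng.standard_normal(5)
mean_r, std_r, all_r = trial_by_trial(x, W, b)
print(mean_r, std_r)   # trial-averaged response and its trial-to-trial variability
```

The point of such an architecture would not be the noise itself, but the requirement that the rest of the system be designed around it, much as the brain appears to be.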

We stand firmly convinced that the recognition of the outstanding contributions of the Nobel laureates, and of many others not cited here, is the beginning, not the end, of a long journey. The opportunity for fundamental and paradigm-shifting advancements in Computational Neuroscience and Artificial Intelligence, progressing together, hand-in-hand, is clear. This synergy will undoubtedly revolutionize other fields such as Robotics, Computer Science, or Biology. This is an exciting time, and Biological Cybernetics stands ready to support this progress by providing a powerful venue to publish impactful research.