A tribute to Richard Ennals
Purveyor of socially useful AI
Founding editorial team, AI & Society.
In this age of ChatGPTs and LLMs, we are reminded of Richard Ennals’ reflections on the age of innocence. He argued (Ennals 2016) that the days of powerful ideas in artificial intelligence had long passed. Back in the 1980s, one definition of AI was ‘those bits of Computer Science which do not quite work yet’. Often it is only in their applications that we discover what does not work, and this requires us to monitor the details of the domains of application. Tentative ideas that enthralled researchers from the 1970s are now embedded in affordable commercial products. The atmosphere has changed, from exploration to exploitation. The technology has now left the laboratory. The genie is out of the bottle. AI research contributed to the development of devices that are now in the hands of billions of users. Powerful ideas cannot sensibly be explored without addressing the contexts of application.
Commenting on the recent Coronavirus pandemic as a disruptive event, a global “Kodak Moment”, Richard Ennals recalled how, back at the 1985 conference on “Artificial Intelligence for Society”, we envisaged such future disruptions and a way forward. Richard was then working at Imperial College and in the UK Alvey Directorate, designing and managing research and development in Advanced IT. He outlined a suggested “Strategic Health Initiative”, which was published in AI For Society (Ennals 1986) and in Star Wars: A Question of Initiative (Ennals 1986). Over the last 39 years there have of course been many major technological advances, yet the principles set out in the paper remain valid for the new context.

The Strategic Health Initiative of 1986 argued for the application of advanced technology to health care provision, drawing on progress in medical science, advanced computing and social administration. It was then held that success would have enormous potential benefits, not only for the health of the nation but also for the economy. Improved health and medical services would provide considerable financial benefits, as would the development of a better-trained workforce. The health of individuals is seen as integral to the health of the nation, and this was enshrined in the establishment, in 1948, of the National Health Service (NHS). Even in those early days of 1986, it was clear to Richard that the advent of artificial intelligence techniques made further advances possible. Artificial intelligence is concerned with the study of human thinking and its modelling in computer programs. We can learn about particular problems by attempting to model them, and the resulting programs can be of use in helping people to solve such problems themselves.

Richard’s commitment to socially useful technologies was rooted in the belief that researchers prefer to work on projects they believe in. Their brains cannot simply be hired for whatever purpose. Their choice of where to work need not be determined by money; they can choose instead to focus on fundamental research efforts that attempt to solve human problems. In this spirit, he again suggested an initiative to tap this supply of idealism, arguing for a strategic focus for the next stage of development of an infant generation of technology, to the benefit of society in general: a Strategic Health Initiative. If we abdicate from participation in the decisions as to how the technology is to be used, we must still accept responsibility. Richard reminds us of the words of Lord Beveridge, whose work laid the foundations of the British Welfare State, including the National Health Service: “The object of government in peace and in war is not the glory of rulers or of races, but the happiness of the common man” (Beveridge Report, 1942).
Caught in the currents of capital, it seems that both experts and lay people have lost control and lack sufficient knowledge of what they are doing. Commenting on the financial and banking collapse of 2008, Ennals cites how the notion of AIs as “Money Machines” allowed financial institutions to be driven by technologies that they did not understand. Ignorance, he says, is not an acceptable excuse for the banking collapse which precipitated recession and austerity. For him, there were fundamental issues of responsibility, both corporate and social. He therefore emphasised the necessity for reflection by finance professionals on their own responsibility, and this goes for Business Schools as well. For some, CSR (corporate social responsibility) is an academic field where the task is to arrive at objective and detached descriptions of what is going on in the corporate world. For others, the identification of problems should be followed by practical interventions, for example with the objective of advancing the cause of “capitalism with a human face”. We may feel that this objective is complicated by the development of “varieties of capitalism”: Liberal Capitalism in the USA and UK is not the same as the European Social Model or the Scandinavian Model.
Reflecting on our reliance on the application of complex technologies across traditional disciplinary boundaries, he posited that for the technologies to be used safely we need to regain control of our technological destiny, and for this to happen we need to understand the risks and uncertainties in the application domains, which range from autonomous robots to designer drugs, including missile systems and drones. As he said in his article “Beyond the Age of Innocence”, as AI technology has left the laboratory, and AI research focuses on creating commercial devices to be used by billions of users, the atmosphere has changed from exploration to exploitation. In the 1980s, when one definition of AI was “those bits of Computer Science which do not quite work yet”, it was often only in their applications that we discovered what did not work, and this required us to monitor the details of the domains of application.

Now, when we are offered the use of new technologies, there is a necessary context of ethics and social responsibility, which may or may not be considered. Often the underlying technology is not fundamentally new, but it may be poorly understood today. There is a sense in which, when we evaluate practical applications, the AI now tends to drop out of consideration as we address the world of business. We are offered tools, and their intellectual ancestry may be seen as less important. As we talk to our mobile phones, obtain online translations, search databases, and receive web answers, we are using long-standing insights, underpinned by new engineering. This suggests that the realignment of intellectual activity has radical implications for technology, education, research and management. We are accustomed to encountering gaps in communication across the disciplines. We might suggest that both the technology and business communities have long been in denial: neither has accepted responsibility for failing to ask the difficult questions, which have been left to others. This represents a challenge for AI & Society. The challenge of how AI can be rescued from commercialism poses a set of contested issues that merit wide discussion. In a world where machine learning is commonplace and applied to “big data”, we do not yet understand common sense intelligence, yet we should worry if systems without common sense are making decisions where common sense is needed. It may be that the challenge lies in exploring the workings of the mind, and not just in building useful pieces of technology. We need a new public debate around AI, with sound arguments located in wider contexts, to counteract earlier confusions arising from the success of technical arguments in creating enthusiasm among entrepreneurs and journalists.
At the time AI & Society was founded, we also launched the AI For Society Club, with the objective of applying the new technologies of AI to the alleviation of problems in society, through group activities. Of course, this required us to understand these problems. Gaps in understanding emerged between the AI specialist and the ordinary citizen, constraining what could be achieved. If we were to relaunch the AI For Society Club today, we could draw on the Encyclopaedia and the Dictionary for reference as we address the social dimension. They spell out example cases and cite specific literature, which could enable AI to be used for practical initiatives in particular contexts. We can envisage a range of action research projects, based on sound descriptions of problems in society, the choice of appropriate tools for interventions, and the capacity to reflect on experience. We might decide that there is now an effective convergence between AI For Society, CSR and Action Research. We can learn by taking action. For many years, philosophers have been interpreting the world. It is time to change it. The potential implications for teaching, professional practice and interventions are radical, as we pave the way for action. There are new challenges for AI & Society. We have survived since 1987. Arguably, we cannot stand on the sidelines: we are actors in the ongoing process of change.
We operate in a world where academic disciplines have been developing ever-stronger institutional structures. In many cases, partly due to resource constraints, the authorities attempt to legislate what we should think, prescribe how we should write and determine how our publications should be regarded. How are we to respond? It is easy for academic researchers to regard themselves as intellectual citizens of a particular specialist field, obliged to comply with norms if their careers are to continue. The acceptance of these constraints can have a distorting effect on academic life, and on the economy and society which build on academic work. We may feel under pressure to sell our souls, to comply with what is required. The more we comply, the harder it is to distinguish ourselves from computers. The literature on Socio-Technical Systems thinking reminds us that technology does not exist in isolation. Systems also involve people and organisations.
As we use language for communication, we must recognise the roles of dialogue, learning and engagement. When we build a physical bridge, we need to understand the terrain at both ends, the strength of the construction, and the expectations of prospective users.
This was not the case with our idealistic knowledge engineers 30–40 years ago, who were prepared to make use of prototypes and to encounter unfamiliar users with initial confidence. There has been a false assumption that we can arrive at a single perfect scientific language, which can be made explicit and used to describe the world. Within that language, it was assumed, the knowledge of particular human experts could be captured and then represented on computers, rendering the personal availability of experts unnecessary. On this basis, having established such a perfect language, it was simply a matter of requiring more people to learn it. We would be able to dispense with the services of many expensive experts and teachers, in a world where education would be delivered by technology. It turns out that while explicit knowledge is susceptible to representation and computerisation, it is harder to deal with implicit knowledge, and tacit knowledge presents major problems. If we rely merely on explicit knowledge, our connections may be only at a superficial level. Organisations are beginning to recognise the importance of the tacit knowledge of their employees, which is not susceptible to conventional analysis.
Citing the novelist E. M. Forster on human communication across barriers of race, social class and sexual orientation, Ennals notes how Forster, using powerful narrative and rich description, concentrated on the challenges for individuals of escaping the restrictions of narrow stereotypes and encountering a broader humanity. Through the theme “Only connect” in the novel Howards End, Forster depicted tragic gaps in understanding. Connections have to be made between individuals, informed by context. It is not enough to recognise the separate individual and societal dimensions. Action is required. Mere compliance with authority is not enough. Maintaining individual human friendships, which Forster regarded as being of prime importance, could mean defying authority. He did not expect to be understood in his lifetime, and the publication of some of his work was delayed until after his death.
A vital translation and mediation role is required if we are to make sense of life in an era of Ubiquitous Technology. In practical terms, this poses challenges, for example, for those developing international joint programmes in social technologies. We must remember that behind the surface technologies there are distinct communities of practice. It is for humans to handle the necessary ambiguity and flexibility, regarding ambiguity as a priceless resource as we seek to enable new dialogue. As we draw on our own resources of knowledge, we must go deeper than the superficial explicit levels and access both implicit and tacit knowledge. In order to connect, and to sustain relationships, we need to make use of vocabulary and concepts which can be grasped and deployed by others. It does not necessarily follow that the same meanings will be shared. On the other hand, sustained interaction and cumulative sharing of meanings can enable connections to be strengthened and institutionalised over time. This process of building social capital requires the incremental development of trust, which derives from engagement in common activities, resulting in shared experience. It cannot be achieved by detachment alone. Differences continue, beyond the connections, even when we may assume that a common environment and language have been established. Sharing the same road does not mean that we all know how to drive or that we all comply with the same laws and regulations. We come from previous backgrounds and are heading in different directions. Our connection may be only momentary and transient, and may not be conscious.
Having retired from a university professorship after over 22 years, Richard often reflected on the world of words, action and research, seen from inside and outside. He argued that we cannot simply stand outside: we are part of the problem. There is no one pre-existing intellectual debate: we each have to make sense of the world, acting and provoking responses. We can look back, reflect and try to make sense of what might seem to have been a disorderly set of interests, rarely characterised by conventional academic disciplines, at least on his part. “At each stage of my working life”, he says, “where I have often been regarded as a tourist in a specialist field, I have needed to learn the language of the particular discourse, how to pass muster as I try to make intelligent contributions, and how to respond appropriately to local rhetoric.” As a student of Moral Sciences at King’s College Cambridge from 1969, Richard was immersed in the later work of Wittgenstein, such as Philosophical Investigations, which he tried to reconcile with earlier work, such as the Tractatus Logico-Philosophicus. With a background as an open scholar in English, he could not easily accept the approach of the logical positivists, who saw language as being used only to describe the world. It was many years later, he says, that he encountered Wittgenstein’s explanation that there had been two volumes of the Tractatus, only one of which could be written down. “My undergraduate essay”, Richard reflects, “was completed about 40 years late”.
In this volume, our authors continue the AI & Society debates on the convergence and divergence of AI and society. For example, in “Negotiating the authenticity of AI: how the discourse on AI rejects human indeterminacy” (this volume), the authors discuss how we understand our human intelligence and condition, and how we negotiate what it means to be human, existentially, culturally, politically, and legally. They argue that this “authenticity negotiation process” has implications for scientific theory, research directions, ethical guidelines, design principles, funding, media attention, and the way people relate to and act upon AI. Their main argument is that AI itself, as well as the products, services, and decisions delivered by AI systems, is negotiated as authentic or inauthentic intelligence. In this negotiation process, AI stakeholders indirectly define and essentialize what being human(like) means, and this process of indirectly defining and essentializing humans eliminates the space for humans to be indeterminate. By eliminating this space and, hence, denying indeterminacy, the existential condition of the human being is jeopardized. Rather than re-creating humanity in AI, the AI discourse is redefining what it means to be human and how humanity is valued and should be treated.
In “Freedom, AI and God: why being dominated by a friendly super-AI might not be so bad” (this volume), the author asks whether a friendly super-AI would be the right kind of agent to permissibly dominate us: one that would try to optimise our freedom. Although not feeling particularly confident that it would be, the author would not begrudge its existence if it were, despite the loss of freedom that might result. Even if our freedom is reduced by the right kind of agent, the agent would seek to optimise our freedom—to give us as much freedom as possible, but not so much that it could not interfere for the right reasons. Imagine, the author says, an alien civilization so technologically advanced that it could interfere in our choices if it wanted to, but also so incredibly benevolent and intelligent that it would not, unless this was the right thing to do. Should we want there to be no such benevolent civilizations out there? Such a desire seems suspect, striking the author ‘as a particularly egregious instance of anthropocentricity’.
In “Narrativity and responsible and transparent AI practices” (this volume), the authors reflect on the relations between narrative, transparency and responsibility. They build an argument that narratives (about AI, about practices, and about those persons implicated in its design, implementation, and deployment) support the kind of knowing and understanding that is the aim of transparency, and, moreover, that such knowledge supports responsibility by informing agents and activating responsibility and ethical acceptability through creating knowledge about something that can and should be responded to. Further, the authors argue for an expansion of the narratives and narrative sources to be considered in questions of AI, understanding that transparency is multi-faceted and found in stories from diverse sources and people.
In “Lessons from the California Gold Rush of 1849: prudence and care before advancing generative AI initiatives within your enterprise” (this volume), the authors shed light on the risks and rewards of Generative AI for businesses, drawing intriguing parallels between the California Gold Rush of 1848 and the contemporary rise of Generative AI technology. The thesis is that both serve as compelling symbols of change, opportunity, and caution. As history has shown, while many rush towards the allure of newfound riches, only a select few truly prosper during times of revolutionary advancement. It was not the gold miners who amassed the most wealth from the Gold Rush, but those who supplied the necessary tools, equipment, and provisions to the gold seekers who reaped consistent profits. Individuals like Levi Strauss, with his durable denim jeans, and Sam Brannan, a merchant capitalizing on mining supplies, epitomized this success. Their prosperity stemmed not from discovering gold themselves but from recognizing and catering to the demands of the prospectors. Their success was rooted not in direct participation in the gold rush, but in their ability to capitalize on the uncertainty and evolving needs brought about by this new venture, providing essential goods and services to those navigating the uncharted territory of gold mining. The authors posit that executives must neither ignore nor become overly fixated on the long-term vision of AI autonomously guiding strategic decisions. While AI has the potential to revolutionize decision-making, it is equally important to recognize the immediate and practical benefits of laying the groundwork for long-term strategy and operational improvement. In this rapidly evolving landscape, it is essential to acknowledge that as AI adoption becomes more widespread, competitors will harness its power, potentially creating new threats. Few firms can focus exclusively on transformation. Therefore, as with any paradigm shift that creates both opportunities and threats, balancing long-term vision with immediate action will remain key to thriving in the age of AI.
In ‘Artificial intelligence and identity: the rise of the statistical individual’ (this volume), the authors discuss how algorithms represent human identity. Algorithms are used across a wide range of societal sectors, such as banking, administration, and healthcare, to make predictions that impact our lives. While the predictions can be incredibly accurate about our present and future behaviour, there is an important question about how these algorithms in fact represent human identity. The authors explore this question and argue that machine learning algorithms represent human identity in terms of what they call the statistical individual. This statistical representation of individuals, they argue, differs significantly from our ordinary conception of human identity, which is tightly intertwined with considerations about biological, psychological, and narrative continuity—as witnessed by our most well-established philosophical views on personal identity. Indeed, algorithmic representations of individuals give no special attention to biological, psychological, and narrative continuity, and instead rely on predictive properties that significantly exceed and diverge from those that we would ordinarily take to be relevant for questions about who we are.
In ‘Machine learning and human learning: a socio-cultural and -material perspective on their relationship and the implications for researching working and learning’ (this volume), the authors discuss the nature of the human–machine relationship. The paper adopts an inter-theoretical socio-cultural and -material perspective on the relationship between human + machine learning to propose a new way to investigate the human + machine assistive assemblages emerging in professional work (e.g. medicine, architecture, design and engineering). The authors point out that the concepts of ‘distributed cognition’ and ‘cultural ecosystems’ constitute a unit of analysis for investigating collective human + machine working and learning. The argument is that: (i) the former offers a way to reveal the cultural constitution and enactment of human + machine cognition and, in the process, the limitations of the computational and connectionist assumptions about learning that underpin, respectively, good old-fashioned AI and deep learning; and (ii) the latter offers a way to identify, when amplified with insights from Socio-Materialism and Cultural-Historical Activity Theory, how ML is further rearranging and reorganising the distributed basis of cognition in assistive assemblages. The paper concludes by outlining a set of conjectures that researchers could use to guide their investigations into the ongoing design and deployment of HL + ML assemblages and the challenges associated with the interaction between HL and ML.
In ‘On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model’ (this volume), the authors explore the distinction between the concepts of prediction and decision and show the different ways in which these two elements influence the final fairness properties of a prediction-based decision system. By pointing to the different roles of the ‘prediction-modeler’ and the ‘decision-maker’, the paper provides insight into ethical and legal requirements. It offers a perspective that shifts the focus from an abstract concept of algorithmic fairness to the concrete, context-dependent nature of algorithmic decision-making, where different actors exist, can have different goals, and may act independently.
In ‘Human presencing: an alternative perspective on human embodiment and its implications for technology’ (this volume), the authors explore how people’s past encounters with others shape their present actions. The paper presents an alternative perspective on human embodiment, in which the re-evoking of the absent can be traced to the intricate interplay of bodily dynamics. The argument is that by situating the phenomenon within distributed, embodied, and dialogic approaches to language and cognition, we overcome the theoretical and methodological challenges involved in perceiving and acting upon what is not perceptually present. This, the paper asserts, has implications for how people act in online learning environments and how human activity shapes the machines we use every day.
In ‘Abundance of words versus poverty of mind: the hidden human costs co-created with LLMs’ (this volume), the paper notes that new technologies have always changed society in unexpected ways, but deep cultural elements are never easily uprooted. Hence, meanings of concepts are hard to convey and explain within their native language, let alone to others. Our new AI tools might help us spread and become shallowly familiar with more concepts, but will LLMs and their derivative seamless language translation and generation gadgets deliver deeper understanding? Or will this process only generate more confirmation bias, wishful thinking, and an illusion of explanatory depth? Will LLMs, rather than liberate, colonise the minds of people, especially those who speak ‘weaker’ languages? As LLMs start to dominate our everyday activities, there might be unrealistic expectations of how people in institutional settings, such as schools, workplaces and governments, need to behave and perform. We might lose our humanity in this process. There is now an ever more urgent need to develop a set of values, ethics and codes of conduct for interacting with and using AI.
I met Richard at the first international conference, AI For Society, which I organised in 1983 at the University of Brighton. The conference was chaired by the eminent philosopher Professor Michael Dummett of Oxford. Richard was a key speaker among the European AI pioneers of the time: Maggie Boden, Alan Bundy, Ajit Narayan, Massimo Negrotti, David Smith, Bob Muller and Satinder Gill. As a founding editor since the journal’s foundation in 1986, Richard was a key player in shaping the evolution of AI & Society. We will fondly remember Richard’s passionate engagement with socially responsible AI debates, ranging across health, quality circles, indigenous heritage, the Brexit Pantomime, Corporate Social Responsibility, and the ‘artificial stupidity’ of nuclear weapons.
Data availability
Not applicable.
References
Ennals R (1986) A way forward for advanced information technology: SHI—a strategic health initiative. Star wars: a question of initiative. Wiley, Chichester, pp 122–135
Ennals R (2016) Beyond the age of innocence. AI Soc 31:127–128