Around the turn of this century a number of emerging technologies were in the news, raising some potentially significant ethical questions. Given that they were still emerging, they as yet had no, or very few, impacts, so it was not obvious how best to assess them ethically. Computer ethics was thriving, but most of that work involved examination of applications and their consequences, and the emerging technologies by and large had no significant consequences at that time. This is the environment in which Jim Moor published his paper “Why we need better ethics for emerging technologies” (Moor, 2005), focussing on nanotechnology, neurotechnology and genetic technology. His first foray into this issue, however, came over a quarter of a century earlier with the publication of “Are there decisions that computers should never make?” (Moor, 1979). Back then, computers were in the process of becoming more pervasive but AI was only emerging. A few years earlier, Joseph Weizenbaum had come out strongly against computers making decisions in certain contexts (Weizenbaum, 1976). Moor argued that if computers made better decisions than humans, then, on moral grounds, they should be used to make them. This is an early example of his concern with ethics for emerging technologies, a concern to which he returned in “What is computer ethics?” (Moor, 1985), where he introduced the concept of policy vacuums. Because computers were new and different in certain ways from what went before, their use created issues that had not arisen previously. An example is a computer program, which does not, or did not then, fit easily into any model of intellectual property.

In his 2005 paper Moor makes three suggestions for a better ethics for emerging technology. These built on a discussion of how best to approach the ethics of nanotechnology that he and I co-authored. We argued there that, given the undeveloped nature of nanotechnology, ethical examination was needed in the very early stages of development, including consideration of the precautionary principle, during development, and later when the technology was in use (Weckert & Moor, 2006). This was the basis of his first suggestion, that ethics should be dynamic: it should be an ongoing process before, during and after the technological development. Second, there should be close collaboration between the researchers and developers on the one hand, and ethicists and social scientists on the other. Ethicists need to have some understanding of the science and technology, and the researchers and developers need to be aware of, and confront, the possible consequences of the technology. Finally, ethical analyses should be more sophisticated. It is not enough merely to apply abstract ethical theories to problems; more actual ethical guidance should be given in the analyses.

Moor was undoubtedly onto something important when he argued that we need better ethics for emerging technologies. I want to extend his argument here to include potential environmental consequences and a more serious questioning of our underlying values, both of which cohere with his suggestions just mentioned. We will focus on AI because it is a topical technology and was one of his main interests.

Dynamic ethics, his first suggestion, involves looking at the issues before, during and after development. Given the current state of the natural environment, this should be done in a broader context than he envisaged: the context of technology ethics should include the natural environment (a suggestion made in Lemmens et al., 2017). It follows then that both scientists involved with environmental issues and environmental ethicists should contribute to the ethics of AI. Dynamic ethics, especially in the very early stages of development, provides a space for assessing, or reassessing, our core values. For example, one core value for Moor is autonomy, and autonomy implies making decisions and taking responsibility. What kinds of decisions should we allow AI to make for us? How does that affect our responsibility for those decisions? Is our autonomy so important after all when it is in competition with other goods?

Consider the natural environment. In indigenous societies, technologies came directly from nature (stone axes and so on), and currently, although less obviously, the material out of which our technologies are made, including AI technologies, also ultimately comes from the earth and what it produces. Given its power to analyse large amounts of data, AI has the potential to help solve some of our environmental problems, or so it is claimed. AI advances, it is argued, “will support the understanding of climate change and the modelling of its possible impacts,” “support low-carbon energy systems with high integration of renewable energy and energy efficiency,” “help improve the health of ecosystems,” “prevent and significantly reduce marine pollution,” and help combat “desertification and restoring degraded land and soil” (Vinuesa et al., 2020, 4).

AI can benefit farming (Lee, 2024). In the growing of crops, weeds are often controlled using herbicides. Sensors and cameras mounted on the spraying equipment can dramatically reduce herbicide use by spraying only the individual weeds rather than the whole field. This is obviously beneficial. Reducing the use of herbicidal chemicals saves money but, more importantly, is better for the environment. Animal husbandry too can benefit. In dairying, sensing devices and cameras are attached to cows to constantly monitor their health, their feeding requirements, when they need milking and when they will give birth. This leads to improved animal welfare, which again is obviously good.

AI is dependent on the natural environment. Without the earth’s resources there would be no computers or other devices on which AI relies. This has important implications not just for the environment itself, but also for us humans. Whatever else we might be, we are mammals and depend on that environment, not only for our survival but also for our well-being. The natural environment links us to AI, something that is easily overlooked.

The environmental impacts of AI, and in fact of all electronic technologies, are becoming serious concerns, especially energy use (de Vries, 2023), electronic waste (e-waste), mining of the required metals, and water requirements (Gupta, 2024). It was reported that in 2022 some 62 million tonnes of e-waste were generated (WHO, 2024). This leads to contamination of soil, water and air, which is detrimental to human health. Mining is not generally environmentally friendly and also raises issues of justice, a key component of Moor’s Just Consequentialism (Moor, 1999). Lithium mining in Chile, for example, is causing problems for Indigenous communities there (Greenfield, 2022). Similar problems are arising in Australia, where rare earth metals lie on sites of significance to Indigenous people (Kemp, 2024). These justice questions could probably be overcome but most likely will not be, because of power inequality and the profit motive. Clean energy should certainly be pursued, but greater effort needs to be placed on reducing energy use. Unfortunately, it is unclear whether the benefits of AI technology outweigh the environmental costs.

Turning to the agricultural uses of AI, given the benefits, what is there to worry about? In the weed spraying case, there are environmental benefits from using fewer herbicidal chemicals. Given the controversies about the effects of these chemicals on human health, using less could have medical advantages too. But AI here is being used to improve large-scale monoculture, something which is itself challenged on environmental grounds as being harmful to natural flora and fauna. So while on the one hand this use of AI benefits the environment, on the other it helps perpetuate an environmentally dubious system. AI does not need to be used in this way, of course. It could be used to assist small-scale family farmers.

In the dairying case some environmental damage might be permissible, given the importance of animal welfare. A better way of caring for cows, though, might be to have smaller dairy farms and more farmers to look after the animals. Here too AI is used to enable large-scale farming. Large-scale farming, whether it be dairying, crop growing or any other kind, has a significant impact on rural communities. With fewer people needed in agriculture, communities become depleted and then lose medical, educational and other resources. This may not matter, because people are free to move to the cities if they so desire, but it does make us think about what sort of society we want. Do we want vast depopulated areas of productive farming country? While AI in agriculture is promoted as a very positive thing for the environment and animal welfare, there are many underlying questions to be asked and issues to be examined before we become too excited (see Coghlan & Parker, 2023).

Earlier we mentioned the environmental problems associated with the manufacture and disposal of the electronic components required for AI, that is, the sensors, cameras and so on, and with the energy required to train AI systems. These all carry large environmental costs. When we look at the larger picture, then, the environmental benefits of AI in agriculture may not be so impressive. In fact, the researchers who outlined some of the environmental benefits also argue for more research on the ethics of AI:

Therefore, novel methodologies are required to ensure that the impact of new technologies are assessed from the points of view of efficiency, ethics, and sustainability, prior to launching large-scale AI deployments (Vinuesa et al., 2020, 5).

What are the implications of this discussion for the ethics of technology? The obvious one is that we must broaden ethical consideration of technology to include environmental issues. This is clear in the environmental and farming areas, but it is also important, if less obvious, in other areas, for example monitoring and surveillance. Not only are privacy issues important here, but so are questions about the energy use, e-waste and so on of the devices used. All technologies create environmental problems. What sort of natural environment do we need in order to lead good lives? This question lurks in the background, leading to other questions that we should be asking in the ethics of AI. Should we look at the earth as Mother Earth, as many Indigenous peoples do, rather than as a resource to be exploited? Should we place more emphasis on ourselves as part of nature rather than as above it?

What is a good life? How does technology make life better? These are core value questions that Moor alludes to in his 1999 paper on Just Consequentialism, where he writes that “from an ethical point of view we seek computing policies that at least protect, if not promote, human flourishing” (Moor, 1999, p. 66). To what extent does AI protect, or promote, human flourishing? What are the benefits of AI, or of technology in general? We obviously need some technology in order to flourish, but it is less obvious that technological innovations in themselves make us happier or more satisfied with life. The progress paradox illustrates this: some new technology satisfies us for a time, but then we become dissatisfied again and want something new. A consumerist society welcomes this, and it might be harmless were it not for the environmental cost in energy use, e-waste and so on. Focusing on AI, much more consideration needs to be given to which AI technologies contribute to human flourishing once the environmental costs are taken into account. AI technologies that reduce suffering, energy use, pollution and waste, for example, are important, but what about those that merely make life easier, possibly in relatively trivial ways? Not everything that makes life easier contributes to human happiness or flourishing. These costs, while having damaging effects now, are also a factor in intergenerational justice, something that needs consideration in Moor’s Just Consequentialism. Are we simply pushing these costs onto future generations for our own benefit?

This leads to further questions about human flourishing. It is important to remember that whatever else we are, we are mammals so in many ways not too different from non-human creatures. Not all natural environments are conducive to our flourishing and the same is true for the social environment. We have evolved as gregarious mammals and most of us need meaningful interaction with other humans to lead fulfilling lives. We are spiritual beings, at least in the weak sense that we recognise the importance of some non-material values, for example love, friendship and beauty. It is not clear that all AI applications are helpful in either sphere.

In a dynamic ethics, the early stages of development are an ideal time to ask important questions about how the new technology is likely to affect these environments, and about what is ultimately important for us to flourish. These thoughts are not new. In 2017 there was a call for a “terrestrial turn” in the philosophy of technology, which points in a direction that would broaden the way we approach the ethics of technology. The argument there was that

[w]e need to start thinking about the … general conception of technology and approach to technological innovation, its methodologies and research orientations, and its frameworks for understanding both the human-technology relation and the nature-technology relation (Lemmens et al., 2017, p. 117).

A better ethics for AI, and for emerging technologies in general, would be based on the kind of beings that we are and on what is required for human flourishing. Moor’s dynamic ethics provides the space for this. Prior to the technology’s development, there is time to consider how, if at all, it will contribute to flourishing, what harms might result from certain uses, whether the precautionary principle should be applied, and so on. This examination continues if development proceeds, and should involve technologists, scientists, social scientists, philosophers and other humanities experts. As well as utilitarian considerations, issues of justice must also play a central role in the assessments. Just Consequentialism, embedded in the framework of a better ethics for emerging technologies, can provide the ethical space. Environmental issues, especially energy and water use, e-waste and mining, along with consideration of our core values, must be included as essential in any ethical examination of AI and other technologies. Moor, in his various papers, has given us a viable theory with which to extend the ethics of emerging technologies.