My comments begin by noting the primary, foundational contributions to computer ethics (CE) in its first decade or so made in Jim Moor’s 1985 paper “What is Computer Ethics?” I then turn to his still earlier paper, “Are There Decisions Computers Should Never Make?” (1979). As with his 1985 paper, Moor deftly identified in 1979 several central elements that continue to define contemporary discussions of Artificial Intelligence (AI) and Machine Learning (ML) systems in particular—discussions I approach primarily in terms of phronēsis as a form of self-correcting ethical judgment. Third, I note his equally pivotal contributions to theories of privacy as these have unfolded over the past 20 years or so. While by no means a complete summary of Moor’s contributions to these fields, these comments aim to foreground some of his most central and definitive ones from my perspective as a scholar and researcher in these domains.

I then give an example showing how Moor’s overviews of digital information and digital ethics include formulations I have found especially effective pedagogically in my own writing and teaching. Last but not least: as this special issue attests, I think it safe to say that the very great majority of us who were privileged to know and work with Jim Moor were as much taken with and grateful for his simple humanity and collegiality as for his keen insights and contributions to these fields. I try to give a taste of this by way of his decades-long work in crossing the often deep abysses between philosophy and computer science for the sake of (still) urgent dialogue and debate regarding the ethical issues continuously evoked by rapidly changing technologies.

In all these ways, I hope to present an accurate picture of Jim Moor as an exemplary philosopher and colleague, one whose philosophical contributions and personal engagements have deeply shaped and continue to define these domains.

In my experience, the very great majority of my colleagues in CE take Moor’s 1985 paper “What is Computer Ethics?” as a foundational turning point in the development of CE—starting with its proposed definition of CE (pp. 266f.). This definition includes conceptual elements that became central to our subsequent analyses and approaches to CE, starting with Moor’s foregrounding of the “policy vacuum” and “conceptual vacuum” that arise as new technologies introduce often novel issues that cannot be fruitfully resolved by way of extant approaches (p. 266). The upshot is a “conceptual muddle”: one that calls for “an analysis which provides a coherent conceptual framework within which to formulate a policy for action” (ibid.). Hereby, Moor points us first towards the meta-ethical level, with our challenges initially defined within the parameters of such policy and conceptual vacuums—contra the more first-level, “mechanical” effort to address the issue directly, e.g., by applying utilitarian or deontological ethics.

Moor supports his analysis in part by highlighting what he identifies as the “logical malleability” of computing devices (pp. 269f.) as a central characteristic that opens up just these sorts of policy vacuums and conceptual muddles: and, more broadly, by adding to the agenda of CE the need to properly conceive “the nature and impact of computer technology”—a requirement that he rightly foresaw would only increase as the computer revolution continued (p. 270). This is to say: our philosophical engagements must be informed by our best possible understandings of the technologies involved, contra the temptation to “black-box” them and/or leave the technical details solely to “the experts.” Too many examples from early CE demonstrate that doing so falls all too easily into the now well-known traps of technological determinism and/or techno-utopianism. (This also makes clear, as I discuss below, that philosophers’ efforts in these directions are deeply dependent upon our cross-disciplinary dialogues with our colleagues in CS and related fields—i.e., another key way in which Jim Moor was a leading light and pioneer.)

To fully document how far these conceptions and approaches shaped and defined the development of CE over the following 40 years or so would, of course, be its own research project. But I can say with confidence that among the many leading scholars and researchers I’ve been privileged to know and collaborate with over the past two-plus decades, the paper is taken as a watershed publication and a primary reference in a very great deal of the research and scholarship that followed.

By the same token, Moor’s perhaps less well-known paper, “Are There Decisions Computers Should Never Make?” (1979), is likewise a primary contribution to the then-burgeoning debate over the use of AI and related systems to replace human judgment and decision-making. Moor argues here for what he calls an “empirical position” on the question—contra what we can call the more theoretical analyses offered by Hubert Dreyfus (as grounded in phenomenology) and Joseph Weizenbaum (1976), which argue that some forms of decision-making are not computationally tractable. For my part, I have pursued the latter approach, as complemented and reinforced by more contemporary work (perhaps most notably, Zweig, [2019] 2021) and by way of focusing specifically on phronēsis as first of all the virtue (capacity or excellence) of making context-sensitive, self-correcting judgments (e.g., Ess, 2023). To my knowledge, there is now broad agreement in the relevant AI communities that phronēsis (and related capacities, perhaps) is indeed not fully realizable by way of contemporary technologies and approaches (e.g., Bringsjord, 2024). But all of this must be tempered with the proviso that Moor remains fully correct that this will finally be an empirical matter, to be determined as future technologies develop and unfold.

Last, but certainly not least: Moor’s contributions to an emerging global and thereby necessarily pluralistic CE (e.g., Moor, 1985; cf. Ess, 2020) include his shaping a pivotal turn—in collaboration with Herman Tavani—in our conceptions of privacy. It can hardly be overstated how far twentieth-century, especially Anglophone, ethical and legal understandings of privacy were foundationally challenged and ultimately transformed by emerging computing technologies—most especially computer-facilitated forms of networked communication brought into broader view and use by the opening of the internet in the early 1990s. In addition to the rapidly growing fora and channels for what was then called computer-mediated communication (CMC)—e.g., bulletin boards, listservs, chat rooms, MUDs, MOOs, etc., followed by the emergence of social media by ca. 2005, amplified by the mobility revolution ca. 2008, and so on—the increasingly global reach of CMC and related technologies led directly to culture clashes regarding our most foundational conceptions of the self and thereby of privacy (e.g., Hongladarom & Ess, 2007; Lü, 2005). Most simply, in the twentieth-century “West,” privacy was conceived as a largely static, substantive right attaching to the self as an atomistic individual. By contrast, both pre-modern Western and contemporary non-Western societies and cultures stress a relational self, one primarily constituted by the various relationships—familial, social, communal, natural—that make up its existence. As our earliest cross-cultural explorations of privacy concepts made clear, for such relational selves, “privacy” is a highly undesirable, negative, and potentially dangerous condition: to cut off any of our relationships with such others is to diminish what we are as relational selves. As should be easy to see, however, such relational selves are also the grounding presumption and primary “products” of social media in particular—i.e., a selfhood deeply dependent upon connection with and recognition by others, e.g., in the form of “likes,” etc. At the same time, however, such relational selves consistently correlate with hierarchical and non-democratic polities and social structures. (For an overview, including important contrasts within the West/Global North, see Ess, 2019.)

How to navigate towards some middle-ground conception of selfhood and privacy, one that might nonetheless preserve primary conceptions of the self as autonomous and protected by basic rights, including equality, privacy, and so on, in a democratic society ever more shaped and infused by social media and its cousins, thus became a primary conundrum in information and computing ethics from the late 1990s on. For my part, I have foregrounded notions of relational autonomy that conjoin elements of more individual and more relational selves (e.g., Ess, 2014). The history of these developments is again a long and complex one—but it is again one to which Moor contributed foundationally very early on, with his analyses of privacy in the information age as entailing precisely the relational dimensions of our lives. As further developed by Tavani (2013), privacy concerns the context of specific relationships—“spheres of life” (p. 138), such as education, the marketplace, political life, and so on—within which a given bit of information is exchanged. Each context is shaped by its own “norms of appropriateness” that “determine whether a given type of personal information is either appropriate or inappropriate to divulge within a particular context” (ibid.). At the same time, each context is further accompanied by its own “norms of distribution [that] restrict or limit the flow of information within and across contexts” (ibid.; see Ess, 2020, pp. 75f.). Especially as later elaborated in Helen Nissenbaum’s account of privacy as “contextual integrity” (2010), these relational accounts are now central to a range of computer ethics and policy guidelines. Still more broadly, they became increasingly prominent in ethics and philosophy of technology, especially within the approaches of virtue ethics and ethics of care (e.g., Vallor, 2016): by 2020, it was possible to take up “the relational turn” in the philosophy of robotics and AI, for example (as a start, Coeckelbergh, 2020). If I have this right, then once again, Moor was foundational and well ahead of his time in central ways.

These—by my lights, foundational—concepts and approaches make it clear that Jim Moor had an exceptional gift for deftly putting his finger on and clearly articulating a central insight or idea. In my own work and teaching, I have found several of these pedagogically useful as well. As but one example: in my Digital Media Ethics I use Moor’s characterization of digital information as “greased” (1997) to help explain the differences between analogue and digital communication technologies as well as to highlight the novel risks of the latter. As Moor writes: “When information is computerized, it is greased to slide easily and quickly to many ports of call” (1997, p. 27; in Ess, 2020, p. 16). This simple metaphor makes it very easy to understand how digital information thereby leads to novel ethical challenges, from cyberbullying to fraught issues of copying, copyright, and so on (ibid.).

This is to say: there are any number of brilliant colleagues in these domains whose conceptual analyses and argumentation have taken us many miles down the road in the ongoing development of information and computing ethics. But in my experience, not all of them are equally adept at teaching and at making their ideas clear and accessible to non-professional audiences. This is by no means a critique: there are a host of reasons and factors in the contemporary academy that weigh heavily against teaching as a primary aim, whatever one’s gifts and ambitions in these directions might be. It is rather to say that we have all the more reason to be grateful to Jim Moor for contributing to the successful teaching of information and computing ethics along the way.

Last but certainly not least, allow me to comment on Jim Moor as a person and colleague—beginning with what I found to be his consistent and characteristic humility, coupled with a sharp but gentle humor. These virtues were further conjoined with an always lively curiosity and genuine interest in what his colleagues were working on and coming up with. Especially for those of us who, 20+ years ago, were very fresh and very young indeed, all of this made getting to know and work with Jim that much more of a pleasure and a fruitful experience. At the same time, humility is one of several virtues now recognized to be critical to interdisciplinary work (e.g., Balsamo & Mitchem, 2010). And it seems clear that Jim Moor became such a foundational figure in our domains in part because these virtues allowed him to take up so effectively the thorny but essential work of crossing the often nasty, daunting, sometimes seemingly impenetrable disciplinary boundaries that all too frequently hermetically seal off philosophers and ethicists from computer scientists and those more focused on and trained in computational technologies. Here again, Jim was a pioneer in an initially very small community of philosophers and CS folk who gradually came together in the late 1980s and early 1990s via—among other venues—the first Computing and Philosophy (CAP) conferences, which then grew into the International Association for Computing and Philosophy (IACAP). After some 3+ decades of work in these domains, I am deeply convinced that crossing these boundaries so as to foster productive and creative dialogue requires many talents and virtues—starting with just such modesty and humility, including the recognition of how limited one’s own knowledge is vis-à-vis the real experts sitting in the room; then tolerance and patience with one another as we inevitably make every amateur’s mistake and faux pas in trespassing on others’ territories of expertise; and, finally, the courage to do so at all, as the immediate and long-term prices for violating disciplinary specialization can be very high indeed, starting with personal humiliation in front of others and extending to loss of reputation and standing among specialists who tend to assume that crossing boundaries thereby disqualifies you as a “real philosopher,” for example.

Jim’s innumerable examples of showing the rest of us how to do so were inspiring and invaluable for so many of us. This is to say: in addition to his impressive record of pioneering and often foundational work in these fields, he leaves us with an invaluable legacy and example of how to do this essential interdisciplinary work as a human being in critical but respectful dialogue with other human beings, whatever their disciplinary training and allegiances.