DOI: 10.1145/1141277.1141341
Article

A dialogue on responsibility, moral agency, and IT systems

Published: 23 April 2006

ABSTRACT

The dialogue that follows was written to express some of our ideas and remaining questions about IT systems, moral agency, and responsibility. We seem to have made progress on some of these issues, but we have not come close to agreement on several important points. While the issues are becoming more clearly drawn, what we have discovered so far is closer to a web of connecting ideas than to formal claims or final conclusions.

References

  1. Desharnais, P., Lu, J., and Radhakrishnan, T. (2002). Exploring agent support at the user interface in e-commerce applications. International Journal on Digital Libraries, Vol. 3, No. 4, 284--290.
  2. Dijkstra, E. (1975). Guarded commands, nondeterminacy and formal derivation of programs. Communications of the ACM, Vol. 18, No. 8, 453--457.
  3. Floridi, L., and Sanders, J. (2004). On the morality of artificial agents. Minds and Machines, Vol. 14, No. 3, 349--379.
  4. Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, Vol. 6, No. 3, 175--183.
  5. Moor, J. (1985). What is computer ethics? Metaphilosophy, Vol. 16, No. 4, 266--279.
  6. Russo, M. Herman Hollerith: the world's first statistical engineer. http://www.history.rochester.edu/steam/hollerith/loom.htm, accessed September 8, 2005.
  7. Smith, L. (1994). B. F. Skinner. PROSPECTS, Vol. XXIV, No. 3/4, 519--532. http://www.ibe.unesco.org/International/Publications/Thinkers/ThinkersPdf/skinnere.PDF, accessed September 8, 2005.


      Reviews

      Karen A Mather

      "A bad workman always blames his tools," warns the age-old adage on responsibility. But no one had heard of softbots in those days! Intelligent software agents, such as Web search programs and land demining robots, are tools that decide for themselves what they should do. There are now generously funded projects well underway (for example, the New Ties information society technologies (IST) project) to construct software agents that, for instance, are capable of developing their own culture and language, and procreating. Note the phrase "what they should do" above, because this highlights the "ought-ness" that constitutes morality, or ethicality. Ethicists, and especially computer ethicists such as Johnson and Miller, are currently puzzled by the question of whether or not agents embodying artificial intelligence (AI) could be judged guilty of immoral behavior. This paper sees it as mistaken to try to assign morality to computer systems (AI or otherwise), on the basis that responsibility for any damage caused by a human-made tool can be traced back to the human maker. The authors specifically oppose the philosophical position of Floridi and Sanders, whose cybernetics-influenced view is that humans are not alone in being able to create evil; thus, they are not alone in being seen as moral agents. This paper sets out the debate as a dialogue between two people who use plain English to discuss these abstract ideas. Readers interested in computer ethics and the philosophical questions that spring from artificial intelligence will enjoy following this dialogue and the paths that it takes. Online Computing Reviews Service

      John M. Artz

Written in dialogue fashion, this paper is an overview of some key elements of the discussion of computer systems as moral agents. Because of its unusual format, it must be critiqued in terms of both format and content. As to format, this is a somewhat novel approach to exploring tricky issues in the debate about machines and moral agency. The dialogue, all too rarely used in academic discourse today, has its roots in Plato, who remains the unchallenged master of the form. Yet the dialogue format provides several important benefits. First, the author can bring out multiple perspectives on an issue without having to resolve those perspectives or come to a specific conclusion. This is good for evolving ideas and provides a format for discussion, even if the author does not reach a definitive conclusion. Second, a dialogue provides a foundation for future discussions, rather than the attack-counterattack model of more traditional persuasive arguments. Finally, the dialogue format is ideal for classroom discussions. Students are often reluctant to challenge a position that is supported not only by evidence and arguments, but also by the weighty endorsement of publication. A dialogue, on the other hand, presents two or more sides of an argument, allowing students to pick out the points with which they agree or disagree. From that perspective, the dialogue format is used far too infrequently, and its use here makes this paper both novel and more valuable. The content is a little less impressive. The interlocutors exchange fairly traditional arguments regarding machines and moral agency: computers are different from other technologies because they require instructions and have unexpected outcomes; computers cannot be moral agents because they lack free will; computers have moral consequences and, hence, cannot be considered morally neutral.
It is difficult to critique these claims because the interlocutors are, in effect, critiquing each other. So we come down to this question: does this paper add anything to the debate? Clearly computer systems have moral consequences, and clearly somebody must be held accountable for those consequences. One of the interlocutors observes that it is essential to understand where the responsibility for IT systems lies. Whatever your opinion on the moral agency of machines, you cannot disagree with this observation. Online Computing Reviews Service

Published in

SAC '06: Proceedings of the 2006 ACM Symposium on Applied Computing
April 2006, 1967 pages
ISBN: 1595931082
DOI: 10.1145/1141277

        Copyright © 2006 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

        Publisher

        Association for Computing Machinery

        New York, NY, United States



Acceptance Rates

Overall acceptance rate: 1,650 of 6,669 submissions, 25%
