Abstract
As emerging technologies support new ways in which people relate, ethical discourse is important to help guide designers of new technologies. This article endeavors to do just that by presenting an ethical analysis and design of technology intended to gather and act upon information on behalf of its users. The article elaborates on socio-technological factors that affect the development of technology to support ethical action. Research and practice implications are outlined.

Notes
Note that our design requires full agreement, not just a majority vote, by system users before proposed actions are executed. We discuss challenges associated with this assumption later.
Social contract theory presents “the view that morality is founded solely on uniform social agreements that serve the best interests of those who make the agreement” (The Internet Encyclopedia of Philosophy 2004).
Information systems are sociotechnical, involving both technology and users. In this article, we limit our examination of system ethics to the technology's impacts on the user community.
In this case, it may be argued that the user community is responsible. We examine the system's ethics with respect to this same community.
References
Berners-Lee T, Hendler J, Lassila O (2001) The semantic web. Scientific American
Boyd RN (1988) How to be a moral realist. In: Sayre-McCord (ed) Essays on moral realism. Cornell University Press, Ithaca
Causal determinism (2003) Stanford encyclopedia of philosophy. http://plato.stanford.edu/entries/determinism-causal/. Accessed June 2004
Dixit A, Skeath S (1999) Games of strategy. W.W. Norton and Company, New York
Gauthier D (1986) Morals by agreement. Clarendon Press, Oxford
Gleick J (1988) Chaos: making a new science. Penguin Books, New York
Groueff S (1967) Manhattan project: the untold story of the making of the atomic bomb. Little Brown, Boston
Held V (1993) Feminist morality: transforming culture, society, and politics. The University of Chicago Press, Chicago
Hume D (1748) Of the original contract. In: Barker (ed) Social contract (1960). Oxford University Press, London
Kohno T, Stubblefield A, Rubin A, Wallach D (2004) Analysis of an electronic voting system. IEEE Symposium on Security and Privacy
Krueger A (1974) The political economy of the rent-seeking society. Am Econ Rev 64:291–303
Locke J (1689) An essay concerning the true original, extent, and end of civil government. In: Barker (ed) Social contract (1960). Oxford University Press, London
Malkiel B (2003) The efficient market hypothesis and its critics. J Econ Perspect 17:59–82
Miller E, Swick R, Brickley D (2004) Resource description framework. World Wide Web Consortium. http://www.w3.org/RDF/. Accessed June 2004
Minsky M (1994) Will robots inherit the earth? Scientific American. Also available at http://web.media.mit.edu/∼minsky/papers/sciam.inherit.html
Moderation system (2003) Wikipedia: the free encyclopedia. http://en.wikipedia.org/wiki/Moderation_system. Accessed June 2004
Murtha R (1998) Open source software (OSS): a new business paradigm? http://is.gseis.ucla.edu/impact/f98/Projects/murtha/#OSSthreat. Accessed June 2004
Nash JF (1951) Non-cooperative games. Ann Math 54:286–295
Pateman C (1979) The problem of political obligation: a critical analysis of liberal theory. Wiley, New York
Presence, Special Issue on Augmented Reality (1997) Also available at http://web.media.mit.edu/∼testarne/TR397/main-tr397.html
Rachels J (ed) (1998) Ethical theory. Oxford University Press, London
Rheingold H (2002) Smart mobs: the next social revolution. Perseus Publishing, New York
Rousseau J (1763) The social contract. In: Barker (ed) Social contract (1960). Oxford University Press, London
Russell P (1983) The global brain. JP Tarcher, Los Angeles. Also available at http://pespmc1.vub.ac.be/GBRAIFAQ.html
Segal U, Sobel J (1999) Tit for tat: foundations of preferences for reciprocity in strategic settings. http://www.ssc.uwo.ca/economics/econref/html/WP99/wp9905.pdf. Accessed June 2004
Social contract (2001) The internet encyclopedia of philosophy. http://www.iep.utm.edu/s/soc-cont.htm. Accessed June 2004
Acknowledgments
The authors are grateful to Anna Dekker for her editorial assistance and to the Social Sciences and Humanities Research Council of Canada for the funding received for this research.
Appendices
Appendix 1: Ethical analysis
1.1 Information and ethics
In this appendix, we discuss ethics with respect to technology that is meant to function as an information gatherer for a finite collection of users. First, we consider the actual information that the system creates. Are there any criteria by which the acquisition of certain information can, by itself, be judged to be an ethical act or not? Intuitively, it seems there may very well be. A historical example is the controversial research performed within the Manhattan Project (Groueff 1967), in which the atomic bomb was developed by Oppenheimer, Fermi, and others. It was not uncommon at the time of the research to condemn its potential destructiveness, and the issue is seldom broached without similar debate today.
To decide about the ethics of information, one can consider alternative situations in which the information is used differently. For instance, suppose the Manhattan Project had failed and the global community were later threatened by a geological situation that could destroy the human race if not attended to. If nuclear research were the only way of making our planet secure, would we reject this research? This raises the question, “Is it proper to enable the possibility of wide-ranging destruction for the sake of self-defence?” In this case a property of the information is challenged, namely its potential destructiveness. We argue, however, that user opportunity to mitigate the output of the system based on such properties can provide sufficient ethical control of the system for those directly involved (Note 3). Thus, technology of the sort we are discussing ought not to be prevented from gathering certain information for its users, as long as they are then able to vote on, and limit, proposed actions.
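The vote-and-limit mechanism described above can be sketched in code. This is a minimal illustration only, assuming a simple veto-style consent check; the names (`User`, `consents_to`, `execute_if_sanctioned`) are ours, not from the article's design.

```python
# Sketch: a proposed action is executed only under unanimous consent
# (full agreement, not a majority vote; see Note 1). The consent rule
# here is a stand-in: each user vetoes actions they deem destructive.
from dataclasses import dataclass

@dataclass
class User:
    name: str

    def consents_to(self, action: str) -> bool:
        # Placeholder for real consent elicitation from this user.
        return "destructive" not in action

def execute_if_sanctioned(action: str, users: list[User]) -> bool:
    """Execute the action only if every user consents; otherwise veto it."""
    if all(user.consents_to(action) for user in users):
        print(f"Executing ethically sanctioned action: {action}")
        return True
    print(f"Action vetoed, not executed: {action}")
    return False

users = [User("a"), User("b"), User("c")]
execute_if_sanctioned("share weather data", users)           # unanimous: executed
execute_if_sanctioned("pursue destructive research", users)  # one veto suffices
```

A single dissenting user is enough to block execution, which is the sense in which the system only performs ethically sanctioned acts.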
1.2 Technology and ethics
Now we expand on how the consent of individuals ensures that the proposed technology is ethically sanctioned. We do this in the context of moral realism. Moral realism is a school of philosophical thought which claims that ultimately correct ethical axioms are genuine features of the world. Richard N. Boyd elaborates, “Moral statements are the sorts of statements which are true or false” (Boyd 1988). This contrasts with the view that any such axioms are to some degree constructed from concerns that are not genuine features of the world, arising from social or psychological contingencies; thus, moral statements are not true or false, but only fallibly believed to be so. In this article, we do not seek to resolve this dispute. We only point out that our ethical argument is consistent with the tenets of moral realism.
In our system design, the technology will only act on information if unanimous consent is achieved. This, though ethically significant, is not by itself a convincing marker of the ethical quality of the system. If the system is to be more than a passive encyclopedia, which is ethically inert, it must assist with achieving appropriate consent. This can be accomplished by applying the system’s capacity for abstraction to all opportunities for consent. It is by constantly working to revise a conception of what it is that achieves this consent that the system can realize a “disposition” that aims to provide beneficial information to its users. This notion of benefit, in turn, is informed by aspects of prior exchanges that contributed to consent (or its lack). Thus, we have a system with a high capacity for abstraction that will only act on information in the case of unanimous user consent, and which seeks to optimize opportunities for information exchange with respect to the consent of the users.
The proposed system only operates in a state of affairs in which ethical consensus is reached, and it actively works to effect such a consensus. In this way, it can be said that the system promotes ethical action: it executes only ethically sanctioned acts. It is important here to point out that we are using the term “ethically sanctioned” rather than “ethical” to describe the events that are enabled by the system. This is because it is possible to judge the act of consenting to a given situation to be unethical (e.g. in the case of the nuclear research example provided above). However, a “failure” of consent on the part of all relevant parties ought not to indicate an ethical limitation for which the technology is responsible (Note 4). Thus, we say that the condition of unanimous consent provides sufficient authorization for the opportunities enabled by the technology, so that the technology is not itself responsible for these opportunities conflicting with a particular ethical viewpoint.
Appendix 2: Limits to the realization of contractual equilibrium
Even without a contractual equilibrium, a Nash equilibrium will exist to the degree that disclosed information is true or verifiable, and believed. That degree determines the certainty with which actions proposed through a system facilitating ethical action can be selected. If misinformation or failures to disclose information are considered to result in inequality, uncertainty could produce suboptimal perceived benefits, affecting strategy selection and value distribution. For example, consider the situation in which two people have guns pointed at each other, and, although only the opponent is aware of it, one person is out of bullets. If it is considered inequitable for the opponent not to reveal this information, a suboptimal value distribution may result if the opponent fires his/her weapon. Ignoring misinformation and failures to disclose information, from a macro perspective a Nash equilibrium would always exist, as no inequality would be possible.
A simple, alternative game-theoretic explanation of the effectiveness of the proposed system can be derived from the Tit for Tat strategy (Segal and Sobel 1999). Should any individual exploit an opportunity to have an action implemented in a manner which is later considered to result in an inequitable outcome (for instance, by acting on undisclosed information), further actions that did not rebalance the value distribution would not be accepted. Since consensus is required, it is impossible not to address each participant’s interests. Tit for Tat is an effective strategy involving co-operation when competing strategies are in equilibrium. However, by definition, perfect information regarding the effect of every action must be obtainable for a contractual equilibrium to exist.
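The Tit for Tat dynamic can be made concrete with a small iterated prisoner's dilemma simulation. The payoff values below are the standard textbook ones, not drawn from this article, and all names are illustrative.

```python
# Iterated prisoner's dilemma: Tit for Tat cooperates on the first round,
# then mirrors the opponent's previous move. Payoffs use the usual
# (3, 0, 5, 1) textbook values; C = cooperate, D = defect.
PAYOFF = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then repeat whatever the opponent did last."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Run the iterated game; each strategy sees only the opponent's history."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # exploitation punished after round 1
print(play(tit_for_tat, tit_for_tat))    # mutual cooperation sustained
```

Against a defector, Tit for Tat is exploited only once and then withholds co-operation, which mirrors the article's point that an inequitable action blocks acceptance of further actions until the value distribution is rebalanced.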
A contractual equilibrium will be approached asymptotically but will never be fully realized. Every action has persisting effects. In an infinite universe, the full effects of actions cannot be accounted for by any finite resources. Without perfect information regarding the effect of every action, a contractual equilibrium cannot exist. Thus, in an infinite universe, it is impossible to remove all of the uncertainty regarding the outcome of an action. The expression “entropy of consciousness” can be used to refer to the inability to know that a decision will result in the preferred outcome, and therefore, whether it is the optimal choice.
We now provide a secondary definition for a contractual equilibrium, which is not subject to an infinite limit, by applying financial market theory. The efficient market hypothesis, which states that markets cannot be outperformed because all available information is already accounted for in stock prices (Malkiel 2003), is only true with respect to the information available to all participants within the market. This defines an achievable contractual equilibrium. It would occur if each individual disclosed his/her experiences and interests to every other individual, so that each individual could evaluate proposed actions on the basis of all the information available to all individuals.
The definition of ethical action established in this article entails consensus. Rent-seeking is defined as action taken to receive value which exceeds price (in economic terms, the act of trying to secure the rights to premiums on rents which are above their respective opportunity costs). That is, when economic policies create something that is to be allocated at less than its value by any sort of government process, resources will be used in an effort to capture the rights to the items of value (Krueger 1974).
Consensus removes rent-seeking, allowing for optimal value distribution. The elimination of rent-seeking behavior would cause prices to accurately reflect the value to be transferred. The value created by a system facilitating ethical action can be considered equal to the value created by removing the rent-seeking behavior which it eliminates, reduced by the overhead required to implement the system.
The actions which are implemented by the described system can be considered resource allocations. As more information becomes available, these allocations will become increasingly efficient. That is, they will become increasingly reflective of the desires of those creating the consensus.
The implementation of a system requiring every individual to manually select a decision for every action is feasible only for extremely narrow applications, a direct result of the overwhelming overhead required. However, a scalable architecture can be developed by exploiting the fact that individuals create machines to automate actions which are valuable yet undesirable to perform manually. Automation reduces overhead, and, as noted above, the system's net value equals the value of the rent-seeking behavior it eliminates minus that overhead. Therefore, given the capacity for sufficient automation, the implementation of such a system will be valuable in any environment exhibiting rent-seeking behavior, and the improvement of currently available technology should be considered.
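The value argument above reduces to simple arithmetic: net value equals the rent-seeking waste eliminated minus the overhead of running the system, and automation is what pushes overhead low enough for the net to turn positive. A minimal sketch, with figures that are entirely made up for illustration:

```python
# Net value of a system facilitating ethical action, per the appendix:
# value of rent-seeking eliminated, minus implementation overhead.
def net_value(rent_seeking_eliminated: float, overhead: float) -> float:
    return rent_seeking_eliminated - overhead

rent_waste = 100.0          # value lost to rent-seeking in some environment
manual_overhead = 120.0     # every individual votes by hand on every action
automated_overhead = 10.0   # machines automate most of the consent process

print(net_value(rent_waste, manual_overhead))     # negative: manual system destroys value
print(net_value(rent_waste, automated_overhead))  # positive: automation makes it worthwhile
```

The sign flip between the two cases is the whole argument: the system is worthwhile exactly when automation drives overhead below the rent-seeking waste it removes.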
Cite this article
Wightman, D.H., Jurkovic, L.G. & Chan, Y.E. Technology to facilitate ethical action: a proposed design. AI & Soc 19, 250–264 (2005). https://doi.org/10.1007/s00146-005-0336-3