
Technology to facilitate ethical action: a proposed design

Open Forum

AI & SOCIETY

Abstract

As emerging technologies support new ways in which people relate, ethical discourse is important to help guide designers of new technologies. This article endeavors to do just that by presenting an ethical analysis and design of technology intended to gather and act upon information on behalf of its users. The article elaborates on socio-technological factors that affect the development of technology to support ethical action. Research and practice implications are outlined.


Notes

  1. Note that our design requires full agreement, not just a majority vote, by system users for proposed actions to be executed (see the sketch following these notes). We discuss challenges associated with this assumption later.

  2. Social contract theory presents “the view that morality is founded solely on uniform social agreements that serve the best interests of those who make the agreement” (The Internet Encyclopedia of Philosophy 2004).

  3. Information systems are sociotechnical, involving both technology and users. In this article, we limit our examination of system ethics to the technology’s impacts on the user community.

  4. In this case, it may be argued that the user community is responsible. We examine the system’s ethics with respect to this very same community.
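To make the unanimity requirement in note 1 concrete, a minimal sketch follows. The function and variable names are illustrative only and do not describe an actual implementation of the proposed design.

```python
# Minimal sketch of the unanimity requirement in note 1. The names
# (action_approved, votes) are illustrative, not from the system design.

def action_approved(votes: dict[str, bool]) -> bool:
    """Execute a proposed action only if every user consents.

    A single dissenting or missing vote blocks execution, unlike
    majority voting, where more than half of the votes suffice.
    """
    return bool(votes) and all(votes.values())

# One dissenter is enough to block the action:
assert not action_approved({"a": True, "b": True, "c": False})
assert action_approved({"a": True, "b": True})
```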


Acknowledgments

The authors are grateful to Anna Dekker for her editorial assistance and to the Social Sciences and Humanities Research Council of Canada for the funding received for this research.

Author information


Corresponding author

Correspondence to Yolande E. Chan.

Appendices

Appendix 1: Ethical analysis

1.1 Information and ethics

In this appendix, we discuss ethics with respect to technology that is meant to function as an information gatherer for a finite collection of users. First, we consider the actual information that the system creates. Are there any criteria by which the acquisition of certain information can, by itself, be judged to be an ethical act or not? Intuitively, it seems there may very well be. A historical example is the controversial research performed within the Manhattan Project (Groueff 1967), in which the atomic bomb was developed by Oppenheimer, Fermi, and other scientists. It was not uncommon at the time of the research to condemn its potential destructiveness, and the issue is seldom broached without similar debate today.

To decide about the ethics of information, one can consider alternative situations in which the information is used differently. For instance, suppose the Manhattan Project had failed and the global community were later threatened by a geological situation that could destroy the human race if not attended to. If nuclear research were the only way of making our planet secure, would we reject this research? This raises the question, “Is it proper to enable the possibility of wide-ranging destruction for the sake of self-defence?” In this case a property of the information is challenged, namely its potential destructiveness. We argue, however, that the users’ opportunity to mitigate the output of the system based on such properties can provide sufficient ethical control of the system for those directly involved Footnote 3. Thus, technology of the sort we are discussing ought not to be prevented from gathering certain information for its users, as long as they are then able to vote on, and limit, proposed actions.

1.2 Technology and ethics

Now we expand on how the consent of individuals ensures that the proposed technology is ethically sanctioned. We do this in the context of moral realism. Moral realism is a school of philosophical thought which claims that ultimately correct ethical axioms are genuine features of the world. Richard N. Boyd elaborates, “Moral statements are the sorts of statements which are true or false” (Boyd 1988). This contrasts with the view that any such axioms are to some degree constructed from concerns that are not genuine features of the world, arising from social or psychological contingencies; thus, moral statements are not true or false, but only fallibly believed to be so. In this article, we do not seek to resolve this dispute. We only point out that our ethical argument is consistent with the tenets of moral realism.

In our system design, the technology will only act on information if unanimous consent is achieved. This, though ethically significant, is not by itself a sufficient indicator of the ethical quality of the system. If the system is to be more than a passive encyclopedia, which is ethically inert, it must assist with achieving appropriate consent. This can be accomplished by applying the system’s capacity for abstraction to all opportunities for consent. By constantly working to revise its conception of what achieves this consent, the system can realize a “disposition” that aims to provide beneficial information to its users. This notion of benefit, in turn, is informed by aspects of prior exchanges that contributed to consent (or its lack). Thus, we have a system with a high capacity for abstraction that will only act on information given unanimous user consent, and which seeks to optimize opportunities for information exchange with respect to the consent of the users.
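The consent-and-revision cycle described above can be sketched as follows. This is a minimal illustration under our own assumptions; names such as ConsentGatedSystem and benefit_scores are invented for exposition and are not the article’s implementation.

```python
# Illustrative sketch of the consent-and-revision cycle described above.
# Class and attribute names are invented for exposition.

from collections import defaultdict

class ConsentGatedSystem:
    def __init__(self, users):
        self.users = users
        # A crude learned "disposition": scores for features of proposals
        # that have (or have not) won unanimous consent in the past.
        self.benefit_scores = defaultdict(float)

    def propose(self, action, features, get_vote):
        votes = {u: get_vote(u, action) for u in self.users}
        consented = all(votes.values())
        # Revise the conception of benefit from this exchange: features
        # of consented proposals are reinforced, others penalized.
        for f in features:
            self.benefit_scores[f] += 1.0 if consented else -1.0
        if consented:
            self.execute(action)          # act only under unanimity
        return consented

    def execute(self, action):
        print(f"executing: {action}")

# Usage: everyone approves sharing a summary, so the action runs and the
# "summary" feature is reinforced in the learned disposition.
system = ConsentGatedSystem(["alice", "bob"])
system.propose("share summary", ["summary"], lambda user, action: True)
```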

The proposed system only operates in a state of affairs in which ethical consensus is reached, and it actively works to bring such a consensus about. In this way, it can be said that the system is promoting ethical action: it executes only ethically sanctioned acts. It is important here to point out that we are using the term “ethically sanctioned” rather than “ethical” to describe the events that are enabled by the system. This is because it is possible to judge the act of consenting to a given situation as itself unethical (e.g. in the case of the nuclear research example provided above). However, a “failure” in terms of consent on the part of all relevant parties ought not to indicate an ethical limitation for which the technology is responsible Footnote 4. Thus, we say that the condition of unanimous consent provides sufficient authorization for the opportunities enabled by the technology, so that the technology is not itself responsible for these opportunities conflicting with a particular ethical viewpoint.

Appendix 2: Limits to the realization of contractual equilibrium

Even without a contractual equilibrium, a Nash equilibrium will exist to the degree that disclosed information is true or verifiable, and believed. That degree determines the certainty with which actions proposed through a system facilitating ethical action can be selected. If misinformation or failures to disclose information are considered to result in inequality, uncertainty could produce suboptimal perceived benefits, affecting the instantiation of strategy and the distribution of value. For example, consider a situation in which two people have guns pointed at each other and, although only the opponent is aware of it, one person is out of bullets. If it is considered inequitable for the opponent not to reveal this information, a suboptimal value distribution may result if the opponent fires his/her weapon. Ignoring misinformation and failures to disclose information, from a macro perspective, a Nash equilibrium would always exist, as no inequality would be possible.
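The standoff example can be made concrete with a toy payoff matrix. The numbers below are our own hypothetical choices, used only to show that the equilibria of the asymmetric-information game leave the disadvantaged player worse off.

```python
# Toy model of the standoff example. The payoff numbers are hypothetical.
# Player A is out of bullets; player B knows it.

from itertools import product

ACTIONS = ("fire", "hold")

# payoffs[(a, b)] = (payoff to A, payoff to B); values are illustrative.
payoffs = {
    ("fire", "fire"): (-10, 5),   # A's gun is empty, so B's shot decides
    ("fire", "hold"): (0, 0),
    ("hold", "fire"): (-10, 5),
    ("hold", "hold"): (0, 0),
}

def is_nash(a, b):
    """Neither player gains by unilaterally changing strategy."""
    ua, ub = payoffs[(a, b)]
    return (all(ua >= payoffs[(a2, b)][0] for a2 in ACTIONS) and
            all(ub >= payoffs[(a, b2)][1] for b2 in ACTIONS))

print([s for s in product(ACTIONS, ACTIONS) if is_nash(*s)])
# -> [('fire', 'fire'), ('hold', 'fire')]: both equilibria have B firing,
# a suboptimal value distribution relative to mutual restraint.
```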

A simple, alternative, game-theoretic explanation of the effectiveness of the proposed system can be derived from the Tit for Tat strategy (Segal and Sobel 1999). Should any individual take advantage of an opportunity to have an action implemented in a manner which is at any later moment considered to result in an inequitable outcome (for instance, this might occur if an individual acted on undisclosed information), further actions that did not rebalance the value distribution would not be accepted. Since consensus is required, it is impossible not to address each participant’s interests. Tit for Tat is an effective strategy involving co-operation when competing strategies are in equilibrium. However, by definition, perfect information regarding the effect of every action must be obtainable for a contractual equilibrium to exist.
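As an illustration of why defection is self-defeating under Tit for Tat, here is a small simulation of the strategy in an iterated prisoner’s dilemma. The payoff values are the textbook ones; the code is ours, for illustration only.

```python
# Small simulation of Tit for Tat in an iterated prisoner's dilemma,
# using the standard payoff values.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Co-operate first, then mirror the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(p1, p2, rounds=6):
    h1, h2 = [], []               # each player's own past moves
    s1 = s2 = 0
    for _ in range(rounds):
        m1, m2 = p1(h2), p2(h1)   # each strategy sees the other's history
        h1.append(m1)
        h2.append(m2)
        r1, r2 = PAYOFF[(m1, m2)]
        s1, s2 = s1 + r1, s2 + r2
    return s1, s2

print(play(tit_for_tat, tit_for_tat))    # (18, 18): sustained co-operation
print(play(tit_for_tat, always_defect))  # (5, 10): one exploit, then mutual loss
```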

A contractual equilibrium will be approached asymptotically but never fully realized. Every action has persisting effects, and in an infinite universe the full effects of actions cannot be accounted for by any finite resources. Without perfect information regarding the effect of every action, a contractual equilibrium cannot exist. Thus, in an infinite universe, it is impossible to remove all of the uncertainty regarding the outcome of an action. The expression “entropy of consciousness” can be used to refer to the inability to know whether a decision will result in the preferred outcome and, therefore, whether it is the optimal choice.
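The asymptotic claim can be stated compactly; the notation below is ours, introduced only to summarize the argument.

```latex
% U(I): residual uncertainty about outcomes given finite information I
% (our notation). The argument above asserts that U never vanishes for
% finite I, yet shrinks as information grows, so contractual equilibrium
% (the state U = 0) is approached asymptotically but never attained.
\[
  U(I) > 0 \quad \text{for all finite } I,
  \qquad
  \lim_{I \to \infty} U(I) = 0 .
\]
```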

We now provide a secondary definition for a contractual equilibrium, which is not subject to an infinite limit, by applying financial market theory. The efficient market hypothesis, which states that markets cannot be outperformed because all available information is already accounted for in stock prices (Malkiel 2003), is only true with respect to the information available to all participants within the market. This defines an achievable contractual equilibrium. It would occur if each individual disclosed his/her experiences and interests to every other individual, so that each individual could evaluate proposed actions on the basis of all the information available to all individuals.

The definition of ethical action established in this article entails consensus. Rent-seeking is defined as action taken to receive value which exceeds price (in economic terms, the act of trying to secure the rights to premiums on rents which are above their respective opportunity costs). That is, when economic policies create something that is to be allocated at less than its value by any sort of government process, resources will be used in an effort to capture the rights to the items of value (Krueger 1974).

Consensus removes rent-seeking, allowing for optimal value distribution. The elimination of rent-seeking behavior would cause prices to accurately reflect the value to be transferred. The value created by a system facilitating ethical action can be considered equal to the value created by removing the rent-seeking behavior which it eliminates, reduced by the overhead required to implement the system.
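The value relation just stated can be summarized in a single expression; the symbols are ours, not the article’s.

```latex
% Net value of a system facilitating ethical action (our notation):
%   V_rent : value recovered by eliminating rent-seeking behavior
%   C_ovh  : overhead required to implement and operate the system
\[
  V_{\text{system}} \;=\; V_{\text{rent}} \;-\; C_{\text{ovh}},
  \qquad
  V_{\text{system}} > 0 \iff V_{\text{rent}} > C_{\text{ovh}} .
\]
```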

The actions which are implemented by the described system can be considered resource allocations. As more information becomes available, these allocations will become increasingly efficient. That is, they will become increasingly reflective of the desires of those creating the consensus.

The implementation of a system requiring every individual to manually select a decision for every action is only feasible for extremely narrow applications, as a direct result of the overwhelming overhead required. However, a scalable architecture can be developed by taking advantage of the fact that individuals create machines to automate actions which are valuable and yet undesirable to perform manually. Automation reduces overhead and, by the value relation given above, thereby increases the net value of the system. Therefore, given the capacity for sufficient automation, the implementation of such a system will be valuable in all environments exhibiting rent-seeking behavior. Accepting this, the improvement of currently available technology should be considered.
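One possible mechanism for such automation, consistent with the argument above but not taken from the article’s design, is to let each user register stored preferences that decide routine proposals, escalating to a manual vote only when no preference applies.

```python
# Hypothetical sketch of automated consent: each user registers predicates
# over proposals, and a manual vote is requested only when no predicate
# decides. Names and structure are ours, for illustration.

def auto_vote(preferences, proposal):
    """Return True/False if a stored preference decides, else None."""
    for predicate in preferences:
        verdict = predicate(proposal)
        if verdict is not None:
            return verdict
    return None  # escalate to a manual vote

# A user who auto-rejects proposals tagged "destructive":
prefs = [lambda p: False if "destructive" in p["tags"] else None]
assert auto_vote(prefs, {"tags": ["destructive"]}) is False
assert auto_vote(prefs, {"tags": ["benign"]}) is None   # needs manual vote
```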


Cite this article

Wightman, D.H., Jurkovic, L.G. & Chan, Y.E. Technology to facilitate ethical action: a proposed design. AI & Soc 19, 250–264 (2005). https://doi.org/10.1007/s00146-005-0336-3
