Abstract
Although many scientists and engineers insist that technologies are value-neutral, philosophers of technology have long argued that they are wrong. In this paper, I introduce a new argument against the claim that technologies are value-neutral. This argument complements and extends, rather than replaces, existing arguments against value-neutrality. I formulate the Value-Neutrality Thesis, roughly, as the claim that a technological innovation can have bad effects, on balance, only if its users have “vicious” or condemnable preferences. After sketching a microeconomic model for explaining or predicting a technology’s impact on individuals’ behavior, I argue that a particular technological innovation can create or exacerbate collective action problems, even in the absence of vicious preferences. Technologies do this by increasing the net utility of refusing to cooperate. I also argue that a particular technological innovation can induce short-sighted behavior because of humans’ tendency to discount future benefits too steeply. I suggest some possible extensions of my microeconomic model of technological impacts. These extensions would enable philosophers of technology to consider agents with mixed motives—i.e., agents who harbor some vicious preferences but also some aversion to acting on them—and to apply the model to questions about the professional responsibilities of engineers, scientists, and other inventors.
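The abstract's claim that humans "discount future benefits too steeply" can be made concrete with the hyperbolic discounting function discussed in Green and Myerson (2004), cited below: the present value of a reward of amount A delayed by D is V = A / (1 + kD). The following sketch is purely illustrative and not from the paper itself; the amounts, delays, and discount rate k are hypothetical, chosen only to exhibit the characteristic preference reversal that hyperbolic (unlike exponential) discounting produces.

```python
# Illustrative sketch (hypothetical numbers): hyperbolic discounting,
# V = A / (1 + k*D), the form discussed by Green and Myerson (2004).

def discounted_value(amount, delay, k=0.05):
    """Present value of `amount` received after `delay` days."""
    return amount / (1 + k * delay)

# A smaller-sooner reward vs. a larger-later one.
small, large = 50, 100      # reward magnitudes (hypothetical units)
t_small, t_large = 5, 40    # delays in days

# Viewed from far in advance (add 100 days to both delays),
# the larger-later reward is preferred...
assert discounted_value(large, t_large + 100) > discounted_value(small, t_small + 100)

# ...but as the smaller reward draws near, preference reverses:
# short-sighted behavior without any change in underlying preferences.
assert discounted_value(small, t_small) > discounted_value(large, t_large)
```

Because the reversal arises from the shape of the discount function rather than from what the agent ultimately values, it illustrates how a technology that shifts payoffs toward the near term can induce myopic choices in agents with perfectly decent preferences.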

Notes
This opens the door to an even more general criticism of the Value-Neutrality Thesis. It is arguably the case that, within limits or in certain contexts (e.g., competitive markets), it is not vicious to prefer to impose costs on others to benefit oneself, one’s family, one’s country, etc., rather than to forgo those benefits to protect others. In that case, if the costs to others exceed the benefits to the T-user (etc.), then the T-user could create, on balance, bad effects without acting on vicious preferences. Especially when T is generally used by a small elite at the expense of the rest of society, T’s aggregate effect may well be large and negative, despite each individual T-user’s preferences being minimally decent.
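The arithmetic behind this note is simple enough to state explicitly. The numbers below are hypothetical and serve only to illustrate the structure of the claim: if each T-user's private benefit b is smaller than the total external cost c that user imposes on others, the aggregate effect is negative regardless of how many users there are, and no user's preferences need be vicious.

```python
# Illustrative arithmetic (hypothetical numbers, not from the paper):
# each user of technology T gains a private benefit b but imposes a
# total external cost c on others, with c > b.

n_users = 1000
b = 10.0   # private benefit per user
c = 12.0   # total cost each user's adoption imposes on everyone else

aggregate_effect = n_users * (b - c)
assert aggregate_effect < 0   # bad on balance, vicious preferences or not
```

The elite case in the note is the same structure with a small n and a large c spread thinly across the rest of society, so that no individual bearer of the cost has much incentive to object.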
This predictive ability is one way in which the microeconomic model’s analytic tractability is an important advantage over Illies and Meijers’ qualitative approach. Consider, for instance, the case of the energy-efficient light bulb (Illies and Meijers 2009, p. 436). Both approaches can explain why introducing such a bulb might actually raise total energy consumption. Given the relevant supply and demand schedules, the microeconomic model can predict the sign of the net effect. That is, it can predict whether the increased use of electric lights will swamp the increased efficiency of the bulbs. Illies and Meijers’ model cannot; its strengths lie elsewhere.
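A minimal version of such a prediction can be sketched with a standard textbook demand curve; the constant-elasticity functional form and all the numbers below are hypothetical stand-ins, not taken from the paper or from Illies and Meijers (2009). Writing bulb efficiency e in lighting-service units per kWh, the effective price of the service is p/e, the quantity of service demanded is Q = A·(p/e)^(−ε), and energy consumption is Q/e. The sign of the net effect then turns on whether the price elasticity ε of demand for lighting exceeds 1.

```python
# Illustrative sketch (hypothetical demand curve and parameters):
# predicting whether a more efficient bulb raises or lowers total
# energy use. Efficiency e is in service units per kWh, so the
# effective service price is p/e; energy consumed is Q/e.

def energy_use(efficiency, price_per_kwh=0.15, A=100.0, eps=1.3):
    service_price = price_per_kwh / efficiency
    services_demanded = A * service_price ** (-eps)
    return services_demanded / efficiency   # kWh consumed

# With elasticity eps > 1, doubling efficiency *raises* total energy
# consumption (increased use swamps increased efficiency); with
# eps < 1, the efficiency gain dominates and consumption falls.
assert energy_use(2.0, eps=1.3) > energy_use(1.0, eps=1.3)
assert energy_use(2.0, eps=0.7) < energy_use(1.0, eps=0.7)
```

In this toy model, energy use scales as e^(ε−1), so the sign of the net effect is determined entirely by the demand elasticity, which is the kind of empirical parameter a qualitative approach cannot exploit.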
References
Brey, P. (2010). Philosophy of technology after the empirical turn. Techné: Research in Philosophy and Technology, 14(1), 36–48.
Green, L., & Myerson, J. (2004). A discounting framework for choice with delayed and probabilistic rewards. Psychological Bulletin, 130(5), 769–792.
Hardin, G. (1968). The tragedy of the commons. Science, 162, 1243–1248.
Held, V. (2006). The ethics of care: Personal, political, and global. New York: Oxford University Press.
Illies, C., & Meijers, A. (2009). Artefacts without agency. The Monist, 92(3), 420–440.
Kamm, F. M. (2006). Intricate ethics: Rights, responsibilities, and permissible harm. New York: Oxford University Press.
Koepsell, D. (2010). On genies and bottles: Scientists’ moral responsibility and dangerous technology R&D. Science and Engineering Ethics, 16(1), 119–133.
Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley, CA: University of California Press.
Olson, M. (1965). The logic of collective action: Public goods and the theory of groups. Cambridge, MA: Harvard University Press.
Putnam, R. (2000). Bowling alone: The collapse and revival of American community. New York: Simon & Schuster.
Radder, H. (2009). Why technologies are inherently normative. In A. Meijers (Ed.), Handbook of the philosophy of science, vol. 9: Philosophy of technology and engineering sciences (pp. 887–921). Amsterdam: Elsevier.
Railton, P. (1984). Alienation, consequentialism, and the demands of morality. Philosophy & Public Affairs, 13(2), 134–171.
Van de Poel, I. (2001). Investigating ethical issues in engineering design. Science and Engineering Ethics, 7(3), 429–446.
Williams, B. (1973). A critique of utilitarianism. In J. J. C. Smart & B. Williams (Eds.), Utilitarianism: For and against. Cambridge: Cambridge University Press.
Winner, L. (1986). The whale and the reactor: A search for limits in an age of high technology. Chicago: University of Chicago Press.
Acknowledgments
Thanks to Chris Alen Sula and two anonymous referees for helpful comments on earlier drafts of this paper.
Cite this article
Morrow, D.R. When Technologies Makes Good People Do Bad Things: Another Argument Against the Value-Neutrality of Technologies. Sci Eng Ethics 20, 329–343 (2014). https://doi.org/10.1007/s11948-013-9464-1