
Ethics Inside the Black Box: Integrating Science and Technology Studies into Engineering and Public Policy Curricula

  • Original Research/Scholarship

Science and Engineering Ethics

Abstract

There is a growing need for hybrid curricula that integrate constructivist methods from Science and Technology Studies (STS) into both engineering and policy courses at the undergraduate and graduate levels. However, institutional and disciplinary barriers have made implementing such curricula difficult at many institutions. While several programs have recently been launched that mix technical training with consideration of “societal” or “ethical issues,” these programs often lack a constructivist element, leaving newly minted practitioners entering practical fields ill-equipped to unpack the politics of knowledge and technology or engage with skeptical publics. This paper presents a novel format for designing interdisciplinary coursework that combines conceptual content from STS with training in engineering and policy. Courses following this format would ideally be team-taught by instructors with advanced training in diverse fields, and hence co-learning between instructors and disciplines is a key element of the format. Several instruments for facilitating both student and instructor collaborative learning are introduced. The format is also designed for versatility: in addition to being adaptable to both technical and policy training environments, topics are modularized around a conceptual core so that issues ranging from biotech to nuclear security can be incorporated to fit programmatic needs and resources.


Notes

  1. This observation was made by the authors during conversations with educators at various institutions. See also National Academies of Sciences, Engineering, and Medicine (2018).

  2. The STS canon is exemplified by the Handbooks of STS (Jasanoff, 1995), now in their fourth edition (Felt, 2018). Flagship journals include Social Studies of Science (https://journals.sagepub.com/home/sss) and Science, Technology, & Human Values (https://www.4sonline.org/publications); the field is fostered by the Society for Social Studies of Science (https://www.4sonline.org), which holds annual meetings. While some traditional ethics approaches, such as care ethics (Gilligan, 2008) and virtue ethics (Carr & Steutel, 1999), do attend to complexity in ethical life, they are less concerned with how ethical life co-evolves with sociotechnical systems.

  3. A few exceptions: scholarly writing under the heading “engaged STS” generally seeks to promote engagement with practical disciplines, e.g. Kuhlmann et al. (2017). A “science outside the lab” approach (Bernstein et al., 2017) incorporates STS but is more explicitly focused on science policy than on breaking open technology. Value-sensitive design (VSD) dovetails with STS in its attention to the values of diverse stakeholder groups (Friedman & Hendry, 2019), but it often formulates a predefined space of possible values that is selected from throughout the evolution of a sociotechnical system (Le Dantec et al., 2009; Manders-Huits, 2011). Constructivist STS, by contrast, explicitly attends to the continual coproduction of sociotechnical and value systems, both historically and in the construction of possible future pathways, regardless of whether the design process paid particular attention to values. It thus complements VSD in important ways.

  4. This article will make a loose distinction between “conceptual content” that is drawn from theoretical literature, and “issue content”—such as biotechnology, nuclear security, environmental science, information technology, etc.—from which case studies are drawn for exercises, and to which the conceptual content can be applied.

  5. The study of complex systems is an interdisciplinary field that investigates physical or social systems that defy traditional reductionist analysis. For a useful summary of the field and its origins for the lay reader, see Mitchell (2009). For academic summaries, see Turner and Baker (2019), Rickles et al. (2007) and Astil and Cairney (2015). An example of a prominent journal is Complex Systems, https://www.complex-systems.com.

  6. Critical policy studies (CPS) generally analyzes policy choices from a post-positivist perspective. For an academic handbook, see Fischer et al. (2015). An example of a prominent journal in the field is Critical Policy Studies, https://www.tandfonline.com/journals/rcps20.

  7. Much intuition can be gained, for example, from the basic multilayer perceptron (Brownlee, 2016).
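
For readers who want to build that intuition concretely, here is a minimal sketch of a single-hidden-layer perceptron forward pass in NumPy. It is not drawn from Brownlee (2016); the layer sizes, random weights, and ReLU activation are illustrative assumptions only.

```python
import numpy as np

def relu(x):
    # Rectified linear activation: pass positive inputs, zero out the rest
    return np.maximum(0.0, x)

def mlp_forward(x, W1, b1, W2, b2):
    # Two-layer perceptron: input -> hidden (ReLU) -> linear output
    h = relu(W1 @ x + b1)
    return W2 @ h + b2

rng = np.random.default_rng(0)
x = rng.normal(size=3)                          # 3 input features (arbitrary)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 4 hidden units (arbitrary)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # single output unit
print(mlp_forward(x, W1, b1, W2, b2))
```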

References

  • Abbott, A. (2016). US mental health chief: Psychiatry must get serious about mathematics. Nature, 539, 18–19.

  • ABET (2000). Improving ethics awareness in higher education. https://www.abet.org/wp-content/uploads/2015/05/Viewpoints_Vol1.pdf Accessed July 10, 2020.

  • ABET EC (2016). Criteria for accrediting engineering programs. https://www.abet.org/wp-content/uploads/2015/10/E001-16-17-EAC-Criteria-10-20-15.pdf Accessed July 10, 2020.

  • Anderson, B. (1983). Imagined communities: Reflections on the origin and spread of nationalism. Verso.

  • Arrow, K., et al. (2004). Are we consuming too much? Journal of Economic Perspectives, 18(3), 147–172.

  • Astil, S., & Cairney, P. (2015). Complexity theory and political science: Do new theories require new methods? In Handbook on complexity and public policy. Edward Elgar Publishing.

  • Axelrod, R. (1984). The evolution of cooperation. Basic Books.

  • Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84(2), 191–215.

  • Barabas, C. (2020). Beyond bias: Reimagining the terms of ‘ethical AI’ in criminal law. Georgetown Journal of Law and Critical Race Perspectives, 12. https://doi.org/10.2139/ssrn.3377921

  • Beck, U., & Wehling, P. (2011). The politics of non-knowing. In F. Rubio & P. Baert (Eds.), The politics of knowledge. Routledge.

  • Bergstrom, C., & West, J. (2020). Calling bullshit: The art of skepticism in a data-driven world. Random House.

  • Bernstein, M., Reifschneider, K., Bennett, I., & Wetmore, J. (2017). Science outside the lab: Helping graduate students in science and engineering understand the complexities of science policy. Science and Engineering Ethics, 23, 861–882.

  • Bijker, W., Hughes, T. P., & Pinch, T. (1987). The social construction of technological systems. MIT Press.

  • Bovill, C., & Bulley, C. J. (2011). A model of active student participation in curriculum design: Exploring desirability and possibility. In C. Rust (Ed.), Improving student learning (ISL) 18: Global theories and local practices: Institutional, disciplinary and cultural variations (pp. 176–188). Oxford Centre for Staff and Learning Development, Oxford Brookes University.

  • Bower, J. L., & Christensen, C. M. (1995). Disruptive technologies: Catching the wave. Harvard Business Review.

  • Brownlee, J. (2016). Crash course on multilayer perceptron neural networks. https://machinelearningmastery.com/neural-networks-crash-course/

  • Buchthal, J., Evans, S. W., Lunshof, J., et al. (2019). Mice Against Ticks: An experimental community-guided effort to prevent tick-borne disease by altering the shared environment. Philosophical Transactions of the Royal Society B: Biological Sciences, 374(1772), 20180105. https://doi.org/10.1098/rstb.2018.0105

  • Butler, J. (1990). Gender trouble. Routledge.

  • Carr, D., & Steutel, J. (Eds.) (1999). Virtue ethics and moral education. Routledge.

  • Collins, H. (1985). Changing order: Replication and induction in scientific practice. University of Chicago Press.

  • Collins, H., & Pinch, T. (1993). The golem: What everyone should know about science. Cambridge University Press.

  • Collins, P., Patel, V., & Walport, M. (2011). Grand challenges in global mental health. Nature, 475, 27–30.

  • Cook-Sather, A., Bovill, C., & Felten, P. (2014). Engaging students as partners in learning and teaching: A guide for faculty (1st ed.). John Wiley & Sons.

  • Daly, H., et al. (2007). Are we consuming too much—For what? Conservation Biology, 21(5), 1359–1362. https://doi.org/10.1111/j.1523-1739.2007.00770.x

  • Dewey, J. (1927). The public and its problems. Henry Holt.

  • Douthat, R. (2020). In the fog of coronavirus, there are no experts. The New York Times.

  • Downer, J. (2013). Disowning Fukushima: Managing the credibility of nuclear reliability assessment in the wake of disaster. Regulation & Governance. https://doi.org/10.1111/rego.12029

  • DSM-5 (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Association.

  • Epstein, S. (1995). The construction of lay expertise: AIDS activism and the forging of credibility in the reform of clinical trials. Science, Technology, & Human Values, 20(4), 408–437. https://doi.org/10.1177/016224399502000402

  • Epstein, S. (1996). Impure science: AIDS, activism and the politics of knowledge. University of California Press.

  • Espeland, W., & Sauder, M. (2007). Rankings and reactivity: How public measures re-create social worlds. American Journal of Sociology, 113(1), 1–40.

  • Felt, U., et al. (Eds.) (2018). The handbook of science and technology studies (4th ed.). MIT Press.

  • Fischer, F., Torgerson, D., Durnova, A., & Orsini, M. (2015). Handbook of critical policy studies. Edward Elgar Publishing.

  • Foucault, M. (1970). The order of things: An archaeology of the human sciences. Pantheon Books.

  • Foucault, M. (1978). Discipline and punish. Pantheon Books.

  • Foucault, M. (1979). The history of sexuality. Pantheon Books.

  • Friedman, B., & Hendry, D. (2019). Value sensitive design: Shaping technology with moral imagination. MIT Press.

  • Gardiner, S. (2006). A perfect moral storm: Climate change, intergenerational ethics and the problem of moral corruption. Environmental Values, 15, 397–413.

  • Gieryn, T. (1995). Boundaries of science. In S. Jasanoff et al. (Eds.), Handbook of science and technology studies. Sage Publications.

  • Gilligan, C. (2008). Moral orientation and moral development. In A. Bailey & C. J. Cuomo (Eds.), The feminist philosophy reader. McGraw-Hill.

  • Hacking, I. (1983). Representing and intervening: Introductory topics in the philosophy of science. Cambridge University Press.

  • Hacking, I. (1987). The taming of chance. Cambridge University Press.

  • Hacking, I. (2000). The social construction of what? Harvard University Press.

  • Hardin, G. (1968). The tragedy of the commons. Science, 162(3859), 1243–1248.

  • Haslanger, S. (1995). Ontology and social construction. Philosophical Topics, 23(2), 95–125.

  • Hecht, G. (2009). The radiance of France: Nuclear power and national identity after World War II. MIT Press.

  • Hess, J., & Fore, G. (2018). A systematic literature review of US engineering ethics interventions. Science and Engineering Ethics, 24, 551–583. https://doi.org/10.1007/s11948-017-9910-6

  • Hoffman, M. (2020). Reflective consensus building on wicked problems with the Reflect! platform. Science and Engineering Ethics, 26, 793–817. https://doi.org/10.1007/s11948-019-00132-0

  • Insel, T. (2010). Faulty circuits. Scientific American, 302(4), 44–52.

  • Insel, T. (2011). Post by former NIMH Director Thomas Insel: Mental illness defined as disruption in brain circuits. National Institute of Mental Health Information Resource Center. https://www.nimh.nih.gov/about/directors/thomas-insel/blog/2011/mental-illness-defined-as-disruption-in-neural-circuits.shtml

  • Jasanoff, S. (2004). States of knowledge: The co-production of science and social order. Routledge.

  • Jasanoff, S. (2005). Designs on nature: Science and democracy in Europe and the United States. Princeton University Press.

  • Jasanoff, S. (2013). Fields and fallows: The normative logics of STS. In A. Barry & G. Born (Eds.), Interdisciplinarity: Reconfigurations of the social and natural sciences (pp. 99–118). Routledge.

  • Jasanoff, S. (2016). The ethics of invention: Technology and the human future. Norton.

  • Jasanoff, S. (2018). Can science make sense of life? Polity Press.

  • Jasanoff, S., & Kim, S. (2015). Dreamscapes of modernity: Sociotechnical imaginaries and the fabrication of power. University of Chicago Press.

  • Jasanoff, S., & Wynne, B. (1998). Science and decisionmaking. In S. Rayner & E. L. Malone (Eds.), Human choice and climate change (pp. 1–87). Battelle Press.

  • Jasanoff, S., et al. (Eds.) (1995). Handbook of science and technology studies (Rev. ed.). Sage Publications.

  • Kleinberg, J., et al. (2016). A guide to solving social problems with machine learning. Harvard Business Review.

  • Knutson, B., et al. (1998). Selective alteration of personality and social behavior by serotonergic intervention. American Journal of Psychiatry, 155(3), 373–379.

  • Kriebel, D., et al. (2001). The precautionary principle in environmental science. Environmental Health Perspectives, 109(9), 871–876.

  • Kuhlmann, S., et al. (2017). Engaging science, technology and policy studies. EASST Review, 36(3).

  • Kuhn, T. (1962). The structure of scientific revolutions. University of Chicago Press.

  • Lohoff, F. (2010). Overview of the genetics of major depressive disorder. Current Psychiatry Reports, 12(6), 539–546.

  • Lakoff, G. (2004). Don't think of an elephant: Know your values and frame the debate—An essential guide for progressives. Chelsea Green Publishing.

  • Lander, E., et al. (2019). Adopt a moratorium on heritable genome editing. Nature, 567, 165–168. https://doi.org/10.1038/d41586-019-00726-5

  • Latour, B. (1993). We have never been modern. Harvard University Press.

  • Le Dantec, C., Poole, E., & Wyche, S. (2009). Values as lived experience. In Proceedings of the 27th international conference on human factors in computing systems (CHI ’09) (p. 1141). ACM Press.

  • Lorenz, E. (1993). The essence of chaos. University of Washington Press.

  • MacKenzie, D. (1993). Inventing accuracy: A historical sociology of nuclear missile guidance. MIT Press.

  • MacKenzie, D. (2009). Material markets: How economic agents are constructed. Oxford University Press.

  • Mahadevan, L., & Deutch, J. M. (2010). Influence of feedback on the stochastic evolution of simple climate systems. Proceedings of the Royal Society A, 466, 993–1003.

  • Manders-Huits, N. (2011). What values in design? The challenge of incorporating moral values into design. Science and Engineering Ethics, 17(2), 271–287.

  • Mitchell, M. (2009). Complexity: A guided tour. Oxford University Press.

  • Monbiot, G. (2016). Neoliberalism is creating loneliness. That’s what’s wrenching society apart. The Guardian.

  • Murphy, M. (2006). Sick building syndrome and the problem of uncertainty: Environmental politics, technoscience, and women workers. Duke University Press.

  • National Academies of Sciences, Engineering, and Medicine (2018). Branches from the same tree: The integration of the humanities and arts with sciences, engineering and medicine. Consensus Study Report. https://www.nap.edu/read/24988/chapter/1#ix

  • Newman, M. (2018). Networks: An introduction (2nd ed.). Oxford University Press.

  • Obeyesekere, G. (1985). Depression, Buddhism, and the work of culture in Sri Lanka. In A. Kleinman & B. Good (Eds.), Culture and depression: Studies in the anthropology and cross-cultural psychiatry of affect and disorder. University of California Press.

  • Osnos, E. (2018). Can Mark Zuckerberg fix Facebook before it breaks democracy? The New Yorker.

  • Perrow, C. (1984). Normal accidents: Living with high-risk technologies. Princeton University Press.

  • Porter, T. (1995). Trust in numbers: The pursuit of objectivity in science and public life. Princeton University Press.

  • Putnam, R. (1995). Bowling alone: America’s declining social capital. Journal of Democracy, 6(1), 65–78.

  • Rayner, S. (1997). Zen and the art of climate maintenance. Nature, 390(6658), 332–334.

  • Reif, F. (1965). Fundamentals of statistical and thermal physics. Waveland Press.

  • Rickles, D., Hawe, P., & Shiell, A. (2007). A simple guide to chaos and complexity. Journal of Epidemiology and Community Health, 61, 933–937.

  • Rittel, H., & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155–169.

  • Rose, N. (2019). Our psychiatric future. Polity Press.

  • Sanderson, G. (2018). But what is a neural network? Chapter 1, Deep learning. https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi

  • Sarewitz, D. (2004). How science makes environmental controversies worse. Environmental Science & Policy, 7, 385–403. https://doi.org/10.1016/j.envsci.2004.06.001

  • Scheffer, M., et al. (2012). Anticipating critical transitions. Science, 338, 344–348.

  • Schon, D. A., & Rein, M. (1994). Frame reflection: Toward the resolution of intractable policy controversies. Basic Books.

  • Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life. Princeton University Press.

  • Sunderland, M. (2019). Using student engagement to relocate ethics to the core of the engineering curriculum. Science and Engineering Ethics, 25, 1771–1788. https://doi.org/10.1007/s11948-013-9444-5

  • Taleb, N. (2007). The black swan: The impact of the highly improbable. Random House.

  • Tang, B. L., & Lee, J. S. C. (2020). A reflective account of a research ethics course for an interdisciplinary cohort of graduate students. Science and Engineering Ethics, 26, 1089–1105. https://doi.org/10.1007/s11948-020-00200-w

  • Turner, J. R., & Baker, R. M. (2019). Complexity theory: An overview with potential applications for the social sciences. Systems, 7(1), 4.

  • Valenstein, E. S. (2005). The war of the soups and the sparks: The discovery of neurotransmitters and the dispute over how nerves communicate. Columbia University Press.

  • Winner, L. (1986). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.

  • Wittchen, H., et al. (2011). The size and burden of mental disorders and other disorders of the brain. European Neuropsychopharmacology, 21, 655–679.

  • Wynne, B. (1996). Misunderstood misunderstandings: Social identities and public uptake of science. In A. Irwin & B. Wynne (Eds.), Misunderstanding science? The public reconstruction of science and technology. Cambridge University Press.


Author information


Corresponding author

Correspondence to Christopher Lawrence.

Ethics declarations

Conflict of interest

Not Applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

Core Concepts Crib Sheet.

Eng-Sci 28: Science, Technology and Society.

Spring 2020.

This course will draw from a set of core concepts to analyze the societal origins and implications of contemporary developments in science and technology. Many of these concepts are drawn from the field of science and technology studies (STS), but other fields, such as complex systems and probability theory, are represented as well. Many of the concepts will be introduced in the first week of class, and following weeks will feature one or two concepts for deeper engagement and academic reading.

Framing and Incommensurability

Below is a picture of the famous Rubin Vase. Many people will look at the picture and see a flower vase. Others will look at the same visual data and see two faces looking at one another. Each is a perfectly valid thing to “see” in the picture, yet it’s difficult to see both the vase and the two faces at the same time. This is because the two images are incommensurable with one another in the sense that the features of a vase cannot also be the contours of a face. Cognitive scientists who study the way vision works have pointed to this example to illustrate how the brain can organize visual data in different ways, allowing us to see very different “objects” in the same visual data.

[Figure a: the Rubin Vase]

In order to make sense of reality, we must organize our experiences into frameworks of meaning. These interpretive frames are needed to separate meaningful data from noise and randomness; identify cause and effect; sort objects into categories; and express or interpret human roles and identities. However, a given reality can give rise to multiple, competing frames of meaning that may lead to different understandings. Further, competing frames may be incommensurable with one another, making it difficult to weigh competing values and sources of evidence against one another. The Rubin Vase example illustrates that even at the level of visual perception, simple tasks like seeing objects in a picture rely on interpretive framing to make sense of visual stimuli, and that multiple incommensurable frames may lead to seeing different things.

When considering the social implications of science and technology, we will be thinking about framing and incommensurability at many different levels. Rather than individual tasks of visual perception, we will often consider how groups make sense of things like scientific data, news stories, technological innovations, weather patterns, public health statistics, etc. Yet just as with the Rubin Vase, we will encounter competing interpretive frames that allow us to “see” very different objects, risks, choices and facts.

Example—Automobile deaths may be framed as arising from “random accidents,” from “careless drunk driving,” or from societal preferences for individualized transportation over communal public transit. Each of these frames reveals different relationships between cause and effect, random and systematic events, personal freedom and determination.

Related reading:

Erving Goffman (1974), Frame Analysis: an essay on the organization of experience.

Donald Schon and Martin Rein (1994), Frame Reflection: Toward the Resolution of Intractable Policy Controversies, Ch. 2, “Policy Controversies as Frame Conflicts.”

Sheila Jasanoff (2005), Designs on Nature, Ch. 2, “Controlling Narratives.”

Thomas Kuhn (1962), The Structure of Scientific Revolutions.

Social Construction of Knowledge

Our common-sense view of scientific knowledge assumes that seeing or knowing the world is a passive process that is separated from social or political forces. When social forces influence the content of scientific knowledge, we often say that knowledge is “biased.” However, scholars who look closely at the construction of scientific knowledge have come to recognize that it is an active process that depends crucially on social factors like trust, authority, shared metaphors and narratives, etc. If social factors are a crucial ingredient in constructing scientific knowledge, then it is meaningless to say that they “bias” that knowledge.

Examples: social construction of vision—We saw in the Rubin Vase example above that vision is not a passive response to visual stimuli. Rather, our minds must combine those stimuli with pre-existing concepts derived from our social experiences to “construct” an image. For instance, in order to see a vase, we must have already learned what a vase is. Perhaps we have been given flowers to celebrate a birthday or decorate a new apartment. Someone from another culture who has never had these social experiences would not be able to see a vase in the Rubin Vase picture. Similarly, seeing the picture as two faces talking relies on shared human experiences of conversation. An alien or computer intelligence would scarcely recognize two conversing faces in the picture without some prior training.

Related reading:

H.M. Collins (1982), Sociology of Scientific Knowledge: A Source Book. Bath, Avon, England: Bath University Press.

Interpretative Flexibility

If you put the concepts of framing and social construction together, it becomes clear that people can see or interpret the world in multiple ways. This includes objects of science and technology, and how they relate to society.

Example: What is a bicycle? It is tempting to think of a bicycle as a single object, but it can be many different things at the same time. Different social groups attribute different meanings to the bike because they are trying to address different types of problems, as the table below illustrates.

Social group          | What is a bike?                     | What issue is it involved with?
----------------------|-------------------------------------|--------------------------------
Campus guard          | A menace on footpaths               | Campus safety
Professional cyclist  | A means of success                  | Career advancement
City planner          | A greener alternative to commuting  | Sustainable city development

Use this concept often and you will start seeing everything as ‘multiple.’ This is a fantastic way to break apart arguments about ‘technological determinism,’ which assume that technology has a single, linear path along which it inevitably progresses. By showing that there are always multiple ways of understanding a piece of knowledge, a technology, or even a ‘natural process’ in the world, you demonstrate the constant processes of social construction and maintenance that hold our science and technology together.

Related reading:

H.M. Collins (1981), “Stages in the Empirical Programme of Relativism,” Social Studies of Science, 11(1):3–10 https://doi.org/10.1177/030631278101100101.

T. Pinch et al. (1987), “The Social Construction of Facts and Artifacts,” Ch. 1, The Social Construction of Technological Systems.

Boundary Work

Where is the border between science and society? How do we assign responsibility for technical or scientific knowledge on one hand, and ethical or political considerations on the other? Sociologists of science and technology have shown that the boundary between science and non-science is established differently in different contexts, through social processes of negotiation, contestation and representation known as boundary work. At stake in boundary work is the cognitive authority that attends scientific knowledge, and the liability of that knowledge to ethical and political scrutiny.

One common mode of boundary work is carried out through specialized technical languages that are comprehensible to “scientists,” yet incomprehensible to the “lay people.” Another mode can be found in the distinction between “pure” science that claims to seek knowledge for its own sake, and “applied” science that is directed toward serving the goals of society.

Other forms of boundary work may involve any contested demarcation between categories: such as “natural/unnatural,” “real/fake,” “living/nonliving,” “human/nonhuman,” etc. But when examining these demarcations, we should consider how they may map onto different ways to delegate responsibility, rights and cognitive authority amongst members of a society.

Example—We encountered a stark episode of boundary work during the public controversy over Dr. He Jiankui’s creation of the first CRISPR babies, when professional ethicists (e.g. Dr. Lunshof) and scientists (e.g. the Zhang lab statement) called for a moratorium on implanting edited embryos into human wombs. This boundary-work move cordons off a space for “basic research” that is allegedly geared toward the pursuit of pure knowledge, and is thereby exempted from the broader ethical and societal concerns that erupted after He’s controversial experiment.

Related reading:

Thomas Gieryn (1995), “Boundaries of Science,” Ch. 18 in The Handbook of Science and Technology Studies.

Panopticism

Panopticism is a mode of governance in which political subjects are rendered visible by their built environment, and are thereby encouraged to self-govern. The extreme case is Jeremy Bentham’s idealized prison architecture known as the panopticon (see Fig. below), which places prison cells in a circular arrangement around a central guard tower. At any given time, prisoners are aware of the possibility that they may be observed by a guard in the tower, but unsure whether the guard is looking at that moment. In this way, a strategically designed architecture can act as a force multiplier for the security staff of the prison by instilling self-governance into the prison population itself.

[Figure b: Bentham’s panopticon]

Michel Foucault famously argued that this principle of governance through visibility is not limited to prison environments, but has permeated modern societies. He finds panoptic architecture in seemingly-innocuous environments ranging from military camps to schools, hospitals and workplaces.

Examples—In our contemporary information society, surveillance cameras, big data, and machine-learning algorithms can function as force multipliers for governance in a way very similar to the panoptic architectures that Foucault describes.

Related reading:

Michel Foucault (1978), Discipline and Punish, ch. 3, “Panopticism.”

Gary Gutting (2005), Foucault: A Very Short Introduction, ch. 8, “Crime and Punishment.”

Construction of Subjects and Publics

Identity construction is a continual process through which actors’ identities are forged, expressed and evolve over time. This process is deeply shaped by experience, which in turn is both socially and technologically mediated. For instance, a given technology or experience may encourage some people to group together as “users,” “victims,” “consumers,” “patients,” “experts,” “lay-people,” “criminals,” etc. These identities can, in turn, shape the preferences, political interests, and desires of the people who subscribe to them, or to whom they are attached.

Examples—Big data technologies are an important venue for the construction of identities. For instance, in order to record and process data on patient groups for use in medical databases, those groups must be classified based on various markers of identity, such as race, gender, treatment history, etc. But since patients can be classified in an infinite number of ways (see framing and interpretive flexibility), designers must choose which parameters are most “medically meaningful,” and these choices in turn have implications for how treatment is prescribed, how patients self-identify, etc. (see performativity below).

Related reading:

Steven Epstein (2007), Inclusion: The Politics of Difference in Medical Research, “Introduction: Health Research and the Re-making of Common Sense;” ch. 1, “How to Study a Biopolitical Paradigm;” ch. 2, “Histories of the Human Subject.”

Jenny Reardon (2011), “Human Population Genomics and the Dilemma of Difference,” in Sheila Jasanoff (Ed.), Reframing Rights.

Performativity of Knowledge

Performativity of knowledge is the power of knowledge to affect and alter the world.

We often think of knowledge as a “representation” or “mirror” of reality. This is often a useful shorthand, and it leads us to ask whether the representation is “accurate” or “biased,” and whether the construction of the knowledge is “transparent” or “black-boxed.” However, the representational concept of knowledge does not account for the fact that, when knowledge is articulated or acted upon, it can feed back and transform the objects or reality that it represents.

Alternatively, we can think of knowledge as performative in the sense that it is embodied in human practices and artifacts. Since the performance of knowledge takes place in the same reality as the objects that knowledge is “about,” we must attend to the interactions through which knowledge can transform the world. The concept of performativity can be particularly illuminating when combined with the concept of framing.

Examples—To explore the difference between these two concepts of knowledge, consider the music curation algorithms used by Spotify. To a first approximation, we might say that the goal of the algorithm is to “discover” the music preferences of the user, and then use that knowledge to offer listening suggestions that are likely to satisfy those preferences. We might then ask whether that knowledge “accurately represents” the user, or whether it is “biased” toward certain types of music. However, after several listening experiences mediated by the algorithm, user preference is likely to evolve based on what music is suggested. In other words, Spotify’s framing choices for categorizing musical taste have the potential to perform or “operate” on the categorized user throughout the listening experience, and thus to change that taste.
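
The feedback loop in this example can be made concrete with a toy simulation. The sketch below is purely hypothetical (it does not describe Spotify’s actual system): a user’s taste is modeled as a distribution over three genres, the recommender serves the genre it currently ranks highest, and each listen nudges taste toward what was served.

```python
import numpy as np

# Hypothetical performativity loop: the model's "knowledge" of the user
# (the taste vector) feeds back and reshapes the user it describes.
taste = np.array([0.40, 0.35, 0.25])      # genres A, B, C (illustrative)
for step in range(50):
    served = np.argmax(taste)             # recommend the top-ranked genre
    taste[served] += 0.05                 # listening reinforces that genre
    taste /= taste.sum()                  # renormalize to a distribution
print(taste.round(3))                     # taste collapses toward genre A
```

Asking whether the initial taste vector was “accurate” misses the point; after fifty mediated listens, the user that the vector described no longer exists.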

Perhaps a more consequential example can be seen in the curation of political discourse via social media’s targeted dissemination of news stories. Just as in the Spotify case, this curation of news stories is governed by algorithms that gather knowledge about readers based on their past viewings. A representational depiction of the knowledge gathered—which would ask whether an algorithm “accurately discovers” the political views of a user—cannot address the question of how these curation services may have contributed to greater political polarization leading up to the 2016 presidential election.

Related reading:

Donald MacKenzie (2009), Material Markets, “Precept 7: Economics Does Things,” pp. 30–31.

Evan Osnos (2018), “Can Mark Zuckerberg Fix Facebook Before it Breaks Democracy?,” The New Yorker.

John Austin (1962), How to Do Things with Words.

Judith Butler (1990), Gender Trouble.

Chaotic and Complex Systems

Many of the physical systems that we model mathematically or encounter in the lab respond to stimuli in predictable ways. With such “well-behaved” systems, we can predict future behavior based on data from previous experiments with the system, or from an analytical understanding of the system’s constituent parts. When we measure certain parameters or observables, we can calculate margins of error (i.e. “error bars”) to characterize the precision of our measurements, and propagate those through predictive calculations so as to estimate the uncertainty of our predictions. We can also isolate variables such that variation of one doesn’t influence our understanding of another. A student may master these basic skills in an undergraduate laboratory course.

However, scientists have come to wonder if the predictability of “well-behaved” systems may be more an achievement of human ingenuity in the lab than a product of how systems behave in the natural world. They have since developed a set of concepts to characterize how predictability of real-world systems can break down, and we will make use of two of these concepts.

The first of these is called chaos. A chaotic system is one in which very small variations in inputs (or causes) to the system can lead to radically different outcomes. This “sensitive dependence” on initial conditions can be so extreme that even if we can measure inputs with high precision, it may be impossible to calculate probabilities of distinct outcomes. In these cases, our experiences of past outcomes may not help us predict the future, because previous outcomes are very unlikely to recur.

Examples—A classic illustration of chaos is called the “butterfly effect.” Imagine that atmospheric weather patterns over the continental U.S. are so unpredictable that a butterfly flapping its wings in San Francisco may create ripple effects that amplify to produce rainstorms in Cambridge. Alternatively, the butterfly may instead sit still, resulting in sunny weather in Cambridge. While this fictitious example is only illustrative, the Earth’s atmosphere may indeed respond chaotically to small localized perturbations.
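
A numerical illustration (ours, not drawn from the readings): the logistic map x_{n+1} = r·x_n(1 − x_n) with r = 4 is a standard minimal model of chaos. Two trajectories that start within 10^-10 of each other become completely decorrelated within a few dozen steps.

```python
# Sensitive dependence on initial conditions in the logistic map (r = 4)
x, y = 0.2, 0.2 + 1e-10          # two initial conditions, 1e-10 apart
for n in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if n % 15 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")
# The microscopic initial gap grows to order one: long-range forecasting fails.
```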

A second concept for characterizing unpredictability is called complexity. A complex system is one whose component parts may behave in very simple and predictable ways on their own, but when those parts are brought together into a system, their composite behavior becomes very unpredictable. These complex systems may produce “emergent phenomena” that cannot be reduced to the behaviors of their component parts.

Examples—Imagine that an individual neuron responds to its inputs in a very predictable way, producing an output pulse if (and only if) its input signals combine to surpass a given threshold. However, when these simple neurons are combined into networks, the aggregate response of the network to input signals can be very difficult to predict or understand.
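
A sketch of this contrast, with all parameters chosen arbitrarily for illustration: each unit below follows a trivially predictable threshold rule, yet the firing pattern of the randomly wired network as a whole is hard to anticipate without simply running it.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 50
W = rng.normal(size=(n, n))                   # random couplings between units
state = (rng.random(n) < 0.5).astype(float)   # random initial firing pattern

for t in range(8):
    # Each unit fires iff its summed weighted input crosses threshold zero
    state = (W @ state > 0.0).astype(float)
    print(f"t={t}: {int(state.sum()):2d} of {n} units firing")
```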

Related reading:

Edward Lorenz (1993), The Essence of Chaos.

James Gleick (1987), Chaos: The Making of a New Science.

Dean Rickles et al. (2007), A Simple Guide to Chaos and Complexity, Journal of Epidemiology and Community Health.

Critical Transitions

Many complex systems behave unpredictably because they can have tipping points, where a new stimulus causes a sudden, significant and irreversible change in the state of the system. Many of the systems that humans rely on—such as the climate, economic markets, ecosystems, political systems, the internet, etc.—are complex in this way. Sudden changes in these systems can have catastrophic consequences for humanity.

Examples—The Earth has already measurably warmed due to climate change, yet society has so far been able to manage the consequences. This would seem to suggest that the consequences of further warming will be proportional to what we have already seen, and hence not catastrophic. However, many of the complex systems that society relies on may be approaching a tipping point, at which a small amount of further warming may provoke changes that are qualitatively different from those we have seen in the past. If that is the case, then our past experiences in dealing with climate change may not be a good guide for predicting future consequences.
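
The qualitative argument can be seen in a minimal model. The sketch below uses a textbook fold-bifurcation system, dx/dt = a + x − x³ (our choice, not from Scheffer et al.), where the control parameter a stands in for slowly increasing forcing such as warming: the state x tracks the lower equilibrium smoothly until a crosses roughly 0.385, then jumps discontinuously to the upper branch.

```python
import numpy as np

dt = 0.01
x = -1.0                          # start at the lower stable equilibrium
for a in np.linspace(0.0, 0.6, 7):
    for _ in range(20_000):       # let the system settle at this forcing
        x += dt * (a + x - x**3)  # Euler step of dx/dt = a + x - x^3
    print(f"a = {a:.1f}: equilibrium x = {x:+.2f}")
# x stays near -1 for small a, then tips abruptly to about +1 past a ~ 0.4
```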

Related reading:

Marten Scheffer et al. (2012), “Anticipating Critical Transitions.”

Normal Versus Fat-Tailed Distributions

Many statistical claims about natural or social parameters assume that a parameter follows a normal or Gaussian distribution, such that we can define an “average” or “norm” value for the parameter, and a “standard deviation” or “variance” from that average value. This turns out to be a good assumption when the parameter in question is itself a composite of multiple statistically-independent random variables. In these cases, it is reasonable for us to expect very large deviations from the average value to be prohibitively unlikely.

For example—imagine I have written a physics article, and I wonder how many times the paper will be cited in the next ten years. I may assume that other physicists each have some probability of reading the article, and readers in turn have some probability of citing it. If the probability of any given physicist having read my paper after ten years is independent of whether others have read it, then I might expect that the number of total citations after ten years might follow a normal probability distribution. In this case, I can simply look at the average number of citations for past physics articles, and predict that my article will have a similar citation count in the future.

Unfortunately, many natural and social parameters are made up of composite variables that are not statistically independent, and thus follow a fat-tailed distribution. In these cases, probabilistic claims are much more difficult to justify. “Standard deviation” from the average, in particular, becomes meaningless in the case of fat-tailed distributions.

For example—if we are interested in the number of citations for my physics paper from the above example, we might suspect that the number of readers in a given year may depend on how many citations the paper might have from previous years. In this case, each citation may enhance the probability that additional physicists may read my paper in the future, and that each reader may in turn cite the paper. If this is so, it may be difficult to predict whether my paper will languish in low readership and citation numbers, or whether citations may “snowball” and result in very large readership.
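
A quick numerical check of this contrast (the distribution choices are ours, for illustration): for Gaussian samples the measured standard deviation settles down as the sample grows, while for a fat-tailed Pareto sample with tail exponent 1.5, whose theoretical variance is infinite, it never stabilizes.

```python
import numpy as np

rng = np.random.default_rng(42)
for n in (10**3, 10**4, 10**5, 10**6):
    normal = rng.normal(size=n)           # thin-tailed benchmark
    fat = rng.pareto(1.5, size=n)         # fat tail: variance is infinite
    print(f"n={n:>7}: std(normal)={normal.std():5.3f}  "
          f"std(pareto)={fat.std():9.3f}")
# The Gaussian column converges to 1.0; the Pareto column keeps jumping.
```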

[Figure c: normal versus fat-tailed distributions]

Related reading:

Nassim Taleb (2007), The Black Swan: The Impact of the Highly Improbable, ch. 14: “From Mediocristan to Extremistan and Back”; ch. 15: “The Bell Curve, That Great Intellectual Fraud.”

The Politics of Numbers and Probabilities

In order to count the number of instances or members within a given category, we must agree on the definition of the category, as well as how to separate distinct instances. If stakeholders agree on these, then quantitative evidence may appear to be “apolitical.” However, when stakeholders disagree on how to categorize and differentiate occurrences of a phenomenon—for instance when they frame the phenomenon in different ways—then political choices inherent in quantification and mathematics become apparent.

For Example—imagine we want to compare the incidence rates of major depression and anxiety disorders within a population, and that the allocation of national health-care resources depends on this quantitative comparison. It turns out that these two disorders are notoriously difficult to differentiate, making it hard to determine which diagnosis fits a given patient. Further, even if we agree on how to differentiate the two disorders, it may be difficult to differentiate individual cases from one another. Do we count the number of patients suffering within a population? Or do we count the number of episodes of a disorder, such that a single patient may register several episodes? Different public-health practitioners may arrive at very different answers to what may seem like a simple question of arithmetic.
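
The arithmetic itself is trivial; the politics lives in the definitions. The toy records below are invented to make the point: the same data supports a count of three “cases” or two, depending on whether we count episodes or patients.

```python
# Hypothetical clinical records: one patient with two depressive episodes
records = [
    {"patient": "A", "diagnosis": "depression"},
    {"patient": "A", "diagnosis": "depression"},  # second episode, same patient
    {"patient": "B", "diagnosis": "anxiety"},
    {"patient": "C", "diagnosis": "depression"},
]

depression = [r for r in records if r["diagnosis"] == "depression"]
print("episodes of depression:", len(depression))                            # 3
print("patients with depression:", len({r["patient"] for r in depression}))  # 2
```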

Similar considerations pertain to probabilistic claims. In order to write down a probability distribution, we must agree on a space of possible future outcomes, as well as on how that space is delimited or parsed into distinct outcomes.

Tragedy of the Commons and Collective Action

The tragedy of the commons is a situation in a shared-resource system where individual users acting independently according to their own self-interest behave contrary to the common good of all users by depleting or spoiling that resource through their collective action. If each member changes their consumption patterns, the resource may be conserved, but that would require collective action. Similar to the prisoner’s dilemma, the tragedy of the commons concept helps place individual and collective interests in relation to one another, and illustrate where they may be in opposition. The fact that action must be collective in order to overcome these dilemmas is one of the primary challenges of governance.
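
A minimal simulation of the dilemma (all numbers illustrative): a shared pasture regrows logistically, ten herders each extract a fixed amount per year, and the only difference between the two runs is whether total extraction stays within the regrowth rate.

```python
def final_stock(harvest_per_user, users=10, stock=100.0, years=30):
    # Shared pasture: total extraction, then logistic regrowth, each year
    for _ in range(years):
        stock = max(0.0, stock - users * harvest_per_user)
        stock += 0.25 * stock * (1 - stock / 100.0)
    return stock

print(f"self-interested harvest (3.0 each): stock = {final_stock(3.0):6.1f}")  # collapses to 0
print(f"collective quota (0.5 each):        stock = {final_stock(0.5):6.1f}")  # sustained
```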

Related reading:

Garrett Hardin (1968), “The Tragedy of the Commons.”

Robert Axelrod (1984), The Evolution of Cooperation.

Herman Daly et al. (2007), “Are We Consuming Too Much — For What?,” Conservation Biology, 21:5:1359–1362.

The Precautionary Principle

The precautionary principle states that if an action or policy has a suspected risk of causing severe harm to the public domain, the action should not be taken in the absence of scientific near-certainty about its safety. Under these conditions, the burden of proof about absence of harm falls on those proposing an action, not those opposing it. This is in contrast to many risk-analytic approaches, in which the burden of proof is to demonstrate that the risk outweighs the benefits.

The precautionary principle is intended to deal with uncertainty and risk in cases where the absence of evidence and the incompleteness of scientific knowledge carry profound implications, and in the presence of risks of “black swans”: unforeseen and unforeseeable events of extreme consequence (definition adapted from Taleb et al. (2014)).

Related reading:

Nassim Taleb et al. (2014), “The Precautionary Principle (with Application to Genetic Modification of Organisms),” Extreme Risk Initiative, NYU School of Engineering Working Paper Series.

David Kriebel et al. (2001), “The Precautionary Principle in Environmental Science,” Environmental Health Perspectives, 109:9.

Additional Black-box Exercises

Gene-drive technology and the re-making of natural and public spaces (complex systems, interpretive flexibility and construction of publics). Gene drive proposals made to the residents of two small islands (Buchthal et al., 2019) were compared as experiments in natural and civic design. The island communities are relatively monocultural and yet distinct, and thus provide good opportunities to examine interpretive flexibility of gene drive technology. Students compared the risks and opportunities of altering the genomes of wild species, and how those were perceived from the perspectives of residents on each island. They were then asked to imagine and evaluate possible mechanisms for containment or propagation of genotypic edits and their social meanings within and beyond island confines, and to deliberate whether the forms of public engagement carried out in these specific locations might be applicable in more general contexts.

Media curation from music to political discourse (construction of subjects and publics, politics of numbers). This exercise began by illustrating the content-curation algorithms used by Spotify, YouTube and Facebook to suggest content to users. The general concept of the “filter bubble” was discussed, alongside mechanisms of political polarization in the United States. Students then explored how these algorithms allowed Cambridge Analytica and other entities to use “psycho-graphic techniques” to optimize traffic to extreme political content during the run-up to the 2016 presidential election (Osnos, 2018).

Writing down an equation to optimize human welfare (collective action, politics of numbers). A mathematical expression for inter-temporal welfare was derived from “first principles” (Arrow et al., 2004), and students considered how resources might be distributed across time to “maximize” that expression. They then unpacked those “first principles” and were asked to identify frames, metaphors and ethical values embedded therein. Students were able to quickly identify a frame of “fiscal responsibility” that underlies the mathematics of intertemporal welfare, and they discussed the cultural origins and implications of that ethical sensibility. They were also able to identify several important ethical issues that were washed out of the equation, such as economic inequality across peoples at a given time. The exercise was repeated on alternative frameworks for resource distribution across space and time (Daly et al., 2007).
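
For reference, the intertemporal welfare expression at the center of this exercise takes the standard discounted-utilitarian form used in Arrow et al. (2004); the rendering below is schematic and the notation is ours, not the exercise handout’s:

$$ V(t) = \int_{t}^{\infty} U\big(C(s)\big)\, e^{-\delta (s-t)}\, ds $$

where C(s) is aggregate consumption at time s, U is a utility function, and \delta is the pure rate of time preference. Much of the “fiscal responsibility” frame that students identified is carried by the discount factor e^{-\delta(s-t)}, which fixes how heavily future welfare counts against the present, while distributional questions among people living at the same time never appear in the expression at all.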

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Lawrence, C., Jasanoff, S., Evans, S.W. et al. Ethics Inside the Black Box: Integrating Science and Technology Studies into Engineering and Public Policy Curricula. Sci Eng Ethics 29, 23 (2023). https://doi.org/10.1007/s11948-023-00440-6

