Abstract
Technology policy needs to be receptive to different social needs and realities to ensure that innovations are both ethically developed and accessible. This article proposes a new method for integrating social controversies into foresight scenarios as a means to enhance the trustworthiness and inclusivity of policymaking around Artificial Intelligence. Foresight exercises are used to anticipate future technological challenges and to inform policy development; however, the integration of social controversies within these exercises remains an unexplored area. This article aims to bridge this gap by providing insights and guidelines on why and how we should incorporate social controversies into the design of foresight scenarios. We emphasize the importance of considering social controversies because they allow us to understand non-mainstream perspectives, re-balance power dynamics, de-black-box technologies, and make policymaking processes accountable and open to listening to different social needs. Building on empirical research, we present a step-by-step method that involves identifying the key policy challenges and relevant controversies related to an emerging technology, deconstructing the identified controversies, and mapping them onto future scenarios to test policy options and build policy roadmaps. Furthermore, we discuss the importance of strategically engaging the stakeholders involved, including affected communities, civil society organizations, and experts, to ensure a comprehensive and inclusive perspective. Finally, we showcase the application of the method in popAI, an EU-funded project on AI use in law enforcement.


Notes
The process of repurposing classifiers involves adapting a pre-trained classifier to recognize and detect specific patterns, features, or objects in a different application or problem domain.
Trust as an attitude of the trustor and trustworthiness as a property of the trustee (Jacovi et al., 2021).
The concept of human-in-the-loop refers to different types of interaction that can occur between humans and algorithms (Mosqueira-Rey et al., 2023).
The full report comprising the scenarios is available on the popAI website under “D3.5: Foresight scenarios for AI in policing”.
More information can be found on the popAI website, under “D5.8—popAI roadmaps”.
References
Abelson, J., Forest, P.-G., Eyles, J., Smith, P., Martin, E., & Gauvin, F.-P. (2003). Deliberations about deliberative methods: Issues in the design and evaluation of public participation processes. Social Science & Medicine, 57(2), 239–251. https://doi.org/10.1016/S0277-9536(02)00343-X
AI HLEG. (2019). Ethics guidelines for trustworthy AI. European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Armstrong, S., & Sotala, K. (2015). How we’re predicting AI—or failing to. In J. Romportl, E. Zackova, & J. Kelemen (Eds.), Beyond artificial intelligence (pp. 11–29). Springer International Publishing.
Benifei, B., & Tudorache, D. (2023). Draft compromise amendments on the draft report. European Parliament. https://www.europarl.europa.eu/resources/library/media/20230516RES90302/20230516RES90302.pdf
Bonaccorsi, A., Apreda, R., & Fantoni, G. (2020). Expert biases in technology foresight: Why they are a problem and how to mitigate them. Technological Forecasting and Social Change. https://doi.org/10.1016/j.techfore.2019.119855
Bostrom, A., Demuth, J. L., Wirz, C. D., Cains, M. G., Schumacher, A., Madlambayan, D., Bansal, A. S., Bearth, A., Chase, R., Crosman, K. M., Ebert-Uphoff, I., Gagne, D. J., Guikema, S., Hoffman, R., Johnson, B. B., Kumler-Bonfanti, C., Lee, J. D., Lowe, A., McGovern, A., & Williams, J. K. (2023). Trust and trustworthy artificial intelligence: A research agenda for AI in the environmental sciences. Risk Analysis. https://doi.org/10.1111/risa.14245
Bourdieu, P. (1986). The forms of capital. In J. G. Richardson (Ed.), Handbook of theory and research for the sociology of education (pp. 241–258). Greenwood.
Bourke, B. (2014). Positionality: Reflecting on the research process. The Qualitative Report, 19(33), 1–9.
Bradford, B., Yesberg, J. A., Jackson, J., & Dawson, P. (2020). Live facial recognition: Trust and legitimacy as predictors of public support for police use of new technology. The British Journal of Criminology. https://doi.org/10.1093/bjc/azaa032
Bryson, J. M. (2004). What to do when stakeholders matter: Stakeholder identification and analysis techniques. Public Management Review, 6(1), 21–53. https://doi.org/10.1080/14719030410001675722
Burgess, A. (2004). Cellular phones, public fears, and a culture of precaution. Cambridge University Press.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 205395171562251. https://doi.org/10.1177/2053951715622512
Christensen, J. (2021). Expert knowledge and policymaking: A multi-disciplinary research agenda. Policy & Politics, 49(3), 455–471. https://doi.org/10.1332/030557320X15898190680037
Council of Europe. (2023). The council of Europe and artificial intelligence. https://rm.coe.int/brochure-artificial-intelligence-en-march-2023-print/1680aab8e6
Dougherty, G. W., & Easton, J. (2011). Appointed public volunteer boards: Exploring the basics of citizen participation through boards and commissions. The American Review of Public Administration, 41(5), 519–541. https://doi.org/10.1177/0275074010385838
Elbanna, A. (2011). Applying actor network theory and managing controversy. Information systems theory: Explaining and predicting our digital economy (pp. 117–129). Springer.
European Commission. (2020). 2020 Strategic foresight report. https://eur-lex.europa.eu/legalcontent/EN/TXT/?qid=1601279942481&uri=CELEX%3A52020DC0493
European Commission. (2021). Proposal for a regulation of the European parliament and of the council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
European Defence Agency. (2014). Technology watch & foresight. https://eda.europa.eu/what-we-do/allactivities/activities-search/technology-watch-foresight
European Defence Agency. (2021). EDA technology foresight exercise 2021. https://eda.europa.eu/docs/default-source/documents/eda-technology-foresight-exercise-(2021)---methodology88ffba3fa4d264cfa776ff000087ef0f.pdf
European Foresight Platform. (2009). Scenario method. http://foresight-platform.eu/community/forlearn/how-to-do-foresight/methods/scenario/
European Parliament. (2021). European Parliament resolution of 6 October 2021 on artificial intelligence in criminal law and its use by the police and judicial authorities in criminal matters. European Parliament. https://www.europarl.europa.eu/doceo/document/TA-9-2021-0405_EN.html
European Parliament. (2023). Artificial intelligence act: Deal on comprehensive rules for trustworthy AI. https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai
Foucault, M. (1975). Discipline and punish. Gallimard.
Fung, A., & Wright, E. O. (2001). Deepening democracy: Innovations in empowered participatory governance. Politics & Society, 29(1), 5–41. https://doi.org/10.1177/0032329201029001002
Gruetzemacher, R., Dorner, F. E., Bernaola-Alvarez, N., Giattino, C., & Manheim, D. (2021). Forecasting AI progress: A research agenda. Technological Forecasting and Social Change, 170, 120909. https://doi.org/10.1016/j.techfore.2021.120909
Isaak, J., & Hanna, M. J. (2018). User data privacy: Facebook, Cambridge analytica, and privacy protection. Computer, 51(8), 56–59. https://doi.org/10.1109/MC.2018.3191268
Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 624–635. http://arxiv.org/abs/2010.07487
Jasanoff, S. (2015). Future imperfect: Science, technology and the imaginations of modernity. Dreamscapes of modernity sociotechnical imaginaries and the fabrication of power. The University of Chicago Press.
Jasanoff, S., & Hurlbut, J. B. (2018). A global observatory for gene editing. Nature, 555(7697), 435–437. https://www.nature.com/articles/d41586-018-03270-w
Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399.
Jolivet, E., & Heiskanen, E. (2010). Blowing against the wind—An exploratory application of actor network theory to the analysis of local controversies and participation processes in wind energy. Energy Policy, 38(11), 6746–6754. https://doi.org/10.1016/j.enpol.2010.06.044
Klosowski, T. (2022). How mobile phones became a privacy battleground. The New York Times. https://www.nytimes.com/wirecutter/blog/protect-your-privacy-in-mobile-phones/
Kovic, M., Rauchfleisch, A., Sele, M., & Caspar, C. (2018). Digital astroturfing in politics: Definition, typology, and countermeasures. Studies in Communication Sciences. https://doi.org/10.24434/j.scoms.2018.01.005
Latour, B. (1991). Technology is society made durable. A sociology of monsters: Essays on power, technology and domination (pp. 103–131). Routledge.
Latour, B. (1997). The trouble with actor-network theory. Philosophia, 25(3–4), 47–64.
Latour, B. (2007). Reassembling the social: An introduction to actor-network-theory. Oxford University Press.
Macq, H., Tancoigne, É., & Strasser, B. J. (2020). From deliberation to production: Public participation in science and technology policies of the European Commission (1998–2019). Minerva, 58(4), 489–512. https://doi.org/10.1007/s11024-020-09405-6
Martin, B. R. (2010). The origins of the concept of ‘foresight’ in science and technology: An insider’s perspective. Technological Forecasting and Social Change, 77(9), 1438–1447. https://doi.org/10.1016/j.techfore.2010.06.009
Marres, N. (2015). Material participation. Technology, the environment and everyday publics. Palgrave Macmillan.
Marres, N. (2017). Digital sociology: The reinvention of social research. Wiley.
Mellers, B. A., McCoy, J. P., Lu, L., & Tetlock, P. E. (2023). Human and algorithmic predictions in geopolitical forecasting: Quantifying uncertainty in hard-to-quantify domains. Perspectives on Psychological Science. https://doi.org/10.1177/17456916231185339
Miles, I. (2010). The development of technology foresight: A review. Technological Forecasting and Social Change, 77(9), 1448–1456. https://doi.org/10.1016/j.techfore.2010.07.016
Mohamed, S., Png, M.-T., & Isaac, W. (2020). Decolonial AI: Decolonial theory as sociotechnical foresight in artificial intelligence. Philosophy & Technology, 33(4), 659–684. https://doi.org/10.1007/s13347-020-00405-8
Mosqueira-Rey, E., Hernández-Pereira, E., Alonso-Ríos, D., Bobes-Bascarán, J., & Fernández-Leal, Á. (2023). Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56(4), 3005–3054. https://doi.org/10.1007/s10462-022-10246-w
Muench, S., Stoermer, E., Jensen, K., Asikainen, T., Salvi, M., & Scapolo, F. (2022). Towards a green & digital future: Key requirements for successful twin transitions in the European Union. Joint Research Centre: JRC Science for Policy Report.
OECD. (2021). Database of national AI policies. https://oecd.ai
Oppenheim, R. (2007). Actor-network theory and anthropology after science, technology, and society. Anthropological Theory, 7(4), 471–493. https://doi.org/10.1177/1463499607083430
Ouchchy, L., Coin, A., & Dubljević, V. (2020). AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & Society, 35(4), 927–936. https://doi.org/10.1007/s00146-020-00965-5
Parasuraman, R., & Riley, V. (1997). Humans and automation: Use, misuse, disuse, abuse. Human Factors: The Journal of the Human Factors and Ergonomics Society, 39(2), 230–253. https://doi.org/10.1518/001872097778543886
Pinch, T. J., & Bijker, W. E. (1984). The social construction of facts and artefacts: Or how the sociology of science and the sociology of technology might benefit each other. Social Studies of Science, 14(3), 399–441.
Popper, R. (2008). How are foresight methods selected? Foresight, 10(6), 62–89. https://doi.org/10.1108/14636680810918586
Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You only look once: Unified, real-time object detection (arXiv:1506.02640). arXiv. http://arxiv.org/abs/1506.02640
Scharfbillig, M. (2022). Understanding values for policymaking: The challenges. European Commission, Knowledge for Policy. https://knowledge4policy.ec.europa.eu/blog/understanding-values-policymaking-challenges_en
Shipman, F. M., & Marshall, C. C. (2020). Ownership, privacy, and control in the wake of Cambridge Analytica: The relationship between attitudes and awareness. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. https://doi.org/10.1145/3313831.3376662
Sloane, M., Moss, E., Awomolo, O., & Forlano, L. (2020). Participation is not a design fix for machine learning. https://doi.org/10.48550/ARXIV.2007.02423
Synced. (2020). YOLO Creator Joseph Redmon Stopped CV Research Due to Ethical Concerns. https://syncedreview.com/2020/02/24/yolo-creator-says-he-stopped-cv-research-due-to-ethical-concerns/
Tetlock, P. E. (1992). Good judgment in international politics: Three psychological perspectives. Political Psychology, 13(3), 517. https://doi.org/10.2307/3791611
Tetlock, P. E., Horowitz, M. C., & Herrmann, R. (2012). Should “systems thinkers” accept the limits on political forecasting or push the limits? Critical Review, 24(3), 375–391. https://doi.org/10.1080/08913811.2012.767047
Trajtenberg, M. (2018). AI as the next GPT: a political-economy perspective. National Bureau of Economic Research.
Tsamados, A., Aggarwal, N., Cowls, J., Morley, J., Roberts, H., Taddeo, M., & Floridi, L. (2021). The ethics of algorithms: key problems and solutions. In L. Floridi (Ed.), Ethics, governance, and policies in artificial intelligence (pp. 97–123). Springer International Publishing.
Venturini, T. (2010). Diving in magma: How to explore controversies with actor-network theory. Public Understanding of Science, 19(3), 258–273. https://doi.org/10.1177/0963662509102694
Venturini, T., & Munk, A. K. (2021). Controversy mapping: A field guide. Polity Press.
Watkins, R., & Human, S. (2023). Needs-aware artificial intelligence: AI that ‘serves [human] needs.’ AI and Ethics, 3(1), 49–52. https://doi.org/10.1007/s43681-022-00181-5
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.
Wright, D., Stahl, B., & Hatzakis, T. (2020). Policy scenarios as an instrument for policymakers. Technological Forecasting and Social Change, 154, 119972. https://doi.org/10.1016/j.techfore.2020.119972
Young, M., Magassa, L., & Friedman, B. (2019). Toward inclusive tech policy design: A method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology, 21(2), 89–103. https://doi.org/10.1007/s10676-019-09497-z
Acknowledgements
Part of the work presented in this paper was conducted in the context of the popAI project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101022001. We would like to express our gratitude to the reviewers for their invaluable feedback and constructive criticism, which significantly contributed to improving the quality and clarity of this paper.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Trevisan, F., Troullinou, P., Kyriazanos, D. et al. Deconstructing controversies to design a trustworthy AI future. Ethics Inf Technol 26, 35 (2024). https://doi.org/10.1007/s10676-024-09771-9