Deconstructing controversies to design a trustworthy AI future

  • Original Paper
  • Published:
Ethics and Information Technology

Abstract

Technology policy needs to be receptive to different social needs and realities to ensure that innovations are both ethically developed and accessible. This article proposes a new method to integrate social controversies into foresight scenarios as a means to enhance the trustworthiness and inclusivity of policymaking around Artificial Intelligence. Foresight exercises are used to anticipate future technological challenges and to inform policy development; however, the integration of social controversies within these exercises remains an unexplored area. This article aims to bridge this gap by providing insights and guidelines on why and how to incorporate social controversies into the design of foresight scenarios. We emphasize the importance of considering social controversies because they allow us to understand non-mainstream perspectives, re-balance power dynamics, de-black-box technologies, and hold policymaking processes accountable and open to listening to different social needs. Building on empirical research, we present a step-by-step method that involves identifying the key policy challenges and relevant controversies related to an emerging technology, deconstructing the identified controversies, and mapping them onto future scenarios to test policy options and build policy roadmaps. Furthermore, we discuss the importance of strategically engaging the stakeholders involved, including affected communities, civil society organizations, and experts, to ensure a comprehensive and inclusive perspective. Finally, we showcase the application of the method in popAI, an EU-funded project on the use of AI in law enforcement.

Notes

  1. The process of repurposing classifiers involves adapting a pre-trained classifier to recognize and detect specific patterns, features, or objects in a different application or problem domain (a minimal illustrative sketch follows these notes).

  2. Trust is understood as an attitude of the trustor, and trustworthiness as a property of the trustee (Jacovi et al., 2021).

  3. The concept of human-in-the-loop refers to different types of interaction that can occur between humans and algorithms (Mosqueira-Rey et al., 2023).

  4. https://www.pop-ai.eu/

  5. The full report comprising the scenarios is available on the popAI website under “D3.5: Foresight scenarios for AI in policing”.

  6. More information can be found on the popAI website, under “D5.8—popAI roadmaps”.
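
As a brief illustration of note 1, the sketch below shows one common way to repurpose a pre-trained image classifier for a new problem domain: the pre-trained feature extractor is frozen and only a new output layer is trained on data from the new domain. This is a minimal sketch assuming PyTorch and torchvision are available; the five-class target task and training setup are hypothetical placeholders, not part of the popAI work.

    # Minimal sketch: repurposing a pre-trained classifier for a new domain.
    # Assumes PyTorch/torchvision; the 5-class target task is hypothetical.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a classifier pre-trained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pre-trained feature extractor so its weights are not updated.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer so the model predicts the new domain's classes.
    num_new_classes = 5  # hypothetical number of classes in the new task
    model.fc = nn.Linear(model.fc.in_features, num_new_classes)

    # Only the new output layer is optimized on data from the new domain.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()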

References

Acknowledgements

Part of the work presented in this paper was conducted in the context of the popAI project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101022001. We would like to express our gratitude to the reviewers for their invaluable feedback and constructive criticism, which significantly contributed to improving the quality and clarity of this paper.

Author information

Corresponding author

Correspondence to Francesca Trevisan.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Trevisan, F., Troullinou, P., Kyriazanos, D. et al. Deconstructing controversies to design a trustworthy AI future. Ethics Inf Technol 26, 35 (2024). https://doi.org/10.1007/s10676-024-09771-9

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10676-024-09771-9

Keywords