
What Does “Ethical by Design” Mean?

Reflections on Artificial Intelligence for Humanity

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12600)

Abstract

Artificial Intelligence (AI) is now integrated, or on its way to being integrated, into various aspects of our lives, both professional and personal, at a public as well as a private level. The advent of industrial robots and, more recently, of companion robots, together with the ubiquity of cell phones and tablets, reflects a day-to-day interaction between humans and machines. Use of these machines has brought about profound transformations in our social behaviors, even in our minds, and as such raises many practical ethical questions about our technological choices, and several meta-ethical ones besides.


Notes

  1.

    See for example https://paperjam.lu/article/ethics-by-design-et-intelligen.

  2.

    In Dignum, et al. (2018), for example, Ethics by Design appears conflated with the question “Can we, and should we, build ethically-aware agents?” See https://prima2017.gforge.uni.lu/ethics.html and https://orbilu.uni.lu/bitstream/10993/38926/1/p60-dignum.pdf.

  3.

    The expression “Ethics by Design” was notably employed (in English) by French President Emmanuel Macron during a conference at the Collège de France in March 2018. https://www.elysee.fr/emmanuel-macron/2018/03/29/discours-du-president-de-la-republique-sur-lintelligence-artificielle.

  4.

    The European Commission’s policy expressed in its white paper (19 Feb 2020) states that “the Commission is of the view that it should follow a risk-based approach”. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

  5.

    See also chapters 3 and 9 of this book.

  6.

    Ann Cavoukian (2010), The 7 Foundational Principles: Implementation and Mapping of Fair Information Practices. Information and Privacy Commissioner of Ontario, www.privacybydesign.ca.

  7.

    On September 28, 2018, Facebook announced a security breach affecting 50 million user accounts. The Cambridge Analytica scandal, which broke in March 2018, differs insofar as Facebook allowed third-party applications to access personal information. Cambridge Analytica mined the personal data of 87 million unknowing Facebook users for political influence in the latest American elections. In 2015, the Hong Kong toy company VTech was the target of a data breach involving 4.8 million customers (parents and children) whose data was accessed via connected toys and devices. In France, more than 1.2 million accounts belonging to children were hacked; the data included full names, mailing addresses, dates of birth, email addresses and IP addresses.

  8.

    Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation - GDPR).

  9.

    The Privacy Impact Assessment (PIA) is a framework that has been developed over many years. A PIA is an instrument used to review information systems and technologies that process personal data. It facilitates processes of accountability, transparency and systemic improvement for enterprises and governments. The Data Protection Impact Assessment (DPIA) is now included in the GDPR. It focuses on the protection of the rights and freedoms of data subjects (human rights) and is seen as an element of the governance of technologies and research (Raab 2020).

  10.

    “The social consequences of a technology cannot be predicted early in the life of the technology. By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult. This is the dilemma of control. When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming” (Collingridge 1981, p. 11).

  11.

    Below are two examples demonstrating this variety of approaches: (i) “Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society)” (von Schomberg 2011, p. 9). (ii) “Responsible innovation is a new concept that builds on governance approaches and innovation assessments that aim to take these ethical and societal concerns into account at the start of the innovation process. The main idea behind responsible innovation is to democratize innovation and realize deliberative forms of governance such as stakeholder and public engagement. Stakeholders and members of the public are involved upstream in the innovation process and encouraged to deliberate about the multiple futures and uncertainties that the innovation could bring or seeks to bring. The upstream inclusion of stakeholders and the public, by deliberative forms of governance, can help to realize a collective responsibility to control and direct innovation into a direction that is ethically acceptable, societally desirable and sustainable” (Lubberink 2017, p. 2).

  12.

    This coheres with the European position, which in the 1990s championed the precautionary principle, which serves as the basis for the “no data, no market” rule applied in biotechnologies and nanotechnologies.

  13.

    Hippocratic Oath for Data scientists: https://hippocrate.tech/.

  14.

    See also chapter 10 of this book.

  15.

    As evidenced, for example, by Europe’s recent guidelines for the GDPR, but also by the French Health Data Hub, which tacitly relies on the questionable notion of ‘implicit consent’. For a more detailed account of the issue, see Bernelin (2019).

  16.

    This is also the point of view upheld by several foundations, including the Mozilla and Rockefeller foundations. See, for example: https://www.elementai.com/news/2019/supporting-rights-respecting-ai?utm_source=twitter&utm_medium=social&utm_campaign=Brand_GR&utm_content=human_rights_bloh_11/27/2019.

  17.

    https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

  18.

    See also chapter 2 of this book.

  19.

    IEEE: Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf.

  20.

    In Boston, it is said, half in jest and half seriously, and perhaps in an ironically revealing way, that Facebook finances Harvard, which is trying its hand at AI, and Microsoft finances MIT, which designs it, while BU has wound up poor but enjoys freedom of ideas, having no “boss”.

  21.

    See also chapter 6 of this book.

  22.

    Interestingly, several cities, notably San Francisco and Boston, have banned the use of facial recognition technology. In France, the debate launched by interviews with Cédric O in Le Monde on October 14, 2019 and in the December 24 edition of Le Parisien gave rise to heated disputes, for example in a piece published on January 6, 2020 in Libération titled “Nos droits et nos libertés ne sont pas à vendre” (Our Rights and Freedoms Are Not for Sale), calling for public debate rather than experimentation. https://www.liberation.fr/debats/2020/01/06/reconnaissance-faciale-nos-droits-et-nos-libertes-ne-sont-pas-a-vendre_1771600.

  23.

    Without, moreover, questioning the fact that this expression in some sense puts humans and machines on an equal footing or that here we use the term “evolution” to describe machines.

  24.

    See also chapter 13 of this book.

  25.

    See also chapter 3 of this book.

References

  • Anderson, M., Anderson, S.L., Armen, C.: MedEthEx: a prototype medical ethics advisor. In: Proceedings of the 18th Conference on Innovative Applications of Artificial Intelligence (IAAI 2006), vol. 2, pp. 1759–1765. AAAI Press (2006)

  • Arkin, R.: Governing Lethal Behavior in Autonomous Robots. Chapman and Hall/CRC Press, London (2009)

  • Avizienis, A., Laprie, J.-C., Randell, B., Landwehr, C.: Basic concepts and taxonomy of dependable and secure computing. IEEE Trans. Dependable Secure Comput. 1(1), 11–33 (2004)

  • Awad, E., et al.: The moral machine experiment. Nature 563(7729), 59–64 (2018)

  • Beauchamp, T., Childress, J.: Principles of Biomedical Ethics. Oxford University Press, Oxford (1979)

  • Beck, U.: La société du risque. Sur la voie d’une autre modernité. Flammarion, Paris (2008)

  • Bernelin, M.: Intelligence artificielle en santé: la ruée vers les données personnelles. Cités 80, 75–89 (2019)

  • Blok, V., Lemmens, P.: The emerging concept of responsible innovation. Three reasons why it is questionable and calls for a radical transformation of the concept of innovation. In: Koops, B.-J., Oosterlaken, I., Romijn, H., Swierstra, T., van den Hoven, J. (eds.) Responsible Innovation 2, pp. 19–35. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17308-5_2

  • Brey, P.: Ethics of emerging technologies. In: Hansson, S.O. (ed.) The Ethics of Technology: Methods and Approaches, pp. 175–192. Rowman & Littlefield International (2017)

  • Collingridge, D.: The Social Control of Technology. Palgrave Macmillan, London (1981)

  • Davies, S.R., Horst, M.: Responsible innovation in the US, UK and Denmark: governance landscapes. In: Koops, B.-J., Oosterlaken, I., Romijn, H., Swierstra, T., van den Hoven, J. (eds.) Responsible Innovation 2, pp. 37–56. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17308-5_3

  • Dignum, V.: Responsible Artificial Intelligence. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-30371-6

  • Eubanks, V.: Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor. St. Martin’s Press (2018)

  • European Commission: Horizon 2020 - The Framework Programme for Research and Innovation. Brussels (2011)

  • Genus, A., Stirling, A.: Collingridge and the dilemma of control: toward responsible and accountable innovation. Res. Policy 47, 61–69 (2018)

  • Gilligan, C.: In a Different Voice. Harvard University Press, Cambridge (1982)

  • Groves, C.: Care, Uncertainty and Intergenerational Ethics. Palgrave Macmillan, London (2014)

  • Hale, A., Kirwan, B., Kjellén, U.: Safe by design: where are we now? Saf. Sci. 45, 305–327 (2007)

  • Hartley, S., McLeod, C., Clifford, M., Jewitt, S., Ray, C.: A retrospective analysis of responsible innovation for low-technology innovation in the Global South. J. Respons. Innov. 6(2), 143–162 (2019)

  • Hoffmann, A.L.: Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Inf. Commun. Soc. 22(7), 900–915 (2019)

  • Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat. Mach. Intell. 1, 389–399 (2019)

  • Jonas, H.: Le principe responsabilité. Flammarion, Paris (2013)

  • Kelty, C.: Beyond implications and applications: the story of ‘safety by design’. NanoEthics 3(2), 79–96 (2009)

  • Kerr, A., Hill, R., Till, C.: The limits of responsible innovation: exploring care, vulnerability and precision medicine. Technol. Soc. 1–8 (2017)

  • Koops, B.-J., Oosterlaken, I., Romijn, H., Swierstra, T., van den Hoven, J. (eds.): Responsible Innovation 2: Concepts, Approaches and Applications. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-17308-5

  • Kraegeloh, A., Suarez-Merino, B., Sluijters, T., Micheletti, C.: Implementation of safe-by-design for nanomaterial development and safe innovation: why we need a comprehensive approach. Nanomaterials 8(4), 239 (2018)

  • Lubberink, R., Blok, V., van Ophem, J., Omta, O.: Lessons for responsible innovation in the business context: a systematic literature review of responsible, social and sustainable innovation practices. Sustainability 9(721), 1–31 (2017)

  • McCarthy, E., Kelty, C.: Responsibility and nanotechnology. Soc. Stud. Sci. 40(3), 405–432 (2010)

  • Nozick, R.: Anarchie, État et Utopie. Quadrige, Paris (1974)

  • Nurock, V.: Nanoethics: ethics for, from, or with nanotechnologies? Hyle 16(1), 31–42 (2010)

  • Nurock, V.: Généalogie de la morale mécanisée. In: Parizeau, M.-H., Kash, S. (eds.) Robots et sociétés: enjeux éthiques et politiques, pp. 31–50. Les Presses de l’Université Laval, Québec (2019)

  • O’Neil, C.: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown (2016)

  • Owen, R., Stilgoe, J., Gorman, M., Fischer, E., Guston, D.: A framework for responsible innovation. In: Owen, R., Bessant, J., Heintz, M. (eds.) Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, pp. 27–50. Wiley-Blackwell (2013)

  • Pavie, X.: The importance of responsible innovation and the necessity of ‘innovation-care’. Philos. Manage. 13(1), 21–42 (2014)

  • Pavie, X.: L’innovation à l’épreuve de la philosophie. PUF, Paris (2018)

  • Pavie, X., Carthy, D.: Leveraging uncertainty: a practical approach to the integration of responsible innovation through design thinking. Procedia Soc. Behav. Sci. 213, 1040–1049 (2015)

  • Pellé, S., Reber, B.: Responsible innovation in the light of moral responsibility. J. Chain Netw. Sci. 15(2), 107–117 (2015)

  • Raab, C.: Information privacy, impact assessment and the place of ethics. Comput. Law Secur. Rev. 37, 105404 (2020)

  • Rawls, J.: A Theory of Justice. Belknap Press (1971)

  • Thomas, P.S., Castro da Silva, B., Barto, A., Giguere, S., Brun, Y., Brunskill, E.: Preventing undesirable behavior of intelligent machines. Science 366(6468), 999–1004 (2019)

  • Tronto, J.: Caring Democracy. NYU Press, New York (2013)

  • Turner, C.: Science, on coupe! Boréal, Montréal (2013)

  • Van de Poel, I.: An ethical framework for evaluating experimental technology. Sci. Eng. Ethics 22, 667–686 (2015). https://doi.org/10.1007/s11948-015-9724-3

  • Van de Poel, I.: Society as a laboratory to experiment with new technologies. In: Bowman, D.M., Stokes, E., Rip, A. (eds.) Embedding New Technologies into Society: A Regulatory, Ethical and Societal Perspective, pp. 62–86. Pan Stanford Publishing (2017)

  • Van de Poel, I., Robaey, Z.: Safe-by-design: from safety to responsibility. NanoEthics 11(3), 297–306 (2017). https://doi.org/10.1007/s11569-017-0301-x

  • Van de Poel, I.: Design for value change. Ethics Inf. Technol. 1–5 (2018)

  • Verbeek, P.-P.: Materializing morality. Sci. Technol. Human Values 31(3), 361–380 (2006)

  • Verbeek, P.-P.: Values that matter: mediation theory and design values. In: Academy for Design Innovation Management: Research Perspectives in the Area of Transformations Conference, London, pp. 396–407 (2019)

  • von Schomberg, R.: A vision of responsible innovation. In: Owen, R., Heintz, M., Bessant, J. (eds.) Responsible Innovation. John Wiley, London (2011)

  • Wong, P.-H.: Responsible innovation for decent nonliberal peoples: a dilemma? J. Respons. Innov. 3(2), 154–168 (2016)

  • Zou, J., Schiebinger, L.: Design AI so that it’s fair. Nature 559, 324–325 (2018)


Corresponding author

Correspondence to Vanessa Nurock.


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Nurock, V., Chatila, R., Parizeau, M.-H. (2021). What Does “Ethical by Design” Mean? In: Braunschweig, B., Ghallab, M. (eds) Reflections on Artificial Intelligence for Humanity. Lecture Notes in Computer Science, vol 12600. Springer, Cham. https://doi.org/10.1007/978-3-030-69128-8_11


  • DOI: https://doi.org/10.1007/978-3-030-69128-8_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-69127-1

  • Online ISBN: 978-3-030-69128-8

