On conflicts between ethical and logical principles in artificial intelligence

  • Open Forum
  • Published in: AI & SOCIETY

Abstract

Artificial intelligence is now a reality. Setting rules on the potential outcomes of intelligent machines, so that their behavior holds no surprises for humans, is becoming a priority for policy makers. In its recent Communication “Artificial Intelligence for Europe” (EU Commission 2018), for instance, the European Commission identifies the distinguishing trait of an intelligent machine as the presence of “a certain degree of autonomy” in decision making, in the light of the context. The crucial issue to be addressed is, therefore, whether it is possible to identify a set of rules for data use by intelligent machines so that their decision-making autonomy both preserves humans’ traditional informational self-determination (humans provide machines only with the data they decide to share), as enshrined in many existing legal frameworks (including, for personal data protection, the EU’s General Data Protection Regulation) (EU Parliament and Council 2016), and can actually turn out to be further beneficial to individuals. Governing the autonomy of machines can be a very ambitious goal for humans, since machines are geared first to the principles of formal logic and only then, possibly, to ethical or legal principles. This introduces an unprecedented degree of complexity in how a norm should be engineered, which in turn requires an in-depth reflection in order to prevent conflicts between the legal and ethical principles underlying humans’ civil coexistence and the rules of formal logic upon which the functioning of machines is based (EU Parliament 2017).


References

  • Acquisti A, Grossklags J (2004) Privacy attitudes and privacy behavior—losses, gains, and hyperbolic discounting. Economics of Information Security, pp 165–178

  • Association for Computing Machinery (ACM) U.S. Public Policy Council (2017) Algorithmic transparency and accountability, discussion panel event, 14 September 2017

  • D’Acquisto G, Naldi M (2017) Big data e privacy by design. Anonimizzazione, Pseudonimizzazione, Sicurezza, Giappichelli

  • de La Boëtie E (1576) Discours de la Servitude Volontaire

  • Elzayn H, Jabbari S, Jung C, Kearns M, Neel S, Roth A, Schutzman Z (2019) Fair algorithms for learning in allocation problems, ACM conference on fairness, accountability and transparency

  • EU Commission (2018) Communication from the Commission to the European Parliament, the European Council, the Council, the European economic and social committee and the Committee of the regions, Artificial Intelligence for Europe, COM/2018/0237

  • EU Commission (2019) The European Commission’s high-level expert group on artificial intelligence, ethics guidelines for trustworthy AI

  • EU Parliament (2017) European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))

  • EU Parliament (2019) Report on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088(INI))

  • EU Parliament and Council (2016) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC

  • Gödel K (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I. Monatshefte für Mathematik und Physik, 38

  • Hadfield-Menell D, Dragan A, Abbeel P, Russell S (2017) The off-switch game. In: International joint conference on artificial intelligence

  • IEEE Ethically Aligned Design (2019) A vision for prioritizing human well-being with autonomous and intelligent systems, March 2019

  • MacKay DJC (1992) Bayesian Interpolation. In: Smith CR, Erickson GJ, Neudorfer PO (eds) Maximum entropy and bayesian methods. Fundamental Theories of Physics (An International Book Series on The Fundamental Theories of Physics: Their Clarification, Development and Application), vol 50. Springer, Dordrecht, pp 36–66

  • Odlyzko A (2019) Cybersecurity is not very important. ACM Ubiquity, June 2019

  • Pasquale F (2015) The black box society: the secret algorithms that control money and information. Harvard University Press, Cambridge, MA

  • Russell S (2017) Provably beneficial artificial intelligence. OECD conference “AI: intelligent machines, smart policies”, Paris 26–27 Oct 2017

  • Severino E (1988) La tendenza fondamentale del nostro tempo. Adelphi

  • Silver D, Huang A, Maddison CJ, Guez A, Sifre L, van den Driessche G, Schrittwieser J, Antonoglou I, Panneershelvam V, Lanctot M, Dieleman S, Grewe D, Nham J, Kalchbrenner N, Sutskever I, Lillicrap T, Leach M, Kavukcuoglu K, Graepel T, Hassabis D (2016) Mastering the game of Go with deep neural networks and tree search. Nature 529

  • Sover A (2018) The languages of humor: verbal, visual, and physical humor. Bloomsbury Academic


Author information

Corresponding author

Correspondence to Giuseppe D’Acquisto.

Ethics declarations

Conflict of interest

The views and opinions expressed in this article are the author’s only; they cannot be considered to reflect the views of the Garante per la protezione dei dati personali.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

D’Acquisto, G. On conflicts between ethical and logical principles in artificial intelligence. AI & Soc 35, 895–900 (2020). https://doi.org/10.1007/s00146-019-00927-6

