Reframing AI Discourse

Abstract

A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A great deal of fear and concern about uncontrollable AI is now being displayed in public discourse, and public understanding of AI is being shaped in a way that may ultimately impede AI research. The discourse, both among the public and among AI researchers, gives rise to at least two problems: a confusion about the notion of ‘autonomy’ that leads people to attribute to machines something comparable to human autonomy, and a ‘sociotechnical blindness’ that hides the essential role played by humans at every stage of the design and deployment of an AI system. Our purpose here is to develop and use language that reframes AI discourse and sheds light on the real issues in the discipline.



Author information

Correspondence to Mario Verdicchio.

Cite this article

Johnson, D.G., Verdicchio, M. Reframing AI Discourse. Minds & Machines 27, 575–590 (2017). https://doi.org/10.1007/s11023-017-9417-6
