Abstract
Artificial intelligence (AI) experts are currently divided into “presentist” and “futurist” factions that call for attention to near-term and long-term AI, respectively. This paper argues that the presentist–futurist dispute is not the best focus of attention. Instead, the paper proposes a reconciliation between the two factions based on a mutual interest in AI. The paper further proposes a realignment into two new factions: an “intellectualist” faction that seeks to develop AI for intellectual reasons (as found in the traditional norms of computer science) and a “societalist” faction that seeks to develop AI for the benefit of society. The paper argues in favor of societalism and offers three means of concurrently addressing societal impacts from near-term and long-term AI: (1) advancing societalist social norms, thereby increasing the proportion of AI researchers who seek to benefit society; (2) technical research on how to make any AI more beneficial to society; and (3) policy to improve the societal benefits of all AI. In practice, it will often be advantageous to emphasize near-term AI due to the greater interest in near-term AI among AI and policy communities alike. However, presentist and futurist societalists can benefit from each other’s advocacy for attention to the societal impacts of AI. Reconciliation between the presentist and futurist factions can improve both the near-term and long-term societal impacts of AI.
Notes
The racial bias Crawford describes comes from the investigative journalism of Angwin et al. (2016).
One can argue that the AI itself is situated within society, and thus that AI researchers inevitably work on societal issues even when they believe they are working only on the AI itself. For present purposes, however, what matters is that the AI researchers believe they are focusing on the AI itself and not on societal issues, even if they are inadvertently working on the latter.
As just one of many other examples, see Arkin (2009) on societal issues associated with military robotics.
Again, as with any ethical position, the claim that people should help society is not universally held. A broader defense of this claim is beyond the scope of this paper.
The world still has slavery and racism and sexism, but not as much as it once did. For example, while the United States continues to grapple with a variety of racial biases, it has become unthinkable to support the “separate but equal” racial segregation of the former “Jim Crow” laws.
References
Amodei D, Olah C, Steinhardt J, Christiano P, Schulman J, Mané D (2016) Concrete problems in AI safety. arXiv:1606.06565
Angwin J, Larson J, Mattu S, Kirchner L (2016) Machine bias. ProPublica
Arkin RC (2009) Ethical robots in warfare. IEEE Technol Soc Mag 28(1):30–33
Baum SD (2015) The far future argument for confronting catastrophic threats to humanity: practical significance and alternatives. Futures 72:86–96
Bohannon J (2015) Fears of an AI pioneer. Science 349(6245):252
Bostrom N (2003) Astronomical waste: the opportunity cost of delayed technological development. Utilitas 15(3):308–314
Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford
Calo R (2011) Open robotics. Md Law Rev 70(3):571–613
Conn A (2016a) The White House considers the future of AI. Future of Life Institute
Conn A (2016b) Transcript: Concrete problems in AI safety with Dario Amodei and Seth Baum. Future of Life Institute
Crawford K (2016) Artificial intelligence’s white guy problem. The New York Times
Dafoe A, Russell S (2016) Yes, we are worried about the existential risk of artificial intelligence. MIT Technology Review
Etzioni O (2016) No, the experts don’t think superintelligent AI is a threat to humanity. MIT Technology Review
Funkhouser K (2013) Paving the road ahead: autonomous vehicles, products liability, and the need for a new approach. Utah Law Rev 2013(1):437–462
Future of Life Institute (no date) AI activities. https://futureoflife.org/ai-activities. Accessed 2 May 2017
Garling C (2015) Andrew Ng: Why ‘deep learning’ is a mandate for humans, not just machines. Wired
Goertzel B (2014) Artificial general intelligence: concept, state of the art, and future prospects. J Artif Gen Intell 5(1):1–48
Goertzel B, Pennachin C (eds) (2007) Artificial general intelligence. Springer, New York
Good IJ (1965) Speculations concerning the first ultraintelligent machine. Adv Comput 6:31–88
Hackett R (2016) Watch Elon Musk divulge his biggest fear about artificial intelligence. Fortune
Hammond DN (2015) Autonomous weapons and the problem of state accountability. Chic J Int Law 15:652–687
Hanson R (2016) The age of em: work, love, and life when robots rule the earth. Oxford University Press, Oxford
Hawking S, Tegmark M, Russell S, Wilczek F (2014) Transcending complacency on superintelligent machines. The Huffington Post
Hern A (2016) Stephen Hawking: AI will be ‘either best or worst thing’ for humanity. The Guardian
Koopmans TC (1974) Proof for a case where discounting advances the doomsday. Rev Econ Stud 41:117–120
Kurzweil R (2006) The singularity is near: when humans transcend biology. Viking, New York
Legg S (2008) Machine super intelligence. Doctoral dissertation, University of Lugano
McGinnis JO (2010) Accelerating AI. Northwest Univ Law Rev 104:366–381
Moses LB (2007) Recurring dilemmas: the law’s race to keep up with technological change. Univ Ill J Law Technol Policy 2007(2):239–285
Nilsson NJ (2010) The quest for artificial intelligence: a history of ideas and achievements. Cambridge University Press, Cambridge
Price H (2013) Cambridge, cabs and Copenhagen: my route to existential risk. The New York Times
Ramsey FP (1928) A mathematical theory of saving. Econ J 38(152):543–559
Schienke EW, Tuana N, Brown DA, Davis KJ, Keller K, Shortle JS, Stickler M, Baum SD (2009) The role of the NSF Broader Impacts Criterion in enhancing research ethics pedagogy. Soc Epistemol 23(3–4):317–336
Scruggs L, Benegal S (2012) Declining public concern about climate change: can we blame the great recession? Glob Environ Change 22(2):505–515
Selin C (2007) Expectations and the emergence of nanotechnology. Sci Technol Hum Values 32(2):196–220
Shapin S (2010) Never pure: historical studies of science as if it was produced by people with bodies, situated in time, space, culture, and society, and struggling for credibility and authority. Johns Hopkins University Press, Baltimore
Weber EU (2006) Experience-based and description-based perceptions of long-term risk: why global warming does not scare us (yet). Clim Change 77(1–2):103–120
Wilson G (2013) Minimizing global catastrophic and existential risks from emerging technologies through international law. Va Environ Law J 31:307–364
Cite this article
Baum, S.D. Reconciliation between factions focused on near-term and long-term artificial intelligence. AI & Soc 33, 565–572 (2018). https://doi.org/10.1007/s00146-017-0734-3