Abstract
In the 1960s, without realizing it, AI researchers were hard at work finding the features, rules, and representations needed for turning rationalist philosophy into a research program, and in so doing they condemned their enterprise to failure. Around the same time, the logician Yehoshua Bar-Hillel pointed out that AI optimism was based on what he called the “first step fallacy”. First step thinking has the idea of a successful last step built in. Limited early success, however, is not a valid basis for predicting the ultimate success of one’s project. Climbing a hill should not give one any assurance that by keeping on going one will reach the sky; one may have overlooked some serious problem lying ahead. There is, in fact, no reason to think that we are making progress towards AI or, indeed, that AI is even possible, in which case claiming incremental progress towards it would make no sense. In the current excited waiting for the singularity, religion and technology converge. Hard-headed materialists desperately yearn for a world where our bodies no longer have to grow old and die. Our bodies will be transformed into information, just as Google digitizes old books, and we will achieve the promise of eternal life. As an existential philosopher, however, I suggest that we may have to overcome the desperate desire to digitize our bodies so as to achieve immortality, and instead face up to, and maybe even enjoy, our embodied finitude.
Cite this article
Dreyfus, H.L. A History of First Step Fallacies. Minds & Machines 22, 87–99 (2012). https://doi.org/10.1007/s11023-012-9276-0