Abstract
Some of the concerns people have about AI are its misuses, its effect on unemployment, and its potential for dehumanising. Contrary to what most people believe and fear, AI can lead to respect for the enormous power and complexity of the human mind. It is potentially very dangerous for users in the public domain to impute much more inferential power to computer systems, which look common-sensical, than they actually have. No matter how impressive AI programs may be, we must be aware of their limitations and should not abrogate human responsibility to such programs.
Boden, M. Artificial Intelligence: Cannibal or Missionary?. AI & Soc 21, 651–657 (2007). https://doi.org/10.1007/s00146-007-0109-2