
The naturalness of artificial intelligence from the evolutionary perspective


Abstract

Current discussions of artificial intelligence, in both the theoretical and practical realms, suffer from a fundamental lack of clarity regarding the nature of artificial intelligence, perhaps because the distinction between natural and artificial appears, at first sight, both intuitive and evident. Is AI something unnatural, non-human and therefore dangerous to humanity, or is it only a continuation of man’s natural tendency towards creativity? It is not surprising that, from the philosophical point of view, this distinction is the basic question that fundamentally affects all other considerations and conclusions pertaining to artificial intelligence. In this article, I explain this difference and draw attention to some conclusions that may follow from a naturalistic perspective with regard to recent philosophical posthumanism. For this purpose, I present several examples of the natural–artificial distinction in different fields of the philosophy of science and then discuss their implications for the problem of intelligence. Based on Dennett’s conception of intentionality and the naturalistic perspective, I demonstrate that, besides the traditional conception, there is a non-anthropocentric evolutionary view in which the natural–artificial distinction disappears and it becomes possible to see a unified process of intelligence creation.


Notes

  1. One of these non-standard approaches is an attempt to formulate the distinction between natural and artificial by means of language: “Natural is that which defies being captured by language [and] artificial is that whose essence is fully determined by language” (Romportl 2015, 214). I do not think that this is impossible, but it seems to me that this kind of relationship between us and the objects of the world is too subjective and could lead only to epistemological differences. In the end, this kind of difference directs us towards a false division of objects in the world, because it depends only upon the grasping capability of our language, and language is undoubtedly subject to ongoing change and evolution. I prefer ontologically significant differences between natural and artificial, which may then be grasped by language.

  2. To be fair, John Haugeland, considering Dennett’s conception of intentionality, suggests that “maybe this active, rational engagement is more pertinent to whether the intentionality is original or not than is any question of natural or artificial origin” (Haugeland 1997, 8). This does not mean that the natural–artificial distinction is abandoned, but it could be seen as a promising starting point.

  3. The term ‘blind powers of nature’ does not imply randomness in the evolutionary process, but rather the unintentional character of evolution, which is not directed by any intention to achieve specific aims or goals, e.g., a specific organic form, design or phenotype. Evolutionary processes are not teleological.

  4. I have to note that we can speak of “rejecting the Aristotelian distinction between natural and artificial” only from the perspective of some traditional, careless interpretations of Aristotle. He says in Physics, Book II (Aristotle 1930): (1) that form or shape is nature: “The form indeed is ‘nature’ rather than the matter”; “We also speak of a thing’s nature as being exhibited in the process of growth by which its nature is attained. […] What grows qua growing grows from something into something. Into what then does it grow? Not into that from which it arose but into that to which it tends. The shape then is nature”. (2) Nature is the end or aim: “Again, ‘that for the sake of which’, or the end, belongs to the same department of knowledge as the means. But the nature is the end or ‘that for the sake of which’.” (3) There is a difference between art and nature: “In the products of art, however, we make the material with a view to the function, whereas in the products of nature the matter is there all along.” (4) This does not mean that the artistic process is unnatural from the architectonic (‘architectonicos’) perspective: “The arts, therefore, which govern the matter and have knowledge are two, namely the art which uses the product and the art which directs the production of it. That is why the using art also is in a sense directive; but it differs in that it knows the form, whereas the art which is directive as being concerned with production knows the matter.” (5) The process of the creation of things is natural in two senses (form and matter): “Since ‘nature’ has two senses, the form and the matter, we must investigate its objects as we would the essence of snubness. That is, such things are neither independent of matter nor can be defined in terms of matter only.” (6) Art imitates nature only during the same process of creation: “But if on the other hand art imitates nature, and it is the part of the same discipline to know the form and the matter up to a point …” (7) Art is a natural process from the perspective of nature’s end: “the nature is the end or ‘that for the sake of which’.”

  5. There is no general agreement about what is meant by intelligence, but for current purposes (i.e., when considering the natural–artificial distinction) we can take it to be a property of organisms, e.g., the capacity to exhibit certain types of behavior. More generally, entities or actors (biological and non-biological) need to interact with their surroundings on the basis of their experience, and this need leads them to the actions that we call intelligent. This is in agreement with Paul Churchland’s claim that “one obvious measure of the degree of intelligence that any creature has achieved is how far into the future and across what range of phenomena it is capable of projecting the behavior of its environment, and thus how far in advance of that future it can begin to execute appropriately exploitative and manipulative motor behavior” (Churchland 2007, 65–66).

  6. See e.g. Bostrom 2014, Chap. 2: Paths to Superintelligence.

  7. In the same spirit, the concept of artificial intelligence has been criticized as an oxymoron (Boden 1977; Chrisley 2000, 2008). According to this view, “‘artificial intelligence’ is an oxymoron, since intelligence implies, and artefactuality is inconsistent with, autonomy.” (Chrisley 2000, 3) “It maintains there is an incompatibility between the possibility of having a mind and at the same time being an artefact in any interesting sense. […] This means that any purpose, meaning or intentionality in the system is not its own, but rather derivative from our design of it. Yet mindedness is autonomous, exhibiting original, non-derivative intentionality.” (Chrisley 2008, 15) However, in countering this belief, Dennett convincingly demonstrates how mindedness can be autonomous and yet exhibit derivative intentionality.

References

  • Aristotle (1930) Physics (Physica). Translated by Hardie RP, Gaye RK. In: Ross WD (ed) The works of Aristotle, vol 2. Clarendon Press, Oxford

  • Baker LR (2007) The metaphysics of everyday life: an essay in practical realism. Cambridge University Press, Cambridge


  • Barad K (2003) Posthumanist performativity: toward an understanding of how matter comes to matter. Signs: J Women Cult Soc 28(3):801–831


  • Bensaude-Vincent B, Newman WR (2007) The artificial and the natural: an evolving polarity. MIT Press, Cambridge, MA

  • Boden M (1977) Artificial intelligence and natural man. Harvester Press, Atlantic Heights


  • Bostrom N (2014) Superintelligence: paths, dangers, strategies. Oxford University Press, Oxford


  • Braidotti R (2013) The Posthuman. Polity Press, Cambridge

  • Butler S (1872) Erewhon: or, over the range, 1st edn. Trubner & Co, London


  • Butler S (1926) Darwin among the machines. In: The notebooks of Samuel Butler. The Shrewsbury edition of the works of Samuel Butler, vol 20. Jonathan Cape, London, pp 35–40

  • Chrisley R (2000) The concept of artificial intelligence, general introduction. In: Chrisley R, Sander B (eds) Artificial intelligence: critical concepts, vol I. Routledge, London/New York

  • Chrisley R (2008) Philosophical foundations of artificial consciousness. Artif Intell Med 44(2):119–137


  • Churchland P (2007) Neurophilosophy at work. Cambridge University Press, Cambridge


  • Darwin Ch (1859) On the origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. John Murray, London


  • Dennett DC (1996a) Kinds of minds: towards an understanding of consciousness. Basic Books, New York


  • Dennett DC (1996b) Darwin’s dangerous idea: evolution and the meanings of life. Penguin Books, London


  • Dennett DC (2017) From bacteria to bach and back: the evolution of minds. W.W. Norton & Company, London


  • Dumit J (2014) Writing the implosion: teaching the world one thing at a time. Cult Anthropol 29(2):344–362


  • Ekbia HR (2008) Artificial Dreams: the quest for non-biological intelligence. Cambridge University Press, Cambridge


  • Ferrando F (2013) Posthumanism, transhumanism, antihumanism, metahumanism and new materialisms: differences and relations. Existenz 8(2):26–32


  • Hacking I (1983) Representing and intervening. Cambridge University Press, Cambridge


  • Haraway DJ (1985) Manifesto for cyborgs: science, technology and socialist feminism in the 1980s. Soc Rev 80:65–108


  • Haraway DJ (1991) Simians, cyborgs, and women: the reinvention of nature. Routledge, New York


  • Haraway DJ (1997) Modest_Witness@Second_Millennium. FemaleMan©_Meets_OncoMouse™: Feminism and Technoscience. Routledge, New York


  • Haraway DJ (2016) Staying with the trouble: making kin in the Chthulucene. Duke University Press, Durham

  • Haugeland J (ed) (1997) Mind design II: philosophy, psychology, and artificial intelligence. A Bradford Book. MIT Press, Cambridge, MA

  • Kroes P (1994) Science, technology and experiments; the natural versus the artificial. In: Proceedings of the biennial meeting of the Philosophy of Science Association, vol 1994, volume two: symposia and invited papers, pp 431–440

  • Morton T (2012) The ecological thought. Harvard University Press, Cambridge


  • Morton T (2013) Hyperobjects: philosophy and ecology after the end of the world. University of Minnesota Press, Minneapolis, MN


  • Müller VC (2012) Introduction: philosophy and theory of artificial intelligence. Minds Mach 22:67–69


  • Negrotti M (1991) Conclusions: the dissymmetry of mind and the role of the artificial. In: Negrotti M (ed) Understanding the artificial: on the future shape of artificial intelligence. Springer, Berlin, pp 149–154


  • Nilsson NJ (1998) Artificial intelligence: a new synthesis. Morgan Kaufmann Pub, San Francisco


  • Poole DL, Mackworth AK (2010) Artificial intelligence. Foundations of computational agents. Cambridge University Press, Cambridge


  • Romportl J (2015) Naturalness of Artificial Intelligence. In: Romportl J, Zackova E, Kelemen J (eds) Beyond artificial intelligence, topics in intelligent engineering and informatics, vol 9. Springer, Cham, pp 211–216


  • Ruse M (1975) Charles Darwin’s theory of evolution: an analysis. J Hist Biol 8(2):219–241


  • Russell SJ, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall series in artificial intelligence. Prentice-Hall, Upper Saddle River, NJ

  • Sargent R-M (1995) The Diffident Naturalist: Robert Boyle and the Philosophy of Experiment. The University of Chicago Press, Chicago


  • Searle J (1983) Intentionality: an essay in the philosophy of mind. Cambridge University Press, Cambridge


  • Shapiro SC (ed) (1992) Encyclopedia of artificial intelligence, 2nd edn. Wiley, New York


  • Simon HA (1996) The sciences of the artificial, 3rd edn. MIT Press, Cambridge


  • Sokolowski R (1988) Natural and artificial intelligence. Daedalus 117(1):45–64


  • Welsch W (2012) Mensch und Welt: Eine evolutionäre Perspektive der Philosophie. Beck, Munich


  • Welsch W (2017) Postmodernism—posthumanism—evolutionary anthropology. J Posthuman Studies 1(1):75–86



Acknowledgements

This work was supported by the Czech Scientific Foundation (Grant no. 17-16370S).

Author information


Correspondence to Vladimír Havlík.


About this article


Cite this article

Havlík, V. The naturalness of artificial intelligence from the evolutionary perspective. AI & Soc 34, 889–898 (2019). https://doi.org/10.1007/s00146-018-0829-5

