Ascriptional and ‘genuine’ autonomy
Introduction
Maturana points out that “everything said is said by an observer” (Maturana, 1978). The well-known example of a skilful submarine pilot (Maturana and Varela, 1987), who manoeuvres a submarine relying solely on the readings of different kinds of meters, on the basis of which he decides which levers to pull, helps to illustrate this point. Maturana and Varela argue that this pilot, who has supposedly never in his life left the submarine, can skilfully master the passage through a reef full of obstacles. However, he would not know what an obstacle is, or a reef, or even a submarine. These concepts can be used to describe and perceive the situation from the outside, for example by an observer standing on the seashore. The concepts the pilot himself uses to describe and perceive the situation will be different, relying on meter readings and levers rather than on reefs and submarines. Maturana and Varela invoke this metaphor to illustrate how, in the scientific study of life, the biologist's point of view differs from the organism's point of view. It expresses a deep constructivist conviction, which rejects the idea of an objective world out there, with a pre-given ontology of events, objects and facts that the organism aspires to represent. It is the organism that creates its objects and their meaning, in accordance with its needs, desires and the history of its sensorimotor engagement with the world.
Science, as an activity exercised by human organisms, is therefore not about real objects that exist in an observer-independent reality either. Maturana and Varela do not themselves adopt the organism's point of view; their very point is that the view from within another organism is not attainable for an observer. A scientist's experiential world is a product of his own conceptual space and the distinctions he chooses to draw, and these will necessarily shape both the results of scientific activity and their interpretation. This bears immediately and obviously on the operational distinction, in the scientific study of autonomy, between the ascription of autonomy and the genuine reality of a system's autonomy. In this light, it seems justified to raise the question of the nature of this distinction. The recognition of our status as observers transforms our conceptual world in a way that blurs the boundary between what we normally consider a belief and what we consider a fact. How to sort judgments into mere ascriptions on the one hand and recognitions of genuine truths on the other seems problematic, or at least unclear, once the idea of the observer is taken seriously.
In essence, we argue in this paper that this distinction cannot be maintained in its strict sense. However, recognising that matters are not quite what they seem does not automatically imply that the distinction under investigation is not a useful one to make. Acknowledging that, in many cases, it has served well to clarify matters, even if just as a first approximation, we investigate what lies at its core and carefully try to place it on a new footing, in agreement with the idea of an observer science. In doing so, we consider empirical evidence from experiments in minimal perceptual crossing and different approaches to explaining life, to examine closely the role that underlying mechanisms play for this distinction. We then discuss the implications of our analysis for the study of autonomy through artificial means, and for an observer science in general.
Section snippets
Ascriptional Autonomy
In 1950, in his now classic paper “Computing machinery and intelligence” (Turing, 1950), Alan Turing proposed a scenario that he called the ‘imitation game’, but which is now more commonly known as the ‘Turing test’: Will a computer, via a language interface, be able to trick a human being into thinking that it is indeed another person? It may be arguable whether Turing's original, rather gentle formulation of the test has been met, i.e., that towards the end of the 20th century “an
Beyond Ascription at First Sight: Generative Mechanisms
The aim of this section is to present a central thesis of this paper: contrary to the spirit of the ‘Turing test’ and Dennett's ‘intentional stance’, we consider that it is indeed possible to go beyond mere ascription. Our proposal, in general terms, is this: what can be done is to elaborate a serious hypothesis concerning the generative mechanism underlying the phenomenal appearances, on which the first-blush subjective ascription (of ‘intelligence’, of ‘intentionality’, of ‘autonomy’ or
Perceptual Crossing and Intentionality
To show how our proposal can work in practice, we shall now illustrate it with a recent experiment carried out by the ‘Perceptual Supplementation Group’ at the University of Compiègne. ‘Intentionality’ is an important aspect of autonomy, and the recognition of intentionality in another entity is a most interesting question. Auvray et al. (2006, personal communication) have investigated the dynamics of human perceptual crossing in a minimal shared virtual environment, in a Turing-test-style
Implications for Artificial Autonomy
Computational models can play an important role in the study of generative mechanisms, and are therefore in principle very interesting as regards our proposal of informed, mechanism-based ascription. An impressive example is the computational model by Hinton and Nowlan (1987), which succeeded in finally putting the ‘Baldwin effect’ in evolutionary theory beyond doubt, and above suspicion that it might be just some form of closet-Lamarckianism. A proposed mechanism that had not been perceived as
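For readers unfamiliar with it, the Hinton and Nowlan (1987) model is simple enough to reconstruct in a few lines. The sketch below is our own minimal rendering, not the authors' original code: genotypes carry fixed correct (`1`), fixed incorrect (`0`) and plastic (`?`) alleles; lifetime learning randomly guesses the plastic alleles; and finding the all-1s target early earns a fitness bonus. The genotype length, trial count and allele proportions follow the paper's description, while the population size and the fitness-proportional selection scheme are generic illustrative choices.

```python
import random

L = 20        # genotype length, as in Hinton and Nowlan (1987)
POP = 200     # population size (illustrative choice)
TRIALS = 1000 # lifetime learning trials per individual

def random_genotype():
    # Initial alleles: roughly 25% '0', 25% '1', 50% plastic '?'
    return [random.choice(['0', '1', '?', '?']) for _ in range(L)]

def fitness(geno):
    """Simulate lifetime learning: each trial randomly guesses the
    plastic alleles; hitting the all-1s target early earns a bonus."""
    if '0' in geno:
        return 1.0            # a fixed wrong allele makes the target unreachable
    plastic = geno.count('?')
    if plastic == 0:
        return 20.0           # innately correct, maximal fitness
    for t in range(TRIALS):
        if random.random() < 0.5 ** plastic:  # chance of guessing all '?' right
            return 1.0 + 19.0 * (TRIALS - t) / TRIALS
    return 1.0

def step(pop):
    """One generation: fitness-proportional selection, single-point crossover."""
    fits = [fitness(g) for g in pop]
    total = sum(fits)
    def pick():
        r = random.uniform(0, total)
        acc = 0.0
        for g, f in zip(pop, fits):
            acc += f
            if acc >= r:
                return g
        return pop[-1]
    offspring = []
    for _ in range(len(pop)):
        a, b = pick(), pick()
        cut = random.randrange(1, L)
        offspring.append(a[:cut] + b[cut:])
    return offspring

if __name__ == '__main__':
    pop = [random_genotype() for _ in range(POP)]
    for gen in range(20):
        pop = step(pop)
    # Over generations the plastic '?' alleles tend to be displaced by
    # innate '1's: learning guides evolution without Lamarckian inheritance.
    print(sum(g.count('1') for g in pop) / (POP * L))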
Conclusion
To many researchers, a scientific approach to autonomy that is purely based on ascription, in a Turing-test-like situation, seems insufficient. However, taking seriously Maturana's insight that everything said is said by an observer, the question we pose in this paper is: Can there ever be more than a merely ascriptional judgment about whether something is autonomous? And what exactly does this more consist in, if not an objectivist and observer-independent truth? Discussing examples from
References (27)
- et al., Autonomy: an information theoretic perspective, Biosystems (2008)
- et al., Autonomy and hypersets, Biosystems (2008)
- et al., How (not) to model autonomous behaviour, Biosystems (2008)
- et al., Teleological reasoning in infancy: the naive theory of rational action, Trends Cogn. Sci. (2003)
- et al., From homeostatic to homeodynamic self, Biosystems (2008)
- et al., Autopoiesis: the organization of living systems, its characterization and a model, Biosystems (1974)
- et al., Questions de vie (1994)
- et al., The attribution of intentionality in a simulated environment: the case of minimalist devices
- et al., Autopoiesis and cognition, Artif. Life (2004)
- Affective interaction between humans and robots
- The Intentional Stance