Abstract
This article delivers an account of what it is for a physical system to be programmable. Despite its significance in computing and beyond, today’s philosophical discourse on programmability is impoverished. This contribution offers a novel definition of physical programmability as the degree to which the selected operations of an automaton can be reconfigured in a controlled way. The framework highlights several key insights: the constrained applicability of physical programmability to material automata, the characterization of selected operations within the neo-mechanistic framework, the understanding of controlled reconfiguration through the causal theory of interventionism, and the recognition of physical programmability as a gradual notion. The account can be used to individuate programmable (computing) systems and taxonomize concrete systems based on their programmability. The article closes by posing some open questions and offering avenues for future research in this domain.




Data Availability
N/A.
Notes
In fact, none of the approaches discussed in this section cross-reference each other.
Conrad’s version is much closer to what Copeland (2024) has called the ‘Maximality Thesis.’
Zenil employs Kolmogorov Complexity (aka Algorithmic Information) as the basis for his formal variability measure.
Recall, for instance, Moor (1978), according to which the software–hardware distinction is merely a pragmatic one, dependent on context and on the skills of programmers and users.
COLOSSUS was a top-secret British electronic codebreaking device built between 1943 and 1945. Haigh and Priestley argue that the machine was not built to carry out numerical computations but was designed to decrypt the teleprinter encryption of German communications during WWII. Despite its not being a (general-purpose) computer, the authors claim that the machine automatically executed a program (i.e., implemented a specified series of discrete operations). Nevertheless, Haigh and Priestley state that COLOSSUS was not programmable, since its users could not fundamentally alter the program of operations performed by the machine.
For instance, by defining different classes of abstract computing systems, such as finite state machines, pushdown automata, and Turing machines, we can study the theoretical limits of computation (cf. Hopcroft et al. 2001). A Turing machine, e.g., provides a formal procedure for computing a function, yet the machine qua abstract object is not something physical at all. Programmability is often discussed in terms of these formal devices; Turing machines, for instance, are said to have a higher programmability than finite state machines, as they compute more functions.
Interestingly, the notion of a technical artifact has also been applied to engineered computational systems, for which Turner coined the term ‘computational artifact’ (Turner 2018). On this view, computational systems also exhibit (several) structure–function pairs, where, very roughly put, a given computational structure implements a function. If a material automaton’s function is ‘to compute,’ it may thus be regarded as the physical implementation of a computational artifact.
Worse, one may even argue that prima facie seemingly static systems (like rocks and tables) have an ability to operate in sequence. In a different context, philosophers like Putnam (1988) and Searle (1990) have employed such reasoning to argue that objects like rocks and walls, seen at a microscopic level, showcase an internal dynamical behavior (that is interpretable as a sequence of operations). The reason for this is that the physical state of ordinary systems does in fact traverse physical state space and is not completely static.
Ascribing teleological functions to arbitrary systems (with the ability to act in sequence) is insufficient to turn them into technical artifacts. Mere function ascription leaves room for ready-made artifacts or so-called objets trouvés (meaning found objects – a concept from the art world). If that were the case, one could simply promote natural objects that can be utilized to serve human purposes into technical artifacts. A simple example is a rock that may be used as a hammer. Similarly, one could turn systems like hurricanes or cells into material automata by interpreting their dynamical behavior sequentially.
As such, ‘programming’ (in a limited and basic sense) may only take place during the construction phase of the device. The reason is that the mechanism responsible for producing the flute player’s melody is internal to the system and completely hidden from its users. Since the mechanism is not meant to be changeable, there is no need for external means of regulation through an interface. Without a recognizable interface, re-programming is unfeasible.
As Simon (1996, 6) points out, designers may only ever achieve a ‘quasi-independence’ of their technologies from the outside world. Biologists have similar discussions concerning the phenomenon of homeostasis in certain kinds of organisms (Glennan 2017, 114–115). No item can be entirely shielded from environmental influences, and the insulation of the flute player's inner workings may break down due to strong vibrations, extreme temperatures, or exposure to strong magnetic fields. Additionally, a skilled individual might work around the insulation, ‘hack’ into the system, and access the control mechanism of the machine, revealing unforeseen (non-intended) interfaces.
For a more thorough (but still tractable) historical overview of the mechanistic turn see Kästner (2017, Ch. 3).
Particularly in the current context of computing, the conception of mechanistic levels does not equate with the levels of abstraction (LoA) of computational artifacts (Floridi 2008; Primiero 2019). Though one can certainly apply the methodology of LoA to mechanistic levels, there is one important difference: the mechanistic framework is limited to spatio-temporal entities only, whereas the notion of LoA may also be applied to abstract/formal entities. Another crucial difference between LoA and the mechanistic hierarchy concerns the interlevel relation between different levels. Whereas the former relies on some form of leaving out selected details (abstraction), the mechanistic interlevel relations are of a different nature. I shall return to the importance of levels in §4.
The Musa flute player is a case in point.
It is important to note that while we should be cautious not to confuse abstract automata of the logico-mathematical realm with concrete real-world machines, we can still use the conceptual framework of automata theory to model actual material devices.
Theoretically, a FDA can be defined as a five-tuple \(A=\left(Q,\Sigma ,\updelta ,{q}_{0},F\right)\), where \(Q\) denotes a finite set of states, \(\Sigma\) is the finite set of input symbols, \(\updelta\) is a transition function, \({q}_{0}\) is the start state, and \(F\) is a set of final states (Hopcroft et al. 2001, 46). Depending on the input label \(a\) from the alphabet \(\Sigma\), transitions \(\updelta \left(q,a\right)\to p\) connect the states (e.g., \(q\) and \(p\)). Multiple transition labels may form a ‘word’ \(w={a}_{1}{a}_{2}\dots {a}_{n}\) (i.e., a string over the alphabet \(\Sigma\)). A word is valid for a given FDA if the sequence of transition labels leads from the initial state \({q}_{0}\) to a final one contained in \(F\). A string of inputs \(w\) that is compatible with the FDA can be interpreted as a program describing an execution trace within the set of possible behaviors.
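The five-tuple definition above can be rendered as a minimal executable sketch. All concrete choices here (the two-state machine, the names) are illustrative and not from the article; the example automaton accepts binary strings ending in ‘1’:

```python
# A minimal sketch of the five-tuple A = (Q, Σ, δ, q0, F).
# This particular FDA is illustrative: it accepts binary strings ending in '1'.

Q = {"q0", "q1"}                       # finite set of states
SIGMA = {"0", "1"}                     # finite input alphabet Σ
DELTA = {                              # transition function δ(q, a) → p
    ("q0", "0"): "q0", ("q0", "1"): "q1",
    ("q1", "0"): "q0", ("q1", "1"): "q1",
}
Q0 = "q0"                              # start state
F = {"q1"}                             # set of final (accepting) states


def accepts(word: str) -> bool:
    """Check whether the word w = a1 a2 ... an leads from q0 to a state in F."""
    state = Q0
    for a in word:
        if a not in SIGMA:             # symbol outside Σ: not a word over Σ
            return False
        state = DELTA[(state, a)]      # follow the transition δ(state, a)
    return state in F
```

On this reading, a valid input string such as `"0101"` is a program in the thin sense discussed in the note: it describes one execution trace within the machine’s set of possible behaviors, while `"10"` is rejected because it ends in a non-final state.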
While manipulability theories capture the intuition of how to portray causal structure, earlier versions were long objected to for relying on the anthropocentric notion of ‘manipulation.’ Depicting causes C as vehicles for manipulating effects E often (at least in older versions) assigns central significance to human action. Adhering to human agency was seen to fly in the face of the idea that causal relations are part of the mind-independent world. What was considered a bug in the original theory is, however, a welcome and crucial feature of physical programmability, since it conceptually aligns with the required pre-determined set-up of automata by agents.
Standardly, structural equations are defined as \(x_{i}=f_{i}\left(pa_{i},u_{i}\right),\ i=1,\dots ,n\), where \(pa_{i}\) denotes the set of variables (the ‘parents’) that directly determine the value of \({X}_{i}\) and where \({U}_{i}\) stands for errors or disturbances (see Pearl 2009, 27). Each of these structural equations corresponds to a causal dependency relation. Changing the values of variables (of a given structural causal model) under external interventions uncovers those causal dependencies. In this way, the intuitive content of causal claims (C causes E) is preserved, yet concerns about the dependency on agents are side-stepped.
N.b., when employing this kind of thinking, we are engaging with modal reasoning, “[c]ausal relationships between variables thus carry a hypothetical or counterfactual commitment: they describe what the response of Y would be if a certain sort of change in the value of X were to occur.” (Woodward 2003, 40) It is thus now generally accepted that interventionism is a counterfactual theory (of causation); the notion of a surgical intervention that unearths causal relationships requires counterfactuals.
Essentially the same control mechanism was also employed in many computing machines. See Campbell-Kelly (1991) for a detailed treatment.
I am ignoring hypercomputation, etc., for now.
It is important to note that real-world machines are only potentially universal, as they cannot be given unlimited storage. Therefore, today’s computing machines can only perform computations that a TM with bounded tape can achieve.
A concise summary is given by Kästner and Andersen (2018, §3): “Since wholes cannot be manipulated without affecting any of their parts, interventions into the whole will always be non-surgical, that is, fat-handed, with respect to some part. Rather than intervening into X (the whole) with respect to Y (the part), we actually intervene on X and Y simultaneously by carrying out I.”
Despite the challenges, I agree with Kästner and Andersen (2018) that both interventionism and MM have solid empirical foundations (see, for instance, Craver 2007b, 144–152 for some details on the empirical grounding of experimentation on mechanisms). Thus, it is not necessary to give up on the mechanistic framework or the idea that we can intervene on mechanisms. Rather, the focus should be on construing the theoretical underpinnings of intervention-based inquiry into mechanisms in a coherent way.
I thank an anonymous reviewer for pressing me on these issues.
Analogously, one may formulate the issue for ML systems, where we encounter a similar worry: it is not the humans who predetermine and thus program the machine.
References
Ambrosetti, N. (2011). Cultural roots of technology: An interdisciplinary study of automated systems from the antiquity to the renaissance. University of Milano.
Baker, L. (2006). On the twofold nature of artefacts. Studies in History and Philosophy of Science, 37, 132–136.
Baumgartner, M., & Casini, L. (2017). An abductive theory of constitution. Philosophy of Science, 84(2), 214–233.
Baumgartner, M., & Gebharter, A. (2016). Constitutive relevance, mutual manipulability, and fat-handedness. The British Journal for the Philosophy of Science, 67(3), 731–756.
Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421–441.
Brennecke, A. (2000). A classification scheme for program controlled calculators. In R. Rojas & Ulf Hashagen (Eds.), The first computers (pp. 53–68). MIT Press.
Bromley, A. G. (1983). What defines a “general-purpose” computer? Annals of the History of Computing, 5(3), 303–305.
Campbell-Kelly, M. (1991). Punched-card machinery. In W. Aspray (Ed.), Computing before computers (pp. 122–155). Iowa State University Press.
Christophe, L. (2017). The forgotten history of repetitive audio technologies. Organised Sound, 22(2), 187–194.
Conrad, M. (1988). The price of programmability. In R. Herken (Ed.), The universal Turing machine: A half-century survey (pp. 285–307). Oxford University Press.
Copeland, B. J. (2024). The Church-Turing thesis. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Metaphysics Research Lab.
Copeland, B. J., & Sommaruga, G. (2021). The stored-program universal computer: Did Zuse anticipate Turing and von Neumann? In G. Sommaruga & T. Strahm (Eds.), Turing’s revolution: The impact of his ideas about computability (pp. 43–101). Springer.
Couch, M. B. (2011). Mechanisms and constitutive relevance. Synthese, 183(3), 375–388.
Craver, C. F. (2007a). Constitutive explanatory relevance. Journal of Philosophical Research, 32, 3–20.
Craver, C. F. (2007b). Explaining the brain. Oxford University Press.
Craver, C. F. (2015). Levels. In T. Metzinger & J. M. Windt (Eds.), Open mind (pp. 1–26). Mind Group.
d’Udekem-Gevers, M. (2013). Telling the Long and beautiful (Hi)story of automation! In A. Tatnall, T. Blyth, & R. Johnson (Eds.), Making the history of computing relevant HC 2013 (pp. 173–195). Springer.
Dewhurst, J. (2018). Computing mechanisms without proper functions. Minds and Machines, 28(3), 569–588.
Dijksterhuis, E. (1956). Die Mechanisierung des Weltbildes. Springer.
Eronen, M. I. (2015). Levels of organization: A deflationary account. Biology and Philosophy, 30(1), 39–58.
Farmer, H. G. (1931). The organ of the ancients from eastern sources. William Reeves Bookseller.
Floridi, L. (2008). The method of levels of abstraction. Minds & Machines, 18(3), 303–329.
Glennan, S. (1996). Mechanisms and the nature of causation. Erkenntnis, 44(1), 49–71.
Glennan, S. (2017). The new mechanical philosophy. Oxford University Press.
Haigh, T., & Priestley, M. (2018). Colossus and programmability. IEEE Annals of the History of Computing, 40(4), 5–27.
Hausman, D. M. (2005). Causal relata: Tokens, types, or variables? Erkenntnis, 63(1), 33–54.
Hopcroft, J. E., Motwani, R., & Ullman, J. D. (2001). Introduction to automata theory, languages, and computation. Addison-Wesley.
Houkes, W., & Vermaas, P. (2010). Technical Functions. Springer.
Illari, P. M., & Williamson, J. (2012). What is a mechanism? Thinking about mechanisms across the sciences. European Journal for Philosophy of Science, 2, 119–135.
Kaiser, M. I., & Krickel, B. (2017). The metaphysics of constitutive mechanistic phenomena. The British Journal for the Philosophy of Science, 68(3), 745–779.
Kästner, L. (2017). Philosophy of cognitive neuroscience: Causal explanations, mechanisms and experimental manipulations. De Gruyter.
Kästner, L., & Andersen, L. M. (2018). Intervening into mechanisms: Prospects and challenges. Philosophy Compass, 13(11), e12546.
Klein, C. (2020). Polychrony and the process view of computation. Philosophy of Science, 87(5), 1140–1149.
Krickel, B. (2018). The mechanical world: The metaphysical commitments of the new mechanistic approach. Springer.
Koetsier, T. (2001). On the prehistory of programmable machines: Musical automata, looms, calculators. Mechanism and Machine Theory, 36(5), 589–603.
Kroes, P., & Meijers, A. (2006). The dual nature of technical artefacts. Studies in History and Philosophy of Science Part A, 37(1), 1–4.
Leuridan, B. (2012). Three problems for the mutual manipulability account of constitutive relevance in mechanisms. The British Journal for the Philosophy of Science, 62(2), 399–427.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67(1), 1–25.
Martin, A., Magnaudet, M., & Conversy, S. (2023). Computers as interactive machines: Can we build an explanatory abstraction? Minds and Machines, 33(1), 83–112.
Mollo, D. C. (2017). Functional individuation, mechanistic implementation: The proper way of seeing the mechanistic view of concrete computation. Synthese, 195, 3477–3497.
Moor, J. H. (1978). Three myths of computer science. The British Journal for the Philosophy of Science, 29(3), 213–222.
Mozgovoy, M. (2009). Algorithms, languages, automata, and compilers: A practical approach. Jones & Bartlett Learning.
Olley, A. (2010). Existence precedes essence – meaning of the stored-program concept. In A. Tatnall (Ed.), History of computing. Learning from the past (pp. 169–178). Springer.
Pearl, J. (2009). Causality. Cambridge University Press.
Kroes, P. (2012). Technical artefacts: Creations of mind and matter: A philosophy of engineering design. Springer.
Piccinini, G., & Maley, C. (2021). Computation in physical systems. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Metaphysics Research Lab.
Piccinini, G. (2008). Computers. Pacific Philosophical Quarterly, 89(1), 32–73.
Piccinini, G. (2015). Physical computation: A mechanistic account. Oxford University Press.
Preston, B. (2018). Artifact. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2018). Metaphysics Research Lab.
Primiero, G. (2019). On the Foundations of Computing. Oxford University Press.
Putnam, H. (1988). Representation and reality. MIT Press.
Randell, B. (1994). The origins of computer programming. IEEE Annals of the History of Computing, 16(4), 6–14.
Rapaport, W. J. (1999). Implementation is semantic interpretation. The Monist, 82(1), 109–130.
Rapaport, W. J. (2005). Implementation is semantic interpretation: Further thoughts. Journal of Experimental & Theoretical Artificial Intelligence, 17(4), 385–417.
Rojas, R. (1996). Conditional branching is not necessary for universal computation in von Neumann computers. Journal of Universal Computer Science, 2(11), 756–768.
Rojas, R. (1998). How to make Zuse’s Z3 a universal computer. IEEE Annals of the History of Computing, 20(3), 51–54.
Rojas, R. (2023). Konrad Zuse’s early computers: The quest for the computer in Germany. Springer.
Romero, F. (2015). Why there isn’t inter-level causation in mechanisms. Synthese, 192(11), 3731–3755.
Salvaneschi, G., Margara, A., & Tamburrelli, G. (2015). Reactive programming: A walkthrough. In 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering (Vol. 2, pp. 953–954). IEEE.
Scheines, R. (2005). The similarity of causal inference in experimental and non-experimental studies. Philosophy of Science, 72(5), 927–940.
Searle, J. (1990). Is the brain a digital computer? Proceedings and Addresses of the American Philosophical Association, 64, 21–37.
Simon, H. A. (1996). The sciences of the artificial. MIT Press.
Sloman, A. (2002). The irrelevance of Turing machines to artificial intelligence. In M. Scheutz (Ed.), Computationalism: New directions (pp. 87–127). MIT Press.
Turner, R. (2018). Computational artifacts. Springer.
van Eck, D. (2017). Mechanisms and engineering science. In S. Glennan & P. Illari (Eds.), The Routledge handbook of mechanisms and mechanical philosophy (pp. 447–461). Routledge.
Vermaas, P., & Houkes, W. (2003). Ascribing functions to technical artefacts: A challenge to etiological accounts of functions. British Journal for the Philosophy of Science, 54(2), 261–289.
Bechtel, W., & Richardson, R. C. (1993). Discovering complexity: Decomposition and localization as strategies in scientific research. MIT Press.
Woodward, J. (2002). What is a mechanism? A counterfactual account. Proceedings of the Philosophy of Science Association, 69(3), S366–S377.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
Woodward, J. (2008). Invariance, modularity, and all that. In S. Hartmann, C. Hoefer, & L. Bovens (Eds.), Nancy Cartwright’s philosophy of science (pp. 198–237). Routledge.
Woodward, J. (2023). Causation and manipulability. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy. Metaphysics Research Lab.
Ylikoski, P. (2013). Causal and constitutive explanation compared. Erkenntnis, 78(2), 277–297.
Zenil, H. (2010). Compression-based investigation of the dynamical properties of cellular automata and other systems. Complex Systems, 19(1), 1–28.
Zenil, H. (2012a). Nature-like computation and a measure of computability. In G. Dodig-Crnkovic & R. Giovagnoli (Eds.), Natural computing /unconventional computing and its philosophical significance. Springer.
Zenil, H. (2012b). On the dynamic qualitative behavior of universal computation. Complex Systems, 20(3), 265–278.
Zenil, H. (2014). What is nature-like computation? A behavioural approach and a notion of programmability. Philosophy & Technology, 27(3), 399–421.
Zenil, H. (2015). Algorithmicity and programmability in natural computing with the game of Life as in silico case study. Journal of Experimental & Theoretical Artificial Intelligence, 27(1), 109–121.
Acknowledgements
I would like to thank Liesbeth De Mol, Noelia Iranzo Ribera, and Henri Stephanou for helpful discussions and/or comments on (earlier versions of) this manuscript. I am also grateful to the audience at the HaPoC-7 conference in Warsaw in October 2023.
Funding
This research was partially funded by the PROGRAMme project (ANR-17-CE38-0003-01).
Author information
Contributions
N/A.
Ethics declarations
Conflict of interest
The authors have not disclosed any conflict of interest.
Ethical Approval
N/A.
Informed Consent
N/A
Cite this article
Wiggershaus, N. Physical Programmability. Minds & Machines 35, 14 (2025). https://doi.org/10.1007/s11023-025-09714-3