Automated design of distributed control rules for the self-assembly of prespecified artificial structures

https://doi.org/10.1016/j.robot.2007.08.006

Abstract

The self-assembly problem involves the design of agent-level control rules that will cause the agents to form some desired, target structure, subject to environmental constraints. This paper describes a fully automated rule generation procedure that allows structures to successfully self-assemble in a simulated environment with constrained, continuous motion. This environment implicitly imposes ordering constraints on the self-assembly process, where certain parts of the target structure must be assembled before others, and where it may be necessary to assemble (and subsequently disassemble) temporary structures such as staircases. A provably correct methodology is presented for computing a partial order on the self-assembly process, and for generating rules that enforce this order at runtime. The assembly and disassembly of structures is achieved by generating another set of rules, which are inspired by construction behavior among certain species of social insects. Computational experiments verify the effectiveness of the approach on a diverse set of target structures.

Introduction

Self-assembly can be defined as a process by which structures form via the combination of relatively simple components, whose actions are not determined by any centralized entity. Such processes are commonplace in nature [39], where they occur at various scales, from the fusion of nuclei, to the clustering of social insects into self-assemblages (e.g., clusters, ladders and chains) [2], to the formation of galaxies. While such phenomena have been studied by natural scientists for many centuries, attention has recently been devoted to the question of how self-assembly may be controlled to yield specific, desired artificial structures while preserving decentralization, with the components of the emergent structure making individual decisions regarding their behavior. This question is important, as it allows us to examine the very complex but fundamental relationship between individual actions and emergent, systemic properties [9]. Furthermore, self-assembly has the potential to automate the production of useful artificial objects, particularly in environments that are difficult to access; examples include solar power systems in outer space [33], bases on the lunar surface [7], and electronic components at the nanoscale [13].

While our present understanding of self-assembly processes is insufficient for most practical construction problems, important advances have been made in the field. Typically, research has been done in the context of computer simulations [1], [3], [11], [14], [17], [18], [20], [21], [32], [37], but there have been some recent developments in extending self-assembly to physical robots [4], [26], [36], [38]. This past work reveals a notable trend: the difficulty in controlling self-assembly is strongly correlated with the complexity of the environment in which it takes place. As a direct consequence, most studies that have been able to achieve the self-assembly of a diverse range of relatively complex structures have done so in rather idealized, cellular environments, with simple and primarily random motion [1], [17]. On the other hand, in continuous environments, both simulated [11], [18], [32], [37] and physical [4], [26], [38], self-assembly has generally been restricted to simple structures such as chains, triangles and hexagons. A partial exception is found in [20], where disk-shaped parts form tree-like structures in a simulated 2D continuous space, but topologically connected parts within a tree need not come into physical contact, which simplifies the problem substantially. A theoretical extension to arbitrary graphs (rather than only trees) is proposed in [21], but it is shown that this approach requires non-trivial communication schemes, where each part must have a unique identifier. In another recent study [38], stationary block-shaped modules are able to attract free-floating modules via short-range magnetic forces, potentially allowing the formation of a wide range of 3D shapes, although only relatively simple structures have been presented thus far.
In the closely related area of collective construction, where a distinction exists between the material components and the agents that manipulate them, algorithms have been developed for construction by multiple robots, along with a prototype physical implementation [36]. However, only 2D structures without internal gaps can presently be built; furthermore, each robot or component stores a concrete representation of the entire desired target structure.

In an earlier work [14], we manually designed control rules to guide the self-assembly of non-trivial, 3D structures from blocks of different sizes, in a continuous environment with constraints such as gravity and block impenetrability. Our approach incorporated several distinct techniques from the field of swarm intelligence [6], [19], namely stigmergic pattern recognition (which is inspired by the nest construction behaviors of insects such as paper wasps, where the local deposition of material by one wasp gives others clues as to what should be done in the future) [6], [34], [35], force-based movement control [29], [30], and higher level coordination via the use of a limited amount of memory and local message passing [5], [30], [31]. While this approach was successful [14], the hand-design of control methods for assembling specific structures proved to be a time-consuming and error-prone process. This raises the question of whether there is a procedure that will take as input a specification of a target structure, and produce as output a set of control rules that can be used to successfully assemble this structure. For simpler environments, a few such rule generation procedures have been developed in the past. Some of them produce stigmergic patterns that, somewhat as in our work, enable agents to locally determine where to place themselves [1], [17], while others output graph grammars [20], [21]. The extension of rule generation procedures to more complex environments (such as the one studied here) is presently an open problem, due to the difficulty of controlling motion in such an environment and the presence of physical constraints. (Some work in this direction has been done in [18], but only the construction of a very simple structure in a predefined, block-by-block sequence was demonstrated.)
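The stigmergic idea above can be illustrated with a minimal sketch. This is not the rule format actually used in [14]; the 2D grid, the four-site neighborhood, and the rule contents below are all hypothetical simplifications chosen for brevity. The point is only that an agent senses the occupancy of adjacent sites and matches that local pattern against a rule table to decide whether to attach, so that deposited material locally cues further deposition:

```python
# Illustrative sketch of stigmergic pattern recognition (hypothetical
# encoding, not the paper's): occupied sites are stored in a set, and a
# rule maps a locally sensed occupancy pattern to an action.

def sense_neighborhood(grid, site):
    """Return a tuple of occupancy flags for the four sites adjacent to `site`."""
    x, y = site
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # left, right, down, up
    return tuple((x + dx, y + dy) in grid for dx, dy in offsets)

def apply_rules(grid, site, rules):
    """If the locally sensed pattern matches a rule, perform its action."""
    pattern = sense_neighborhood(grid, site)
    if rules.get(pattern) == "attach":
        grid.add(site)
        return True
    return False

# Example rule: attach only when the site's left neighbor is already
# occupied, mimicking how deposited material cues the next deposition.
rules = {(True, False, False, False): "attach"}
grid = {(0, 0)}
attached = apply_rules(grid, (1, 0), rules)  # left neighbor (0, 0) is occupied
```

Here each rule fires on purely local information, which is what allows the control to remain fully decentralized.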
As argued in [14], these constraints impose higher level sequencing requirements on the steps of the self-assembly process, and the question remains as to how these requirements can be captured and automatically translated into local, low-level behaviors.

In this paper, we present an automated rule generation procedure for assembling a variety of prespecified structures in a simulated environment with physical constraints [14]. A description of this environment, along with a summary of the control methods available to each block, is given in Section 2. The environment is not designed to be an accurate representation of reality, and we do not intend that our control mechanisms be directly applicable in the physical world without significant extension; such an extension is almost never trivial [8]. Rather, by incorporating simplified models of certain physical phenomena, we attempt to uncover fundamental (i.e., independent of how the models are implemented) issues that will need to be addressed when the self-assembly of complex objects is attempted in the real world. (The eventual plausibility of a physical implementation is supported by recent developments in robotic self-assembly and construction of simple structures [4], [26], [36], [38], as well as results in related areas such as multi-robot movement [23] and formation [10] control, and self-reconfigurable robotics [5], [16].) Even in simulation, the system’s overall dynamics can be chaotic, and it is unrealistic to expect that a set of local control methods can always be found that is guaranteed to produce the target structure. We approach this issue by factoring out those aspects of control that can be dealt with in a formal manner, and present a rule generation procedure in Section 3. This procedure first computes a partial order on the sequence of block placements, ensuring that the target structure can be successfully assembled in spite of the environmental constraints; for example, the inner parts of a structure must be assembled first, if they are enclosed by other parts (e.g., walls). 
The ordering step is followed by the generation of rules, some of which enforce the computed order at runtime, through local communication and memory manipulation, while others allow for the assembly and disassembly of the individual parts of the structure. In Section 4, we show that both order generation and order enforcement are correct under reasonable assumptions that we make explicit. Other aspects of the problem, such as movement control, are handled in a more empirical manner, by testing various possibilities over a large number of independent trials. In Section 5, we experimentally show that with a few modifications, the force-based movement control mechanisms that we presented in the earlier work [14] are general enough to handle a diverse range of structures, when combined with the rules generated by the procedure, thus providing an integrated, fully automated approach.
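The two ideas above, a precomputed partial order on block placements and its enforcement at runtime, can be sketched as follows. The data structures are assumptions for illustration (the paper's actual representation, and its enforcement via local communication and memory, differ): precedence constraints between target locations must form an acyclic relation, and a location becomes eligible for assembly only once all of its predecessors are occupied.

```python
# Minimal sketch (hypothetical data structures): `preds` maps each target
# location to the locations that must be assembled before it.

from collections import defaultdict

def check_partial_order(preds):
    """Verify the precedence relation is acyclic, i.e., a valid partial order."""
    state = {}  # 1 = on current path, 2 = fully explored
    def visit(v):
        if state.get(v) == 1:
            raise ValueError("cycle: no valid assembly order exists")
        if state.get(v) == 2:
            return
        state[v] = 1
        for p in preds[v]:
            visit(p)
        state[v] = 2
    for v in list(preds):
        visit(v)

def enabled(loc, preds, occupied):
    """A location may be assembled only after all of its predecessors are filled."""
    return all(p in occupied for p in preds[loc])

# Example: an enclosed interior site must be placed before either wall.
preds = defaultdict(list, {"wall_left": ["inner"], "wall_right": ["inner"], "inner": []})
check_partial_order(preds)
occupied = set()
assert not enabled("wall_left", preds, occupied)  # interior not yet placed
occupied.add("inner")
assert enabled("wall_left", preds, occupied)      # wall may now be assembled
```

In the paper's distributed setting, the `occupied` information is not globally available; the generated rules propagate the equivalent knowledge through local message passing and per-block memory.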

Section snippets

System overview

As background, we present example target structures, the simulated environment where these structures are to be assembled, and the control mechanisms that are available to the agents (blocks) for governing their behavior during self-assembly. Discussion is brief; for further details, the reader is referred to [14].

Automatic rule generation

This section presents a well-defined, fully automated procedure for generating a set of rules for the self-assembly of a given target structure. This structure is specified as a set of geometric locations, each one having an assigned position/orientation and requiring a block of a specific size (but not some particular block), and given as input to the procedure. The procedure operates in two major phases: order computation and rule generation. During the former phase, it computes a partial…
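The procedure's interface, as described here, can be sketched as follows. The field names, the placeholder phase bodies, and the output format are assumptions for illustration only; the paper's actual order computation and rule generation are environment-specific and far richer than these stubs:

```python
# Hedged sketch of the rule generation procedure's interface: input is a
# set of geometric locations (position/orientation plus a required block
# size, not a particular block); output is a rule set produced in two
# phases. Both phase bodies are placeholders.

from dataclasses import dataclass

@dataclass(frozen=True)
class TargetLocation:
    position: tuple      # e.g., (x, y, z) coordinates
    orientation: int     # e.g., a discrete rotation
    block_size: tuple    # required block dimensions (any such block qualifies)

def compute_partial_order(target):
    """Phase 1 (placeholder): derive precedence constraints among locations.
    In the paper, these arise from physical constraints such as enclosure."""
    return {loc: [] for loc in target}  # this stub imposes no constraints

def rules_from_order(target, order):
    """Phase 2 (placeholder): emit one local rule per location, carrying its
    predecessors so that the order can be enforced at runtime."""
    return [(loc, order[loc]) for loc in target]

def generate_rules(target):
    order = compute_partial_order(target)
    return rules_from_order(target, order)

target = {
    TargetLocation((0, 0, 0), 0, (1, 1, 1)),
    TargetLocation((1, 0, 0), 0, (2, 1, 1)),
}
rules = generate_rules(target)  # one rule per target location
```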

Theoretical results

The rule generation methodology presented in the previous section builds upon earlier stigmergy-based procedures [1], [17], [18] in that it computes and incorporates higher level ordering constraints into a set of distributed rules that govern the self-assembly process. It is relatively straightforward to see how the generated stigmergic rules will, given appropriate movement dynamics, result in the assembly, disassembly or completion detection of specific rectangles. On the other hand, the…

Empirical results

The previous section presented theoretical results which indicate that the rule generation procedure, as described, should produce a correct set of rules for assembling a given target structure. Can we therefore be certain that this is indeed the case? The theorems were derived in the context of abstract, mathematical models, and even though the self-assembly processes discussed in this paper take place in simulation (rather than the physical world), there exists a gap between these models and…

Discussion

This paper has focused on presenting and evaluating a fully automated approach to generating local control rules for the self-assembly of prespecified target structures in a continuous and constrained environment. While the complexity of motion in such an environment makes infeasible a proof that the generated rules will always cause the elements of the system (i.e., the blocks) to converge to the desired structure, we were nonetheless able to formally verify that certain major aspects of our…

Acknowledgement

This work was supported by NSF award IIS-0325098.


References (39)

  • H. Bojinov et al., Multiagent control of self-reconfigurable robots, Artificial Intelligence (2002)
  • M. Matarić, Issues and approaches in the design of collective autonomous agents, Robotics and Autonomous Systems (1995)
  • J. Adam, Designing Emergence, Ph.D. Thesis, University of Essex,...
  • C. Anderson et al., Self-assemblages in insect societies, Insectes Sociaux (2002)
  • D. Arbuckle, A. Requicha, Active self-assembly, in: IEEE International Conference on Robotics and Automation, 2004, pp....
  • J. Bishop, S. Burden, E. Klavins, R. Kreisberg, W. Malone, N. Napp, T. Nguyen, Self-organizing programmable parts, in:...
  • E. Bonabeau et al., Swarm Intelligence (1999)
  • R. Brooks, P. Maes, M. Matarić, G. More, Lunar base construction robots, in: IEEE International Workshop on Intelligent...
  • R. Brooks, Artificial life and real robots, in: First European Conference on Artificial Life, 1992, pp....
  • S. Camazine et al., Self-Organization in Biological Systems (2001)
  • J. Fredslund et al., A general algorithm for robot formations using local sensing and minimal communication, IEEE Transactions on Robotics and Automation (2002)
  • K. Fujibayashi, S. Murata, K. Sugawara, M. Yamamura, Self-organizing formation algorithm for active elements, in: 21st...
  • M. Ghallab et al., Automated Planning (2004)
  • S. Glotzer, Some assembly required, Science (2004)
  • A. Grushin et al., Stigmergic self-assembly of prespecified artificial structures in a constrained and continuous environment, Integrated Computer-Aided Engineering (2006)
  • J. Hartman et al., The VRML 2.0 Handbook (1996)
  • K. Hosokawa, T. Tsujimori, T. Fujii, H. Kaetsu, H. Asama, Y. Kuroda, I. Endo, Self-organizing collective robots with...
  • C. Jones, M. Matarić, From local to global behavior in intelligent self-assembly, in: IEEE International Conference on...
  • C. Jones, M. Matarić, The use of internal state in multi-robot coordination, in: Hawaii International Conference on...

Alexander Grushin holds a BS in Computer and Information Sciences from the University of Delaware, an MS in Computer Science from the University of Maryland at College Park, and a PhD in Computer Science, which he recently received from the University of Maryland. His doctoral dissertation dealt with the distributed control of self-assembly processes. More generally, his research interests lie in areas that include artificial intelligence, artificial life, biologically-inspired computing, machine learning, and multi-agent systems. Presently, he is a research scientist at Intelligent Automation, Inc., located in Rockville, MD.

James A. Reggia is a Professor of Computer Science at the University of Maryland at College Park, with a joint appointment in the Institute for Advanced Computer Studies. He received his PhD degree in Computer Science from the University of Maryland (1981). His research interests are in the general area of biologically-inspired computation, including evolutionary computation, artificial life, and neural computation, and he has authored numerous research papers in these and related areas.

¹ Present address: Intelligent Automation, Inc., 15400 Calhoun Drive, Suite 400, Rockville, MD 20855, United States.
