1 Introduction

Our Experiential Model of the Atmosphere (EMA) is part of a larger research stream on creating computational platforms for integrated, gestural interaction with complex models via multi-modal interfaces that allow for fluid human-in-the-loop control of computerized scenarios. The main challenge in developing such a system is handling new densities of data that approach a continuous distribution. Our strategy is to use an effectively continuous dynamical systems approach to provide principles for designing a system able to evolve in real time in response to non-discrete, multi-user gestural control of rich experiential scenarios that tap embodied, human experience (Varela [1], Dourish [2], Sha [3,4,5], Ingalls [6]). Thus, we seek to develop computational paradigms that allow designers and users to leverage the full potential of the increasing density of sensors and computational media in everyday situations by providing a wholly experiential means of controlling and interacting with dense sensors and media. One way to scaffold our technical and design imaginary is to use the model of the swimming pool in place of the model of a graph. How does the water coordinate its activity with the activity of its inhabitants and the wind blowing across its surface? Many forces modulate its movement and condition. Some are due to people swimming through the water, pushed by and pushing the liquid surrounding them. Others are due to the waves on its surface, or to currents distinguished by momentum, or, in the case of the deep ocean, by salinity and temperature. Still others are due to the wind, which acts continuously across a continuous surface – the continuously extended interface between the air and the water. Note that whereas we may regard a rock thrown into the water or a swimmer as a compact, point-like source of motive force, aggregates of entities, or, more essentially, extended continuous fields, do not fit this model of an atomic agent. Dyadic (1-1) relational interaction is a small, sparse subset of much richer fields of experiential dynamics. Thus, we seek a more ample way to conceive engagement between different fields of media in a responsive environment.

Fig. 1. Realtime corporeal interaction with dense, high-dimensional, GPU-accelerated simulation of atmospheric dynamics. On the order of 100 sound processes provide spatialized sonic textures with a palpable landscape for enactive, embodied engagement. B. Mechtley, C. Rawls, Synthesis 2018.

Our method is to look for computational adaptations of continuous (e.g. differential geometric or topological) models to the scientific analysis of dense, heterogeneous environments like weather systems and urban spaces. These continuous models complement discrete models (e.g., discrete graphs) of procedural computation processes. We adopt techniques from signal processing and computer science that are also shared with machine perception, fault-tolerant systems, or autonomous systems, but we do so with the distinctive intent to keep human-in-the-loop control of the experience, giving designers computational paradigms that leverage collective, embodied experience [1, 2, 7,8,9,10]. The three classes of continuous models we investigate are (1) homogeneous generalized computational physics of materials, (2) continuous evolution of metaphorical states, and (3) heterogeneous atmospheric models that mix, for example, agent-based models of urban dynamics, models of geophysics, or rule-based systems modeling interventions by large-scale sociopolitical institutions, all of which condition the lattice models of the atmosphere itself.

To calibrate these models of dense experiential systems against real-world use scenarios, we have leveraged substantial experience and collaborator expertise: live action in performative environments (dance, musical or theatrical performance, games) [6, 11,12,13,14], movement/gesture tracking in everyday and rehabilitative contexts [15, 16], and experiential atmospheric models based on work by experienced atmospheric scientists [17,18,19,20].

We first lay the framework for embodied, enactivist [1, 2, 7, 21] approaches to the design of computer interfaces, and more generally of responsive environments augmented by computation. In that context we will define what we mean by realtime, multimodal, whole-body, gestural and multi-person engagement with an immersive responsive environment (Fig. 1).

An important motivation and context for our work is the focus on whole experience, in the senses of James [22], contemporary phenomenological work on experience (Gendlin [23], Casey [24,25,26], Morris [10], Petitmengin [27]), and movement-based experience (Sheets-Johnstone [28,29,30]). Under these approaches, experience cannot be decomposed into a finite number of perceptual or functional component dimensions and reassembled in some linear superposition of independent features. Senses of rhythm and of mathematical pattern are examples of such irreducible apperceptions. Despite this irreducibility of experience, its non-decomposability into “independent” sensory dimensions, there are useful means of ascertaining accounts of experience that can be shared objectively across instances: notably the methodological and experimental approaches of Petitmengin [31], Sha [32, 33], and Bregman [34]. Indeed, as Bregman commented on subjectivity “versus” objectivity in his keynote on auditory scene analysis:

       At this point, I want to interject a few words about subjectivity and objectivity in psychological research. The personal experience of the researcher has not fared well as acceptable data for scientific psychology. Since the failure of Titchener’s Introspectionism, a very biased form of report of one’s experience, in the early twentieth century, and the rise of Behaviourism to replace it, scientific psychology has harboured a deep suspicion of the experience of the researcher as an acceptable tool in research.

        You would think that the study of perception would be exempt from this suspicion, since the subject matter of the psychology of perception is supposed to be about how a person’s experience is derived from sensory input. Instead, academic psychology, in its behaviouristic zeal, redefined perception as the ability to respond differently to different stimuli, bringing it into the behaviourist framework. We may be doing research nowadays on cognitive processes, but the research methods are, on the whole, still restricted to behaviouristic ones. Since it was a perceptual experience of my own (the rapid sequence of unrelated sounds) that set me off on a 40-year period of study of perceptual organization, I have always questioned the wisdom of this restriction....

        Sometimes we have used both types of measures, subjective rating scales and measures of accuracy, either in the same experiment or in a pair of related experiments. The two measures have given similar results, but the subjective rating scales have been more sensitive. I think the reason for their superiority is that they are a more direct measure of the experience, whereas turning one’s experience into the ability to form a discrimination between sounds brings in many other psychological processes that are involved in comparison and decision making.

        As a result of my belief in experience as an important part of Psychology, I’m going to try to describe some of my research on auditory perception, but I won’t give any data. Instead, I’m going to support my arguments with audio demonstrations to the extent that time permits. [34]

Experimental platforms scaffolding such whole experiences – experiential systems and responsive environments – have been built by Sundaram [35, 36], Wei [37], and others (see survey on responsive environments by Bullivant [38]). By embodiment we mean sense-making which is conditioned on one’s corporeal engagement with the material world. By material we mean the union of physical, energetic, social, and affective fields (Sha [4], Massumi [39]).

Francisco Varela, Evan Thompson, and others introduced the notion of enactive experience to describe how we progressively construct our sense of, concepts of, and know-how about the material world through engagement and empirical experience: “We propose the term enactive to emphasize the growing conviction that cognition is not the representation of a pregiven world by a pregiven mind but is rather the enactment of a world and a mind on the basis of a history of the variety of actions that a being in the world performs” (Varela [1]: 9). We extend that cognitivist sense of enaction to a more thoroughly processualist one of how subjects, organisms (Longo and Montévil [40]), technical ensembles (Simondon [41]), and more generally any individuals and their environment co-construct each other (Simondon [42, 43], Sha [4]) via structural interaction (Maturana [44, 45]).

We now turn to more specific qualities of experiential systems: that they are immersive, multimodal, realtime, and multi-person. Immersivity can be framed more precisely via the phenomenological distinction between acting, being, and sensing in the world without any reflection – thrownness (Geworfenheit), versus the state of being reflexively aware of one’s stance with respect to the world (called “defamiliarization” or Verfremdungseffekt in some technical contexts [46]). In this more precise sense, being immersed in a situation is independent of the sensory modalities that are being most exercised. One can be immersed in reading a book on one hand, and on the other, be largely “clinically” disengaged even in a full-body, physical interaction.

Our experiential systems are designed multimodally: the software framework for our experiential systems, SC, is designed for integrated, gestural interaction with complex models via multi-modal interfaces that allow fluid human-in-the-loop control of densities of data that approach a continuous distribution. The system evolves in real time in response to non-discrete, multi-user gestural control of rich experiential scenarios, which tap embodied, human experience (Varela [1], Dourish [2], Sha [3,4,5], Ingalls [6, 47]). Thus, we leverage the full potential of the increasing density of sensors and computational media in everyday situations by providing a wholly experiential means of controlling and interacting with dense models.

It is essential to underline that our interpretation of multimodality is sharply different from the standard sense of the simple union of computer-synthesized fields of light and sound (or other media). Rather than limit the design of the user experience to a small number of those digital synthetic media modalities, we start with the full sensorium given in physical experience – everything that one can feel, including floor and wall treatments, furniture, clothing, physical props, analog HVAC, illumination, and acoustics – and carefully modulate certain modalities: e.g. a sense of pressure or heat on the shoulder, a field of sound and vibration from underfoot, or the thrum of video or structured light on the skin and the floor. In other words, instead of “zeroing out” the world and presenting only a few synthetic bits of media against a perceptual void, we leverage the affordances of analog furniture, media, objects, and other, co-participant bodies (Fig. 2).

Fig. 2. Layered activity tracking and computational media processing for experiential environments.

Finally, our responsive environments are all designed for multi-person use, which requires a different sort of design than extrapolating from the design of “single-user interaction,” where a single user is seated in front of a screen with keyboard-and-mouse WIMP interfaces that can only be controlled by one person at a time. This is a concrete setting for designing for human-human and human-system interaction based on ensemble experience and on ensemble activity. Concretely, ensemble interaction concerns situations where there are three or more human participants, so that we do not fall back on social conventions encoded in dyadic interaction. This also sidesteps human-machine interaction design that is implicitly predicated on single-user WIMP interface design, including WWW document interfaces and most non-game “applications,” whether on mobile or desktop computers. A simple example of n-person engagement (\(n \ge 3\)) is walking in a circle to stir up the atmosphere or the ocean model to form a large vortex (Fig. 3).

Fig. 3. Ensembles (\(n \ge 3\)) steering a realtime simulation by coordinated whole-body interaction. B. Mechtley, M. Patzem, and C. Rawls. Synthesis 2018.

In parallel research, we have collaborated with experts in ensemble experience design from the areas of performing arts, exhibition design, and urban and landscape architecture, though those experts would not use the same term, and have broader sets of concerns than human-computer interaction. (Indeed those broader concerns provide useful ground-truth and use-scenario checks on the expressive power and robustness of our engineered systems.)

1.1 Steerable Scientific Simulations

Creative experimental scientific work relies on constructing fresh instruments of observation in tandem with fresh theoretical interpretations of freshly observed phenomena. We call this on-the-fly co-construction of theory, instrumentation and observation, which is characteristic of creative work in science as well as other disciplines, abductive method (Morris [48], Peirce [49], Psillos [50]).

Some computational science applications have also adopted human-in-the-loop modulation of parameters through the use of computational steering. In computational steering of simulations, investigators change parameters of computational models on the fly and immediately (or as close to immediately as possible) receive feedback on the effect, in parallel with the execution of the simulation. In practice, computational steering allows investigators to quickly explore alternative paths of evolution of system state, such as through introducing exogenous changes to boundary conditions or simulation parameters.
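To make the pattern concrete, a minimal steering loop can interleave simulation timesteps with asynchronous parameter updates, so that every change is reflected in the very next step. The sketch below is purely illustrative, with hypothetical parameter names and a trivial stand-in model; it is not the architecture of any of the frameworks cited below.

```python
# Minimal sketch of a computational steering loop (illustrative only).
# An investigator thread pushes parameter changes onto a queue while the
# simulation advances, so feedback on each change is nearly immediate.
import queue
import threading
import time

params = {"wind_speed": 1.0, "ground_temperature": 288.15}  # hypothetical names
updates = queue.Queue()

def investigator_console():
    # Stand-in for a GUI, tablet, or gestural interface that emits updates.
    time.sleep(0.5)
    updates.put(("wind_speed", 2.5))

def step_simulation(state, params, dt):
    # Placeholder for one timestep of the underlying numerical model.
    return state + dt * params["wind_speed"]

threading.Thread(target=investigator_console, daemon=True).start()

state, dt = 0.0, 0.1
for _ in range(20):
    while not updates.empty():            # apply any pending parameter changes
        key, value = updates.get()
        params[key] = value
    state = step_simulation(state, params, dt)
    print(f"state={state:.2f} params={params}")   # immediate feedback
    time.sleep(dt)
```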

Computational steering has been applied to the real-time control of scientific simulations, such as fluid dynamics [51] in general, air safety [52], flood management [53, 54], particle physics [55], astrophysics [56], and cardiology [57], and several frameworks have been created for integrating the methodology into new and existing simulations deployed on high-performance computing platforms, including SCIRun [58], RealityGrid [59], and WorkWays [60].

We conceptually extend the notion of computational steering to real-time human-in-the-loop modulation of any computationally-modulated environment where the results are immediately perceived, thus minimizing the time between configuration and analysis of a simulation. With advances in dense sensing modalities and experiential media, previous responsive media systems have expanded upon these primarily screen-based interactions in several aspects:

  1. Embodied, enactive environments allow comparatively unconstrained engagement with the computation, such as through full-body movement or the use of physical props or other aspects of the environment. For example, gestural input can afford more degrees of freedom, allowing multiple parameters to be controlled simultaneously, and physical props can be used to construct detailed geometry for more varied, non-parametric boundary conditions (see the sketch after this list).

  2. Applications range widely from basic experiential experiments (e.g. the relation between memory and corporeal movement, or rhythmic entrainment of ensembles of people and time-based media processes) to artistic installations and performance (e.g. Serra [61] and Timelenses [62]).

  3. Designing for whole body and ensemble engagement implies thick (Sha [5]: 72), multimodal, analog and digital engagement with the environment, as opposed to interacting along one or a few dimensions of sensory perception.
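As a toy illustration of the first point, a few tracked gestural features can drive several simulation parameters at once. The following sketch assumes hypothetical tracking input (two hand positions in floor coordinates) and hypothetical parameter names; it is not the SC framework's actual mapping.

```python
import numpy as np

def gesture_to_parameters(left_hand, right_hand, prev_centroid, dt):
    """Illustrative mapping from two tracked hand positions (metres, floor
    coordinates) to several simulation parameters simultaneously."""
    left, right = np.asarray(left_hand, float), np.asarray(right_hand, float)
    centroid = (left + right) / 2.0
    spread = np.linalg.norm(right - left)               # distance between hands
    velocity = (centroid - np.asarray(prev_centroid, float)) / dt
    speed = np.linalg.norm(velocity)
    heading = np.arctan2(velocity[1], velocity[0])      # sweep direction (rad)
    return {
        "wind_speed": 5.0 * speed,         # faster sweep -> stronger wind
        "wind_direction": heading,         # sweep heading -> wind heading
        "vapor_injection": 0.2 * spread,   # wider stance -> more vapor added
    }, centroid

# One frame of a hypothetical 30 Hz tracking stream:
params, prev = gesture_to_parameters([0.0, 0.0], [0.6, 0.1], [0.2, 0.0], dt=1/30)
print(params)
```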

In studying the use of responsive media environments for computational scientific investigation, our key interest lies in observing what types of behaviors in these systems could contribute to a scientific practice, such as rapidly “sketching” hypotheses, perhaps in advance of more numerically reproducible studies. With falling costs and advances in computing resources, both in HPC systems and in desktop hardware, many of the simulations that now require HPC systems may eventually be steerable at responsive rates, so we study the use of those models that can currently be simulated at these rates to understand where responsive interaction can fit into future computational science workflows.

2 EMA: An Experiential Model of the Atmosphere

As an initial exploration, EMA is installed in the Intelligent Stage (“iStage”) space in Synthesis at the School of Arts, Media, and Engineering at Arizona State University: a \(30\times 30\)-foot black box space with a sprung dance floor, a theater grid with 16 DMX-controllable RGB LED theatrical lights and additional floor-mounted lights, 4K floor and vertical scrim LED projections, horizontal ceiling-mounted and vertical infrared-filtered cameras, infrared light emitters, floor-mounted contact and boundary microphones, grid-mounted microphones, and 8.2-channel surround audio. The space is designed to be modular to support multiple responsive environments, with flexible HD video routing and AVB audio routing hardware. The Synthesis research team has created a suite of software tools in Max/MSP/Jitter, known as SC, for animating the space and creating responsive environments, allowing both novice and expert developers to create new environments using its frameworks and abstractions. This is refined from multiple generations of researchers working on predecessor “media choreography” composition systems for responsive environments (Sha [5, 63]).

As a responsive, steerable model of warm cloud physics, EMA satisfies many of the objectives of our research into dense computational media that can be steered through collaborative human gesture and physical configuration of the space. Additionally, using a physical model of atmospheric dynamics allows us to explore human interaction with a simulated physical model that leverages participants’ existing physical intuition of matter, exhibits phase changes, and can simulate phenomena at different spatial and temporal scales, all contributing to a rich set of processes and forms that can be studied by investigators in the space.

EMA implements an incompressible fluid flow model along with additional computation of buoyancy, condensation and evaporation of water vapor, and thermodynamics. During each timestep of the simulation, the model allows for external video textures to manipulate the simulated fields, including air velocity, pressure, water vapor, liquid water, temperature, and viscosity. Global scalar parameters can also be manipulated in real-time, such as ground pressure and temperature, altitude, spatial and temporal scale, specific heat capacities of dry air and water vapor, external wind speed and direction, and gravity magnitude and direction. For mathematical and implementation details, see [64].
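The broad shape of such a timestep can be outlined as follows. This is only a schematic sketch with stubbed operators so that it runs; the actual GPU implementation and governing equations are given in [64], and the field, parameter, and function names here are placeholders rather than the installation's code.

```python
import numpy as np

def advect(field, velocity, dt):
    return field  # placeholder: semi-Lagrangian advection in a real solver

def apply_buoyancy(fields, params, dt):
    # Warm air rises: add an upward force proportional to temperature.
    lift = fields["temperature"][..., None] * np.array([0.0, 1.0])
    return fields["velocity"] + dt * params["buoyancy"] * lift

def condense_and_evaporate(fields, params, dt):
    return fields  # placeholder: vapor/liquid exchange with latent heating

def project(velocity):
    return velocity, np.zeros(velocity.shape[:2])  # placeholder pressure solve

def timestep(fields, params, textures, dt):
    # 1. External video textures (tracked bodies, props) perturb chosen fields.
    for name, tex in textures.items():
        fields[name] = fields[name] + params["injection_gain"] * tex
    # 2. Advect scalar fields and velocity through the current velocity field.
    for name in ("temperature", "vapor", "liquid"):
        fields[name] = advect(fields[name], fields["velocity"], dt)
    fields["velocity"] = advect(fields["velocity"], fields["velocity"], dt)
    # 3. Buoyancy, 4. moisture physics, 5. incompressibility projection.
    fields["velocity"] = apply_buoyancy(fields, params, dt)
    fields = condense_and_evaporate(fields, params, dt)
    fields["velocity"], fields["pressure"] = project(fields["velocity"])
    return fields

n = 64
fields = {k: np.zeros((n, n)) for k in ("temperature", "vapor", "liquid", "pressure")}
fields["velocity"] = np.zeros((n, n, 2))
params = {"injection_gain": 0.1, "buoyancy": 0.05}
fields = timestep(fields, params, {"vapor": np.random.rand(n, n)}, dt=0.1)
```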

2.1 Visualization

The model’s fields can then be viewed with a number of different visualization modes, including conventional pseudocolor images given a colormap, particle flow fields, tracer particles, line integral convolution, vector feather plots, and additional artistic renderings composed of multiple fields, such as temperature and different phases of water. The base set of visual mappings has been developed to reflect conventional scientific visualizations from familiar platforms such as Matplotlib, ParaView, and MATLAB. Each of these visualization modes can be seamlessly interchanged using a mobile tablet interface, and EMA supports layering multiple visualizations, such as viewing flow lines or specific tracer particles on top of a composite rendering of air temperature, water vapor, and condensed liquid water. The tablet interface also allows viewers to adjust scaling parameters, color maps, and compositing in situ, without the need to return to a desktop interface, so that all investigative activity can remain embedded within the simulation environment.
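As an illustration of layering two such modes, a pseudocolor temperature field with an overlaid, subsampled vector (feather/quiver) plot can be sketched in Matplotlib as below. This is not the Jitter/GPU rendering pipeline used in the installation, and the toy fields are invented for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative layering of two visualization modes: a pseudocolor temperature
# field underneath a subsampled quiver plot of the velocity field.
n = 64
y, x = np.mgrid[0:n, 0:n]
temperature = 288.15 - 0.05 * y                 # toy vertical gradient (K)
u = np.cos(y / 8.0)                             # toy velocity components
v = np.sin(x / 8.0)

fig, ax = plt.subplots()
im = ax.imshow(temperature, origin="lower", cmap="inferno")
fig.colorbar(im, ax=ax, label="temperature (K)")
step = 4                                        # subsample vectors for legibility
ax.quiver(x[::step, ::step], y[::step, ::step],
          u[::step, ::step], v[::step, ::step], color="white", scale=40)
ax.set_title("Pseudocolor temperature with overlaid velocity vectors")
plt.show()
```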

2.2 Sonification

The sonic affordances of the space can also be used to communicate important dynamics within the simulation that may be difficult to attend to visually, such as spatialized activity of air flow or the position and velocity of moving tracer particles. We have implemented two modes of sonification in the environment that allow investigators to sonify activity within regions and at particular points in space. In particular, a field-based sonification tool allows multiple participants to scale a bounding rectangle around their bodies or static objects to sonify the dynamics of specific regions of space. The underlying field is then subdivided into a variable number of zones, and the average changes of the field within the zones are sent through a multi-channel sample player and filterbank and ambisonically spatialized around the participants [64].
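The analysis half of this field-based sonification can be sketched as follows: the rectangle selected by participants is divided into zones, and the mean change of the field is computed per zone. These per-zone values are what would then drive the sample player and filterbank; the synthesis and ambisonic spatialization themselves run in Max/MSP and are not shown, and the function and argument names here are illustrative.

```python
import numpy as np

def zone_activity(field_prev, field_curr, rect, zones=(4, 4)):
    """Mean absolute change of a field per zone within a bounding rectangle."""
    x0, y0, x1, y1 = rect                       # bounding rectangle in pixels
    delta = np.abs(field_curr[y0:y1, x0:x1] - field_prev[y0:y1, x0:x1])
    rows, cols = zones
    h, w = delta.shape[0] // rows, delta.shape[1] // cols
    return np.array([[delta[r*h:(r+1)*h, c*w:(c+1)*w].mean()
                      for c in range(cols)] for r in range(rows)])

prev = np.random.rand(128, 128)
curr = prev + 0.01 * np.random.rand(128, 128)
levels = zone_activity(prev, curr, rect=(20, 20, 100, 100))
print(levels.round(4))   # one control value per zone, e.g. for filter gains
```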

In a separate particle-based sound synthesizer, individual tracer particles, which follow the velocity of the air, are simulated in the model. Each particle is mapped to a separate voice, and its speed and direction are mapped onto different aspects of the synthesized sound. To allow designers or investigators to choose informative sound textures, they can select an audio file or recorded audio sample, which is then sampled with a granular synthesizer. The angle of velocity of each tracer particle is mapped to the center frequency of a resonant bandpass filter, and the speed of the particle is mapped to the volume of its voice. This mapping is particularly effective at sonifying sudden changes in particle velocity, such as when a particle enters a vortex or suddenly encounters a gust of wind. When a particle is in circular motion, for example, the synthesized voice makes repeating sweeps up and down in frequency content. Each particle is then ambisonically spatialized within the space to allow participants to follow whole-field dynamics even when they are visually focused on a particular region of the simulation.
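A sketch of this per-particle control mapping (velocity angle to filter center frequency, speed to volume) is given below. Only the control computation is shown; the granular sample playback, resonant bandpass filtering, and ambisonic panning run in Max/MSP, and the frequency range and reference speed are illustrative choices rather than the installation's actual values.

```python
import numpy as np

def particle_voice_controls(velocity, f_lo=200.0, f_hi=4000.0, speed_ref=5.0):
    """Map one tracer particle's velocity to controls for its synthesis voice."""
    vx, vy = velocity
    speed = np.hypot(vx, vy)
    angle = (np.arctan2(vy, vx) + np.pi) / (2 * np.pi)   # normalize to [0, 1)
    center_freq = f_lo * (f_hi / f_lo) ** angle          # angle -> filter freq
    gain = np.clip(speed / speed_ref, 0.0, 1.0)          # speed -> volume
    return {"center_freq_hz": center_freq, "gain": gain}

# A particle circling a vortex sweeps its velocity angle, so its filter
# frequency sweeps repeatedly up and down, as described above.
for t in np.linspace(0, 2 * np.pi, 5):
    print(particle_voice_controls((np.cos(t), np.sin(t))))
```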

3 Enactive Scenarios

The basic dimensions of our enactive scenarios are the number of people and props (physical manipulables that can be tracked), the weather phenomenon being simulated and experienced, and the tools or instruments for inspecting or modulating the state of the simulation.

As mentioned earlier, we distinguish between the experience of one, two, or ensembles (\(n \ge 3\)) of people co-constructing an experience in realtime with the steerable environment, and the experiments on how the simulation is experienced are designed accordingly. For each of the three scenarios, we list recorded experimental behavior from open-ended sessions working with the model as a solo investigator, as a pair, as a pair using objects in the lab space to construct experiments, and as a guided ensemble. In the ensemble scenario, people dispensed with instruments and used their bodies to walk in a coordinated way to steer the simulation holistically. These scenarios include:

  • Cloud formation and air flow on a horizontal plane: a horizontal simulation with a ceiling-mounted camera is constructed where each square pixel corresponds to \(900\,\mathrm{m}^{2}\) of simulated space, the simulated ambient temperature is 150 K, and motion of entities in the space is mapped to an increase in water vapor, which condenses nearly instantaneously.

  • Cloud formation on a vertical plane: a simulation with a vertically oriented camera facing an opposing vertical projection surface is constructed where each pixel corresponds to \(100\,\mathrm{m}^{2}\) of simulated space and the lapse rate of ambient temperature with altitude is 6.5 K/km with a sea-level temperature of 288.15 K, resulting in a temperature gradient ranging from 288.15–241.35 K (see the note after this list). Presence of bodies and objects in the space acts as an obstruction to fluid flow, while movement is mapped to an increase in water vapor and temperature, causing buoyant lift and eventual condensation, usually slightly above head height when participants are standing approximately 5 ft from the projection.

  • Cloud formation and air flow with wind on a vertical plane: a simulation parameterized similarly to the previous scenario, but an external, constant source of downstage velocity (wind) is added, allowing participants to observe the effects of air flow around themselves and objects.
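For reference, the ambient temperature profile used in the two vertical scenarios follows from the stated constants as \(T(z) = T_0 - \Gamma z\), with sea-level temperature \(T_0 = 288.15\) K and lapse rate \(\Gamma = 6.5\) K/km. The quoted lower bound is then \(288.15 - 6.5 \times 7.2 = 241.35\) K, i.e., the stated gradient corresponds to a simulated vertical extent of roughly 7.2 km (our inference from the listed parameters, not a figure stated in the scenario descriptions).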

Table 1. Observed experimentation strategies in three simulations.

Table 1 summarizes observations of novel, investigative participant behavior with the simulation in three different scenarios. Increasing the number of participants in the space can be seen to increase joint expressive capabilities, through actions such as coordinated movement, manipulation of instruments, and sharing objects or physical space in the eye of the camera. Inclusion of physical instruments in the space, ranging from isolated objects such as pipes and rope to furniture such as stools and tables, allows participants to construct stable fluid boundaries or affect larger and more complex regions of the simulation than they could with their bodies alone, for example by spinning objects overhead or jointly moving large objects between one another.

4 Participant Response

When there are two or more people in the space, we can use “second person” elicitation techniques: instead of providing the participants with a pre-designed set of descriptors and metrics from which to choose a classification of their experience, we ask them to converse with one another and arrive at a commonly agreed-upon account of what they experienced and what they thought was happening. This phenomenologically informed experimental method has been elaborately developed by C. Petitmengin and colleagues for obtaining shared, and thus socially objective, accounts of thick experience (Petitmengin [27, 65, 66]).

As an example of coordinated activity with shared physical instruments, in Dialog 1 two investigators, given names A and B, work within the fluid flow and cloud formation on a horizontal plane scenario, visualizing the flow velocity with a mode mapping angle to hue and magnitude to intensity. An external source of wind flows downstage. Within the course of 20 min, the two participants constructed several different shapes to act as obstacles to fluid flow and proposed and tested hypotheses regarding the relationship of the size and periodicity of vortex sheets to the size of a gap between obstacles. Being able to pause the simulation allowed for more thorough examination of a phenomenon, and using multiple visualization types allowed the participants to test a hypothesis about the relationship of fluid velocity and pressure surrounding an object. Additionally, the experience prompted further discussion about a specific topic that would continue outside the scenario.

In ensemble work, with anywhere from three to on the order of 40 people, the facilitator guided joint activities by suggesting coordinated or disaggregated movement. We are learning to exploit the unique features of having a large common space in which large numbers of participants can jointly steer the simulation without props, simply by coordinating their whole-body interaction with EMA. One persuasive and enlightening instance of such coordinated steering was when participants walked in a ring to create and move a common vortex (hurricane) while the vertical projection showed warm air condensing into clouds.


5 Conclusions

From our early trials with the Experiential Model of the Atmosphere, we have demonstrated the potential for responsive environments, which can respond equally to the activity of individuals, ensembles of people, and physical objects and other entities within the space, to find use in computational science practice. Moving beyond traditional single-user WIMP interfaces and into embodied, enactive computing environments opens up many new interaction modalities: simulations can be steered through gestural interaction, where people themselves are the scientific instruments, and through quickly prototyping and manipulating physical instruments as extended interfaces.

From our recorded experimentation sessions, we have witnessed novel gestural interaction, improvised coordination among individuals and within groups, on-the-fly construction of instrumentation, and abductive hypothesis formation and testing. Providing an enactive environment conducive to comparatively unconstrained exploration of a model, that is, to play (Huizinga [67], Sutton-Smith [68], Sha [5]), as compared with traditional interfaces or simulation parameterization scripts, can be a productive step toward eliciting original scientific thought in computational scientific practice.