1 Introduction

There are numerous visualization tools that make it possible to represent complex data sets using two-dimensional diagrams, animations, and virtual models. Interactive functions allow users to filter, sort, and compare different sets of variables to highlight specific relationships. Microsoft Excel, Tableau, Google Chart Tools, and Fusion Tables are just a few of the tools users can access to visualize and share data. Programming environments for data representation include Processing (http://processing.org/), which designers and artists favor for creating animated visualizations that exist outside the browser, and D3.js (http://d3js.org/), a JavaScript library for creating web-based interactive data visualizations that was launched in 2011 by the Stanford Visualization Group.
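As a hedged illustration of the kind of web-based representation D3.js supports, the sketch below binds a small array of sample values to SVG rectangles to produce a simple bar chart; the data values, pixel dimensions, and scale are arbitrary choices for the example, not part of any tool discussed here.

```typescript
// Minimal D3.js bar chart sketch (assumes D3 v4 or later).
// The data array and pixel dimensions are hypothetical.
import * as d3 from "d3";

const data = [12, 28, 7, 19, 23]; // arbitrary sample values
const width = 300;
const height = 120;
const barWidth = width / data.length;

// Linear scale mapping data values to bar heights in pixels.
const y = d3.scaleLinear()
  .domain([0, d3.max(data) ?? 0])
  .range([0, height]);

// Create an SVG container and one rectangle per data value.
const svg = d3.select("body")
  .append("svg")
  .attr("width", width)
  .attr("height", height);

svg.selectAll("rect")
  .data(data)
  .enter()
  .append("rect")
  .attr("x", (_, i) => i * barWidth)
  .attr("y", (d) => height - y(d)) // anchor bars to the bottom edge
  .attr("width", barWidth - 2)     // small gap between bars
  .attr("height", (d) => y(d));
```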

However, three-dimensional data visualizations that incorporate tactile objects, physical spaces, and blended spaces (that integrate virtual and physical data representations) can enhance our understanding of data relationships by tapping into our intuitive abilities to process data by using multiple senses. These representations use symbolic, iconic, and indexical references to data which may be defined by different sensory modalities [1]. Three-dimensional models incorporate interaction, kinesthetic design, embodiment, cross-modal perception, and multimodal semantic structures that define a new type of information aesthetic.

2 Three-Dimensional Data Representation

There are several forms of three-dimensional data representations that incorporate physical objects or physical space. Data sculptures are data-based, physical objects that signify data relationships [2]. They can range from three-dimensional extensions of two-dimensional graphs to unique abstract forms and metaphorical representations.

Research has shown that external representations can enhance our understanding of numerical tasks [3]. Vande Moere and Patel [4] demonstrated that physical data sculptures create dynamic narratives that illustrate process as well as outcomes. Data sculptures can represent quantitative relationships and qualitative information such as emotion and context. Physical materials or objects can represent literal connections with the data variables. For example, one of the data sculptures cited by Vande Moere and Patel [4] uses different types of cables (electric, electronic, headphone, phone, coaxial, and network cables) to construct a physical timeline that represents an individual’s daily activities that use cables (pp. 10–11).

In interaction design, interfaces that use tangible connections to the physical world engage the senses and augment the learning experience [5, 6]. Dourish [6] noted that interaction with physical objects enhances cognition because tangible computing “is a physical realization of a symbolic reality, and the symbolic reality is, often, the world being manipulated” (p. 207). Tangible interface designs can be applied to three-dimensional models and metaphorical references for data representation. For example, haptic interfaces can use inertia, force, torque, vibration, texture, and temperature to represent data variables and relationships in the physical world. Haptic interfaces enable users to interpret spatial relationships through the sense of touch. Palmerius [7] pointed out that “our sense of touch and kinesthetics is capable of supplying large amounts of intuitive information about the location, structure, stiffness and other material properties of objects” (p. 154).
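As a minimal sketch of how one of these haptic channels might carry a data variable, the example below maps a normalized value onto vibration pulse length using the browser's Vibration API (navigator.vibrate); the 20–200 ms mapping range and the function name are illustrative assumptions, not a technique from the cited research.

```typescript
// Sketch: mapping a normalized data value (0..1) onto vibration pulse
// length with the browser Vibration API. The pulse range and the
// function name are hypothetical.

function vibrateForValue(value: number): void {
  // Clamp the incoming value to the expected 0..1 range.
  const v = Math.min(1, Math.max(0, value));

  // Larger data values produce longer (stronger-feeling) pulses.
  const pulseMs = Math.round(20 + v * 180);

  // navigator.vibrate is not supported everywhere (e.g., Safari),
  // so guard the call.
  if ("vibrate" in navigator) {
    navigator.vibrate([pulseMs, 50, pulseMs]); // pulse, pause, pulse
  }
}

// Example: represent a temperature reading scaled to 0..1.
vibrateForValue(0.75);
```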

Three-dimensional virtual models can be integrated into the surrounding physical space, allowing users to move in and around data representations projected into the environment. These environments may include ambient displays that turn elements in the surrounding architectural space, including physical objects, gases, and liquids, into “interfaces” that represent data [8]. Ambient displays communicate specific details as well as general information about the data variables and relationships. Different forms of sensory stimuli can represent the data and create multiple levels of perception that lead to alternative perspectives and a holistic understanding of the information. Visual data representations can be augmented by auditory displays. For example, weather data might be enhanced by ambient sounds of rain or wind that reflect the force and velocity of these elements. The temperature of the room can mirror the actual outside temperature. With ambient displays, users can employ multiple senses to analyze relationships that might otherwise be missed [8].
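A minimal sketch of such an ambient auditory display, assuming a wind-velocity reading normalized to the range 0..1, might map the value onto the loudness and pitch of a continuous tone with the Web Audio API; the parameter ranges and timbre are illustrative choices, not a prescription from the cited work.

```typescript
// Sketch: an ambient auditory display that maps normalized wind velocity
// (0..1) onto the loudness and pitch of a continuous tone using the
// Web Audio API. Parameter ranges are hypothetical.

const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();

osc.type = "sawtooth"; // a rougher timbre, loosely wind-like
osc.connect(gain);
gain.connect(ctx.destination);
osc.start();

function updateWindDisplay(windVelocity: number): void {
  const v = Math.min(1, Math.max(0, windVelocity));

  // Stronger wind -> louder and slightly higher-pitched.
  // setTargetAtTime smooths the transition so the display stays ambient.
  gain.gain.setTargetAtTime(0.05 + v * 0.25, ctx.currentTime, 0.5);
  osc.frequency.setTargetAtTime(80 + v * 120, ctx.currentTime, 0.5);
}

// Example: update the display as a new weather reading arrives.
updateWindDisplay(0.4);
```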

However, the use of many different media and types of data representation can be distracting and overload the user with too much information. Current research is investigating the thresholds for ambient data designs to determine when there are too many media and data representations and how these thresholds transition from background (ambient) data to foreground data during different tasks [8].

Physical and virtual three-dimensional representations of data also provide another axis for mapping relationships, including dynamic changes over space and time. Three-dimensional models generate alternative perspectives and angles for viewing information. These different perspectives can highlight unexpected data relationships that might not be visible with two-dimensional representations.

Ameres and Clement [9], researchers at Rensselaer Polytechnic Institute (Troy, NY), have developed a unique three-dimensional computing interface called Campfire that allows a small group of users to collaborate on information analysis. The platform is a three-dimensional projection device, about six feet in diameter and two feet high, that allows participants to view data projected onto the walls and flat circular floor of the device (Fig. 1). Additional information can also be projected onto the walls in the room that houses the device. The goal is to expand the power of computers in collaborative decision-making by allowing users to intuitively share and manipulate data. Ameres [9] believes Campfire has the potential to enable users to “look inside the data” (para. 8) and expand data exploration beyond three-dimensional representations and traditional “one-to-one correlations between dimensionality and presentation” (para. 7).

Fig. 1. The Campfire technology allows researchers to project data onto the walls and floor of a three-dimensional display. The goal is to provide a collaborative space that encourages new perspectives that expand beyond traditional 2D and 3D representations of data relationships (from research by E. Ameres and G. Clement; photo credit: G. Clement).

3 Kinesthetic Design and Embodiment

Three-dimensional models invite interaction and exploration, which can also lead to new insights about the data [4]. This type of interaction design, called kinesthetic design, helps the user understand the visual and cognitive relationships in the spatial representation of the information [10]. Berkeley [11] demonstrated that kinesthetic and tactile experiences shape our perception of space. Klemmer, Hartmann, and Takayama [5] noted that “our bodies play a central role in shaping human experience in the world, understanding of the world, and interactions in the world” (p. 140). When we physically interact with models or other tactile representations of data, we use reflective practice to work through ideas rather than just think about them [5].

Physical interaction is defined as an epistemic action that helps us understand relationships [12, 13]. Researchers have documented the significance of “drawing” relationships in physical space with hand and arm movement to clarify conceptual relationships and enhance memory and recall [14, 15]. Haptic interfaces and interactive hardware use physical movement to augment our understanding of information by leveraging “body-centric experiential cognition” [5, p. 144].

Vande Moere and Patel [4] used the term “embodiment” to describe the physical materialization of the data relationships in data sculptures. Embodiment also refers to the viewer’s interpretation of the data through the perception of the data in the physical world. Researchers have noted that we perceive information in relation to our orientation [16]. We intuitively learn about audio, visual, spatial, and temporal relationships by moving in physical environments and touching objects. Piaget [17] noted that logic and the cognitive processing of information are derived from physical and mental interaction, and it is the coordination of action that leads to reflective abstraction.

The cognitive semantics theory of conceptual metaphor states that logic and reasoning are founded on image schemas formed by “patterns of our bodily orientations, movements, and interaction” that we develop into abstract references [18, p. 90]. As a result, physical movement through space and interaction with tangible objects leads to symbolic representations and quantitative analyses [19, p. 2]. As we use gestures and objects, we gain new perspectives and see additional relationships based on our physical interaction with the objects. Abrahamson and Lindgren [19] noted that “we develop the skill of controlling and interpreting the world through the mediating artifact” (p. 4).

Gestures and bodily movements are also intuitive ways of learning and communicating because they constitute a universal visual language that is based on shared and tangible experiences [20]. LeBaron and Streeck [20] pointed out that gestures provide a bridge between tactile experiences and the abstract conceptualization of the experiences. They highlighted the work of the French philosopher Condillac who felt gestures “constituted the original, natural language of humankind” because they formed symbols and a social language based on common experiences [20, p. 118]. Condillac [21] called these symbols or signs sensations transformées or transformed sensations (p. 61) because they referred to “the entire complex of affect, desire, sensory perception, and motor action that makes up what nowadays we might call ‘embodied experience’” [20, p. 118].

Gestures can play an important role in kinesthetic design for multisensory data representation. Research has shown that gestures increase creativity [22], reduce cognitive overhead [23], and help us translate our experiences with objects into cognitive interpretations [24, 25]. We have already seen how interactive phones and tablets make use of our intuitive understanding of gesture to facilitate interaction with mobile devices and engage us in the communication process.

4 Cross-Modal Perception

Research has shown that we intuitively integrate stimuli from different sensory modalities. The multisensory integration of audio and visual stimuli is a physiological process that takes place within the neurons in the brain [26–28]. Researchers have identified enhanced activity in the visual cortex in congenitally blind people when they analyze speech [29], moving sounds [30], or localized sounds [31].

Research has shown that cross-modal perception heightens perceptual awareness and enhances our ability to process information from individual sensory modalities, whether the combinations of stimuli are organized or random [32–35]. Freides [36] concluded that perception that involves more than one sensory modality is more accurate than perception that relies on a single sense. This is especially true if the cross-modal perception involves the integration of visual or audio information with haptic and kinesthetic stimuli.

There has been extensive research on cross-modal perception that involves the integration of audio and visual stimuli. Research has shown that the perception of visual information is altered when sound is added to the visuals [37–40]. Vroomen and de Gelder [37] also demonstrated that the temporal organization of auditory stimuli impacts visual perception. A random high tone (in a sequence of low tones) improved the perception of a visual target when the tone and the visual stimuli were presented synchronously. However, there was no effect when the high tone was presented before the visual information. The effect was also reduced when there was less contrast between the high and low tones, and when the high tone was part of a melody.

Sound can enhance the detection of specific individual visual elements as well as improve the detection of motion [28, 37]. Beer and Watanabe [28] demonstrated that visual motion detection improved when sounds were paired simultaneously with the visual stimuli. Chen and Yeh [41] discovered that the addition of repetitive sounds to visuals alleviated “repetition blindness,” which is the failure to perceive visuals that repeat in rapid succession.

Visual and auditory stimuli can also impact the perception of spatial location. Audio and visual stimuli that are synchronized, but exist in different spatial locations, may appear to come from the same location [42–45]. In addition, research has shown that visual and auditory stimuli that come from the same location seem to emanate from the same source if the visual stimuli precede the sound by 50 ms [46, 47]. Talsma, Senkowski, and Woldorff [48] concluded that this timing difference is due to the different velocities of light and sound, which have caused the brain to develop a higher neural transmission rate for auditory stimuli to compensate for the fact that sound reaches the auditory nerve approximately 50 ms after visual stimuli.
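To make the timing concrete, the sketch below shows one way a multisensory display might present a visual flash roughly 50 ms ahead of its paired tone so the two stimuli read as a single event; the 50 ms constant reflects the research cited above, but the scheduling approach, element id, and tone parameters are illustrative assumptions.

```typescript
// Sketch: presenting a visual flash ~50 ms before its paired tone so the
// two stimuli are perceived as a single event (per the research above).
// The element id and tone parameters are hypothetical.

const audioCtx = new AudioContext();
const VISUAL_LEAD_MS = 50; // visual stimulus leads the sound by ~50 ms

function flashThenBeep(element: HTMLElement): void {
  // 1. Trigger the visual stimulus immediately.
  element.style.backgroundColor = "white";
  setTimeout(() => { element.style.backgroundColor = "black"; }, 100);

  // 2. Schedule the tone 50 ms later on the audio clock, which is more
  //    precise than setTimeout for audio events.
  const osc = audioCtx.createOscillator();
  osc.frequency.value = 440;
  osc.connect(audioCtx.destination);
  const startAt = audioCtx.currentTime + VISUAL_LEAD_MS / 1000;
  osc.start(startAt);
  osc.stop(startAt + 0.1); // 100 ms beep
}

// Example usage with a hypothetical target element:
const target = document.getElementById("stimulus");
if (target) flashThenBeep(target);
```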

The different velocities of auditory and visual stimuli also impact the perception of time and whether or not sounds and visuals appear to be synchronized. There has been conflicting research in this area, with some research showing that the auditory stimuli must come first in order for sounds to appear to be simultaneous with visual stimuli [49], while other research indicated that the visual stimuli must come first [50–52]. These different findings suggested that other variables, in addition to velocity, impact how we perceive the temporal order and synchronicity of auditory and visual stimuli. Research has indicated that the relative intensities of sensory stimuli affect the perception of temporal order by showing that a stimulus with a higher intensity was perceived before a stimulus with a lower intensity [53]. Boenke, Deliano, and Ohl [54] confirmed that intensity plays a role in the temporal perception of auditory and visual stimuli. They further defined the temporal dynamics of auditory and visual stimuli by showing that the duration of a stimulus also impacts the perception of time, noting that asynchronies in the perception of multiple stimuli appear to be stabilized when the duration of the stimuli is increased [54].

Finally, Freides [36] noted that with complex spatial or temporal pattern recognition, the sensory modality used to represent the data is more critical than the contextual and parametric variables themselves because each modality processes information in a different way, and we automatically use the modality best suited to process variables that represent spatial, temporal, tactile, or kinesthetic relationships.

Research in cross-modal perception plays an important role in the design of multisensory data representations. By using multiple sensory modalities, it is possible to expand the number of data variables that can be represented simultaneously and increase the potential for discovering patterns, trends, anomalies, and outliers. Cross-modal stimuli can enhance the perception of visual and audio information, and they can impact the perception of spatial and temporal relationships. However, when different sensory modalities are used to represent multiple variables in a complex information space, the choice of media is not the only factor to consider. As indicated in the research, other important factors that impact perception include how and when the stimuli are introduced and the location, intensity, speed, and duration of the stimuli. Research has shown that random sounds can enhance the perception of visual information. However, in multisensory data design, the use of auditory stimuli to represent data may result in repetitive or recursive audio patterns, and it is not clear from the current research in cross-modal perception how repetitive or recursive patterns impact the perception of visual stimuli and the perception of temporal and spatial relationships in data sets.
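One hedged way to keep these perceptual factors explicit during design is to record each variable-to-modality assignment as a structured entry that carries the parameters the research identifies (onset timing, intensity, duration, location); the sketch below shows one possible data structure, with entirely hypothetical field values, plus a simple heuristic check for modality overload.

```typescript
// Sketch: a data structure for multisensory variable mapping that makes
// the perceptually relevant parameters (modality, timing, intensity,
// duration, location) explicit design decisions. All values hypothetical.

type Modality = "visual" | "auditory" | "haptic" | "ambient";

interface ChannelAssignment {
  variable: string;   // the data variable being represented
  modality: Modality; // which sense carries it
  onsetMs: number;    // stimulus onset relative to the data event
  intensity: number;  // normalized 0..1 (research: affects perceived order)
  durationMs: number; // research: longer durations stabilize asynchronies
  location?: string;  // spatial placement, if the modality is localized
}

const weatherDisplay: ChannelAssignment[] = [
  { variable: "temperature", modality: "visual",   onsetMs: 0,  intensity: 0.8, durationMs: 1000 },
  { variable: "windSpeed",   modality: "auditory", onsetMs: 50, intensity: 0.5, durationMs: 1000 },
  { variable: "rainfall",    modality: "haptic",   onsetMs: 50, intensity: 0.6, durationMs: 300  },
];

// A design-time check: flag assignments that crowd the same modality,
// one heuristic response to the overload concern raised above.
function overloadedModalities(plan: ChannelAssignment[]): Modality[] {
  const counts = new Map<Modality, number>();
  for (const a of plan) counts.set(a.modality, (counts.get(a.modality) ?? 0) + 1);
  return [...counts.entries()].filter(([, n]) => n > 2).map(([m]) => m);
}

console.log(overloadedModalities(weatherDisplay)); // [] for this plan
```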

5 Aesthetics of Data Representation

Aesthetics is another design element that impacts the interpretation of data representations [55, 56]. Information aesthetics refers to the way design is used to organize data and define relationships. Researchers have broadened the definition to refer to the user experience, engagement, and interaction with the data representations, as opposed to merely defining patterns and trends. This definition also highlights the narratives and underlying processes and principles represented by the data [4]. Information aesthetics is also defined by the database design and the way information is organized, filtered, and retrieved to form different associations [57].

Visual and audio designs create relationships that we perceive as “aesthetically pleasing” because they adhere to principles of design, defined by artists, designers, and musicians, that we have learned over time. Aesthetically pleasing designs define “good Gestalt” and use Gestalt principles of perception to help us simplify and organize information intuitively.

Information aesthetics, based on these design concepts, has been applied to graph theory and design [58] to improve the user’s ability to locate information, compare relationships, and complete tasks. With interactive systems, research has shown that the aesthetics of an interface design can impact user engagement, completion time, and error rate [59–61]. In these research experiments, the aesthetics of each design was defined by Gestalt laws of perception and grouping (similarity, proximity, continuation, closure, figure/ground), as well as established concepts in visual design theory that define how to use “harmonious” color palettes, contrast, focal points, balance, symmetry, and asymmetry. In some cases, an aesthetically pleasing information design or interface design did not yield the fastest time in task completion, but the visual appeal of the design encouraged the users to stay engaged and, ultimately, complete the tasks [62].

However, multisensory data representation can result in unfamiliar audiovisual patterns that do not conform to established principles of design. Multisensory data representation and cross-modal perception are defining new dimensions in information aesthetics that impact the interpretation of data relationships. We have considerable experience reading linear and hierarchical charts, but as we explore new forms of data representation that combine different sensory modalities, physical and virtual spaces, ambient displays, haptic interfaces, and interaction design, we are defining new ways of using perception and cognition to analyze and interpret complex relationships. For example, with the Campfire example previously discussed, participants are presented with an open space in the center of the device that does not contain specific information. However, the space signifies connections between the data on the sides and bottom of the display. The participants can use this space to create cognitive connections between the physical and virtual representations of the data: connections that define additional dimensions that expand beyond two-dimensional data charts and the three-dimensional properties of the display itself.

Kinesthetic design in data representation is also defining new dimensions in the aesthetics of information design. In interactive sports simulators, where the participant performs specific physical motions (e.g., swinging a golf club, throwing/kicking a ball) to produce actions and events in the virtual game, the participant’s physical interaction promotes engagement and creates mental and physical connections with the information in the virtual space. We can apply these concepts from game design to interactive data representation and use embodiment, spatial movement/distance, rhythm, and time to define data relationships. Kinesthetic design adds sensory information to the user experience that augments the virtual representations of the data and creates a holistic approach to data analysis and interpretation.
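As an illustrative sketch of this kind of kinesthetic mapping, the code below derives gesture speed from standard pointer events and uses it to drive a hypothetical animation-rate parameter in a virtual data space; the mapping constants and the setAnimationRate hook are assumptions for the example, not a technique from the simulators described above.

```typescript
// Sketch: deriving gesture speed from pointer movement and mapping it to
// a parameter of a virtual data representation (here, a hypothetical
// time-animation rate). The mapping constants are illustrative.

let lastX = 0, lastY = 0, lastT = 0;

function setAnimationRate(rate: number): void {
  // Hypothetical hook into the data representation's time axis.
  console.log(`animation rate: ${rate.toFixed(2)}x`);
}

window.addEventListener("pointermove", (e: PointerEvent) => {
  if (lastT !== 0) {
    const dt = e.timeStamp - lastT; // ms since the last sample
    const dx = e.clientX - lastX;
    const dy = e.clientY - lastY;
    const speed = Math.hypot(dx, dy) / Math.max(dt, 1); // px per ms

    // Faster gestures advance the data animation faster (0.5x..4x).
    const rate = Math.min(4, Math.max(0.5, speed * 2));
    setAnimationRate(rate);
  }
  lastX = e.clientX;
  lastY = e.clientY;
  lastT = e.timeStamp;
});
```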

In my research, I am designing interactive, multimedia art installations to explore new concepts in kinesthetic design and information aesthetics [1]. In the installations, participants interact simultaneously with two different computer programs and create dynamic visual patterns and sounds in the surrounding environment. The gestures and physical movements the participants make, as they move the interactive hardware to control the computer programs, create layers of visual patterns called “hyperplanes” that are at right angles to the virtual patterns displayed in the space (Fig. 2). Audio stimuli define additional hyperplanes as sounds penetrate the environment and immerse the viewer with sensory stimuli from different angles and directions. The hyperplanes create a counterpoint of audio, visual, and rhythmic patterns that define geometric grids of intersecting spatiotemporal planes that change as the user alters the variables in the data representations [1].

Fig. 2. In the interactive installations, participants move a mouse on the top of pedestals (shown in the front of this illustration) to animate visual patterns projected onto the wall. The dashed lines on the pedestals represent the kinesthetic patterns the participants create as they move the hardware. These patterns define hyperplanes that augment the sensory experience for the participant. (Copyright 2014 Patricia Search. All rights reserved.)

6 Future Directions

In three-dimensional, multisensory data representations, arrays of sensory stimuli and discursive patterns represent simultaneous and sequential relationships and events. Physical and virtual spaces, interactivity, and individual sensory modalities create a system of perceptual and semantic relational codes that define the data relationships.

Cross-modal perception can enhance and alter the way we interpret information that is represented with different sensory stimuli. It also impacts how we interpret spatial and temporal relationships. Research in cross-modal perception needs to expand into the field of multisensory data design and evaluate how different sensory stimuli, blended spaces, and kinesthetic design impact the interpretation of complex data relationships, including how we perceive the transformation of data relationships over time. The research needs to include studies in the perception of rhythm, which is an important element in data representation. In multisensory data design, layers of rhythms, created by the audio and visual stimuli and kinesthetic interaction, highlight the temporal dynamics in data relationships. Spence, Senkowski, and Röder [63] pointed out that current research in cross-modal perception seems to be shifting from a focus on spatial information processing to the impact of sensory modalities on the temporal processing of information. This new emphasis on the temporal dynamics of information processing will play a significant role in defining new directions for multisensory data design.

As new forms of data representation emerge, it will also be important to evaluate how new technologies and multisensory stimuli redefine information aesthetics. With interactive technologies, kinesthetic design and cross-modal perception will continue to add new dimensions to information aesthetics and expand the definition of “aesthetically pleasing” designs. These changes will, in turn, lead to even more innovative ways of representing data because we will no longer be constrained by established definitions of aesthetics and information design. We will be able to envision and develop technologies that not only leverage our intuitive abilities to process information through multiple senses, but also create interactive experiences that integrate virtual and physical objects, actions, and sensory stimuli into dynamic information spaces for data analysis.