Publicly Available. Published by Oldenbourg Wissenschaftsverlag, November 14, 2018.

New Impressions in Interaction Design: A Task Taxonomy for Elastic Displays

Dietrich Kammer

Dietrich Kammer is a postdoctoral researcher at Technische Universität Dresden, affiliated with the Chair of Media Design. His research focuses on the formalization of gestural input, especially with regard to multitouch technology. Further areas of research are semiotics in HCI, computer graphics, and information visualization.

Mathias Müller

Mathias Müller is a researcher at Technische Universität Dresden at the Chair of Media Design. His research focuses on virtual, mixed, and augmented reality, data visualization, and HCI. He has many years of experience in the research of interaction and visualization technologies, including elastic displays and head-mounted displays.

Jan Wojdziak

Jan Wojdziak is a postdoctoral researcher at Technische Universität Dresden in Germany as well as co-founder and chief of operations (COO) at GTV – Gesellschaft für Technische Visualistik mbH. His research interests include applied visualistics and interaction design in the field of three-dimensional computer graphics.

Ingmar S. Franke

Ingmar S. Franke is a graduate in Architecture of the University of Applied Sciences Magdeburg as well as in Computational Visualistics of the Institute for Simulation and Computer Graphics at Otto-von-Guericke University. He later worked as a research assistant at the Fraunhofer-Gesellschaft, Institute for Factory Planning and Factory Automation. He teaches at the Chair of Media Design at the Department of Computer Science, Technische Universität Dresden, where he also obtained his doctorate. His research interests are Gestaltung and user interfaces. He is co-founder and Managing Director of the company Technische Visualistik.

From the journal i-com

Abstract

Novel shape-changing interfaces promise to provide a rich haptic experience for human-computer interaction. As a specific instance of shape-changing interfaces, Elastic Displays provide large interaction surfaces that can be temporarily deformed using force-touch. The unique property of these displays is that they automatically return to their initial flat state. Recently, several review and position papers have stimulated a discussion towards consolidating the knowledge about shape-changing interfaces. The knowledge about Elastic Displays is similarly scattered across multiple publications from recent years. This paper contributes a task taxonomy based on productive uses of Elastic Displays found in the literature, on the web, and in our interaction lab. This taxonomy emphasizes tasks, but also encompasses general aspects regarding content types, visualization technology, and interaction styles. All aspects of the taxonomy are illustrated using case studies from the literature.

1 Introduction

Although human hands are universal tools, current human-computer interaction (HCI) does not address their power to its full extent. An important step towards leveraging the full potential of hands in HCI is the increased research in the domains of tangible interaction [35] and organic user interfaces [34]. The direct manipulation paradigm and interaction metaphors found in everyday life explain the success of these approaches [2], [11]. The goal is to provide rich sensory or force feedback and to rely less on visual perception alone, so that users can perceive the impact of their interaction in an adequate, haptic way. For instance, shape-changing interfaces afford another spatial dimension by deforming the interaction surface (cp. [33]). This technology promises both a fascinating and a sophisticated user experience. However, this additional dimension introduces additional challenges for interaction with data »in depth« [6]. While current systems only offer »surface interactions«, future immersive data spaces will allow users to grasp into them and shape them according to their needs. A recent trend in the literature on shape-changing interfaces is to consolidate knowledge and provide thorough overviews of open research questions [1], [33]. This paper focuses on a specific instance of shape-changing interfaces: Elastic Displays. These displays allow temporary deformations while returning to their initial flat state without the need for intricate mechanical setups. Moreover, they can offer a large visualization and interaction area in order to display the vast amounts of data contained in modern information visualizations. We contribute a task taxonomy in order to give designers and developers insights into promising future applications. Our task taxonomy consists of a task level, an interaction level, a technology level, and a content level. A review of existing prototypes from the literature shows how these levels are combined.

2 Related Work

Rasmussen et al. [33] describe shape-changing interfaces as interfaces that use physical change as input or output, encompassing organic user interfaces [34] and tangible user interfaces [35]. The available research concerning shape-changing interfaces usually only contains descriptions of the prototypes and offers few coherent implications for interaction design. In addition, researchers rarely address the perception and usability of such systems. Elastic Displays are a specific manifestation of shape-changing interfaces: they return to their initial flat state but are still passive in the sense that this behavior is not programmable. According to the taxonomy of Rasmussen et al. [33], Elastic Displays change shape while maintaining topology, only changing their form and not their orientation, volume, texture, viscosity, or spatiality. Both input and output are combined in a direct interaction. In order to consolidate knowledge about interaction design for Elastic Displays, Troiano et al. propose interaction models and gestures [27] and Gründer et al. address a preliminary design space [9]. However, a holistic and practical taxonomic approach is still missing.

Recently, Alexander et al. have contributed several grand challenges for shape-changing interfaces in general, which also relate to Elastic Displays [1]. In this contribution, we focus on theory building as well as application and content design for Elastic Displays by proposing our task taxonomy and reviewing existing applications. This is a first step towards answering more of these grand challenges in the future, such as those concerning user behavior or design. Sturdee and Alexander have recently contributed a broad attempt at classifying shape-changing interfaces, including liquid and hybrid prototypes [32]. In order to avoid confusion between the technologies that surround Elastic Displays, we propose a more coarse-grained classification of only three different classes with their associated deformation property: displays with persistent deformation, actuated displays with active deformation, and Elastic Displays with temporary deformation (cp. [9]). This classification provides a different view in comparison to the categorization of Organic User Interfaces into deformable, shaped, and kinetic displays by Vertegaal and Poupyrev [34]. Instead of the deformation properties, they focus on how the user perceives and interacts with the display and its shape.

2.1 Persistent Deformation

The first category comprises systems that typically allow deformation via sand, gels, or modelling clay and that maintain the shape change. This includes flexible displays as well as systems projecting information onto the surface. Transparent gels or modelling clay can also augment conventional displays, so that the visualized data remains visible through the gel. Examples of such systems are Xpaaand [15], FoldMe [14], PhotoElastic Touch [23], and Softness Control [24]. While these displays can be used in various scenarios, there are no boundaries that restrict the interaction in a meaningful way. Since infinitely many ways of interacting are conceivable, it is hard to define an intuitive and general interaction style and to keep the user from unintended manipulations. Moreover, the haptic sensation is limited to the properties of the substance, which stay unchanged during interaction. For instance, there is no active feedback from gels relating to the actions performed by the user.

2.2 Actuated Deformation

Secondly, actuated displays exist in great variety in research. They range from commercial Braille displays for visually impaired people [10] to research prototypes such as inFORM [5], Relief [8], Lumen [20], Actuated TUI [21], and TableHop [22]. One of the main issues with actuated displays is their mechanical complexity, which is reflected in high realization costs. Hence, this approach is appealing but still far from wide adoption.

2.3 Temporary Deformation

Finally, Elastic Displays are situated between the two former groups. The elastic surface introduces a number of constraints such as stretchability and tenseness. Compared to actuated displays, Elastic Displays can be realized with less effort. In the following, we focus on productive applications found in the literature that use medium to large-scale systems. Hence, we omit smaller screens such as MudPad [12] or GelForce [28]. These displays are mostly built with gels that provide less responsiveness compared to the systems that we focus on in our review. There are also hybrid approaches that combine actuated and temporary deformation, such as Obake [4], TouchMover [25], and the Hemispherical Display [26], which we do not consider in this paper.

Figure 1: Zoom interaction illustrated by network graphs on FlexiWall [18] (left) and an example of Planar 2D data illustrated by a point cloud [31] (right).

Cassinelli and Ishikawa introduce a movie viewer with their Khronos Projector [3] that allows manipulation of the temporal dimension by deforming the surface. With the FlexiWall system [18], different applications have been introduced [30]: a map viewer showing different semantic layers on geographical maps, a painting explorer that allows analyzing the painting process through different radiological scans made of an art piece, and a photo browser that allows the local application of different image effects. Moreover, FlexiWall shows two approaches to investigate big data clustering algorithms using either layers or a semantic zoom [13]. The DepthTouch system [19] was used to realize a product browser for searching products by similarity [31]. The Deformable Workspace [29] exhibits a 3D working environment, similar to the impress installation [7] and the eTable demonstration [16]. ElaScreen [17] demonstrates three distinct applications: a time domain viewer for graphs, a 3D scene navigation, and a viewer for force-directed graphs. For specific details, we refer to the cited research. In the next section, we describe our task taxonomy that systematically addresses the main concerns for designers and developers using Elastic Displays, which we illustrate using the applications described above.

Figure 2: Task taxonomy including different levels: task, interaction, technology, and data (left) and review of existing applications categorized using the task taxonomy (right).

3 Task Taxonomy

This section describes our task taxonomy, which is inspired by Shneiderman’s Task by Data Type Taxonomy for Information Visualization [36]. Our task taxonomy is based on practical experience with existing prototypes and reviews of the literature. Although the focus of the taxonomy is on user tasks that can be achieved with Elastic Displays, a large part is concerned with the fundamental choices regarding the displayed content, the technology for presenting this content, and finally the interaction styles used to achieve the tasks (see Figure 2, left). Hence, our task taxonomy consists of four different levels. The vertical stacking indicates interdependencies for suitable combinations of the content, technology, and interaction levels. Tasks are independent of the choices made on these lower levels. The horizontal axis of the diagram indicates increasing complexity within the task, interaction, technology, and content levels. For quick reference, we summarize how the reviewed applications relate to the taxonomy in Figure 2 on the right.

3.1 Content

Elastic Displays are appropriate for different kinds of content. This level is most fundamental to address when evaluating whether an Elastic Display is suitable for specific data. We follow the data taxonomy that we first introduced in [6].

3.1.1 Planar 2D

The Planar 2D category encompasses two-dimensional data structures with different levels of detail or two-dimensional structures that are dynamically rearranged using different parameters. Typical use cases are graphs such as ElaScreen’s Graph Visualization [17] or zoomable data like FlexiWall’s Data Exploration [18] and DepthTouch’s Product Browser [31] (see Figure 1).

3.1.2 Volumetric 2.5D

Figure 3: Volumetric 2.5D content illustrated by image variations that form a semantic space (left) and an example for layer interaction with stacked thematic maps in FlexiWall Map Viewer [30] (right).

With this category we refer to 2D images (slices) that come in different variations so that they can be stacked or layered (cp. [30], Figure 3), forming a semantic space regarding a specific domain, usually time or semantic layers. A concrete use case in the time domain is the haptic exploration of paintings, revealing the evolutionary history of the painting process (FlexiWall Painting Explorer [30]). Hence, different stages of work and drafts can be explored and compared. Similarly, ElaScreen’s Time Domain application [17] is used to display the development of graph data over time using specific parameters. Moreover, the movie viewer using the Khronos Projector helps to understand structures in movies concerning temporal and spatial changes between scenes [3]. An example for semantic layers is the FlexiWall Map Viewer [30], which is used to explore political or historical maps including satellite or traffic data (see Figure 3). In the FlexiWall Image Effects application [30], each slice contains a different manifestation of an image effect such as position of the focal plane, exposure, or recording technique (e. g. macro, infra-red, or x-ray). The Big Data Exploration approaches presented with FlexiWall [13] use different results of cluster algorithms for the layered images.

To our understanding, this category also includes content in the form of slices of three-dimensional structures, e. g. MRI, CT, or range images. Natural zooming via the depth interaction is an intuitive way to interact with such volumetric data. ElaScreen’s 3D Scene Navigation [17] is an example of this type of volumetric data.

3.1.3 Spatial 3D

Finally, the last category comprises three-dimensional scenes that are not structured in layers or slices, i. e. models of 3D space. By deforming the surface, true spatial data can be explored and manipulated continuously. The Deformable Workspace is a prime example for this content [29]. Similarly, impress [7] and eTable [16] use spatial content.

3.2 Technology

On this level, we distinguish five different technological concepts for making the content types described above accessible on an Elastic Display. As our review shows, usually only a single technology is used, except for eTable’s 3D-viewer that combines pixel-based blending with multi-touch.

3.2.1 Image Sequences

The most basic concept is using image sequences whose images are displayed one after another according to different depth values. Only the depth value of the global maximum is computed, ignoring the lateral position. Using this approach, a large number of images can be used, and a smooth, stable interaction is achieved. The disadvantages are a very limited user interface and a low expressiveness of the interaction. Both the Time Domain Viewer from ElaScreen [17] and the FlexiWall Big Data Layers application [13] use this basic technology.
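The core of this concept can be sketched in a few lines. The following is a minimal illustration under our own assumptions (the function name, the normalized depth range, and the linear mapping are ours, not taken from the cited systems): the image to display is selected by mapping the global maximum of the depth image onto an index into the sequence.

```python
import numpy as np

def select_image(depth_map, num_images, max_depth=1.0):
    """Pick which image of the sequence to display.

    Illustrative sketch: depth_map holds per-pixel displacement of the
    elastic surface (0 = flat, max_depth = deepest push). Only the global
    maximum matters; the lateral position of the push is ignored.
    """
    deepest = float(depth_map.max())
    # Map the deepest displacement linearly onto the sequence indices.
    index = int(round(deepest / max_depth * (num_images - 1)))
    return min(max(index, 0), num_images - 1)

# A flat surface shows the first image; a full push shows the last one.
flat = np.zeros((4, 4))
pushed = np.zeros((4, 4))
pushed[2, 2] = 1.0
```

Because only a single scalar is extracted from the whole depth image, the interaction is very stable, which matches the trade-off described above.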

3.2.2 Pixel-Based Blending

As shown in previous work [18], this approach blends several images based on the depth image. It is suitable for rapid prototyping using either planar or volumetric data. Its disadvantages are the limited number of images and the fact that real user interface elements cannot be used. However, the effect is appealing, and it is used most frequently in the reviewed applications: Khronos Movie Viewer, impress 3D-modelling, ElaScreen 3D Scene Navigation, eTable 3D-viewer as well as FlexiWall Image Effects, Map Viewer, and Painting Explorer.
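Per-pixel blending can be illustrated as follows. This is our own simplified sketch, not the implementation of any cited system: each pixel linearly interpolates between the two images of the stack that its local depth value falls between.

```python
import numpy as np

def blend_layers(layers, depth_map, max_depth=1.0):
    """Blend an image stack per pixel, driven by the depth image.

    Illustrative sketch: layers has shape (n, h, w), depth_map has shape
    (h, w) with values in [0, max_depth]. Each pixel interpolates linearly
    between the two adjacent layers its local depth falls between.
    """
    n = layers.shape[0]
    pos = np.clip(depth_map / max_depth, 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)   # lower layer index per pixel
    hi = np.minimum(lo + 1, n - 1)   # upper layer index per pixel
    frac = pos - lo                  # blend weight per pixel
    rows, cols = np.indices(depth_map.shape)
    return (1.0 - frac) * layers[lo, rows, cols] + frac * layers[hi, rows, cols]
```

In contrast to the image-sequence approach, the lateral position of the deformation matters here, which is what produces the locally revealed layers seen in applications such as the FlexiWall Map Viewer.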

3.2.3 Vector Field

As exhibited by the DepthTouch system [19], a force simulation is achieved based on per-pixel derivatives. This allows truly natural and flexible interaction metaphors. However, this approach suffers from an incomplete depth image analysis and a user interface that is difficult to adapt to requirements that exceed the physical metaphors. Additionally, manipulation of content is achieved mostly by indirect interaction. The only productive application from our review using this technology is ElaScreen’s Graph Visualization.
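The idea of deriving forces from per-pixel derivatives can be sketched as follows. This is a simplified illustration under our own assumptions, not DepthTouch’s actual implementation: the gradient of the depth image yields a vector field that pulls simulated objects toward deeper regions of the deformed surface.

```python
import numpy as np

def force_field(depth_map):
    """Derive a 2D force field from per-pixel derivatives of the depth image.

    Illustrative sketch: forces point along the gradient of the depth image,
    so simulated objects "roll" toward deeper regions of the deformation.
    """
    dy, dx = np.gradient(depth_map.astype(float))
    return dx, dy  # force components along x and y

# On a ramp that gets deeper toward the right, all forces point right.
ramp = np.tile(np.arange(5.0), (5, 1))
fx, fy = force_field(ramp)
```

A particle simulation would then integrate these forces per object each frame, which explains why content manipulation with this technology is mostly indirect: the user shapes the field, not the objects themselves.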

3.2.4 Single-Touch 3D

The basic interaction with a touch display – a single touch – can easily be translated to Elastic Displays: the finger specifies a point on the surface (touch display) or a point in space (Elastic Display), respectively. Single-touch interaction is achieved by evaluating the global extremum of the surface. This approach allows more sophisticated user interfaces and even mouse emulation to make traditional user interfaces available. However, it only allows a single touch, and hence the expressiveness of the interaction is low. The Big Data Zoom application using FlexiWall is the only application in our review relying solely on single touch.
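Evaluating the global extremum can be sketched like this (an illustrative reading of the approach; the threshold value and the function name are our own assumptions): the deepest point of the surface becomes the touch position, with its depth as a third coordinate.

```python
import numpy as np

def single_touch(depth_map, threshold=0.1):
    """Detect a single 'touch' as the global extremum of the deformed surface.

    Illustrative sketch: returns (row, col, depth) of the deepest point, or
    None if no deformation exceeds the threshold (i.e. the surface is flat).
    """
    idx = np.unravel_index(np.argmax(depth_map), depth_map.shape)
    depth = float(depth_map[idx])
    if depth < threshold:
        return None
    return idx[0], idx[1], depth
```

Mouse emulation follows directly from this: the (row, col) position drives the cursor, while crossing the depth threshold can be interpreted as a click.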

3.2.5 Multi-Touch 3D

Multi-touch interaction on an Elastic Display is achieved by computing the local extrema of the depth image and interpreting them as »multi-touch with an additional dimension« (cp. [6]). Hence, existing multi-touch gestures can be extended by evaluating the depth position of the interaction (cp. [6], [27]). As a result, full-fledged user interfaces are achieved. For instance, two fingers define a line or distance, either in 2D on a multi-touch screen or in 3D on an Elastic Display. With three fingers, the user describes an area on the surface of a multi-touch screen or a plane in the case of an Elastic Display. With four or more fingers (or a moving touch), the differences between both technologies become more obvious: in the case of a static surface, users describe an (irregular) line or area, whereas on an Elastic Display a complex relief is created. However, this approach requires a complex calibration procedure, and due to the involved depth sensors it is commonly not very stable, and the positional accuracy is rather low. Current technology requires smoothing procedures, which in turn introduce considerable latency. Nevertheless, several productive applications rely on multi-touch on the Elastic Display, such as Deformable Workspace, eTable 3D-viewer, DepthTouch Product Browser, and FlexiWall Data Exploration.
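A naive sketch of the local-extrema computation (our own illustration; real systems additionally smooth and calibrate the sensor data, as noted above): every pixel that exceeds a threshold and dominates its 8-neighborhood counts as one touch, carrying its depth as the additional dimension.

```python
import numpy as np

def multi_touch_points(depth_map, threshold=0.1):
    """Detect multiple simultaneous touches as local maxima of the depth image.

    Illustrative sketch: a pixel counts as a touch if it exceeds the
    threshold and is at least as deep as all of its 8 neighbours.
    (Plateaus of equal depth would yield several adjacent touches; real
    systems would merge such candidates.)
    """
    h, w = depth_map.shape
    # Pad with -inf so border pixels compare only against real neighbours.
    padded = np.pad(depth_map, 1, mode="constant", constant_values=-np.inf)
    touches = []
    for r in range(h):
        for c in range(w):
            v = depth_map[r, c]
            if v < threshold:
                continue
            window = padded[r:r + 3, c:c + 3]  # 3x3 neighbourhood around (r, c)
            if v >= window.max():
                touches.append((r, c, float(v)))
    return touches
```

Each resulting tuple is a 3D touch point, so a conventional two-finger pinch gesture, for example, could additionally be weighted by how deep each finger pushes.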

3.3 Interaction

On the interaction level, we distinguish different styles of interaction that are closely related to the technological approaches described previously. Different interaction styles can be combined as shown by FlexiWall Data Exploration and DepthTouch Product Browser.

3.3.1 Layer

With this interaction style, insights about different structural levels of an information space and relationships between them can be gained. This is primarily based on planar content that is organized in image sequences. Due to the simplicity of this approach, most of the reviewed applications use layer interaction: Khronos Movie Viewer, ElaScreen 3D scene navigation, ElaScreen Time Domain, eTable 3D-viewer as well as FlexiWall Big Data Layers, Image Effects, Painting Explorer, and Map Viewer.

3.3.2 Zoom

With this interaction style, overview and detail techniques can be realized. This includes geometric zoom using gigapixel images and rich semantic zooms for abstract data, e. g. with magic lenses. FlexiWall Data Exploration and Big Data Zoom as well as the DepthTouch Product Browser use zoom as their primary interaction style.
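Mapping push depth to discrete semantic zoom levels might look like this (a hypothetical sketch; the level values and the function name are our own, not taken from the cited applications): the flat surface shows the overview, and a deeper push selects a closer level of detail.

```python
def zoom_level(depth, max_depth=1.0, levels=(1.0, 2.0, 4.0, 8.0)):
    """Map push depth to a discrete (semantic) zoom factor.

    Illustrative sketch: a deeper push selects a closer level of detail;
    the flat surface (depth 0) shows the overview at factor 1.0.
    """
    t = min(max(depth / max_depth, 0.0), 1.0)      # normalize to [0, 1]
    index = min(int(t * len(levels)), len(levels) - 1)
    return levels[index]
```

A geometric zoom would instead interpolate the factor continuously; the discrete variant shown here fits semantic zooms, where each level switches to a different representation of the data.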

3.3.3 Physics-Based

The exploration of physical phenomena such as gravity or magnetism is realized with particle simulations and yields an intuitive physics-based interaction. Physics-based metaphors such as attraction and repulsion forces, gravity, movement, collision, or mass of objects can be used. Examples of physics-based interaction are ElaScreen’s Graph Visualization, FlexiWall Data Exploration, and the DepthTouch Product Browser.

3.3.4 Spatial

Three-dimensional data can be cut with intersection planes, investigated by using perspective distortions, or sculpted according to the display deformation. True spatial interaction can be achieved using impress 3D modeling and the Deformable Workspace.

3.3.5 Hybrid

Finally, hybrid approaches combine more than one interaction style, possibly also different technologies. For example, the combination of zoom and physics-based interaction allows using forces to filter and semantic zoom to visualize details. Layers and physics-based interaction can exploit force-touch for navigation and layers to control animations on specific items.

3.4 Task Types

The most important level in the taxonomy is the actual task level. Our list consists of common tasks in HCI that have been realized successfully on Elastic Displays. For each of the task types, we mention general application domains.

3.4.1 Discover Relationships

The most basic task that can be achieved in an application using an Elastic Display is to discover relationships in the available data. Intricate structures can be visualized, and relationships can be explored. Data visualizations all too often contain innumerable items with manifold dimensions and relationships. The resulting scatter plots or point clouds can be displayed as spatial networks using Elastic Displays (see Figure 1). In museums or during exhibitions, Elastic Displays can be used to convey relationships very effectively in an appealing way. Our review shows that nine out of the 14 applications support this task.

3.4.2 Understand Structures

Closely related to discovering relationships is the understanding of more complex structures in a data set. Using the haptic user interface of Elastic Displays for education is another usage scenario. Medicine and geology are suitable knowledge domains where interaction with volumetric data is particularly interesting. By creating cutting planes, the location of objects, e. g. raw material deposits or abnormal tissue, can be identified. The advantage of using Elastic Displays is the ability to experience spatial locations and distances more naturally. In general, the handling of volumetric data becomes more intelligible because the visual representation is supported by the haptic depth interaction of the Elastic Display. Applications in other disciplines are also conceivable. Understanding structures is supported by seven of the 14 reviewed applications.

3.4.3 Search Items

With the different types of content, search tasks also become relevant. Both exploratory searches with vague search goals and concrete searches with clear properties in mind are possible [37]. Deforming the surface is beneficial for filtering or selecting subsets of the data, or for gaining an individual perspective on the data, e. g. by defining a cutting plane inside a 3D scatterplot. The product browser using the DepthTouch system [31] is the prime example showing how search tasks can be realized with an Elastic Display. Four of the 14 reviewed applications are concerned with search tasks.

3.4.4 Manipulate Data

Due to the ephemeral nature of the haptic interaction with Elastic Displays, the actual permanent manipulation of data items is a demanding task. To this end, actual multi-touch needs to be implemented in a stable way. However, this task is essential in domains such as product design. In our review, we determined that only the Deformable Workspace affords true manipulation of data.

3.4.5 Make Decisions

Data visualization on Elastic Displays can facilitate decision-making processes. In urban development and architectural visualization, numerous maps and views exist that depict aspects relevant for construction planning (plans, ground plots, profiles, supply units, waste management, escape routes, energy plans, wiring diagrams, etc.). Relating this information to maps often results in massive visual clutter. Using semi-transparent plans is a common solution to this problem. Elastic Displays can be used to control the transparency of such information layers in the desired areas of the map. Hence, conflicts between the different plans (e. g. building and civil engineering drawings) can be identified without losing the overall view of the plan. Another promising application is informing the public about construction projects (civic participation), conveying the relevant information behind specific architectural decisions that would otherwise be difficult to explain. Eight of our reviewed applications can support decision-making.

3.4.6 Collaborative Work

Large Elastic Displays are suitable for teams working on problems. This kind of collaborative work can encompass several of the previous tasks, which are carried out individually. For instance, in mechanical engineering, the visualization of schemata such as component diagrams, circuit diagrams, or flow diagrams can explain the setup of complex systems, which is often challenging. In particular, when highly detailed information about the system is necessary, users are often overwhelmed. Elastic Displays can provide zoomable user interfaces that allow a natural adjustment of the level of detail. In the initial state, a clear and well-structured overview of the system and its components is provided. Further details of subsystems can be viewed using the deformation of the surface, where the depth of the interaction determines the level of detail. In this way, system details can be explored without losing the context of the entire system. Since collaboration on specific problems is a very complex task, we only assessed that the FlexiWall Big Data Zoom realizes a collaborative approach to discuss clustering problems.

4 Conclusions and Future Work

The goal of this contribution is the consolidation of the available knowledge on Elastic Displays in order to facilitate the creation of new applications that leverage the full potential of this new display format. To this end, we established a task taxonomy and described the feasibility of Elastic Displays for several application domains. The taxonomy can be extended by adding more general user tasks. Further additions and modifications to Elastic Displays include the use of tangibles, more diverse surface structures, as well as the overall design and form factors.

However, there are several aspects impeding broad use of these new displays. Most prominently, the precision in interaction is severely limited due to tracking issues. Time-critical tasks are hard to realize since the applications need to be fault-tolerant. Both software and hardware need to be consolidated and optimized for productive use. Hence, we will work on a modular software framework that will considerably accelerate the application development for Elastic Displays. Moreover, making the necessary hardware available in suitable construction kits consisting of frame modules and exchangeable cloths is another crucial task. However, we also envision novel hardware developments that will lead to smaller form factors and a broad adoption of the technology.


Acknowledgment

Many colleagues and students have supported our research about Elastic Displays over the last years: Thomas Gründer, Joshua Peschke, Fabian Göbel, Mandy Keck, Anja Knöfel, Natalie Hube, Erik Lier, Oliver Lenz, Alexander Dick, Albert Steinmetz, Duc Nguyen, Robert Richter, and Rainer Groh.

References

[1] Alexander, J., Roudaut, A., Steimle, J., Hornbæk, K., Alonso, M. B., Follmer, S. and Merritt, T. 2018. Grand Challenges in Shape-Changing Interface Research. In Proc. CHI '18. ACM, New York, NY, USA, Paper 299, 14 pages. DOI: 10.1145/3173574.3173873.

[2] Agarawala, A. and Balakrishnan, R. 2006. Keepin' It Real: Pushing the Desktop Metaphor with Physics, Piles and the Pen. In Proc. CHI '06. ACM, New York, NY, USA, 1283–1292. DOI: 10.1145/1124772.1124965.

[3] Cassinelli, A. and Ishikawa, M. 2005. Khronos Projector. In ACM SIGGRAPH 2005 Emerging Technologies. ACM, New York, NY, USA. DOI: 10.1145/1187297.1187308.

[4] Dand, D. and Hemsley, R. 2013. Obake: Interactions on a 2.5D Elastic Display. In Proc. UIST '13 Adjunct. ACM, New York, NY, USA, 109–110. DOI: 10.1145/2508468.2514734.

[5] Follmer, S., Leithinger, D., Olwal, A., Hogge, A. and Ishii, H. 2013. inFORM: Dynamic Physical Affordances and Constraints Through Shape and Object Actuation. In Proc. UIST '13. ACM, New York, NY, USA, 417–426. DOI: 10.1145/2501988.2502032.

[6] Franke, I. S., Müller, M., Gründer, T. and Groh, R. 2014. FlexiWall: Interaction In-Between 2D and 3D Interfaces. In Proc. HCII 2014. Springer, Berlin. DOI: 10.1007/978-3-319-07857-1_73.

[7] Hilsing, S. 2010. Impress – A Flexible Display, Final Documentation. http://www.silkehilsing.de/impress/blog/?cat=5.

[8] Leithinger, D. and Ishii, H. 2010. Relief: A Scalable Actuated Shape Display. In Proc. TEI '10. ACM, New York, NY, USA, p. 221. DOI: 10.1145/1709886.1709928.

[9] Gründer, T., Kammer, D., Brade, M. and Groh, R. 2013. Towards a Design Space for Elastic Displays. In CHI 2013 Workshop: Displays Take New Shape: An Agenda for Future Interactive Surfaces. Paris, France.

[10] Humanware. Braille Displays. http://www.humanware.com/en-usa/products/blindness/braille_displays.

[11] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T. and Zigelbaum, J. 2008. Reality-Based Interaction: A Framework for Post-WIMP Interfaces. In Proc. CHI '08. ACM, New York, NY, USA, 201–210. DOI: 10.1145/1357054.1357089.

[12] Jansen, Y., Karrer, T. and Borchers, J. 2011. MudPad: Tactile Feedback for Touch Surfaces. In CHI '11 Extended Abstracts. ACM, New York, NY, USA, 323–328. DOI: 10.1145/1979742.1979702.

[13] Kammer, D., Keck, M., Müller, M., Gründer, T. and Groh, R. 2017. Exploring Big Data Landscapes with Elastic Displays. In Burghardt, M., Wimmer, R., Wolff, C. and Womser-Hacker, C. (eds.), Mensch und Computer 2017 – Workshopband. Gesellschaft für Informatik e.V., Regensburg. DOI: 10.1145/3206505.3206556.

[14] Khalilbeigi, M., Lissermann, R., Kleine, W. and Steimle, J. 2012. FoldMe: Interacting with Double-Sided Foldable Displays. In Proc. TEI '12. ACM, New York, NY, USA, 33–40. DOI: 10.1145/2148131.2148142.

[15] Khalilbeigi, M., Lissermann, R., Mühlhäuser, M. and Steimle, J. 2011. Xpaaand: Interaction Techniques for Rollable Displays. In Proc. CHI '11. ACM, New York, NY, USA, 2729–2732. DOI: 10.1145/1978942.1979344.

[16] Kingsley, P., Rossiter, J. and Subramanian, S. 2012. eTable: A Haptic Elastic Table for 3D Multi-touch Interactions. University of Bristol. https://youtu.be/v2A4bLSiX6A.

[17] Yun, K., Song, J., Youn, K., Cho, S. and Bang, H. 2013. ElaScreen: Exploring Multi-Dimensional Data Using Elastic Screen. In CHI '13 Extended Abstracts. ACM, New York, NY, USA, 1311–1316. DOI: 10.1145/2468356.2468590.

[18] Müller, M., Gründer, T. and Groh, R. 2015. Data Exploration on Elastic Displays Using Physical Metaphors. In Proc. xCoAx 2015.

[19] Peschke, J., Göbel, F., Gründer, T., Keck, M., Kammer, D. and Groh, R. 2012. DepthTouch: An Elastic Surface for Tangible Computing. In Proc. AVI '12. ACM, New York, NY, USA, 770–771. DOI: 10.1145/2254556.2254706.

[20] Poupyrev, I., Nashida, T., Maruyama, S., Rekimoto, J. and Yamaji, Y. 2004. Lumen: Interactive Visual and Shape Display for Calm Computing. In ACM SIGGRAPH 2004 Emerging Technologies. ACM, New York, NY, USA. DOI: 10.1145/1186155.1186173.

[21] Riedenklau, E., Hermann, T. and Ritter, H. 2012. An Integrated Multi-Modal Actuated Tangible User Interface for Distributed Collaborative Planning. In Proc. TEI '12. ACM, New York, NY, USA, 169–174. DOI: 10.1145/2148131.2148167.

[22] Sahoo, D. R., Hornbæk, K. and Subramanian, S. 2016. TableHop: An Actuated Fabric Display Using Transparent Electrodes. In Proc. CHI '16. ACM, New York, NY, USA, 3767–3780. DOI: 10.1145/2858036.2858544.

[23] Sato, T., Mamiya, H., Koike, H. and Fukuchi, K. 2009. PhotoelasticTouch. In Proc. UIST '09. ACM, New York, NY, USA, 43–50. DOI: 10.1145/1622176.1622185.

[24] Sato, T., Takahashi, N., Matoba, Y. and Koike, H. 2012. Interactive Surface That Have Dynamic Softness Control. In Proc. AVI '12. ACM, New York, NY, USA, 796–797. DOI: 10.1145/2254556.2254719.

[25] Sinclair, M., Pahud, M. and Benko, H. 2014. TouchMover 2.0 – 3D Touchscreen with Force Feedback and Haptic Texture. In Proc. HAPTICS 2014. IEEE, 1–6. DOI: 10.1109/HAPTICS.2014.6775425.

[26] Stevenson, A., Perez, C. and Vertegaal, R. 2010. An Inflatable Hemispherical Multi-Touch Display. In Proc. TEI '11. ACM, New York, NY, USA, 289–292. DOI: 10.1145/1935701.1935766.

[27] Troiano, G. M., Pedersen, E. W. and Hornbæk, K. 2014. User-Defined Gestures for Elastic, Deformable Displays. In Proc. AVI '14. ACM, New York, NY, USA, 1–8. DOI: 10.1145/2598153.2598184.

[28] Vlack, K., Mizota, T., Kawakami, N., Kamiyama, K., Kajimoto, H. and Tachi, S. 2005. GelForce: A Vision-Based Traction Field Computer Interface. In CHI '05 Extended Abstracts. ACM, New York, NY, USA, 1154–1155. DOI: 10.1145/1056808.1056859.

[29] Watanabe, Y., Cassinelli, A., Komuro, T. and Ishikawa, M. 2008. The Deformable Workspace: A Membrane Between Real and Virtual Space. In Proc. 3rd IEEE International Workshop on Horizontal Interactive Human Computer Systems (TABLETOP 2008). DOI: 10.1109/TABLETOP.2008.4660197.

[30] Müller, M., Knöfel, A., Gründer, T., Franke, I. S. and Groh, R. 2014. FlexiWall: Exploring Layered Data with Elastic Displays. In Proc. ITS 2014, November 16–19, Germany. DOI: 10.1145/2669485.2669529.

[31] Müller, M., Keck, M., Gründer, T., Hube, N. and Groh, R. 2017. A Zoomable Product Browser for Elastic Displays. In Proc. xCoAx 2017: 5th Conference on Computation, Communication, Aesthetics & X, 1–10.

[32] Sturdee, M. and Alexander, J. 2018. Analysis and Classification of Shape-Changing Interfaces for Design and Application-Based Research. ACM Comput. Surv. 51, 1, Article 2 (January 2018), 32 pages. DOI: 10.1145/3143559.

[33] Rasmussen, M. K., Pedersen, E. W., Petersen, M. G. and Hornbæk, K. 2012. Shape-Changing Interfaces: A Review of the Design Space and Open Research Questions. In Proc. CHI '12. ACM, New York, NY, USA, 735–744. DOI: 10.1145/2207676.2207781.

[34] Vertegaal, R. and Poupyrev, I. 2008. Introduction – Organic User Interfaces. Commun. ACM 51, 6 (June 2008), 26–30. DOI: 10.1145/1349026.1349033.

[35] Shaer, O. and Hornecker, E. 2010. Tangible User Interfaces: Past, Present, and Future Directions. Found. Trends Hum.-Comput. Interact. 3, 1–2 (January 2010), 1–137. DOI: 10.1561/1100000026.

[36] Shneiderman, B. 1996. The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In Proc. IEEE Symposium on Visual Languages. IEEE, 336–343.

[37] Keck, M., Herrmann, M., Both, A., Gaertner, R. and Groh, R. 2013. Improving Motive-Based Search: Utilization of Vague Feelings and Ideas in the Process of Information Seeking. In Proc. First International Conference on Distributed, Ambient, and Pervasive Interactions. Springer, New York, NY, USA. DOI: 10.1007/978-3-642-39351-8_48.

Published Online: 2018-11-14
Published in Print: 2018-12-19

© 2018 Walter de Gruyter GmbH, Berlin/Boston
