DOI: 10.1145/2971485.2971545
research-article

Can We Interpret the Depth?: Evaluating Variation in Stereoscopic Depth for Encoding Aspects of Non-Spatial Data

Published: 23 October 2016

ABSTRACT

Rendering a 2D node-link representation on a stereoscopic platform lends itself naturally to focus+context interaction. Stereoscopic depth can then encode the different levels of detail in compound graphs, in particular to highlight structural relations. We propose an approach that provides a novel interactive operation for expanding or contracting nodes in compound graphs, aligning these nodes in 3D space with minimal occlusion such that child levels are rendered in a plane closer to the viewer than the parent node. Other visual cues can be combined with depth to encode further data aspects, e.g., color for node status or shape for node type. The focus of this paper is a controlled user study with 30 participants that evaluates the approach under different configurations. The study aimed to assess viewers' ability to detect variation in stereoscopic depth, as well as the influence of graph size and transparency on this ability in terms of accuracy and efficiency. We were also interested in measuring participants' acceptance of using stereoscopic depth as a cue for compound graphs. The study results show that stereoscopic depth can encode data aspects of compound graphs under certain circumstances.
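
As a reading aid (not part of the paper), the expansion idea can be sketched in a few lines of Python: each expanded level of the compound graph is assigned a depth plane one step closer to the viewer than its parent. The Node class and assign_depth_planes function below are hypothetical names chosen for illustration, not the authors' implementation.

    # Minimal sketch, assuming a simple tree model of a compound graph.
    # Larger z means a plane closer to the viewer; an actual stereoscopic
    # renderer would map z to binocular disparity.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str
        expanded: bool = False
        children: List["Node"] = field(default_factory=list)
        z: float = 0.0  # assigned depth plane

    def assign_depth_planes(node: Node, z: float = 0.0, step: float = 1.0) -> None:
        """Place each expanded level one plane nearer the viewer than its parent."""
        node.z = z
        if node.expanded:
            for child in node.children:
                assign_depth_planes(child, z + step, step)

    # Expanding the root reveals its children one plane nearer the viewer.
    root = Node("root", expanded=True, children=[Node("a"), Node("b")])
    assign_depth_planes(root)
    print(root.z, root.children[0].z)  # 0.0 (parent plane), 1.0 (child plane)

The point of the sketch is only the invariant the abstract describes: every expand operation moves the revealed children one depth plane nearer the viewer than their parent.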

References

  1. Alper, B., Höllerer, T., Kuchera-Morin, J., and Forbes, A. Stereoscopic highlighting: 2d graph visualization on stereo displays. IEEE Trans. Vis. Comput. Graph. 17, 12 (2011), 2325--2333. Google ScholarGoogle ScholarDigital LibraryDigital Library
  2. AlTarawneh, R., Humayoun, S. R., and Ebert, A. Expand: A stereoscopic expanding technique for compound graphs. In Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, VRST 2014, Edinburgh, United Kingdom, November 11-13, 2014 (2014), 223--224. Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. AlTarawneh, R., Humayoun, S. R., Schultz, J., Ebert, A., and Liggesmeyer, P. Layman: A visual interactive tool to support failure analysis in embedded systems. In Proceedings of the 2015 European Conference on Software Architecture Workshops, Dubrovnik/Cavtat, Croatia, September 7-11, 2015 (2015), 68:1--68:5. Google ScholarGoogle ScholarDigital LibraryDigital Library
  4. AlTarawneh, R., Schultz, J., and Humayoun, S. R. Clue: An algorithm for expanding clustered graphs. In PacificVis 2014, Yokohama, Japan (2014). Google ScholarGoogle ScholarDigital LibraryDigital Library
  5. Brandes, U., Dwyer, T., and Schreiber, F. Visualizing Related Metabolic Pathways in Two and a Half Dimensions, vol. 2912 of Lecture Notes in Computer Science. Springer Berlin Heidelberg, 2004, 111--122.Google ScholarGoogle Scholar
  6. Broy, N., Alt, F., Schneegass, S., Henze, N., and Schmidt, A. Perceiving layered information on 3d displays using binocular disparity. In Proceedings of the 2Nd ACM International Symposium on Pervasive Displays, PerDis '13, ACM (New York, NY, USA, 2013), 61--66. Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. Bruce, N. D. B., and Tsotsos, J. K. An attentional framework for stereo vision. In Second Canadian Conference on Computer and Robot Vision (CRV) 2005), 9-11 May 2005, Victoria, BC, Canada (2005), 88--95. Google ScholarGoogle ScholarDigital LibraryDigital Library
  8. Cockburn, A., and McKenzie, B. Evaluating the effectiveness of spatial memory in 2d and 3d physical and virtual environments. In CHI '02, ACM (New York, NY, USA, 2002), 203--210. Google ScholarGoogle ScholarDigital LibraryDigital Library
  9. Collins, C., and Carpendale, S. Carpendale s: Vislink: revealing relationships amongst visualizations. IEEE Trans Vis Comput Graph 2007. Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. Deller, M., Ebert, A., Agne, S., and Steffen, D. Guiding attention in information-rich virtual environments. J. Villanueva, Ed., International Association of Science and Technology for Development (IASTED), ACTA Press (9 2008), 310--315.Google ScholarGoogle Scholar
  11. Dix, A., Finlay, J. E., Abowd, G. D., and Beale, R. Human-Computer Interaction (3rd Edition). Prentice-Hall, Inc., Upper Saddle River, NJ, USA, 2003. Google ScholarGoogle ScholarDigital LibraryDigital Library
  12. Eades, P., and Feng, Q.-W. Multilevel Visualization of Clustered Graphs. In Proc. Graph Drawing, GD, no. 1190, Springer-Verlag (Berlin, Germany, JanAug-Feb0~ 1996), 101--112. Google ScholarGoogle ScholarDigital LibraryDigital Library
  13. Grossman, T., and Balakrishnan, R. An evaluation of depth perception on volumetric displays. In AVI '06, ACM (New York, NY, USA, 2006), 193--200. Google ScholarGoogle ScholarDigital LibraryDigital Library
  14. Herman, I., Melancon, G., and Marshall, M. S. Graph visualization and navigation in information visualization: A survey. IEEE Transaction on Visualization and Computer Graphics 6, 1 (2000), 24--43. Google ScholarGoogle ScholarDigital LibraryDigital Library
  15. Holten, D. Hierarchical edge bundles: Visualization of adjacency relations in hierarchical data. IEEE Trans. Vis. Comput. Graph. 12, 5 (2006), 741--748. Google ScholarGoogle ScholarDigital LibraryDigital Library
  16. Laha, B., Sensharma, K., Schiffbauer, J. D., and Bowman, D. A. Effects of immersion on visual analysis of volume data. IEEE Transactions on Visualization and Computer Graphics 18, 4 (April 2012), 597--606. Google ScholarGoogle ScholarDigital LibraryDigital Library
  17. Marr, D. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. Henry Holt and Co., Inc. New York, NY, USA, 1982. Google ScholarGoogle ScholarDigital LibraryDigital Library
  18. Munzner, T. H3: laying out large directed graphs in 3d hyperbolic space. In InfoVis '97, IEEE Computer Society (Washington, DC, USA, 1997), 2--10. Google ScholarGoogle ScholarDigital LibraryDigital Library
  19. Peterson, S. D., Axholt, M., and Ellis, S. R. Technical section: Objective and subjective assessment of stereoscopically separated labels in augmented reality. Comput. Graph. 33 (February 2009), 23--33. Google ScholarGoogle ScholarDigital LibraryDigital Library
  20. Reda, K., Febretti, A., Knoll, A., Aurisano, J., Leigh, J., Johnson, A., Papka, M. E., and Hereld, M. Visualizing large, heterogeneous data in hybrid-reality environments. IEEE Computer Graphics and Applications 33, 4 (2013), 38--48. Google ScholarGoogle ScholarDigital LibraryDigital Library
  21. Robertson, G. G., Mackinlay, J. D., and Card, S. K. Cone Trees: animated 3D visualizations of hierarchical information. In CHI '91, ACM (New York, NY, USA, 1991), 189--194. Google ScholarGoogle ScholarDigital LibraryDigital Library
  22. Tanaka, Y., Okada, Y., and Niijima, K. Treecube: Visualization tool for browsing 3d multimedia data. In IV '03 (2003), 427--432. Google ScholarGoogle ScholarDigital LibraryDigital Library
  23. Van Wijk, J. J., and van de Wetering, H. Cushion treemaps: Visualization of hierarchical information. In InfoVis '99, IEEE Computer Society (Washington, DC, USA, 1999), 73--78. Google ScholarGoogle ScholarDigital LibraryDigital Library
  24. Wang, J., Callet, P. L., Tourancheau, S., Ricordel, V., and Silva, M. P. D. Study of depth bias of observers in free viewing of still stereoscopic synthetic stimuli. Journal of Eye Movement Research 5, 5 (2012).Google ScholarGoogle Scholar
  25. Ware, C. Information Visualization: Perception for Design (Interactive Technologies), 1st ed. Morgan Kaufmann, 2000. Google ScholarGoogle ScholarDigital LibraryDigital Library
  26. Ware, C., and Franck, G. Evaluating stereo and motion cues for visualizing information nets in three dimensions. ACM Trans. Graph. 15 (April 1996), 121--140. Google ScholarGoogle ScholarDigital LibraryDigital Library

Published in
NordiCHI '16: Proceedings of the 9th Nordic Conference on Human-Computer Interaction
October 2016, 1045 pages
ISBN: 978-1-4503-4763-1
DOI: 10.1145/2971485

          Copyright © 2016 ACM


Publisher
Association for Computing Machinery, New York, NY, United States



          Qualifiers

          • research-article
          • Research
          • Refereed limited

          Acceptance Rates

NordiCHI '16 paper acceptance rate: 58 of 231 submissions, 25%. Overall acceptance rate: 379 of 1,572 submissions, 24%.
