DOI: 10.1145/2669557
BELIV '14: Proceedings of the Fifth Workshop on Beyond Time and Errors: Novel Evaluation Methods for Visualization
ACM 2014 Proceedings
Publisher: Association for Computing Machinery, New York, NY, United States
Conference: BELIV '14: Novel Evaluation Methods for Visualization, Paris, France, 10 November 2014
ISBN:
978-1-4503-3209-5
Published:
10 November 2014
Abstract

Visualization has shown its ability to produce powerful tools for analyzing, understanding, and communicating data, making it accessible for many different tasks and purposes. The impact of visualization on everyday work and personal lives is demonstrated by many success stories---such as the increasing prevalence of Tableau, the interactive visualizations produced by the New York Times, or toolkits like VTK/ParaView, to name just a few. A large community of casual and professional users is increasingly consuming and producing both interactive and static visualizations.

While interactive visualizations move from research into practice at an increasing rate, finding appropriate methods to evaluate their utility and usability remains an important challenge. There is a growing need in the community to develop specialized approaches and metrics for evaluation at all stages of the development life cycle that address specific needs in visualization. This need is reflected, for example, in the increasing number of papers on visualization evaluation---not just at BELIV but also in other venues such as the IEEE VIS conferences and EuroVis. The goal of the BELIV workshop is to continue to provide a dedicated event for discussing visualization evaluation and to spread the word on alternative and novel evaluation methods and methodologies in our community.

SESSION: Rethinking evaluation---abstracted task vs. in situ evaluation
research-article
Visualizing dimensionally-reduced data: interviews with analysts and a characterization of task sequences

We characterize five task sequences related to visualizing dimensionally-reduced data, drawing from data collected from interviews with ten data analysts spanning six application domains, and from our understanding of the technique literature. Our ...

research-article
User tasks for evaluation: untangling the terminology throughout visualization design and development

User tasks play a pivotal role in evaluation throughout visualization design and development. However, the term 'task' is used ambiguously within the visualization community. In this position paper, we critically analyze the relevant literature and ...

research-article
Considerations for characterizing domain problems

The nested blocks and guidelines model is a useful template for creating design and evaluation criteria, because it aligns design to need [17]. Characterizing the outermost block of the nested model---the domain problem---is challenging, mainly due to ...

research-article
Navigating reductionism and holism in evaluation

In this position paper, we enumerate two approaches to the evaluation of visualizations which are associated with two approaches to knowledge formation in science: reductionism, which holds that the understanding of complex phenomena is based on the ...

SESSION: Cognitive processes & interaction
research-article
Evaluation methodology for comparing memory and communication of analytic processes in visual analytics

Provenance tools can help capture and represent the history of analytic processes. In addition to supporting analytic performance, provenance tools can be used to support memory of the process and communication of the steps to others. Objective ...

research-article
Just the other side of the coin?: from error- to insight-analysis

To shed more light on data explorers dealing with complex information visualizations in real world scenarios, new methodologies and models are needed which overcome existing explanatory gaps. Therefore, a novel model to analyze users' errors and ...

research-article
Public Access
Evaluating user behavior and strategy during visual exploration

Visualization practitioners have traditionally focused on evaluating the outcome of the visual analytic process, as opposed to studying how that process unfolds. Since user strategy would likely influence the outcome of visual analysis and the nature of ...

research-article
Value-driven evaluation of visualizations

Existing evaluations of data visualizations often employ a series of low-level, detailed questions to be answered or benchmark tasks to be performed. While that methodology can be helpful to determine a visualization's usability, such evaluations ...

SESSION: New techniques I---eye tracking
research-article
Benchmark data for evaluating visualization and analysis techniques for eye tracking for video stimuli

For the analysis of eye movement data, an increasing number of analysis methods have emerged to examine and analyze different aspects of the data. In particular, due to the complex spatio-temporal nature of gaze data for dynamic stimuli, there has been ...

research-article
Evaluating visual analytics with eye tracking

The application of eye tracking for the evaluation of humans' viewing behavior is a common approach in psychological research. So far, the use of this technique for the evaluation of visual analytics and visualization is less prominent. We investigate ...

research-article
Towards analyzing eye tracking data for evaluating interactive visualization systems

Eye tracking can be a suitable evaluation method for determining which regions and objects of a stimulus a human viewer perceived. Analysts can use eye tracking as a complement to other evaluation methods for a more holistic assessment of novel ...

SESSION: New techniques II---crowdsourcing
research-article
Gamification as a paradigm for the evaluation of visual analytics systems

The widespread web-based connectivity of people all over the world has yielded new opportunities to recruit humans for visual analytics evaluation and for an abundance of other tasks. Known as crowdsourcing, humans typically receive monetary incentives ...

research-article
Crowdster: enabling social navigation in web-based visualization using crowdsourced evaluation

Evaluation is typically seen as a validation tool for visualization, but the proliferation of web-based visualization is enabling a radical new approach that uses crowdsourced evaluation for emergent collaboration where one user's efforts facilitate a ...

research-article
Repeated measures design in crowdsourcing-based experiments for visualization

Crowdsourcing platforms, such as Amazon's Mechanical Turk (MTurk), are providing visualization researchers with a new avenue for conducting empirical studies. While such platforms offer several advantages over lab-based studies, they also feature some "...

SESSION: Adopting methods from other fields
research-article
Evaluation of information visualization techniques: analysing user experience with reaction cards

The paper originates from the idea that in the field of information visualization, positive user experience is extremely important if we wish to see users adopt and engage with the novel information visualization tools. Suggesting the use of product ...

research-article
Toward visualization-specific heuristic evaluation

This position paper describes heuristic evaluation as it relates to visualization and visual analytics. We review heuristic evaluation in general, then comment on previous process-based, performance-based, and framework-based efforts to adapt the method ...

research-article
Experiences and challenges with evaluation methods in practice: a case study

The development of information visualizations for companies poses specific challenges, especially for evaluation processes. It is advisable to test these visualizations under realistic circumstances. Because of various constraints, this can be quite ...

research-article
More bang for your research buck: toward recommender systems for visual analytics

We propose a set of common sense steps required to develop a recommender system for visual analytics. Such a system is an essential way to get additional mileage out of costly user studies, which are typically archived post publication. Crucially, we ...

research-article
Sanity check for class-coloring-based evaluation of dimension reduction techniques

Dimension Reduction techniques used to visualize multidimensional data provide a scatterplot spatialization of data similarities. A widespread way to evaluate the quality of such DR techniques is to use labeled data as a ground truth and to call the ...

SESSION: Experience reports
research-article
Oopsy-daisy: failure stories in quantitative evaluation studies for visualizations

Designing, conducting, and interpreting evaluation studies with human participants is challenging. While researchers in cognitive psychology, social science, and human-computer interaction view competence in evaluation study methodology a key job skill, ...

research-article
Pre-design empiricism for information visualization: scenarios, methods, and challenges

Empirical study can inform visualization design, both directly and indirectly. Pre-design empirical methods can be used to characterize work practices and their associated problems in a specific domain, directly motivating design choices during the ...

research-article
Field experiment methodology for pair analytics

This paper describes a qualitative research methodology developed for experimental studies of collaborative visual analysis. In much of this work we build upon Herbert H. Clark's Joint Activity Theory to infer cognitive processes from field experiments ...

research-article
Utility evaluation of models

In this paper, we present three case studies of utility evaluations of underlying models in software systems: a user-model, technical and social models both singly and in combination, and a research-based model for user identification. Each of the three ...

Contributors
  • Google LLC
  • Paris-Saclay University
  • Paris-Saclay University
  • University of Stuttgart

      Acceptance Rates

BELIV '14 Paper Acceptance Rate: 23 of 30 submissions, 77%
Overall Acceptance Rate: 45 of 64 submissions, 70%

Year        Submitted  Accepted  Rate
BELIV '14   30         23        77%
BELIV '10   18         12        67%
BELIV '08   16         10        63%
Overall     64         45        70%