Why Ask Why? Considering Motivation in Visualization Evaluation

ABSTRACT
My position is that improving evaluation for visualization requires more than developing more sophisticated evaluation methods: it also requires improving the efficacy of evaluations, which involves how evaluations are applied, reported, and assessed. Considering the motivations for evaluation in visualization offers a way to explore these issues, but doing so requires a vocabulary for discussion. This paper proposes initial terminology for discussing the motivations of evaluation. Specifically, the scales of actionability and persuasiveness provide a framework for understanding the motivations of evaluation and how these relate to the interests of the various stakeholders in a visualization. This framework can help keep issues such as audience, reporting, and assessment in focus as evaluation expands to new methods.