Abstract
Information visualization tools are being promoted as decision-support aids, helping users analyze and comprehend ambiguous and conflicting data sets. Formal evaluations are necessary to demonstrate the effectiveness of these tools, yet such studies are difficult to conduct. In particular, designers lack objective metrics for the amount of work an interface requires of its users, which makes it difficult to compare workload across different interfaces; this is especially problematic for complex information visualization and visual analytics packages. We believe that measures of working memory load can provide a more objective and consistent way of assessing visualizations and user interfaces across a range of applications. We present initial findings from a study that used measures of working memory load to compare the usability of two graph representations.
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Bandlow, A. et al. (2011). Evaluating Information Visualizations with Working Memory Metrics. In: Stephanidis, C. (ed.) HCI International 2011 – Posters’ Extended Abstracts. HCI 2011. Communications in Computer and Information Science, vol. 173. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22098-2_53
DOI: https://doi.org/10.1007/978-3-642-22098-2_53
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-22097-5
Online ISBN: 978-3-642-22098-2