Abstract
Practitioners in the business world must find ways to keep up to date with best practices in the field and to apply them to their work cost-effectively. Research should help define best practices, but the worlds of research and practice can seem sharply divided. Recently, for instance, the fields of usability and user-centered design have seen considerable controversy about the relative effectiveness of different methodologies. In this column, Dennis Wixon argues that we need to ask whether we are evaluating methods by the appropriate criteria. He considers the growing body of literature on the evaluation of methods unhelpful, or even irrelevant, to the practitioner, and argues that attending to the factors that determine the success of usability efforts in product development organizations will fundamentally change the terms of the debate. ---David A. Siegel
Evaluating usability methods: why the current literature fails the practitioner