DOI: 10.1145/2207676.2208364

The effect of task assignments and instruction types on remote asynchronous usability testing

Published: 05 May 2012

ABSTRACT

Remote asynchronous usability testing involves users directly in reporting usability problems. Most studies of this approach employ predefined tasks to ensure that users experience specific aspects of the system, whereas other studies use no task assignments. Yet the effect of using predefined tasks remains largely unexamined. There is also limited research on how users should be instructed to identify usability problems. This paper reports on a comparative study of the effect of task assignments and instruction types on the problems identified in remote asynchronous usability testing of a website for information retrieval, involving 53 prospective users. The results show that users solving predefined tasks identified significantly more usability problems, with a significantly higher level of agreement, than those working on their own authentic tasks. Moreover, users who were instructed by means of examples of usability problems identified significantly more usability problems than those who received a conceptual definition of usability problems.


Published in

CHI '12: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
May 2012, 3276 pages
ISBN: 9781450310154
DOI: 10.1145/2207676

      Copyright © 2012 ACM


Publisher

Association for Computing Machinery, New York, NY, United States


      Qualifiers

      • research-article

Acceptance Rates

Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%

