DOI: 10.1145/3357155.3360484
IHC Conference Proceedings · short paper

Evaluator self-efficacy analysis in a usability test

Published: 22 October 2019

ABSTRACT

When software is made available to users, it is expected to be free of faults and of good quality. Testing, such as usability testing, can contribute to this quality: evaluators analyze interfaces and report which usability errors should be corrected. According to self-efficacy theory, evaluators who believe they are more experienced should find more errors. In this context, the purpose of this study was to conduct a usability test with evaluators to verify whether those who considered themselves more experienced were able to find more errors. A self-assessment form and a usability test were applied for this purpose. For each of the nine participants, their experience information was collected along with the usability errors they found. As a result, the participants who believed they could find the most errors were not the ones who actually found them.
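The paper does not publish its raw data or say which statistic it used to relate self-assessment to performance. As a minimal sketch of one common way to make that comparison, the snippet below computes a Spearman rank correlation between a self-efficacy score and the number of usability errors found, using entirely hypothetical data for nine evaluators (the function names and the figures are assumptions for illustration, not taken from the study):

```python
# Hypothetical illustration: comparing evaluators' self-assessed ability
# (e.g., a 1-5 self-efficacy score) with the number of usability errors
# each one found. Spearman's rho is a common choice for such ordinal data.

def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical data for nine evaluators (NOT from the paper):
self_efficacy = [5, 4, 4, 3, 3, 3, 2, 2, 1]  # self-assessed score, 1-5
errors_found  = [2, 3, 5, 4, 1, 6, 5, 2, 4]  # usability errors reported

rho = spearman(self_efficacy, errors_found)
print(f"Spearman rho = {rho:.2f}")
```

A rho near zero (or negative) would correspond to the paper's finding: the evaluators who rated themselves most capable were not those who found the most errors.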


Published in

IHC '19: Proceedings of the 18th Brazilian Symposium on Human Factors in Computing Systems
October 2019, 679 pages
ISBN: 9781450369718
DOI: 10.1145/3357155

Copyright © 2019 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

IHC '19 paper acceptance rate: 56 of 165 submissions (34%). Overall acceptance rate: 331 of 973 submissions (34%).
