DOI: 10.1145/3229345.3229410
Research Article

Usability evaluation of software debugging tools

Published: 4 June 2018

ABSTRACT

Testing and debugging are key tasks in the software development cycle. Spectrum-Based Fault Localization (SFL) is a well-established debugging technique due to its relatively low execution cost. SFL pinpoints the most suspicious program elements by ranking the lines, methods, classes, and packages most likely to contain faults. Recently, SFL tools have been proposed to help developers during debugging. These tools use different metaphors to represent the suspiciousness of program elements. In this paper, we compare two SFL tools that use different metaphors: Jaguar and CodeForest. Jaguar uses a textual representation, presenting the most suspicious elements of a program as a list sorted by suspiciousness. CodeForest uses a three-dimensional visualization metaphor, presenting a program as a cacti forest in which basic blocks are represented as thorns, methods as branches, and classes as cacti. We present the results of an evaluation in which 76 students used both tools. The perceived usability of the tools was assessed using a questionnaire based on the Technology Acceptance Model (TAM). Three factors were considered to measure the impact of the tools on the debugging activity: intention of use, usefulness, and ease of use. The results suggest that there is no statistically significant difference in perceived usability between CodeForest and Jaguar.
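To make the ranking idea concrete, the sketch below shows how an SFL metric scores program elements from test coverage data. It uses the Ochiai coefficient, one common suspiciousness formula studied by Abreu et al. [1]; the paper does not state which formula Jaguar and CodeForest compute internally, and the element names and coverage counts here are hypothetical.

    # Minimal SFL scoring sketch (Python). Assumption: the Ochiai
    # coefficient, one common suspiciousness metric; Jaguar and
    # CodeForest may use a different formula.
    import math

    def ochiai(ef, nf, ep, np_):
        """Suspiciousness of one element from its coverage spectrum.
        ef/nf: failing tests that do / do not execute the element;
        ep/np_: passing tests that do / do not execute the element
        (np_ is unused by Ochiai, kept only to hold the full spectrum)."""
        denom = math.sqrt((ef + nf) * (ef + ep))
        return ef / denom if denom else 0.0

    # Hypothetical spectra: (element, ef, nf, ep, np_) from one test-suite run.
    spectra = [
        ("Account.java:42", 4, 0, 1, 7),  # executed by every failing test
        ("Account.java:57", 1, 3, 6, 2),
        ("Report.java:10",  0, 4, 5, 3),
    ]

    # Rank by descending suspiciousness, as a list view such as Jaguar's would.
    for element, *counts in sorted(spectra, key=lambda s: -ochiai(*s[1:])):
        print(f"{ochiai(*counts):.3f}  {element}")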

References

  1. Rui Abreu, Peter Zoeteweij, and Arjan J. C. van Gemund. 2007. On the accuracy of spectrum-based fault localization. In Proceedings of the Testing: Academic and Industrial Conference Practice and Research Techniques - MUTATION. 89--98.
  2. Hiralal Agrawal. 1991. Towards Automatic Debugging of Computer Programs. Ph.D. Dissertation. Purdue University, West Lafayette, IN 47907. http://spaf.cerias.purdue.edu/Students/spyder/TR103P.pdf
  3. Keijiro Araki, Zengo Furukawa, and Jun Cheng. 1991. A General Framework for Debugging. IEEE Software 8, 3 (1991).
  4. Ronald S. Calinger. 2015. Leonhard Euler: Mathematical Genius in the Enlightenment. Princeton University Press, Princeton, NJ, USA.
  5. Lee J. Cronbach. 1951. Coefficient alpha and the internal structure of tests. Psychometrika 16, 3 (1951), 297--334.
  6. R. A. Cummins and E. Gullone. 2000. Why we should not use 5-point Likert scales: The case for subjective quality of life measurement. In Proceedings, Second International Conference on Quality of Life in Cities. 74--93.
  7. Fred D. Davis. 1989. Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly 13, 3 (Sept. 1989), 319--340.
  8. Marcio Eduardo Delamaro, José Carlos Maldonado, Mario Jino, and Marcos Lordello Chaim. 2007. Introdução ao Teste de Software. Elsevier Ltda., Rio de Janeiro.
  9. Carlos Gouveia, José Campos, and Rui Abreu. 2013. Using HTML5 visualizations in software fault localization. In 2013 First IEEE Working Conference on Software Visualization (VISSOFT). 1--10.
  10. Andre Hora, Nicolas Anquetil, Cesar Couto, Marco Tulio Valente, and Julio Martins. 2012. Bug Maps: A Tool for the Visual Exploration and Analysis of Bugs. In 16th European Conference on Software Maintenance and Reengineering (CSMR 2012), Szeged, Hungary, March 27--30, 2012. 523--526.
  11. James A. Jones, James F. Bowring, and Mary Jean Harrold. 2007. Debugging in Parallel. In Proceedings of the 2007 International Symposium on Software Testing and Analysis. 16--26.
  12. J. A. Jones, M. J. Harrold, and J. Stasko. 2002. Visualization of test information to assist fault localization. In Proceedings of the 24th International Conference on Software Engineering. 467--477.
  13. Leonardo J. Kazmier. 2007. Estatística aplicada a administração e economia (4a ed.). Bookman, Porto Alegre.
  14. Paul Legris, John Ingham, and Pierre Collerette. 2003. Why do people use information technology? A critical review of the technology acceptance model. Information & Management 40, 3 (2003), 191--204.
  15. João Maroco and Teresa Garcia Marques. 2006. Qual a fiabilidade do alfa de Cronbach? Questões antigas e soluções modernas? http://publicacoes.ispa.pt/index.php/lp/article/viewFile/763/706
  16. Danilo Mutti. 2014. Coverage Based Debugging Visualization. Master's thesis. Universidade de São Paulo.
  17. Glenford J. Myers, Tom Badgett, and Corey Sandler. 2012. The Art of Software Testing (3rd ed.). John Wiley & Sons, Inc., New Jersey.
  18. Alexandre Perez and Rui Abreu. 2013. Cues for Scent Intensification in Debugging. In 2013 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW).
  19. Steven P. Reiss. 2014. The Challenge of Helping the Programmer during Debugging. In 2014 Second IEEE Working Conference on Software Visualization (VISSOFT). 112--116.
  20. Manos Renieris and Steven P. Reiss. 2003. Fault Localization With Nearest Neighbor Queries. In ASE. IEEE Computer Society, 30--39.
  21. Thomas Reps, Thomas Ball, Manuvir Das, and James Larus. 1997. The use of program profiling for software maintenance with applications to the year 2000 problem. In Proceedings of the 6th European Software Engineering Conference Held Jointly with the 5th ACM SIGSOFT Symposium on the Foundations of Software Engineering. 432--449.
  22. Henrique Lemos Ribeiro. 2016. On the use of control- and data-flow in fault localization. Master's thesis. Universidade de São Paulo.
  23. S. S. Shapiro and M. B. Wilk. 1965. An Analysis of Variance Test for Normality (Complete Samples). Biometrika 52, 3/4 (Dec. 1965), 591--611. http://links.jstor.org/sici?sici=0006-3444%28196512%2952%3A3%2F4%3C591%3AAAOVTF%3E2.0.CO%3B2-B
  24. L. Taylor, R. Titmuss, and C. Lebre. 1999. The challenges of seamless handover in future mobile multimedia networks. IEEE Personal Communications 6, 2 (Apr. 1999), 32--37.
  25. Mark Weiser. 1981. Program Slicing. In Proceedings of the 5th International Conference on Software Engineering (ICSE '81). IEEE Press, 439--449.
  26. Andreas Zeller. 2005. Why Programs Fail: A Guide to Systematic Debugging. Morgan Kaufmann Publishers Inc.

Published in

SBSI '18: Proceedings of the XIV Brazilian Symposium on Information Systems
June 2018, 578 pages
ISBN: 9781450365598
DOI: 10.1145/3229345

          Copyright © 2018 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States



          Acceptance Rates

Overall acceptance rate: 181 of 557 submissions, 32%