Usability criteria for automated debugging systems

https://doi.org/10.1016/0164-1212(94)00087-4

Abstract

Much of the current discussion about automated debugging systems centers on various technical issues. In contrast, this article focuses on user-oriented usability criteria for automated debugging systems and reviews several systems according to these criteria. We introduce four usability criteria: generality, cognitive plausibility, degree of automation, and appreciation of the user's expertise. A general debugging system is able to understand a program without restrictive assumptions about the class of algorithms, the implementation, etc. A cognitively plausible debugging system supports debugging according to the user's mental model, e.g., by supporting several levels of abstraction and directions of bug localization. A high degree of automation means that fewer interactions with the user are required to find a bug. A debugging system that appreciates the user's expertise is suitable for both expert and novice programmers and can take advantage of an expert programmer's additional knowledge to speed up and improve the debugging process. Existing automated debugging systems fulfill these user-oriented requirements to varying degrees. However, many improvements are still needed to make automated debugging systems attractive to a broad range of users.
