Research article
DOI: 10.1145/1370114.1370122

Creating a cognitive metric of programming task difficulty

Published: 13 May 2008

Abstract

Conducting controlled experiments about programming activities often requires the use of multiple tasks of similar difficulty. In previously reported work on a controlled experiment investigating software exploration tools, we tried to select two change tasks of equivalent difficulty to be performed on a medium-sized code base. Despite careful selection, and despite confirmation from our pilot subjects that the two tasks were of equivalent difficulty, the data from the experiment suggest that the subjects found one of the tasks more difficult than the other.
In this paper, we report on early work to create a metric that estimates the cognitive difficulty of a software change task. Such a metric would help in comparing studies of different tools and in designing future studies. Our approach uses a graph-theoretic statistic to measure the complexity of a task solution by the connectedness of its solution elements. The metric predicts the perceived difficulty of the tasks in our experiment, but fails to predict the perceived difficulty of other tasks on a small program. We discuss these differences and suggest future approaches.
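
The abstract does not spell out the statistic itself. As a minimal sketch of the general idea only, and with entirely hypothetical element and relation names, the Python snippet below computes one plausible connectedness measure, graph density, over the program entities a task solution touches; a more densely interconnected solution would be estimated as harder. This is an illustration of the kind of metric described, not the authors' published formulation.

    # Sketch (not the paper's metric): estimate task difficulty from the
    # connectedness of the solution elements. We assume a solution is given as
    # a set of program entities plus the structural relations (calls, field
    # references) that link them, and we use graph density as the statistic.

    def solution_graph_density(elements, relations):
        """Density of the undirected graph over the solution elements.

        elements  -- iterable of entity names
        relations -- iterable of (source, target) pairs between entities
        """
        nodes = set(elements)
        edges = {frozenset(pair) for pair in relations
                 if pair[0] in nodes and pair[1] in nodes and pair[0] != pair[1]}
        n = len(nodes)
        if n < 2:
            return 0.0
        return len(edges) / (n * (n - 1) / 2)

    # Hypothetical solution for one change task: three entities, two relations.
    elements = {"Panel.paint()", "Model.update()", "Model.state"}
    relations = [("Panel.paint()", "Model.state"),
                 ("Model.update()", "Model.state")]
    print(solution_graph_density(elements, relations))  # 2/3 ~= 0.67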

    Published In

CHASE '08: Proceedings of the 2008 International Workshop on Cooperative and Human Aspects of Software Engineering
May 2008, 120 pages
ISBN: 9781605580395
DOI: 10.1145/1370114

    Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. cognitive metrics
    2. experimentation
    3. task difficulty

    Conference

    ICSE '08

    Acceptance Rates

CHASE '08 Paper Acceptance Rate: 28 of 34 submissions, 82%
Overall Acceptance Rate: 47 of 70 submissions, 67%

    Cited By

• (2023) Task Models as a Mean to Identify and Justify Automations in Development Tasks. 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), pages 757-764. DOI: 10.1109/MODELS-C59198.2023.00122
• (2020) Does stress impact technical interview performance? Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, pages 481-492. DOI: 10.1145/3368089.3409712
• (2017) What makes a task difficult? An empirical study of perceptions of task difficulty. 2017 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pages 67-71. DOI: 10.1109/VLHCC.2017.8103452
• (2014) Understanding understanding source code with functional magnetic resonance imaging. Proceedings of the 36th International Conference on Software Engineering, pages 378-389. DOI: 10.1145/2568225.2568252
• (2013) Task Difficulty and Time Constraint in Programmer Multitasking. International Journal of Green Computing, 4(1):35-57. DOI: 10.4018/jgc.2013010103
• (2011) Generating natural language summaries for crosscutting source code concerns. Proceedings of the 2011 27th IEEE International Conference on Software Maintenance (ICSM), pages 103-112. DOI: 10.1109/ICSM.2011.6080777
