Research article
Open access
DOI: 10.1145/3532512.3535223

Toward Understanding Task Complexity in Maintenance-Based Studies of Programming Tools

Published: 08 December 2022

Abstract

Researchers conducting studies on programming tools often make use of maintenance tasks. The complexity of these tasks can significantly influence participant behavior. At the same time, this complexity is difficult to pinpoint because maintenance tasks have many sources of complexity. As a result, researchers may struggle to deliberately decide in which regard their tasks should be complex and in which regard they should be simple.
To help researchers make more deliberate decisions about the complexity of their tasks, we discuss different factors of task complexity. We draw these factors from previous user studies on programming tools as well as from a task complexity model from ergonomics research, which we apply to maintenance tasks. In the end, task complexity may be too multifaceted to ever be fully controlled. Nevertheless, we hope that our discussion helps other researchers decide in which dimensions their tasks should be complex and in which dimensions they want to keep them simple.


Cited By

  • (2023) Toward Studying Example-Based Live Programming in CS/SE Education. Proceedings of the 2nd ACM SIGPLAN International Workshop on Programming Abstractions and Interactive Notations, Tools, and Environments, 17–24. https://doi.org/10.1145/3623504.3623568
  • (2023) Too Simple? Notions of Task Complexity used in Maintenance-based Studies of Programming Tools. 2023 IEEE/ACM 31st International Conference on Program Comprehension (ICPC), 254–265. https://doi.org/10.1109/ICPC58990.2023.00040

      Information & Contributors

      Information

      Published In

      cover image ACM Other conferences
      Programming '22: Companion Proceedings of the 6th International Conference on the Art, Science, and Engineering of Programming
      March 2022
      98 pages
      ISBN:9781450396561
      DOI:10.1145/3532512

Publisher
Association for Computing Machinery, New York, NY, United States

Publication History
Published: 08 December 2022

Author Tags
  • experiments
  • methodology
  • task complexity
  • user studies

Qualifiers
  • Research-article
  • Refereed limited

Conference
<Programming> '22 Companion

      Bibliometrics & Citations

      Bibliometrics

      Article Metrics

      • Downloads (Last 12 months)148
      • Downloads (Last 6 weeks)22
      Reflects downloads up to 14 Feb 2025

      Other Metrics

      Citations

      Cited By

      View all
      • (2023)Toward Studying Example-Based Live Programming in CS/SE EducationProceedings of the 2nd ACM SIGPLAN International Workshop on Programming Abstractions and Interactive Notations, Tools, and Environments10.1145/3623504.3623568(17-24)Online publication date: 18-Oct-2023
      • (2023)Too Simple? Notions of Task Complexity used in Maintenance-based Studies of Programming Tools2023 IEEE/ACM 31st International Conference on Program Comprehension (ICPC)10.1109/ICPC58990.2023.00040(254-265)Online publication date: May-2023
