DOI: 10.1145/2635868.2635925

Enablers, inhibitors, and perceptions of testing in novice software teams

Published: 11 November 2014

Abstract

There are many different approaches to testing software, with different benefits for software quality and the development process. Yet, it is not well understood what developers struggle with when getting started with testing, and why some do not test at all or not as much as would be good for their project. This missing understanding keeps us from improving processes and tools that could help novices adopt proper testing practices. We conducted a qualitative study with 97 computer science students. Through interviews, we explored their experiences and attitudes regarding testing in a collaborative software project. We identified enabling and inhibiting factors for testing activities, the different testing strategies the students used, and novices' perceptions of and attitudes toward testing. Students pushed test automation to the end of the project, thus robbing themselves of the advantages of having a test suite during implementation. They were not convinced of the return on investment in automated tests and opted for laborious manual tests, which they often regretted in the end. Understanding the challenges and opportunities novices face when confronted with adopting testing can help us improve testing processes, company policies, and tools. Our findings provide recommendations that can enable organizations to facilitate the adoption of testing practices by their members.




Published In

FSE 2014: Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering
November 2014, 856 pages
ISBN: 9781450330565
DOI: 10.1145/2635868

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 11 November 2014

    Author Tags

    1. Adoption
    2. Enablers
    3. Inhibitors
    4. Motivation
    5. Testing

    Qualifiers

    • Research-article

    Conference

    SIGSOFT/FSE'14

    Acceptance Rates

    Overall Acceptance Rate 17 of 128 submissions, 13%

    Article Metrics

• Downloads (last 12 months): 22
• Downloads (last 6 weeks): 0
    Reflects downloads up to 14 Feb 2025

Cited By
• (2024) Evaluating the Effectiveness of a Testing Checklist Intervention in CS2: An Quasi-experimental Replication Study. Proceedings of the 2024 ACM Conference on International Computing Education Research - Volume 1, pages 55-64. DOI: 10.1145/3632620.3671102. Online publication date: 12-Aug-2024.
• (2024) Automated Program Repair, What Is It Good For? Not Absolutely Nothing! Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, pages 1-13. DOI: 10.1145/3597503.3639095. Online publication date: 20-May-2024.
• (2023) An Experience Report on Introducing Explicit Strategies into Testing Checklists for Advanced Beginners. Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, pages 194-200. DOI: 10.1145/3587102.3588781. Online publication date: 29-Jun-2023.
• (2023) MuTCR: Test Case Recommendation via Multi-Level Signature Matching. 2023 IEEE/ACM International Conference on Automation of Software Test (AST), pages 179-190. DOI: 10.1109/AST58925.2023.00022. Online publication date: May-2023.
• (2023) Sentiment overflow in the testing stack. Journal of Systems and Software, 205:C. DOI: 10.1016/j.jss.2023.111804. Online publication date: 17-Oct-2023.
• (2022) Check It Off. Proceedings of the 27th ACM Conference on Innovation and Technology in Computer Science Education Vol. 1, pages 276-282. DOI: 10.1145/3502718.3524799. Online publication date: 7-Jul-2022.
• (2022) Is Assertion Roulette still a test smell? An experiment from the perspective of testing education. 2022 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), pages 1-7. DOI: 10.1109/VL/HCC53370.2022.9833107. Online publication date: 12-Sep-2022.
• (2021) A suite of Process Metrics to Capture the Effort of Developers. Proceedings of the 2021 10th International Conference on Software and Computer Applications, pages 131-136. DOI: 10.1145/3457784.3457805. Online publication date: 23-Feb-2021.
• (2021) Mutation testing and self/peer assessment. Proceedings of the 43rd International Conference on Software Engineering: Joint Track on Software Engineering Education and Training, pages 231-240. DOI: 10.1109/ICSE-SEET52601.2021.00033. Online publication date: 25-May-2021.
• (2021) Finding Anomalies in Scratch Assignments. 2021 IEEE/ACM 43rd International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), pages 171-182. DOI: 10.1109/ICSE-SEET52601.2021.00027. Online publication date: May-2021.
