DOI: 10.1145/2652524.2652526 (research article, ESEM conference proceedings)

Impact of process conformance on the effects of test-driven development

Published: 18 September 2014

ABSTRACT

Context: One limitation of empirical studies of test-driven development (TDD) is the difficulty of knowing whether developers actually followed the advocated test-code-refactor cycle. Prior research has treated process conformance only as a threat to internal validity while investigating other confounding variables that might explain the controversial effects of TDD; none has included process conformance as a fundamental part of the analysis.

Goal: We aim to examine the impact of process conformance on the claimed effects of TDD on external quality, developers' productivity, and test quality.

Method: We used data collected during a previous study to build regression models in which the level of process conformance predicts external quality, productivity, and test thoroughness.
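The paper does not publish its analysis code. As a rough, hypothetical illustration of the kind of model described above, the sketch below fits a simple least-squares regression of one outcome (external quality) on a process-conformance score. All variable names, scales, and values are synthetic assumptions, not the study's data; the study's reported sample size (n = 22) is the only figure taken from the abstract.

```python
# Hypothetical sketch of a conformance-vs-outcome regression.
# The data below are synthetic; only n = 22 comes from the paper.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)
n = 22  # sample size reported in the abstract

# Assumed measurement scales (not from the paper):
# conformance as a percentage of TDD-conformant cycles,
# quality as a score with no built-in dependence on conformance.
conformance = rng.uniform(0, 100, n)
quality = 50 + rng.normal(0, 10, n)

# Simple OLS fit: slope estimates the effect of conformance on quality,
# and the p-value tests whether that slope differs from zero.
result = linregress(conformance, quality)
print(f"slope={result.slope:.3f}, p={result.pvalue:.2f}")
```

A p-value above the usual 0.05 threshold here, as in the paper's results, would mean the data give no evidence that higher conformance predicts the outcome.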

Result: Based on our analysis of the available data (n = 22), we observe that none of external quality (p = 0.21), productivity (p = 0.80), number of tests (p = 0.39), or coverage (p = 0.09) was significantly correlated with the level of TDD process conformance.

Conclusion: Although based on a small sample, our results raise concerns about how TDD is interpreted. We also question whether the cost of strictly following TDD pays off in terms of external quality, productivity, and test thoroughness.


Published in:

ESEM '14: Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
September 2014, 461 pages
ISBN: 9781450327749
DOI: 10.1145/2652524

        Copyright © 2014 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Acceptance Rates

ESEM '14 paper acceptance rate: 23 of 123 submissions (19%). Overall acceptance rate: 130 of 594 submissions (22%).
