
Improving students’ programming quality with the continuous inspection process: a social coding perspective

  • Research Article
  • Published in Frontiers of Computer Science

Abstract

College students majoring in computer science and software engineering need to master skills for high-quality programming. However, extensive research has shown that both the teaching and learning of high-quality programming are challenging and deficient in most college education systems. Recently, the continuous inspection paradigm has been widely adopted by developers on social coding sites (e.g., GitHub) as an important method for ensuring the internal quality of massive numbers of code contributions. This paper presents a case in which continuous inspection is introduced into the classroom to improve students' programming quality. In the study, we first designed a specific continuous inspection process for students' collaborative projects and built an execution environment for the process. We then conducted a controlled experiment with 48 students from the same course over two school years to evaluate how the process affects their programming quality. Our results show that continuous inspection can help students identify bad coding habits, master a set of good coding rules, and significantly reduce the density of code-quality issues introduced into their code. Finally, we describe the lessons learned during the study and propose ideas for replicating and improving the process and its execution platform.
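To make concrete the kind of code-quality issues a continuous inspection tool reports to students, the Java sketch below is a minimal illustration, assuming a Java course project; it is not taken from the paper's course material or its execution environment. It shows two rule violations that static analyzers such as SonarQube commonly flag, each paired with a compliant rewrite.

```java
// Illustrative only: two issue patterns commonly reported by static analyzers
// (string comparison by reference, and a resource that may leak), with fixes.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class InspectionExample {

    // Non-compliant: compares object references, not string contents.
    static boolean isAdminBad(String role) {
        return role == "admin";
    }

    // Compliant: compares contents and is null-safe.
    static boolean isAdminGood(String role) {
        return "admin".equals(role);
    }

    // Non-compliant: the reader is never closed if readLine() throws.
    static String firstLineBad(String path) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(path));
        String line = reader.readLine();
        reader.close();
        return line;
    }

    // Compliant: try-with-resources closes the reader on every execution path.
    static String firstLineGood(String path) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isAdminGood("admin"));
        if (args.length > 0) {
            System.out.println(firstLineGood(args[0]));
        }
    }
}
```

In a continuous inspection workflow, findings like these are reported automatically on every commit or pull request, so students receive the feedback while the code is still fresh rather than only at final grading.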



Acknowledgements

We gratefully acknowledge the financial support from the National Key R&D Program of China (2018YFB1004202) and the National Natural Science Foundation of China (Grant Nos. 61472430, 61502512, 61532004 and 61379051). We also thank our students for their active participation in our study.

Author information


Corresponding author

Correspondence to Xinjun Mao.

Additional information

Yao Lu is a doctoral candidate in the College of Computer, National University of Defense Technology, China. His research interests include open source software engineering, data mining, and crowdsourced learning.

Xinjun Mao is a professor in the College of Computer, National University of Defense Technology, China. He received his PhD degree in computer science from National University of Defense Technology, China in 1998. His research interests include software engineering, multi-agent systems, robot systems, self-adaptive systems, and crowdsourcing.

Tao Wang is an assistant professor in the College of Computer, National University of Defense Technology, China. He received his PhD degree in computer science from National University of Defense Technology, China in 2015. His research interests include open source software engineering, machine learning, data mining, and knowledge discovery in open source software.

Gang Yin is an associate professor in the College of Computer, National University of Defense Technology, China. He received his PhD degree in computer science from National University of Defense Technology, China in 2006. He has published more than 60 research papers in international conferences and journals. His current research interests include distributed computing, information security, and software engineering.

Zude Li is an assistant professor at Central South University. He obtained his PhD degree in 2010 from The University of Western Ontario, Canada. His research interests are in the fields of software architecture, evolution, and quality. He is also a software engineering expert at SKANE SOFT.



About this article


Cite this article

Lu, Y., Mao, X., Wang, T. et al. Improving students’ programming quality with the continuous inspection process: a social coding perspective. Front. Comput. Sci. 14, 145205 (2020). https://doi.org/10.1007/s11704-019-9023-2


  • DOI: https://doi.org/10.1007/s11704-019-9023-2
