ABSTRACT
At SIGCSE 2002, Michael Goldwasser suggested a strategy for adding software testing practices to programming courses: require students to turn in tests along with their solutions, then run every student's tests against every other student's program. This approach provides a much more robust environment for assessing the quality of student-written tests, and also provides more thorough testing of student solutions. Although software testing is now a regular part of many programming courses, the all-pairs model of executing tests remains a rarity. This is largely because student-written tests, such as JUnit tests for Java programs, are now typically program code themselves, and they may depend on virtually any aspect of their author's own solution. These dependencies may keep one student's tests from even compiling against another student's program. This paper discusses the problem and presents a novel solution for Java that uses bytecode rewriting to transform a student's tests into a form that uses reflection to run against any other solution, regardless of any compile-time dependencies that may have been present in the original tests. Results of applying this technique to two assignments, encompassing 147 student programs and 240,158 individual test case runs, show the feasibility of the approach and provide some insight into the quality of both student tests and student programs. An analysis of these results is presented.
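The paper's actual bytecode-rewriting transformation is not reproduced here, but the reflective form the abstract describes can be illustrated with a small hypothetical sketch. The class names `ReflectiveTestSketch`, `Solution`, and the helper `callMax` are all invented for illustration: a student's test would normally call `Solution.max(a, b)` directly, creating a compile-time dependency on that exact class, whereas the reflective version looks the method up by name at run time, so the same test logic can be aimed at any other student's class that exposes a compatible method.

```java
import java.lang.reflect.Method;

// Hypothetical sketch of the reflective test form described in the abstract.
public class ReflectiveTestSketch {

    // Stand-in for one arbitrary student submission.
    public static class Solution {
        public static int max(int a, int b) { return a >= b ? a : b; }
    }

    // Invoke max(int, int) on whatever class we are pointed at; nothing here
    // names Solution at compile time, so no compile-time dependency remains.
    public static int callMax(Class<?> target, int a, int b) throws Exception {
        Method m = target.getMethod("max", int.class, int.class);
        return (Integer) m.invoke(null, a, b);  // null receiver: static method
    }

    public static void main(String[] args) throws Exception {
        // The same call could take any other submission's class object.
        System.out.println(callMax(Solution.class, 3, 7));  // prints 7
    }
}
```

If the target class lacks a matching `max(int, int)`, `getMethod` throws `NoSuchMethodException`, which an all-pairs harness would record as a test failure against that submission rather than a compile error.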
- J. Spacco and W. Pugh, "Helping students appreciate test-driven development (TDD)," in Companion to the 21st ACM SIGPLAN Symposium on Object-Oriented Programming, Systems, Languages, and Applications, Portland, Oregon, USA, 2006.
- M. H. Goldwasser, "A gimmick to integrate software testing throughout the curriculum," in Proceedings of the 33rd SIGCSE Technical Symposium on Computer Science Education, New York, NY: ACM, 2002, pp. 271--275.
- M. Hauswirth et al., "The JavaFest: a collaborative learning technique for Java programming courses," in Proceedings of the 6th International Symposium on Principles and Practice of Programming in Java, Modena, Italy, 2008.
- W. Marrero and A. Settle, "Testing first: emphasizing testing in early programming courses," SIGCSE Bull., vol. 37, pp. 4--8, 2005.
- S. H. Edwards, "Rethinking computer science education from a test-first perspective," in Companion of the 18th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications, Anaheim, CA, USA, 2003.
- D. Jackson and M. Usher, "Grading student programs using ASSYST," SIGCSE Bull., vol. 29, pp. 335--339, 1997.
- B. Cole et al., "Improving your software using static analysis to find bugs," in Companion to the 21st ACM SIGPLAN Symposium on Object-Oriented Programming, Systems, Languages, and Applications, Portland, Oregon, USA, 2006.
- K. Aaltonen et al., "Mutation analysis vs. code coverage in automated assessment of students' testing skills," in Proceedings of the ACM International Conference Companion on Object Oriented Programming Systems Languages and Applications Companion, Reno/Tahoe, Nevada, USA, 2010.
- S. Elbaum et al., "Bug Hunt: making early software testing lessons engaging and affordable," in Proceedings of the 29th International Conference on Software Engineering (ICSE), 2007, pp. 688--697.
- K. Beck, "Aim, fire (test-first coding)," IEEE Software, vol. 18, no. 5, pp. 87--89, Sept./Oct. 2001.
- K. Beck, Test-Driven Development: By Example. Boston, MA: Addison-Wesley, 2003.
- S. Chiba and M. Nishizawa, "An easy-to-use toolkit for efficient Java bytecode translators," in Proceedings of the 2nd International Conference on Generative Programming and Component Engineering (GPCE '03), New York, NY: Springer-Verlag, 2003, pp. 364--376.
Running students' software tests against each others' code: new life for an old "gimmick"