ABSTRACT
Online coding tools are an increasingly common feature of programming courses, offering students rapid feedback and flexible practice opportunities while providing instructors with useful analytics. However, little research has explored the complexity of the online exercises provided to students or the order in which students are exposed to new ideas. In this paper, we investigate the benefits of having students complete a short sequence of practice exercises, each targeting a distinct topic, before solving a goal task that combines those concepts. As expected, we find that students solve the goal task with fewer errors and in less time after completing the practice tasks. However, we also find that the practice tasks reduce the likelihood of students delaying work on the goal task, and these effects are particularly large for less-experienced students.