Review article

Naturally occurring data as research instrument: analyzing examination responses to study the novice programmer

Published: 18 January 2010

Abstract

In New Zealand and Australia, the BRACElet project has been investigating students' acquisition of programming skills in introductory programming courses. The project has explored students' skills in basic syntax, tracing code, understanding code, and writing code, seeking to establish the relationships between these skills. This ITiCSE working group report presents the most recent step in the BRACElet project, which includes replication of earlier analysis using a far broader pool of naturally occurring data, refinement of the SOLO taxonomy in code-explaining questions, extension of the taxonomy to code-writing questions, extension of some earlier studies on students' 'doodling' while answering exam questions, and exploration of a further theoretical basis for work that until now has been primarily empirical.

