DOI: 10.1145/3364510.3364511

Assessing students' understanding of object structures

Published: 21 November 2019

ABSTRACT

We present a theoretically derived and empirically tested competence model for the concepts of "object state" and "references", both of which form an important part of object-oriented programming. Our model characterizes different levels of programming capability, with a focus on the possible learning stages of beginning learners. It is based on the notion of understanding objects and their interaction with each other at runtime. From a hierarchical description of our theory, we derive a two-dimensional structure that separates the hierarchy into the two facets "structure" (how objects are structured and stored) and "behaviour" (how objects interact with and access each other). On this basis, we developed a set of items and collected data in a CS1 course (N = 195) to validate the item set. We analyzed the data using a Rasch model to check item difficulty and the presence of different difficulty levels, and used factor analysis to check the dimensionality of the model. Furthermore, we argue for the validity of the items with the help of additional data collected from the students. The results indicate that our theoretical assumptions are correct and that the items will be usable with minor modifications.
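
To make the two facets concrete, consider a minimal sketch (our own illustration in Python, not an item from the paper's instrument; the `Account` class is hypothetical). The "structure" facet concerns how object state is stored, while the "behaviour" facet concerns how objects reach and change each other through references, for instance under aliasing:

```python
class Account:
    def __init__(self, balance):
        self.balance = balance   # object state, stored per object


a = Account(100)
b = a                 # a second reference to the SAME object, not a copy
c = Account(100)      # a distinct object that merely has equal state

b.balance = 50        # mutating through one reference...
print(a.balance)      # 50  -- ...is visible through the alias
print(c.balance)      # 100 -- the separate object is unaffected
print(a is b, a is c) # True False -- identity vs. equal state
```

Items in this spirit probe whether learners distinguish two references to one object from two objects that merely hold equal state.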
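The statistical side can be sketched in the same way. The following is an illustrative stand-in, not the paper's analysis code (which is not published): it simulates dichotomous responses of roughly the study's sample size and fits the dichotomous Rasch model by joint maximum likelihood; the item count and the `rasch_jml` helper are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated responses standing in for the CS1 data:
# 195 students, 12 dichotomous items (item count chosen for illustration).
n_persons, n_items = 195, 12
theta_true = rng.normal(0.0, 1.0, n_persons)   # person abilities
b_true = np.linspace(-2.0, 2.0, n_items)       # item difficulties
p_true = 1.0 / (1.0 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < p_true).astype(float)

# Persons with all-correct or all-wrong patterns have no finite ability
# estimate under joint maximum likelihood, so drop them.
scores = X.sum(axis=1)
X = X[(scores > 0) & (scores < n_items)]


def rasch_jml(X, n_iter=100):
    """Fit the dichotomous Rasch model by alternating Newton-Raphson
    updates of person abilities (theta) and item difficulties (b)."""
    n_p, n_i = X.shape
    theta = np.zeros(n_p)
    b = np.zeros(n_i)
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        info = P * (1.0 - P)                     # Fisher information terms
        theta += (X - P).sum(axis=1) / info.sum(axis=1)  # person step
        P = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
        info = P * (1.0 - P)
        b += (P - X).sum(axis=0) / info.sum(axis=0)      # item step
        b -= b.mean()                            # anchor the scale at 0
    return theta, b


theta_hat, b_hat = rasch_jml(X)
print("estimated item difficulties:", np.round(np.sort(b_hat), 2))
print("true item difficulties:     ", np.round(b_true - b_true.mean(), 2))
```

Joint maximum-likelihood estimates are known to be biased for short tests; published analyses of this kind typically rely on established psychometric software and conditional or marginal estimation instead.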


Published in

Koli Calling '19: Proceedings of the 19th Koli Calling International Conference on Computing Education Research
November 2019, 247 pages
ISBN: 9781450377157
DOI: 10.1145/3364510
Copyright © 2019 ACM


Publisher: Association for Computing Machinery, New York, NY, United States
