Research Article
DOI: 10.1145/2818314.2818320

Design and First Results of a Psychometric Test for Measuring Basic Programming Abilities

Published: 09 November 2015

Abstract

We present the design of a test for measuring students' abilities in applying control structures. Validated test instruments are a valuable tool for evaluating teaching, both in research and in classroom settings. Our test is based on item response theory (IRT), in particular the Rasch model, and comprises a set of items that all follow the same format and use a simple, artificial programming language. We field-tested and modified the instrument in four iterations, using only small samples and special statistical methods instead of the large samples usually required for IRT models. After the fourth iteration, the test has reached a usable state. Based on the results, we were able to identify two misconceptions that occur very frequently in our test population: students in grades 7 to 10 at secondary schools.
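The Rasch model mentioned above relates a person's latent ability and an item's difficulty to the probability of a correct response. As a minimal illustrative sketch of the one-parameter logistic form (not the authors' implementation; the function name and parameters here are chosen for illustration only):

```python
import math

def rasch_probability(theta: float, beta: float) -> float:
    """Probability that a person with ability theta answers an item
    of difficulty beta correctly, under the Rasch (1PL) model:
    P(X = 1) = exp(theta - beta) / (1 + exp(theta - beta))."""
    return 1.0 / (1.0 + math.exp(-(theta - beta)))

# A person whose ability exactly matches the item's difficulty
# has a 50% chance of answering correctly.
print(rasch_probability(0.0, 0.0))  # 0.5
```

When ability exceeds difficulty the probability rises above 0.5; this monotone relation is what allows item difficulties and person abilities to be placed on a common scale.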




Published In

WiPSCE '15: Proceedings of the Workshop in Primary and Secondary Computing Education
November 2015, 149 pages
ISBN: 9781450337533
DOI: 10.1145/2818314

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. algorithmic control structures
  2. assessment
  3. computer science education
  4. computing concepts
  5. latent-trait
  6. novice programmer
  7. program flow
  8. psychometric test

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

WiPSCE '15

Acceptance Rates

Overall Acceptance Rate 104 of 279 submissions, 37%


Cited By

  • (2025) Towards high-quality informatics K-12 education in Europe: key insights from the literature. Smart Learning Environments, 12:1. DOI: 10.1186/s40561-025-00366-5. Online publication date: 5-Feb-2025.
  • (2024) Design and Validation of a Computational Thinking Test for Children in the First Grades of Elementary Education. Multimodal Technologies and Interaction, 8:5 (39). DOI: 10.3390/mti8050039. Online publication date: 9-May-2024.
  • (2024) Using Card Sorting Activity as a Strategy for Evaluating Students' Learning of Computational Thinking Concepts. International Journal of Computer Science Education in Schools, 6:4. DOI: 10.21585/ijcses.v6i4.215. Online publication date: 16-Nov-2024.
  • (2024) Relationships Between Executive Functions and Computational Thinking. Journal of Educational Computing Research, 62:5 (1267-1301). DOI: 10.1177/07356331241242435. Online publication date: 1-Apr-2024.
  • (2024) Mastering Control Structures in Secondary Education: Student Observations and Descriptions of Program Logic. Informatics in Schools. Innovative Approaches to Computer Science Teaching and Learning, pages 61-72. DOI: 10.1007/978-3-031-73474-8_5. Online publication date: 27-Oct-2024.
  • (2023) A Systematic Review of Computational Thinking Assessment in the Context of 21st Century Skills. Proceedings of the 2nd International Conference on Humanities, Wisdom Education and Service Management (HWESM 2023), pages 271-283. DOI: 10.2991/978-2-38476-068-8_34. Online publication date: 19-Jul-2023.
  • (2023) Computing Education Research in Schools. Past, Present and Future of Computing Education Research, pages 481-520. DOI: 10.1007/978-3-031-25336-2_20. Online publication date: 18-Apr-2023.
  • (2022) Aprimoramento do CT Puzzle Test para avaliação do pensamento computacional [Improvement of the CT Puzzle Test for assessing computational thinking]. Estudos em Avaliação Educacional, 33 (e08938). DOI: 10.18222/eae.v33.8938. Online publication date: 19-Dec-2022.
  • (2022) Comparing estimates of difficulty of programming constructs. Proceedings of the 22nd Koli Calling International Conference on Computing Education Research, pages 1-12. DOI: 10.1145/3564721.3565950. Online publication date: 17-Nov-2022.
  • (2022) Design an Assessment for an Introductory Computer Science Course: A Systematic Literature Review. 2022 IEEE Frontiers in Education Conference (FIE), pages 1-8. DOI: 10.1109/FIE56618.2022.9962584. Online publication date: 8-Oct-2022.
