Abstract
A body of research on the assessment of scientific practices suggests that constructed-response items (CRIs) are more valid than selected-response items (SRIs) for assessing scientific explanations. A few studies have compared these item formats in small-scale formative science assessments, but it remains unclear whether the format differences persist in large-scale science assessments, which is the focus of the present study. The study found that, on average, one-third of students across 58 countries/regions performed inconsistently when the same scientific practices were measured with SRIs versus CRIs.
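The key quantity in the abstract is the share of students whose performance diverges across the two item formats. As an illustration only (the paper does not publish its scoring procedure), the sketch below classifies each student as consistent or inconsistent given dichotomously scored SRI and CRI responses on the same practice; the records and the scoring scheme are invented assumptions for this example.

```python
# Hypothetical sketch: flag students whose selected-response (SRI) and
# constructed-response (CRI) scores on the same scientific practice diverge.
# "Inconsistent" here means correct on one format but not the other; this
# operationalization is an assumption, not the paper's documented method.

students = [
    {"id": 1, "sri_correct": True,  "cri_correct": True},   # consistent
    {"id": 2, "sri_correct": True,  "cri_correct": False},  # inconsistent
    {"id": 3, "sri_correct": False, "cri_correct": False},  # consistent
]

inconsistent = [s for s in students if s["sri_correct"] != s["cri_correct"]]
share = len(inconsistent) / len(students)
print(f"Inconsistent performance: {share:.0%} of students")
```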
Acknowledgments
This work was funded by the Organisation for Economic Co-operation and Development (OECD) (Grant No. EDU/0500091979).
Cite this paper
Li, H. (2022). Comparison of Selected- and Constructed-Response Items. In: Rodrigo, M.M., Matsuda, N., Cristea, A.I., Dimitrova, V. (eds.) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners' and Doctoral Consortium. AIED 2022. Lecture Notes in Computer Science, vol. 13356. Springer, Cham. https://doi.org/10.1007/978-3-031-11647-6_70