DOI: 10.1145/3330430.3333647

Predicting the difficulty of automatic item generators on exams from their difficulty on homeworks

Published: 24 June 2019

ABSTRACT

To design good assessments, it is useful to estimate the difficulty of a novel exam question before the exam is administered. In this paper, we study a collection of a few hundred automatic item generators (short computer programs that generate a variety of unique item instances) and show that their exam difficulty can be roughly predicted from student performance on the same generator during pre-exam practice. Specifically, we show that the rate at which students correctly answer a generator on an exam is, on average, within 5% of those students' correct rate on their last practice attempt. This study uses data from introductory undergraduate Computer Science and Mechanical Engineering courses.


Published in

L@S '19: Proceedings of the Sixth (2019) ACM Conference on Learning @ Scale
June 2019, 386 pages
ISBN: 9781450368049
DOI: 10.1145/3330430

Copyright © 2019 Owner/Author

    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Qualifiers

    • poster
    • Research
    • Refereed limited

    Acceptance Rates

L@S '19 Paper Acceptance Rate: 24 of 70 submissions, 34%. Overall Acceptance Rate: 117 of 440 submissions, 27%.
