Computers & Education

Volume 117, February 2018, Pages 59-74
Analyzing productive learning behaviors for students using immediate corrective feedback in a blended learning environment

https://doi.org/10.1016/j.compedu.2017.09.013

Highlights

  • We analyzed students' interactions with a simple feedback tool for online homework.

  • Higher engagement with the feedback tool is positively correlated with performance.

  • Getting the correct answer on the first try is negatively correlated with performance.

  • Interactions with the feedback tool explain 25% of the variance in cumulative grades.

Abstract

Undergraduate classes in many science and engineering courses are utilizing asynchronous computer platforms to host educational materials such as lecture videos or discussion forums. These platforms also have the ability to provide immediate feedback to students on formative assessment tasks such as homework problems, reading questions, or weekly quizzes. Although there have been a number of studies on computer-based feedback, we still know relatively little about how students interact with immediate feedback and how those interactions influence their learning. In this study, we characterize introductory physics students' interactions with one computer-based immediate simple corrective feedback tool, the "checkable answer feature" (CAF), powered by the institutional version of the edX platform. We investigate how much students interact with the CAF, the patterns of interaction, and, ultimately, how these patterns are associated with course performance. We utilize rich quantitative data, including a large volume of server tracking logs that capture students' use of the CAF, as well as performance metrics. Our findings show that certain patterns of engagement with feedback reflect productive study strategies and significantly predict higher performance. The findings provide guidance for instructional practice and the continued development of online feedback tools in introductory STEM courses.

Introduction

In educational settings, feedback serves the crucial role of closing the gap between students' current understanding and desired learning (Shute, 2008). Timely and informative feedback can help learners recognize and correct misconceptions, motivate them to acquire knowledge, and increase their confidence to learn (Epstein et al., 2010). Yet in the context of higher education, it is often not possible for instructors to provide timely feedback to every student individually. This is especially true in first-year foundational courses because of the large number of students usually enrolled. Feedback delivered on computer-based platforms such as edX (www.edx.org), Coursera (www.coursera.org), and FutureLearn (www.futurelearn.com) offers a potential solution. These platforms can provide students with automatic, immediate feedback on formative assessment tasks such as reading questions, homework problems, and short quizzes. They also enable the detailed recording of the activities students perform online. In turn, these data logs can be re-assembled to give researchers an in-depth look into how students are learning the concepts and skills instructors want them to master.
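
The re-assembly step described above is essentially a log-aggregation task. Below is a minimal sketch of what it could look like for an edX-style tracking log stored as one JSON object per line; the field names ("username", "event_type") and the file name are assumptions for illustration, not the platform's actual schema.

```python
import json
from collections import defaultdict

# Minimal sketch: aggregate per-student event counts from an edX-style
# tracking log with one JSON object per line. The "username" and
# "event_type" fields follow common edX tracking-log conventions but are
# assumptions here, not the course platform's actual schema.
def summarize_events(log_path):
    counts = defaultdict(lambda: defaultdict(int))
    with open(log_path) as f:
        for line in f:
            try:
                event = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            user = event.get("username")
            event_type = event.get("event_type")
            if user and event_type:
                counts[user][event_type] += 1
    return counts

if __name__ == "__main__":
    per_student = summarize_events("tracking.log")  # hypothetical file name
    for user, events in list(per_student.items())[:5]:
        print(user, dict(events))
```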

The specific feedback mechanism we focus on in this study is the "checkable answer feature" (CAF) included in a suite of online materials in an introductory blended learning course in physics, which we call PHYS101, at an elite private university in the northeast United States. The CAF, a rudimentary intelligent tutoring system, gives students immediate feedback on whether or not their answers to online homework problems are correct. Once a student enters an answer, the system returns a green tick mark if it is correct and a red X if it is wrong (see Fig. 1). In addition, students use the CAF to check whether their answers to multi-part problems, which they have to write out, are correct before they submit their traditional, paper-based homework assignments.
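
To make the mechanism concrete, the sketch below shows the kind of comparison a simple corrective feedback tool performs: check a submitted numeric answer against the stored solution within a tolerance and report only right or wrong. The function name, tolerance, and return symbols are hypothetical, not the CAF's actual implementation.

```python
# Hypothetical check of a numeric answer against the stored solution,
# returning only a right/wrong symbol (rendered on the page as a green
# tick mark or a red X). The 1% relative tolerance is an assumption.
def check_answer(submitted: float, correct: float, rel_tol: float = 0.01) -> str:
    if abs(submitted - correct) <= rel_tol * abs(correct):
        return "✓"  # correct: green tick mark
    return "✗"      # incorrect: red X

print(check_answer(9.81, 9.8))  # ✓ (within 1% of the stored value)
print(check_answer(3.2, 9.8))   # ✗
```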

It is evident that formative feedback can foster improved achievement and enhanced motivation to learn, while also supporting deep and self-regulated learning (Crisp and Ward, 2008; Koh, 2008; Wolsey, 2008). Studies of courses that are wholly online (Crisp and Ward, 2008; Gijbels et al., 2005; Sorensen and Takle, 2005; Van der Pol et al., 2008; Vonderwell et al., 2007; Wang et al., 2008) show that the effective use of online formative assessment can engage students and teachers in meaningful educational experiences, providing them with opportunities to collaboratively identify learning needs and devise strategies to meet them. Other studies have investigated the effectiveness of online or computer-assisted feedback tools in blended learning environments (e.g., Angus and Watson, 2009; Chung et al., 2006; Feldman and Capobianco, 2008; Lin, 2008; Wang et al., 2008). Foundational work by Van der Kleij, Feskens, and Eggen (2015), analyzing a large number of studies, indicates that elaborated feedback is more effective than simple corrective feedback. This effect is more pronounced for higher-order learning and in certain disciplines, for example, in mathematics as compared with science. Interestingly, several studies, including recent work by Van der Kleij, do not observe a difference between corrective and elaborated feedback (Mazingo, 2006; Van der Kleij et al., 2012).

This raises the possibility that how students engage with feedback may have a significant influence on how feedback affects learning outcomes. While there is evidence that the amount of time spent with feedback is mediated by student attitudes and motivations (Narciss and Huth, 2006; Van der Kleij et al., 2012), to our knowledge there are no quantitative studies of how the level and patterns of student engagement with corrective feedback affect learning outcomes. Our study addresses this gap through the following research questions:

RQ1: How much do students interact with simple corrective feedback in a blended undergraduate physics class, and what are the patterns of engagement?

RQ2: (a) What patterns of engagement with simple corrective feedback comprise productive study behaviors and are associated with stronger performance, and (b) what patterns are negatively correlated with performance?

Computer-based feedback systems are becoming more sophisticated and of growing benefit in higher education. Individual feedback can be more easily scaled to large numbers of learners because each student can interact with information that is instantaneously provided, and the material can be focused on the individual student's cognitive needs, study strategies, and preferences for how material is presented. By combining advances in artificial intelligence with content experts' knowledge of difficult concepts and common misconceptions, the material can also address the stumbling blocks that interfere with learning in particular disciplines. Feedback can be delivered when and where it is most convenient for the student (a dorm room, an on-campus common study space, or Starbucks, for that matter). The ever-increasing use of these online educational platforms, coupled with rich, large-scale data gathering tools, enables an unprecedented window into student use of computer-based feedback mechanisms. Recognizing these interactions can help instructors deliver more targeted interventions, as well as help platform developers create features that maximize the potential of these online systems. This study also contributes to understanding how a large, detailed online dataset can reveal patterns of student study behaviors. In turn, this demonstrates ways in which data mining can be used to complement traditional statistics and generate new knowledge. Finally, correlating online feedback behaviors with course performance sheds light on how self-regulated learning and productive struggle contribute to positive student outcomes.

Section snippets

Context of the study

Our specific study was undertaken within the context of a semester-long required residential physics course in mechanics, which all undergraduates at the university must pass to graduate. A well-respected faculty member in the physics department began to experiment with redesigning the course in 2000, moving away from lectures to a model that incorporated active learning pedagogies. (In physics, this model of teaching is known as “studio physics” (Beichner et al., 2007, Beichner and Saul, 2003,

Theoretical framework

This study is grounded both in the research on feedback in learning, which has a long and rich history, and relatively new scholarship on computer-mediated feedback, including comparisons between students’ use of feedback provided by intelligent tutoring systems and online learning platforms. Some of the behaviors we see indicate a subset of the students are using strategies associated with self-regulation, and we utilize that body of work as well. We knit this applicable background literature

Methodology

In order to identify students' behavioral patterns with the CAF and to understand the relationship between those patterns and performance metrics, we needed to go through several distinct steps, balancing theory with emergent patterns we saw in our data. First, we generated descriptive statistics, and then we used inferential statistics and data mining techniques to determine the relationship between student behaviors and their outcomes. This approach allows us to take advantage of traditional
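
As an illustration of the inferential step described above, the sketch below regresses a cumulative grade on engagement features of the kind that could be extracted from CAF logs and reports the variance explained (R²). The feature names and the synthetic data are assumptions for illustration, not the study's actual variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative regression of cumulative grade on CAF engagement features.
# The feature names and the synthetic data below are placeholders, not the
# study's actual variables or results.
rng = np.random.default_rng(0)
n = 200
features = pd.DataFrame({
    "n_checks": rng.poisson(40, n).astype(float),    # total CAF submissions
    "first_try_correct": rng.uniform(0.0, 1.0, n),   # share correct on first try
})
grade = 60 + 0.3 * features["n_checks"] - 5 * features["first_try_correct"] \
        + rng.normal(0, 8, n)

X = sm.add_constant(features)   # add intercept term
model = sm.OLS(grade, X).fit()  # ordinary least squares fit
print(model.summary())
print("Variance explained (R^2):", round(model.rsquared, 2))
```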

Results

The results of our analyses allow us to begin to answer the research questions that motivate this study. We can identify how students are using immediate feedback in solving online homework problems and what behaviors correlate with stronger performance in the course.

Discussion

This study utilizes rich quantitative data to understand how students interact with immediate feedback on their online homework assignments in a blended learning course. The feedback is simple and corrective: once the students submit an answer to a problem, the checkable answer feature (CAF) responds with either a green tick mark or a red X to tell them if they are right or wrong. Although a substantial body of research exists on the importance of feedback to the learning process, clickstream

Conclusion

Online platforms can now provide researchers with data that indicate what resources students are using, how they come to understand concepts, and what behaviors reflect effective study strategies. Thus, these platforms allow us to study learning at a deeper level than has previously been possible. In this study, we examine how students use very simple online task-level feedback to master material that they may not have encountered before, recognizing that feedback is integral to the learning process.

Acknowledgements

We would like to recognize the contributions of undergraduate researcher Xingyu Zhou (Purdue University) in data cleaning and understanding the data generating mechanism of the platform. We would like to thank the instructors and researchers at the university studied for their participation and help with the ongoing investigation.

We would like to acknowledge the support and feedback from the first author's doctoral committee members, Dr. Ruth Streveler and Dr. Sean Brophy.

This research is based

References (59)

  • J. Van der Pol et al.

    The nature, reception, and use of online peer feedback in higher education

    Computers & Education

    (2008)
  • A. Vedel

    The big five and tertiary academic performance: A systematic review and meta-analysis

    Personality and Individual Differences

    (2014)
  • T.-H. Wang et al.

    Designing a web-based assessment environment for improving pre-service teacher assessment literacy

    Computers & Education

    (2008)
  • H. Wei et al.

    Can more interactivity improve learning achievement in an online course? Effects of college students' perception and actual use of a course-management system on their learning achievement

    Computers & Education

    (2015)
  • B. Zheng et al.

    Participation, interaction, and academic achievement in an online discussion environment

    Computers & Education

    (2015)
  • V. Aleven et al.

    Limitations of student control: Do students know when they need help?

  • S.D. Angus et al.

    Does regular online testing enhance student learning in the numerical sciences? Robust evidence from a large data set

    British Journal of Educational Technology

    (2009)
  • R.S. Baker et al.

    Off-task behavior in the cognitive tutor classroom: When students game the system

  • W.K. Balzer et al.

    Effects of cognitive feedback on performance

    Psychological Bulletin

    (1989)
  • R.J. Beichner et al.

    Introduction to the SCALE-UP (student-centered activities for large enrollment undergraduate programs) project

  • R.J. Beichner et al.

    The student-centered activities for large enrollment undergraduate programs (SCALE-UP) project

    Research-based Reform of University Physics

    (2007)
  • R.M. Bernard et al.

    A meta-analysis of three types of interaction treatments in distance education

    Review of Educational Research

    (2009)
  • R.M. Bernard et al.

    A meta-analysis of blended learning and technology use in higher education: from the general to the applied

    Journal of Computing in Higher Education

    (2014)
  • J. Bransford et al.

    How people learn: Brain, mind, experience, and school

    (1999)
  • L. Breslow

    Wrestling with pedagogical change: the TEAL initiative at MIT

    Change: The Magazine of Higher Learning

    (2010)
  • G.K. Chung et al.

    An exploratory study of a novel online formative assessment and instructional tool to promote students' circuit problem solving

    The Journal of Technology, Learning and Assessment

    (2006)
  • R. Clariana

    A comparison of answer until correct feedback and knowledge of correct response feedback under two conditions of contextualization

    Journal of Computer-based Instruction

    (1990)
  • J. DeBoer et al.

    Bringing student backgrounds online: MOOC user demographics, site usage, and online learning

    Engineer

    (2013)
  • R.E. Dihoff et al.

    The role of feedback during academic testing: The delay retention effect revisited

    The Psychological Record

    (2003)