On-line question-posing and peer-assessment as means for web-based knowledge sharing in learning
Introduction
Networked computers can be used for sharing information and knowledge. They can also be used in on-line evaluation of learning outcomes. Can the process of on-line knowledge sharing be made relevant to learning and assessment? This study is an examination of a novel way of merging assessment and knowledge sharing in the context of a hybrid on-line learning system, used in a postgraduate MBA course.
The internet has long been a prominent space for learning (Dori et al., 2003; Potelle and Rouet, 2003) and testing (Rafaeli and Tractinsky, 1989, 1991). This paper describes a unique way of implementing a web-based testing mechanism that goes beyond the mere administration of on-line tests. The system, named QSIA, was designed to enhance knowledge sharing among both instructors and students. QSIA, employed in a Graduate School of Business, served as the platform for an on-line Question-Posing Assignment (QPA). In this assignment, students were required to contribute questions (knowledge items) for public use. The students were also asked to rank their peers' contributions. The on-line QPA was graded for quality and persistence. This sort of knowledge-sharing assignment is feasible only in a web-based on-line environment, if only because of the sheer volume of data flows. These new procedures, grafted onto traditional classroom practice, raise several intriguing research questions: How do students respond to and perform with this novel on-line, collaborative QPA? How does the on-line question-posing and peer-assessment activity relate to students' traditionally conceptualized learning outcomes? And what are students' attitudes towards the use of systems such as QSIA and the on-line QPA? In the following, we report on a field test investigating the implementation of this set of tools and practices.
The evaluation of learning outcomes is evolving in both methodology and technology. Methodologically, evaluation is shifting from a "culture of testing" to a "culture of assessment" (Birenbaum and Dochy, 1996; Sluijsmans et al., 2001). Emphasis is placed on integrating assessment and instruction, and assessment addresses the process of learning rather than just the evaluation of products and individual progress. The role of students has also been changing, from passive subjects to active participants who share responsibility for the process, practice self-assessment, and collaborate.
Technologically, the environment is shifting from paper and pen to computerized adaptive testing. Computerized administration of tests is attractive for a variety of reasons: it offers convenience, efficiency, and aesthetic and pedagogic improvements (Rafaeli and Tractinsky, 1989, 1991). Computerized testing has traditionally been a very centralized, closely guarded and tightly controlled enterprise. An artefactual expression of this centralization is evident in the reliance of most computerized adaptive testing systems on closed, mainly multiple-choice question formats. More recent developments in interface design allow a relaxation of some of this rigidity and an enrichment of test types.
The role of information technology in educational assessment has been growing rapidly (Beichner et al., 2000; Hamilton et al., 2000; Barak, 2003). Several well-known computer-based tests are now administered on the web, including the Graduate Record Exam (GRE), the Graduate Management Admissions Test (GMAT), and the Medical Licensing Examination (MLE). The high speed and large storage capacity of today's computers, coupled with their rapidly shrinking costs, make computerized testing a promising alternative to traditional paper-and-pencil measures. Web-based testing systems deliver the advantages of computer-based testing over the Internet. One of their major advantages is the possibility of conducting an examination in which time and place are not limited, while time and pace can still be controlled and measured (Rafaeli and Tractinsky, 1989, 1991; Rafaeli et al., 2003). Other advantages include easy accessibility of on-line knowledge databases and the inclusion of rich multimedia and interactive features such as color, sound, video, and simulations.
Modern on-line assessment systems offer considerable scope for innovations in testing and assessment as well as a significant improvement of the process for all its stakeholders, including teachers, students and administrators (McDonald, 2002). This paper presents a new approach for web-based testing. We use the term ‘on-line assessment’ to emphasize the shift from a culture of testing to a culture of assessment and from paper and pen to web-based administered examinations. This study describes a new on-line mode of learning and evaluation that includes question-posing integrated with multi-modes of assessment:
1. Self-assessment: Students conduct self-assessment by completing an independently run test followed by immediate feedback.
2. Peer assessment: Students are required to contribute items to a joint pool and are encouraged to read and review questions developed and contributed by others—their classmates.
3. Achievement assessment: Knowledge acquisition is assessed via an on-line final examination.
All modes of assessment were administered by QSIA—an on-line system for assessment and knowledge sharing (Rafaeli et al., 2003).
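The paper does not detail QSIA's internal design; purely as an illustrative sketch (all class and field names below are hypothetical, not QSIA's actual API), the shared item pool underlying the peer-assessment mode could be modeled as:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from statistics import mean


@dataclass
class QuestionItem:
    """A student-contributed question (knowledge item) in the shared pool."""
    author: str
    text: str
    answer: str
    peer_ratings: list[int] = field(default_factory=list)  # e.g. 1 (poor) to 5 (excellent)

    def peer_score(self) -> float | None:
        """Mean peer rating, or None if the item has not been rated yet."""
        return mean(self.peer_ratings) if self.peer_ratings else None


# Every student contributes items to the joint pool and may rate classmates' items.
pool: list[QuestionItem] = [
    QuestionItem("student_a", "What does CAT stand for?", "Computerized adaptive testing"),
]
pool[0].peer_ratings.extend([4, 5, 4])
print(round(pool[0].peer_score(), 2))  # prints 4.33
```

The point of the sketch is only that each item carries both content and an accumulating record of peer judgments, which is what makes the joint pool usable for test preparation and for grading contributions.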
Recently there has been increasing recognition of the importance of students' questions in the teaching and learning process (Dori and Herscovitz, 1999; Marbach-Ad and Sokolove, 2000). The realization that questions and information seeking are central to meaningful learning dates back to Socratic thought (Bohlin, 2000). Challenging students to assume an active role in posing questions can promote independence in learning (Bruner, 1990; Marbach-Ad and Sokolove, 2000).
Although the essence of thinking is asking questions, most students perceive learning as the study of facts (Shodell, 1995). This may stem from experience of questions as something teachers impose on students, using fact-demanding questions rather than thought-provoking queries. During their years of education, students become schooled at answering questions but remain novices at asking them (Dillon, 1990). In traditional teaching, questions are privately owned and displayed by the teachers. Dillon (1990) suggests that questions should come from both teachers and students. Similarly, studies of novel teaching approaches stress the importance of students' questions and suggest that a central role of education should be to develop in students an appreciation of question posing (Shodell, 1995; Dori and Herscovitz, 1999). In the study reported here, students were asked to develop questions relating to the course learning contents. Using QSIA as the web-based platform, they were asked to share these questions with their peers, use the questions as a means of preparing for the final test, and evaluate questions posed by their classmates.
Innovation in assessment practices has accelerated in recent years as well. Assessment systems that require students to use higher-order thinking skills, such as developing, analysing and solving problems instead of memorizing facts, are important for learning outcomes (Zohar and Dori, 2002). Two of these higher-order skills are reflection on one's own performance (self-assessment) and consideration of peers' accomplishments (peer assessment) (Birenbaum and Dochy, 1996; Sluijsmans et al., 2001). Both self- and peer-assessment seem to be underrepresented in contemporary higher education, despite their rapid adoption at all other levels of education (Williams, 1992). Larisey (1994) suggested that adult students should be given opportunities for self-directed learning and critical reflection in order to mirror the world of learning beyond formal education. Experiencing peer assessment seems to motivate deeper learning and produces better learning outcomes (Williams, 1992).
Peer assessment tasks include rating individual and group presentations, artwork, or posters (Zevenbergen, 2001); marking classmates' problem-solving performances; and rating classmates' contributions to various group assignments (Conway et al., 1993; Sluijsmans et al., 2001). Our study describes a new mode of peer assessment task: students were asked to review instructional questions developed by their classmates and to conduct peer assessment by rating these questions. Because it was conducted via a web-based on-line assessment system, this peer assessment task was available throughout the learning period.
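The paper does not specify how peer ratings were aggregated. One simple scheme, sketched here with hypothetical names purely for illustration, is to order items by mean rating while excluding items rated by too few peers:

```python
from __future__ import annotations


def rank_items(ratings_by_item: dict[str, list[int]],
               min_ratings: int = 3) -> list[tuple[str, float]]:
    """Order question items by mean peer rating, best first.

    Items with fewer than `min_ratings` ratings are excluded, so that a
    single enthusiastic (or hostile) rater cannot dominate the ranking.
    """
    scored = [(item, sum(rs) / len(rs))
              for item, rs in ratings_by_item.items()
              if len(rs) >= min_ratings]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)


ratings = {
    "q1": [5, 4, 5, 4],  # consistently well-rated question
    "q2": [3, 3, 2, 4],
    "q3": [5],           # only one rating: left out of the ranking
}
print(rank_items(ratings))  # [('q1', 4.5), ('q2', 3.0)]
```

A minimum-ratings threshold is one plausible safeguard for a pool that stays open for rating throughout the learning period, since early contributions will naturally accumulate more ratings than late ones.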
Research settings
Our study explored a novel educational methodology and technology implemented in a postgraduate E-business course. The students participating in the course carried out an on-line QPA administered by QSIA—an on-line system for assessment and knowledge sharing. This section describes the E-business course, the QSIA system, the on-line QPA, and the way students were graded on their assignment.
Research objective and methodology
This study is an investigation of a novel mode of on-line assessment and knowledge sharing. Our objective was to explore students' learning and knowledge sharing while engaged in an on-line question-posing and peer-assessment activity. The QSIA system was used as the platform for this study.

When we harness the capabilities of web-based testing mechanisms to go beyond the mere administration of on-line tests and include knowledge sharing via an on-line QPA, we encounter three interesting research questions.
Results
The results section consists of three parts. Each part relates to one research question and presents data addressing it. Each part is also associated with one or two of the explored domains: social, cognitive and affective. The first part reports the results of the students' performance on the on-line QPA. The second part reports the relationships between the students' on-line QPA grades and their final examination grades. Both parts explore the relationships between the social and cognitive domains.
Summary and discussion
Some instructors expect learning to remain unchanged from the forms it took when they were students. However, both technology and teaching paradigms are evolving, and so should the learning environment (Dillon, 1990). This paper describes a new approach to assessment that we believe holds promise for reshaping the way learning outcomes are measured in higher education. This approach includes question-posing as well as self-, peer- and achievement-assessment, all administered by QSIA—an on-line system for assessment and knowledge sharing.
Acknowledgements
The authors wish to thank the Caesarea Edmond Benjamin de Rothschild Foundation Institute for Interdisciplinary Applications of Computer Science, and IUCEL, the Israeli Inter-University Center for E-Learning, for supporting this research.
The authors are grateful to the Center for the Study of the Information Society at the University of Haifa, Israel. The authors also wish to thank Alumit Wolfowitz for her valuable contribution in supporting students with QSIA.
References (33)
- et al. Notification and awareness: synchronizing task-oriented collaborative activity. International Journal of Human–Computer Studies (2003)
- The impact of individual differences on the equivalence of computer-based and paper-and-pencil educational assessments. Computers & Education (2002)
- et al. Effect of content representation and readers' prior knowledge on the comprehension of hypertext. International Journal of Human–Computer Studies (2003)
- et al. A virtual library for building community and sharing knowledge. International Journal of Human–Computer Studies (1999)
- et al. Peer assessment in problem based learning. Studies in Educational Evaluation (2001)
- Findings from observational studies of collaborative work. International Journal of Man–Machine Studies (1991)
- et al. Problem solving and problem solving networks in chemistry. Journal of Chemical Education (1979)
- Barak, M., 2003. Exploiting an online testing system to go beyond the administration of tests. AACE E-Learn Conference,...
- Beichner, R., Wilkinson, J., Gastineau, L., Engelhardt, P., Gjertsen, M., Hazen, M., Ritchie, L., Risley, J., 2000....
- et al. Alternatives in Assessment of Achievements, Learning Processes and Prior Knowledge (1996)
- Schooling of desire. Journal of Education
- Acts of Meaning
- Peer assessment of an individual's contribution to a group project. Assessment and Evaluation in Higher Education
- Multiple assessment in an online graduate course: an effectiveness evaluation
- The Practice of Questioning