Elsevier

Computers & Education

Volume 51, Issue 1, August 2008, Pages 86-96

Task results processing for the needs of task-oriented design environments

https://doi.org/10.1016/j.compedu.2007.04.009

Abstract

This paper presents learners’ task results gathered by means of an example task-oriented environment for knowledge testing and processed in Excel. The processing is domain- and task-independent and includes automatic calculation of several important task and session parameters, drawing specific graphics, generating tables, and analyzing correlation coefficients. The intention is to design and implement a specialized tool, called a postprocessor, to support a common task base, the learner’s model, and the decision making of the author and teacher by means of an environment for individually planned teaching courseware.

Introduction

The learner’s tasks in task-oriented design environments (TODEs) (Andreeva, 2006, Dimitrova and Radojska, 2005, Georgiev, 1999, Stefanova, 2002, Zheliazkova and Atanasova, 2004, Zheliazkova et al., 2006) can vary greatly, from answering simple test questions through learning complex lesson concepts to performing active exercise tasks, such as those for simulation-based training in dynamic systems. In the last case, the task results are processed after a given task is performed by a given author (A) or learner (L). According to Georgiev (1999), the processing includes different standard procedures for representation and processing of the simulation results, such as: graphical representation of a functional dependency, tabular representation of such a dependency, graphical representation of a family of dependencies in a common co-ordinate system, evaluation of the type and duration of a transient process, evaluation of the model adequacy, and so on. A graphical interpretation is also made for precise comparison of the current learner’s task results with the teacher’s over time. A specialized module called “Postprocessor” is embedded in the corresponding task-oriented environment for design of virtual labs (TOEDVL). Its task base (TB) includes different types of training tasks (e.g., measuring, monitoring, control, and diagnostic) for quantitative modelling of continuous, discrete, and discrete-event systems. A sample TB concerning the well-known system of a DC motor, a DC generator, and a mechanical connection between them can be found in Georgiev, Zheliazkova, and Andreeva (2004). The task parameters, namely: knowledge volume, degree of system prompt, degree of difficulty, and time planned for performing the task, are automatically computed by the environment.
All TODEs designed and implemented so far by Zheliazkova’s group use domain-independent ontologies and script languages for construction of different cognitive types of knowledge units, such as test questions (Andreeva, 2006), animated concepts (Stefanova, 2002), structural schemes (Zheliazkova, Andreeva, & Kolev, 2006), and algorithms (Zheliazkova & Atanasova, 2004), and for extracting the teacher’s preferences for a given learner’s lesson/test/exercise session. The environments have the same task parameters as the TOEDVL, some of which have constant values calculated in accordance with different formulae, while the others have statistical values with initial defaults. For example, the knowledge volume is calculated from the corresponding task’s subprogram tree as the sum of its nodes and arcs. The degree of system prompt, a real number in the range (0, 1), shows what part of the author’s knowledge volume is presented to the L when he/she is performing the task. Together with the environment and task identifiers, formulation, and keywords, these parameters are stored in a common TB. After a given learner’s task/session finishes, the relevant parameters, such as knowledge volume, coefficient of proximity, duration of the task performance, and speed of knowledge acquisition, are used to update the learner’s model (Zheliazkova & Kolev, 2004). After a given group’s session finishes, the average degree of difficulty and the time for performing a task/session are updated in the TB as real-time experimental data for the needs of an intelligent and adaptive environment for individually planned teaching (IAEIPT) courseware (Brusilovsky, 1999, Zheliazkova, 2002). In this case, the results of processing are used to support a three-level learner model and to correct the course material, strategy, plan, and goal by the A and T. More precisely, this processing is aimed to:

  • improve the formulation of the very difficult/easy tasks or remove them from the TB;

  • refine the time for performing the task/session, as planned and monitored by the T;

  • assess the representation of the sample size and reliability (validation) of the session (Jelev and Minkova, 2004, Jelev and Minkovska, 2004);

  • assess the personal learner’s characteristics such as coefficient of knowledge proximity, speed of knowledge acquisition, coefficient of knowledge kept, and so on (Stanchev & Mileva, 1997);

  • compare the difficulty of two different sessions with the same tasks;

  • compare the effectiveness of traditional and computer-based teaching (The, 1994, Zheliazkova and Kolev, 2003).
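The two basic task parameters described above — the knowledge volume as the sum of the nodes and arcs of the task’s subprogram tree, and the degree of system prompt as a fraction in (0, 1) — can be sketched as follows. This is a minimal illustration, assuming the tree is stored as an adjacency list; the function names and data layout are illustrative, not taken from the environments themselves.

```python
def knowledge_volume(tree):
    """Knowledge volume of a task = number of nodes + number of arcs
    of its subprogram tree. `tree` maps each node to its child nodes."""
    nodes = set(tree)
    for children in tree.values():
        nodes.update(children)
    arcs = sum(len(children) for children in tree.values())
    return len(nodes) + arcs

def degree_of_system_prompt(prompted_volume, author_volume):
    """Fraction of the author's knowledge volume shown to the learner,
    a real number in the range (0, 1)."""
    return prompted_volume / author_volume

# A tree with 4 nodes and 3 arcs has knowledge volume 7.
tree = {"root": ["a", "b"], "a": ["c"], "b": [], "c": []}
volume = knowledge_volume(tree)  # 4 nodes + 3 arcs = 7
```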

The accuracy of the author’s and teacher’s decisions made by means of the TODEs increases due to the use of more numerous and more precise parameters, rather than just the scores and duration given by a human teacher (Kurata and Sato, 1984, Marinov, 1995) or the questionnaire answers given by learners (Carashtranova et al., 2004, Zheliazkova and Kolev, 2003). At the same time, the processing is simpler than, for example, the well-known regression (Georgiev, 1999), cluster (Zheliazkova & Kolev, 2003), and latent-structural analysis (Bizhkov, 1996), or optimization methods (Saveliev, Novikov, & Liubanov, 1986).

The paper is an attempt to summarize our experience in the methodology of task processing before starting to design and implement a special-purpose tool for the needs of the IAEIPT courseware. The rest of the paper is organized as follows. In Section 2, the methodology and TB are described. Test quality evaluation is discussed in the next section (Section 3). Then the correlation analysis between session parameters is commented on. In Section 5, the learner’s analysis is given, and, finally, a comparison of two sessions is described.


The used methodology and TB

The simplest TODE, called an integrated authoring environment for knowledge testing (IAEKT) (Zheliazkova & Andreeva, 2004), was chosen to test the proposed task-independent methodology. It covers five different subjects, 212 questions, and three groups with 469 students (Table 1). Each test session represents a separate experiment carried out in a similar way by means of the IAEKT using its WORD or WEB technology (Zheliazkova and Kolev, 2005, Zheliazkova and Kolev, 2006, Zheliazkova et al., 2006).

Test quality evaluation

The learner’s answers in T2 were partially correct, i.e., the points (knowledge volume) Pij received for the ith question by the jth student could be between 0 and Pmaxi, where Pmaxi is the maximal score for the answer to that question. In terms of graph theory, Pij was computed by the IAEKT as Pij = Pmaxi − A − B, where A is the number of nodes missing from the author’s graph (Te) but present in the learner’s one (Ts), and B is the number of nodes present in Te but missing from Ts. The calculated value V
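The partial-credit rule above can be sketched as follows, assuming the author’s and learner’s answer graphs are represented simply as sets of nodes. The function name and the flooring of negative results at zero are assumptions for illustration.

```python
def partial_score(p_max, author_nodes, learner_nodes):
    """Partial-credit score for one question: P = Pmax - A - B, where
    A = nodes in the learner's graph but not the author's, and
    B = nodes in the author's graph but not the learner's.
    The result is floored at 0 (an assumption; negative scores
    are not meaningful here)."""
    a = len(learner_nodes - author_nodes)  # extra nodes the learner added
    b = len(author_nodes - learner_nodes)  # author's nodes the learner missed
    return max(0, p_max - a - b)

# One extra node and one missing node each cost a point: 5 - 1 - 1 = 3.
score = partial_score(5, {"n1", "n2", "n3"}, {"n2", "n3", "n4"})
```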

Correlation analysis

The linear correlation coefficient can serve as a qualitative indicator of the relationship between session parameters. The value of this coefficient r, a real number in the range (−1, 1), shows how strong the relationship between the two test parameters is. For example, if r is in the range 0.0 ÷ 0.3, the relationship is low; 0.3 ÷ 0.5 – moderate; 0.5 ÷ 0.7 – significant; 0.7 ÷ 0.9 – high; 0.9 ÷ 1.0 – very high. The corresponding correlation coefficients received from the experimental data are
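A minimal sketch of this analysis: the standard Pearson correlation coefficient computed over two parameter series, with the strength bands from the text. The function names are illustrative; the paper performs this processing in Excel.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def strength(r):
    """Qualitative label for |r| using the bands given in the text."""
    r = abs(r)
    if r < 0.3:
        return "low"
    if r < 0.5:
        return "moderate"
    if r < 0.7:
        return "significant"
    if r < 0.9:
        return "high"
    return "very high"

# Perfectly linearly related series yield r = 1.0 ("very high").
r = pearson_r([1, 2, 3], [2, 4, 6])
```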

Learner’s analysis

To analyze the proximity coefficient for each student, we first found the ratio of received scores to maximal scores for each question included in T2. As already noted, this coefficient for the questions in T1 is not of interest because it is 1 or 0, meaning a correct or incorrect answer, respectively. Table 5 contains the calculated coefficients for each T2 question and each student. The column “Average” shows the proximity of the actual student’s T2 scores to the maximal scores. For all students, this
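The per-learner calculation described above can be sketched as follows: a per-question ratio of received to maximal scores, then a per-student average corresponding to the “Average” column of Table 5. The dictionary layout of the data is an assumption for illustration.

```python
def proximity_coefficients(scores, max_scores):
    """Per-question proximity: received points / maximal points.
    scores: {question: received points}; max_scores: {question: max points}."""
    return {q: scores[q] / max_scores[q] for q in max_scores}

def average_proximity(scores, max_scores):
    """Average proximity of a student's scores to the maximal scores
    (the "Average" column of the per-student table)."""
    coeffs = proximity_coefficients(scores, max_scores)
    return sum(coeffs.values()) / len(coeffs)

# One half-credit answer and one full-credit answer average to 0.75.
avg = average_proximity({"q1": 2, "q2": 3}, {"q1": 4, "q2": 3})
```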

Comparison of two sessions

A comparison of two sessions for one and the same lesson/test/exercise (S1 and S2 for T2 in Table 8) can be useful for improving the question formulations and updating the values of the test questions’ difficulty. A significant difference between these values for three questions (dark grey) could be explained by an unclear formulation of the question, possible alternative keywords, and other reasons, so these questions have to be revised or deleted from the TB. For 10 questions (light grey), this
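The flagging logic described above can be sketched as follows: questions whose difficulty values differ greatly between the two sessions are marked for revision or deletion, and those with a moderate difference for review. The threshold values (0.3 and 0.15) are illustrative assumptions, not taken from the paper; Table 8 encodes the same distinction with dark-grey and light-grey shading.

```python
def flag_questions(diff_s1, diff_s2, revise_at=0.3, review_at=0.15):
    """Compare per-question difficulty values from two sessions.
    diff_s1, diff_s2: {question: difficulty} for sessions S1 and S2.
    Thresholds are illustrative assumptions."""
    flags = {}
    for q in diff_s1:
        delta = abs(diff_s1[q] - diff_s2[q])
        if delta > revise_at:
            flags[q] = "revise or delete"   # analogue of the dark-grey rows
        elif delta > review_at:
            flags[q] = "review"             # analogue of the light-grey rows
        else:
            flags[q] = "keep"
    return flags

# q1 differs by 0.5 between sessions and gets flagged; q2 differs by 0.05.
flags = flag_questions({"q1": 0.9, "q2": 0.5}, {"q1": 0.4, "q2": 0.45})
```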

Conclusions and intentions

The present paper is an attempt to summarize the accumulated experience of Zheliazkova’s group in a methodology for task results processing in task-oriented design environments teaching different cognitive types of knowledge units – from simple questions through more complex tasks for construction of animated concepts, structural schemes, and algorithms, to even whole systems. The need for such a methodology arises in connection with the design and implementation of an adaptive and intelligent environment

References

  • Andreeva, M. (2006). Models and tools for development of an integrated authoring knowledge testing environment. Thesis...
  • Angel 7.1, ANGEL Learning 2006. Available:...
  • Bespalko, V. P. (1976). Programming the teaching (theoretical background). Moskow: High School. (in...
  • Bizhkov, G. (1996). Theory and methodology of didactic tests. Sofia: Prosveta. (in...
  • Blackboard 6, Blackboard Inc. Available:...
  • Brusilovsky, P. (1999). Adaptive and intelligent technologies for web-based education. Kunstische...
  • Carashtranova, E. L., Dureva, D. I., & Tuparov, G. T. (2004). Assessment of the students’ input level knowledge and...
  • Desire2Learn, Desire2Learn Inc. Available:...
  • Dimitrova, S., & Radojska, P. (2005). Web-Based System for Generation and Performing Electric Scheme Tasks. In...
  • Georgiev, G. T. (1999). Models and tools for development of intelligent environments for training in dynamic systems....
  • Georgiev, G. T., Zheliazkova, I. I., & Andreeva, M. H. (2004). A distributed task-oriented environment for design of...
  • Jelev, G., & Minkova, Y. (2004). Determination of representative sample size and knowledge assimilation tests results...
  • Jelev, G., & Minkovska, D. (2004). Approaches for definition the validity of the results of the test for knowledge...
  • Kurata, M., et al. (1984). An educational and psychological test item data base system. Journal of Information Processing.