PewePro 2

Adaptive Question Creation

A method for adaptive question creation and selection.

Overview
Inputs: questions created by students, varying in quality
Outputs: quality estimates for the questions and a selection of the best-rated ones

Addressed Problems

Questions are an important part of any learning material: they summarize the key facts and allow students to actively apply and assess their knowledge. Some approaches to automatic extraction of relevant questions exist, but the quality of the extracted questions is low. Creating questions manually is tedious and time-consuming for an expert, and it is often difficult for the expert to specify the difficulty of questions or to choose which questions are relevant.

Description

We proposed a method of collaborative question creation that consists of creating, adding and evaluating questions. We use peer review with explicit and implicit feedback from students, complemented by a final evaluation by an expert (a teacher). For this purpose we designed a student rating model and a question rating model. Both models are based on the evaluation of particular factors; each factor has its own weight, whose value was determined experimentally.

Evaluating the quality of questions is based on the explicit feedback of students, in conjunction with the actions students perform within the educational system, combined with the evaluation by an expert. We therefore consider the following critical factors (a sketch of combining them follows the list):

  1. creating a question,
  2. answering a question,
  3. explicitly rating a question,
  4. similar rating (giving a rating close to other students' ratings of the question).
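
As a minimal sketch, the four activity factors above might be combined into a single student score by a weighted sum. The weight values and the assumption that each factor score is normalized to [0, 1] are hypothetical; in the proposed models the actual weights were determined experimentally.

    # Hypothetical weights for the four factors above; the real values
    # were determined experimentally in the proposed model.
    STUDENT_WEIGHTS = {
        "created_questions": 0.35,
        "answered_questions": 0.25,
        "explicit_ratings": 0.20,
        "similar_ratings": 0.20,
    }

    def student_rating(factor_scores: dict) -> float:
        # Weighted sum of per-factor scores, each assumed normalized to [0, 1].
        return sum(weight * factor_scores.get(name, 0.0)
                   for name, weight in STUDENT_WEIGHTS.items())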

Similarly to rating the ability of students to create questions, we rate the quality of the questions themselves. The question quality rating derives from the explicit ratings given by students and from an implicit rating based on the students' actions in the educational system. The model of question quality rating is determined by four factors (see the sketch after the list):

  1. explicit rating – the arithmetic average of the explicit ratings of all students,
  2. count of correct answers – a normal distribution over the ratio of correct to wrong answers,
  3. count of "I don't understand" – the ratio of the number of "I don't understand" labels to the number of all answers to the particular question; this factor reduces the question rating,
  4. count of mistakes – the ratio of the number of error labels on the question to the number of all answers to the question; this factor also reduces the question rating.
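
The following sketch shows how these four factors could be combined into a single question rating. The weights, the scaling of explicit ratings to [0, 1], and the shape of the normal distribution (its center and spread) are assumptions for illustration only; the actual equations and weights were determined experimentally.

    import math
    from dataclasses import dataclass

    # Hypothetical weights; factors 3 and 4 enter with a negative sign
    # because they reduce the question rating.
    W_EXPLICIT, W_CORRECT, W_UNCLEAR, W_MISTAKES = 0.4, 0.3, 0.15, 0.15

    @dataclass
    class QuestionStats:
        explicit_ratings: list   # e.g., 1-5 stars given by students
        correct_answers: int
        wrong_answers: int
        dont_understand: int     # "I don't understand" labels
        error_labels: int        # reported mistakes in the question

    def question_rating(q: QuestionStats,
                        ideal_ratio: float = 0.5, sigma: float = 0.25) -> float:
        answers = q.correct_answers + q.wrong_answers
        if answers == 0 or not q.explicit_ratings:
            return 0.0
        # Factor 1: arithmetic average of explicit ratings, scaled to [0, 1].
        explicit = sum(q.explicit_ratings) / len(q.explicit_ratings) / 5.0
        # Factor 2: normal-distribution score of the correct-answer ratio;
        # the center (ideal_ratio) and spread (sigma) are assumed here.
        ratio = q.correct_answers / answers
        correct = math.exp(-((ratio - ideal_ratio) ** 2) / (2 * sigma ** 2))
        # Factors 3 and 4: penalty ratios over all answers to the question.
        unclear = q.dont_understand / answers
        mistakes = q.error_labels / answers
        return (W_EXPLICIT * explicit + W_CORRECT * correct
                - W_UNCLEAR * unclear - W_MISTAKES * mistakes)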

Each factor is recorded in the system and quantified using derived equations; the equations were determined experimentally.

The models combine the factors with appropriate weights, and the question rating model rates all questions. A threshold in the proposed models was set so that approximately the top 20% of the best-rated questions are selected.
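
A minimal sketch of this selection step, assuming a mapping from question identifiers to their computed ratings:

    def select_best(ratings: dict, fraction: float = 0.2) -> list:
        # Return roughly the best-rated 20% of questions.
        ranked = sorted(ratings, key=ratings.get, reverse=True)
        cutoff = max(1, round(len(ranked) * fraction))
        return ranked[:cutoff]

    # Example: one question selected, i.e., 20% of five.
    best = select_best({"q1": 0.8, "q2": 0.4, "q3": 0.9, "q4": 0.6, "q5": 0.7})
    # -> ["q3"]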

An important element of the proposed method is student motivation. For motivation we use a simple game principle: students earn points that they can see and compare with their peers. Their task is to find a tactic that brings them as many points as possible. In this way we motivate students both to pursue activities related to questions and to perform these activities to the best of their ability.

Publications

  1. Unčík, M., Bieliková, M.: Annotating Educational Content by Questions Created by Learners. In: Semantic Media Adaptation and Personalization: Proceedings of the 2010 5th International Workshop, Limassol, Cyprus. New York: IEEE, 2011, pp. 13-18.