A Comparative Study of Methods for a Priori Prediction of MCQ Difficulty

Tracking #: 2225-3438

This paper is currently under review
Authors: 
Ghader Kurdi
Jared Leo
Nicolas Matentzoglu
Bijan Parsia
Uli Sattler
Sophie Forge
Gina Donato
Will Dowling

Responsible editor: 
Lora Aroyo

Submission type: 
Full Paper
Abstract: 
Successful exams require a balance of easy, medium, and difficult questions. Question difficulty is generally either estimated by an expert or determined after an exam is taken. The latter provides no utility for the generation of new questions, and the former is expensive in both time and money. Additionally, it is not known whether expert prediction is a good proxy for question difficulty. In this paper, we analyse two ontology-based measures for difficulty prediction and compare each measure, along with prediction by 15 experts, against the exam performance of 12 residents over a corpus of 231 medical case-based questions. We find one ontology-based measure (relation strength indicativeness) to be of comparable performance (accuracy = 47%) to expert prediction (average accuracy = 49%).