Ian Jones & Lara Alcock
Loughborough University
Adaptive Comparative Judgement (ACJ) is a method for assessing evidence of student learning that offers an alternative to marking (Pollitt, 2012). It requires no mark schemes, no item scoring and no aggregation of scores into a final grade. Instead, experts are presented with pairs of student work and asked to decide, based on the evidence before them, who has demonstrated the greater mathematical proficiency. The outcomes of many such pairings are then used to construct a scaled rank order of students from least to most proficient.
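The final step, turning binary judgement outcomes into a scaled rank order, is usually done by fitting a statistical model to the wins and losses. The sketch below is a minimal illustration using a Bradley-Terry model, a close relative of the Rasch pairwise model typically used in ACJ; the function name, data format and toy data are our own assumptions for the example, not part of any published ACJ system.

```python
from collections import defaultdict
from math import log

def fit_bradley_terry(judgements, n_iter=100):
    """Estimate script quality from pairwise outcomes (minimal sketch).

    judgements: list of (winner, loser) pairs of script identifiers.
    Returns a dict mapping each script to a log-scale quality estimate;
    sorting by these values gives the scaled rank order.
    """
    wins = defaultdict(int)    # total judgements won by each script
    pairs = defaultdict(int)   # judgements made on each unordered pair
    scripts = set()
    for winner, loser in judgements:
        wins[winner] += 1
        pairs[frozenset((winner, loser))] += 1
        scripts.update((winner, loser))

    # Hunter's MM iteration for the Bradley-Terry maximum-likelihood fit;
    # assumes every script wins and loses at least once, so estimates stay finite.
    strength = {s: 1.0 for s in scripts}
    for _ in range(n_iter):
        updated = {}
        for s in scripts:
            denom = sum(
                pairs[frozenset((s, t))] / (strength[s] + strength[t])
                for t in scripts
                if t != s and frozenset((s, t)) in pairs
            )
            updated[s] = wins[s] / denom
        mean = sum(updated.values()) / len(updated)  # pin down the scale
        strength = {s: v / mean for s, v in updated.items()}

    return {s: log(v) for s, v in strength.items()}

# Toy example: every script wins and loses at least once.
outcomes = [("A", "B"), ("A", "C"), ("B", "C"),
            ("B", "A"), ("C", "B"), ("A", "C")]
for script, estimate in sorted(fit_bradley_terry(outcomes).items(),
                               key=lambda kv: -kv[1]):
    print(script, round(estimate, 2))
```

Sorting scripts by the estimated values gives the scaled rank order.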
ACJ is based on a well-established psychophysical principle, called the Law of Comparative Judgement (Thurstone, 1927), which states that people are far more reliable when comparing one thing with another than when making absolute judgements. The reliability of comparative judgements means “subjective” expertise can be put at the heart of assessment while achieving the sound psychometrics normally associated with “objective” mark schemes.
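To make the principle concrete: in Thurstone's Case V formulation, the probability that script \(A\) is judged better than script \(B\) depends only on the difference between their underlying quality parameters,

\[ P(A \succ B) = \Phi(v_A - v_B), \]

where \(\Phi\) is the standard normal cumulative distribution function and the parameters \(v_A\), \(v_B\) are expressed on a conventionally normalised scale.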
Until recently, comparative judgement was not viable for educational assessment because it was too tedious and inefficient: producing a complete rank order of \(n\) scripts requires \(\frac{n^2-n}{2}\) judgements. However, the development of an adaptive algorithm that intelligently pairs scripts as judgements accumulate has reduced the number required to around \(6n\).
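For a sense of the scale involved, the short sketch below (our own illustration; the \(6n\) figure is the approximate rule of thumb quoted above, not an exact count) compares the two quantities for a few cohort sizes.

```python
def judgements_required(n):
    """Judgements needed to rank n scripts: exhaustive pairing vs. adaptive."""
    exhaustive = n * (n - 1) // 2  # every distinct pair judged once
    adaptive = 6 * n               # approximate requirement quoted for ACJ
    return exhaustive, adaptive

for n in (20, 100, 500):
    full, acj = judgements_required(n)
    print(f"n={n:>3}: exhaustive={full:>6}, adaptive≈{acj}")
```

For a cohort of 100 scripts, for example, exhaustive pairing would need 4,950 judgements where adaptive pairing needs only around 600.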