Evaluating the relative effectiveness of online and face-to-face training for new writing raters
Ute Knoch, Language Testing Research Centre, University of Melbourne
Judith Fairbairn, British Council
Carol Myford, University of Illinois at Chicago
Annemiek Huisman, Language Testing Research Centre, University of Melbourne
https://doi.org/10.58379/ZVMM4117
Volume 7, Issue 1, 2018
Abstract: Rater training for large-scale writing tests is commonly conducted face-to-face, but bringing raters together for training is difficult and expensive. For this reason, more and more testing agencies are exploring technological advances with the aim of providing training online. A number of studies have examined whether online rater training is a feasible alternative to face-to-face training. This mixed-methods study compared two groups of new raters: one trained using an online training platform and the other trained using conventional face-to-face rater training procedures. Raters who passed accreditation were also compared on the reliability of their subsequent operational ratings. The findings show no significant differences in rating behaviour between the two groups on the writing test. The qualitative data also showed that, in general, raters enjoyed both modes of training and felt sufficiently trained, although some specific problems were encountered. Results for the operational ratings in the first five months after completing the training likewise showed no significant differences between the two training groups. The paper concludes with implications for training raters in online environments and sets out a possible programme for further research.
Keywords: rater training, online rater training, rater quality, Rasch measurement