Implications of training university teachers in developing local writing rating scales
Olga Kvasova, Taras Shevchenko National University of Kyiv, Ukraine
Lyudmyla Hnapovska, Sumy State University, Ukraine
Vira Kalinichenko, Vasyl' Stus Donetsk National University, Ukraine
Iuliia Budas, Vinnytsia Mykhailo Kotsiubynskyi State Pedagogical University, Ukraine
https://doi.org/10.58379/XVDF9070
Volume 11, Issue 1, 2022
Abstract: Language assessment literacy is currently in search of new, modern conceptualisations in which contextual factors have a growing significance and impact (Tsagari, 2020). This article presents an initiative to promote writing assessment literacy in a culture-specific educational context. Assessment of writing is an under-researched area in Ukrainian higher education, where teachers have to act as raters and as rating scale developers without being properly trained in language assessment. The gaps in writing assessment literacy prompted research into the strengths and weaknesses of using a local rating scale developed by university teachers. The research was conducted within an Erasmus+ staff mobility project in 2016–2019 and followed up by dissemination events held at several universities in Ukraine. The current study explores the impact of training in writing assessment on the processes and outcomes of university teachers’ development and use of analytic rating scales. The paper analyses how three teams of teachers from different universities coped with the task, and whether the training they underwent enabled them to design well-performing rating scales. The nine participants in the study developed three local, context-specific analytic rating scales following the intuitive method of scale design, detailed in the guidelines prepared by the trainer. Since all three teams worked in the same context (ESP) and targeted the same CEFR level (B1→B2), we were able to compare the three local rating scales. The study provides evidence of a positive impact of the training on teachers’ writing assessment literacy.
Keywords: writing assessment literacy; local rating scale design; teacher rater training