How do raters understand rubrics for assessing L2 interactional engagement? A comparative study of CA- and non-CA-formulated performance descriptors
Erica Sandlund, Karlstad University, Sweden
Tim Greer, Kobe University, Japan
https://doi.org/10.58379/JCIW3943
Volume 9, Issue 1, 2020
Abstract: While paired student discussion tests in EFL contexts are often graded using rubrics with broad descriptors, an alternative approach constructs the rubric from extensive written descriptions of video-recorded exemplary cases at each performance level. With its long history of deeply descriptive observation of interaction, Conversation Analysis (CA) is one apt tool for constructing such exemplar-based rubrics; but to what extent are teacher-raters who are not CA specialists able to interpret a CA analysis in order to assess the test? This study explores this issue by comparing two paired EFL discussion tests that use exemplar-based rubrics, one written by a CA specialist and the other by EFL test constructors not specialized in CA. The complete dataset consists of test recordings (university-level Japanese learners of English and secondary-level Swedish learners of English) and recordings of teacher-raters’ interaction. Our analysis focuses on the ways experienced language educators perceive engagement while discussing their ratings of the video-recorded test talk in relation to the exemplars and descriptive rubrics. The study highlights differences in how teacher-raters display their understanding of the notion of engagement within the tests, and demonstrates how CA-formulated rubrics can facilitate a more emically grounded assessment.
Keywords: engagement; conversation analysis; paired discussion tests; interactional competence; English as a foreign language (EFL)