The gap between communicative ability measurements: General-purpose English speaking tests and linguistic laypersons’ judgments
Takanori Sato, Center for Language Education and Research, Sophia University, Tokyo, Japan
https://doi.org/10.58379/NGNH8496
Volume 7, Issue 1, 2018
Abstract: The assessment criteria for general-purpose speaking tests are normally derived from test developers’ intuition or from models of communicative competence. Consequently, the ideal of second language (L2) communication embodied in general-purpose speaking tests reflects language specialists’ perspectives. However, neglecting the views of non-language specialists (i.e., linguistic laypersons) on communication is problematic, since these laypersons are interlocutors in many real-world situations. This study (a) investigated whether L2 speakers’ results on general-purpose speaking tests align with linguistic laypersons’ judgments of L2 communicative ability and (b) explored the performance features that affect these judgments. Twenty-six postgraduate students from non-linguistic disciplines rated the communicative ability of 13 speakers on general-purpose speaking tests and provided verbal explanations of the performance features affecting their ratings. Their ratings were compared with the speakers’ test results, and the features that determined their ratings were examined. Although the laypersons’ ratings were not entirely inconsistent with the test results, some speakers’ test results did not align with their ratings. The linguistic laypersons’ judgments were affected not only by features that the general-proficiency tests assessed but by other factors as well. The findings of this study will deepen our understanding of real-world interlocutors’ views on communication and contribute to the development of authentic criteria for general-purpose speaking tests.
Keywords: general-purpose speaking test; linguistic laypersons; indigenous assessment criteria