Intelligibility and comprehensibility across Aptis Speaking tasks
Daniel R. Isbell, University of Hawai’i at Mānoa, USA
Dustin Crowther, University of Hawai’i at Mānoa, USA
Jieun Kim, University of Hawai’i at Mānoa, USA / Soongsil University, Republic of Korea
Yoonseo Kim, University of Hawai’i at Mānoa, USA
https://doi.org/10.58379/YWIO8217
Volume 14, Issue 1, 2025
Abstract: Current Aptis Speaking rubrics reflect the Common European Framework of Reference (CEFR) Phonological Control descriptors, which were revised in 2018 to better emphasize the intelligibility of second language (L2) speech. The present study investigates the validity of Aptis Speaking scores and related CEFR-based interpretations by comparing official Aptis Speaking scores to laypersons’ assessments of intelligibility, a measure of accuracy of understanding based on orthographic transcription (0–100%), and comprehensibility, a measure of ease of understanding based on ratings on a scale of 1 to 9. Additionally, as Aptis Speaking tasks feature several target performance levels, we considered relationships between task complexity and speakers’ intelligibility and comprehensibility. Archived speaking performances from 50 Aptis examinees were assessed by layperson listeners for intelligibility (n = 562) and comprehensibility (n = 567). Comprehensibility was generally a stronger predictor of Aptis Speaking scores than intelligibility for both overall and task-level scores. Segmented regressions revealed specific breakpoints at which predictive power was greatest for each dimension (intelligibility up to 70% in transcription accuracy; comprehensibility ≥ 2). In sum, results from the current study generally supported the current Aptis Speaking rubrics, though the description of comprehensibility at the upper levels may benefit from more nuance.
Keywords: Aptis, CEFR, comprehensibility, intelligibility, layperson listener
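The segmented (breakpoint) regression mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' analysis: the `segmented_fit` helper, the candidate-breakpoint grid, and the simulated slope change at 70% transcription accuracy are all assumptions chosen for the example.

```python
import numpy as np

def segmented_fit(x, y, candidates):
    """Grid-search a single breakpoint for a two-segment linear regression.

    For each candidate breakpoint bp, fit the continuous piecewise-linear
    model y = b0 + b1*x + b2*max(0, x - bp) by ordinary least squares and
    keep the breakpoint with the smallest residual sum of squares.
    """
    best = None
    for bp in candidates:
        # Design matrix: intercept, x, and the hinge term max(0, x - bp)
        X = np.column_stack([np.ones_like(x), x, np.maximum(0.0, x - bp)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coef
        rss = float(resid @ resid)
        if best is None or rss < best[0]:
            best = (rss, bp, coef)
    return best  # (rss, breakpoint, coefficients)

# Synthetic example: predictive slope changes at x = 70
# (loosely modeled on a 0-100% transcription-accuracy predictor)
rng = np.random.default_rng(0)
x = rng.uniform(0, 100, 200)
y = 0.05 * x + 2.0 * np.maximum(0.0, x - 70) / 30 + rng.normal(0, 0.1, x.size)
rss, bp, coef = segmented_fit(x, y, candidates=np.arange(40, 95, 5))
```

In practice, dedicated tools (e.g. segmented-regression packages in R) estimate the breakpoint and its confidence interval directly; the grid search above only conveys the core idea of locating where the slope of the score-predictor relationship changes.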