Comparing automated analysis and human analysis of hedging in academic texts
Rachael Ruegg, Victoria University of Wellington, New Zealand
Pansa Prommas, Prince of Songkla University, Thailand
https://doi.org/10.58379/SLVL8621
Volume 13, Issue 1, 2024
Abstract: Hedging is a metadiscourse device employed by academic writers to manage knowledge claims and establish writer-reader interaction in written discourse. Research writing involves a balance of fact and a writer’s personal evaluation and interpretation. This study compared automated analysis of hedging through Authorial Voice Analyzer (AVA) with a more traditional human analysis of hedging, to increase understanding of the relative strengths and weaknesses of AVA versus human analysis of hedging in academic texts. An explanatory sequential mixed-methods design was used; quantitative analysis (Pearson correlation) was followed by qualitative analysis to understand the reasons for quantitative differences. AVA found a larger number of hedging items than the human analysis did in the same academic writing corpus. However, qualitative analysis suggests that AVA only considers frequency and does not take account of function. Since many hedging devices are multifunctional, AVA appears to overestimate the frequency of hedging by counting hedge markers as hedging even when they are used with propositional functions. Overall, automated analytic tools like AVA are useful for metadiscourse studies. However, unless used in combination with human analysis, they are unlikely to handle multifunctional markers effectively. The findings of this study offer validation of AVA and help raise awareness of how the tool can be used to evaluate hedging in academic texts.
Keywords: Metadiscourse, hedging, automated analysis, corpus analysis, academic writing