Network Analysis for the investigation of rater effects in language assessment: A comparison of ChatGPT vs human raters

Iasonas Lamprianou
DOI: 10.1016/j.rmal.2025.100205
Journal: Research Methods in Applied Linguistics, Volume 4, Issue 2, Article 100205
Published: 2025-04-11 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2772766125000266
Cited by: 0

Abstract

A recent study by Yamashita (2024) showcases the usefulness of the Many-Facet Rasch Model (MFRM) for the analysis of rater effects within the context of Automated Essay Scoring (AES). Building upon Yamashita's work, we break new ground by using Network Analysis (NA) to interrogate the same dataset, comparing ChatGPT and human raters on the evaluation of 136 essays. We replicate the analysis of the original study and show near-perfect agreement between the results of NA and the MFRM. We extend the original study by providing strong evidence of a halo effect in the data (including the ChatGPT ratings) and propose two new statistics to assess the consistency of raters. We also present simulation studies showing that the NA estimation algorithm is robust, even with small and sparse datasets. Finally, we provide practical guidelines for researchers seeking to use NA with their own datasets. We argue that NA can complement established methodologies, such as the MFRM, but can also be used independently, leveraging its strong visual representations. Relevant algorithms and R code are provided in the Online Appendix to support researchers and practitioners in replicating our findings.
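For readers unfamiliar with the basic idea behind a rater network, each rater is represented as a node, and the edge between two raters is weighted by how similarly they score the same essays. The following minimal Python sketch illustrates that construction with plain Pearson correlations as edge weights. It is not the authors' algorithm (their R code is in the Online Appendix); all rater names and scores below are invented for illustration only.

```python
# Hypothetical sketch: building a rater similarity network from essay scores.
# Nodes are raters; each edge weight is the Pearson correlation between the
# two raters' scores on the same set of essays. Invented toy data throughout.
import itertools
import math

# Toy data: each rater's scores for the same five essays.
ratings = {
    "human_1": [3, 4, 2, 5, 4],
    "human_2": [3, 5, 2, 4, 4],
    "chatgpt": [4, 4, 3, 5, 5],
}

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Edges of the network: one weighted link per pair of raters.
edges = {
    (r1, r2): round(pearson(ratings[r1], ratings[r2]), 3)
    for r1, r2 in itertools.combinations(ratings, 2)
}

for pair, weight in edges.items():
    print(pair, weight)
```

A real analysis would of course use a principled similarity measure and an estimation algorithm such as the one the paper describes; the point here is only the node-and-weighted-edge representation that makes NA's visual summaries possible.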