Evaluating large language models for criterion-based grading from agreement to consistency.

IF 3.6 · CAS Tier 1 (Psychology) · JCR Q1, EDUCATION & EDUCATIONAL RESEARCH
Da-Wei Zhang, Melissa Boey, Yan Yu Tan, Alexis Hoh Sheng Jia
npj Science of Learning, vol. 9, no. 1, p. 79. Published 2024-12-30.
DOI: 10.1038/s41539-024-00291-1
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11683144/pdf/
Citations: 0

Abstract


This study evaluates the ability of large language models (LLMs) to deliver criterion-based grading and examines the impact of prompt engineering with detailed criteria on grading. Using well-established human benchmarks and quantitative analyses, we found that even free LLMs achieve criterion-based grading with a detailed understanding of the criteria, underscoring the importance of domain-specific understanding over model complexity. These findings highlight the potential of LLMs to deliver scalable educational feedback.
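To make the abstract's notion of "prompt engineering with detailed criteria" concrete, here is a minimal sketch of how a criterion-based grading prompt might be assembled. The rubric text, weights, and function name below are hypothetical illustrations, not taken from the study:

```python
# Hypothetical sketch of criterion-based grading prompt construction.
# The rubric criteria and the 1-5 scale are illustrative assumptions,
# not the rubric used in the paper.

RUBRIC = {
    "Thesis clarity": "The essay states a clear, arguable thesis.",
    "Use of evidence": "Claims are supported with specific, relevant evidence.",
    "Organization": "Paragraphs follow a logical order with clear transitions.",
}

def build_grading_prompt(essay: str, rubric: dict, scale: int = 5) -> str:
    """Assemble a prompt that spells out each criterion in detail and
    asks the model for one score per criterion on a fixed scale."""
    lines = [f"Grade the essay below on a 1-{scale} scale for each criterion."]
    for name, description in rubric.items():
        lines.append(f"- {name}: {description}")
    lines.append("Return one line per criterion as '<criterion>: <score>'.")
    lines.append("\nEssay:\n" + essay)
    return "\n".join(lines)

prompt = build_grading_prompt("Sample essay text...", RUBRIC)
print(prompt)
```

The resulting string would be sent to an LLM as the grading instruction; the study's finding suggests that spelling out each criterion in this level of detail matters more than which model receives the prompt.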

Source journal: npj Science of Learning · CiteScore 5.40 · Self-citation rate 7.10% · Articles per year: 29