AGREE: a new benchmark for the evaluation of distributional semantic models of ancient Greek

IF 0.7 · CAS Q3 (Literature) · JCR category: HUMANITIES, MULTIDISCIPLINARY
Silvia Stopponi, Saskia Peels-Matthey, Malvina Nissim
Journal: Digital Scholarship in the Humanities
DOI: 10.1093/llc/fqad087
Publication date: 2024-01-15
Publication type: Journal Article
Citations: 0

Abstract

AGREE: a new benchmark for the evaluation of distributional semantic models of ancient Greek
Recent years have seen the application of Natural Language Processing, in particular language models, to the study of the semantics of ancient Greek, but little work has been done to create gold data for the evaluation of such models. In this contribution we introduce AGREE, the first benchmark for the intrinsic evaluation of semantic models of ancient Greek created from expert judgements. In the absence of native speakers, eliciting expert judgements to create a gold standard is a way to leverage a competence that is the closest to that of native speakers. Moreover, this method allows data to be collected in a uniform way, with precise instructions given to participants. Human judgements about word relatedness were collected via two questionnaires: in the first, experts provided related lemmas for a set of proposed seeds, while in the second, they assigned relatedness judgements to pairs of lemmas. AGREE was built from a selection of the collected data.
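The kind of intrinsic evaluation a benchmark like AGREE enables is typically run by correlating a model's similarity scores for lemma pairs with the human relatedness judgements. The sketch below illustrates this workflow in Python; the lemma pairs, judgement scores, and toy embeddings are invented for illustration and are not taken from the AGREE data.

```python
# Sketch of intrinsic evaluation against a relatedness gold standard:
# compare model cosine similarities for lemma pairs with human
# relatedness judgements via Spearman rank correlation.
# All data below is hypothetical, for illustration only.
import numpy as np
from scipy.stats import spearmanr

# Hypothetical gold data: (lemma_1, lemma_2, mean expert judgement in [0, 1])
gold = [
    ("theos", "hieron", 0.85),
    ("polis", "agora", 0.70),
    ("theos", "agora", 0.20),
    ("polis", "hieron", 0.35),
]

# Hypothetical model: random toy embeddings, one vector per lemma
rng = np.random.default_rng(0)
lemmas = {l for a, b, _ in gold for l in (a, b)}
emb = {l: rng.normal(size=50) for l in lemmas}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

human = [score for _, _, score in gold]
model = [cosine(emb[a], emb[b]) for a, b, _ in gold]

# Spearman's rho measures how well the model's similarity ranking
# agrees with the experts' relatedness ranking.
rho, _p = spearmanr(human, model)
print(f"Spearman rho = {rho:.2f}")
```

A real evaluation would substitute embeddings trained on an ancient Greek corpus and the benchmark's actual judgement data; the correlation coefficient then serves as the model's intrinsic-evaluation score.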
Source journal metrics
CiteScore: 1.80
Self-citation rate: 25.00%
Articles per year: 78
Journal description: DSH, or Digital Scholarship in the Humanities, is an international, peer-reviewed journal which publishes original contributions on all aspects of digital scholarship in the Humanities, including, but not limited to, the field currently called the Digital Humanities. Long and short papers report on theoretical, methodological, experimental, and applied research, and include results of research projects, descriptions and evaluations of tools, techniques, and methodologies, and reports on work in progress. DSH also publishes reviews of books and resources. Digital Scholarship in the Humanities was previously known as Literary and Linguistic Computing.