Linguini: A benchmark for language-agnostic linguistic reasoning

Eduardo Sánchez, Belen Alastruey, Christophe Ropers, Pontus Stenetorp, Mikel Artetxe, Marta R. Costa-jussà
arXiv:2409.12126 [cs.CL] · arXiv - CS - Computation and Language · Published 18 September 2024

Abstract

We propose a new benchmark to measure a language model's linguistic reasoning skills without relying on pre-existing language-specific knowledge. The test covers 894 questions grouped into 160 problems across 75 (mostly) extremely low-resource languages, extracted from the International Linguistic Olympiad corpus. To attain high accuracy on this benchmark, models do not need prior knowledge of the tested language, as all the information needed to solve each linguistic puzzle is presented in the context. We find that, while all analyzed models score below 25% accuracy, there is a significant gap between open and closed models, with the best-performing proprietary model at 24.05% and the best-performing open model at 8.84%.
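The evaluation setup described above (questions grouped into problems, all necessary information supplied in the context, exact-match accuracy over all questions) can be sketched as follows. The data layout and the `solve` callable are assumptions for illustration only, not the benchmark's actual API.

```python
def accuracy(problems, solve):
    """Score a solver over problems, each a dict with a shared
    'context' and a list of (question, answer) pairs."""
    correct = total = 0
    for problem in problems:
        for question, answer in problem["questions"]:
            total += 1
            if solve(problem["context"], question) == answer:
                correct += 1
    return correct / total if total else 0.0

# Toy example: a solver that looks up a gloss given in the context.
toy = [{"context": {"mawar": "flower"},
        "questions": [("mawar", "flower"), ("batu", "stone")]}]
print(accuracy(toy, lambda ctx, q: ctx.get(q)))  # 0.5
```

In the actual benchmark the context is a set of example sentences or word forms in the target language rather than a lookup table, so the solver must infer the underlying linguistic rules in-context.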