Linguini: A benchmark for language-agnostic linguistic reasoning
Eduardo Sánchez, Belen Alastruey, Christophe Ropers, Pontus Stenetorp, Mikel Artetxe, Marta R. Costa-jussà
arXiv - CS - Computation and Language, 2024-09-18. DOI: arxiv-2409.12126
Abstract
We propose a new benchmark to measure a language model's linguistic reasoning skills without relying on pre-existing language-specific knowledge. The test covers 894 questions grouped into 160 problems across 75 (mostly) extremely low-resource languages, extracted from the International Linguistic Olympiad corpus. To attain high accuracy on this benchmark, models do not need prior knowledge of the tested language, as all the information needed to solve each linguistic puzzle is presented in the context. We find that, while all analyzed models score below 25% accuracy, there is a significant gap between open and closed models, with the best-performing proprietary model at 24.05% and the best-performing open model at 8.84%.
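To make the accuracy figures concrete, the following is a minimal sketch of question-level exact-match scoring over a benchmark structured like this one (questions grouped into problems). The data format and matching rule here are hypothetical illustrations, not the paper's actual schema or evaluation protocol.

```python
def score(problems):
    """Return overall question-level accuracy across all problems.

    Each problem is a dict with a "questions" list of
    (gold_answer, model_answer) string pairs; an answer counts as
    correct only on a case-insensitive exact match.
    """
    correct = total = 0
    for problem in problems:
        for gold, predicted in problem["questions"]:
            total += 1
            if predicted.strip().lower() == gold.strip().lower():
                correct += 1
    return correct / total if total else 0.0

# Toy example: two problems, three questions in total, two answered correctly.
toy = [
    {"questions": [("aka", "aka"), ("molu", "mol")]},
    {"questions": [("siba", "siba")]},
]
print(f"{score(toy):.2%}")  # → 66.67%
```

Because problems group multiple questions, one could equally report per-problem accuracy (all questions in a problem correct) instead; the abstract's figures are question-level percentages.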