{"title":"On the performativity of SDG classifications in large bibliometric databases","authors":"Matteo Ottaviani, Stephan Stahlschmidt","doi":"arxiv-2405.03007","DOIUrl":null,"url":null,"abstract":"Large bibliometric databases, such as Web of Science, Scopus, and OpenAlex,\nfacilitate bibliometric analyses, but are performative, affecting the\nvisibility of scientific outputs and the impact measurement of participating\nentities. Recently, these databases have taken up the UN's Sustainable\nDevelopment Goals (SDGs) in their respective classifications, which have been\ncriticised for their diverging nature. This work proposes using the feature of\nlarge language models (LLMs) to learn about the \"data bias\" injected by diverse\nSDG classifications into bibliometric data by exploring five SDGs. We build a\nLLM that is fine-tuned in parallel by the diverse SDG classifications inscribed\ninto the databases' SDG classifications. Our results show high sensitivity in\nmodel architecture, classified publications, fine-tuning process, and natural\nlanguage generation. The wide arbitrariness at different levels raises concerns\nabout using LLM in research practice.","PeriodicalId":501285,"journal":{"name":"arXiv - CS - Digital Libraries","volume":"18 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Digital Libraries","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.03007","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Large bibliometric databases, such as Web of Science, Scopus, and OpenAlex, facilitate bibliometric analyses, but are performative: they affect the visibility of scientific outputs and the impact measurement of participating entities. Recently, these databases have taken up the UN's Sustainable Development Goals (SDGs) in their respective classifications, which have been criticised for their diverging nature. This work proposes using large language models (LLMs) to learn about the "data bias" injected by diverse SDG classifications into bibliometric data, exploring five SDGs. We build an LLM that is fine-tuned in parallel on the diverse SDG classifications inscribed into the databases. Our results show high sensitivity to model architecture, the classified publications, the fine-tuning process, and natural language generation. This wide arbitrariness at different levels raises concerns about using LLMs in research practice.
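
To make the "fine-tuned in parallel" idea concrete, below is a minimal sketch of how one LLM per database classification could be fine-tuned on the abstracts that each database assigns to a given SDG, so that the resulting models can later be compared. The base model ("gpt2"), the per-database JSONL files, the focus on a single SDG, and all hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Sketch: fine-tune one small causal LM per bibliometric database (Web of Science,
# Scopus, OpenAlex) on the abstracts that database labels with a given SDG.
# File names, model choice, and hyperparameters are hypothetical.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "gpt2"                          # assumption: any small causal LM
DATABASES = ["wos", "scopus", "openalex"]    # hypothetical per-database corpora

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token    # GPT-2 has no pad token by default

def tokenize(batch):
    # Each record is assumed to carry the publication abstract in an "abstract" field.
    return tokenizer(batch["abstract"], truncation=True, max_length=512)

for db in DATABASES:
    # Hypothetical JSONL file: one record per publication the database classifies as, e.g., SDG 7.
    data = load_dataset("json", data_files=f"{db}_sdg7_abstracts.jsonl", split="train")
    tokenized = data.map(tokenize, batched=True, remove_columns=data.column_names)

    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    args = TrainingArguments(
        output_dir=f"model_{db}_sdg7",
        num_train_epochs=3,
        per_device_train_batch_size=8,
        learning_rate=5e-5,
        save_strategy="no",
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()
    model.save_pretrained(f"model_{db}_sdg7")  # one fine-tuned model per classification
```

Comparing the text generated by the three resulting models on identical prompts would then surface how the databases' diverging SDG assignments propagate into the fine-tuned models, which is the kind of sensitivity the abstract reports.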