Finding "similar" universities using ChatGPT for institutional benchmarking: A large-scale comparison of European universities
Authors: Benedetto Lepori, Lutz Bornmann, Mario Gay
Journal: Journal of the Association for Information Science and Technology, 76(9), 1174–1187
DOI: 10.1002/asi.25010 (https://asistdl.onlinelibrary.wiley.com/doi/10.1002/asi.25010)
Publication date: 2025-04-30
Publication type: Journal Article
Impact Factor: 4.3 | JCR: Q2 (Computer Science, Information Systems)
Platform: Semantic Scholar
Citations: 0
Abstract
The study objective was to evaluate the efficacy of ChatGPT in identifying “similar” institutions for benchmarking the research performance of a university. Benchmarking is deemed a promising approach to compare “similar with similar” as a better alternative to rankings (comparing “different” universities). Current approaches either focus on a limited number of “quantitative” dimensions or are too complex for most users. We conducted large-scale testing by tasking ChatGPT with identifying the most similar European universities in terms of research performance, utilizing the European Tertiary Education Register data. We tested whether the peers suggested by ChatGPT were similar to the focal university on size, research intensity, and subject composition. Additionally, we evaluated whether providing more specific instructions improved the results. The findings offer a nuanced perspective on the potential and risks of using ChatGPT to identify peer institutions for benchmarking. On one hand, solely using ChatGPT would replicate the visibility biases associated with university rankings, thereby undermining the rationale for benchmarking. On the other hand, relying on semantic associations might capture dimensions of university similarity that are relevant and difficult to capture through quantitative methods. We finally reflected on the broader implications for scholars in higher education and science studies research.
About the Journal
The Journal of the Association for Information Science and Technology (JASIST) is a leading international forum for peer-reviewed research in information science. For more than half a century, JASIST has provided intellectual leadership by publishing original research that focuses on the production, discovery, recording, storage, representation, retrieval, presentation, manipulation, dissemination, use, and evaluation of information and on the tools and techniques associated with these processes.
The Journal welcomes rigorous work of an empirical, experimental, ethnographic, conceptual, historical, socio-technical, policy-analytic, or critical-theoretical nature. JASIST also commissions in-depth review articles (“Advances in Information Science”) and reviews of print and other media.