Investigating the research output of institutions
Wolfgang G. Stock, Gerhard Reichmann, Christian Schlögl
{"title":"调查机构的研究成果","authors":"Wolfgang G. Stock , Gerhard Reichmann , Christian Schlögl","doi":"10.1016/j.joi.2025.101638","DOIUrl":null,"url":null,"abstract":"<div><div>Describing, analyzing, and evaluating research institutions are among the main tasks of scientometrics and research evaluation. But how can we optimally search for an institution's research output? Possible search arguments include institution names, affiliations, addresses, and affiliated authors’ names. Prerequisites of these search tasks are complete lists (or at least good approximations) of the institutions’ publications, and—in later steps—their citations, and topics. When searching for the publications of research institutions in an information service, there are two options, namely (1) searching directly for the name of the institution and (2) searching for all authors affiliated with the institution in a defined time interval. Which strategy is more effective? More specifically, do informetric indicators such as recall and precision, search recall and search precision, and relative visibility change depending on the search strategy? What are the reasons for differences? To illustrate our approach, we conducted an illustrative study on two information science institutions and identified all staff members. The search was performed using the Web of Science Core Collection (WoS CC). As a performance indicator, applying fractional counting and considering co-affiliations of authors, we used the institution's relative visibility in an information service. We also calculated two variants of recall and precision at the institution level, namely search recall and search precision as informetric measures of performance differences between different search strategies (here: author search versus institution search) on the same information service (here: WoS CC) and recall and precision in relation to the complete set of an institution's publications. For all our calculations, there is a clear result: Searches for affiliated authors outperform searches for institutions in WoS. However, especially for large institutions it is difficult to determine all the staff members in the time interval of research. Additionally, information services (including WoS) are incomplete and there are variants for the names of institutions in the services. Therefore, searching for institutions and the publication-based quantitative evaluation of institutions are very critical issues.</div></div>","PeriodicalId":48662,"journal":{"name":"Journal of Informetrics","volume":"19 2","pages":"Article 101638"},"PeriodicalIF":3.5000,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Investigating the research output of institutions\",\"authors\":\"Wolfgang G. Stock , Gerhard Reichmann , Christian Schlögl\",\"doi\":\"10.1016/j.joi.2025.101638\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Describing, analyzing, and evaluating research institutions are among the main tasks of scientometrics and research evaluation. But how can we optimally search for an institution's research output? Possible search arguments include institution names, affiliations, addresses, and affiliated authors’ names. Prerequisites of these search tasks are complete lists (or at least good approximations) of the institutions’ publications, and—in later steps—their citations, and topics. 
When searching for the publications of research institutions in an information service, there are two options, namely (1) searching directly for the name of the institution and (2) searching for all authors affiliated with the institution in a defined time interval. Which strategy is more effective? More specifically, do informetric indicators such as recall and precision, search recall and search precision, and relative visibility change depending on the search strategy? What are the reasons for differences? To illustrate our approach, we conducted an illustrative study on two information science institutions and identified all staff members. The search was performed using the Web of Science Core Collection (WoS CC). As a performance indicator, applying fractional counting and considering co-affiliations of authors, we used the institution's relative visibility in an information service. We also calculated two variants of recall and precision at the institution level, namely search recall and search precision as informetric measures of performance differences between different search strategies (here: author search versus institution search) on the same information service (here: WoS CC) and recall and precision in relation to the complete set of an institution's publications. For all our calculations, there is a clear result: Searches for affiliated authors outperform searches for institutions in WoS. However, especially for large institutions it is difficult to determine all the staff members in the time interval of research. Additionally, information services (including WoS) are incomplete and there are variants for the names of institutions in the services. Therefore, searching for institutions and the publication-based quantitative evaluation of institutions are very critical issues.</div></div>\",\"PeriodicalId\":48662,\"journal\":{\"name\":\"Journal of Informetrics\",\"volume\":\"19 2\",\"pages\":\"Article 101638\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-01-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Informetrics\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1751157725000021\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Informetrics","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1751157725000021","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
Describing, analyzing, and evaluating research institutions are among the main tasks of scientometrics and research evaluation. But how can we optimally search for an institution's research output? Possible search arguments include institution names, affiliations, addresses, and affiliated authors' names. Prerequisites for these search tasks are complete lists (or at least good approximations) of the institutions' publications and, in later steps, their citations and topics. When searching for the publications of research institutions in an information service, there are two options: (1) searching directly for the name of the institution, and (2) searching for all authors affiliated with the institution within a defined time interval. Which strategy is more effective? More specifically, do informetric indicators such as recall and precision, search recall and search precision, and relative visibility change depending on the search strategy? What are the reasons for any differences? To demonstrate our approach, we conducted an illustrative study of two information science institutions and identified all their staff members. The search was performed using the Web of Science Core Collection (WoS CC). As a performance indicator, we used the institution's relative visibility in an information service, applying fractional counting and taking authors' co-affiliations into account. We also calculated two variants of recall and precision at the institution level: first, search recall and search precision, informetric measures of the performance difference between search strategies (here: author search versus institution search) on the same information service (here: the WoS CC); and second, recall and precision relative to the complete set of an institution's publications. All our calculations yield a clear result: searches for affiliated authors outperform searches for institutions in the WoS. However, especially for large institutions, it is difficult to identify all staff members within the studied time interval. Additionally, information services (including the WoS) are incomplete, and institution names occur in variant forms within them. Therefore, searching for institutions and the publication-based quantitative evaluation of institutions remain critical issues.
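To make the indicators named in the abstract concrete, here is a minimal Python sketch under common textbook definitions of recall and precision. The pooled-results reading of search recall and search precision, the fractional-share encoding of relative visibility, and all identifiers and numbers in the demo are illustrative assumptions, not the paper's actual operationalizations.

```python
# Minimal sketch of the indicators named in the abstract, under common
# textbook definitions of recall and precision. The pooled-results reading
# of search recall and the fractional-share encoding of relative visibility
# are illustrative assumptions, not the paper's code.

def recall(retrieved: set[str], relevant: set[str]) -> float:
    """Share of the relevant set (e.g., an institution's complete
    publication list) that a search retrieves."""
    return len(retrieved & relevant) / len(relevant) if relevant else 0.0

def precision(retrieved: set[str], relevant: set[str]) -> float:
    """Share of the retrieved records that are actually relevant."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

def search_recall(strategy_hits: set[str], pooled_hits: set[str]) -> float:
    """Recall of one strategy measured against the pooled results of all
    strategies on the same service (author vs. institution search)."""
    return recall(strategy_hits, pooled_hits)

def relative_visibility(fractional_shares: dict[str, float],
                        complete_list_size: int) -> float:
    """Fractional-counting visibility: each found publication contributes
    the institution's share of its author affiliations (e.g., 0.5 if one
    of two affiliations belongs to the institution), normalized by the
    size of the institution's complete publication list."""
    if complete_list_size == 0:
        return 0.0
    return sum(fractional_shares.values()) / complete_list_size

if __name__ == "__main__":
    complete = {"p1", "p2", "p3", "p4"}     # institution's full list
    author_hits = {"p1", "p2", "p3", "x9"}  # hits of the author search
    inst_hits = {"p1", "p2"}                # hits of the institution search
    pooled = (author_hits | inst_hits) & complete

    print(recall(author_hits, complete))                 # 0.75
    print(precision(author_hits, complete))              # 0.75
    print(search_recall(inst_hits & complete, pooled))   # ~0.67
    print(relative_visibility({"p1": 1.0, "p2": 0.5}, len(complete)))  # 0.375
```

The invented demo values simply mirror the abstract's core finding: the author search covers more of the pooled result set than the institution search does.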
About the journal:
Journal of Informetrics (JOI) publishes rigorous, high-quality research on quantitative aspects of information science. The main focus of the journal is on topics in bibliometrics, scientometrics, webometrics, patentometrics, altmetrics, and research evaluation. Contributions studying informetric problems using methods from other quantitative fields, such as mathematics, statistics, computer science, economics and econometrics, and network science, are especially encouraged. JOI publishes both theoretical and empirical work. In general, case studies, for instance a bibliometric analysis focusing on a specific research field or a specific country, are not considered suitable for publication in JOI unless they contain innovative methodological elements.