Investigating the effect of publication text similarity between reviewers and authors on the rigor of peer review: An intellectual proximity perspective
IF 3.5 · CAS Tier 2 (Management Science) · JCR Q2 (Computer Science, Interdisciplinary Applications)
Yanlan Kang, Chenwei Zhang, Zhuanlan Sun, Yiwei Li
{"title":"Investigating the effect of publication text similarity between reviewers and authors on the rigor of peer review: An intellectual proximity perspective","authors":"Yanlan Kang , Chenwei Zhang , Zhuanlan Sun , Yiwei Li","doi":"10.1016/j.joi.2025.101709","DOIUrl":null,"url":null,"abstract":"<div><div>The involvement of experienced peers as reviewers plays a crucial role in manuscript evaluation during the peer review process. Nonetheless, concerns have arisen regarding potential cognitive bias when reviewers assess research that is outside their areas of expertise. Despite these concerns, quantitative analysis of this issue remains limited. This study aims to empirically investigate whether submissions reviewed by peers with academic backgrounds similar to the authors’ research areas correlate with more rigorous comments during the peer review process. Utilizing a dataset of 2,147 papers published in the journal <em>eLife</em>, along with their publicly available peer review reports and reviewers’ publication records, we employed natural language processing techniques to measure the publication text similarity of reviewers to that of the manuscript’s authors, representing a minuscule part of intellectual proximity. We then used a linear regression model to examine whether such similarity was associated with review rigor, quantified by the frequency of statistical terms from two well-known glossaries. We observed no statistically significant differences in the rigor of comments made by peers with varying levels of publication text similarity in the constructed dataset and setting. The findings remained consistent across several robustness checks and alternative specifications. This suggests that no discernible cognitive bias is introduced by the reviewers’ academic background during the peer review process, enriching the extant literature and offering important insights into understanding the role of reviewers in maintaining fairness.</div></div>","PeriodicalId":48662,"journal":{"name":"Journal of Informetrics","volume":"19 3","pages":"Article 101709"},"PeriodicalIF":3.5000,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Informetrics","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1751157725000732","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0
Abstract
The involvement of experienced peers as reviewers plays a crucial role in manuscript evaluation during the peer review process. Nonetheless, concerns have arisen about potential cognitive bias when reviewers assess research outside their areas of expertise, yet quantitative analysis of this issue remains limited. This study empirically investigates whether submissions reviewed by peers whose academic backgrounds resemble the authors' research areas receive more rigorous comments during peer review. Using a dataset of 2,147 papers published in the journal eLife, along with their publicly available peer review reports and the reviewers' publication records, we employed natural language processing techniques to measure the similarity between reviewers' publication texts and those of the manuscript's authors, treating this similarity as one narrow facet of intellectual proximity. We then fitted a linear regression model to examine whether such similarity was associated with review rigor, quantified as the frequency of statistical terms drawn from two well-known glossaries. Within the constructed dataset and setting, we observed no statistically significant differences in the rigor of comments made by reviewers with varying levels of publication text similarity, and the findings remained consistent across several robustness checks and alternative specifications. This suggests that reviewers' academic backgrounds introduce no discernible cognitive bias into the peer review process, enriching the extant literature and offering insights into the role of reviewers in maintaining fairness.
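The abstract does not spell out the exact NLP technique, glossaries, or regression specification, but the pipeline it describes can be illustrated with a short sketch. The snippet below is a minimal, hypothetical stand-in: it assumes TF-IDF vectors with cosine similarity for the publication text similarity, a small placeholder glossary for the statistical-term counts, and a simple OLS fit on synthetic data in place of the study's full model with controls.

```python
# Hypothetical sketch of the pipeline described in the abstract.
# TF-IDF + cosine similarity, a placeholder glossary, and OLS on synthetic
# data stand in for the paper's unspecified methods.
import numpy as np
import statsmodels.api as sm
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def publication_text_similarity(reviewer_texts, author_texts):
    """Cosine similarity between the pooled publication texts of a
    reviewer and the manuscript's authors (assumed measure)."""
    docs = [" ".join(reviewer_texts), " ".join(author_texts)]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

# Placeholder glossary; the study uses two published glossaries not named here.
STAT_TERMS = ("p-value", "confidence interval", "effect size",
              "sample size", "regression", "statistical power")

def review_rigor(review_text):
    """Statistical-term frequency per 1,000 words (assumed rigor proxy)."""
    text = review_text.lower()
    n_words = max(len(text.split()), 1)
    hits = sum(text.count(term) for term in STAT_TERMS)
    return 1000.0 * hits / n_words

# OLS of rigor on similarity, here on synthetic data with no true effect;
# the study's model would add covariates (e.g., review length, field).
rng = np.random.default_rng(0)
similarity = rng.uniform(0.0, 1.0, size=200)
rigor = rng.normal(loc=5.0, scale=1.0, size=200)
X = sm.add_constant(similarity)
print(sm.OLS(rigor, X).fit().summary())
```

In the actual study, the chosen similarity measure, the two glossaries, and the regression covariates would replace these placeholders; the sketch only mirrors the shape of the analysis, in which a non-significant coefficient on similarity would correspond to the paper's null finding.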
About the journal:
Journal of Informetrics (JOI) publishes rigorous high-quality research on quantitative aspects of information science. The main focus of the journal is on topics in bibliometrics, scientometrics, webometrics, patentometrics, altmetrics and research evaluation. Contributions studying informetric problems using methods from other quantitative fields, such as mathematics, statistics, computer science, economics and econometrics, and network science, are especially encouraged. JOI publishes both theoretical and empirical work. In general, case studies, for instance, a bibliometric analysis focusing on a specific research field or a specific country, are not considered suitable for publication in JOI, unless they contain innovative methodological elements.