A noise audit of the peer review of a scientific article: a WPOM journal case study

Impact Factor: 1.4 · JCR Q3 (Business)
Tomás Bonavía, J. Marin-Garcia
{"title":"A noise audit of the peer review of a scientific article: a WPOM journal case study","authors":"Tomás Bonavía, J. Marin-Garcia","doi":"10.4995/wpom.19631","DOIUrl":null,"url":null,"abstract":"This study aims to be one of the first to analyse the noise level in the peer review process of scientific articles. Noise is defined as the undesired variability in the judgements made by professionals on the same topic or subject. We refer to evaluative judgements in which experts are expected to agree. This is what happens when we try to judge the quality of a scientific work. To measure noise, the only information needed is to have several judgements made by different people on the same case to analyse their dispersion (what Kahneman et al. call a noise audit). This was the procedure followed in this research. We asked a set of reviewers from the journal WPOM (Working Papers on Operations Management) to review the same manuscript which had been previously accepted for publication in this journal, although the reviewers were unaware of that fact. The results indicated that if two reviewers were used, the probability of this manuscript not being published would be close to 8%, while the probability of it having an uncertain future would be 40% (one favorable opinion and one unfavorable opinion or both suggesting substantial changes). In the case of employing only one reviewer, in 25% of the cases, the audited work would have encountered significant challenges for publication. The great advantage of measuring noise is, once measured, it is usually possible to reduce it. 
This article concludes by outlining some of the measures which can be put in place by scientific journals to improve their peer review processes.","PeriodicalId":42114,"journal":{"name":"WPOM-Working Papers on Operations Management","volume":"2 1","pages":""},"PeriodicalIF":1.4000,"publicationDate":"2023-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"WPOM-Working Papers on Operations Management","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4995/wpom.19631","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BUSINESS","Score":null,"Total":0}
Citations: 0

Abstract

This study aims to be one of the first to analyse the noise level in the peer review process of scientific articles. Noise is defined as the undesired variability in the judgements made by professionals on the same topic or subject. We refer to evaluative judgements on which experts are expected to agree, which is what happens when we try to judge the quality of a scientific work. To measure noise, the only information needed is a set of judgements made by different people on the same case, so that their dispersion can be analysed (what Kahneman et al. call a noise audit). This was the procedure followed in this research. We asked a set of reviewers from the journal WPOM (Working Papers on Operations Management) to review the same manuscript, one that had previously been accepted for publication in this journal, although the reviewers were unaware of that fact. The results indicated that if two reviewers were used, the probability of this manuscript not being published would be close to 8%, while the probability of it having an uncertain future would be 40% (one favorable opinion and one unfavorable opinion, or both suggesting substantial changes). If only one reviewer were employed, in 25% of the cases the audited work would have encountered significant challenges to publication. The great advantage of measuring noise is that, once measured, it can usually be reduced. This article concludes by outlining some of the measures which scientific journals can put in place to improve their peer review processes.
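The audit's arithmetic can be sketched directly: collect one verdict per reviewer on the same manuscript, then enumerate reviewer pairs to estimate how often a two-reviewer panel would reject or split. The sketch below uses hypothetical verdict categories and counts for illustration; it is not the paper's actual data or coding scheme.

```python
from itertools import combinations

# Hypothetical verdicts from a noise audit: each reviewer independently
# judged the SAME manuscript. Counts are illustrative only.
verdicts = ["accept"] * 5 + ["minor"] * 4 + ["major"] * 2 + ["reject"]

favorable = {"accept", "minor"}

# Single-reviewer risk: share of reviewers whose verdict alone would
# block or seriously delay publication.
single_risk = sum(v not in favorable for v in verdicts) / len(verdicts)

# Two-reviewer outcomes: enumerate all unordered reviewer pairs.
pairs = list(combinations(verdicts, 2))
both_unfavorable = sum(a not in favorable and b not in favorable
                       for a, b in pairs) / len(pairs)
split = sum((a in favorable) != (b in favorable)
            for a, b in pairs) / len(pairs)

print(f"single-reviewer risk:       {single_risk:.2%}")
print(f"both reviewers unfavorable: {both_unfavorable:.2%}")
print(f"one favorable, one not:     {split:.2%}")
```

With these made-up counts, one reviewer in four is unfavorable, and a split two-reviewer panel occurs in roughly four cases out of ten; the point is only that dispersion across reviewers translates directly into publication risk, as the audit in the paper measures.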
Source journal
Self-citation rate: 12.50%
Publications per year: 5
Review time: 20 weeks