Individual versus general structured feedback to improve agreement in grant peer review: a randomized controlled trial.

IF 7.2 Q1 ETHICS
Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege
DOI: 10.1186/s41073-021-00115-5
Journal: Research integrity and peer review
Published: 2021-09-30 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8485516/pdf/
Citations: 2

Abstract



Background: Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect of receiving two short individual feedback reports compared to one short general feedback report on the agreement between reviewers.

Methods: A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews that contained general information about the previous call in which the reviewers participated. In the individual feedback group, the reviewers received two reports, one before the review period (based on the previous call) and one during the period (based on the current call). In the individual feedback group, the reviewers were presented with detailed information on their scoring compared with the review committee as a whole, both before and during the review period. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 (before any feedback) and again in 2018, after the feedback had been provided.
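The two primary outcomes described above can be sketched in a short script. This is an illustrative reconstruction with made-up paired reviews, not the study's actual analysis code; the data layout and variable names are assumptions.

```python
# Each record is one proposal assessed independently by a pair of reviewers:
# (reviewer A eligibility verdict, reviewer B eligibility verdict,
#  reviewer A score, reviewer B score). Values here are hypothetical.
paired_reviews = [
    (True,  True,  4.0, 5.0),
    (True,  False, 3.0, 6.0),
    (True,  True,  7.0, 7.0),
    (False, False, 2.0, 3.0),
]

# Outcome 1: proportion of pairs in absolute agreement on eligibility.
n_agree = sum(1 for a, b, _, _ in paired_reviews if a == b)
eligibility_agreement = n_agree / len(paired_reviews)

# Outcome 2: average absolute score difference within pairs.
avg_score_diff = sum(abs(sa - sb) for _, _, sa, sb in paired_reviews) / len(paired_reviews)

print(eligibility_agreement)  # 0.75
print(avg_score_diff)         # 1.25
```

In the trial these two quantities were compared between the general-feedback and individual-feedback groups across the 2017 and 2018 calls.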

Results: A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.

Conclusions: We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.

Trial registration: The study was preregistered at OSF.io/n4fq3.
