Research integrity and peer review: Latest Articles

Publisher Correction: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
Research integrity and peer review · Pub Date: 2023-07-10 · DOI: 10.1186/s41073-023-00136-2 · vol. 8(1), p. 7
Mohammad Hosseini, Serge P J M Horbach
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10334596/pdf/
Citations: 1

Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot.
Research integrity and peer review · Pub Date: 2023-06-20 · DOI: 10.1186/s41073-023-00130-8 · vol. 8(1), p. 6
Ben W Mol, Shimona Lai, Ayesha Rahim, Esmée M Bordewijk, Rui Wang, Rik van Eekelen, Lyle C Gurrin, Jim G Thornton, Madelon van Wely, Wentao Li

Objectives: To propose a checklist that can be used to assess the trustworthiness of randomised controlled trials (RCTs).

Design: A screening tool was developed using the four-stage approach proposed by Moher et al.: defining the scope, reviewing the evidence base, suggesting a list of items from piloting, and holding a consensus meeting. The initial checklist was set up by a core group that had been involved in the assessment of problematic RCTs for several years. We piloted it with a consensus panel of stakeholders, including health professionals, reviewers, journal editors, policymakers, researchers, and evidence-synthesis specialists. Each member was asked to score three articles with the checklist, and the results were then discussed in consensus meetings.

Outcome: The Trustworthiness in RAndomised Clinical Trials (TRACT) checklist includes 19 items organised into seven domains applicable to every RCT: (1) governance, (2) author group, (3) plausibility of intervention usage, (4) timeframe, (5) drop-out rates, (6) baseline characteristics, and (7) outcomes. Each item can be answered as no concerns, some concerns/no information, or major concerns. If a study is assessed and found to have a majority of items rated at the major-concern level, editors, reviewers, or evidence synthesisers should consider a more thorough investigation, including assessment of the original individual participant data.

Conclusions: The TRACT checklist is the first checklist developed specifically to detect trustworthiness issues in RCTs. It might help editors, publishers, and researchers screen for such issues in submitted or published RCTs in a transparent and replicable manner.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280869/pdf/
Citations: 0

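The abstract's screening rule (consider a deeper investigation when a majority of the 19 items are rated as major concerns) is simple enough to sketch. The three-level rating scale comes from the abstract; the function name and the literal simple-majority threshold over 19 items are assumptions for illustration, not the checklist's official logic.

```python
from enum import Enum

class Rating(Enum):
    NO_CONCERNS = 0
    SOME_CONCERNS = 1   # also covers "no information"
    MAJOR_CONCERNS = 2

def needs_investigation(ratings: list[Rating], n_items: int = 19) -> bool:
    """Screening rule from the abstract: a majority of the 19 TRACT items
    at major-concern level should trigger a more thorough investigation,
    including review of individual participant data."""
    if len(ratings) != n_items:
        raise ValueError(f"expected {n_items} item ratings, got {len(ratings)}")
    majors = sum(r is Rating.MAJOR_CONCERNS for r in ratings)
    return majors > n_items // 2

# Example: 10 of 19 items at major concern -> flag for investigation.
ratings = [Rating.MAJOR_CONCERNS] * 10 + [Rating.SOME_CONCERNS] * 9
print(needs_investigation(ratings))  # True
```
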
Responsible research practices could be more strongly endorsed by Australian university codes of research conduct.
Research integrity and peer review · Pub Date: 2023-06-06 · DOI: 10.1186/s41073-023-00129-1 · vol. 8(1), p. 5
Yi Kai Ong, Kay L Double, Lisa Bero, Joanna Diong

Background: This study aimed to investigate how strongly Australian university codes of research conduct endorse responsible research practices.

Methods: Codes of research conduct from 25 Australian universities active in health and medical research were obtained from public websites and audited against 19 questions to assess how strongly they (1) defined research integrity, research quality, and research misconduct, (2) required research to be approved by an appropriate ethics committee, (3) endorsed 9 responsible research practices, and (4) discouraged 5 questionable research practices.

Results: Overall, a median of 10 (IQR 9 to 12) of the 19 practices covered in the questions were mentioned, weakly endorsed, or strongly endorsed. Five to 8 of the 9 responsible research practices were mentioned, weakly endorsed, or strongly endorsed, and 3 questionable research practices were discouraged. Results are stratified by Group of Eight (n = 8) and other (n = 17) universities. Specifically, (1) 6 (75%) Group of Eight and 11 (65%) other codes of research conduct defined research integrity, 4 (50%) and 8 (47%) defined research quality, and 7 (88%) and 16 (94%) defined research misconduct. (2) All codes required ethics approval for human and animal research. (3) All codes required conflicts of interest to be declared, but there was variability in how strongly other research practices were endorsed. The most commonly endorsed practices were ensuring researcher training in research integrity [8 (100%) and 16 (94%)] and making study data publicly available [6 (75%) and 12 (71%)]. The least commonly endorsed practices were making analysis code publicly available [0 (0%) and 0 (0%)] and registering analysis protocols [0 (0%) and 1 (6%)]. (4) Most codes discouraged fabricating data [5 (63%) and 15 (88%)], selectively deleting or modifying data [5 (63%) and 15 (88%)], and selective reporting of results [3 (38%) and 15 (88%)]. No codes discouraged p-hacking or hypothesising after results are known.

Conclusions: Responsible research practices could be more strongly endorsed by Australian university codes of research conduct. Our findings may not be generalisable to smaller universities, or to those not active in health and medical research.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10242962/pdf/
Citations: 1

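The summary statistics above are per-university tallies of how many of the 19 audited practices were at least mentioned, summarised by median and IQR and stratified by university group. A minimal tabulation sketch; the audit scores below are invented, and only the group sizes and the 6/8 and 11/17 "defines research integrity" counts come from the abstract.

```python
import numpy as np

# Hypothetical audit results: for each university, the number of the 19
# practices that were mentioned, weakly endorsed, or strongly endorsed.
scores = {"Go8": [12, 10, 9, 11, 13, 10, 9, 12],        # Group of Eight (n = 8)
          "Other": [10, 9, 8, 12, 11, 9, 10, 8, 9,      # other (n = 17)
                    11, 10, 12, 9, 10, 8, 11, 10]}

all_scores = np.concatenate(list(scores.values()))
median = np.median(all_scores)
q1, q3 = np.percentile(all_scores, [25, 75])
print(f"median {median:g} (IQR {q1:g} to {q3:g}) of 19 practices")

# Stratified endorsement percentage for one binary item,
# e.g. "defines research integrity" (counts from the abstract).
defines_integrity = {"Go8": 6, "Other": 11}
for group, n_yes in defines_integrity.items():
    print(f"{group}: {n_yes} ({n_yes / len(scores[group]):.0%})")
```
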
Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
Research integrity and peer review · Pub Date: 2023-05-18 · DOI: 10.1186/s41073-023-00133-5 · vol. 8(1), p. 4
Mohammad Hosseini, Serge P J M Horbach

Background: The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant, or biased) outputs in response to provided prompts, using them in various writing tasks, including writing peer review reports, could improve productivity. Given the significance of peer review in the existing scholarly publication landscape, exploring the challenges and opportunities of using LLMs in peer review seems urgent. Now that the first scholarly outputs have been generated with LLMs, we anticipate that peer review reports, too, will be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.

Methods: To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer: (1) reviewers' role, (2) editors' role, (3) functions and quality of peer reviews, (4) reproducibility, and (5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding the identified issues.

Results: LLMs have the potential to substantially alter the role of both peer reviewers and editors. By supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher-quality review and address review shortages. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality, and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as in negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements over a short period and expect LLMs to continue developing.

Conclusions: We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain, and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and for their reports' accuracy, tone, reasoning, and originality.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191680/pdf/
Citations: 0

Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis.
Research integrity and peer review · Pub Date: 2023-05-03 · DOI: 10.1186/s41073-023-00127-3 · vol. 8(1), p. 2
Karen B Schmaling, Stephen A Gallo

Background: Differential participation and success in grant applications may contribute to women's lesser representation in the sciences. This study's objective was to conduct a systematic review and meta-analysis to address the question of gender differences in grant award acceptance rates and reapplication award acceptance rates (potential bias in peer review outcomes) and other grant outcomes.

Methods: The review was registered on PROSPERO (CRD42021232153) and conducted in accordance with PRISMA 2020 standards. We searched Academic Search Complete, PubMed, and Web of Science for the timeframe 1 January 2005 to 31 December 2020, plus forward and backward citations. Studies were included if they reported data, by gender, on any of the following: grant applications or reapplications, awards, award amounts, award acceptance rates, or reapplication award acceptance rates. Studies that duplicated data reported in another study were excluded. Gender differences were investigated by meta-analyses and generalised linear mixed models. Doi plots and LFK indices were used to assess reporting bias.

Results: The searches identified 199 records, of which 13 were eligible. An additional 42 sources from forward and backward searches were eligible, for a total of 55 sources with data on one or more outcomes. The data from these studies spanned 1975 to 2020: 49 sources were published papers and six were funders' reports (the latter identified by the forward and backward searches). Twenty-nine studies reported person-level data, 25 reported application-level data, and one study reported both; person-level data were used in the analyses. Award acceptance rates were 1% higher for men, which was not significantly different from women (95% CI 3% more for men to 1% more for women, k = 36, n = 303,795 awards and 1,277,442 applications, I² = 84%). Reapplication award acceptance rates were significantly higher for men (9%, 95% CI 18% to 1%, k = 7, n = 7,319 applications and 3,324 awards, I² = 63%). Women received smaller award amounts (g = -2.28, 95% CI -4.92 to 0.36, k = 13, n = 212,935, I² = 100%).

Conclusions: The proportions of women who applied for grants, re-applied, accepted awards, and accepted awards after reapplication were smaller than the proportion of eligible women. However, the award acceptance rate was similar for women and men, implying no gender bias in this peer reviewed grant outcome. Women received smaller awards and fewer awards after re-applying, which may negatively affect continued scientific productivity. Greater transparency is needed to monitor and verify these data globally.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10155348/pdf/
Citations: 3

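The pooled estimates above come from random-effects meta-analysis with heterogeneity reported as I². The abstract does not name the estimator, so the sketch below uses the common DerSimonian-Laird method; the study-level differences and variances are invented inputs for illustration only.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of effect estimates y with within-study
    variances v (DerSimonian-Laird). Returns pooled effect, 95% CI, I^2."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (y - y_fe) ** 2)             # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_re = 1 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical study-level differences in award acceptance rates
# (men minus women) and their variances.
diffs = [0.02, -0.01, 0.03, 0.00, 0.01]
variances = [0.0004, 0.0009, 0.0005, 0.0002, 0.0007]
print(dersimonian_laird(diffs, variances))
```
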
Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling.
Research integrity and peer review · Pub Date: 2023-04-24 · DOI: 10.1186/s41073-023-00128-2 · vol. 8(1), p. 3
Allana G LeBlanc, Joel D Barnes, Travis J Saunders, Mark S Tremblay, Jean-Philippe Chaput

Background: There are a variety of costs associated with the publication of scientific findings. The purpose of this work was to estimate the cost of peer review in scientific publishing per reviewer, per year, and for the entire scientific community.

Methods: An internet-based, self-report, cross-sectional survey, live between June 28, 2021 and August 2, 2021, was used. Participants were recruited via snowball sampling. No restrictions were placed on geographic location or field of study. Respondents who were asked to act as a peer reviewer for at least one manuscript submitted to a scientific journal in 2020 were eligible. The primary outcome measure was the cost of peer review per person, per year (calculated as wage-cost × number of initial reviews and number of re-reviews per year). The secondary outcome was the cost of peer review globally (calculated as the number of peer-reviewed papers in Scopus × the median wage-cost of initial review and re-review).

Results: A total of 354 participants completed at least one question of the survey, and the information necessary to calculate the cost of peer review was available for 308 participants from 33 countries (44% from Canada). The cost of peer review was estimated at US$1,272 per person, per year (US$1,015 for initial review and US$256 for re-review), or US$1.1-1.7 billion per year for the scientific community. The global cost of peer review was estimated at US$6 billion in 2020 when relying on the Dimensions database and taking into account reviewed-but-rejected manuscripts.

Conclusions: Peer review represents an important financial piece of scientific publishing. Our results may not represent all countries or fields of study, but they are consistent with previous estimates and provide additional context from peer reviewers themselves. Researchers and scientists have long provided peer review as a contribution to the scientific community. Recognising the importance of peer review, institutions should acknowledge these costs in job descriptions, performance measurement, promotion packages, and funding applications. Journals should develop methods to compensate reviewers for their time and improve transparency while maintaining the integrity of the peer review process.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10122980/pdf/
Citations: 0

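The outcome measures are plain arithmetic: per person, wage-cost times the number of initial reviews and re-reviews per year; globally, the number of reviewed papers times the median per-review wage-cost. A toy sketch; all wage and workload figures below are invented, and only the US$1,272 median comes from the paper.

```python
def annual_reviewer_cost(hourly_wage, n_initial, hours_initial,
                         n_rereview, hours_rereview):
    """Per-reviewer annual peer-review cost: wage-cost times the number
    of initial reviews and re-reviews (the paper's primary outcome)."""
    return hourly_wage * (n_initial * hours_initial + n_rereview * hours_rereview)

def global_cost(n_reviewed_papers, reviews_per_paper, median_cost_per_review):
    """Global annual cost: reviewed papers times median review wage-cost."""
    return n_reviewed_papers * reviews_per_paper * median_cost_per_review

# Illustrative numbers only: 4 initial reviews at 5 h each plus 2
# re-reviews at 2.5 h each, at US$50/h, lands near the paper's
# survey-derived median of about US$1,272 per reviewer per year.
print(annual_reviewer_cost(50, 4, 5, 2, 2.5))  # 1250.0
```
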
Investigating and preventing scientific misconduct using Benford's Law.
Research integrity and peer review · Pub Date: 2023-04-11 · DOI: 10.1186/s41073-022-00126-w · vol. 8(1), p. 1
Gregory M Eckhartt, Graeme D Ruxton

Abstract: Integrity, and trust in that integrity, are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concerns about possible data fraud have been raised, are not well established. Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law. This should be of value to both individual peer reviewers and academic institutions and journals. In this, we draw inspiration from well-established practices in financial auditing. We provide a synthesis of the literature on tests of adherence to Benford's Law, culminating in advice to use a single initial test for digits in each position of the numerical strings within a dataset. We also recommend further tests which may prove useful in the event that specific hypotheses regarding the nature of the data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford's Law. Furthermore, we apply the approach to previously published data, highlighting the efficacy of these tests in detecting known irregularities. Finally, we discuss the results of these tests, with reference to their strengths and limitations.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10088595/pdf/
Citations: 0

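Benford's Law predicts that the first significant digit d of many naturally occurring datasets appears with probability log10(1 + 1/d), so a 1 leads about 30.1% of values while a 9 leads under 5%. The authors recommend their own battery of tests covering digits in each position; the sketch below implements only the standard first-significant-digit chi-square screen as a rough illustration of the idea, not the paper's full procedure.

```python
import math
import random
from collections import Counter
from scipy.stats import chisquare

def first_digit(x: float) -> int:
    """First significant digit of a nonzero number, via scientific notation."""
    return int(f"{abs(x):.8e}"[0])

def benford_first_digit_test(data):
    """Chi-square goodness-of-fit of first significant digits against
    Benford's expected proportions P(d) = log10(1 + 1/d), d = 1..9."""
    digits = Counter(first_digit(x) for x in data if x != 0)
    observed = [digits.get(d, 0) for d in range(1, 10)]
    n = sum(observed)
    expected = [n * math.log10(1 + 1 / d) for d in range(1, 10)]
    return chisquare(observed, f_exp=expected)  # (statistic, p-value)

# Example: a log-uniform sample conforms to Benford's Law, so the
# test should return a large p-value.
sample = [10 ** random.uniform(0, 5) for _ in range(10_000)]
print(benford_first_digit_test(sample))
```
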
ACCORD guideline for reporting consensus-based methods in biomedical research and clinical practice: a study protocol.
Research integrity and peer review · Pub Date: 2022-06-07 · DOI: 10.1186/s41073-022-00122-0
William T. Gattrell, Amrit Pali Hungin, Amy Price, Christopher C. Winchester, David Tovey, Ellen L. Hughes, Esther J. van Zuuren, Keith Goldman, Patricia Logullo, Robert Matheis, Niall Harrison

Background: Structured, systematic methods to formulate consensus recommendations, such as the Delphi process or nominal group technique, among others, provide the opportunity to harness the knowledge of experts to support clinical decision-making in areas of uncertainty. They are widely used in biomedical research, in particular where disease characteristics or resource limitations mean that high-quality evidence generation is difficult. However, poor reporting of the methods used to reach a consensus (for example, not clearly explaining the definition of consensus, or not stating how consensus group panellists were selected) can potentially undermine confidence in this type of research and hinder reproducibility. Our objective is therefore to systematically develop a reporting guideline to help the biomedical research and clinical practice community describe the methods or techniques used to reach consensus in a complete, transparent, and consistent manner.

Methods: The ACCORD (ACcurate COnsensus Reporting Document) project will take place in five stages and follow the EQUATOR Network guidance for the development of reporting guidelines. In Stage 1, a multidisciplinary Steering Committee has been established to lead and coordinate the guideline development process. In Stage 2, a systematic literature review will identify evidence on the quality of reporting of consensus methodology, to obtain potential items for a reporting checklist. In Stage 3, Delphi methodology will be used to reach consensus regarding the checklist items, first among the Steering Committee, and then among a broader Delphi panel comprising participants with a range of expertise, including patient representatives. In Stage 4, the reporting guideline will be finalised in a consensus meeting, along with the production of an Explanation and Elaboration (E&E) document. In Stage 5, we plan to publish the reporting guideline and the E&E document in open-access journals, supported by presentations at appropriate events. Dissemination of the reporting guideline, including a website linked to social media channels, is crucial for the document to be implemented in practice.

Discussion: The ACCORD reporting guideline will provide a set of minimum items that should be reported about the methods used to achieve consensus, including approaches ranging from simple unstructured opinion gatherings to highly structured processes.

Citations: 14

What works for peer review and decision-making in research funding: a realist synthesis.
Research integrity and peer review · Pub Date: 2022-03-04 · DOI: 10.1186/s41073-022-00120-2 · vol. 7(1), p. 2
Alejandra Recio-Saucedo, Ksenia Crane, Katie Meadmore, Kathryn Fackrell, Hazel Church, Simon Fraser, Amanda Blatch-Jones

Introduction: The allocation of research funds relies on peer review to support funding decisions, and these processes can be susceptible to biases and inefficiencies. The aim of this work was to determine which past interventions in peer review and decision-making have worked to improve research funding practices, how they worked, and for whom.

Methods: Realist synthesis of peer-reviewed publications and grey literature reporting interventions in peer review for research funding.

Results: We analysed 96 publications and 36 website sources. Sixty publications enabled us to extract stakeholder-specific context-mechanism-outcome configurations (CMOCs) for 50 interventions, which formed the basis of our synthesis. Shorter applications, reviewer and applicant training, virtual funding panels, enhanced decision models, institutional submission quotas, and applicant training in peer review and grant-writing reduced interrater variability, increased the relevance of funded research, reduced the time taken to write and review applications, promoted increased investment in innovation, and lowered the cost of panels.

Conclusions: Reports of 50 interventions in different areas of peer review provide useful guidance on ways of solving common issues with the peer review process. Evidence of the broader impact of these interventions on the research ecosystem is still needed, and future research should aim to identify processes that consistently work to improve peer review across funders and research contexts.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8894828/pdf/
Citations: 0

Characteristics of 'mega' peer-reviewers.
Research integrity and peer review · Pub Date: 2022-02-21 · DOI: 10.1186/s41073-022-00121-1 · vol. 7(1), p. 1
Danielle B Rice, Ba' Pham, Justin Presseau, Andrea C Tricco, David Moher

Background: The demand for peer reviewers is often perceived as disproportionate to the supply and availability of reviewers. Considering the characteristics associated with peer review behaviour can allow solutions to be developed to manage the growing demand for peer reviewers. The objective of this research was to compare characteristics between two groups of reviewers registered in Publons.

Methods: A descriptive cross-sectional study design was used to compare characteristics between (1) individuals completing at least 100 peer reviews ('mega peer reviewers') from January 2018 to December 2018 and (2) a control group of peer reviewers completing between 1 and 18 peer reviews over the same period. Data were provided by Publons, which offers a repository of peer review activity in addition to tracking peer reviewers' publications and research metrics. Mann-Whitney tests and chi-square tests were conducted comparing the characteristics (e.g., number of publications, number of citations, word count of peer reviews) of mega peer reviewers with those of the control group.

Results: A total of 1596 peer reviewers had data provided by Publons. A total of 396 mega peer reviewers and a random sample of 1200 control-group reviewers were included. A greater proportion of mega peer reviewers were male (92%) compared with the control reviewers (70% male). Mega peer reviewers had a significantly greater average number of total publications, citations, and Publons awards, and a higher average h-index, than the control group of reviewers (all p < .001). We found no statistically significant difference in the number of words per review between the groups (p > .428).

Conclusions: Mega peer reviewers registered in the Publons database had a higher number of publications and citations compared with a control group of reviewers. Additional research that considers the motivations associated with peer review behaviour should be conducted to help inform peer reviewing activity.

Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8862198/pdf/
Citations: 6

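The comparisons reported above are Mann-Whitney tests for count-type characteristics and chi-square tests for categorical ones. A minimal sketch with simulated stand-ins for the Publons data: the Poisson rates are invented, and only the group sizes (396 and 1200) and the 92% vs 70% male proportions come from the abstract.

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)

# Hypothetical publication counts standing in for the Publons data.
mega_pubs = rng.poisson(80, size=396)      # 'mega' reviewers (>=100 reviews/yr)
control_pubs = rng.poisson(30, size=1200)  # control reviewers (1-18 reviews/yr)

# Mann-Whitney U test, as used in the paper for count-type outcomes.
u_stat, p_value = mannwhitneyu(mega_pubs, control_pubs, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.2g}")

# Chi-square test for a categorical characteristic such as gender,
# using roughly the proportions reported in the abstract (92% vs 70% male).
table = np.array([[364, 32],      # mega: male, non-male (approx. 92%)
                  [840, 360]])    # control: male, non-male (70%)
chi2, p_gender, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p_gender:.2g}")
```
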