{"title":"Investigating and preventing scientific misconduct using Benford's Law.","authors":"Gregory M Eckhartt, Graeme D Ruxton","doi":"10.1186/s41073-022-00126-w","DOIUrl":"10.1186/s41073-022-00126-w","url":null,"abstract":"<p><p>Integrity and trust in that integrity are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concern about possible data fraud have been raised are not well established. Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law. This should be of value to both individual peer-reviewers and academic institutions and journals. In this, we draw inspiration from well-established practices of financial auditing. We provide synthesis of the literature on tests of adherence to Benford's Law, culminating in advice of a single initial test for digits in each position of numerical strings within a dataset. We also recommend further tests which may prove useful in the event that specific hypotheses regarding the nature of data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford's Law. Furthermore, we apply the approach to previously-published data, highlighting the efficacy of these tests in detecting known irregularities. 
Finally, we discuss the results of these tests, with reference to their strengths and limitations.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"1"},"PeriodicalIF":7.2,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10088595/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9290217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ACCORD guideline for reporting consensus-based methods in biomedical research and clinical practice: a study protocol","authors":"William T. Gattrell, Amrit Pali Hungin, Amy Price, Christopher C. Winchester, David Tovey, Ellen L. Hughes, Esther J. van Zuuren, Keith Goldman, Patricia Logullo, Robert Matheis, Niall Harrison","doi":"10.1186/s41073-022-00122-0","DOIUrl":"https://doi.org/10.1186/s41073-022-00122-0","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Background</h3><p>Structured, systematic methods to formulate consensus recommendations, such as the Delphi process or nominal group technique, among others, provide the opportunity to harness the knowledge of experts to support clinical decision making in areas of uncertainty. They are widely used in biomedical research, in particular where disease characteristics or resource limitations mean that high-quality evidence generation is difficult. However, poor reporting of methods used to reach a consensus – for example, not clearly explaining the definition of consensus, or not stating how consensus group panellists were selected – can potentially undermine confidence in this type of research and hinder reproducibility. Our objective is therefore to systematically develop a reporting guideline to help the biomedical research and clinical practice community describe the methods or techniques used to reach consensus in a complete, transparent, and consistent manner.</p><h3 data-test=\"abstract-sub-heading\">Methods</h3><p>The ACCORD (ACcurate COnsensus Reporting Document) project will take place in five stages and follow the EQUATOR Network guidance for the development of reporting guidelines. In Stage 1, a multidisciplinary Steering Committee has been established to lead and coordinate the guideline development process. 
In Stage 2, a systematic literature review will identify evidence on the quality of the reporting of consensus methodology, to obtain potential items for a reporting checklist. In Stage 3, Delphi methodology will be used to reach consensus regarding the checklist items, first among the Steering Committee, and then among a broader Delphi panel comprising participants with a range of expertise, including patient representatives. In Stage 4, the reporting guideline will be finalised in a consensus meeting, along with the production of an Explanation and Elaboration (E&E) document. In Stage 5, we plan to publish the reporting guideline and E&E document in open-access journals, supported by presentations at appropriate events. Dissemination of the reporting guideline, including a website linked to social media channels, is crucial for the document to be implemented in practice.</p><h3 data-test=\"abstract-sub-heading\">Discussion</h3><p>The ACCORD reporting guideline will provide a set of minimum items that should be reported about methods used to achieve consensus, including approaches ranging from simple unstructured opinion gatherings to highly structured processes.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138529742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What works for peer review and decision-making in research funding: a realist synthesis.","authors":"Alejandra Recio-Saucedo, Ksenia Crane, Katie Meadmore, Kathryn Fackrell, Hazel Church, Simon Fraser, Amanda Blatch-Jones","doi":"10.1186/s41073-022-00120-2","DOIUrl":"10.1186/s41073-022-00120-2","url":null,"abstract":"<p><strong>Introduction: </strong>Allocation of research funds relies on peer review to support funding decisions, and these processes can be susceptible to biases and inefficiencies. The aim of this work was to determine which past interventions to peer review and decision-making have worked to improve research funding practices, how they worked, and for whom.</p><p><strong>Methods: </strong>Realist synthesis of peer-review publications and grey literature reporting interventions in peer review for research funding.</p><p><strong>Results: </strong>We analysed 96 publications and 36 website sources. Sixty publications enabled us to extract stakeholder-specific context-mechanism-outcomes configurations (CMOCs) for 50 interventions, which formed the basis of our synthesis. Shorter applications, reviewer and applicant training, virtual funding panels, enhanced decision models, institutional submission quotas, applicant training in peer review and grant-writing reduced interrater variability, increased relevance of funded research, reduced time taken to write and review applications, promoted increased investment into innovation, and lowered cost of panels.</p><p><strong>Conclusions: </strong>Reports of 50 interventions in different areas of peer review provide useful guidance on ways of solving common issues with the peer review process. 
Evidence of the broader impact of these interventions on the research ecosystem is still needed, and future research should aim to identify processes that consistently work to improve peer review across funders and research contexts.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"7 1","pages":"2"},"PeriodicalIF":7.2,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8894828/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65775168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characteristics of 'mega' peer-reviewers.","authors":"Danielle B Rice, Ba' Pham, Justin Presseau, Andrea C Tricco, David Moher","doi":"10.1186/s41073-022-00121-1","DOIUrl":"https://doi.org/10.1186/s41073-022-00121-1","url":null,"abstract":"<p><strong>Background: </strong>The demand for peer reviewers is often perceived as disproportionate to the supply and availability of reviewers. Considering characteristics associated with peer review behaviour can allow for the development of solutions to manage the growing demand for peer reviewers. The objective of this research was to compare characteristics among two groups of reviewers registered in Publons.</p><p><strong>Methods: </strong>A descriptive cross-sectional study design was used to compare characteristics between (1) individuals completing at least 100 peer reviews ('mega peer reviewers') from January 2018 to December 2018 as and (2) a control group of peer reviewers completing between 1 and 18 peer reviews over the same time period. Data was provided by Publons, which offers a repository of peer reviewer activities in addition to tracking peer reviewer publications and research metrics. Mann Whitney tests and chi-square tests were conducted comparing characteristics (e.g., number of publications, number of citations, word count of peer review) of mega peer reviewers to the control group of reviewers.</p><p><strong>Results: </strong>A total of 1596 peer reviewers had data provided by Publons. A total of 396 M peer reviewers and a random sample of 1200 control group reviewers were included. A greater proportion of mega peer reviews were male (92%) as compared to the control reviewers (70% male). Mega peer reviewers demonstrated a significantly greater average number of total publications, citations, receipt of Publons awards, and a higher average h index as compared to the control group of reviewers (all p < .001). 
We found no statistically significant differences in the number of words between the groups (p > .428).</p><p><strong>Conclusions: </strong>Mega peer reviewers registered in the Publons database also had a higher number of publications and citations as compared to a control group of reviewers. Additional research that considers motivations associated with peer review behaviour should be conducted to help inform peer reviewing activity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"7 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2022-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8862198/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
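The Mann-Whitney group comparison described in the methods can be sketched as below. This is a hand-rolled normal-approximation version without the tie-variance correction that statistics packages apply, shown only to illustrate the test, not the authors' exact analysis:

```python
from statistics import NormalDist


def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test via the normal approximation.

    Handles ties with average ranks but omits the tie-variance
    correction; adequate as an illustration for moderately large
    samples, e.g. comparing publication counts between two reviewer
    groups.
    """
    nx, ny = len(x), len(y)
    pooled = sorted(list(x) + list(y))

    # Assign average ranks to tied values (ranks are 1-based).
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2
        i = j

    # U statistic for group x from its rank sum.
    u = sum(ranks[v] for v in x) - nx * (nx + 1) / 2

    # Normal approximation to the null distribution of U.
    mu = nx * ny / 2
    sigma = (nx * ny * (nx + ny + 1) / 12) ** 0.5
    z = (u - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p
```

With completely separated samples U is 0 and p is tiny; with identical samples U equals its expected value and p is 1.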
{"title":"Gender disparity in publication records: a qualitative study of women researchers in computing and engineering.","authors":"Mohammad Hosseini, Shiva Sharifzad","doi":"10.1186/s41073-021-00117-3","DOIUrl":"https://doi.org/10.1186/s41073-021-00117-3","url":null,"abstract":"<p><strong>Background: </strong>The current paper follows up on the results of an exploratory quantitative analysis that compared the publication and citation records of men and women researchers affiliated with the Faculty of Computing and Engineering at Dublin City University (DCU) in Ireland. Quantitative analysis of publications between 2013 and 2018 showed that women researchers had fewer publications, received fewer citations per person, and participated less often in international collaborations. Given the significance of publications for pursuing an academic career, we used qualitative methods to understand these differences and explore factors that, according to women researchers, have contributed to this disparity.</p><p><strong>Methods: </strong>Sixteen women researchers from DCU's Faculty of Computing and Engineering were interviewed using a semi-structured questionnaire. Once interviews were transcribed and anonymised, they were coded by both authors in two rounds using an inductive approach.</p><p><strong>Results: </strong>Interviewed women believed that their opportunities for research engagement and research funding, collaborations, publications and promotions are negatively impacted by gender roles, implicit gender biases, their own high professional standards, family responsibilities, nationality and negative perceptions of their expertise and accomplishments.</p><p><strong>Conclusions: </strong>Our study has found that women in DCU's Faculty of Computing and Engineering face challenges that, according to those interviewed, negatively affect their engagement in various research activities, and, therefore, have contributed to their lower publication record. 
We suggest that while affirmative programmes aiming to correct disparities are necessary, they are more likely to improve organisational culture if they are implemented in parallel with bottom-up initiatives that engage all parties, including men researchers and non-academic partners, to inform and sensitise them about the significance of gender equity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"15"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8632200/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39679575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peer review reduces spin in PCORI research reports.","authors":"Evan Mayo-Wilson, Meredith L Phillips, Avonne E Connor, Kelly J Vander Ley, Kevin Naaman, Mark Helfand","doi":"10.1186/s41073-021-00119-1","DOIUrl":"https://doi.org/10.1186/s41073-021-00119-1","url":null,"abstract":"<p><strong>Background: </strong>The Patient-Centered Outcomes Research Institute (PCORI) is obligated to peer review and to post publicly \"Final Research Reports\" of all funded projects. PCORI peer review emphasizes adherence to PCORI's Methodology Standards and principles of ethical scientific communication. During the peer review process, reviewers and editors seek to ensure that results are presented objectively and interpreted appropriately, e.g., free of spin.</p><p><strong>Methods: </strong>Two independent raters assessed PCORI peer review feedback sent to authors. We calculated the proportion of reports in which spin was identified during peer review, and the types of spin identified. We included reports submitted by April 2018 with at least one associated journal article. The same raters then assessed whether authors addressed reviewers' comments about spin. The raters also assessed whether spin identified during PCORI peer review was present in related journal articles.</p><p><strong>Results: </strong>We included 64 PCORI-funded projects. Peer reviewers or editors identified spin in 55/64 (86%) submitted research reports. Types of spin included reporting bias (46/55; 84%), inappropriate interpretation (40/55; 73%), inappropriate extrapolation of results (15/55; 27%), and inappropriate attribution of causality (5/55; 9%). Authors addressed comments about spin related to 47/55 (85%) of the reports. Of 110 associated journal articles, PCORI comments about spin were potentially applicable to 44/110 (40%) articles, of which 27/44 (61%) contained the same spin that was identified in the PCORI research report. 
The proportion of articles with spin was similar for articles accepted before and after PCORI peer review (63% vs 58%).</p><p><strong>Discussion: </strong>Just as spin is common in journal articles and press releases, we found that most reports submitted to PCORI included spin. While most spin was mitigated during the funder's peer review process, we found no evidence that review of PCORI reports influenced spin in journal articles. Funders could explore interventions aimed at reducing spin in published articles of studies they support.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"16"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8638354/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39768548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Transparency of peer review: a semi-structured interview study with chief editors from social sciences and humanities.","authors":"Veli-Matti Karhulahti, Hans-Joachim Backe","doi":"10.1186/s41073-021-00116-4","DOIUrl":"https://doi.org/10.1186/s41073-021-00116-4","url":null,"abstract":"<p><strong>Background: </strong>Open peer review practices are increasing in medicine and life sciences, but in social sciences and humanities (SSH) they are still rare. We aimed to map out how editors of respected SSH journals perceive open peer review, how they balance policy, ethics, and pragmatism in the review processes they oversee, and how they view their own power in the process.</p><p><strong>Methods: </strong>We conducted 12 pre-registered semi-structured interviews with editors of respected SSH journals. Interviews consisted of 21 questions and lasted an average of 67 min. Interviews were transcribed, descriptively coded, and organized into code families.</p><p><strong>Results: </strong>SSH editors saw anonymized peer review benefits to outweigh those of open peer review. They considered anonymized peer review the \"gold standard\" that authors and editors are expected to follow to respect institutional policies; moreover, anonymized review was also perceived as ethically superior due to the protection it provides, and more pragmatic due to eased seeking of reviewers. Finally, editors acknowledged their power in the publication process and reported strategies for keeping their work as unbiased as possible.</p><p><strong>Conclusions: </strong>Editors of SSH journals preferred the benefits of anonymized peer review over open peer and acknowledged the power they hold in the publication process during which authors are almost completely disclosed to editorial bodies. 
We recommend that journals communicate the transparency elements of their manuscript review processes by listing all bodies who contributed to the decision at every review stage.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"13"},"PeriodicalIF":0.0,"publicationDate":"2021-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8598274/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39721579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A billion-dollar donation: estimating the cost of researchers' time spent on peer review.","authors":"Balazs Aczel, Barnabas Szaszi, Alex O Holcombe","doi":"10.1186/s41073-021-00118-2","DOIUrl":"https://doi.org/10.1186/s41073-021-00118-2","url":null,"abstract":"<p><strong>Background: </strong>The amount and value of researchers' peer review work is critical for academia and journal publishing. However, this labor is under-recognized, its magnitude is unknown, and alternative ways of organizing peer review labor are rarely considered.</p><p><strong>Methods: </strong>Using publicly available data, we provide an estimate of researchers' time and the salary-based contribution to the journal peer review system.</p><p><strong>Results: </strong>We found that the total time reviewers globally worked on peer reviews was over 100 million hours in 2020, equivalent to over 15 thousand years. The estimated monetary value of the time US-based reviewers spent on reviews was over 1.5 billion USD in 2020. For China-based reviewers, the estimate is over 600 million USD, and for UK-based, close to 400 million USD.</p><p><strong>Conclusions: </strong>By design, our results are very likely to be under-estimates as they reflect only a portion of the total number of journals worldwide. The numbers highlight the enormous amount of work and time that researchers provide to the publication system, and the importance of considering alternative ways of structuring, and paying for, peer review. 
We foster this process by discussing some alternative models that aim to boost the benefits of peer review, thus improving its cost-benefit ratio.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"14"},"PeriodicalIF":0.0,"publicationDate":"2021-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8591820/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39622221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
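The headline figures in this abstract come from a calculation of this shape: articles reviewed × review reports per article × hours per report, then hours × hourly salary for the monetary value. A sketch of the estimation logic, where all input values are hypothetical placeholders rather than the authors' data:

```python
def review_burden(n_articles, reviews_per_article, hours_per_review,
                  hourly_salary_usd):
    """Back-of-the-envelope peer-review time and salary-cost estimate.

    Illustrates the structure of the estimate only; the parameter
    values passed below are hypothetical, not taken from the paper.
    """
    total_hours = n_articles * reviews_per_article * hours_per_review
    return total_hours, total_hours * hourly_salary_usd


# Hypothetical inputs: 4.7M reviewed articles, ~3 review reports each
# (including revisions), 6 hours per report, $70/hour reviewer salary.
hours, cost_usd = review_burden(4_700_000, 3, 6, 70)

# Express the hours as calendar-year equivalents (24 h x 365 d).
years = hours / (24 * 365)
```

Estimates of this kind are highly sensitive to the assumed hours per review and the share of journals covered, which is why the authors describe their own figures as likely underestimates.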
{"title":"Individual versus general structured feedback to improve agreement in grant peer review: a randomized controlled trial.","authors":"Jan-Ole Hesselberg, Knut Inge Fostervold, Pål Ulleberg, Ida Svege","doi":"10.1186/s41073-021-00115-5","DOIUrl":"10.1186/s41073-021-00115-5","url":null,"abstract":"<p><strong>Background: </strong>Vast sums are distributed based on grant peer review, but studies show that interrater reliability is often low. In this study, we tested the effect of receiving two short individual feedback reports compared to one short general feedback report on the agreement between reviewers.</p><p><strong>Methods: </strong>A total of 42 reviewers at the Norwegian Foundation Dam were randomly assigned to receive either a general feedback report or an individual feedback report. The general feedback group received one report before the start of the reviews that contained general information about the previous call in which the reviewers participated. In the individual feedback group, the reviewers received two reports, one before the review period (based on the previous call) and one during the period (based on the current call). In the individual feedback group, the reviewers were presented with detailed information on their scoring compared with the review committee as a whole, both before and during the review period. The main outcomes were the proportion of agreement in the eligibility assessment and the average difference in scores between pairs of reviewers assessing the same proposal. The outcomes were measured in 2017 and after the feedback was provided in 2018.</p><p><strong>Results: </strong>A total of 2398 paired reviews were included in the analysis. There was a significant difference between the two groups in the proportion of absolute agreement on whether the proposal was eligible for the funding programme, with the general feedback group demonstrating a higher rate of agreement. 
There was no difference between the two groups in terms of the average score difference. However, the agreement regarding the proposal score remained critically low for both groups.</p><p><strong>Conclusions: </strong>We did not observe changes in proposal score agreement between 2017 and 2018 in reviewers receiving different feedback. The low levels of agreement remain a major concern in grant peer review, and research to identify contributing factors as well as the development and testing of interventions to increase agreement rates are still needed.</p><p><strong>Trial registration: </strong>The study was preregistered at OSF.io/n4fq3 .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"12"},"PeriodicalIF":0.0,"publicationDate":"2021-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8485516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39474032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
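The study's two main outcomes, the proportion of absolute agreement on eligibility and the average score difference within reviewer pairs, can be computed as in this sketch. The input layout is an assumption for illustration, not the study's actual data format:

```python
def pair_outcomes(paired_reviews):
    """Agreement outcomes for pairs of reviews of the same proposal.

    Each item is ((eligible_a, score_a), (eligible_b, score_b)), where
    eligibility is a bool and score a number; this shape is an
    illustrative assumption.
    """
    n = len(paired_reviews)
    # Proportion of pairs agreeing on the eligibility decision.
    eligibility_agreement = sum(
        1 for (ea, _), (eb, _) in paired_reviews if ea == eb) / n
    # Mean absolute difference between the two reviewers' scores.
    mean_score_diff = sum(
        abs(sa - sb) for (_, sa), (_, sb) in paired_reviews) / n
    return eligibility_agreement, mean_score_diff
```

For instance, two pairs where one disagrees on eligibility and the other differs by two score points yield an agreement of 0.5 and a mean score difference of 1.0.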
{"title":"Strengthening the incentives for responsible research practices in Australian health and medical research funding.","authors":"Joanna Diong, Cynthia M Kroeger, Katherine J Reynolds, Adrian Barnett, Lisa A Bero","doi":"10.1186/s41073-021-00113-7","DOIUrl":"10.1186/s41073-021-00113-7","url":null,"abstract":"<p><strong>Background: </strong>Australian health and medical research funders support substantial research efforts, and incentives within grant funding schemes influence researcher behaviour. We aimed to determine to what extent Australian health and medical funders incentivise responsible research practices.</p><p><strong>Methods: </strong>We conducted an audit of instructions from research grant and fellowship schemes. Eight national research grants and fellowships were purposively sampled to select schemes that awarded the largest amount of funds. The funding scheme instructions were assessed against 9 criteria to determine to what extent they incentivised these responsible research and reporting practices: (1) publicly register study protocols before starting data collection, (2) register analysis protocols before starting data analysis, (3) make study data openly available, (4) make analysis code openly available, (5) make research materials openly available, (6) discourage use of publication metrics, (7) conduct quality research (e.g. adhere to reporting guidelines), (8) collaborate with a statistician, and (9) adhere to other responsible research practices. Each criterion was answered using one of the following responses: \"Instructed\", \"Encouraged\", or \"No mention\".</p><p><strong>Results: </strong>Across the 8 schemes from 5 funders, applicants were instructed or encouraged to address a median of 4 (range 0 to 5) of the 9 criteria. Three criteria received no mention in any scheme (register analysis protocols, make analysis code open, collaborate with a statistician). 
Importantly, most incentives did not seem strong as applicants were only instructed to register study protocols, discourage use of publication metrics and conduct quality research. Other criteria were encouraged but were not required.</p><p><strong>Conclusions: </strong>Funders could strengthen the incentives for responsible research practices by requiring grant and fellowship applicants to implement these practices in their proposals. Administering institutions could be required to implement these practices to be eligible for funding. Strongly rewarding researchers for implementing robust research practices could lead to sustained improvements in the quality of health and medical research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"6 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2021-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8328133/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39277405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
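The audit's scoring (each scheme assessed against 9 criteria with responses "Instructed", "Encouraged", or "No mention", then summarised as a median count per scheme) can be tallied as in this sketch; the scheme names and responses below are hypothetical examples, not the audited schemes:

```python
from statistics import median


def criteria_addressed(scheme_responses):
    """Per-scheme count of criteria that were instructed or encouraged.

    scheme_responses maps scheme name -> list of 9 responses drawn from
    {"Instructed", "Encouraged", "No mention"}. Returns the per-scheme
    counts and their median, mirroring the audit's summary measure.
    """
    counts = {
        scheme: sum(r in ("Instructed", "Encouraged") for r in responses)
        for scheme, responses in scheme_responses.items()
    }
    return counts, median(counts.values())
```

Applied to the audit, the same tally also reveals criteria never mentioned by any scheme, by summing responses per criterion instead of per scheme.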