{"title":"Reporting quality of abstracts and inconsistencies with full text articles in pediatric orthopedic publications.","authors":"Sherif Ahmed Kamel, Tamer A El-Sobky","doi":"10.1186/s41073-023-00135-3","DOIUrl":"10.1186/s41073-023-00135-3","url":null,"abstract":"<p><strong>Background: </strong>Abstracts should provide a brief yet comprehensive reporting of all components of a manuscript. Inaccurate reporting may mislead readers and impact citation practices. It was our goal to investigate the reporting quality of abstracts of interventional observational studies in three major pediatric orthopedic journals and to analyze any reporting inconsistencies between those abstracts and their corresponding full-text articles.</p><p><strong>Methods: </strong>We selected a sample of 55 abstracts and their full-text articles published between 2018 and 2022. Included articles were primary therapeutic research investigating the results of treatments or interventions. Abstracts were scrutinized for reporting quality and inconsistencies with their full-text versions with a 22-itemized checklist. The reporting quality of titles was assessed by a 3-items categorical scale.</p><p><strong>Results: </strong>In 48 (87%) of articles there were abstract reporting inaccuracies related to patient demographics. The study's follow-up and complications were not reported in 21 (38%) of abstracts each. Most common inconsistencies between the abstracts and full-text articles were related to reporting of inclusion or exclusion criteria in 39 (71%) and study correlations in 27 (49%) of articles. Reporting quality of the titles was insufficient in 33 (60%) of articles.</p><p><strong>Conclusions: </strong>In our study we found low reporting quality of abstracts and noticeable inconsistencies with full-text articles, especially regarding inclusion or exclusion criteria and study correlations. While the current sample is likely not representative of overall pediatric orthopedic literature, we recommend that authors, reviewers, and editors ensure abstracts are reported accurately, ideally following the appropriate reporting guidelines, and that they double check that there are no inconsistencies between abstracts and full text articles. To capture essential study information, journals should also consider increasing abstract word limits.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"11"},"PeriodicalIF":0.0,"publicationDate":"2023-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10463470/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10121003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.","authors":"Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jérôme Barriere, Éric Billy, Véronique Saada, Alexander Samuel, Jacques Robert, Lonni Besançon","doi":"10.1186/s41073-023-00134-4","DOIUrl":"https://doi.org/10.1186/s41073-023-00134-4","url":null,"abstract":"<p><strong>Background: </strong>The practice of clinical research is strictly regulated by law. During submission and review processes, compliance of such research with the laws enforced in the country where it was conducted is not always correctly filled in by the authors or verified by the editors. Here, we report a case of a single institution for which one may find hundreds of publications with seemingly relevant ethical concerns, along with 10 months of follow-up through contacts with the editors of these articles. We thus argue for a stricter control of ethical authorization by scientific editors and we call on publishers to cooperate to this end.</p><p><strong>Methods: </strong>We present an investigation of the ethics and legal aspects of 456 studies published by the IHU-MI (Institut Hospitalo-Universitaire Méditerranée Infection) in Marseille, France.</p><p><strong>Results: </strong>We identified a wide range of issues with the stated research authorization and ethics of the published studies with respect to the Institutional Review Board and the approval presented. Among the studies investigated, 248 were conducted with the same ethics approval number, even though the subjects, samples, and countries of investigation were different. Thirty-nine (39) did not even contain a reference to the ethics approval number while they present research on human beings. We thus contacted the journals that published these articles and provide their responses to our concerns. It should be noted that, since our investigation and reporting to journals, PLOS has issued expressions of concerns for several publications we analyze here.</p><p><strong>Conclusion: </strong>This case presents an investigation of the veracity of ethical approval, and more than 10 months of follow-up by independent researchers. We call for stricter control and cooperation in handling of these cases, including editorial requirement to upload ethical approval documents, guidelines from COPE to address such ethical concerns, and transparent editorial policies and timelines to answer such concerns. All supplementary materials are available.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"9"},"PeriodicalIF":0.0,"publicationDate":"2023-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10398994/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9938883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new approach to grant review assessments: score, then rank.","authors":"Stephen A Gallo, Michael Pearce, Carole J Lee, Elena A Erosheva","doi":"10.1186/s41073-023-00131-7","DOIUrl":"https://doi.org/10.1186/s41073-023-00131-7","url":null,"abstract":"<p><strong>Background: </strong>In many grant review settings, proposals are selected for funding on the basis of summary statistics of review ratings. Challenges of this approach (including the presence of ties and unclear ordering of funding preference for proposals) could be mitigated if rankings such as top-k preferences or paired comparisons, which are local evaluations that enforce ordering across proposals, were also collected and incorporated in the analysis of review ratings. However, analyzing ratings and rankings simultaneously has not been done until recently. This paper describes a practical method for integrating rankings and scores and demonstrates its usefulness for making funding decisions in real-world applications.</p><p><strong>Methods: </strong>We first present the application of our existing joint model for rankings and ratings, the Mallows-Binomial, in obtaining an integrated score for each proposal and generating the induced preference ordering. We then apply this methodology to several theoretical \"toy\" examples of rating and ranking data, designed to demonstrate specific properties of the model. We then describe an innovative protocol for collecting rankings of the top-six proposals as an add-on to the typical peer review scoring procedures and provide a case study using actual peer review data to exemplify the output and how the model can appropriately resolve judges' evaluations.</p><p><strong>Results: </strong>For the theoretical examples, we show how the model can provide a preference order to equally rated proposals by incorporating rankings, to proposals using ratings and only partial rankings (and how they differ from a ratings-only approach) and to proposals where judges provide internally inconsistent ratings/rankings and outlier scoring. Finally, we discuss how, using real world panel data, this method can provide information about funding priority with a level of accuracy in a well-suited format for research funding decisions.</p><p><strong>Conclusions: </strong>A methodology is provided to collect and employ both rating and ranking data in peer review assessments of proposal submission quality, highlighting several advantages over methods relying on ratings alone. This method leverages information to most accurately distill reviewer opinion into a useful output to make an informed funding decision and is general enough to be applied to settings such as in the NIH panel review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"10"},"PeriodicalIF":0.0,"publicationDate":"2023-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10367367/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9865500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Institutional capacity to prevent and manage research misconduct: perspectives from Kenyan research regulators.","authors":"Edwin Were, Jepchirchir Kiplagat, Eunice Kaguiri, Rose Ayikukwei, Violet Naanyu","doi":"10.1186/s41073-023-00132-6","DOIUrl":"https://doi.org/10.1186/s41073-023-00132-6","url":null,"abstract":"<p><strong>Background: </strong>Research misconduct i.e. fabrication, falsification, and plagiarism is associated with individual, institutional, national, and global factors. Researchers' perceptions of weak or non-existent institutional guidelines on the prevention and management of research misconduct can encourage these practices. Few countries in Africa have clear guidance on research misconduct. In Kenya, the capacity to prevent or manage research misconduct in academic and research institutions has not been documented. The objective of this study was to explore the perceptions of Kenyan research regulators on the occurrence of and institutional capacity to prevent or manage research misconduct.</p><p><strong>Methods: </strong>Interviews with open-ended questions were conducted with 27 research regulators (chairs and secretaries of ethics committees, research directors of academic and research institutions, and national regulatory bodies). Among other questions, participants were asked: (1) How common is research misconduct in your view? (2) Does your institution have the capacity to prevent research misconduct? (3) Does your institution have the capacity to manage research misconduct? Their responses were audiotaped, transcribed, and coded using NVivo software. Deductive coding covered predefined themes including perceptions on occurrence, prevention detection, investigation, and management of research misconduct. Results are presented with illustrative quotes.</p><p><strong>Results: </strong>Respondents perceived research misconduct to be very common among students developing thesis reports. Their responses suggested there was no dedicated capacity to prevent or manage research misconduct at the institutional and national levels. There were no specific national guidelines on research misconduct. At the institutional level, the only capacity/efforts mentioned were directed at reducing, detecting, and managing student plagiarism. There was no direct mention of the capacity to manage fabrication and falsification or misconduct by faculty researchers. We recommend the development of Kenya code of conduct or research integrity guidelines that would cover misconduct.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337100/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10190722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Publisher Correction: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.","authors":"Mohammad Hosseini, Serge P J M Horbach","doi":"10.1186/s41073-023-00136-2","DOIUrl":"https://doi.org/10.1186/s41073-023-00136-2","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"7"},"PeriodicalIF":0.0,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10334596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10170319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Checklist to assess Trustworthiness in RAndomised Controlled Trials (TRACT checklist): concept proposal and pilot.","authors":"Ben W Mol, Shimona Lai, Ayesha Rahim, Esmée M Bordewijk, Rui Wang, Rik van Eekelen, Lyle C Gurrin, Jim G Thornton, Madelon van Wely, Wentao Li","doi":"10.1186/s41073-023-00130-8","DOIUrl":"10.1186/s41073-023-00130-8","url":null,"abstract":"<p><strong>Objectives: </strong>To propose a checklist that can be used to assess trustworthiness of randomized controlled trials (RCTs).</p><p><strong>Design: </strong>A screening tool was developed using the four-stage approach proposed by Moher et al. This included defining the scope, reviewing the evidence base, suggesting a list of items from piloting, and holding a consensus meeting. The initial checklist was set-up by a core group who had been involved in the assessment of problematic RCTs for several years. We piloted this in a consensus panel of several stakeholders, including health professionals, reviewers, journal editors, policymakers, researchers, and evidence-synthesis specialists. Each member was asked to score three articles with the checklist and the results were then discussed in consensus meetings.</p><p><strong>Outcome: </strong>The Trustworthiness in RAndomised Clinical Trials (TRACT) checklist includes 19 items organised into seven domains that are applicable to every RCT: 1) Governance, 2) Author Group, 3) Plausibility of Intervention Usage, 4) Timeframe, 5) Drop-out Rates, 6) Baseline Characteristics, and 7) Outcomes. Each item can be answered as either no concerns, some concerns/no information, or major concerns. If a study is assessed and found to have a majority of items rated at a major concern level, then editors, reviewers or evidence synthesizers should consider a more thorough investigation, including assessment of original individual participant data.</p><p><strong>Conclusions: </strong>The TRACT checklist is the first checklist developed specifically to detect trustworthiness issues in RCTs. It might help editors, publishers and researchers to screen for such issues in submitted or published RCTs in a transparent and replicable manner.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"6"},"PeriodicalIF":7.2,"publicationDate":"2023-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10280869/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10066264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Responsible research practices could be more strongly endorsed by Australian university codes of research conduct.","authors":"Yi Kai Ong, Kay L Double, Lisa Bero, Joanna Diong","doi":"10.1186/s41073-023-00129-1","DOIUrl":"https://doi.org/10.1186/s41073-023-00129-1","url":null,"abstract":"<p><strong>Background: </strong>This study aimed to investigate how strongly Australian university codes of research conduct endorse responsible research practices.</p><p><strong>Methods: </strong>Codes of research conduct from 25 Australian universities active in health and medical research were obtained from public websites, and audited against 19 questions to assess how strongly they (1) defined research integrity, research quality, and research misconduct, (2) required research to be approved by an appropriate ethics committee, (3) endorsed 9 responsible research practices, and (4) discouraged 5 questionable research practices.</p><p><strong>Results: </strong>Overall, a median of 10 (IQR 9 to 12) of 19 practices covered in the questions were mentioned, weakly endorsed, or strongly endorsed. Five to 8 of 9 responsible research practices were mentioned, weakly, or strongly endorsed, and 3 questionable research practices were discouraged. Results are stratified by Group of Eight (n = 8) and other (n = 17) universities. Specifically, (1) 6 (75%) Group of Eight and 11 (65%) other codes of research conduct defined research integrity, 4 (50%) and 8 (47%) defined research quality, and 7 (88%) and 16 (94%) defined research misconduct. (2) All codes required ethics approval for human and animal research. (3) All codes required conflicts of interest to be declared, but there was variability in how strongly other research practices were endorsed. The most commonly endorsed practices were ensuring researcher training in research integrity [8 (100%) and 16 (94%)] and making study data publicly available [6 (75%) and 12 (71%)]. The least commonly endorsed practices were making analysis code publicly available [0 (0%) and 0 (0%)] and registering analysis protocols [0 (0%) and 1 (6%)]. (4) Most codes discouraged fabricating data [5 (63%) and 15 (88%)], selectively deleting or modifying data [5 (63%) and 15 (88%)], and selective reporting of results [3 (38%) and 15 (88%)]. No codes discouraged p-hacking or hypothesising after results are known.</p><p><strong>Conclusions: </strong>Responsible research practices could be more strongly endorsed by Australian university codes of research conduct. Our findings may not be generalisable to smaller universities, or those not active in health and medical research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"5"},"PeriodicalIF":0.0,"publicationDate":"2023-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10242962/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9591647","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.","authors":"Mohammad Hosseini, Serge P J M Horbach","doi":"10.1186/s41073-023-00133-5","DOIUrl":"10.1186/s41073-023-00133-5","url":null,"abstract":"<p><strong>Background: </strong>The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks including writing peer review reports could result in improved productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports too would be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.</p><p><strong>Methods: </strong>To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding identified issues.</p><p><strong>Results: </strong>LLMs have the potential to substantially alter the role of both peer reviewers and editors. Through supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher quality review and address issues of review shortage. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raise concerns about potential biases, confidentiality and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements in a short period and expect LLMs to continue developing.</p><p><strong>Conclusions: </strong>We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. 
For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and their reports' accuracy, tone, reasoning and originality.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"4"},"PeriodicalIF":7.2,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9849534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis.","authors":"Karen B Schmaling, Stephen A Gallo","doi":"10.1186/s41073-023-00127-3","DOIUrl":"https://doi.org/10.1186/s41073-023-00127-3","url":null,"abstract":"<p><strong>Background: </strong>Differential participation and success in grant applications may contribute to women's lesser representation in the sciences. This study's objective was to conduct a systematic review and meta-analysis to address the question of gender differences in grant award acceptance rates and reapplication award acceptance rates (potential bias in peer review outcomes) and other grant outcomes.</p><p><strong>Methods: </strong>The review was registered on PROSPERO (CRD42021232153) and conducted in accordance with PRISMA 2020 standards. We searched Academic Search Complete, PubMed, and Web of Science for the timeframe 1 January 2005 to 31 December 2020, and forward and backward citations. Studies were included that reported data, by gender, on any of the following: grant applications or reapplications, awards, award amounts, award acceptance rates, or reapplication award acceptance rates. Studies that duplicated data reported in another study were excluded. Gender differences were investigated by meta-analyses and generalized linear mixed models. Doi plots and LFK indices were used to assess reporting bias.</p><p><strong>Results: </strong>The searches identified 199 records, of which 13 were eligible. An additional 42 sources from forward and backward searches were eligible, for a total of 55 sources with data on one or more outcomes. The data from these studies ranged from 1975 to 2020: 49 sources were published papers and six were funders' reports (the latter were identified by forwards and backwards searches). Twenty-nine studies reported person-level data, 25 reported application-level data, and one study reported both: person-level data were used in analyses. Award acceptance rates were 1% higher for men, which was not significantly different from women (95% CI 3% more for men to 1% more for women, k = 36, n = 303,795 awards and 1,277,442 applications, I<sup>2</sup> = 84%). Reapplication award acceptance rates were significantly higher for men (9%, 95% CI 18% to 1%, k = 7, n = 7319 applications and 3324 awards, I<sup>2</sup> = 63%). Women received smaller award amounts (g = -2.28, 95% CI -4.92 to 0.36, k = 13, n = 212,935, I<sup>2</sup> = 100%).</p><p><strong>Conclusions: </strong>The proportions of women that applied for grants, re-applied, accepted awards, and accepted awards after reapplication were less than the proportion of eligible women. However, the award acceptance rate was similar for women and men, implying no gender bias in this peer reviewed grant outcome. Women received smaller awards and fewer awards after re-applying, which may negatively affect continued scientific productivity. 
Greater transparency is needed to monitor and verify these data globally.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10155348/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9762431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
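Editor's note: the pooled differences, 95% confidence intervals, and I² values reported in the record above are the standard outputs of a random-effects meta-analysis. The sketch below implements generic DerSimonian-Laird pooling of per-study risk differences; it is not the authors' exact analysis, and the three studies are invented.

```python
# Generic DerSimonian-Laird random-effects pooling of per-study risk differences
# (group 1 minus group 2 award rates). All study data below are hypothetical.
import math

def risk_difference(a1, n1, a2, n2):
    """Effect estimate and variance for the difference in award rates."""
    p1, p2 = a1 / n1, a2 / n2
    rd = p1 - p2
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    return rd, var

def dersimonian_laird(effects):
    y = [e for e, _ in effects]
    v = [s for _, s in effects]
    w = [1 / vi for vi in v]
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
    df = len(y) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = [1 / (vi + tau2) for vi in v]
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

studies = [  # (awards_men, applications_men, awards_women, applications_women)
    (250, 1000, 230, 1000),
    (120, 600, 130, 650),
    (400, 2000, 360, 1900),
]
pooled, ci, i2 = dersimonian_laird([risk_difference(*s) for s in studies])
print(f"pooled difference {pooled:+.3f}, 95% CI {ci[0]:+.3f} to {ci[1]:+.3f}, I2 {i2:.0f}%")
```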
{"title":"Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling.","authors":"Allana G LeBlanc, Joel D Barnes, Travis J Saunders, Mark S Tremblay, Jean-Philippe Chaput","doi":"10.1186/s41073-023-00128-2","DOIUrl":"https://doi.org/10.1186/s41073-023-00128-2","url":null,"abstract":"<p><strong>Background: </strong>There are a variety of costs associated with publication of scientific findings. The purpose of this work was to estimate the cost of peer review in scientific publishing per reviewer, per year and for the entire scientific community.</p><p><strong>Methods: </strong>Internet-based self-report, cross-sectional survey, live between June 28, 2021 and August 2, 2021 was used. Participants were recruited via snowball sampling. No restrictions were placed on geographic location or field of study. Respondents who were asked to act as a peer-reviewer for at least one manuscript submitted to a scientific journal in 2020 were eligible. The primary outcome measure was the cost of peer review per person, per year (calculated as wage-cost x number of initial reviews and number of re-reviews per year). The secondary outcome was the cost of peer review globally (calculated as the number of peer-reviewed papers in Scopus x median wage-cost of initial review and re-review).</p><p><strong>Results: </strong>A total of 354 participants completed at least one question of the survey, and information necessary to calculate the cost of peer-review was available for 308 participants from 33 countries (44% from Canada). The cost of peer review was estimated at $US1,272 per person, per year ($US1,015 for initial review and $US256 for re-review), or US$1.1-1.7 billion for the scientific community per year. The global cost of peer-review was estimated at US$6 billion in 2020 when relying on the Dimensions database and taking into account reviewed-but-rejected manuscripts.</p><p><strong>Conclusions: </strong>Peer review represents an important financial piece of scientific publishing. Our results may not represent all countries or fields of study, but are consistent with previous estimates and provide additional context from peer reviewers themselves. Researchers and scientists have long provided peer review as a contribution to the scientific community. Recognizing the importance of peer-review, institutions should acknowledge these costs in job descriptions, performance measurement, promotion packages, and funding applications. Journals should develop methods to compensate reviewers for their time and improve transparency while maintaining the integrity of the peer-review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"8 1","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10122980/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9776362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}