{"title":"Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.","authors":"Mohammad Hosseini, Serge P J M Horbach","doi":"10.1186/s41073-023-00133-5","DOIUrl":"https://doi.org/10.1186/s41073-023-00133-5","url":null,"abstract":"<p><strong>Background: </strong>The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant or biased) outputs in response to provided prompts, using them in various writing tasks including writing peer review reports could result in improved productivity. Given the significance of peer reviews in the existing scholarly publication landscape, exploring challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports too would be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.</p><p><strong>Methods: </strong>To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding identified issues.</p><p><strong>Results: </strong>LLMs have the potential to substantially alter the role of both peer reviewers and editors. Through supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher quality review and address issues of review shortage. 
However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements in a short period and expect LLMs to continue developing.</p><p><strong>Conclusions: </strong>We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and their reports' accuracy, tone, reasoning and originality.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10191680/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9849534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gender differences in peer reviewed grant applications, awards, and amounts: a systematic review and meta-analysis.","authors":"Karen B Schmaling, Stephen A Gallo","doi":"10.1186/s41073-023-00127-3","DOIUrl":"https://doi.org/10.1186/s41073-023-00127-3","url":null,"abstract":"<p><strong>Background: </strong>Differential participation and success in grant applications may contribute to women's lesser representation in the sciences. This study's objective was to conduct a systematic review and meta-analysis to address the question of gender differences in grant award acceptance rates and reapplication award acceptance rates (potential bias in peer review outcomes) and other grant outcomes.</p><p><strong>Methods: </strong>The review was registered on PROSPERO (CRD42021232153) and conducted in accordance with PRISMA 2020 standards. We searched Academic Search Complete, PubMed, and Web of Science for the timeframe 1 January 2005 to 31 December 2020, and forward and backward citations. Studies were included that reported data, by gender, on any of the following: grant applications or reapplications, awards, award amounts, award acceptance rates, or reapplication award acceptance rates. Studies that duplicated data reported in another study were excluded. Gender differences were investigated by meta-analyses and generalized linear mixed models. Doi plots and LFK indices were used to assess reporting bias.</p><p><strong>Results: </strong>The searches identified 199 records, of which 13 were eligible. An additional 42 sources from forward and backward searches were eligible, for a total of 55 sources with data on one or more outcomes. The data from these studies ranged from 1975 to 2020: 49 sources were published papers and six were funders' reports (the latter were identified by forwards and backwards searches). Twenty-nine studies reported person-level data, 25 reported application-level data, and one study reported both: person-level data were used in analyses. 
Award acceptance rates were 1% higher for men, a difference that was not statistically significant (95% CI from 3% more for men to 1% more for women, k = 36, n = 303,795 awards and 1,277,442 applications, I<sup>2</sup> = 84%). Reapplication award acceptance rates were significantly higher for men (9%, 95% CI 18% to 1%, k = 7, n = 7319 applications and 3324 awards, I<sup>2</sup> = 63%). Women received smaller award amounts (g = -2.28, 95% CI -4.92 to 0.36, k = 13, n = 212,935, I<sup>2</sup> = 100%).</p><p><strong>Conclusions: </strong>The proportions of women that applied for grants, re-applied, accepted awards, and accepted awards after reapplication were less than the proportion of eligible women. However, the award acceptance rate was similar for women and men, implying no gender bias in this peer-reviewed grant outcome. Women received smaller awards and fewer awards after re-applying, which may negatively affect continued scientific productivity. Greater transparency is needed to monitor and verify these data globally.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10155348/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9762431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
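The abstract above reports pooled differences alongside I² heterogeneity statistics. As a rough illustration of how such quantities arise, here is a minimal fixed-effect, inverse-variance sketch (the study itself used meta-analyses and generalized linear mixed models; the function names and example numbers here are illustrative assumptions, not the authors' code):

```python
def pool_fixed_effect(estimates, variances):
    """Inverse-variance weighted pooled estimate and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    return pooled, 1.0 / sum(weights)

def i_squared(estimates, variances):
    """Higgins' I^2: percentage of total variability attributable
    to between-study heterogeneity rather than chance."""
    weights = [1.0 / v for v in variances]
    pooled, _ = pool_fixed_effect(estimates, variances)
    # Cochran's Q statistic
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
```

An I² near 84-100%, as reported above, signals that most of the observed variation between studies reflects genuine heterogeneity rather than sampling error.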
{"title":"Scientific sinkhole: estimating the cost of peer review based on survey data with snowball sampling.","authors":"Allana G LeBlanc, Joel D Barnes, Travis J Saunders, Mark S Tremblay, Jean-Philippe Chaput","doi":"10.1186/s41073-023-00128-2","DOIUrl":"https://doi.org/10.1186/s41073-023-00128-2","url":null,"abstract":"<p><strong>Background: </strong>There are a variety of costs associated with publication of scientific findings. The purpose of this work was to estimate the cost of peer review in scientific publishing per reviewer, per year and for the entire scientific community.</p><p><strong>Methods: </strong>An internet-based, self-report, cross-sectional survey, live between June 28, 2021, and August 2, 2021, was used. Participants were recruited via snowball sampling. No restrictions were placed on geographic location or field of study. Respondents who were asked to act as a peer-reviewer for at least one manuscript submitted to a scientific journal in 2020 were eligible. The primary outcome measure was the cost of peer review per person, per year (calculated as wage-cost x number of initial reviews and number of re-reviews per year). The secondary outcome was the cost of peer review globally (calculated as the number of peer-reviewed papers in Scopus x median wage-cost of initial review and re-review).</p><p><strong>Results: </strong>A total of 354 participants completed at least one question of the survey, and information necessary to calculate the cost of peer-review was available for 308 participants from 33 countries (44% from Canada). The cost of peer review was estimated at US$1,272 per person, per year (US$1,015 for initial review and US$256 for re-review), or US$1.1-1.7 billion for the scientific community per year. 
The global cost of peer-review was estimated at US$6 billion in 2020 when relying on the Dimensions database and taking into account reviewed-but-rejected manuscripts.</p><p><strong>Conclusions: </strong>Peer review represents an important financial piece of scientific publishing. Our results may not represent all countries or fields of study, but are consistent with previous estimates and provide additional context from peer reviewers themselves. Researchers and scientists have long provided peer review as a contribution to the scientific community. Recognizing the importance of peer-review, institutions should acknowledge these costs in job descriptions, performance measurement, promotion packages, and funding applications. Journals should develop methods to compensate reviewers for their time and improve transparency while maintaining the integrity of the peer-review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10122980/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9776362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
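The per-reviewer estimate above is driven by a simple product: wage-cost multiplied by the number of initial reviews and re-reviews per year. A minimal sketch of that calculation (the wage and time figures below are illustrative assumptions, not the survey's actual medians):

```python
def annual_review_cost(hourly_wage, hours_initial, n_initial,
                       hours_rereview, n_rereview):
    """Per-reviewer annual cost of peer review: wage-cost times the
    number of initial reviews, plus wage-cost times re-reviews."""
    return (hourly_wage * hours_initial * n_initial
            + hourly_wage * hours_rereview * n_rereview)

# Illustrative figures only: $50/hour, 5 hours per initial review,
# 4 initial reviews, 2 hours per re-review, 2 re-reviews per year.
cost = annual_review_cost(hourly_wage=50, hours_initial=5, n_initial=4,
                          hours_rereview=2, n_rereview=2)
# 50*5*4 + 50*2*2 = 1200
```

Scaling a per-person figure like this by the number of reviewed manuscripts indexed in a database such as Scopus or Dimensions yields the global estimates quoted in the abstract.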
{"title":"Investigating and preventing scientific misconduct using Benford's Law.","authors":"Gregory M Eckhartt, Graeme D Ruxton","doi":"10.1186/s41073-022-00126-w","DOIUrl":"https://doi.org/10.1186/s41073-022-00126-w","url":null,"abstract":"<p><p>Integrity and trust in that integrity are fundamental to academic research. However, procedures for monitoring the trustworthiness of research, and for investigating cases where concerns about possible data fraud have been raised, are not well established. Here we suggest a practical approach for the investigation of work suspected of fraudulent data manipulation using Benford's Law. This should be of value to both individual peer-reviewers and academic institutions and journals. In this, we draw inspiration from well-established practices of financial auditing. We provide a synthesis of the literature on tests of adherence to Benford's Law, culminating in advice to apply a single initial test to digits in each position of numerical strings within a dataset. We also recommend further tests which may prove useful in the event that specific hypotheses regarding the nature of data manipulation can be justified. Importantly, our advice differs from the most common current implementations of tests of Benford's Law. Furthermore, we apply the approach to previously published data, highlighting the efficacy of these tests in detecting known irregularities. 
Finally, we discuss the results of these tests, with reference to their strengths and limitations.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10088595/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9290217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
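Benford's Law predicts that in many naturally occurring datasets the leading digit d appears with probability log10(1 + 1/d), so genuine data are rich in 1s and 2s while fabricated numbers often are not. As a hedged sketch of this idea, the classic first-digit chi-square comparison can be written as follows (the paper's recommended test covers digits in each position of a number; this simplified first-digit version is an illustration, not the authors' exact procedure):

```python
import math
from collections import Counter

def benford_expected(d):
    """Expected proportion of leading digit d (1-9) under Benford's Law."""
    return math.log10(1 + 1 / d)

def first_digit(x):
    """Leading significant digit of a nonzero number."""
    s = str(abs(x)).lstrip("0.")
    return int(s[0])

def benford_chi_square(values):
    """Chi-square statistic comparing leading-digit frequencies of
    `values` against the Benford distribution (8 degrees of freedom)."""
    digits = [first_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * benford_expected(d)
        observed = counts.get(d, 0)
        chi2 += (observed - expected) ** 2 / expected
    return chi2  # compare against a chi-square critical value, 8 df
```

A large statistic flags a dataset for closer scrutiny; as the authors caution, deviation from Benford's Law is evidence of irregularity, not proof of fraud.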
{"title":"Reducing the Inadvertent Spread of Retracted Science: recommendations from the RISRS report.","authors":"Jodi Schneider, Nathan D Woods, Randi Proescholdt","doi":"10.1186/s41073-022-00125-x","DOIUrl":"10.1186/s41073-022-00125-x","url":null,"abstract":"<p><strong>Background: </strong>Retraction is a mechanism for alerting readers to unreliable material and other problems in the published scientific and scholarly record. Retracted publications generally remain visible and searchable, but the intention of retraction is to mark them as \"removed\" from the citable record of scholarship. However, in practice, some retracted articles continue to be treated by researchers and the public as valid content as they are often unaware of the retraction. Research over the past decade has identified a number of factors contributing to the unintentional spread of retracted research. The goal of the Reducing the Inadvertent Spread of Retracted Science: Shaping a Research and Implementation Agenda (RISRS) project was to develop an actionable agenda for reducing the inadvertent spread of retracted science. This included identifying how retraction status could be more thoroughly disseminated, and determining what actions are feasible and relevant for particular stakeholders who play a role in the distribution of knowledge.</p><p><strong>Methods: </strong>These recommendations were developed as part of a year-long process that included a scoping review of empirical literature and successive rounds of stakeholder consultation, culminating in a three-part online workshop that brought together a diverse body of 65 stakeholders in October-November 2020 to engage in collaborative problem solving and dialogue. 
Stakeholders held roles such as publishers, editors, researchers, librarians, standards developers, funding program officers, and technologists and worked for institutions such as universities, governmental agencies, funding organizations, publishing houses, libraries, standards organizations, and technology providers. Workshop discussions were seeded by materials derived from stakeholder interviews (N = 47) and short original discussion pieces contributed by stakeholders. The online workshop resulted in a set of recommendations to address the complexities of retracted research throughout the scholarly communications ecosystem.</p><p><strong>Results: </strong>The RISRS recommendations are: (1) Develop a systematic cross-industry approach to ensure the public availability of consistent, standardized, interoperable, and timely information about retractions; (2) Recommend a taxonomy of retraction categories/classifications and corresponding retraction metadata that can be adopted by all stakeholders; (3) Develop best practices for coordinating the retraction process to enable timely, fair, unbiased outcomes; and (4) Educate stakeholders about pre- and post-publication stewardship, including retraction and correction of the scholarly record.</p><p><strong>Conclusions: </strong>Our stakeholder engagement study led to 4 recommendations to address inadvertent citation of retracted research, and formation of a working group to develop the Communication of Retractions, Removals, and Expressions of Concern (CORREC) Recommende","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2022-09-19","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9483880/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40371377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction: Characteristics of 'mega' peer-reviewers.","authors":"Danielle B Rice, Ba' Pham, Justin Presseau, Andrea C Tricco, David Moher","doi":"10.1186/s41073-022-00124-y","DOIUrl":"https://doi.org/10.1186/s41073-022-00124-y","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9281154/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40503523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving equity, diversity, and inclusion in academia.","authors":"Omar Dewidar, Nour Elmestekawy, Vivian Welch","doi":"10.1186/s41073-022-00123-z","DOIUrl":"https://doi.org/10.1186/s41073-022-00123-z","url":null,"abstract":"<p><p>There are growing bodies of evidence demonstrating the benefits of equity, diversity, and inclusion (EDI) on academic and organizational excellence. In turn, some editors have stated their desire to improve the EDI of their journals and of the wider scientific community. The Royal Society of Chemistry established a minimum set of requirements aimed at improving EDI in scholarly publishing. Additionally, several resources were reported to have the potential to improve EDI, but their effectiveness and feasibility are yet to be determined. In this commentary we suggest six approaches, based on the Royal Society of Chemistry set of requirements, that journals could implement to improve EDI. They are: (1) adopt a journal EDI statement with clear, actionable steps to achieve it; (2) promote the use of inclusive and bias-free language; (3) appoint a journal's EDI director or lead; (4) establish an EDI mentoring approach; (5) monitor adherence to EDI principles; and (6) publish reports on EDI actions and achievements. 
We also provide examples of journals that have implemented some of these strategies, and discuss the roles of peer reviewers, authors, researchers, academic institutes, and funders in improving EDI.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9251949/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"40470381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ACCORD guideline for reporting consensus-based methods in biomedical research and clinical practice: a study protocol","authors":"William T. Gattrell, Amrit Pali Hungin, Amy Price, Christopher C. Winchester, David Tovey, Ellen L. Hughes, Esther J. van Zuuren, Keith Goldman, Patricia Logullo, Robert Matheis, Niall Harrison","doi":"10.1186/s41073-022-00122-0","DOIUrl":"https://doi.org/10.1186/s41073-022-00122-0","url":null,"abstract":"<h3 data-test=\"abstract-sub-heading\">Background</h3><p>Structured, systematic methods to formulate consensus recommendations, such as the Delphi process or nominal group technique, among others, provide the opportunity to harness the knowledge of experts to support clinical decision making in areas of uncertainty. They are widely used in biomedical research, in particular where disease characteristics or resource limitations mean that high-quality evidence generation is difficult. However, poor reporting of methods used to reach a consensus – for example, not clearly explaining the definition of consensus, or not stating how consensus group panellists were selected – can potentially undermine confidence in this type of research and hinder reproducibility. Our objective is therefore to systematically develop a reporting guideline to help the biomedical research and clinical practice community describe the methods or techniques used to reach consensus in a complete, transparent, and consistent manner.</p><h3 data-test=\"abstract-sub-heading\">Methods</h3><p>The ACCORD (ACcurate COnsensus Reporting Document) project will take place in five stages and follow the EQUATOR Network guidance for the development of reporting guidelines. In Stage 1, a multidisciplinary Steering Committee has been established to lead and coordinate the guideline development process. 
In Stage 2, a systematic literature review will identify evidence on the quality of the reporting of consensus methodology, to obtain potential items for a reporting checklist. In Stage 3, Delphi methodology will be used to reach consensus regarding the checklist items, first among the Steering Committee, and then among a broader Delphi panel comprising participants with a range of expertise, including patient representatives. In Stage 4, the reporting guideline will be finalised in a consensus meeting, along with the production of an Explanation and Elaboration (E&E) document. In Stage 5, we plan to publish the reporting guideline and E&E document in open-access journals, supported by presentations at appropriate events. Dissemination of the reporting guideline, including a website linked to social media channels, is crucial for the document to be implemented in practice.</p><h3 data-test=\"abstract-sub-heading\">Discussion</h3><p>The ACCORD reporting guideline will provide a set of minimum items that should be reported about methods used to achieve consensus, including approaches ranging from simple unstructured opinion gatherings to highly structured processes.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138529742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What works for peer review and decision-making in research funding: a realist synthesis.","authors":"Alejandra Recio-Saucedo, Ksenia Crane, Katie Meadmore, Kathryn Fackrell, Hazel Church, Simon Fraser, Amanda Blatch-Jones","doi":"10.1186/s41073-022-00120-2","DOIUrl":"10.1186/s41073-022-00120-2","url":null,"abstract":"<p><strong>Introduction: </strong>Allocation of research funds relies on peer review to support funding decisions, and these processes can be susceptible to biases and inefficiencies. The aim of this work was to determine which past interventions to peer review and decision-making have worked to improve research funding practices, how they worked, and for whom.</p><p><strong>Methods: </strong>Realist synthesis of peer-review publications and grey literature reporting interventions in peer review for research funding.</p><p><strong>Results: </strong>We analysed 96 publications and 36 website sources. Sixty publications enabled us to extract stakeholder-specific context-mechanism-outcome configurations (CMOCs) for 50 interventions, which formed the basis of our synthesis. Shorter applications, reviewer and applicant training, virtual funding panels, enhanced decision models, institutional submission quotas, and applicant training in peer review and grant-writing reduced interrater variability, increased the relevance of funded research, reduced the time taken to write and review applications, promoted increased investment in innovation, and lowered the cost of panels.</p><p><strong>Conclusions: </strong>Reports of 50 interventions in different areas of peer review provide useful guidance on ways of solving common issues with the peer review process. 
Evidence of the broader impact of these interventions on the research ecosystem is still needed, and future research should aim to identify processes that consistently work to improve peer review across funders and research contexts.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":7.2,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8894828/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"65775168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characteristics of 'mega' peer-reviewers.","authors":"Danielle B Rice, Ba' Pham, Justin Presseau, Andrea C Tricco, David Moher","doi":"10.1186/s41073-022-00121-1","DOIUrl":"https://doi.org/10.1186/s41073-022-00121-1","url":null,"abstract":"<p><strong>Background: </strong>The demand for peer reviewers is often perceived as disproportionate to the supply and availability of reviewers. Considering characteristics associated with peer review behaviour can allow for the development of solutions to manage the growing demand for peer reviewers. The objective of this research was to compare characteristics between two groups of reviewers registered in Publons.</p><p><strong>Methods: </strong>A descriptive cross-sectional study design was used to compare characteristics between (1) individuals completing at least 100 peer reviews ('mega peer reviewers') from January 2018 to December 2018 and (2) a control group of peer reviewers completing between 1 and 18 peer reviews over the same time period. Data was provided by Publons, which offers a repository of peer reviewer activities in addition to tracking peer reviewer publications and research metrics. Mann-Whitney tests and chi-square tests were conducted comparing characteristics (e.g., number of publications, number of citations, word count of peer review) of mega peer reviewers to the control group of reviewers.</p><p><strong>Results: </strong>A total of 1596 peer reviewers had data provided by Publons. A total of 396 mega peer reviewers and a random sample of 1200 control group reviewers were included. A greater proportion of mega peer reviewers were male (92%) as compared to the control reviewers (70% male). Mega peer reviewers demonstrated a significantly greater average number of total publications, citations, receipt of Publons awards, and a higher average h index as compared to the control group of reviewers (all p < .001). 
We found no statistically significant differences in the number of words between the groups (p > .428).</p><p><strong>Conclusions: </strong>Mega peer reviewers registered in the Publons database also had a higher number of publications and citations as compared to a control group of reviewers. Additional research that considers motivations associated with peer review behaviour should be conducted to help inform peer reviewing activity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8862198/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
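The Mann-Whitney comparisons described above rank one group's values against the other's rather than comparing means. A self-contained sketch of the underlying U statistic (the publication counts below are hypothetical, not the Publons data, and a real analysis would also derive a p-value with tie corrections):

```python
def mann_whitney_u(xs, ys):
    """U statistic for xs versus ys: the number of pairs (x, y)
    with x > y, counting ties as one half. No p-value is computed."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical publication counts for illustration only:
mega = [120, 85, 200, 64, 150]
control = [20, 35, 12, 48, 25]
u = mann_whitney_u(mega, control)  # 25.0: every mega value exceeds every control value
```

U near its maximum (len(xs) * len(ys)) indicates one group's values dominate the other's, which is the pattern the study reports for mega peer reviewers' publications and citations.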