{"title":"Gender reporting across regions and time in psychological studies: a scoping review of studies in psychological Science between 2019 and 2024.","authors":"Tiantian Chen","doi":"10.1186/s41073-025-00186-8","DOIUrl":"10.1186/s41073-025-00186-8","url":null,"abstract":"<p><strong>Background: </strong>Despite growing calls for gender-responsive psychological research, implementation of gender-related guidelines is underresearched. The Sex and Gender Equity in Research (SAGER) guidelines recommend reporting participants' gender, presenting gender-stratified results, analyzing gender-related data, acknowledging non-binary identities, and distinguishing between biological sex and social gender. This scoping review assessed the extent to which these guidelines are followed.</p><p><strong>Methods: </strong>We included all primary data studies on human participants published in Psychological Science from 2019 to 2024 (n = 699) and assessed their gender reporting practices according to the SAGER guidelines.</p><p><strong>Results: </strong>While 87.8% (n = 614) of studies reported participants' gender, only 35.3% (n = 247) presented gender-stratified results, and 24.2% (n = 169) conducted gender-based analysis. Only 17.2% (n = 120) of studies reported participants' non-binary identities. Regional patterns emerged: Global North studies more frequently reported non-binary identities but less often presented gender-stratified results and conducted gender-based analysis than Global South studies. The U.S.-based studies saw a notable decline in reporting gender-stratified results, from 43.2% (n = 32) in 2022 to 28.1% (n = 16) in 2024.</p><p><strong>Conclusion: </strong>This review reveals persistent inconsistencies in how gender is conceptualized and reported. 
It provides recommendations to improve gender reporting in order to facilitate the production of more accurate and socially relevant knowledge in psychological research.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"11 1","pages":"2"},"PeriodicalIF":10.7,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12817501/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146004804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reverse spin bias: preliminary observations of reporting bias in medical systematic reviews.","authors":"Renée O'Leary, Giusy Rita Maria La Rosa, Riccardo Polosa","doi":"10.1186/s41073-025-00185-9","DOIUrl":"10.1186/s41073-025-00185-9","url":null,"abstract":"<p><strong>Background: </strong>While conducting an umbrella review of e-cigarettes for smoking cessation, we observed that in many instances, systematic review authors reported findings favorable to the treatment, yet they declined to recommend it or recommended against it despite the evidence of its effectiveness in their own systematic reviews.</p><p><strong>Existing literature: </strong>We searched the literature for a term or category to describe this form of reporting bias where the authors' recommendations dismiss their findings of treatment benefit. Ideally the term spin bias should apply to any conclusion or recommendation not supported by the findings of the study, but in practice spin bias is almost exclusively applied to the narrative attribution of significance or causation to statistically non-significant data or findings.</p><p><strong>Issue under discussion: </strong>After observing that many systematic review authors dismissed their findings of effectiveness for e-cigarettes for cessation, we wondered if this form of reporting bias also occurs in the systematic reviews on other controversial treatments. We made a rapid search for recent systematic reviews on medical cannabis for pain, another controversial treatment. Here also we observed that many authors did not recommend cannabis for pain management even though their findings clearly showed treatment benefit. We tentatively offer the term reverse spin bias for the narrative discounting or dismissal of statistically significant findings. We catalogued the narrative turns that enabled reverse spin bias in 20 systematic reviews of e-cigarettes for cessation and medical cannabis for pain. 
We identified five mechanisms: discount the evidence base, discredit the primary studies, appeal to fear, dismiss the treatment modality a priori, and omit findings. We speculate that authors introduce reverse spin bias to improve their chances for publication or to support their position about a treatment.</p><p><strong>Conclusion: </strong>A standard task for editors and peer reviewers is confirming that treatment recommendations are supported by the review's data, yet our examples strongly suggest that this examination for reporting bias is frequently skipped. By proposing a new term, reverse spin bias, we hope to bring stronger scrutiny to bear on these instances of reporting bias that are detrimental to evidence-informed clinical practice.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"11 1","pages":"1"},"PeriodicalIF":10.7,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12784479/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145936661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of research ethics committees in addressing optimism in sample size calculations: a meta-epidemiological study.","authors":"Marieke S Jansen, Rolf H H Groenwold, Olaf M Dekkers","doi":"10.1186/s41073-025-00184-w","DOIUrl":"10.1186/s41073-025-00184-w","url":null,"abstract":"<p><strong>Background: </strong>Sample size calculations are critical in clinical trial design, yet hypothesised effect sizes are often overly optimistic, leading to underpowered studies. Research ethics committees (RECs) assess trial protocols, including sample size justification, but their role in mitigating optimism bias in sample size calculations is not well studied.</p><p><strong>Methods: </strong>We descriptively analysed 50 clinical trial protocols approved by a Dutch REC (2015-2018) with available primary outcome results. We examined REC comments on sample size calculations, protocol modifications during ethics review and amendments, and discrepancies between target and observed effect sizes. For comparability, effect sizes were standardised.</p><p><strong>Results: </strong>Nine (18%) trials received REC comments on sample size calculations, mainly addressing calculation errors (n = 5), missing parameters (n = 2), or other methodological considerations (n = 3), with only three comments (6%) requesting effect size justification. Seven (14%) trials modified their sample size calculation during ethics review, mostly in response to REC comments, and 10 (20%) trials made modifications in amendments. In total, 40 (80%) trials overestimated their target effect size. Across all trials, the target effect size was overestimated by a median of 0.22 [IQR: 0.03 - 0.41]. 
Changes during ethics review led to less overestimation for only one trial, which reflected the correction of a calculation error rather than a reassessment of assumptions.</p><p><strong>Conclusions: </strong>Optimism in sample size calculations is common, but the influence of REC feedback on reducing overestimation appears limited. As this was a small, descriptive study from a single Dutch REC in 2015-2018, findings may not generalise to other settings or more recent practice. Future research should validate these findings and may help identify characteristics associated with overestimation, supporting RECs in recognising trials at risk of being underpowered.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"26"},"PeriodicalIF":10.7,"publicationDate":"2025-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12699925/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145745991","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using reporting guidelines to improve the reproducibility of cooking Christmas tree meringues: the \"People tasting trees\" cluster-randomised controlled trial.","authors":"Constant Vinatier, Emma Fahed, Yoann Chollet, Laura Caquelin, Sylvie Jaillard, Veerle Van den Eynden, Magdalena Kozula, Florian Naudet","doi":"10.1186/s41073-025-00167-x","DOIUrl":"10.1186/s41073-025-00167-x","url":null,"abstract":"<p><strong>Objectives: </strong>To test whether improving a Christmas tree meringue recipe using reporting guidelines yields more appealing and sweeter meringues.</p><p><strong>Design: </strong>A prospective, superiority, single-blind, cluster-randomised (1:1) controlled trial.</p><p><strong>Setting: </strong>A public participatory event in a large cultural facility in France.</p><p><strong>Participants: </strong>Budding chefs with basic culinary skills, possessing the utensils necessary for baking Christmas tree meringues, and not having burned pasta in the past month (for safety reasons). Bunding chefs represent the cluster and meringue the unit.</p><p><strong>Interventions: </strong>Each budding chef was randomised to a standard recipe for making Christmas tree meringues or to the same recipe written in consultation with a professional baker using the TIDieR checklist-a reporting guideline for description of complex interventions-plus a short video tutorial.</p><p><strong>Main outcome measures: </strong>The primary outcome was reproducibility in terms of visual aspect. Secondary outcomes included colour, size, taste and survival time in the course of a sale organised as part of the public event. The visual aspect, colour and size was rated by an independent jury which compared the cooked Christmas tree meringues with the recipe picture on a scale from 1 to 10. 
Analyses were performed in intention-to-eat (randomization unit: budding chefs / analysis unit: Christmas trees).</p><p><strong>Results: </strong>60 budding chefs (30 in each group) baked a total of 845 Christmas tree meringues. There was no significant difference between the groups (mean difference = -0.1; [95%CI -0.99; 0.80]; p-value = 0.84; intra-cluster correlation, ICC = 0.77) on visual aspect. No difference was found for reproducibility in terms of colour (mean difference = -0.31; [95%CI -0.97; 0.35]; p-value = 0.35; ICC = 0.67) or size (mean difference = -0.17; [95%CI -1.07; 0.73]; p-value = 0.71; ICC = 0.74). There was no significant difference in terms of taste between the groups (mean difference = -0.55; [95%CI -1.62; 0.52]; p-value = 0.31). 400 meringues were sold during the public event with no difference in survival time between groups (hazard ratio = 1.26 [95% CI 0.75-2.09], p-value = 0.38, with values > 1 in favour of the control group). For example, 75% of meringues survived for 134 min in the intervention group and for 124 min in the control group.</p><p><strong>Conclusions: </strong>Our study failed to demonstrate that an improved recipe using the TIDieR reporting guideline with a video tutorial improved the reproducibility in terms of visual aspect, colour, size, taste and sales for Christmas tree meringues. 
The best way to succeed in reproducing Christmas tree meringues like those showcased by the recipe-and thereby to improve the reproducibility of experiments-remains a mystery still to be solved by further explorations.</p><p><strong>Trial registration: </strong>https://osf.io/dnhbx .</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"22"},"PeriodicalIF":10.7,"publicationDate":"2025-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12673745/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145662771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The disclosure of potential conflicts of interest among editors and members of editorial boards in leading ethics journals.","authors":"Clovis Mariano Faggion","doi":"10.1186/s41073-025-00181-z","DOIUrl":"10.1186/s41073-025-00181-z","url":null,"abstract":"<p><strong>Background and aim: </strong>The International Committee of Medical Journal Editors (ICMJE) defines a potential conflict of interest (COI) as a situation where professional judgment could be influenced by secondary interests. Competing interests can introduce bias into the peer-review process, making it essential for all participants to declare any potential COIs. While authors are currently required to disclose their COIs, editors and editorial board members are not held to the same standard. This study aimed to evaluate the extent to which editors and editorial board members of ethics journals report their potential competing interests.</p><p><strong>Methods: </strong>From October 23 to November 1, 2024, 82 ethics journals selected based on their impact factors were assessed, focusing on the disclosure of potential COIs by editors and editorial board members. Journal websites were examined to determine how editors and board members disclose potential COIs. Additionally, publisher websites were assessed for policies guiding these individuals in reporting COIs during peer review.</p><p><strong>Results: </strong>Only 2% of the journals disclosed potential COIs for their editors, and 13% provided biographical information about editorial members. None of the journals employed a structured reporting approach, such as the ICMJE disclosure form, despite most claiming adherence to ICMJE and COPE guidelines. 
There was considerable variability in how journals and publishers guided their editors and board members in reporting their own COIs.</p><p><strong>Conclusion: </strong>The findings indicate that disclosures of potential COIs by editors and editorial board members in leading ethics journals are often inconsistent and insufficient. Increasing transparency in this area could lead to a fairer and more trustworthy peer-review process.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"25"},"PeriodicalIF":10.7,"publicationDate":"2025-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12636210/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145566699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research methodology education in Europe: a multi-country, cross-disciplinary survey of current practices and perspectives.","authors":"Silke Kniffert, Ivan Buljan, Flavio Azevedo, Peter Babinčák, Lucija Batinović, Thomas Rhys Evans, Sara Garofalo, Christopher Graham, Lucianne Groenink, Malika Ihle, Miloslav Klugar, Lucia Kočišová, Michal Kohút, Nikolaos Kostomitsopoulos, Seán Lacey, Anita Lunić, Ana Marušić, Thomas Nordström, Charlotte R Pennington, Daniel Pizzolato, Ulf Toelch, Marta Topor, Miro Vuković, Michiel R de Boer","doi":"10.1186/s41073-025-00183-x","DOIUrl":"10.1186/s41073-025-00183-x","url":null,"abstract":"<p><strong>Background: </strong>Research methodology education aims to equip students with the foundational knowledge of robust scientific practices, emphasizing deep understanding of scientific inquiry, integrity, and critical thinking in research practice. A literature review reveals that the observed diversity in research methods course design and instruction stems from a lack of consensus about the essential foundations required to critically engage with, design, and execute research in education. This is further compounded by a limited pedagogical innovation. However, no study has yet investigated how research methodology is taught and perceived across European universities. The objective of this study is to examine practices and attitudes regarding teaching research methodology in different European countries, across different disciplines and different training stages to identify commonalities and discrepancies.</p><p><strong>Methods: </strong>A cross-sectional survey was designed based on the Structure of Observed Learning Outcome (SOLO) taxonomy and further developed in several rounds of expert input and feedback, ensuring comprehensive inclusion of diverse teaching formats and assessment types. 
The survey was distributed to research methodology and non-research methodology higher education teachers across Europe through stratified and snowball sampling methods.</p><p><strong>Results: </strong>The survey was completed by 559 respondents across 24 countries and seven disciplinary categories. The findings identified a predominant reliance on traditional passive teaching formats, such as face-to-face or online lectures. Active methods such as flipped classroom (8.4% Bachelor, 4.8% Master, 2.3% PhD) and protocol writing (8.2% Bachelor, 6.6% Master, 3.9% PhD) were less frequently used. Written exams dominated assessment strategies at all levels. Across our stratification levels, all topics were rated very important, with hypothesis formulation, research integrity, and study design as the most necessary topics, while pre-registration, peer review, and data management plan were prioritized slightly less.</p><p><strong>Conclusions: </strong>These findings reveal relative homogeneity in research methodology teaching across academic levels and disciplines in Europe. The persistence of passive teaching formats and the limited adoption of active methodologies reflect an untapped opportunity to improve the effectiveness of research methodology education in fostering critical thinking and ethical practices. 
Higher education institutions need to reevaluate research methodology curricula to better align with contemporary research demands.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"24"},"PeriodicalIF":10.7,"publicationDate":"2025-11-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12621402/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145535098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI in peer review: can artificial intelligence be an ally in reducing gender and geographical gaps in peer review? A randomized trial.","authors":"André L Teixeira","doi":"10.1186/s41073-025-00182-y","DOIUrl":"10.1186/s41073-025-00182-y","url":null,"abstract":"<p><strong>Background: </strong>Gender and geographical disparities have been widely reported in the peer-review process of biomedical journals. Artificial Intelligence (AI) is increasingly transforming the publishing system; however, its potential to identify suitable reviewers, and whether it might reduce, replicate or reinforce existing biases in peer review has never been comprehensively investigated. This study sought to determine the usefulness of AI in identifying expert scientists in medicine taking into consideration gender and geographical diversity, equity and inclusion (DEI).</p><p><strong>Methods: </strong>The title and abstract of 50 research articles published in high-impact biomedical journals between November 2023 and September 2024 were fed into a large language model software (GPT-4o), which was prompted to identify 20 distinguished scientists in the study's field. Two trials were randomly performed with and without a gender and geographical DEI prompt. Scientists were classified based on gender, geographical location, and country of affiliation income level. Furthermore, the number of peer-reviewed publications, Google Scholar-derived total citations and h-index were computed.</p><p><strong>Results: </strong>Without a DEI prompt, GPT-4o primarily identified male scientists (68%) and those affiliated to high-income countries (95.3%). Conversely, when DEI was explicitly prompted, GPT-4o generated a gender-balanced (51% females) and geographically diverse list of scientists. 
Specifically, the proportion of scientists from high-income countries decreased to 42.3%, while representation from upper-middle (3.2% to 26.2%), lower-middle (1.2% to 26.1%), and low-income (0.2% to 5.4%) countries significantly increased. The number of publications (without vs. with DEI: 284 ± 237 vs. 281 ± 245, P = 0.77), citations (48,445 ± 60,270 vs. 53,792 ± 71,903, P = 0.13), and h-index (79 ± 43 vs. 76 ± 43, P = 0.15) did not differ between groups.</p><p><strong>Conclusions: </strong>When not prompted to consider DEI, GPT-4o successfully identified expert scientists, but primarily males and those from high-income countries. However, when DEI was explicitly prompted, GPT-4o generated a gender-balanced and geographically diverse list of scientists. Academic productivity was high and comparable between groups, suggesting that GPT-4o identified potentially skilled scientists who could reasonably serve as reviewers for scientific journals. These findings provide evidence that AI can be an ally in combating gender and geographical gaps in peer review, though DEI should be explicitly prompted. Conversely, AI could perpetuate existing biases if not carefully managed.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"23"},"PeriodicalIF":10.7,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12557967/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145373412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identifying common patterns in journals that retracted papers from paper mills: a cross-sectional study.","authors":"Noa Mascato Fontaíña, Cristina Candal-Pedreira, Guadalupe García, Joseph S Ross, Alberto Ruano-Ravina, Lucía Martin-Gisbert","doi":"10.1186/s41073-025-00177-9","DOIUrl":"10.1186/s41073-025-00177-9","url":null,"abstract":"<p><strong>Objectives: </strong>To characterize journals that published and retracted articles retracted for having originated from paper mills and examine associations between paper mill retraction frequency and journal characteristics.</p><p><strong>Methods: </strong>Retraction Watch database was used to identify papers retracted due to originating from paper mills and journals, between January 2020 and December 2022. Data on the total number of articles and journal characteristics were obtained from Web of Science and Journal Citation Reports. Journals were classified based on the frequency of retracted paper mill papers (1, 2-9, ≥ 10 retractions). Logistic regressions were conducted to explore associations between retraction frequency and journal characteristics.</p><p><strong>Results: </strong>One hundred forty-two journals were identified that retracted 2,051 articles from paper mills. Among these, 71 (50%) journals had 1 retraction, 36 (25.4%) had 2-9 retractions, and 35 (24.6%) had ≥ 10 retractions; 4 (2.8%) journals had > 100 retractions. These journals, regardless of paper mill retraction number, were mainly in the second (35.2%) and third (29.6%) quartiles by impact factor. Medicine and health emerged as the predominant subject area, comprising 61.2% of all indexed journal categories. Comparing journals with one retraction to those with ten or more, the proportion of open access articles (72.6% vs. 19.2%) and median editorial times (86 vs. 116 days) differed across groups, although these differences were not statistically significant. 
An inverse correlation was observed between the proportion of paper mill papers and original articles (Spearman's Rho = -0.1891, 95%CI -0.370 to -0.008). Logistic regressions found no significant association between paper mill retraction number and other variables.</p><p><strong>Conclusion: </strong>This study suggests that paper mill retractions are concentrated in a small number of journals with common characteristics: high open access rates, intermediate impact factor quartiles, a high volume of citable items, and classification in medicine and health categories. Short editorial times may indicate a higher presence of paper mill publications, but more research is needed to examine this factor in depth, as well as the possible influence of acceptance rates.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"21"},"PeriodicalIF":10.7,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12487316/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145202329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring ethical elements in reporting guidelines: results from a research-on-research study.","authors":"Clovis Mariano Faggion, Carla Brigitte Susan Kohl","doi":"10.1186/s41073-025-00180-0","DOIUrl":"10.1186/s41073-025-00180-0","url":null,"abstract":"<p><strong>Background: </strong>Reporting guidelines are key tools for enhancing the transparency and reproducibility of research. To support responsible reporting, such guidelines should also address ethical considerations. However, the extent to which these elements are integrated into reporting checklists remains unclear. This study aimed to evaluate how ethical elements are incorporated in these guidelines.</p><p><strong>Methods: </strong>We identified reporting guidelines indexed on the \"Enhancing the Quality and Transparency of Health Research (EQUATOR) Network\" website. On 30 January 2025, a random sample of 128 reporting guidelines and extensions was drawn from a total of 657. For each, we retrieved the associated development publication and extracted data into a standardised table. The assessed ethical elements included COI disclosure, sponsorship, authorship criteria, data sharing guidance, and protocol development and study registration. Data extraction for the first 13 guidelines was conducted independently and in duplicate. After achieving 100% agreement, the remaining data were extracted by one author, following \"A MeaSurement Tool to Assess Systematic Reviews\" (AMSTAR)-2 recommendations.</p><p><strong>Results: </strong>The dataset comprised 101 original guidelines and 27 extensions of existing guidelines. Half of the included guidelines were published from 2015 onward, with 32.0% published between 2020 and 2024. The median year of publication was 2016. Approximately 90 of the 128 assessed guidelines focused on clinical studies. Over 70% of the guidelines did not include items related to conflicts of interest (COI) or sponsorship. 
Only 8.6% addressed COI and sponsorship jointly in a single item, while fewer than 9% covered them as two separate items. Notably, only two guidelines (1.6%) provided instructions for using the ICMJE disclosure form to report potential conflicts of interest. Nearly 20% of the guidelines offered guidance on study registration. Fewer than 30% recommended the development of a research protocol, and only 18.8% provided guidance on protocol sharing. Additionally, fewer than 10% of the checklists included guidance on authorship criteria or data sharing.</p><p><strong>Conclusion: </strong>Ethical considerations are insufficiently addressed in current reporting guidelines. The absence of standardised items on COIs, funding, authorship, and data sharing represents a missed opportunity to promote transparency and research integrity. Future updates to reporting guidelines should systematically incorporate these elements.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"20"},"PeriodicalIF":10.7,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12452000/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145115404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attitudes and perceptions of biomedical journal editors in chief towards the use of artificial intelligence chatbots in the scholarly publishing process: a cross-sectional survey.","authors":"Jeremy Y Ng, Malvika Krishnamurthy, Gursimran Deol, Wid Al-Zahraa Al-Khafaji, Vetrivel Balaji, Magdalene Abebe, Jyot Adhvaryu, Tejas Karrthik, Pranavee Mohanakanthan, Adharva Vellaparambil, Lex M Bouter, R Brian Haynes, Alfonso Iorio, Cynthia Lokker, Hervé Maisonneuve, Ana Marušić, David Moher","doi":"10.1186/s41073-025-00178-8","DOIUrl":"10.1186/s41073-025-00178-8","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence chatbots (AICs) are designed to mimic human conversations through text or speech, offering both opportunities and challenges in scholarly publishing. While journal policies of AICs are becoming more defined, there is still a limited understanding of how Editors in chief (EiCs) of biomedical journals' view these tools. This survey examined EiCs' attitudes and perceptions, highlighting positive aspects, such as language and grammar support, and concerns regarding setup time, training requirements, and ethical considerations towards the use of AICs in the scholarly publishing process.</p><p><strong>Methods: </strong>A cross-sectional survey was conducted, targeting EiCs of biomedical journals across multiple publishers. Of 3725 journals screened, 3381 eligible emails were identified through web scraping and manual verification. Survey invitations were sent to all identified EiCs. The survey remained open for five weeks, with three follow-up email reminders.</p><p><strong>Results: </strong>The survey had a response rate of 16.5% (510 total responses) and a completion rate of 87.0%. Most respondents were familiar with AIs (66.7%), however, most had not utilized AICs in their editorial work (83.7%) and many expressed interest in further training (64.4%). 
EiCs acknowledged benefits such as language and grammar support (70.8%) but expressed mixed attitudes on AIC roles in accelerating peer review. Perceptions included the initial time and resources required for setup (83.7%), training needs (83.9%), and ethical considerations (80.6%).</p><p><strong>Conclusions: </strong>This study found that EiCs have mixed attitudes toward AICs, with some EICs acknowledging their potential to enhance editorial efficiency, particularly in tasks like language editing, while others expressed concerns about the ethical implications, the time and resources required for implementation, and the need for additional training.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"19"},"PeriodicalIF":10.7,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416066/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145016838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}