{"title":"Identifying common patterns in journals that retracted papers from paper mills: a cross-sectional study.","authors":"Noa Mascato Fontaíña, Cristina Candal-Pedreira, Guadalupe García, Joseph S Ross, Alberto Ruano-Ravina, Lucía Martin-Gisbert","doi":"10.1186/s41073-025-00177-9","DOIUrl":"10.1186/s41073-025-00177-9","url":null,"abstract":"<p><strong>Objectives: </strong>To characterize journals that published, and later retracted, articles originating from paper mills, and to examine associations between paper mill retraction frequency and journal characteristics.</p><p><strong>Methods: </strong>The Retraction Watch database was used to identify papers retracted between January 2020 and December 2022 for originating from paper mills, along with the journals that published them. Data on the total number of articles and journal characteristics were obtained from Web of Science and Journal Citation Reports. Journals were classified based on the frequency of retracted paper mill papers (1, 2-9, ≥ 10 retractions). Logistic regressions were conducted to explore associations between retraction frequency and journal characteristics.</p><p><strong>Results: </strong>One hundred forty-two journals were identified that had retracted 2,051 articles originating from paper mills. Among these, 71 (50%) journals had 1 retraction, 36 (25.4%) had 2-9 retractions, and 35 (24.6%) had ≥ 10 retractions; 4 (2.8%) journals had > 100 retractions. These journals, regardless of paper mill retraction number, were mainly in the second (35.2%) and third (29.6%) quartiles by impact factor. Medicine and health emerged as the predominant subject area, comprising 61.2% of all indexed journal categories. Comparing journals with one retraction to those with ten or more, the proportion of open access articles (72.6% vs. 19.2%) and median editorial times (86 vs. 116 days) differed across groups, although these differences were not statistically significant. An inverse correlation was observed between the proportion of paper mill papers and original articles (Spearman's Rho = -0.1891, 95%CI -0.370 to -0.008). Logistic regressions found no significant association between paper mill retraction number and other variables.</p><p><strong>Conclusion: </strong>This study suggests that paper mill retractions are concentrated in a small number of journals with common characteristics: high open access rates, intermediate impact factor quartiles, a high volume of citable items, and classification in medicine and health categories. Short editorial times may indicate a higher presence of paper mill publications, but more research is needed to examine this factor in depth, as well as the possible influence of acceptance rates.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"21"},"PeriodicalIF":10.7,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12487316/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145202329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring ethical elements in reporting guidelines: results from a research-on-research study.","authors":"Clovis Mariano Faggion, Carla Brigitte Susan Kohl","doi":"10.1186/s41073-025-00180-0","DOIUrl":"10.1186/s41073-025-00180-0","url":null,"abstract":"<p><strong>Background: </strong>Reporting guidelines are key tools for enhancing the transparency and reproducibility of research. To support responsible reporting, such guidelines should also address ethical considerations. However, the extent to which these elements are integrated into reporting checklists remains unclear. This study aimed to evaluate how ethical elements are incorporated in these guidelines.</p><p><strong>Methods: </strong>We identified reporting guidelines indexed on the \"Enhancing the Quality and Transparency of Health Research (EQUATOR) Network\" website. On 30 January 2025, a random sample of 128 reporting guidelines and extensions was drawn from a total of 657. For each, we retrieved the associated development publication and extracted data into a standardised table. The assessed ethical elements included COI disclosure, sponsorship, authorship criteria, data sharing guidance, and protocol development and study registration. Data extraction for the first 13 guidelines was conducted independently and in duplicate. After achieving 100% agreement, the remaining data were extracted by one author, following \"A MeaSurement Tool to Assess Systematic Reviews\" (AMSTAR)-2 recommendations.</p><p><strong>Results: </strong>The dataset comprised 101 original guidelines and 27 extensions of existing guidelines. Half of the included guidelines were published from 2015 onward, with 32.0% published between 2020 and 2024. The median year of publication was 2016. Approximately 90 of the 128 assessed guidelines focused on clinical studies. Over 70% of the guidelines did not include items related to conflicts of interest (COI) or sponsorship. Only 8.6% addressed COI and sponsorship jointly in a single item, while fewer than 9% covered them as two separate items. Notably, only two guidelines (1.6%) provided instructions for using the ICMJE disclosure form to report potential conflicts of interest. Nearly 20% of the guidelines offered guidance on study registration. Fewer than 30% recommended the development of a research protocol, and only 18.8% provided guidance on protocol sharing. Additionally, fewer than 10% of the checklists included guidance on authorship criteria or data sharing.</p><p><strong>Conclusion: </strong>Ethical considerations are insufficiently addressed in current reporting guidelines. The absence of standardised items on COIs, funding, authorship, and data sharing represents a missed opportunity to promote transparency and research integrity. Future updates to reporting guidelines should systematically incorporate these elements.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"20"},"PeriodicalIF":10.7,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12452000/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145115404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attitudes and perceptions of biomedical journal editors in chief towards the use of artificial intelligence chatbots in the scholarly publishing process: a cross-sectional survey.","authors":"Jeremy Y Ng, Malvika Krishnamurthy, Gursimran Deol, Wid Al-Zahraa Al-Khafaji, Vetrivel Balaji, Magdalene Abebe, Jyot Adhvaryu, Tejas Karrthik, Pranavee Mohanakanthan, Adharva Vellaparambil, Lex M Bouter, R Brian Haynes, Alfonso Iorio, Cynthia Lokker, Hervé Maisonneuve, Ana Marušić, David Moher","doi":"10.1186/s41073-025-00178-8","DOIUrl":"10.1186/s41073-025-00178-8","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence chatbots (AICs) are designed to mimic human conversations through text or speech, offering both opportunities and challenges in scholarly publishing. While journal policies on AICs are becoming more defined, there is still a limited understanding of how editors in chief (EiCs) of biomedical journals view these tools. This survey examined EiCs' attitudes and perceptions towards the use of AICs in the scholarly publishing process, highlighting positive aspects, such as language and grammar support, and concerns regarding setup time, training requirements, and ethical considerations.</p><p><strong>Methods: </strong>A cross-sectional survey was conducted, targeting EiCs of biomedical journals across multiple publishers. Of 3725 journals screened, 3381 eligible emails were identified through web scraping and manual verification. Survey invitations were sent to all identified EiCs. The survey remained open for five weeks, with three follow-up email reminders.</p><p><strong>Results: </strong>The survey had a response rate of 16.5% (510 total responses) and a completion rate of 87.0%. Most respondents were familiar with AICs (66.7%); however, most had not utilized AICs in their editorial work (83.7%), and many expressed interest in further training (64.4%). EiCs acknowledged benefits such as language and grammar support (70.8%) but expressed mixed attitudes on AIC roles in accelerating peer review. Concerns included the initial time and resources required for setup (83.7%), training needs (83.9%), and ethical considerations (80.6%).</p><p><strong>Conclusions: </strong>This study found that EiCs have mixed attitudes toward AICs, with some EiCs acknowledging their potential to enhance editorial efficiency, particularly in tasks like language editing, while others expressed concerns about the ethical implications, the time and resources required for implementation, and the need for additional training.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"19"},"PeriodicalIF":10.7,"publicationDate":"2025-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12416066/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145016838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"I have been scammed in my qualitative research.","authors":"Carole Bandiera, Kate Lowrie, Donna Thomas, Sabuj Kanti Mistry, Elizabeth Harris, Mark F Harris, Parisa Aslani","doi":"10.1186/s41073-025-00179-7","DOIUrl":"10.1186/s41073-025-00179-7","url":null,"abstract":"<p><p>We have been scammed in our online qualitative study by some fraudulent participants who falsely claimed to be pharmacists or community health workers. These participants were interviewed before we discovered that they were not who they claimed to be. In this commentary, we describe key indicators of potential imposters, such as the number of emails received in a short period of time, emails with similar content and address structure, participants having a keen interest in the reimbursement, the camera being switched off during the interview, and inconsistency in the participants' responses. We provide recommendations on how to prevent future fraud, such as promoting the study to a closed network or groups on social media, encouraging participants to provide sources that verify their identity, ensuring that the camera is switched on during the entire interview, discouraging the use of artificial intelligence (AI) to answer questions or generate content, except when AI-based language tools are used to facilitate translation, understanding or communication, providing reimbursements with local vouchers rather than international ones, and, where the participants are healthcare professionals, checking their registration number prior to the interview. It is important for Human Research Ethics Committee members to consider genuine measures to assess participant authenticity and reduce the risk of fraudulent participation. Additionally, universities and research institutions should develop guidance to educate researchers in this area. Published protocols, guidelines and checklists for online qualitative studies, and participant information statements and consent forms should be adapted to prevent and address potential fraud. For example, the COREQ checklist should be updated so that researchers report the actions undertaken to prevent and detect fraud, as well as their experiences and responses if fraud occurred. Fraud compromises the integrity and quality of online research. Urgent actions are needed to raise awareness of this issue within the research community and prevent further occurrences of scams.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"18"},"PeriodicalIF":10.7,"publicationDate":"2025-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reporting of measures against bias in nonclinical published research studies: a journal-based comparison.","authors":"Sara Steele, Tom Lavrijssen, Thomas Steckler","doi":"10.1186/s41073-025-00176-w","DOIUrl":"10.1186/s41073-025-00176-w","url":null,"abstract":"<p><strong>Background: </strong>Historically, systematic review studies of nonclinical published research articles in the life sciences have shown that the overall reporting of information on measures against bias is low. Measures such as randomization, blinding and sample size estimation are mentioned in only a minority of studies. The present study aims to provide an overview of recent reporting standards in a large sample of nonclinical articles, with a focus on statistical information.</p><p><strong>Methods: </strong>Journals were randomly selected from Journal Citation Reports (Clarivate). Biomedical research articles published in 2020 from 10 journals were analyzed for their reporting standards using a checklist.</p><p><strong>Results: </strong>In total, 860 articles were included in the study: 320 describing in vivo methods, 187 describing in vitro methods, and 353 including both in vivo and in vitro methods. The reporting rate of \"randomization\" ranged from 0% to 63% between journals for in vivo articles and from 0% to 4% for in vitro articles. The reporting rate of \"blinded conduct of the experiments\" ranged from 11% to 71% between journals for in vivo articles and from 0% to 86% for in vitro articles.</p><p><strong>Conclusion: </strong>The analysis showed that reporting standards remained low, including where other statistical information is concerned. Additionally, our results suggest that reporting in articles on in vivo experiments is better than in articles on in vitro experiments. Furthermore, important differences in reporting standards between journals seem to exist.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"17"},"PeriodicalIF":10.7,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12398162/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144981881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey.","authors":"Antonija Mijatović, Marija Franka Žuljević, Luka Ursić, Nensi Bralić, Miro Vuković, Marija Roguljić, Ana Marušić","doi":"10.1186/s41073-025-00172-0","DOIUrl":"10.1186/s41073-025-00172-0","url":null,"abstract":"<p><strong>Background: </strong>Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images.</p><p><strong>Methods: </strong>We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications.</p><p><strong>Results: </strong>A total of 831 students and 26 researchers completed the survey during the 2023/2024 academic year. Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8-13), and made 2 mistakes (IQR = 1-4), whereas the researchers identified a median of 11 duplications (IQR = 8-14) and made 1 mistake (IQR = 1-3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff's δ = 0.126) or the number of mistakes (p = .731, Cliff's δ = 0.039). Both students and researchers identified a higher percentage of duplications in the Western blot images than in the cell or tissue images (p < .005 and Cohen's d = 0.72; p < .005 and Cohen's d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158), but making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085).</p><p><strong>Conclusions: </strong>Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images. Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"14"},"PeriodicalIF":10.7,"publicationDate":"2025-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12333226/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144801184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction: Evaluating psychiatry journals' adherence to informed consent guidelines for case reports.","authors":"Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman","doi":"10.1186/s41073-025-00175-x","DOIUrl":"10.1186/s41073-025-00175-x","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"16"},"PeriodicalIF":10.7,"publicationDate":"2025-07-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12309193/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144755332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Systematic review and meta-analysis of quotation inaccuracy in medicine.","authors":"Christopher Baethge, Hannah Jergas","doi":"10.1186/s41073-025-00173-z","DOIUrl":"10.1186/s41073-025-00173-z","url":null,"abstract":"<p><strong>Background: </strong>Quotations are crucial to science but have been shown to be often inaccurate. Quotation errors, that is, a reference not supporting the authors' claim, may still be a significant issue in scientific medical writing. This study aimed to examine the quotation error rate and trends over time in the medical literature.</p><p><strong>Methods: </strong>A systematic search of PubMed, Web of Science, and reference lists for quotation error studies in medicine, without date or language restrictions, identified 46 studies analyzing 32,000 quotations/references. Literature search, data extraction, and risk of bias assessments were performed independently by two raters. Random-effects meta-analyses and meta-regression were used to analyze error rates and trends (protocol pre-registered on OSF).</p><p><strong>Results: </strong>16.9% (95% CI: 14.1%-20.0%) of quotations were incorrect, with approximately half classified as major errors (8.0% [95% CI: 6.4%-10.0%]). Heterogeneity was high, and Egger's test for small study effects remained negative throughout. Meta-regression showed no significant improvement in quotation accuracy over recent years (slope: -0.002 [95% CI: -0.03 to 0.02], p = 0.85). Neither risk of bias nor the number of references was statistically significantly associated with the total error rate, but journal impact factor was: Spearman's ρ = -0.253 (p = 0.043, binomial test, N = 25).</p><p><strong>Conclusions: </strong>Quotation errors remain a problem in the medical literature, with no improvement over time. Addressing this issue requires concerted efforts to improve scholarly practices and editorial processes.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"13"},"PeriodicalIF":10.7,"publicationDate":"2025-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12285159/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144692730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating psychiatry journals' adherence to informed consent guidelines for case reports.","authors":"Ashley J Tsang, John Z Sadler, E Sherwood Brown, Elizabeth Heitman","doi":"10.1186/s41073-025-00171-1","DOIUrl":"10.1186/s41073-025-00171-1","url":null,"abstract":"<p><strong>Background: </strong>Case reports are valuable tools that illustrate and analyze practical scenarios, novel problems, and the effectiveness of interventions. In psychiatry they often explore unique and potentially stigmatizing aspects of mental health, underscoring the importance of confidentiality and informed consent. However, journals' guidance on consent and confidentiality for case reports varies. In 2013, an international expert group developed the CAse REports (CARE) Guidelines for best practices in case reports, which include guidelines for informed consent and de-identification. In 2016, the Committee on Publication Ethics (COPE) issued ethical standards for publishing case reports, calling for written informed consent from featured patients.</p><p><strong>Methods: </strong>Using a cross-sectional approach, we assessed the instructions for authors of 253 indexed psychiatry journals, of which 129 had published English-language case reports in the prior five years. Our research identified and evaluated journals' use of COPE and CARE guidelines on informed consent and de-identification in case reports.</p><p><strong>Results: </strong>Among these 129 journals, 84 (65%) referred to COPE guidelines, and 59 (46%) referenced CARE guidelines. Furthermore, 46 (36%) required informed consent without de-identification, 7 (5%) required only de-identification, and 21 (16%) required both, specifying consent for identifying information. Notably, 40 (31%) lacked informed consent instructions. Of the 82 journals that required informed consent, 69 (85%) required documentation of consent.</p><p><strong>Conclusion: </strong>A decade after the publication of expert guidance, psychiatry journals remain inconsistent in their adherence to ethical guidelines for informed consent in case reports. More attention to clear instructions from journals on informed consent-a notable topic across different fields-would provide an important educational message about both publication ethics and fundamental respect for patients' confidentiality.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"15"},"PeriodicalIF":10.7,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12273215/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144661239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Misidentified cell lines: failures of peer review, varying journal responses to misidentification inquiries, and strategies for safeguarding biomedical research.","authors":"Ralf Weiskirchen","doi":"10.1186/s41073-025-00170-2","DOIUrl":"10.1186/s41073-025-00170-2","url":null,"abstract":"<p><strong>Background: </strong>Continuous cell lines are indispensable in basic and preclinical research. However, cross-contamination, misidentification, and over-passaging affect the validity and reproducibility of biomedical results. Although there have been efforts to highlight this problem for decades, definitive prevention remains a challenge. The International Cell Line Authentication Committee (ICLAC) registry (version 13, 26 April 2024) lists nearly 600 misidentified or contaminated cell lines. The inappropriate use of such cells has led to countless publications containing invalid data, creating a ripple effect of wasted resources, misleading follow-up studies, and compromised evidence-based conclusions.</p><p><strong>Methods: </strong>The ICLAC registry was consulted to identify commonly misidentified cell lines. A literature search of PubMed was performed to identify recent papers using these lines in liver-related experiments. Four publications with questionable conclusions were highlighted, and the editors of the respective journals were informed with short comments or letters to the editor.</p><p><strong>Results: </strong>Reactions from journal editors varied widely. In two cases, the editors quickly published the comments, resulting in transparent corrections. In the third example, the editor conducted an internal investigation without immediately publishing a correction. In the fourth example, the journal declined to address concerns publicly.</p><p><strong>Conclusions: </strong>Misidentified cell lines pose an ongoing threat to scientific rigor. Despite some responsible editorial interventions, the lack of universal standards fosters the dissemination of erroneous data. However, authors, reviewers, and editors have some important tools to prevent publications with misidentified cells by consulting available resources (e.g., ICLAC, Cellosaurus, Research Resource Identification Portal, SciScore™), and adopting consistent procedures to maintain research integrity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"12"},"PeriodicalIF":7.2,"publicationDate":"2025-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12247328/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144610565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}