{"title":"False authorship: an explorative case study around an AI-generated article published under my name.","authors":"Diomidis Spinellis","doi":"10.1186/s41073-025-00165-z","DOIUrl":"10.1186/s41073-025-00165-z","url":null,"abstract":"<p><strong>Background: </strong>The proliferation of generative artificial intelligence (AI) has facilitated the creation and publication of fraudulent scientific articles, often in predatory journals. This study investigates the extent of AI-generated content in the Global International Journal of Innovative Research (GIJIR), where a fabricated article was falsely attributed to me.</p><p><strong>Methods: </strong>The entire GIJIR website was crawled to collect article PDFs and metadata. Automated scripts were used to extract the number of probable in-text citations, DOIs, affiliations, and contact emails. A heuristic based on the number of in-text citations was employed to identify the probability of AI-generated content. A subset of articles was manually reviewed for AI indicators such as formulaic writing and missing empirical data. Turnitin's AI detection tool was used as an additional indicator. The extracted data were compiled into a structured dataset, which was analyzed to examine human-authored and AI-generated articles.</p><p><strong>Results: </strong>Of the 53 examined articles with the fewest in-text citations, at least 48 appeared to be AI-generated, while five showed signs of human involvement. Turnitin's AI detection scores confirmed high probabilities of AI-generated content in most cases, with scores reaching 100% for multiple papers. The analysis also revealed fraudulent authorship attribution, with AI-generated articles falsely assigned to researchers from prestigious institutions. The journal appears to use AI-generated content both to inflate its standing through misattributed papers and to attract authors aiming to inflate their publication record.</p><p><strong>Conclusions: </strong>The findings highlight the risks posed by AI-generated and misattributed research articles, which threaten the credibility of academic publishing. Ways to mitigate these issues include strengthening identity verification mechanisms for DOIs and ORCIDs, enhancing AI detection methods, and reforming research assessment practices. Without effective countermeasures, the unchecked growth of AI-generated content in scientific literature could severely undermine trust in scholarly communication.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"8"},"PeriodicalIF":7.2,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12107892/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144153024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on policy mechanisms to address funding bias and conflicts of interest in biomedical research: a scoping review.","authors":"S Scott Graham, Quinn Grundy, Nandini Sharma, Jade Shiva Edward, Joshua B Barbour, Justin F Rousseau, Zoltan P Majdik, Lisa Bero","doi":"10.1186/s41073-025-00164-0","DOIUrl":"10.1186/s41073-025-00164-0","url":null,"abstract":"<p><strong>Background: </strong>Industry funding and author conflicts of interest (COI) have been consistently shown to introduce bias into agenda-setting and results-reporting in biomedical research. Accordingly, maintaining public trust, diminishing patient harm, and securing the integrity of the biomedical research enterprise are critical policy priorities. In this context, a coordinated and methodical research effort is required to effectively identify which policy interventions are most likely to mitigate against the risks of funding bias. Subsequently this scoping review aims to identify and synthesize the available research on policy mechanisms designed to address funding bias and COI in biomedical research.</p><p><strong>Methods: </strong>We searched PubMed for peer-reviewed, empirical analyses of policy mechanisms designed to address industry sponsorship of research studies, author industry affiliation, and author COI at any stage of the biomedical research process and published between January 2009 and 28 August 2023. The review identified literature conducting five primary analysis types: (1) surveys of COI policies, (2) disclosure compliance analyses, (3) disclosure concordance analyses, (4) COI policy effects analyses, and (5) studies of policy perceptions and contexts. Most available research is devoted to evaluating the prevalence, nature, and effects of author COI disclosure policies.</p><p><strong>Results: </strong>Six thousand three hundreds eighty five articles were screened, and 81 studies were included. Studies were conducted in 11 geographic regions, with studies of international scope being the most common. Most available research is devoted to evaluating the prevalence, nature, and effects of author COI disclosure policies. This evidence demonstrates that while disclosure policies are pervasive, those policies are not consistently designed, implemented, or enforced. The available evidence also indicates that COI disclosure policies are not particularly effective in mitigating risk of bias or subsequent negative externalities.</p><p><strong>Conclusions: </strong>The results of this review indicate that the COI policy landscape could benefit from a significant shift in the research agenda. The available literature predominantly focuses on a single policy intervention-author disclosure requirements. As a result, new lines of research are needed to establish a more robust evidence-based policy landscape. 
There is a particular need for implementation research, greater attention to the structural conditions that create COI, and evaluation of policy mechanisms other than disclosure.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"6"},"PeriodicalIF":7.2,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12076912/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144060408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction: Raising concerns on questionable ethics approvals - a case study of 456 trials from the Institut Hospitalo-Universitaire Méditerranée Infection.","authors":"Fabrice Frank, Nans Florens, Gideon Meyerowitz-Katz, Jerome Barriere, Eric Billy, Veronique Saada, Alexander Samuel, Jacques Robert, Lonni Besancon","doi":"10.1186/s41073-025-00162-2","DOIUrl":"https://doi.org/10.1186/s41073-025-00162-2","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"7"},"PeriodicalIF":7.2,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063339/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144045630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From 2015 to 2023, eight years of empirical research on research integrity: a scoping review.","authors":"Baptiste Vendé, Anouk Barberousse, Stéphanie Ruphy","doi":"10.1186/s41073-025-00163-1","DOIUrl":"https://doi.org/10.1186/s41073-025-00163-1","url":null,"abstract":"<p><strong>Background: </strong>Research on research integrity (RI) has grown exponentially over the past several decades. Although the earliest publications emerged in the 1980 s, more than half of the existing literature has been produced within the last five years. Given that the most recent comprehensive literature review is now eight years old, the present study aims to extend and update previous findings.</p><p><strong>Method: </strong>We conducted a systematic search of the Web of Science and Constellate databases for articles published between 2015 and 2023. To structure our overview and guide our inquiry, we addressed the following seven broad questions about the field:-What topics does the empirical literature on RI explore? What are the primary objectives of the empirical literature on RI? What methodologies are prevalent in the empirical literature on RI? What populations or organizations are studied in the empirical literature on RI? Where are the empirical studies on RI conducted? Where is the empirical literature on RI published? To what degree is the general literature on RI grounded in empirical research? Additionally, we used the previous scoping review as a benchmark to identify emerging trends and shifts.</p><p><strong>Results: </strong>Our search yielded a total of 3,282 studies, of which 660 articles met our inclusion criteria. All research questions were comprehensively addressed. Notably, we observed a significant shift in methodologies: the reliance on interviews and surveys decreased from 51 to 30%, whereas the application of meta-scientific methods increased from 17 to 31%. In terms of theoretical orientation, the previously dominant \"Bad Apple\" hypothesis declined from 54 to 30%, while the \"Wicked System\" hypothesis increased from 46 to 52%. Furthermore, there has been a pronounced trend toward testing solutions, rising from 31 to 56% at the expense of merely describing the problem, which fell from 69 to 44%.</p><p><strong>Conclusion: </strong>Three gaps highlighted eight years ago by the previous scoping review remain unresolved. Research on decision makers (e.g., scientists in positions of power, policymakers, accounting for 3%), the private research sector and patents (4.7%), and the peer review system (0.3%) continues to be underexplored. Even more concerning, if current trends persist, these gaps are likely to become increasingly problematic.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"5"},"PeriodicalIF":7.2,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12042460/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144058381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personal experience with AI-generated peer reviews: a case study.","authors":"Nicholas Lo Vecchio","doi":"10.1186/s41073-025-00161-3","DOIUrl":"10.1186/s41073-025-00161-3","url":null,"abstract":"<p><strong>Background: </strong>While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.</p><p><strong>Methods: </strong>This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.</p><p><strong>Results: </strong>After alleging the use of generative AI in December 2023, two months of back-and-forth ensued between myself and the journal, leading to my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.</p><p><strong>Conclusions: </strong>Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review where identities of all stakeholders are declared might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"4"},"PeriodicalIF":7.2,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11974187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143796279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How do oncology journals approach plagiarism? A website review.","authors":"Johanna Goldberg, Heather Snijdewind, Céline Soudant, Kendra Godwin, Robin O'Hanlon","doi":"10.1186/s41073-025-00160-4","DOIUrl":"10.1186/s41073-025-00160-4","url":null,"abstract":"<p><strong>Background: </strong>Journals and publishers vary in the methods they use to detect plagiarism, when they implement these methods, and how they respond when plagiarism is suspected both before and after publication. This study aims to determine the policies and procedures of oncology journals for detecting and responding to suspected plagiarism in unpublished and published manuscripts.</p><p><strong>Methods: </strong>We reviewed the websites of each journal in the Oncology category of Journal Citation Reports' Science Citation Index Expanded (SCIE) to determine how they detect and respond to suspected plagiarism. We collected data from each journal's website, or publisher webpages directly linked from journal websites, to ascertain what information about plagiarism policies and procedures is publicly available.</p><p><strong>Results: </strong>There are 241 extant oncology journals included in SCIE, of which 224 (92.95%) have a plagiarism policy or mention plagiarism. Text similarity software or other plagiarism checking methods are mentioned by 207 of these (92.41%, and 85.89% of the 241 total journals examined). These text similarity checks occur most frequently at manuscript submission or initial editorial review. Journal or journal-linked publisher webpages frequently report following guidelines from the Committee on Publication Ethics (COPE) (135, 56.01%).</p><p><strong>Conclusions: </strong>Oncology journals report similar methods for identifying and responding to plagiarism, with some variation based on the breadth, location, and timing of plagiarism detection. Journal policies and procedures are often informed by guidance from professional organizations, like COPE.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"3"},"PeriodicalIF":7.2,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11956406/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143756243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of indications for selectively missing results in comparative registry-based studies in medicine: a meta-research study.","authors":"Paula Starke, Zhentian Zhang, Hannah Papmeier, Dawid Pieper, Tim Mathes","doi":"10.1186/s41073-025-00159-x","DOIUrl":"10.1186/s41073-025-00159-x","url":null,"abstract":"<p><strong>Background: </strong>We assess if there are indications that results of registry-based studies comparing the effectiveness of interventions might be selectively missing depending on the statistical significance (p < 0.05).</p><p><strong>Methods: </strong>Eligibility criteria Sample of cohort type studies that used data from a patient registry, compared two study arms for assessing a medical intervention, and reported an effect for a binary outcome. Information sources We searched PubMed to identify registries in seven different medical specialties in 2022/23. Subsequently, we included all studies that satisfied the eligibility criteria for each of the identified registries and collected p-values from these studies. Synthesis of results We plotted the cumulative distribution of p-values and a histogram of absolute z-scores for visual inspection of selectively missing results because of p-hacking, selective reporting, or publication bias. In addition, we tested for publication bias by applying a caliper test.</p><p><strong>Results: </strong>Included studies Sample of 150 registry-based cohort type studies. Synthesis of results The cumulative distribution of p-values displays an abrupt, heavy increase just below the significance threshold of 0.05 while the distribution above the threshold shows a slow, gradual increase. The p-value of the caliper test with a 10% caliper was 0.011 (k = 2, N = 13).</p><p><strong>Conclusions: </strong>We found that the results of registry-based studies might be selectively missing. Results from registry-based studies comparing medical interventions should be interpreted very cautiously, as positive findings could be a result from p-hacking, publication bias, or selective reporting. Prospective registration of such studies is necessary and should be made mandatory both in regulatory contexts and for publication in journals. Further research is needed to determine the main reasons for selectively missing results to support the development and implementation of more specific methods for preventing selectively missing results.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"2"},"PeriodicalIF":7.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11881244/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit.","authors":"Daivat Bhavsar, Laura Duffy, Hamin Jo, Cynthia Lokker, R Brian Haynes, Alfonso Iorio, Ana Marusic, Jeremy Y Ng","doi":"10.1186/s41073-025-00158-y","DOIUrl":"10.1186/s41073-025-00158-y","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors' responsible use of AI chatbots.</p><p><strong>Methods: </strong>This study performed a cross-sectional audit on the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September 2023-December 2023). Data was categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'no available information' (NAI) were established for each policy element.</p><p><strong>Results: </strong>A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding the authors' use of AI chatbots. No policy allowed authorship for AI chatbots (or other AI tool). Most (49/56 or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors.</p><p><strong>Conclusions: </strong>Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use with more academic publishers having a policy.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"1"},"PeriodicalIF":7.2,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11869395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143532223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Publisher Correction: Developing the Clarity and Openness in Reporting: E3-based (CORE) Reference user manual for creation of clinical study reports in the era of clinical trial transparency.","authors":"Samina Hamilton, Aaron B Bernstein, Graham Blakey, Vivien Fagan, Tracy Farrow, Debbie Jordan, Walther Seiler, Anna Shannon, Art Gertel","doi":"10.1186/s41073-024-00157-5","DOIUrl":"10.1186/s41073-024-00157-5","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"16"},"PeriodicalIF":7.2,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668038/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Publisher Correction: Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency.","authors":"Adam G Dunn, Enrico Coiera, Kenneth D Mandl, Florence T Bourgeois","doi":"10.1186/s41073-024-00154-8","DOIUrl":"10.1186/s41073-024-00154-8","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"13"},"PeriodicalIF":7.2,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660574/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}