How good are medical students and researchers in detecting duplications in digital images from research articles: a cross-sectional survey

Antonija Mijatović, Marija Franka Žuljević, Luka Ursić, Nensi Bralić, Miro Vuković, Marija Roguljić, Ana Marušić

Research Integrity and Peer Review, 10(1):14, published 2025-08-08. DOI: https://doi.org/10.1186/s41073-025-00172-0
Abstract
Background: Inappropriate manipulations of digital images pose significant risks to research integrity. Here we assessed the capability of students and researchers to detect image duplications in biomedical images.
Methods: We conducted a pen-and-paper survey involving medical students who had been exposed to research paper images during their studies, as well as active researchers. We asked them to identify duplications in images of Western blots, cell cultures, and histological sections and evaluated their performance based on the number of correctly and incorrectly detected duplications.
Results: A total of 831 students and 26 researchers completed the survey during the 2023/2024 academic year. Out of 34 duplications of 21 unique image parts, the students correctly identified a median of 10 duplications (interquartile range [IQR] = 8-13) and made 2 mistakes (IQR = 1-4), whereas the researchers identified a median of 11 duplications (IQR = 8-14) and made 1 mistake (IQR = 1-3). There were no significant differences between the two groups in either the number of correctly detected duplications (p = .271, Cliff's δ = 0.126) or the number of mistakes (p = .731, Cliff's δ = 0.039). Both students and researchers identified a higher percentage of duplications in the Western blot images than in the cell or tissue images (p < .005 and Cohen's d = 0.72; p < .005 and Cohen's d = 1.01, respectively). For students, gender was a weak predictor of performance, with female participants finding slightly more duplications (p < .005, Cliff's δ = 0.158) but making more mistakes (p < .005, Cliff's δ = 0.239). The study year had no significant impact on student performance (p = .209; Cliff's δ = 0.085).
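The group comparisons above are reported with Cliff's δ, a nonparametric effect size: the probability that a value drawn from one group exceeds a value from the other, minus the reverse probability, ranging from -1 to 1 (0 means no tendency either way). A minimal illustrative sketch of the computation (not the authors' analysis code):

```python
def cliffs_delta(x, y):
    """Cliff's delta effect size.

    Counts, over all (a, b) pairs with a from x and b from y,
    how often a > b versus a < b, and normalizes by the number
    of pairs. Result is in [-1, 1]; 0 indicates no group difference.
    """
    greater = sum(1 for a in x for b in y if a > b)
    less = sum(1 for a in x for b in y if a < b)
    return (greater - less) / (len(x) * len(y))


# Identical score distributions give delta = 0 (no effect),
# while complete dominance of one group gives delta = 1.
print(cliffs_delta([10, 11, 12], [10, 11, 12]))  # 0.0
print(cliffs_delta([12, 13], [8, 9]))            # 1.0
```

Small magnitudes such as the δ = 0.126 and δ = 0.039 reported here are conventionally read as negligible-to-small effects, consistent with the non-significant p values.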
Conclusions: Despite differences in expertise, both students and researchers demonstrated limited proficiency in detecting duplications in digital images. Digital image manipulation may be better detected by automated screening tools, and researchers should have clear guidance on how to prepare digital images in scientific publications.