Journal of Imaging: Latest Articles

MST-AI: Skin Color Estimation in Skin Cancer Datasets.
IF 2.7
Journal of Imaging Pub Date : 2025-07-13 DOI: 10.3390/jimaging11070235
Vahid Khalkhali, Hayan Lee, Joseph Nguyen, Sergio Zamora-Erazo, Camille Ragin, Abhishek Aphale, Alfonso Bellacosa, Ellis P Monk, Saroj K Biswas
The absence of skin color information in skin cancer datasets poses a significant challenge for accurate diagnosis using artificial intelligence models, particularly for non-white populations. In this paper, based on the Monk Skin Tone (MST) scale, which is less biased than the Fitzpatrick scale, we propose MST-AI, a novel method for estimating skin color in images from large datasets such as the International Skin Imaging Collaboration (ISIC) archive. The approach includes automatic frame removal, lesion segmentation and removal using convolutional neural networks, and modeling of normal skin tones with a Variational Bayesian Gaussian Mixture Model (VB-GMM). The distribution of skin color predictions was compared with MST scale probability density functions (PDFs) using the Kullback-Leibler divergence (KLD) metric. Validation against manual annotations and comparison with K-means clustering of image and skin mean RGBs demonstrated the superior performance of MST-AI, with Kendall's Tau, Spearman's Rho, and Normalized Discounted Cumulative Gain (NDCG) of 0.68, 0.69, and 1.00, respectively. This research lays the groundwork for developing unbiased AI models for early skin cancer diagnosis by addressing skin color imbalances in large datasets.
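The distribution-comparison step the abstract describes can be sketched as a discrete Kullback-Leibler divergence between a predicted skin-tone distribution and a reference PDF. This is a minimal illustration, not the paper's implementation; the ten-bin probabilities below are invented for demonstration.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D_KL(P || Q) for discrete distributions.

    A small epsilon avoids log(0); both inputs are renormalized to sum to 1.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical distributions over 10 Monk Skin Tone bins (illustrative only)
predicted = np.array([0.02, 0.05, 0.10, 0.18, 0.25, 0.18, 0.10, 0.07, 0.03, 0.02])
reference = np.array([0.03, 0.06, 0.12, 0.20, 0.22, 0.16, 0.09, 0.06, 0.04, 0.02])

score = kl_divergence(predicted, reference)  # lower means closer to the reference
```

A lower KLD indicates the predicted tone distribution more closely matches a given MST reference, which is how such a score can be used to assign the best-matching tone class.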
Citations: 0
Bone Mineral Density (BMD) Assessment Using Dual-Energy CT with Different Base Material Pairs (BMPs).
IF 2.7
Journal of Imaging Pub Date : 2025-07-13 DOI: 10.3390/jimaging11070236
Stefano Piscone, Sara Saccone, Paola Milillo, Giorgia Schiraldi, Roberta Vinci, Luca Macarini, Luca Pio Stoppino
The assessment of bone mineral density (BMD) is essential for osteoporosis diagnosis. Dual-energy X-ray absorptiometry (DXA) is the current gold standard, but it has limitations in evaluating trabecular bone and is susceptible to various artifacts. In this study we evaluate whether dual-energy computed tomography (DECT) can serve as an alternative method for assessing BMD in a sample of postmenopausal patients undergoing oncological follow-up. A retrospective analysis was conducted on 41 patients who had both DECT and DXA within six months. BMD values were extracted from DECT using five different base material pairs (BMPs) and compared with DXA measurements at the femoral neck. The calcium-fat pairing showed the strongest correlation with DXA-derived BMD (Spearman's ρ = 0.797) and excellent reproducibility (ICC = 0.983). There was a strong and significant association between the DXA results and the various BMP measurements. These findings support the potential of DECT for precise, opportunistic evaluation of BMD changes when employing particular BMPs. This study showed how the technique can be a useful and effective substitute for conventional DXA, particularly for patients already undergoing DECT in oncological follow-up, minimizing additional radiation exposure.
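The agreement statistics the study reports (Spearman's ρ and an intraclass correlation coefficient) can be sketched on hypothetical paired DECT/DXA values. The BMD numbers below are invented, and the ICC(2,1) variant is an assumption for illustration; the paper does not specify which ICC form it used.

```python
import numpy as np
from scipy import stats

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement
    (Shrout & Fleiss convention). `data` has shape (n_subjects, k_raters)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = (((data - grand) ** 2).sum()
              - (n - 1) * ms_rows - (k - 1) * ms_cols)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical paired femoral-neck BMD values (g/cm^2): DECT vs. DXA
dect = np.array([0.71, 0.65, 0.82, 0.59, 0.77, 0.68])
dxa = np.array([0.70, 0.63, 0.85, 0.58, 0.75, 0.69])

rho, _ = stats.spearmanr(dect, dxa)            # rank correlation
icc = icc_2_1(np.column_stack([dect, dxa]))    # absolute-agreement ICC
```

High ρ together with a high ICC is what distinguishes "same ranking" from "same absolute values" — both are needed to argue DECT can substitute for DXA.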
Citations: 0
A Dual-Branch Fusion Model for Deepfake Detection Using Video Frames and Microexpression Features.
IF 2.7
Journal of Imaging Pub Date : 2025-07-11 DOI: 10.3390/jimaging11070231
Georgios Petmezas, Vazgken Vanian, Manuel Pastor Rufete, Eleana E I Almaloglou, Dimitris Zarpalas
Deepfake detection has become a critical issue due to the rise of synthetic media and its potential for misuse. In this paper, we propose a novel approach to deepfake detection that combines video frame analysis with facial microexpression features. The dual-branch fusion model utilizes a 3D ResNet18 for spatiotemporal feature extraction and a transformer model to capture microexpression patterns, which are difficult to replicate in manipulated content. We evaluate the model on the widely used FaceForensics++ (FF++) dataset and demonstrate that our approach outperforms existing state-of-the-art methods, achieving 99.81% accuracy and a perfect ROC-AUC score of 100%. The proposed method highlights the importance of integrating diverse data sources for deepfake detection, addressing some of the current limitations of existing systems.
Citations: 0
Implantation of an Artificial Intelligence Denoising Algorithm Using SubtlePET™ with Various Radiotracers: 18F-FDG, 68Ga PSMA-11 and 18F-FDOPA, Impact on the Technologist Radiation Doses.
IF 2.7
Journal of Imaging Pub Date : 2025-07-11 DOI: 10.3390/jimaging11070234
Jules Zhang-Yin, Octavian Dragusin, Paul Jonard, Christian Picard, Justine Grangeret, Christopher Bonnier, Philippe P Leveque, Joel Aerts, Olivier Schaeffer
This study assesses the clinical deployment of SubtlePET™, a commercial AI-based denoising algorithm, across three radiotracers (18F-FDG, 68Ga-PSMA-11, and 18F-FDOPA), with the goal of improving image quality while reducing injected activity, technologist radiation exposure, and scan time. A retrospective analysis on a digital PET/CT system showed that SubtlePET™ enabled dose reductions exceeding 33% and time savings of over 25%. AI-enhanced images were rated interpretable in 100% of cases versus 65% for standard low-dose reconstructions. Notably, 85% of AI-enhanced scans received the maximum Likert quality score (5/5), indicating excellent diagnostic confidence and noise suppression, compared to only 50% with conventional reconstruction. Quantitative image quality improved significantly across all tracers, with SNR and CNR gains of 50-70%. Radiotracer dose reductions were particularly substantial in low-BMI patients (up to 41% for FDG), and technologist exposure decreased for high-exposure roles. Daily patient throughput increased by an average of 4.84 cases. These findings support the robust integration of SubtlePET™ into routine clinical PET practice, offering improved efficiency, safety, and image quality without compromising lesion detectability.
Citations: 0
E-InMeMo: Enhanced Prompting for Visual In-Context Learning.
IF 2.7
Journal of Imaging Pub Date : 2025-07-11 DOI: 10.3390/jimaging11070232
Jiahao Zhang, Bowen Wang, Hong Liu, Liangzhi Li, Yuta Nakashima, Hajime Nagahara
Large-scale models trained on extensive datasets have become the standard due to their strong generalizability across diverse tasks. In-context learning (ICL), widely used in natural language processing, leverages these models by providing task-specific prompts without modifying their parameters. This paradigm is increasingly being adapted for computer vision, where models receive an input-output image pair, known as an in-context pair, alongside a query image to illustrate the desired output. However, the success of visual ICL largely hinges on the quality of these prompts. To address this, we propose Enhanced Instruct Me More (E-InMeMo), a novel approach that incorporates learnable perturbations into in-context pairs to optimize prompting. Through extensive experiments on standard vision tasks, E-InMeMo demonstrates superior performance over existing state-of-the-art methods. Notably, it improves mIoU scores by 7.99 for foreground segmentation and by 17.04 for single object detection compared to the baseline without learnable prompts. These results highlight E-InMeMo as a lightweight yet effective strategy for enhancing visual ICL.
Citations: 0
Correction: Pegoraro et al. Cardiac Magnetic Resonance in the Assessment of Atrial Cardiomyopathy and Pulmonary Vein Isolation Planning for Atrial Fibrillation. J. Imaging 2025, 11, 143.
IF 2.7
Journal of Imaging Pub Date : 2025-07-11 DOI: 10.3390/jimaging11070233
Nicola Pegoraro, Serena Chiarello, Riccardo Bisi, Giuseppe Muscogiuri, Matteo Bertini, Aldo Carnevale, Melchiore Giganti, Alberto Cossu
In the original publication [...].
Citations: 0
PS-YOLO-seg: A Lightweight Instance Segmentation Method for Lithium Mineral Microscopic Images Based on Improved YOLOv12-seg.
IF 2.7
Journal of Imaging Pub Date : 2025-07-10 DOI: 10.3390/jimaging11070230
Zeyang Qiu, Xueyu Huang, Zhicheng Deng, Xiangyu Xu, Zhenzhong Qiu
Automatic recognition of microscopic images is a core technology for mineral composition analysis and plays a crucial role in advancing intelligent smart-mining systems. To overcome the limitations of traditional lithium ore analysis and meet the challenges of deployment on edge devices, we propose PS-YOLO-seg, a lightweight segmentation model specifically designed for lithium mineral analysis under visible-light microscopy. The network is compressed by adjusting the width factor to reduce global channel redundancy. A PSConv-based downsampling strategy enhances the network's ability to capture dim mineral textures under microscopic conditions. In addition, the improved C3k2-PS module strengthens feature extraction, while the streamlined Segment-Efficient head minimizes redundant computation, further reducing overall model complexity. PS-YOLO-seg achieves slightly improved segmentation performance over the baseline YOLOv12n model on a self-constructed lithium ore microscopic dataset, while reducing FLOPs by 20%, parameter count by 33%, and model size by 32%. It also achieves faster inference, highlighting its potential for practical deployment. This work demonstrates how architectural optimization and targeted enhancements can improve instance segmentation performance while maintaining speed and compactness, offering strong potential for real-time deployment in industrial and edge-computing scenarios.
Citations: 0
Depth-Dependent Variability in Ultrasound Attenuation Imaging for Hepatic Steatosis: A Pilot Study of ATI and HRI in Healthy Volunteers.
IF 2.7
Journal of Imaging Pub Date : 2025-07-09 DOI: 10.3390/jimaging11070229
Alexander Martin, Oliver Hurni, Catherine Paverd, Olivia Hänni, Lisa Ruby, Thomas Frauenfelder, Florian A Huber
Ultrasound attenuation imaging (ATI) is a non-invasive method for quantifying hepatic steatosis, offering advantages over the hepatorenal index (HRI). However, its reliability can be influenced by factors such as measurement depth, ROI size, and subcutaneous fat. This paper examines the impact of these confounders on ATI measurements and discusses diagnostic considerations. In this study, 33 healthy adults underwent liver ultrasound with ATI and HRI protocols. ATI measurements were taken at depths of 2-5 cm below the liver capsule using small and large ROIs. Two operators performed the measurements, and inter-operator variability was assessed. Subcutaneous fat thickness was measured to evaluate its influence on attenuation values. The ATI measurements showed a consistent decrease in attenuation coefficient values with increasing depth, of approximately 0.05 dB/cm/MHz. Larger ROI sizes increased measurement variability due to greater anatomical heterogeneity. HRI values correlated weakly with ATI and were influenced by operator technique and subcutaneous fat, the latter accounting for roughly 2.5% of variability. ATI provides a quantitative assessment of hepatic steatosis compared to HRI, although its accuracy can vary depending on depth and ROI selection. Standardised imaging protocols and AI tools may improve reproducibility and clinical utility, supporting advancements in ultrasound-based liver diagnostics for better patient care.
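The depth trend described above can be illustrated with a simple linear fit. The attenuation values below are hypothetical, assuming the reported ≈0.05 dB/cm/MHz decline applies per centimeter of depth; the re-referencing step is one way such a fitted slope could standardize readings taken at different depths.

```python
import numpy as np

# Hypothetical ATI readings: depth below the liver capsule (cm)
# vs. attenuation coefficient (dB/cm/MHz) -- illustrative values only
depths = np.array([2.0, 3.0, 4.0, 5.0])
atten = np.array([0.62, 0.57, 0.52, 0.47])

# Least-squares line through the readings; slope captures the depth bias
slope, intercept = np.polyfit(depths, atten, 1)

# Re-reference every reading to the shallowest depth (2 cm) by removing
# the fitted depth trend
corrected = atten - slope * (depths - depths[0])
```

With these numbers the fitted slope is -0.05 dB/cm/MHz per cm, and after correction all readings coincide at the 2 cm value, showing how a depth-dependent bias can be factored out before comparing patients.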
Citations: 0
Essential Multi-Secret Image Sharing for Sensor Images.
IF 2.7
Journal of Imaging Pub Date : 2025-07-08 DOI: 10.3390/jimaging11070228
Shang-Kuan Chen
In this paper, we propose an innovative essential multi-secret image sharing (EMSIS) scheme that integrates sensor data to securely and efficiently share multiple secret images of varying importance. Secret images are categorized into hierarchical levels and encoded into essential shadows and fault-tolerant non-essential shares, with access to higher-level secrets requiring higher-level essential shadows. By incorporating sensor data, such as location, time, or biometric input, into the encoding and access process, the scheme enables context-aware, adaptive reconstruction of secrets based on real-world conditions. Experimental results demonstrate that the proposed method not only strengthens hierarchical access control but also enhances robustness, flexibility, and situational awareness in secure image distribution systems.
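Schemes like this build on the basic secret image sharing primitive. As a minimal, hedged sketch of that primitive only (not the paper's hierarchical EMSIS construction), a (2, 2) XOR sharing splits an image into two shares, each individually indistinguishable from noise:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical 4x4 8-bit "sensor image" standing in for a secret image
secret = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)

# Share 1 is pure random noise; share 2 is the secret XOR-ed with it.
# Neither share alone reveals anything about the secret.
share1 = rng.integers(0, 256, size=secret.shape, dtype=np.uint8)
share2 = secret ^ share1

# Combining both shares reconstructs the secret exactly (lossless)
reconstructed = share1 ^ share2
```

Hierarchical variants such as EMSIS layer threshold and access-level structure on top of this idea, so that reconstructing higher-importance images requires the designated essential shadows.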
Citations: 0
Interpretation of AI-Generated vs. Human-Made Images.
IF 2.7
Journal of Imaging Pub Date : 2025-07-07 DOI: 10.3390/jimaging11070227
Daniela Velásquez-Salamanca, Miguel Ángel Martín-Pascual, Celia Andreu-Sánchez
AI-generated content has grown significantly in recent years. Today, AI-generated and human-made images coexist across various settings, including news media, social platforms, and beyond. However, we still know relatively little about how audiences interpret and evaluate these different types of images. The goal of this study was to examine whether image interpretation is influenced by the origin of the image (AI-generated vs. human-made), and whether visual professionalization influences how images are interpreted. To this end, we presented 24 AI-generated images (produced using Midjourney, DALL·E, and Firefly) and 8 human-made images to 161 participants (71 visual professionals and 90 non-professionals). Participants were asked to evaluate each image based on: (1) the source they believed it originated from, (2) its level of realism, and (3) the level of credibility they attributed to it. A total of 5152 responses were collected for each question. Our results reveal that human-made images are more readily recognized as such, whereas AI-generated images are frequently misclassified as human-made. We also find that human-made images are perceived as both more realistic and more credible than AI-generated ones. We conclude that individuals are generally unable to accurately determine the source of an image, which in turn affects their assessment of its credibility.
Citations: 0