{"title":"Decision Strategies in AI-Based Ensemble Models in Opportunistic Alzheimer's Detection from Structural MRI.","authors":"Solveig Kristina Hammonds, Trygve Eftestøl, Kathinka Daehli Kurz, Alvaro Fernandez-Quilez","doi":"10.1007/s10278-025-01604-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01604-5","url":null,"abstract":"<p><p>Alzheimer's disease (AD) is a neurodegenerative condition and the most common form of dementia. Recent developments in AD treatment call for robust diagnostic tools to facilitate medical decision-making. Despite progress for early diagnostic tests, there remains uncertainty about clinical use. Structural magnetic resonance imaging (MRI), as a readily available imaging tool in the current AD diagnostic pathway, in combination with artificial intelligence, offers opportunities of added value beyond symptomatic evaluation. However, MRI studies in AD tend to suffer from small datasets and consequently limited generalizability. Although ensemble models take advantage of the strengths of several models to improve performance and generalizability, there is little knowledge of how the different ensemble models compare performance-wise and the relationship between detection performance and model calibration. The latter is especially relevant for clinical translatability. In our study, we applied three ensemble decision strategies with three different deep learning architectures for multi-class AD detection with structural MRI. For two of the three architectures, the weighted average was the best decision strategy in terms of balanced accuracy and calibration error. In contrast to the base models, the results of the ensemble models showed that the best detection performance corresponded to the lowest calibration error, independent of the architecture. For each architecture, the best ensemble model reduced the estimated calibration error compared to the base model average from (1) 0.174±0.01 to 0.164±0.04, (2) 0.182±0.02 to 0.141±0.04, and (3) 0.269±0.08 to 0.240±0.04 and increased the balanced accuracy from (1) 0.527±0.05 to 0.608±0.06, (2) 0.417±0.03 to 0.456±0.04, and (3) 0.348±0.02 to 0.371±0.03.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Video Transformer for Segmentation of Echocardiography Images in Myocardial Strain Measurement.","authors":"Kuan-Chih Huang, Chang-En Lin, Donna Shu-Han Lin, Ting-Tse Lin, Cho-Kai Wu, Geng-Shi Jeng, Lian-Yu Lin, Lung-Chun Lin","doi":"10.1007/s10278-025-01682-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01682-5","url":null,"abstract":"<p><p>The adoption of left ventricular global longitudinal strain (LVGLS) is still restricted by variability among various vendors and observers, despite advancements from tissue Doppler to speckle tracking imaging, machine learning, and, more recently, convolutional neural network (CNN)-based segmentation strain analysis. While CNNs have enabled fully automated strain measurement, they are inherently constrained by restricted receptive fields and a lack of temporal consistency. Transformer-based networks have emerged as a powerful alternative in medical imaging, offering enhanced global attention. Among these, the Video Swin Transformer (V-SwinT) architecture, with its 3D-shifted windows and locality inductive bias, is particularly well suited for ultrasound imaging, providing temporal consistency while optimizing computational efficiency. In this study, we propose the DTHR-SegStrain model based on a V-SwinT backbone. This model incorporates contour regression and utilizes an FCN-style multiscale feature fusion. As a result, it can generate accurate and temporally consistent left ventricle (LV) contours, allowing for direct calculation of myocardial strain without the need for conversion from segmentation to contours or any additional postprocessing. Compared to EchoNet-dynamic and Unity-GLS, DTHR-SegStrain showed greater efficiency, reliability, and validity in LVGLS measurements. Furthermore, the hybridization experiments assessed the interaction between segmentation models and strain algorithms, reinforcing that consistent segmentation contours over time can simplify strain calculations and decrease measurement variability. These findings emphasize the potential of V-SwinT-based frameworks to enhance the standardization and clinical applicability of LVGLS assessments.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145082770","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From Detection to Radiology Report Generation: Fine-Grained Multi-Modal Alignment with Semi-Supervised Learning.","authors":"Qian Tang, Lijun Liu, Xiaobing Yang, Li Liu, Wei Peng","doi":"10.1007/s10278-025-01650-z","DOIUrl":"https://doi.org/10.1007/s10278-025-01650-z","url":null,"abstract":"<p><p>Radiology report generation plays a critical role in supporting diagnosis, alleviating clinicians' workload, and improving diagnostic accuracy by integrating radiological image content with clinical knowledge. However, most existing models primarily establish coarse-grained mappings between global images and textual reports, often overlooking fine-grained associations between lesion regions and corresponding report content. This limitation affects the accuracy and clinical relevance of the generated reports. To address this, we propose D2R-Net, a lesion-aware radiology report generation model. D2R-Net leverages bounding box annotations for 22 chest diseases to guide the model to focus on clinically significant lesion regions. It employs a global-local dual-branch architecture that fuses global image context with localized lesion features and incorporates a Lesion Region Enhancement Module (LERA) to strengthen the recognition of key lesion regions. Additionally, an implicit alignment mechanism, including Local Alignment Blocks (LAB) and Global Alignment Blocks (GAB), is designed to bridge the semantic gap between visual and textual modalities. Experimental results on the benchmark MIMIC-CXR dataset demonstrate the superior performance of D2R-Net in generating accurate and clinically relevant radiology reports.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145071370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AtTheViewBox: Scrolling Past PowerPoints to a Novel Web-Based Solution for Interactive Case-Based Presentations.","authors":"Michael Fei, Dane Van Tassel, Ianto Xi, Vineeth Gangaram","doi":"10.1007/s10278-025-01620-5","DOIUrl":"https://doi.org/10.1007/s10278-025-01620-5","url":null,"abstract":"<p><p>Traditional radiology education relies heavily on PowerPoint presentations with static 2D images, which fail to replicate the interactive nature of reading radiological studies at a workstation. There is a growing need for an interactive, case-based platform that enables real-time collaboration in presentations. This study introduces AtTheViewBox, a web-based application designed to integrate DICOM images into presentations, offering a more dynamic and interactive learning experience. AtTheViewBox was developed using open-source libraries, including React, CornerstoneJS, and Supabase. The application allows users to embed DICOM images in slide presentations via iframes, enabling standard functionalities at a radiology workstation like scrolling, zooming, and windowing. A survey was conducted among radiology residents and educators from four academic institutions to assess the utility and ease of use of AtTheViewBox compared to traditional teaching methods. Among 27 radiology residents surveyed, 100% agreed that AtTheViewBox would enhance their case-based learning experience, with 93% preferring it over static images or videos. Among 30 educators, the application received an average usefulness rating of 9.5/10. Additionally, 63% of educators found AtTheViewBox as easy or easier to use than their current methods. AtTheViewBox effectively modernizes radiology education by enabling interactive DICOM integration in presentations. This tool enhances learning by mimicking workstation experiences and fostering real-time collaboration. The overwhelmingly positive reception suggests that AtTheViewBox addresses key limitations in current teaching methodologies and has the potential to become a standard in radiology education.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145077001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accuracy of AI-Based Algorithms in Pulmonary Embolism Detection on Computed Tomographic Pulmonary Angiography: An Updated Systematic Review and Meta-analysis.","authors":"Seyed Ali Nabipoorashrafi, Arsalan Seyedi, Razman Arabzadeh Bahri, Amirhossein Yadegar, Mostafa Shomal-Zadeh, Fatemeh Mohammadi, Samira Amin Afshari, Negar Firoozeh, Navida Noroozzadeh, Farbod Khosravi, Sanaz Asadian, Hamid Chalian","doi":"10.1007/s10278-025-01645-w","DOIUrl":"https://doi.org/10.1007/s10278-025-01645-w","url":null,"abstract":"<p><p>Several artificial intelligence (AI) algorithms have been designed for detection of pulmonary embolism (PE) using computed tomographic pulmonary angiography (CTPA). Due to the rapid development of this field and the lack of an updated meta-analysis, we aimed to systematically review the available literature about the accuracy of AI-based algorithms to diagnose PE via CTPA. We searched EMBASE, PubMed, Web of Science, and Cochrane for studies assessing the accuracy of AI-based algorithms. Studies that reported sensitivity and specificity were included. The R software was used for univariate meta-analysis and drawing summary receiver operating characteristic (sROC) curves based on bivariate analysis. To explore the source of heterogeneity, sub-group analysis was performed (PROSPERO: CRD42024543107). A total of 1722 articles were found, and after removing duplicated records, 1185 were screened. Twenty studies with 26 AI models/population met inclusion criteria, encompassing 11,950 participants. Univariate meta-analysis showed a pooled sensitivity of 91.5% (95% CI 85.5-95.2) and specificity of 84.3 (95% CI 74.9-90.6) for PE detection. Additionally, in the bivariate sROC, the pooled area under the curved (AUC) was 0.923 out of 1, indicating a very high accuracy of AI algorithms in the detection of PE. Also, subgroup meta-analysis showed geographical area as a potential source of heterogeneity where the I<sup>2</sup> for sensitivity and specificity in the Asian article subgroup were 60% and 6.9%, respectively. Findings highlight the promising role of AI in accurately diagnosing PE while also emphasizing the need for further research to address regional variations and improve generalizability.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070741","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fractal Analysis of Mandibular Condyles in Patients with Temporomandibular Disorder.","authors":"Esra Yavuz, Selmi Yardimci Tunc, Humeyra Tercanli","doi":"10.1007/s10278-025-01669-2","DOIUrl":"https://doi.org/10.1007/s10278-025-01669-2","url":null,"abstract":"<p><p>Fractal analysis (FA) is a mathematical method used to evaluate irregular and complex shapes. The numerical result obtained from FA is called fractal dimension (FD). FA can detect subtle bone changes in diseases that affect bone microstructures such as temporomandibular disorder (TMD), even when these changes are not visible on radiographs. It provides objective results that can improve clinical diagnosis without creating extra burden for patients. This study aimed to evaluate the relationship between FD values and both the severity of TMD and degenerative changes in temporomandibular joints (TMJ). Specifically, we aimed to assess the diagnostic capacity of FA for TMD. This study included 161 participants. The presence and severity of TMD in the participants were evaluated using the Fonseca Anamnestic Index (FAI). Degenerative bone changes in the participants' mandibular condyles were categorized as flattening, osteophytes, and erosion on panoramic radiographic Images. FA was performed using ImageJ 1.49 software on panoramic radiographs. Data were analyzed using independent samples t-test and one-way ANOVA. Post hoc multiple comparisons were evaluated with the least significant difference test (LSD). Statistical significance was considered at p < 0.05. The severe TMD group had the lowest mean FD value (1.36 ± 0.11), whereas the group with no TMD (1.48 ± 0.11) had the highest mean FD value. In each case, the mean FD value was found to be statistically significantly lower in participants with flattening, osteophyte, or erosion than in those without (p < 0.001 for each comparison). Our main findings suggest that FD values were significantly associated with both the severity of TMD and with each type of degenerative bone changes we investigated. FA may provide valuable, quantitative information to improve the diagnosis of TMD. As such, FA may support clinicians in making early and accurate diagnoses and treatment decisions.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145071293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic Ensemble Transfer Learning with Multi-view Ultrasonography for Improving Thyroid Cancer Diagnostic Reliability.","authors":"Xinyu Zhang, Feng Liu, Vincent Cs Lee, Karishma Jassal, Bruno Di Muzio, James C Lee","doi":"10.1007/s10278-025-01675-4","DOIUrl":"https://doi.org/10.1007/s10278-025-01675-4","url":null,"abstract":"<p><p>Diagnostic decision-making requires the integration of relevant facts and clinician experience. Incorporating the clinical experience from diverse backgrounds is beneficial in a multi-disciplinary model to mitigate uncertainties aroused by incomplete mastery of knowledge. However, current computer-aided diagnostic systems are generally designed using unitary datasets and are challenging to adapt to diverse institutions, leading to the limited reliability of the generated decisions. Accordingly, this study proposes a dynamic ensemble transfer learning-based system that simulates such diversity in its training and structure by integrating knowledge and data. The approach consists of a self-directed model selection scheme, a dynamic weighting mechanism, and a unified weighted ensemble averaging model, tailored for reliable diagnostic decision-making. This study adopts the most rapidly rising malignancy worldwide, thyroid cancer, for evaluation. Two multi-view thyroid ultrasonography datasets with matching tissue diagnosis from over 700 cross-national patients are used to pre-train the individual networks. The learnt knowledge is then transferred to the weighted ensemble averaging model through the dynamic weighting mechanism. The fine-tuned ensemble model is evaluated using an external set of thyroid nodules with radiological risk of malignancy based on the Thyroid Imaging Reporting and Data System. Further, we alter the datasets through up-sampling and down-sampling to evaluate the ensemble model's generalization. Extensive experiments demonstrate that the proposed ensemble model yields promising performance with an area under the curve value between 0.87 and 0.93 under diversified strategies. Benchmarking results show the proposed approach surpasses existing studies and improves diagnostic reliability in thyroid cancer care while guiding subsequent management options.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of ChatGPT-4o in Breast Cancer Screening: Insights from the 5th Edition BI-RADS Atlas and ACR Guidelines.","authors":"Bilgen Mehpare Özer, Eda Nur Korkmaz","doi":"10.1007/s10278-025-01663-8","DOIUrl":"https://doi.org/10.1007/s10278-025-01663-8","url":null,"abstract":"<p><p>The aim of this study is to evaluate the potential, reliability, and limitations of ChatGPT-4o in text-based questions and its effectiveness in clinical decision support processes based on the 5th edition of the BI-RADS Atlas and ACR breast cancer screening guidelines. In this study, a total of 100 questions-50 multiple-choice and 50 true/false-prepared by two radiologists were submitted to ChatGPT-4o between November 5 and 19. The answers provided by ChatGPT-4o were evaluated at baseline and 14 days later by both radiologists for accuracy and comprehensiveness using a Likert scale. Group comparisons were performed using Mann-Whitney U, Wilcoxon tests; response consistency was evaluated with Cohen's Kappa, and overall accuracy differences with a two-proportion z-test. The increase in overall accuracy from 86 to 95% was statistically significant according to the two-proportion z-test (p = .030). Comparisons between the two sessions revealed statistically significant increases in the accuracy (p = .013, r = .35, 95% CI [0.09, 0.61]) and comprehensiveness (p = .014, r = .35, 95% CI [0.09, 0.61]) rates of true/false questions. On the other hand, no significant difference was found between the accuracy (p = .180, r = .19, 95% CI [- 0.09, 0.47]) and comprehensiveness (p = .180, r = .19, 95% CI [- 0.09, 0.47]) rates of multiple-choice questions. In addition, group comparisons evaluating the effect of different question formats on performance revealed no significant difference in terms of accuracy (p = .661, r = - 0.04, 95% CI [- 0.23, 0.16]) and comprehensiveness (p = .708, r = - 0.04, 95% CI [- 0.23, 0.16]). The consistency of ChatGPT-4o responses was supported by Cohen's Kappa coefficients, all statistically significant (p < .001), with 95% confidence intervals ranging from - .038 to 1.084. ChatGPT-4o demonstrated remarkable performance in answering multiple-choice and true-false questions with overall accuracy improving from 86% in the first test to 95% after 14 days. ChatGPT-4o holds significant potential as a clinical decision support tool for healthcare professionals.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145056527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Harnessing Artificial Intelligence for Shoulder Ultrasonography: A Narrative Review.","authors":"Wei-Ting Wu, Yi-Chung Shu, Che-Yu Lin, Consuelo B Gonzalez-Suarez, Levent Özçakar, Ke-Vin Chang","doi":"10.1007/s10278-025-01661-w","DOIUrl":"https://doi.org/10.1007/s10278-025-01661-w","url":null,"abstract":"<p><p>Shoulder pain is a common musculoskeletal complaint requiring accurate imaging for diagnosis and management. Ultrasound is favored for its accessibility, dynamic imaging, and high-resolution soft tissue visualization. However, its operator dependency and variability in interpretation present challenges. Recent advancements in artificial intelligence (AI), particularly deep learning algorithms like convolutional neural networks, offer promising applications in musculoskeletal imaging, enhancing diagnostic accuracy and efficiency. This narrative review explores AI integration in shoulder ultrasound, emphasizing automated pathology detection, image segmentation, and outcome prediction. Deep learning models have demonstrated high accuracy in grading bicipital peritendinous effusion and discriminating rotator cuff tendon tears, while machine learning techniques have shown efficacy in predicting the success of ultrasound-guided percutaneous irrigation for rotator cuff calcification. AI-powered segmentation models have improved anatomical delineation; however, despite these advancements, challenges remain, including the need for large, well-annotated datasets, model generalizability across diverse populations, and clinical validation. Future research should optimize AI algorithms for real-time applications, integrate multimodal imaging, and enhance clinician-AI collaboration.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145056530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of the Detectability of Oral Potentially Malignant Diseases with a Deep Learning Approach: A Retrospective Pilot Study.","authors":"Gaye Keser, Hakan Yülek, İbrahim Şevki Bayrakdar, Filiz Namdar Pekiner, Özer Çelik","doi":"10.1007/s10278-025-01665-6","DOIUrl":"https://doi.org/10.1007/s10278-025-01665-6","url":null,"abstract":"<p><p>Oral potentially malignant diseases (OPMD) may arise during the malignant transformation of the oral mucosa, with cellular changes in these lesions increasing the likelihood of cancer development compared to normal tissues. This study aims to evaluate the performance of a deep learning-based diagnostic software designed to detect OPMD. A total of 358 anonymized retrospective intraoral images from patients histopathologically diagnosed with oral lichen planus, oral leukoplakia, or oral cancer via incisional biopsy were used. The images were annotated using the polygonal labeling method in CranioCatch software (CranioCatch, Eskişehir, Turkey) and reviewed by Oral, Dental, and Maxillofacial Radiologists. The dataset was divided into training (n = 288), validation (n = 35), and test (n = 35) sets. A deep learning model based on the YOLOv8 architecture was developed, and its performance was assessed using a confusion matrix. The model achieved an F1 score of 0.693, a sensitivity of 0.666, and a precision of 0.723. These findings suggest that deep learning and artificial intelligence show promise in the diagnosis of OPMD and that routine oral examinations and early detection of these lesions-especially in high-risk individuals-are essential responsibilities for dental professionals. Larger, multi-center datasets, calibration, and external validation are needed for clinical translation.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145042845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}