Detection of Jaw Lesions on Panoramic Radiographs Using Deep Learning Method.

Dilek Çoban, Yasin Yaşa, Abdulsamet Aktaş, Hamza Osman İlhan
Journal of imaging informatics in medicine, published 2025-08-28 (Journal Article).
DOI: 10.1007/s10278-025-01642-z

Abstract

This study aimed to evaluate and compare the performance of state-of-the-art deep learning models for detecting and segmenting both radiolucent and radiopaque jaw lesions on panoramic radiographs. A total of 2371 anonymized panoramic radiographs containing jaw lesions were retrospectively collected and categorized into radiolucent and radiopaque datasets. Expert annotation was performed to delineate lesion boundaries and assign anatomical localization (anterior/posterior maxilla and mandible). Four deep learning architectures (YOLOv8, YOLOv11, Mask R-CNN, and RT-DETR) were trained and evaluated under three experimental scenarios: (I) training without spatial labels, (II) data augmentation with unlabeled background images, and (III) inclusion of spatial localization annotations. Performance metrics included precision, recall, F1-score, and mean average precision (mAP@0.5 and mAP@0.5-0.95), with paired t-tests used for statistical comparison. In Scenario I, YOLOv11x-seg and YOLOv8x-seg achieved the highest segmentation performance for radiolucent and radiopaque lesions, respectively. For detection, YOLOv8x performed best on radiolucent lesions, while RT-DETR-L outperformed the other models on radiopaque lesions. In Scenario II, while YOLOv8x-seg achieved the best segmentation results across both lesion types, RT-DETR-L demonstrated superior detection performance, particularly for radiolucent lesions. In Scenario III, RT-DETR-L consistently outperformed all models across both lesion types. This study demonstrates the potential of state-of-the-art deep learning models for effective detection of lesions in panoramic radiographs. The developed models may offer valuable support to clinicians in lesion evaluation; however, it is recommended that they be employed primarily as decision support tools within clinical workflows, rather than as standalone diagnostic systems.
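For readers unfamiliar with the reported metrics, the sketch below shows how detection precision, recall, and F1-score at an IoU threshold of 0.5 are commonly computed: predictions are greedily matched to ground-truth boxes by confidence, and unmatched predictions and ground truths count as false positives and false negatives. This is a minimal illustrative implementation, not the paper's evaluation code; the box format and function names are assumptions.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def detection_metrics(preds, gts, iou_thr=0.5):
    """Precision/recall/F1 for one image (illustrative, not the paper's code).

    preds: list of (confidence, box); gts: list of boxes.
    A prediction is a true positive if it matches an unclaimed ground-truth
    box with IoU >= iou_thr; each ground truth can be matched at most once.
    """
    matched = set()
    tp = 0
    # Match highest-confidence predictions first, as in standard evaluation.
    for conf, box in sorted(preds, key=lambda p: -p[0]):
        best, best_iou = None, iou_thr
        for i, gt in enumerate(gts):
            if i in matched:
                continue
            ov = iou(box, gt)
            if ov >= best_iou:
                best, best_iou = i, ov
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, with two ground-truth lesions and two predictions of which only one overlaps a ground truth, the function returns precision 0.5, recall 0.5, and F1 0.5. The mAP@0.5 figure reported in the study extends this idea by averaging precision over all recall levels as the confidence threshold varies, and mAP@0.5-0.95 further averages over IoU thresholds from 0.5 to 0.95.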
