Khoa Dang Nguyen, Hung Trong Hoang, Thi-Phuong Hong Doan, Khai Quang Dao, Ding-Han Wang, Ming-Lun Hsu
{"title":"SegmentAnyTooth:一个用于口腔内照片中牙齿枚举和分割的开源深度学习框架","authors":"Khoa Dang Nguyen , Hung Trong Hoang , Thi-Phuong Hong Doan , Khai Quang Dao , Ding-Han Wang , Ming-Lun Hsu","doi":"10.1016/j.jds.2025.01.003","DOIUrl":null,"url":null,"abstract":"<div><h3>Background/purpose</h3><div>Preventive dentistry is essential for maintaining public oral health, but inequalities in dental care, especially in underserved areas, remain a significant challenge. Image-based dental analysis, using intraoral photographs, offers a practical and scalable approach to bridge this gap. In this context, we developed SegmentAnyTooth, an open-source deep learning framework that solves the critical first step by enabling automated tooth enumeration and segmentation across five standard intraoral views: upper occlusal, lower occlusal, frontal, right lateral, and left lateral. This tool lays the groundwork for advanced applications, reducing reliance on limited professional resources and enhancing access to preventive dental care.</div></div><div><h3>Materials and methods</h3><div>A dataset of 5000 intraoral photos from 1000 sets (953 subjects) was annotated with tooth surfaces and FDI notations. You Only Look Once 11 (YOLO11) nano models were trained for tooth localization and enumeration, followed by Light Segment Anything in High Quality (Light HQ-SAM) for segmentation using an active learning approach.</div></div><div><h3>Results</h3><div>SegmentAnyTooth demonstrated high segmentation accuracy, with mean Dice similarity coefficients (DSC) of 0.983 ± 0.036 for upper occlusal, 0.973 ± 0.060 for lower occlusal, and 0.920 ± 0.063 for frontal views. Lateral view models also performed well, with mean DSCs of 0.939 ± 0.070 (right) and 0.945 ± 0.056 (left). Statistically significant improvements over baseline models such as U-Net, nnU-Net, and Mask R-CNN were observed (Wilcoxon signed-rank test, <em>P</em> < 0.01).</div></div><div><h3>Conclusion</h3><div>SegmentAnyTooth provides accurate, multi-view tooth segmentation to enhance dental care, early diagnosis, individualized care, and population-level research. Its open-source design supports integration into clinical and public health workflows, with ongoing improvements focused on generalizability.</div></div>","PeriodicalId":15583,"journal":{"name":"Journal of Dental Sciences","volume":"20 2","pages":"Pages 1110-1117"},"PeriodicalIF":3.4000,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SegmentAnyTooth: An open-source deep learning framework for tooth enumeration and segmentation in intraoral photos\",\"authors\":\"Khoa Dang Nguyen , Hung Trong Hoang , Thi-Phuong Hong Doan , Khai Quang Dao , Ding-Han Wang , Ming-Lun Hsu\",\"doi\":\"10.1016/j.jds.2025.01.003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><h3>Background/purpose</h3><div>Preventive dentistry is essential for maintaining public oral health, but inequalities in dental care, especially in underserved areas, remain a significant challenge. Image-based dental analysis, using intraoral photographs, offers a practical and scalable approach to bridge this gap. In this context, we developed SegmentAnyTooth, an open-source deep learning framework that solves the critical first step by enabling automated tooth enumeration and segmentation across five standard intraoral views: upper occlusal, lower occlusal, frontal, right lateral, and left lateral. 
This tool lays the groundwork for advanced applications, reducing reliance on limited professional resources and enhancing access to preventive dental care.</div></div><div><h3>Materials and methods</h3><div>A dataset of 5000 intraoral photos from 1000 sets (953 subjects) was annotated with tooth surfaces and FDI notations. You Only Look Once 11 (YOLO11) nano models were trained for tooth localization and enumeration, followed by Light Segment Anything in High Quality (Light HQ-SAM) for segmentation using an active learning approach.</div></div><div><h3>Results</h3><div>SegmentAnyTooth demonstrated high segmentation accuracy, with mean Dice similarity coefficients (DSC) of 0.983 ± 0.036 for upper occlusal, 0.973 ± 0.060 for lower occlusal, and 0.920 ± 0.063 for frontal views. Lateral view models also performed well, with mean DSCs of 0.939 ± 0.070 (right) and 0.945 ± 0.056 (left). Statistically significant improvements over baseline models such as U-Net, nnU-Net, and Mask R-CNN were observed (Wilcoxon signed-rank test, <em>P</em> < 0.01).</div></div><div><h3>Conclusion</h3><div>SegmentAnyTooth provides accurate, multi-view tooth segmentation to enhance dental care, early diagnosis, individualized care, and population-level research. Its open-source design supports integration into clinical and public health workflows, with ongoing improvements focused on generalizability.</div></div>\",\"PeriodicalId\":15583,\"journal\":{\"name\":\"Journal of Dental Sciences\",\"volume\":\"20 2\",\"pages\":\"Pages 1110-1117\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-01-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Dental Sciences\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1991790225000030\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Dental Sciences","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1991790225000030","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
SegmentAnyTooth: An open-source deep learning framework for tooth enumeration and segmentation in intraoral photos
Background/purpose
Preventive dentistry is essential for maintaining public oral health, but inequalities in dental care, especially in underserved areas, remain a significant challenge. Image-based dental analysis, using intraoral photographs, offers a practical and scalable approach to bridge this gap. In this context, we developed SegmentAnyTooth, an open-source deep learning framework that solves the critical first step by enabling automated tooth enumeration and segmentation across five standard intraoral views: upper occlusal, lower occlusal, frontal, right lateral, and left lateral. This tool lays the groundwork for advanced applications, reducing reliance on limited professional resources and enhancing access to preventive dental care.
Materials and methods
A dataset of 5000 intraoral photos from 1000 sets (953 subjects) was annotated with tooth surfaces and FDI notations. You Only Look Once 11 (YOLO11) nano models were trained for tooth localization and enumeration, followed by Light Segment Anything in High Quality (Light HQ-SAM) for segmentation using an active learning approach.
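As a rough illustration of this two-stage design, the sketch below chains an off-the-shelf YOLO11 nano detector with a box-prompted HQ-SAM-style predictor. The weight filenames, the FDI class mapping, and the Light HQ-SAM registry key are assumptions made for illustration; this is not the released SegmentAnyTooth code.

```python
# Minimal sketch of an enumeration-then-segmentation pipeline: a YOLO11 nano detector
# proposes per-tooth boxes with FDI class labels, and a SAM-style promptable segmenter
# refines each box into a mask. Checkpoint names and the "vit_tiny" registry key are
# assumptions, not the authors' released configuration.
import cv2
import numpy as np
from ultralytics import YOLO                                        # pip install ultralytics
from segment_anything_hq import sam_model_registry, SamPredictor    # pip install segment-anything-hq

def segment_teeth(image_path: str,
                  detector_weights: str = "yolo11n_teeth.pt",       # hypothetical fine-tuned detector
                  sam_checkpoint: str = "sam_hq_vit_tiny.pth"):     # hypothetical Light HQ-SAM checkpoint
    image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)

    # Stage 1: tooth localization and FDI enumeration with the detector.
    detector = YOLO(detector_weights)
    det = detector(image, conf=0.25)[0]

    # Stage 2: box-prompted segmentation with a (Light) HQ-SAM predictor.
    sam = sam_model_registry["vit_tiny"](checkpoint=sam_checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)

    results = []
    for box, cls in zip(det.boxes.xyxy.cpu().numpy(), det.boxes.cls.cpu().numpy()):
        masks, scores, _ = predictor.predict(box=box, multimask_output=False)
        fdi_label = det.names[int(cls)]          # e.g. "11" or "36" under an FDI class mapping
        results.append({"fdi": fdi_label, "box": box, "mask": masks[0]})
    return results
```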
Results
SegmentAnyTooth demonstrated high segmentation accuracy, with mean Dice similarity coefficients (DSC) of 0.983 ± 0.036 for upper occlusal, 0.973 ± 0.060 for lower occlusal, and 0.920 ± 0.063 for frontal views. Lateral view models also performed well, with mean DSCs of 0.939 ± 0.070 (right) and 0.945 ± 0.056 (left). Statistically significant improvements over baseline models such as U-Net, nnU-Net, and Mask R-CNN were observed (Wilcoxon signed-rank test, P < 0.01).
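For reference, the sketch below shows how the reported metrics are typically computed: the Dice similarity coefficient between a predicted and a reference binary mask, and a paired Wilcoxon signed-rank comparison of per-image DSCs. It assumes NumPy and SciPy and uses toy values rather than the study's data.

```python
# DSC and paired Wilcoxon signed-rank test on per-image scores (toy example, not the
# authors' evaluation code).
import numpy as np
from scipy.stats import wilcoxon

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2 * |A intersect B| / (|A| + |B|) for binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

# Paired comparison of per-image DSCs: proposed model vs. a baseline (toy values).
dsc_proposed = np.array([0.98, 0.97, 0.95, 0.99, 0.96])
dsc_baseline = np.array([0.91, 0.90, 0.88, 0.93, 0.89])
stat, p_value = wilcoxon(dsc_proposed, dsc_baseline)
print(f"mean DSC {dsc_proposed.mean():.3f} +/- {dsc_proposed.std():.3f}, Wilcoxon P = {p_value:.4f}")
```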
Conclusion
SegmentAnyTooth provides accurate, multi-view tooth segmentation to enhance dental care, early diagnosis, individualized care, and population-level research. Its open-source design supports integration into clinical and public health workflows, with ongoing improvements focused on generalizability.
Journal introduction
The Journal of Dental Sciences (JDS), published quarterly, is the official and open access publication of the Association for Dental Sciences of the Republic of China (ADS-ROC). The precedent journal of the JDS is the Chinese Dental Journal (CDJ), which was already covered by MEDLINE in 1988. As the CDJ continued to prove its importance in the region, the ADS-ROC decided to reach the international community by publishing an English-language journal; hence the birth of the JDS in 2006. The JDS has been indexed in SCI Expanded since 2008. It is also indexed in Scopus, EMCare, ScienceDirect, and the SIIC Data Bases.
The topics covered by the JDS include all fields of basic and clinical dentistry. Manuscripts focusing on endemic diseases such as dental caries and periodontal diseases in particular regions of any country, as well as oral pre-cancers, oral cancers, and oral submucous fibrosis related to the betel nut chewing habit, are also considered for publication. In addition, the JDS publishes articles on the efficacy of new treatment modalities for oral verrucous hyperplasia or early oral squamous cell carcinoma.