Self-supervised learning enhances periapical films segmentation with limited labeled data.

IF 5.5 · CAS Zone 2 (Medicine) · Q1 DENTISTRY, ORAL SURGERY & MEDICINE
Meiyu Hu, Qianli Zhang, Zhenyang Wei, Pingyi Jia, Mu Yuan, Huajie Yu, Xu-Cheng Yin, Junran Peng
Journal of Dentistry, article 106150 · DOI: 10.1016/j.jdent.2025.106150 · Published 2025-10-07 · Citations: 0

Abstract

Objectives: To overcome reliance on large-scale, costly labeled datasets and annotation variability for accurate periapical film segmentation. This study develops a self-supervised learning framework requiring limited labeled data, enhancing practical applicability while reducing extensive manual annotation efforts.

Methods: This research proposes a two-stage framework: 1) Self-supervised pre-training. A Vision Transformer (ViT), initialized with weights from the DINOv2 model pre-trained on 142M natural images (LVD-142M), undergoes further self-supervised pre-training on our dataset of 74,292 unlabeled periapical films using student-teacher contrastive learning. 2) Fine-tuning adapts these features for segmentation. The domain-adapted ViT is fine-tuned with a Mask2Former head on only 229 labeled films for segmenting seven critical dental structures (tooth, pulp, crown, fillings, root canal fillings, caries, periapical lesions).
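The student-teacher scheme in stage 1 trains the student against targets from a teacher network whose weights are an exponential moving average (EMA) of the student's, as in DINOv2-style self-distillation. The momentum value below is illustrative, not taken from the paper; a minimal NumPy sketch of the update rule:

```python
import numpy as np

def ema_update(teacher_w, student_w, momentum=0.996):
    """Teacher weights track an exponential moving average of the student's:
    teacher <- m * teacher + (1 - m) * student."""
    return momentum * teacher_w + (1.0 - momentum) * student_w

# Toy weights: after a gradient step the student has moved; the teacher
# drifts only slightly toward it, giving stable targets for the
# contrastive (self-distillation) loss.
teacher = np.array([1.0, 2.0])
student = np.array([3.0, 4.0])
teacher = ema_update(teacher, student, momentum=0.5)  # -> [2.0, 3.0]
```

In practice the momentum is kept close to 1 (e.g. 0.996) so the teacher changes slowly relative to the student.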

Results: The domain-adapted self-supervised method significantly outperformed traditional fully supervised models such as U-Net and DeepLabV3+ (average Dice coefficient: 74.77% vs. 33.53%-41.55%, an 80%-123% relative improvement). A comprehensive cross-validated comparison with state-of-the-art SSL methods demonstrated the superiority of the DINOv2-based approach (74.77 ± 1.87%) over MAE (72.53 ± 1.90%), MoCo v3 (65.92 ± 1.68%), and BEiT v3 (65.17 ± 1.77%). The method also surpassed its supervised Mask2Former counterpart with statistical significance (p < 0.01).
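The Dice coefficient used throughout the Results measures overlap between a predicted mask and a reference mask. A minimal per-class implementation (the `eps` smoothing term is an assumption for numerical stability, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for a pair of binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [1, 0]])
score = dice_coefficient(pred, target)  # ~0.5: one overlapping pixel, two pixels per mask
```

The reported "average Dice" would then be the mean of such per-class scores over the seven structures.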

Conclusions: This two-stage, domain-specific self-supervised framework effectively learns robust anatomical features. It enables accurate, reliable periapical film segmentation using very limited annotations. The approach addresses the challenge of labeled data scarcity in medical imaging.

Clinical significance: This approach provides a feasible pathway for developing AI-assisted diagnostic tools. It can improve diagnostic accuracy through consistent segmentation and enhance workflow efficiency by reducing manual analysis time, especially in resource-constrained dental practices.

Source journal: Journal of Dentistry (Medicine – Dentistry & Oral Surgery)
CiteScore: 7.30 · Self-citation rate: 11.40% · Annual articles: 349 · Review time: 35 days
About the journal: The Journal of Dentistry has an open access mirror journal, The Journal of Dentistry: X, sharing the same aims and scope, editorial team, submission system and rigorous peer review. The Journal of Dentistry is the leading international dental journal within the field of Restorative Dentistry. Placing an emphasis on publishing novel and high-quality research papers, the Journal aims to influence the practice of dentistry at clinician, research, industry and policy-maker level on an international basis. Topics covered include the management of dental disease, periodontology, endodontology, operative dentistry, fixed and removable prosthodontics, dental biomaterials science, long-term clinical trials including epidemiology and oral health, technology transfer of new scientific instrumentation or procedures, as well as clinically relevant oral biology and translational research. The Journal of Dentistry publishes original scientific research papers including short communications. It is also interested in publishing review articles and leaders in themed areas which will be linked to new scientific research. Conference proceedings are also welcome and expressions of interest should be communicated to the Editor.