{"title":"Self-supervised learning enhances periapical films segmentation with limited labeled data.","authors":"Meiyu Hu, Qianli Zhang, Zhenyang Wei, Pingyi Jia, Mu Yuan, Huajie Yu, Xu-Cheng Yin, Junran Peng","doi":"10.1016/j.jdent.2025.106150","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To overcome reliance on large-scale, costly labeled datasets and annotation variability for accurate periapical film segmentation. This study develops a self-supervised learning framework requiring limited labeled data, enhancing practical applicability while reducing extensive manual annotation efforts.</p><p><strong>Methods: </strong>This research proposes a two-stage framework: 1) Self-supervised pre-training. A Vision Transformer (ViT), initialized with weights from the DINOv2 model pre-trained on 142M natural images (LVD-142M), undergoes further self-supervised pre-training on our dataset of 74,292 unlabeled periapical films using student-teacher contrastive learning. 2) Fine-tuning adapts these features for segmentation. The domain-adapted ViT is fine-tuned with a Mask2Former head on only 229 labeled films for segmenting seven critical dental structures (tooth, pulp, crown, fillings, root canal fillings, caries, periapical lesions).</p><p><strong>Results: </strong>The domain-adapted self-supervised method significantly outperformed traditional fully-supervised models like U-Net and DeepLabV3+ (average Dice coefficient: 74.77% vs 33.53%-41.55%; 80%-123% relative improvement). Comprehensive comparison with cutting-edge SSL methods through cross-validation demonstrated the superiority of our DINOv2-based approach (74.77 ± 1.87%) over MAE (72.53 ± 1.90%), MoCov3 (65.92 ± 1.68%) and BEiTv3 (65.17 ± 1.77%). The method surpassed its supervised Mask2Former counterparts with statistical significance (p<0.01).</p><p><strong>Conclusions: </strong>This two-stage, domain-specific self-supervised framework effectively learns robust anatomical features. It enables accurate, reliable periapical film segmentation using very limited annotations. The approach addresses the challenge of labeled data scarcity in medical imaging.</p><p><strong>Clinical significance: </strong>This approach provides a feasible pathway for developing AI-assisted diagnostic tools. It can improve diagnostic accuracy through consistent segmentation and enhance workflow efficiency by reducing manual analysis time, especially in resource-constrained dental practices.</p>","PeriodicalId":15585,"journal":{"name":"Journal of dentistry","volume":" ","pages":"106150"},"PeriodicalIF":5.5000,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of dentistry","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.jdent.2025.106150","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Citations: 0
Abstract
Objectives: Accurate periapical film segmentation typically relies on large-scale, costly labeled datasets and is affected by annotation variability. This study develops a self-supervised learning framework that requires only limited labeled data, improving practical applicability while reducing the need for extensive manual annotation.
Methods: This research proposes a two-stage framework. 1) Self-supervised pre-training: a Vision Transformer (ViT), initialized with weights from the DINOv2 model pre-trained on 142 million natural images (LVD-142M), undergoes further self-supervised pre-training on our dataset of 74,292 unlabeled periapical films using student-teacher contrastive learning. 2) Fine-tuning: the domain-adapted ViT is fine-tuned with a Mask2Former head on only 229 labeled films to segment seven critical dental structures (tooth, pulp, crown, fillings, root canal fillings, caries, and periapical lesions).
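For orientation, a minimal PyTorch sketch of this two-stage idea is shown below. It is not the authors' implementation: the torch.hub entry point for the DINOv2 ViT-B/14 backbone, the simplified DINO-style loss (no projection head, centering, or multi-crop), and the lightweight per-patch decoder standing in for the Mask2Former head are all assumptions made for illustration.

```python
# A minimal sketch of the two-stage framework, NOT the authors' code.
# Assumptions: PyTorch; DINOv2 ViT-B/14 loaded via torch.hub (network access
# required); simplified DINO-style loss; a 1x1-conv per-patch decoder stands
# in for the Mask2Former head used in the paper.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 8  # seven dental structures + background (assumed)


def load_backbone() -> nn.Module:
    # LVD-142M pre-trained DINOv2 ViT-B/14; swap in any ViT if hub is unavailable.
    return torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")


# ---- Stage 1: domain-adaptive self-supervised pre-training -----------------
class StudentTeacher(nn.Module):
    """Student-teacher pair; the teacher is an exponential moving average
    (EMA) of the student and receives no gradients."""

    def __init__(self, backbone: nn.Module, momentum: float = 0.996):
        super().__init__()
        self.student = backbone
        self.teacher = copy.deepcopy(backbone)
        for p in self.teacher.parameters():
            p.requires_grad_(False)
        self.momentum = momentum

    @torch.no_grad()
    def update_teacher(self) -> None:
        for ps, pt in zip(self.student.parameters(), self.teacher.parameters()):
            pt.mul_(self.momentum).add_(ps.detach(), alpha=1.0 - self.momentum)

    def forward(self, view_a, view_b, temp_s: float = 0.1, temp_t: float = 0.04):
        # Two augmented views of the same unlabeled periapical film.
        s_out = self.student(view_a)            # student embedding of view A
        with torch.no_grad():
            t_out = self.teacher(view_b)        # teacher embedding of view B
        # Cross-entropy between softened teacher targets and student predictions
        # (projection head and centering of the full DINO recipe omitted).
        t_prob = F.softmax(t_out / temp_t, dim=-1)
        return -(t_prob * F.log_softmax(s_out / temp_s, dim=-1)).sum(-1).mean()


# ---- Stage 2: fine-tuning on the small labeled set -------------------------
class SegmentationModel(nn.Module):
    """Domain-adapted ViT features + a lightweight per-patch decoder
    (a stand-in for the Mask2Former head)."""

    def __init__(self, backbone: nn.Module, embed_dim: int = 768):
        super().__init__()
        self.backbone = backbone
        self.decoder = nn.Conv2d(embed_dim, NUM_CLASSES, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # DINOv2 exposes normalized patch tokens via forward_features (assumed
        # API); input height/width must be multiples of the 14-pixel patch size.
        tokens = self.backbone.forward_features(x)["x_norm_patchtokens"]
        b, n, c = tokens.shape
        h, w = x.shape[-2] // 14, x.shape[-1] // 14
        fmap = tokens.permute(0, 2, 1).reshape(b, c, h, w)
        logits = self.decoder(fmap)
        # Upsample per-patch logits to a full-resolution segmentation map.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)
```

In such a sketch, each stage-1 optimizer step on the student would be followed by update_teacher(), and the stage-2 model would be trained on the labeled films with a standard pixel-wise cross-entropy or Dice loss.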
Results: The domain-adapted self-supervised method significantly outperformed traditional fully supervised models such as U-Net and DeepLabV3+ (average Dice coefficient: 74.77% vs. 33.53%-41.55%, an 80%-123% relative improvement). A comprehensive cross-validated comparison with cutting-edge SSL methods demonstrated the superiority of our DINOv2-based approach (74.77 ± 1.87%) over MAE (72.53 ± 1.90%), MoCov3 (65.92 ± 1.68%) and BEiTv3 (65.17 ± 1.77%). The method also surpassed its supervised Mask2Former counterparts with statistical significance (p < 0.01).
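For reference, the Dice coefficient reported above is 2|P ∩ G| / (|P| + |G|) for a predicted mask P and ground-truth mask G of a given class. A minimal per-class computation might look like the sketch below; how the paper averages scores across the seven structures and across films is an assumption here, not taken from the source.

```python
# Illustrative per-class Dice computation; the paper's exact averaging
# protocol across classes and test films is assumed, not specified here.
import torch


def dice_per_class(pred: torch.Tensor, target: torch.Tensor,
                   num_classes: int, eps: float = 1e-6) -> torch.Tensor:
    """pred, target: integer label maps of identical shape, e.g. (H, W).
    Returns per-class Dice_c = 2|P_c ∩ G_c| / (|P_c| + |G_c|)."""
    scores = []
    for c in range(num_classes):
        p = pred == c
        g = target == c
        inter = (p & g).sum().float()
        denom = p.sum().float() + g.sum().float()
        scores.append((2.0 * inter + eps) / (denom + eps))
    return torch.stack(scores)
```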
Conclusions: This two-stage, domain-specific self-supervised framework effectively learns robust anatomical features. It enables accurate, reliable periapical film segmentation using very limited annotations. The approach addresses the challenge of labeled data scarcity in medical imaging.
Clinical significance: This approach provides a feasible pathway for developing AI-assisted diagnostic tools. It can improve diagnostic accuracy through consistent segmentation and enhance workflow efficiency by reducing manual analysis time, especially in resource-constrained dental practices.
Journal introduction:
The Journal of Dentistry has an open access mirror journal, The Journal of Dentistry: X, which shares the same aims and scope, editorial team, submission system and rigorous peer review.
The Journal of Dentistry is the leading international dental journal within the field of Restorative Dentistry. Placing an emphasis on publishing novel and high-quality research papers, the Journal aims to influence the practice of dentistry at clinician, research, industry and policy-maker level on an international basis.
Topics covered include the management of dental disease, periodontology, endodontology, operative dentistry, fixed and removable prosthodontics, dental biomaterials science, long-term clinical trials including epidemiology and oral health, technology transfer of new scientific instrumentation or procedures, as well as clinically relevant oral biology and translational research.
The Journal of Dentistry will publish original scientific research papers including short communications. It is also interested in publishing review articles and leaders in themed areas which will be linked to new scientific research. Conference proceedings are also welcome and expressions of interest should be communicated to the Editor.