Identifying Primary Proximal Caries Lesions in Pediatric Patients From Bitewing Radiographs Using Artificial Intelligence

Cesar Gonzalez, Zaid Badr, Hamdi Cem Güngör, Shengtong Han, Manal D Hamdan

Pediatric Dentistry, 46(5):332-336, September 15, 2024
Abstract
Purpose: To develop a no-code artificial intelligence (AI) model capable of identifying primary proximal surface caries on bitewing radiographs of pediatric patients. Methods: One hundred bitewing radiographs acquired at pediatric dental clinics were anonymized and reviewed. The inclusion criteria encompassed bitewing radiographs of adequate diagnostic quality from the primary and mixed-dentition stages. The exclusion criteria included artifacts related to sensor quality, positioning errors, and motion. Sixty-six bitewing radiographs were selected. Images were uploaded to LandingLens™, a no-code AI platform. A calibrated consensus panel determined the presence or absence of proximal caries lesions on all surfaces; these annotations served as the ground truth labels. The radiographs were divided into training (70 percent), development (20 percent), and testing (10 percent) subsets. Data augmentation techniques were applied to artificially increase the sample size. Sensitivity, specificity, accuracy, precision, F1-score, and receiver operating characteristic area under the curve (ROC-AUC) were calculated. Results: Among the 755 proximal surfaces identified from the 66 bitewings, 178 were annotated as caries lesions by the experts. At the surface level, the model achieved 88.8 percent sensitivity, 98.8 percent specificity, 95.8 percent precision, 96.4 percent accuracy, and an F1-score of 92 percent. The ROC-AUC was 0.965. Conclusions: The developed model demonstrated strong performance despite the limited dataset size. This may be attributed to the exclusion of unsuitable radiographs and the use of expert-labeled ground truth annotations. The use of no-code artificial intelligence may improve outcomes for computer vision tasks.
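The abstract reports the surface-level metrics but not the underlying confusion-matrix counts. As an illustration of how those figures relate, the minimal sketch below (plain Python) applies the standard metric definitions to counts reconstructed from the reported rates and the 178 caries / 577 sound surface split; the tp, fp, fn, and tn values are inferred approximations, not data taken from the paper.

```python
# Standard surface-level evaluation metrics from a 2x2 confusion matrix.
# The abstract gives 755 surfaces (178 caries, 577 sound) and the resulting
# rates; the raw counts below are NOT reported -- they are approximations
# reconstructed so that the computed metrics match the published values.

tp, fn = 158, 20   # caries surfaces classified correctly / missed (inferred)
tn, fp = 570, 7    # sound surfaces classified correctly / flagged (inferred)

sensitivity = tp / (tp + fn)                   # recall on caries surfaces
specificity = tn / (tn + fp)                   # recall on sound surfaces
precision   = tp / (tp + fp)                   # positive predictive value
accuracy    = (tp + tn) / (tp + tn + fp + fn)  # overall agreement
f1          = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sensitivity = {sensitivity:.1%}")  # ~88.8%
print(f"specificity = {specificity:.1%}")  # ~98.8%
print(f"precision   = {precision:.1%}")    # ~95.8%
print(f"accuracy    = {accuracy:.1%}")     # ~96.4%
print(f"F1-score    = {f1:.1%}")           # ~92.1%
```

Note that the reconstructed counts are internally consistent with the abstract: 158 + 20 = 178 caries surfaces, 570 + 7 = 577 sound surfaces, and all five computed metrics round to the published figures. The ROC-AUC of 0.965 cannot be recovered this way, as it depends on the model's continuous scores rather than a single operating point.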