Baiyan Qi, Lekshmi Sasi, Suhel Khan, Jordan Luo, Casey Chen, Keivan Rahmani, Zeinab Jahed, Jesse V Jokerst
{"title":"超声牙周成像解剖标志自动识别的机器学习。","authors":"Baiyan Qi, Lekshmi Sasi, Suhel Khan, Jordan Luo, Casey Chen, Keivan Rahmani, Zeinab Jahed, Jesse V Jokerst","doi":"10.1093/dmfr/twaf001","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>To identify landmarks in ultrasound periodontal images and automate the image-based measurements of gingival recession (iGR), gingival height (iGH), and alveolar bone level (iABL) using machine learning.</p><p><strong>Methods: </strong>We imaged 184 teeth from 29 human subjects. The dataset included 1580 frames for training and validating the U-Net convolutional neural network machine learning model, and 250 frames from new teeth that were not used in training for testing the generalization performance. The predicted landmarks, including the tooth, gingiva, bone, gingival margin (GM), cementoenamel junction (CEJ), and alveolar bone crest (ABC), were compared to manual annotations. We further demonstrated automated measurements of the clinical metrics iGR, iGH, and iABL.</p><p><strong>Results: </strong>Over 98% of predicted GM, CEJ, and ABC distances are within 200 µm of the manual annotation. Bland-Altman analysis revealed biases (bias of machine learning vs ground truth) of -0.1 µm, -37.6 µm, and -40.9 µm, with 95% limits of agreement of [-281.3, 281.0] µm, [-203.1, 127.9] µm, and [-297.6, 215.8] µm for iGR, iGH, and iABL, respectively, when compared to manual annotations. On the test dataset, the biases were 167.5 µm, 40.1 µm, and 78.7 µm with 95% CIs of [-1175 to 1510] µm, [-910.3 to 990.4] µm, and [-1954 to 1796] µm for iGR, iGH, and iABL, respectively.</p><p><strong>Conclusions: </strong>The proposed machine learning model demonstrates robust prediction performance, with the potential to enhance the efficiency of clinical periodontal diagnosis by automating landmark identification and clinical metrics measurements.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"210-221"},"PeriodicalIF":2.9000,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879227/pdf/","citationCount":"0","resultStr":"{\"title\":\"Machine learning for automated identification of anatomical landmarks in ultrasound periodontal imaging.\",\"authors\":\"Baiyan Qi, Lekshmi Sasi, Suhel Khan, Jordan Luo, Casey Chen, Keivan Rahmani, Zeinab Jahed, Jesse V Jokerst\",\"doi\":\"10.1093/dmfr/twaf001\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Objectives: </strong>To identify landmarks in ultrasound periodontal images and automate the image-based measurements of gingival recession (iGR), gingival height (iGH), and alveolar bone level (iABL) using machine learning.</p><p><strong>Methods: </strong>We imaged 184 teeth from 29 human subjects. The dataset included 1580 frames for training and validating the U-Net convolutional neural network machine learning model, and 250 frames from new teeth that were not used in training for testing the generalization performance. The predicted landmarks, including the tooth, gingiva, bone, gingival margin (GM), cementoenamel junction (CEJ), and alveolar bone crest (ABC), were compared to manual annotations. We further demonstrated automated measurements of the clinical metrics iGR, iGH, and iABL.</p><p><strong>Results: </strong>Over 98% of predicted GM, CEJ, and ABC distances are within 200 µm of the manual annotation. 
Bland-Altman analysis revealed biases (bias of machine learning vs ground truth) of -0.1 µm, -37.6 µm, and -40.9 µm, with 95% limits of agreement of [-281.3, 281.0] µm, [-203.1, 127.9] µm, and [-297.6, 215.8] µm for iGR, iGH, and iABL, respectively, when compared to manual annotations. On the test dataset, the biases were 167.5 µm, 40.1 µm, and 78.7 µm with 95% CIs of [-1175 to 1510] µm, [-910.3 to 990.4] µm, and [-1954 to 1796] µm for iGR, iGH, and iABL, respectively.</p><p><strong>Conclusions: </strong>The proposed machine learning model demonstrates robust prediction performance, with the potential to enhance the efficiency of clinical periodontal diagnosis by automating landmark identification and clinical metrics measurements.</p>\",\"PeriodicalId\":11261,\"journal\":{\"name\":\"Dento maxillo facial radiology\",\"volume\":\" \",\"pages\":\"210-221\"},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2025-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11879227/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Dento maxillo facial radiology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1093/dmfr/twaf001\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"DENTISTRY, ORAL SURGERY & MEDICINE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Dento maxillo facial radiology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/dmfr/twaf001","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"DENTISTRY, ORAL SURGERY & MEDICINE","Score":null,"Total":0}
Machine learning for automated identification of anatomical landmarks in ultrasound periodontal imaging.
Objectives: To identify landmarks in ultrasound periodontal images and automate the image-based measurements of gingival recession (iGR), gingival height (iGH), and alveolar bone level (iABL) using machine learning.
Methods: We imaged 184 teeth from 29 human subjects. The dataset included 1580 frames for training and validating the U-Net convolutional neural network machine learning model, and 250 frames from new teeth that were not used in training for testing the generalization performance. The predicted landmarks, including the tooth, gingiva, bone, gingival margin (GM), cementoenamel junction (CEJ), and alveolar bone crest (ABC), were compared to manual annotations. We further demonstrated automated measurements of the clinical metrics iGR, iGH, and iABL.
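As context for the Methods, below is a minimal sketch of a U-Net-style encoder-decoder for multi-class segmentation of ultrasound frames. It is an illustrative reconstruction in PyTorch, not the authors' implementation; the network depth, channel widths, input resolution, and class set (background, tooth, gingiva, bone) are assumptions.

```python
# Minimal U-Net-style segmentation sketch (illustrative only; the
# architecture details are assumptions, not the paper's implementation).
import torch
import torch.nn as nn

def double_conv(in_ch, out_ch):
    """Two 3x3 conv + BatchNorm + ReLU blocks, as in a standard U-Net stage."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.enc1 = double_conv(in_ch, 64)
        self.enc2 = double_conv(64, 128)
        self.enc3 = double_conv(128, 256)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = double_conv(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = double_conv(128, 64)
        self.head = nn.Conv2d(64, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Example: one grayscale ultrasound frame (1 x 1 x 256 x 256) -> class logits
logits = UNet()(torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 4, 256, 256])
```

The skip connections concatenate encoder features with the upsampled decoder features, so fine spatial detail is preserved at the output resolution before the per-pixel classification head.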
Results: Over 98% of predicted GM, CEJ, and ABC distances are within 200 µm of the manual annotation. Bland-Altman analysis revealed biases (bias of machine learning vs ground truth) of -0.1 µm, -37.6 µm, and -40.9 µm, with 95% limits of agreement of [-281.3, 281.0] µm, [-203.1, 127.9] µm, and [-297.6, 215.8] µm for iGR, iGH, and iABL, respectively, when compared to manual annotations. On the test dataset, the biases were 167.5 µm, 40.1 µm, and 78.7 µm with 95% CIs of [-1175, 1510] µm, [-910.3, 990.4] µm, and [-1954, 1796] µm for iGR, iGH, and iABL, respectively.
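The bias and 95% limits of agreement quoted above follow the standard Bland-Altman construction: the mean of the paired differences, and that mean plus or minus 1.96 times their standard deviation. A minimal sketch with made-up paired iGR measurements (the values are illustrative only):

```python
import numpy as np

def bland_altman(pred_um: np.ndarray, manual_um: np.ndarray):
    """Bias and 95% limits of agreement between predicted and manually
    annotated measurements (both in micrometres)."""
    diff = pred_um - manual_um                   # machine learning minus ground truth
    bias = diff.mean()                           # mean difference (bias)
    sd = diff.std(ddof=1)                        # sample standard deviation of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    return bias, loa

# Illustrative use with made-up paired iGR measurements (µm)
pred = np.array([1200.0, 850.0, 1430.0, 990.0])
manual = np.array([1210.0, 840.0, 1455.0, 1000.0])
bias, (lower, upper) = bland_altman(pred, manual)
print(f"bias = {bias:.1f} µm, 95% LoA = [{lower:.1f}, {upper:.1f}] µm")
```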
Conclusions: The proposed machine learning model demonstrates robust prediction performance, with the potential to enhance the efficiency of clinical periodontal diagnosis by automating landmark identification and clinical metrics measurements.
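As a further illustration of how landmark predictions can feed automated metric measurement, the hypothetical sketch below converts landmark pixel coordinates into distances in micrometres. The landmark pairings (iGR from CEJ to GM, iGH from GM to ABC, iABL from CEJ to ABC) and the pixel spacing are assumptions chosen for illustration, not taken from the paper.

```python
# Hypothetical post-processing sketch: landmark pixel coordinates -> metrics.
# The landmark pairings and PIXEL_SPACING_UM are assumptions for illustration.
import math

PIXEL_SPACING_UM = 20.0  # assumed micrometres per pixel

def distance_um(p, q, spacing_um=PIXEL_SPACING_UM):
    """Euclidean distance between two (row, col) pixel coordinates, in µm."""
    return math.hypot(p[0] - q[0], p[1] - q[1]) * spacing_um

# Example predicted landmark coordinates (row, col) in pixels
gm, cej, abc = (120, 85), (150, 90), (200, 100)

metrics = {
    "iGR (µm)": distance_um(cej, gm),    # gingival recession: CEJ to GM (assumed pairing)
    "iGH (µm)": distance_um(gm, abc),    # gingival height: GM to ABC (assumed pairing)
    "iABL (µm)": distance_um(cej, abc),  # alveolar bone level: CEJ to ABC (assumed pairing)
}
print(metrics)
```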
Journal introduction:
Dentomaxillofacial Radiology (DMFR) is the journal of the International Association of Dentomaxillofacial Radiology (IADMFR) and covers the closely related fields of oral radiology and head and neck imaging.
Established in 1972, DMFR is a key resource keeping dentists, radiologists, clinicians, and scientists with an interest in head and neck imaging abreast of important research and developments in oral and maxillofacial radiology.
The DMFR editorial board features a panel of international experts, including Editor-in-Chief Professor Ralf Schulze, who provide expertise and guidance in shaping the content and direction of the journal.
Quick Facts:
- 2015 Impact Factor - 1.919
- Receipt to first decision - average of 3 weeks
- Acceptance to online publication - average of 3 weeks
- Open access option
- ISSN: 0250-832X
- eISSN: 1476-542X