{"title":"Landmark annotation through feature combinations: a comparative study on cephalometric images with in-depth analysis of model's explainability.","authors":"Rashmi S, Srinath S, Prashanth S Murthy, Seema Deshmukh","doi":"10.1093/dmfr/twad011","DOIUrl":"10.1093/dmfr/twad011","url":null,"abstract":"<p><strong>Objectives: </strong>The objectives of this study are to explore and evaluate the automation of anatomical landmark localization in cephalometric images using machine learning techniques, with a focus on feature extraction and combinations, contextual analysis, and model interpretability through Shapley Additive exPlanations (SHAP) values.</p><p><strong>Methods: </strong>We conducted extensive experimentation on a private dataset of 300 lateral cephalograms to thoroughly study the annotation results obtained using pixel feature descriptors including raw pixel, gradient magnitude, gradient direction, and histogram of oriented gradients (HOG) values. The study includes evaluation and comparison of these feature descriptors calculated at different contexts, namely local, pyramid, and global. The feature descriptors obtained from individual combinations are used to discern between landmark and non-landmark pixels using a classification method. Additionally, this study addresses the opacity of LGBM ensemble tree models across landmarks, introducing SHAP values to enhance interpretability.</p><p><strong>Results: </strong>The performance of feature combinations was assessed using metrics such as mean radial error, standard deviation, success detection rate (SDR) (2 mm), and test time. Remarkably, among all the combinations explored, both the HOG and gradient direction operations demonstrated significant performance across all context combinations. At the contextual level, the global texture outperformed the others, although it came with the trade-off of increased test time. The HOG in the local context emerged as the top performer with an SDR of 75.84% compared to others.</p><p><strong>Conclusions: </strong>The presented analysis not only enhances the understanding of the significance of different features and their combinations in the realm of landmark annotation but also paves the way for further exploration of landmark-specific feature combination methods, facilitated by explainability.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"115-126"},"PeriodicalIF":3.3,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139080441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
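The gradient-based pixel descriptors named in the abstract above can be sketched in a few lines. A minimal, hypothetical illustration using central differences on a toy grayscale patch (not the study's implementation):

```python
import math

def gradient_features(img, x, y):
    """Central-difference gradient magnitude and direction at pixel (x, y).

    `img` is a 2D list of grayscale intensities; interior pixels only.
    These are two of the pixel feature descriptors named in the abstract
    (raw pixel value, gradient magnitude, gradient direction).
    """
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0   # horizontal derivative
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0   # vertical derivative
    magnitude = math.hypot(gx, gy)
    direction = math.atan2(gy, gx)               # radians, in [-pi, pi]
    return img[y][x], magnitude, direction

# Tiny synthetic patch: intensity increases left to right.
patch = [[0, 10, 20],
         [0, 10, 20],
         [0, 10, 20]]
raw, mag, ang = gradient_features(patch, 1, 1)
# gx = (20 - 0)/2 = 10, gy = 0 -> magnitude 10, direction 0
```

A HOG descriptor then histograms these directions, weighted by magnitude, over local cells.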
{"title":"Ultra-low-dose photon-counting CT of paranasal sinus: an in vivo comparison of radiation dose and image quality to cone-beam CT.","authors":"Hanns Leonhard Kaatsch, Florian Fulisch, Daniel Dillinger, Laura Kubitscheck, Benjamin V Becker, Joel Piechotka, Marc A Brockmann, Matthias F Froelich, Stefan O Schoenberg, Daniel Overhoff, Stephan Waldeck","doi":"10.1093/dmfr/twad010","DOIUrl":"10.1093/dmfr/twad010","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigated the differences in subjective and objective image parameters as well as dose exposure of photon-counting CT (PCCT) compared to cone-beam CT (CBCT) in paranasal sinus imaging for the assessment of rhinosinusitis and sinonasal anatomy.</p><p><strong>Methods: </strong>This single-centre retrospective study included 100 patients, who underwent either clinically indicated PCCT or CBCT of the paranasal sinus. Two blinded experienced ENT radiologists graded image quality and delineation of specific anatomical structures on a 5-point Likert scale. In addition, contrast-to-noise ratio (CNR) and applied radiation doses were compared between the two techniques.</p><p><strong>Results: </strong>Image quality and delineation of bone structures in paranasal sinus PCCT were subjectively rated superior by both readers compared to CBCT (P < .001). CNR was significantly higher for photon-counting CT (P < .001). Mean effective dose for PCCT examinations was significantly lower than for CBCT (0.038 mSv ± 0.009 vs. 0.14 mSv ± 0.011; P < .001).</p><p><strong>Conclusion: </strong>In a performance comparison of PCCT and a modern CBCT scanner in paranasal sinus imaging, we demonstrated that PCCT, in first routine clinical use, provides higher subjective image quality accompanied by higher CNR at close to a quarter of the dose exposure of CBCT.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 2","pages":"103-108"},"PeriodicalIF":3.3,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139706368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
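CNR, the objective metric compared above, is commonly defined as the absolute difference of two region-of-interest means divided by the noise standard deviation. A sketch under that assumption (the abstract does not specify the study's ROI placement or noise estimate), with hypothetical HU samples:

```python
from statistics import mean, stdev

def cnr(tissue_a, tissue_b, background):
    """Contrast-to-noise ratio: |mean_A - mean_B| / SD(background).

    One common definition; the exact ROIs and noise estimate used in
    the study are not given in the abstract.
    """
    return abs(mean(tissue_a) - mean(tissue_b)) / stdev(background)

bone = [1200, 1180, 1210, 1190]     # hypothetical HU samples in bone
air = [-990, -1000, -995, -1005]    # hypothetical HU samples in air
noise = [0, 5, -5, 10, -10]         # hypothetical homogeneous-region samples
ratio = cnr(bone, air, noise)
```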
{"title":"Diagnostic accuracy of ultrasonography in relation to salivary gland biopsy in Sjögren's syndrome: a systematic review with meta-analysis.","authors":"Fernanda B Martins, Millena B Oliveira, Leandro M Oliveira, Alan Grupioni Lourenço, Luiz Renato Paranhos, Ana Carolina F Motta","doi":"10.1093/dmfr/twad007","DOIUrl":"10.1093/dmfr/twad007","url":null,"abstract":"<p><strong>Objectives: </strong>To evaluate the accuracy of major salivary gland ultrasonography (SGUS) in relation to minor salivary gland biopsy (mSGB) in the diagnosis of Sjögren's syndrome (SS).</p><p><strong>Methods: </strong>A systematic review and meta-analysis were performed. Ten databases were searched to identify studies that compared the accuracy of SGUS and mSGB. The risk of bias was assessed, data were extracted, and univariate and bivariate random-effects meta-analyses were performed.</p><p><strong>Results: </strong>A total of 5000 records were identified; 13 studies were included in the qualitative synthesis and 10 in the quantitative synthesis. The first meta-analysis found a sensitivity of 0.86 (95% CI: 0.74-0.92) and specificity of 0.87 (95% CI: 0.81-0.92) for the predictive value of SGUS scoring in relation to the result of mSGB. In the second meta-analysis, mSGB showed higher sensitivity and specificity than SGUS. Sensitivity was 0.80 (95% CI: 0.74-0.85) for mSGB and 0.71 (95% CI: 0.58-0.81) for SGUS, and specificity was 0.94 (95% CI: 0.87-0.97) for mSGB and 0.89 (95% CI: 0.82-0.94) for SGUS.</p><p><strong>Conclusions: </strong>The diagnostic accuracy of SGUS was similar to that of mSGB. SGUS is an effective diagnostic test that shows good sensitivity and high specificity, in addition to being a good tool for prognosis and for avoiding unnecessary biopsies. More studies using similar methodologies are needed to assess the accuracy of SGUS in predicting the result of mSGB. Our results will contribute to decision-making for the implementation of SGUS as a diagnostic tool for SS, considering the advantages of this method.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"91-102"},"PeriodicalIF":2.9,"publicationDate":"2024-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139097507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
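The pooled sensitivity and specificity reported above derive from 2×2 diagnostic tables. A minimal illustration, with hypothetical counts chosen only to mirror the pooled SGUS estimates (not taken from the included studies):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 diagnostic table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts mirroring the pooled SGUS estimates
# (sensitivity 0.71, specificity 0.89).
se, sp = sens_spec(tp=71, fn=29, tn=89, fp=11)
```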
{"title":"Comparison of deep learning methods for the radiographic detection of patients with different periodontitis stages.","authors":"Berceste Guler Ayyildiz, Rukiye Karakis, Busra Terzioglu, Durmus Ozdemir","doi":"10.1093/dmfr/twad003","DOIUrl":"10.1093/dmfr/twad003","url":null,"abstract":"<p><strong>Objectives: </strong>The objective of this study is to assess the accuracy of computer-assisted periodontal bone loss staging using deep learning (DL) methods on panoramic radiographs and to compare the performance of various models and layers.</p><p><strong>Methods: </strong>Panoramic radiographs were diagnosed and classified into 3 groups, namely \"healthy,\" \"Stage1/2,\" and \"Stage3/4,\" and stored in separate folders. The feature extraction stage involved transferring and retraining the feature extraction layers and weights from 3 models, namely ResNet50, DenseNet121, and InceptionV3, which were proposed for classifying the ImageNet dataset, to 3 DL models designed for classifying periodontal bone loss. The features obtained from global average pooling (GAP), global max pooling (GMP), or flatten layers (FL) of convolutional neural network (CNN) models were used as input to the 8 different machine learning (ML) models. In addition, the features obtained from the GAP, GMP, or FL of the DL models were reduced using the minimum redundancy maximum relevance (mRMR) method and then classified again with 8 ML models.</p><p><strong>Results: </strong>A total of 2533 panoramic radiographs, including 721 in the healthy group, 842 in the Stage1/2 group, and 970 in the Stage3/4 group, were included in the dataset. Averaged over 10 subdatasets, the ML models developed with the 2 feature selection techniques (DenseNet121 + GAP-based and DenseNet121 + GAP + mRMR-based) outperformed the CNN models.</p><p><strong>Conclusions: </strong>The new DenseNet121 + GAP + mRMR-based support vector machine model developed in this study achieved higher performance in periodontal bone loss classification compared to other models in the literature by detecting effective features from raw images without the need for manual selection.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"32-42"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003609/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
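mRMR, used above to reduce the pooled CNN features, greedily selects features that are relevant to the target yet non-redundant with those already chosen. A toy sketch using absolute Pearson correlation for both terms (one common variant; the study's exact scoring function is not given in the abstract):

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def mrmr(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance(feature, target) minus mean redundancy with the
    already-selected features."""
    remaining = list(range(len(features)))
    selected = []
    while remaining and len(selected) < k:
        def score(i):
            rel = abs(pearson(features[i], target))
            red = (sum(abs(pearson(features[i], features[j])) for j in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical data: f1 is a near-duplicate of f0, f2 is weak but independent.
target = [1, 2, 3, 4, 5]
f0 = [1, 2, 3, 5, 4]      # strongly relevant
f1 = [2, 4, 7, 10, 8]     # near-duplicate of f0 (redundant)
f2 = [3, 1, 4, 1, 5]      # weakly relevant, uncorrelated with f0
selected = mrmr([f0, f1, f2], target, 2)
# the redundant f1 is skipped in favour of the independent f2
```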
{"title":"Skeletal facial asymmetry: reliability of manual and artificial intelligence-driven analysis.","authors":"Natalia Kazimierczak, Wojciech Kazimierczak, Zbigniew Serafin, Paweł Nowicki, Tomasz Jankowski, Agnieszka Jankowska, Joanna Janiszewska-Olszowska","doi":"10.1093/dmfr/twad006","DOIUrl":"10.1093/dmfr/twad006","url":null,"abstract":"<p><strong>Objectives: </strong>To compare artificial intelligence (AI)-driven web-based platform and manual measurements for analysing facial asymmetry in craniofacial CT examinations.</p><p><strong>Methods: </strong>The study included 95 craniofacial CT scans from patients aged 18-30 years. The degree of asymmetry was measured based on AI platform-predefined anatomical landmarks: sella (S), condylion (Co), anterior nasal spine (ANS), and menton (Me). The concordance between the results of automatic asymmetry reports and manual linear 3D measurements was calculated. The asymmetry rate (AR) indicator was determined for both automatic and manual measurements, and the concordance between them was calculated. The repeatability of manual measurements in 20 randomly selected subjects was assessed. The concordance of measurements of quantitative variables was assessed with the intraclass correlation coefficient (ICC) according to the Shrout and Fleiss classification.</p><p><strong>Results: </strong>Erroneous AI tracings were found in 16.8% of cases, reducing the analysed cases to 79. The agreement between automatic and manual asymmetry measurements was very low (ICC < 0.3). A lack of agreement between AI and manual AR analysis (ICC type 3 = 0) was found. The repeatability of manual measurements and AR calculations showed excellent correlation (ICC type 2 > 0.947).</p><p><strong>Conclusions: </strong>The results indicate that the rate of tracing errors and lack of agreement with manual AR analysis make it impossible to use the tested AI platform to assess the degree of facial asymmetry.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"52-59"},"PeriodicalIF":3.3,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003660/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
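In the Shrout and Fleiss scheme cited above, ICC(2,1) is the two-way random-effects, absolute-agreement, single-measurement coefficient. A small self-contained sketch with hypothetical ratings (two raters, five subjects):

```python
def icc_2_1(scores):
    """Shrout & Fleiss ICC(2,1): two-way random effects, absolute
    agreement, single measurement. `scores` has one row of ratings
    per subject and one column per rater."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(scores[i][j] for i in range(n)) / n for j in range(k)]
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)   # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)   # raters
    sse = sum((scores[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                                # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical ratings: two raters score 5 subjects, one disagreement.
ratings = [[8, 8], [7, 8], [9, 9], [10, 10], [6, 6]]
icc = icc_2_1(ratings)
```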
{"title":"Morphological variation of gubernacular tracts for permanent mandibular canines in eruption: a three-dimensional analysis.","authors":"Pei Liu, Renpeng Li, Yong Cheng, Bo Li, Lili Wei, Wei Li, Xiaolong Guo, Hang Li, Fang Wang","doi":"10.1093/dmfr/twad008","DOIUrl":"10.1093/dmfr/twad008","url":null,"abstract":"<p><strong>Objectives: </strong>This study aims to evaluate the morphological features of gubernacular tract (GT) for erupting permanent mandibular canines at different ages from 5 to 9 years old with a three-dimensional (3D) measurement method.</p><p><strong>Methods: </strong>The cone-beam CT images of 50 patients were divided into five age groups. The 3D models of the GT for mandibular canines were reconstructed and analysed. The characteristics of the GT, including length, diameter, ellipticity, tortuosity, superficial area, volume, and the angle between the canine and GT, were evaluated using a centreline fitting algorithm.</p><p><strong>Results: </strong>Among the 100 GTs that were examined, the length of the GT for mandibular canines decreased between the ages of 5 and 9 years, while the diameter increased until the age of 7 years. Additionally, the ellipticity and tortuosity of the GT decreased as age advanced. The superficial area and volume exhibited a trend of initially increasing and then decreasing. The morphological variations of the GT displayed heterogeneous changes during different periods.</p><p><strong>Conclusions: </strong>The 3D measurement method effectively portrayed the morphological attributes of the GT for mandibular canines. The morphological characteristics of the GT during the eruption process exhibited significant variations. The variations in morphological changes may indicate different stages of mandibular canine eruption.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"60-66"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003659/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
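Tortuosity, one of the centreline-derived GT characteristics listed above, is commonly computed as centreline path length divided by the straight-line distance between its endpoints. A minimal sketch under that assumption (the study's exact definition is not given in the abstract), with hypothetical centreline points:

```python
import math

def tortuosity(points):
    """Path length of a 3D centreline divided by the straight-line
    (chord) distance between its endpoints; 1.0 means perfectly straight."""
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return path / chord

# Hypothetical centrelines (x, y, z), units arbitrary.
straight = [(0, 0, 0), (0, 0, 1), (0, 0, 2)]
bent = [(0, 0, 0), (1, 0, 1), (0, 0, 2)]
```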
{"title":"A nomogram based on ultrasound scoring system for differentiating between immunoglobulin G4-related sialadenitis and primary Sjögren syndrome.","authors":"Huan-Zhong Su, Long-Cheng Hong, Mei Huang, Feng Zhang, Yu-Hui Wu, Zuo-Bing Zhang, Xiao-Dong Zhang","doi":"10.1093/dmfr/twad005","DOIUrl":"10.1093/dmfr/twad005","url":null,"abstract":"<p><strong>Objectives: </strong>Accurate distinguishing between immunoglobulin G4-related sialadenitis (IgG4-RS) and primary Sjögren syndrome (pSS) is crucial due to their different treatment approaches. This study aimed to construct and validate a nomogram based on the ultrasound (US) scoring system for the differentiation of IgG4-RS and pSS.</p><p><strong>Methods: </strong>A total of 193 patients with a clinical diagnosis of IgG4-RS or pSS treated at our institution were enrolled in the training cohort (n = 135; IgG4-RS = 28, pSS = 107) and the validation cohort (n = 58; IgG4-RS = 15, pSS = 43). The least absolute shrinkage and selection operator regression algorithm was utilized to select the optimal clinical features and US scoring parameters. A model for the differential diagnosis of IgG4-RS or pSS was built using logistic regression and visualized as a nomogram. The performance levels of the nomogram model were evaluated and validated in both the training and validation cohorts.</p><p><strong>Results: </strong>The nomogram incorporating clinical features and US scoring parameters showed good predictive value in differentiating IgG4-RS from pSS, with areas under the curve of 0.947 and 0.958 for the training and validation cohorts, respectively. Decision curve analysis demonstrated that the nomogram was clinically useful.</p><p><strong>Conclusions: </strong>A nomogram based on the US scoring system showed favourable predictive efficacy in differentiating IgG4-RS from pSS. It has the potential to aid in clinical decision-making.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"43-51"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003662/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning for tooth identification and numbering on dental radiography: a systematic review and meta-analysis.","authors":"Soroush Sadr, Rata Rokhshad, Yasaman Daghighi, Mohsen Golkar, Fateme Tolooie Kheybari, Fatemeh Gorjinejad, Atousa Mataji Kojori, Parisa Rahimirad, Parnian Shobeiri, Mina Mahdian, Hossein Mohammad-Rahimi","doi":"10.1093/dmfr/twad001","DOIUrl":"10.1093/dmfr/twad001","url":null,"abstract":"<p><strong>Objectives: </strong>Improved tools based on deep learning can be used to accurately number and identify teeth. This study aims to review the use of deep learning in tooth numbering and identification.</p><p><strong>Methods: </strong>An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models with segmentation, object detection, or classification tasks for teeth identification and numbering of human dental radiographs were included. For risk of bias assessment, included studies were critically analysed using quality assessment of diagnostic accuracy studies (QUADAS-2). To generate plots for meta-analysis, MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used. Pooled diagnostic odds ratios (DORs) were calculated.</p><p><strong>Results: </strong>The initial search yielded 1618 studies, of which 29 were eligible based on the inclusion criteria. Five studies were found to have low bias across all domains of the QUADAS-2 tool. Deep learning has been reported to have an accuracy range of 81.8%-99% in tooth identification and numbering and a precision range of 84.5%-99.94%. Furthermore, sensitivity was reported as 82.7%-98% and F1-scores ranged from 87% to 98%. Sensitivity was 75.5%-98% and specificity was 79.9%-99%. Only 6 studies found the deep learning model to be less than 90% accurate. The average DOR of the pooled data set was 1612, the sensitivity was 89%, the specificity was 99%, and the area under the curve was 96%.</p><p><strong>Conclusion: </strong>Deep learning models can successfully detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":" ","pages":"5-21"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003608/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139106005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
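The diagnostic odds ratio summarizing the pooled results above can be expressed in terms of sensitivity and specificity. This illustrates the definition only: a pooled meta-analytic DOR is estimated from the study-level 2×2 tables, not from the pooled averages, so the value below is not expected to reproduce the reported 1612.

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens / (1 - sens)) / ((1 - spec) / spec), i.e. the odds of a
    positive test in the diseased divided by the odds in the non-diseased."""
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# Plugging in the pooled estimates reported above (0.89 and 0.99).
dor = diagnostic_odds_ratio(0.89, 0.99)
```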
{"title":"Machine learning assessment of dental age classification based on cone-beam CT images: a different approach.","authors":"Ozlem B Dogan, Hatice Boyacioglu, Dincer Goksuluk","doi":"10.1093/dmfr/twad009","DOIUrl":"10.1093/dmfr/twad009","url":null,"abstract":"<p><strong>Objectives: </strong>Machine learning (ML) algorithms are a branch of artificial intelligence that can be used to create more accurate procedures for estimating an individual's dental age or assigning an age classification. This study aims to use ML algorithms to evaluate the efficacy of pulp/tooth area ratio (PTR) in cone-beam CT (CBCT) images to predict dental age classification in adults.</p><p><strong>Methods: </strong>CBCT images of 236 Turkish individuals (121 males and 115 females) from 18 to 70 years of age were included. PTRs were calculated for six teeth in each individual, and a total of 1416 PTRs encompassed the study dataset. Support vector machine, classification and regression tree, and random forest (RF) models for dental age classification were employed. The accuracy of these techniques was compared. To facilitate this evaluation process, the available data were partitioned into training and test datasets, maintaining a proportion of 70% for training and 30% for testing across the spectrum of ML algorithms employed. The correct classification performances of the trained models were evaluated.</p><p><strong>Results: </strong>Overall, the models' performance was low. The highest accuracy and confidence intervals belonged to the RF algorithm.</p><p><strong>Conclusions: </strong>According to our results, the models performed poorly but represent a different approach. We suggest examining the different parameters derived from different measuring techniques in the data obtained from CBCT images in order to develop ML algorithms for age classification in forensic situations.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"67-73"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003658/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic detection of posterior superior alveolar artery in dental cone-beam CT images using a deeply supervised multi-scale 3D network.","authors":"Jae-An Park, DaEl Kim, Su Yang, Ju-Hee Kang, Jo-Eun Kim, Kyung-Hoe Huh, Sam-Sun Lee, Won-Jin Yi, Min-Suk Heo","doi":"10.1093/dmfr/twad002","DOIUrl":"10.1093/dmfr/twad002","url":null,"abstract":"<p><strong>Objectives: </strong>This study aimed to develop a robust and accurate deep learning network for detecting the posterior superior alveolar artery (PSAA) in dental cone-beam CT (CBCT) images, focusing on the precise localization of the centre pixel as a critical centreline pixel.</p><p><strong>Methods: </strong>PSAA locations were manually labelled on dental CBCT data from 150 subjects. The left maxillary sinus images were horizontally flipped. In total, 300 datasets were created. Six different deep learning networks were trained, including 3D U-Net, deeply supervised 3D U-Net (3D U-Net DS), multi-scale deeply supervised 3D U-Net (3D U-Net MSDS), 3D Attention U-Net, 3D V-Net, and 3D Dense U-Net. The performance evaluation involved predicting the centre pixel of the PSAA. This was assessed using mean absolute error (MAE), mean radial error (MRE), and successful detection rate (SDR).</p><p><strong>Results: </strong>The 3D U-Net MSDS achieved the best prediction performance among the tested networks, with an MAE of 0.696 ± 1.552 mm and an MRE of 1.101 ± 2.270 mm. In comparison, the 3D U-Net showed the lowest performance. The 3D U-Net MSDS demonstrated an SDR of 95% within a 2 mm threshold, significantly higher than the other networks, which achieved detection rates of over 80%.</p><p><strong>Conclusions: </strong>This study presents a robust deep learning network for accurate PSAA detection in dental CBCT images, emphasizing precise centre pixel localization. The method achieves high accuracy in locating small vessels, such as the PSAA, and has the potential to enhance detection accuracy and efficiency, thus impacting oral and maxillofacial surgery planning and decision-making.</p>","PeriodicalId":11261,"journal":{"name":"Dento maxillo facial radiology","volume":"53 1","pages":"22-31"},"PeriodicalIF":2.9,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11003607/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139424458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
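Mean radial error and SDR, the landmark-evaluation metrics used above, are straightforward to compute from predicted and ground-truth coordinates. A minimal sketch with hypothetical 3D points (in mm):

```python
import math

def mre_and_sdr(pred, truth, threshold_mm=2.0):
    """Mean radial error (mm) and successful detection rate: the share of
    predictions whose radial (Euclidean) error falls within `threshold_mm`."""
    errors = [math.dist(p, t) for p, t in zip(pred, truth)]
    mre = sum(errors) / len(errors)
    sdr = sum(e <= threshold_mm for e in errors) / len(errors)
    return mre, sdr

# Hypothetical predicted vs ground-truth landmark positions (mm).
pred = [(0, 0, 0), (1, 1, 1), (5, 0, 0)]
truth = [(0, 0, 1), (1, 1, 1), (0, 0, 0)]
mre, sdr = mre_and_sdr(pred, truth)
# errors are 1.0, 0.0, and 5.0 mm; only the first two fall within 2 mm
```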