Zihang Huang, Yaning Feng, Lilin Guo, Qiutao Shi, Wei Jin
{"title":"全自动下颌髁分割:更详细的提取与混合定制SAM","authors":"Zihang Huang, Yaning Feng, Lilin Guo, Qiutao Shi, Wei Jin","doi":"10.1002/ima.70138","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Accurate segmentation of the mandibular condyle is a key step in three-dimensional reconstruction, which is clinically crucial for digital surgical planning in oral and maxillofacial surgery. Quantitative analysis of its volume and morphology can provide an objective basis for preoperative assessment and postoperative efficacy evaluation. Although many deep learning-based approaches have achieved remarkable success, several challenges persist. Current methods are constrained by low-resolution global image maps, produce masks with blurred boundaries, and require large datasets to ensure accuracy and robustness. To address these challenges, we propose a novel framework for condylar segmentation by adapting the “Segmentation Anything Model” (SAM) to cone beam computed tomography (CBCT) imaging data, with targeted architectural optimizations to enhance segmentation accuracy and boundary delineation. Our framework introduces two novel architectural components: (1) a dual-adapter system combining feature augmentation and transformer-level prompt enhancement to improve target-specific contextual learning, and (2) a boundary-optimized loss function that prioritizes anatomical edge fidelity. For clinical practicality, we further develop ConDetector to enable fully automated prompting without manual intervention. Through extensive experiments, we have shown that our adapted SAM (using Ground Truth as a prompt) achieves state-of-the-art performance, reaching a Dice coefficient of 94.73% on a relatively small sample set. The fully automated SAM even achieves the second-best segmentation performance, with a Dice coefficient of 94.00%. 
Our approach exhibits robust segmentation capabilities and achieves excellent performance even with limited training data.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 4","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fully Automated Mandibular Condyle Segmentation: More Detailed Extraction With Hybrid Customized SAM\",\"authors\":\"Zihang Huang, Yaning Feng, Lilin Guo, Qiutao Shi, Wei Jin\",\"doi\":\"10.1002/ima.70138\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>Accurate segmentation of the mandibular condyle is a key step in three-dimensional reconstruction, which is clinically crucial for digital surgical planning in oral and maxillofacial surgery. Quantitative analysis of its volume and morphology can provide an objective basis for preoperative assessment and postoperative efficacy evaluation. Although many deep learning-based approaches have achieved remarkable success, several challenges persist. Current methods are constrained by low-resolution global image maps, produce masks with blurred boundaries, and require large datasets to ensure accuracy and robustness. To address these challenges, we propose a novel framework for condylar segmentation by adapting the “Segmentation Anything Model” (SAM) to cone beam computed tomography (CBCT) imaging data, with targeted architectural optimizations to enhance segmentation accuracy and boundary delineation. Our framework introduces two novel architectural components: (1) a dual-adapter system combining feature augmentation and transformer-level prompt enhancement to improve target-specific contextual learning, and (2) a boundary-optimized loss function that prioritizes anatomical edge fidelity. 
For clinical practicality, we further develop ConDetector to enable fully automated prompting without manual intervention. Through extensive experiments, we have shown that our adapted SAM (using Ground Truth as a prompt) achieves state-of-the-art performance, reaching a Dice coefficient of 94.73% on a relatively small sample set. The fully automated SAM even achieves the second-best segmentation performance, with a Dice coefficient of 94.00%. Our approach exhibits robust segmentation capabilities and achieves excellent performance even with limited training data.</p>\\n </div>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"35 4\",\"pages\":\"\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2025-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.70138\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.70138","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Fully Automated Mandibular Condyle Segmentation: More Detailed Extraction With Hybrid Customized SAM
Accurate segmentation of the mandibular condyle is a key step in three-dimensional reconstruction, which is clinically crucial for digital surgical planning in oral and maxillofacial surgery. Quantitative analysis of its volume and morphology provides an objective basis for preoperative assessment and postoperative efficacy evaluation. Although many deep learning-based approaches have achieved remarkable success, several challenges persist: current methods are constrained by low-resolution global image maps, produce masks with blurred boundaries, and require large datasets to ensure accuracy and robustness. To address these challenges, we propose a novel framework for condylar segmentation by adapting the Segment Anything Model (SAM) to cone-beam computed tomography (CBCT) imaging data, with targeted architectural optimizations that enhance segmentation accuracy and boundary delineation. Our framework introduces two novel architectural components: (1) a dual-adapter system combining feature augmentation with transformer-level prompt enhancement to improve target-specific contextual learning, and (2) a boundary-optimized loss function that prioritizes anatomical edge fidelity. For clinical practicality, we further develop ConDetector to enable fully automated prompting without manual intervention. Extensive experiments show that our adapted SAM (using the ground truth as a prompt) achieves state-of-the-art performance, reaching a Dice coefficient of 94.73% on a relatively small sample set, while the fully automated variant achieves the second-best performance, with a Dice coefficient of 94.00%. Our approach exhibits robust segmentation capabilities and performs well even with limited training data.
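The Dice coefficients reported in the abstract (94.73% and 94.00%) measure overlap between a predicted mask and the ground-truth mask. As a generic illustration only (not the authors' implementation), the metric can be computed for binary segmentation masks as follows:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|).

    `eps` guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

# Toy 2D masks: 2 overlapping pixels out of 3 each -> Dice = 2*2/(3+3) ≈ 0.667
pred = np.array([[1, 1, 0],
                 [1, 0, 0]])
gt   = np.array([[1, 1, 0],
                 [0, 1, 0]])
print(round(dice_coefficient(pred, gt), 3))
```

In 3D CBCT segmentation the same formula applies voxel-wise; a Dice value near 1.0 indicates near-perfect agreement with the reference annotation.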
About the Journal:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, and replication studies, as well as negative results, are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision, based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.