Hua Chen, Chong Liu, Xiaoshi Cheng, Chenjun Jiang, Ying Wang
Title: A Thyroid Nodule Ultrasound Image Grading Model Integrating Medical Prior Knowledge
Journal: Journal of imaging informatics in medicine
DOI: 10.1007/s10278-024-01120-y
Publication date: 2025-03-10 (Journal Article)
Citations: 0
Abstract
In recent years, research on computer-aided diagnosis (CAD) using deep learning and image processing techniques has grown steadily; however, most studies have focused on the benign-malignant classification of nodules. In this study, we propose an integrated architecture for grading thyroid nodules based on the Chinese Thyroid Imaging Reporting and Data System (C-TIRADS). The method combines traditional handcrafted features with deep features during feature extraction. In the preprocessing stage, a pseudo-artifact removal algorithm based on the fast marching method (FMM) is applied, followed by hybrid median filtering for noise reduction; contrast-limited adaptive histogram equalization is then used to restore and enhance the information in the ultrasound images. In the feature extraction stage, an improved ShuffleNetV2 network with a multi-head self-attention mechanism is used, and its extracted features are fused with medical prior-knowledge features. Finally, multi-class classification is performed with the eXtreme Gradient Boosting (XGBoost) classifier. The dataset used in this study consists of 922 original images: 149 of class 2, 140 of class 3, 156 of class 4A, 114 of class 4B, 123 of class 4C, and 240 of class 5. The model is trained for 2000 epochs. The accuracy, precision, recall, F1 score, and AUC of the proposed method are 97.17%, 97.65%, 97.17%, 0.9834, and 0.9855, respectively. The results demonstrate that fusing medical prior knowledge based on C-TIRADS with deep features from convolutional neural networks can effectively improve the overall performance of thyroid nodule diagnosis, providing a feasible new direction for developing clinical CAD systems for thyroid nodule ultrasound diagnosis.