{"title":"CATNet: A Cross Attention and Texture-Aware Network for Polyp Segmentation","authors":"Zhifang Deng, Yangdong Wu","doi":"10.1002/ima.23220","DOIUrl":"https://doi.org/10.1002/ima.23220","url":null,"abstract":"<div>\u0000 \u0000 <p>Polyp segmentation is a challenging task, as some polyps exhibit similar textures to surrounding tissues, making them difficult to distinguish. Therefore, we present a parallel cross-attention and texture-aware network to address this challenging task. CATNet incorporates the parallel cross-attention mechanism, Residual Feature Fusion Module, and texture-aware module. Initially, polyp images undergo processing in our backbone network to extract multi-level polyp features. Subsequently, the parallel cross-attention mechanism sequentially captures channel and spatial dependencies across multi-scale polyp features, thereby yielding enhanced representations. These enhanced representations are then input into multiple texture-aware modules, which facilitate polyp segmentation by accentuating subtle textural disparities between polyps and the background. Finally, the Residual Feature Fusion module integrates the segmentation results with the previous layer of enhanced representations. This process serves to eliminate background noise and enhance intricate details. We assess the efficacy of our proposed method across five distinct polyp datasets. On three unseen datasets, CVC-300, CVC-ColonDB, and ETIS. We achieve mDice scores of 0.916, 0.817, and 0.777, respectively. Experimental results unequivocally demonstrate the superior performance of our approach over current models. The proposed CATNet addresses the challenges posed by textural similarities, setting a benchmark for future advancements in automated polyp detection and segmentation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Predicting the Early Detection of Breast Cancer Using Hybrid Machine Learning Systems and Thermographic Imaging","authors":"Mohammad Mehdi Hosseini, Zahra Mosahebeh, Somenath Chakraborty, Abdorreza Alavi Gharahbagh","doi":"10.1002/ima.23211","DOIUrl":"https://doi.org/10.1002/ima.23211","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer is a leading cause of mortality among women, emphasizing the critical need for precise early detection and prognosis. However, conventional methods often struggle to differentiate precancerous lesions or tailor treatments effectively. Thermal imaging, capturing subtle temperature variations, presents a promising avenue for non-invasive cancer detection. While some studies explore thermography for breast cancer detection, integrating it with advanced machine learning for early diagnosis and personalized prediction remains relatively unexplored. This study proposes a novel hybrid machine learning system (HMLS) incorporating deep autoencoder techniques for automated early detection and prognostic stratification of breast cancer patients. By exploiting the temporal dynamics of thermographic data, this approach offers a more comprehensive analysis than static single-frame approaches. Data processing involves splitting the dataset for training and testing. A predominant infrared image was selected, and matrix factorization was applied to capture temperature changes over time. Integration of convex factor analysis and bell-curve membership function embedding for dimensionality reduction and feature extraction. The autoencoder deep neural network further reduces dimensionality. HMLS model development included feature selection and optimization of survival prediction algorithms through cross-validation. Model performance was assessed using accuracy and F-measure metrics. HMLS, integrating clinical data, achieved 81.6% accuracy, surpassing 77.6% using only convex-NMF. The best classifier attained 83.2% accuracy on test data. This study demonstrates the effectiveness of thermographic imaging and HMLS for accurate early detection and personalized prediction of breast cancer. The proposed framework holds promise for enhancing patient care and potentially reducing mortality rates.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142664749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VMC-UNet: A Vision Mamba-CNN U-Net for Tumor Segmentation in Breast Ultrasound Image","authors":"Dongyue Wang, Weiyu Zhao, Kaixuan Cui, Yi Zhu","doi":"10.1002/ima.23222","DOIUrl":"https://doi.org/10.1002/ima.23222","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer remains one of the most significant health threats to women, making precise segmentation of target tumors critical for early clinical intervention and postoperative monitoring. While numerous convolutional neural networks (CNNs) and vision transformers have been developed to segment breast tumors from ultrasound images, both architectures encounter difficulties in effectively modeling long-range dependencies, which are essential for accurate segmentation. Drawing inspiration from the Mamba architecture, we introduce the Vision Mamba-CNN U-Net (VMC-UNet) for breast tumor segmentation. This innovative hybrid framework merges the long-range dependency modeling capabilities of Mamba with the detailed local representation power of CNNs. A key feature of our approach is the implementation of a residual connection method within the U-Net architecture, utilizing the visual state space (VSS) module to extract long-range dependency features from convolutional feature maps effectively. Additionally, to better integrate texture and structural features, we have designed a bilinear multi-scale attention module (BMSA), which significantly enhances the network's ability to capture and utilize intricate feature details across multiple scales. Extensive experiments conducted on three public datasets demonstrate that the proposed VMC-UNet surpasses other state-of-the-art methods in breast tumor segmentation, achieving Dice coefficients of 81.52% for BUSI, 88.00% for BUS, and 88.96% for STU. The source code is accessible at https://github.com/windywindyw/VMC-UNet.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Suppression of the Tissue Component With the Total Least-Squares Algorithm to Improve Second Harmonic Imaging of Ultrasound Contrast Agents","authors":"Jingying Zhu, Yufeng Zhang, Bingbing He, Zhiyao Li, Li Xiong, Xun Lang","doi":"10.1002/ima.23218","DOIUrl":"https://doi.org/10.1002/ima.23218","url":null,"abstract":"<div>\u0000 \u0000 <p>The second harmonic (SH) of ultrasound contrast agents (UCAs) is widely used in contrast-enhanced ultrasound imaging; however, is affected by the nonlinearity of surrounding tissue. Suppression of the tissue component based on the total least-squares (STLS) algorithm is proposed to enhance the SH imaging of UCAs. The image blocks of pulse-inversion-based SH images before and after UCA injections are set as the reference and input of the total least-squares model, respectively. The optimal coefficients of the model are obtained by minimizing the Frobenius norm of perturbations in the input and output signals. After processing all image blocks, the complete SH image of UCAs is obtained by subtracting the optimal output of the model (i.e., the estimated tissue SH image) from the SH image after UCA injection. Simulation and in vivo experiments confirm that the STLS approach offers clearer capillaries. For in vivo experiments, the STLS-based contrast-to-tissue ratios and contrasts increase by 26.90% and 56.27%, as well as 26.99% and 56.43%, respectively, compared with those based on bubble-echo deconvolution and pulse inversion bubble-wavelet imaging methods. The STLS approach enhances the SH imaging of UCAs by effectively suppressing more tissue SH components, having the potential to provide more accurate diagnostic information for clinical applications.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Segmentation and Classification of Breast Masses From the Whole Mammography Images Using Transfer Learning and BI-RADS Characteristics","authors":"Hayette Oudjer, Assia Cherfa, Yazid Cherfa, Noureddine Belkhamsa","doi":"10.1002/ima.23216","DOIUrl":"https://doi.org/10.1002/ima.23216","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer is the most prevalent cancer among women worldwide, highlighting the critical need for its accurate detection and early diagnosis. In this context, the segmentation of breast masses (the most common symptom of breast cancer) plays a crucial role in analyzing mammographic images. In addition, in image processing, the analysis of mammographic images is very common, but certain combinations of mathematical tools have never been exploited. We propose a computer-aided diagnosis (CAD) system designed with different and new algorithm combinations for the segmentation and classification of breast masses based on the Breast Imaging-Reporting and Data System (BI-RADS) lexicon. The image is initially divided into superpixels using the simple linear iterative clustering (SLIC) algorithm. Fine-tuning of ResNet50, EfficientNetB2, MobileNetV2, and InceptionV3 models is employed to extract features from superpixels. The classification of each superpixel as background or breast mass is performed by feeding the extracted features into a support vector machine (SVM) classifier, resulting in an accurate primary segmentation for breast masses, refined by the GrabCut algorithm with automated initialization. Finally, we extract contour, texture, and shape parameters from the segmented regions for the classification of masses into BI-Rads 2, 3, 4, and 5 using the gradient boost (GB) classifier while also examining the impact of the surrounding tissue. The proposed method was evaluated on the INBreast database, achieving a Dice score of 87.65% and a sensitivity of 87.96% for segmentation. For classification, we obtained a sensitivity of 88.66%, a precision of 90.51%, and an area under the curve (AUC) of 97.8%. The CAD system demonstrates high accuracy in both the segmentation and classification of breast masses, providing a reliable tool for aiding breast cancer diagnosis using the BI-Rads lexicon. The study also showed that the surrounding tissue has an impact on classification, leading to the importance of choosing the right size of ROIs.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142642168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dual-Low Technology in Coronary and Abdominal CT Angiography: A Comparative Study of Deep Learning Image Reconstruction and Adaptive Statistic Iterative Reconstruction-Veo","authors":"Zhanao Meng, Qing Xiang, Jian Cao, Yahao Guo, Sisi Deng, Tao Luo, Yue Zhang, Ke Zhang, Xuan Zhu, Kun Ma, Xiaohong Wang, Jie Qin","doi":"10.1002/ima.23217","DOIUrl":"https://doi.org/10.1002/ima.23217","url":null,"abstract":"<div>\u0000 \u0000 <p>To investigate the application advantages of dual-low technology (low radiation dose and low contrast agent dose) in deep learning image reconstruction (DLIR) compared to the adaptive statistical iterative reconstruction-Veo (ASIR-V) standard protocol when combing coronary computed tomography angiography (CCTA) and abdominal computed tomography angiography (ACTA). Sixty patients who underwent CCTA and ACTA were recruited. Thirty patients with low body mass index (BMI) (< 24 kg/m<sup>2</sup>, Group A, standard protocol) were reconstructed using 60% ASIR-V, and 30 patients with high BMI (> 24 kg/m<sup>2</sup>, Group B, dual-low protocol) were reconstructed using DLIR at high strength (DLIR-H). The effective dose and contrast agent dose were recorded. The CT values, standard deviations, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were measured. The subjective evaluation criteria were scored by two radiologists using a blind Likert 5-point scale. The general data, objective evaluation, and subjective scores between both groups were compared using corresponding test methods. The consistency of objective and subjective evaluations between the two radiologists were analyzed using Kappa tests. Group B showed a remarkable 44.6% reduction in mean effective dose (<i>p</i> < 0.01) and a 20.3% decrease in contrast agent dose compared to Group A (<i>p</i> < 0.01). The DLIR-H demonstrated the smallest standard deviations and highest SNR and CNR values (<i>p</i> < 0.01). The subjective score of DLIR-H was the highest (<i>p</i> < 0.01), and there was good consistency between the two radiologists in the subjective scoring of CCTA and ACTA image quality (κ = 0.751 ~ 0.919, <i>p</i> < 0.01). In combined CCTA and ACTA protocols, DLIR can significantly reduce the effective dose and contrast agent dose compared to ASIR-V while maintaining good image quality.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142642169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Deep Learning Approach With Transfer Learning for 7-Stages Diabetic Retinopathy Classification","authors":"Abdul Qadir Khan, Guangmin Sun, Majdi Khalid, Majed Farrash, Anas Bilal","doi":"10.1002/ima.23213","DOIUrl":"https://doi.org/10.1002/ima.23213","url":null,"abstract":"<div>\u0000 \u0000 <p>Proposed novel investigation focused on leveraging an innovative diabetic retinopathy (DR) dataset comprising seven severity stages, an approach not previously examined. By capitalizing on this unique resource, this study′s findings set a new benchmark for DR classification, highlighting the transformative potential of incorporating advanced data into AI models. This study developed a Vgg16 transfer learning model and gauged its performance against established algorithms including Vgg-19, AlexNet, and SqueezeNet. Remarkably, our results achieved accuracy rates of 96.95, 96.75, 96.09, and 92.96, respectively, emphasizing the contribution of our work. We strongly emphasized comprehensive severity rating, yielding perfect and impressive F1-scores of 1.00 for “mild NPDR” and 97.00 for “no DR signs.” The Vgg16-TL model consistently outperformed other models across all severity levels, reinforcing the value of our discoveries. Our deep learning training process, carefully selecting a learning rate of 1e-05, allowed continuous refinements in training and validation accuracy. Beyond metrics, our investigation underscores the vital clinical importance of precise DR classification for preventing vision loss. This study conclusively establishes deep learning as a powerful transformative tool for developing effective DR algorithms with the potential to improve patient outcomes and advance ophthalmology standards.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641976","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"YOLOv7-XAI: Multi-Class Skin Lesion Diagnosis Using Explainable AI With Fair Decision Making","authors":"Nirmala Veeramani, Premaladha Jayaraman","doi":"10.1002/ima.23214","DOIUrl":"https://doi.org/10.1002/ima.23214","url":null,"abstract":"<div>\u0000 \u0000 <p>Skin cancer, a prevalent and potentially life-threatening condition, demands accurate and timely detection for effective intervention. It is an uncontrolled growth of abnormal cells in the human body. Studies are underway to determine if a skin lesion is benign (non-cancerous) or malignant (cancerous), but the biggest challenge for a doctor is determining the type of skin cancer. As a result, determining the type of tumour is crucial for the right course of treatment. In this study, we introduce a groundbreaking approach to multi-class skin cancer detection by harnessing the power of Explainable Artificial Intelligence (XAI) in conjunction with a customised You Only Look Once (YOLOv7) architecture. Our research focuses on enhancing the YOLOv7 framework to accurately discern 8 different skin cancer classes, including melanoma, basal cell carcinoma, and squamous cell carcinoma. The YOLOv7 model is the robust backbone, enriched with features tailored for precise multi-class classification. Concurrently, integrating XAI elements, Local Interpretable Modal-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) ensures transparent decision-making processes, enabling healthcare professionals to interpret and trust the model's predictions. This innovative synergy between YOLOv7 and XAI heralds a new era in interpretability, resulting in high-performance skin cancer diagnostics. The obtained results are 96.8%, 94.2%, 95.6%, and 95.8%, evaluated with popular quantitative metrics such as accuracy, precision, recall, and F1 score, respectively.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Risk Stratification Study of Ultrasound Images of Thyroid Nodules Based on Improved DETR","authors":"Zhang Le, Yue Liang, Xiaokang Hu, Taorong Qiu, Pan Xu","doi":"10.1002/ima.23219","DOIUrl":"https://doi.org/10.1002/ima.23219","url":null,"abstract":"<div>\u0000 \u0000 <p>The Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) standard is based on the Chinese current medical context. However, at present, there is a lack of C-TIRADS-based automatic computer-aided diagnosis system for thyroid nodule ultrasound images, and the existing algorithms for detecting and recognizing thyroid nodules are basically for the dichotomous classification of benign and malignant. We used the DETR (detection transformer) model as a baseline model and carried out model enhancements to address the shortcomings of unsatisfactory classification accuracy and difficulty in detecting small thyroid nodules in the DETR model. First, to investigate the method of acquiring multi-scale features of thyroid nodule ultrasound images, we choose TResNet-L as the feature extraction network and introduce multi-scale feature information and group convolution, thereby enhancing the model's multi-label classification accuracy. Second, a parallel decoder architecture for multi-label thyroid nodule ultrasound image classification is designed to enhance the learning of correlation between pathological feature class labels, aiming to improve the multi-label classification accuracy of the detection model. Third, the loss function of the detection model is improved. We propose a linear combination of Smooth L1-Loss and CIoU Loss as the model's bounding box loss function and asymmetric loss as the model's multi-label classification loss function, aiming to further improve the detection model's detection accuracy for small thyroid nodules. The experiment results show that the improved DETR model achieves an AP of 92.4% and 81.6% with IoU thresholds of 0.5 and 0.75, respectively.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning and Handcrafted Features for Thyroid Nodule Classification","authors":"Ayoub Abderrazak Maarouf, Hacini meriem, Fella Hachouf","doi":"10.1002/ima.23215","DOIUrl":"https://doi.org/10.1002/ima.23215","url":null,"abstract":"<div>\u0000 \u0000 <p>In this research, we present a refined image-based computer-aided diagnosis (CAD) system for thyroid cancer detection using ultrasound imagery. This system integrates a specialized convolutional neural network (CNN) architecture designed to address the unique aspects of thyroid image datasets. Additionally, it incorporates a novel statistical model that utilizes a two-dimensional random coefficient autoregressive (2D-RCA) method to precisely analyze the textural characteristics of thyroid images, thereby capturing essential texture-related information. The classification framework relies on a composite feature vector that combines deep learning features from the CNN and handcrafted features from the 2D-RCA model, processed through a support vector machine (SVM) algorithm. Our evaluation methodology is structured in three phases: initial assessment of the 2D-RCA features, analysis of the CNN-derived features, and a final evaluation of their combined effect on classification performance. Comparative analyses with well-known networks such as VGG16, VGG19, ResNet50, and AlexNet highlight the superior performance of our approach. The outcomes indicate a significant enhancement in diagnostic accuracy, achieving a classification accuracy of 97.2%, a sensitivity of 84.42%, and a specificity of 95.23%. These results not only demonstrate a notable advancement in the classification of thyroid nodules but also establish a new standard in the efficiency of CAD systems.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 6","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142641632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}