International Journal of Imaging Systems and Technology: Latest Articles

Integrating Cross-Scale Attention With Atrous Spatial Pyramid Pooling for Accurate Optic Disc and Cup Segmentation
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-18 DOI: 10.1002/ima.70204
Chenglu Zong, Weiwei Gao, Yu Fang, Fengjuan Gao, Zuxiang Wang
The Cup-to-Disc Ratio (CDR) is a valuable metric for assessing the relative size of the Optic Cup (OC) and Optic Disc (OD), and it plays a crucial role in glaucoma diagnosis. Accurate segmentation of the OC and OD is therefore the first step toward reliable glaucoma detection. Precise segmentation is challenging, however, because blood vessels traverse the OC and OD regions and because both structures have blurred boundaries and occupy relatively small proportions of the image. To address these challenges, Atrous Spatial Pyramid CrossFormer-U-Net (ACC-U-Net) is proposed for accurate OC and OD segmentation. CrossFormer is integrated into the encoder to improve the integrity of the segmentation boundaries by constructing global attention in both the horizontal and vertical directions. An Atrous Spatial Pyramid Pooling (ASPP) head is added at the end of the decoder, allowing the model to capture multi-level feature information through multiple parallel dilated convolutions, which improves segmentation accuracy for the OC and OD and for their irregular boundaries. Finally, a combined Cross Entropy and Dice (CD) loss is introduced to strengthen the model's focus on the OC, countering the tendency of models to overlook the OC because of its small size. Ablation studies and comparative experiments were performed on three publicly available datasets. Compared with U-Net, ACC-U-Net shows significant accuracy gains on the three datasets: mean Intersection over Union (mIoU), mean Dice, and mean Accuracy (mACC) increase by 9.96%/2.75%/4.54%, 2.65%/2.94%/5.31%, and 5.89%/5.57%/4.21%, respectively. The model also outperforms nine other models on all three datasets. ACC-U-Net thus segments the OC and OD accurately, providing precise CDR values that could assist glaucoma diagnosis. Source code and pretrained models are available at https://github.com/zong1019/segmentation-OCOD.git.
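The CDR the abstract builds on is conventionally the ratio of vertical diameters of cup and disc. A minimal sketch of deriving it from predicted binary masks (the list-of-lists mask format and the vertical-diameter convention are our assumptions, not the paper's code):

```python
def vertical_diameter(mask):
    """Vertical extent (in rows) of the foreground in a binary mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    return max(rows) - min(rows) + 1 if rows else 0

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from two binary masks (lists of 0/1 rows)."""
    disc = vertical_diameter(disc_mask)
    if disc == 0:
        raise ValueError("empty optic-disc mask")
    return vertical_diameter(cup_mask) / disc

# toy 6x6 masks: disc spans all 6 rows, cup spans rows 2-4
disc = [[1] * 6 for _ in range(6)]
cup = [[0] * 6] * 2 + [[0, 0, 1, 1, 0, 0]] * 3 + [[0] * 6]
print(vertical_cdr(cup, disc))  # 0.5
```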
Citations: 0
Multiscale Rolling Attention Network With Enhanced Local and Global Features for Medical Image Segmentation
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-18 DOI: 10.1002/ima.70206
Shangwang Liu, Yusen Wang, Yinghai Lin, Xianglian Jin, Hongwei Wang, Yulin Cheng
Medical image segmentation plays a key role in disease diagnosis, but its accuracy is often constrained by the morphological and scale variability of lesions. Although existing methods alleviate this problem by fusing local and global features, they suffer from low feature-fusion efficiency and insufficient multiscale modeling. To this end, we propose the LLA network. Its core idea is to learn global contextual information along multiple directions of the whole image through a parallel dual orthogonal rolling multilayer perceptron (DOR-MLP), and to enhance the extraction of detailed features through the local perception of a windowed attention module. Multiscale field (MSF) blocks, containing four parallel convolutional branches of different kernel sizes, are introduced in the skip connections to extract more comprehensive and richer feature information at different scales. The encoder and decoder use double-layer convolution and residual connections for efficient feature extraction. Experiments on the BUSI, PH2, and DDTI datasets show that the IoU reaches 73.32%, 90.96%, and 70.89%, respectively; the method effectively captures local and global information and achieves better segmentation results than other state-of-the-art methods.
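The IoU figures quoted above are the standard overlap metric. A pure-Python sketch over flat 0/1 masks (toy data, not the paper's evaluation code):

```python
def iou(pred, target):
    """Intersection-over-Union for two binary masks given as flat 0/1 sequences."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # both masks empty: define IoU as 1

print(iou([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))  # 2 overlapping / 4 in union = 0.5
```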
Citations: 0
Diagnose Like a Doctor: A Vision-Guided Global–Local Fusion Network for Chest Disease Diagnosis
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-17 DOI: 10.1002/ima.70203
Guangli Li, Xinjiong Zhou, Chentao Huang, Jingqin Lv, Hongbin Zhang, Donghong Ji, Jianguo Wu
Chest diseases are among the most common diseases worldwide. Deep neural networks for chest disease diagnosis are usually limited by the need for extensive manual labeling and by insufficient interpretability. To this end, we propose a dual-branch framework, the Vision-Guided global–local fusion network (VGFNet), which diagnoses chest disease the way an experienced doctor does. We first introduce radiologists' eye-tracking data as a low-cost, easily accessible information source that implicitly contains rich but largely unexplored pathological knowledge about lesion localization. An eye-tracking network (ETNet) is devised to learn clinical observation patterns from these data. We then propose a dual-branch network that processes global and local features simultaneously: ETNet provides approximate lesion locations to guide the learning of the local branch, while a triple convolutional attention (TCA) module embedded in the global branch refines the global features. Finally, a convolution attention fusion (CAF) module fuses the heterogeneous features from the two branches, exploiting their complementary local and global representation abilities. Extensive experiments demonstrate that VGFNet significantly improves classification performance on both multilabel and multiclass tasks, obtaining an AUC of 0.841 on ChestX-ray14 and an accuracy of 0.9820 on RAD, outperforming state-of-the-art models. We also validate the model's generalizability on ChestX-ray. This study shows that eye-tracking data increase model interpretability and open new perspectives for mining such data more deeply, and the proposed plug-and-play modules offer new ideas for feature refinement. The code for our model is available at https://github.com/ZXJ-YeYe/VGFNet.
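The AUC reported above admits the Mann-Whitney rank formulation: the probability that a random positive outscores a random negative. A small sketch (the scores and labels are toy values, not the paper's evaluation pipeline):

```python
def auc(scores, labels):
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U formulation; ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0]))  # 3 of 4 pos/neg pairs won = 0.75
```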
Citations: 0
Tensor-Based Weber Feature Representation of Brain CT Images for the Automated Classification of Ischemic Stroke
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-17 DOI: 10.1002/ima.70200
Mahesh Anil Inamdar, Anjan Gudigar, U. Raghavendra, Raja R. Azman, Nadia Fareeda Binti Muhammad Gowdh, Izzah Amirah Binti Mohd Ahir, Mohd Salahuddin Bin Kamaruddin, Ajay Hegde, U. Rajendra Acharya
Ischemic brain stroke remains a global health concern and a leading cause of mortality and long-term disability worldwide. Despite significant advances in acute stroke management, the incidence and burden of this devastating cerebrovascular event continue to increase, particularly in developing nations. This study proposes a novel machine learning approach for classifying brain stroke Computed Tomography (CT) images into subtypes using an efficient feature descriptor. The descriptor is a Modified Weber Local Descriptor (MWLD), which incorporates the structure tensor for precise orientation computation and a multi-scale approach to capture multi-resolution features. Analysis-of-variance ranking was then applied to the MWLD features for discriminative feature selection. The ranked features were tested on 4850 CT images (875 acute, 1447 chronic, and 2528 normal) with various classifiers, including nearest-neighbor and ensemble models. The methodology achieved a highest testing accuracy of 98.34% with a fine k-nearest-neighbor classifier, outperforming existing descriptors. The MWLD descriptor combined with machine learning can accurately diagnose ischemic stroke, enabling improved clinical decision support.
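The original Weber Local Descriptor that MWLD modifies is built on a differential-excitation term. A sketch of that base quantity on a 3x3 patch (the epsilon guard and toy patches are ours; the paper's structure-tensor and multi-scale extensions are not shown):

```python
import math

def differential_excitation(patch, eps=1e-6):
    """Weber-law differential excitation of a 3x3 patch's center pixel:
    arctan of the summed relative intensity differences to its 8 neighbors."""
    c = patch[1][1]
    neighbors = [patch[r][k] for r in range(3) for k in range(3) if (r, k) != (1, 1)]
    return math.atan(sum(n - c for n in neighbors) / (c + eps))

flat = [[50] * 3] * 3                        # uniform patch -> excitation 0
edge = [[0, 0, 0], [0, 10, 0], [0, 0, 0]]    # bright center -> strongly negative
print(differential_excitation(flat))  # 0.0
```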
Citations: 0
SF Net: A Pyramid-Based Feature Fusion Convolutional Neural Network With Embedded Squeeze-and-Excitation Mechanism for Retinal OCT Image Classification
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-17 DOI: 10.1002/ima.70197
Shudi Zheng, Yongxiong Wang
Age-related macular degeneration (AMD) and diabetic macular edema (DME) are among the leading causes of blindness worldwide, and optical coherence tomography (OCT) analysis plays a crucial role in diagnosing and treating ocular diseases. While deep learning has been extensively applied to OCT image classification, existing methods often require large-scale training datasets, which the inherent challenges of medical image acquisition make difficult to obtain. It is therefore desirable to develop models that achieve high performance even with limited training data. Moreover, most current approaches rely solely on features from the final network layer, whereas incorporating intermediate feature maps can further enhance classification accuracy. In this study, a novel end-to-end multi-scale classification framework, SF Net (a squeeze-and-excitation (S) embedded feature-fusion pyramid (F) convolutional neural network), is proposed for the reliable diagnosis of eye conditions, covering normal retinal images and three clinical categories: early and late AMD and DME. The method is evaluated on two datasets: a national dataset collected at Noor Eye Hospital (NEH) and a publicly available dataset from the University of California, San Diego (UCSD). Experimental results demonstrate that the proposed multi-scale method outperforms well-known OCT classification frameworks, and even with a significantly reduced training set its performance still exceeds that of most comparable networks.
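The squeeze-and-excitation mechanism named in the title recalibrates channels: global-average-pool each channel, pass the pooled vector through two tiny dense layers (ReLU then sigmoid), and scale each channel by its gate. A dependency-free sketch (shapes and toy weights are invented for illustration):

```python
import math

def squeeze_excite(feature_maps, w1, w2):
    """SE-style channel recalibration on a list of 2-D feature maps."""
    # squeeze: global average pool per channel
    z = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0])) for fm in feature_maps]
    # excite: hidden = relu(w1 @ z), one sigmoid gate per channel = sigmoid(w2 @ hidden)
    hidden = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    gates = [1 / (1 + math.exp(-sum(w * h for w, h in zip(row, hidden)))) for row in w2]
    # scale: multiply each channel by its gate
    return [[[v * g for v in row] for row in fm] for fm, g in zip(feature_maps, gates)]

fmaps = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]  # 2 channels of 2x2
w1 = [[1.0, 0.0]]      # squeeze(2 channels) -> hidden dim 1
w2 = [[1.0], [-1.0]]   # hidden -> one gate per channel
out = squeeze_excite(fmaps, w1, w2)
print(round(out[0][0][0], 3), round(out[1][0][0], 3))  # 0.731 0.538
```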
Citations: 0
AE-YOLO: Feature Focus Enhancement for Breast Mass Detection
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-17 DOI: 10.1002/ima.70205
Huangchi Liu, Xiaoxiao Chen, Wenqian Zhang, Wei Yao, Shengzhou Xu
Mammography remains the primary imaging modality for early breast-cancer screening. However, small mass size, irregular shape, and complex background tissue often limit the sensitivity and precision of computer-aided detection systems. In this work, we propose AE-YOLO, a novel enhancement of the YOLOv8 framework incorporating two key modules: aggregated dynamic convolution (ADC), which dynamically adapts convolutional weights across the kernel, input-channel, and output-channel dimensions to strengthen feature extraction, and a visual enhancement block (VEB) comprising a lightweight transformer-based unit (TFormer) for global context capture and a feature reconstruction center (FRC) that suppresses redundancy and refines mass features. Experiments on two public mammography datasets (DDSM and MIAS) demonstrate that AE-YOLO achieves a precision of 85.0%, recall of 77.2%, mAP50 of 84.9%, and mAP50:95 of 48.4%, outperforming current state-of-the-art models. Moreover, the ADC and VEB modules are agnostic to network backbone and image source; they can be seamlessly integrated into other mammographic detection pipelines (e.g., INbreast) and consistently improve mass-detection performance across datasets and resolutions.
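Dynamic convolution of the kind ADC extends typically mixes K candidate kernels with input-conditioned softmax attention before a single convolution. A one-dimensional sketch (kernels and attention logits are toy values; the paper's separate kernel/input-channel/output-channel attentions are not reproduced):

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def aggregate_kernels(kernels, logits):
    """Dynamic-convolution-style aggregation: softmax attention over K
    candidate kernels, then an element-wise weighted sum into one kernel."""
    attn = softmax(logits)
    size = len(kernels[0])
    return [sum(a * k[i] for a, k in zip(attn, kernels)) for i in range(size)]

# two flat 3-tap kernels; the logits favor the first 3:1
merged = aggregate_kernels([[1.0, 0.0, -1.0], [0.0, 1.0, 0.0]], [math.log(3), 0.0])
print(merged)  # [0.75, 0.25, -0.75]
```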
Citations: 0
OPFSS: A Few-Shot Medical Image Segmentation Algorithm Based on Optimized Pseudo-Annotations and Self-Attention
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-16 DOI: 10.1002/ima.70202
Weiyi Wei, Jiang Wu, Luheng Chen
Deep learning has demonstrated excellent capabilities in medical image segmentation, but its practical application is limited by the scarcity of labels. In this paper, we propose a semi-supervised training method that achieves few-shot medical segmentation through an innovative, optimized pseudo-annotation strategy. Pseudo-annotations are generated by fusing the Felzenszwalb algorithm with a small convolutional neural network: the Felzenszwalb algorithm performs a preliminary region division based on regional features, and the network refines the annotated area with its stronger feature-extraction ability. This synergy both avoids the tendency of traditional iterative labeling to fall into local optima and compensates for the limited feature representation of pure graph-based methods. In addition, a self-attention mechanism and automatic augmentation techniques are introduced into the prototype network to make full use of the context and texture information in annotated images. Experimental results on two publicly available medical image datasets show that OPFSS achieves Dice scores of 78.77% on CHAOS and 72.19% on Synapse, demonstrating the effectiveness and superiority of the approach.
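The prototype network mentioned above classifies a query by its nearest class prototype, the mean of the support embeddings for that class. A minimal sketch with invented 2-D features and class names:

```python
def prototype(features):
    """Class prototype = mean of its support feature vectors."""
    n = len(features)
    return [sum(f[i] for f in features) / n for i in range(len(features[0]))]

def classify(query, prototypes):
    """Assign the query to the class with the nearest prototype (squared distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda c: sqdist(query, prototypes[c]))

protos = {
    "liver":  prototype([[1.0, 0.0], [0.8, 0.2]]),   # -> [0.9, 0.1]
    "kidney": prototype([[0.0, 1.0], [0.2, 0.8]]),   # -> [0.1, 0.9]
}
print(classify([0.7, 0.1], protos))  # liver
```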
Citations: 0
Pixel by Pixel Semantic Segmentation Approach on WSI Images for Gastric Gland Segmentation and Gastric Cancer Grade Classification Using MLP-XAI Model
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-12 DOI: 10.1002/ima.70201
Mousumi Gupta, Prasanna Dhungel, Madhab Nirola, Bidyut Krishna Goswami, Amlan Gupta
Gastric cancer remains one of the most prevalent cancers, with high mortality, and timely, quantitative diagnosis remains challenging for pathologists. H&E staining provides a color composition that distinguishes the individual components of gastric histopathology images. The human eye can distinguish each component but cannot quantify them, and assessments vary between pathologists. Components such as the lamina propria sometimes contain hyperchromatic nuclei and lymphocytes, which can confuse diagnosis because a system may incorrectly identify them as malignancy. Automating this diagnosis is therefore crucial and can provide a strong support system for gastric cancer diagnosis. This study developed a combined neural network approach based on the DeepLabV3+ and U-Net architectures. A pixel-by-pixel semantic segmentation approach segments gland texture from gastric histopathology whole-slide images (WSIs), with a sliding-window approach used to process the slides. Classification models for several categories of gastric abnormality are implemented with a Multilayer Perceptron (MLP), and the classifier is interpreted with an XAI technique, SHapley Additive exPlanations (SHAP). Using nuclear-cytoplasmic ratio, GLCM, and intensity features, the model categorizes gastric lesions into five classes: benign, mild dysplasia, dysplasia, high-grade dysplasia, and malignant. The segmentation model scored an accuracy of 96.983%, precision of 94.057%, recall of 93.835%, and F1 score of 95.497%, and the classification model achieved an accuracy of 90.36%. The framework is designed to support pathologists in making early decisions on gastric cancer.
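The sliding-window pass over a WSI can be sketched as a tile-coordinate generator (the tile size, stride, and clamping of the final row/column are our choices, not the paper's; assumes the image is at least one tile in each dimension):

```python
def sliding_windows(width, height, tile, stride):
    """Top-left coordinates of tiles covering a WSI plane, clamping the last
    row/column so every tile stays inside the image."""
    xs = list(range(0, width - tile + 1, stride))
    ys = list(range(0, height - tile + 1, stride))
    if xs[-1] != width - tile:     # clamp a final column flush with the right edge
        xs.append(width - tile)
    if ys[-1] != height - tile:    # clamp a final row flush with the bottom edge
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

coords = sliding_windows(1000, 600, tile=512, stride=256)
print(coords)  # 6 tiles, ending at (488, 88)
```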
Citations: 0
Enhancing Breast Cancer Diagnosis With Attention Branch Network and Thermographic Imaging
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-12 DOI: 10.1002/ima.70195
Sruthi Krishna, Shruthy S. Stancilas, Suganthi Salem Srinivasan, Dehannathparambil Kottarathil Vijayakumar
The high mortality rate among breast cancer patients in developing regions is primarily due to the lack of affordable breast screening for the detection of abnormalities. Thermographic breast screening aided by machine learning-based decision support has shown promising results. We present an interpretable computer-assisted diagnostic system that enhances clinical inference by visually identifying regions of interest in thermographic images. A CNN feature extractor with an Attention Branch Network (ABN) is developed for binary classification of thermographic images. We trained and validated the model on a newly created Amrita Breast Thermogram (ABT) dataset of 331 participants. Compared against standard clinical mammogram results, the model achieved an F1 score of 98.88% (precision: 97.78%, recall: 100%, accuracy: 98.15%) after sample weighting. The model was also tested on the publicly available DMR-IR dataset, where the ABN-DCN model demonstrated comparable performance (accuracy: 95%). Incorporating the ABN together with sample weighting improved the baseline DarkNet19 CNN by 6%. The proposed DarkNet19-integrated ABN decision support system offers diagnostic interpretability alongside top-tier performance.
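The reported F1 of 98.88% is consistent with the stated precision and recall, since F1 is their harmonic mean:

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# the operating point reported above: precision 97.78%, recall 100%
print(round(100 * f1(0.9778, 1.0), 2))  # 98.88
```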
Citations: 0
Interpretable Model Based on CT Radiomics and Deep Learning Features for Distinguishing Typical Signs of Secondary Pulmonary Tuberculosis
IF 2.5, CAS Quartile 4, Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-09-09 DOI: 10.1002/ima.70196
Zhenzhen Wan, Peilin Wang, Ning Shi, Bing Wang, Ye li, Shidong Zhang, Dailun Hou, Xiuling Liu
Secondary pulmonary tuberculosis presents diverse and complex signs on CT images, which hinders accurate diagnosis. To address this, we developed an interpretable voting model that integrates radiomics and deep learning features to classify the five most common signs of secondary tuberculosis, improving diagnostic efficiency with high accuracy. The model can assist physicians in early diagnosis, lesion-progression monitoring, and prognostic assessment. In this retrospective study, features of five main CT signs of secondary pulmonary tuberculosis (tree-in-bud pattern, nodule, consolidation, thick-walled cavity, and fibrous lesion) were extracted from the CT sequences of 350 patients using radiomics and ResNet; 350 slices were used for training and 150 slices from different patients for testing. Morphological analysis based on radiomics, SHAP analysis, Grad-CAM visualization, and statistical analysis were employed to enhance the interpretability of the model. The combined models statistically outperformed the individual radiomics and ResNet feature models in identifying the five signs. The AUC values (radiomics, neural network, combined) on the test set were: tree-in-bud pattern (0.795, 0.845, 0.880), nodule (0.830, 0.818, 0.851), consolidation (0.745, 0.799, 0.821), thick-walled cavity (0.789, 0.820, 0.814), and fibrous lesion (0.854, 0.846, 0.934). The interpretability of the model was enhanced through these analysis methods, and it shows potential for improving diagnostic accuracy and supporting early diagnosis, treatment monitoring, and prognostic assessment.
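The abstract does not spell out the voting rule; a generic soft-voting sketch over per-model probability vectors (the model names and probabilities are made up):

```python
def soft_vote(prob_lists):
    """Soft-voting fusion: average the class-probability vectors produced by
    several models, then pick the arg-max class."""
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / len(prob_lists) for i in range(n_classes)]
    return max(range(n_classes), key=lambda i: avg[i]), avg

radiomics_probs = [0.2, 0.5, 0.3]   # hypothetical per-sign probabilities
resnet_probs    = [0.1, 0.3, 0.6]
label, avg = soft_vote([radiomics_probs, resnet_probs])
print(label)  # 2
```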
Citations: 0