{"title":"Explainable Attention-Enhanced Approach for Multimodal Breast Cancer Diagnosis Across Diverse Imaging Modalities","authors":"Uzma Nawaz, Zubair Saeed, Hafiz Muhammad UbaidUllah, Farheen Mirza, Mirza Muzzamil","doi":"10.1002/ima.70209","DOIUrl":"https://doi.org/10.1002/ima.70209","url":null,"abstract":"<p>Early and accurate detection of breast cancer is critical for improving survival rates. This study presents a robust deep learning framework that integrates convolutional and attention-based modules to enhance feature extraction across various imaging modalities. The proposed model is evaluated on four benchmark breast cancer datasets: BreakHis (400×), INbreast, BUSI, and CBIS-DDSM, which capture variations in histopathological, mammographic, and ultrasound images. A stratified fivefold cross-validation strategy was adopted to ensure model generalizability. The proposed approach achieves outstanding classification performance, with accuracies of 98.75% on BreakHis, 99.12% on INbreast, 98.40% on BUSI, and 99.05% on CBIS-DDSM. These results consistently surpass those of traditional CNNs and recent baseline models, such as ResNet50, DenseNet121, EfficientNet-B0, and Vision Transformers, across all datasets. A detailed ablation study confirms the effectiveness of each component in the architecture. A computational cost analysis demonstrates that the proposed model achieves superior accuracy with reduced training epochs and competitive inference times.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 6","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70209","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145224072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low-Dose Computed Tomography Image Denoising Vision Transformer Model Optimization Using Space State Method","authors":"Luella Marcos, Paul Babyn, Javad Alirezaie","doi":"10.1002/ima.70220","DOIUrl":"https://doi.org/10.1002/ima.70220","url":null,"abstract":"<p>Low-dose computed tomography (LDCT) is widely used to promote reduction of patient radiation exposure, but the associated increase in image noise poses challenges for diagnostic accuracy. In this study, we propose a Vision Transformer (ViT)-based denoising framework enhanced with a State Space Optimizing Block (SSOB) to improve both image quality and computational efficiency. The SSOB upgrades the multihead self-attention mechanism by reducing spatial redundancy and optimizing contextual feature fusion, thereby strengthening the transformer's ability to capture long-range dependencies and preserve fine anatomical structures under severe noise. Extensive evaluations on randomized and categorized datasets demonstrate that the proposed model consistently outperforms existing state-of-the-art denoising approaches. It achieved the highest average SSIM (up to 6.10% improvement), PSNR values (36.51 ± 0.37 dB on randomized and 36.30 ± 0.36 dB on categorized datasets), and the lowest RMSE, surpassing recent CNN-transformer-based denoising hybrid models by approximately 12%. Intensity profile analysis further confirmed its effectiveness, showing sharper edge transitions and more accurate gray-level distributions across anatomical boundaries, closely aligning with ground truth and retaining subtle diagnostic features often lost in competing models. In addition to improved reconstruction quality, the SSOB-empowered ViT achieved notable computational gains. It delivered the fastest inference (0.42 s per image), highest throughput (2.38 images/s), lowest GPU memory usage (750 MB), and smallest model size (7.6 MB), alongside one of the shortest training times (6.5 h). Compared to legacy architectures, which required up to 16 h of training and substantially more resources, the proposed model offers both accuracy and deployability. Collectively, these findings establish the SSOB as a key component for efficient transformer-based LDCT denoising, addressing memory and convergence challenges while preserving global contextual advantages.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 6","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70220","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145196274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Problem-Oriented Strategy for Diabetic Retinopathy Identification","authors":"Mahdi Hadef, Said Yacine Boulahia, Abdenour Amamra","doi":"10.1002/ima.70216","DOIUrl":"https://doi.org/10.1002/ima.70216","url":null,"abstract":"<div>\u0000 \u0000 <p>Diabetic retinopathy is a prevalent and sight-threatening complication of diabetes that affects individuals worldwide. Effectively addressing this condition requires adapting approaches to the specific characteristics of retinal images. Existing works often tackle the diagnostic challenge without focusing on a specific aspect. In contrast, our study introduces a new problem-oriented strategy that addresses key gaps in diabetic retinopathy using three novel, tailored approaches. First, to address the underexploitation of high-resolution retinal images, we propose a resolution-preserving, data-based approach that employs patch-based analysis without downscaling while also mitigating data scarcity and imbalance. Second, inspired by real-world clinical practice, we develop a symptoms-based approach that explicitly segments multiple key pathological indicators (blood vessels, exudates, and microaneurysms) and then uses them to guide the classification network. Third, we propose a hierarchical approach that decomposes the multi-stage classification task into multiple hierarchical binary classifications, enabling more specialized feature learning and informed decision-making across different severity levels. Evaluations on both EyePACS and APTOS benchmark datasets showcased superior performance, surpassing or matching contemporary state-of-the-art results. These outcomes demonstrate the effectiveness of our proposed approaches and underscore the strategy's potential to improve diabetic retinopathy diagnosis.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145146524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FANet: Feature Aggregation Network With Dual Encoders for Fundus Retinal Vessel Segmentation","authors":"Linfeng Kong, Yun Wu","doi":"10.1002/ima.70213","DOIUrl":"https://doi.org/10.1002/ima.70213","url":null,"abstract":"<div>\u0000 \u0000 <p>Fundus retinal vessel segmentation is important for assisting in the diagnosis and monitoring of related ophthalmic diseases. Due to the fact that fundus retinal vessels have the characteristics of both local complex topology (e.g., branching structure) and global wide-area distribution, to be able to simultaneously take into account the local detail information and global context information and fully fuse the two kinds of information, this paper proposes a feature aggregation network (FANet) with dual encoders for fundus retinal vessel segmentation. Firstly, we employ the convolutional neural network (CNN) and Transformer to construct dual path encoders for extracting local detail information and global context information, respectively. Among them, to enhance the feature expression ability of the feed-forward network (FFN) in the Transformer block, we design the feature-optimized FFN (F3N). Next, we introduce the dual path feature aggregation (DPFA) module to fully fuse the feature information extracted from the CNN and Transformer paths. Finally, we introduce the multi-scale feature aggregation (MFA) module to obtain rich multi-scale information and adapt to the scale variation of vessels. Experimental results on CHASE-DB1, DRIVE, and STARE datasets demonstrate that FANet outperforms the existing mainstream segmentation methods in the comprehensive performance comparison of multiple evaluation metrics, verifying its effectiveness.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145172028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Long COVID: Ventilation and Perfusion Are Reduced in Symptomatic Individuals Compared to Asymptomatic Patients—A V/Q Scintigraphy Study","authors":"Jose Carlos Nóbrega Júnior, Simone Soares Brandão, James B. Fink, Daiara Xavier, Roberta Torres, Arzu Ari, Caio Morais, Shirley Campos, Daniella Brandão, Armèle Dornelas de Andrade","doi":"10.1002/ima.70212","DOIUrl":"https://doi.org/10.1002/ima.70212","url":null,"abstract":"<p>Pulmonary dysfunction is a common sequel of COVID-19, with long-term effects on lung function even after recovery. Traditional imaging methods like computed tomography (CT) and X-ray often fail to detect subtle functional impairments. In contrast, ventilation and perfusion scintigraphy (V/Q scintigraphy) provide a sensitive assessment of regional ventilatory and perfusion abnormalities, revealing functional changes missed by anatomical imaging. To compare the regional pulmonary distribution of radiopharmaceuticals in symptomatic and asymptomatic post-COVID-19 individuals. A cross-sectional study was conducted with 33 post-COVID-19 individuals, categorized into asymptomatic (<i>n</i> = 10) and symptomatic groups (<i>n</i> = 23), classified according to symptom severity. Ventilation and perfusion scintigraphy were performed using technetium-99m radiopharmaceuticals (99mTc-DTPA for ventilation and 99mTc-MAA for perfusion). Pulmonary radiopharmaceutical activity was quantified using Regions of Interest (ROIs) in central and peripheral zones and upper, middle, and lower regions. Symptomatic patients showed significantly lower radiopharmaceutical activity in all pulmonary regions compared to asymptomatic participants (<i>p</i> < 0.05). Perfusion analysis revealed significantly higher total lung radiopharmaceutical counts in asymptomatic individuals (Median [IQR]: 1046.94 [447.41] Kct) compared to symptomatic individuals (765.66 [269.94] Kct, <i>p</i> = 0.002). No significant differences were found in the left lung regions. The total pulmonary count during the ventilation phase progressively decreases as disease severity increases. The comparison between the extremes (asymptomatic vs. severe) reveals a nearly 50% reduction in pulmonary counts. Significant differences in pulmonary radiopharmaceutical activity were observed between groups, with reduced ventilatory and perfusion functionality in symptomatic patients. These findings underscore the importance of advanced techniques for monitoring and individualized rehabilitation of post-COVID-19 patients.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70212","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145172027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SuperFormer: Unet-Like Super Token Transformer for Medical Image Segmentation","authors":"Wenshuai Zhang, Lei Wang, Pengcheng Dai, Zhiyao Liu, Juan Wang, Qun Liu","doi":"10.1002/ima.70208","DOIUrl":"https://doi.org/10.1002/ima.70208","url":null,"abstract":"<div>\u0000 \u0000 <p>The application of computer-aided diagnosis in the medical field is gradually becoming widespread. Multi-organ segmentation in clinical abdominal CT images and cardiac MRI images poses a challenging task. Accurate segmentation of multiple organs is a crucial prerequisite for disease diagnosis and treatment planning. In this paper, we introduce a multi-organ segmentation method based on CT or MRI images: SuperFormer.SuperFormer is a hierarchical encoder-decoder network with two compelling designs: (1) It introduces the super token transformer block into the U-shaped encoder-decoder structure, making it easier to extract global information while significantly improving computational efficiency. (2) It presents a channel-based multi-scale Transformer context bridge for effectively extracting correlations of global dependencies and local context in multi-scale features generated by our hierarchical Transformer encoder. This guides the efficient connection of fused multi-scale channel information to decoder features, eliminating the semantic gap. In medical image segmentation, SuperFormer demonstrates a powerful ability to capture more discriminative dependencies and context. Experimental results on multi-organ segmentation and cardiac segmentation tasks demonstrate the algorithm's superiority, effectiveness, and robustness. Specifically, experimental results from training SuperFormer from scratch even surpass state-of-the-art methods pretrained on ImageNet, and its core design can be extended to other visual segmentation tasks.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145110902","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lite-PolypNet: A Lightweight and Efficient Network for Polyp Segmentation in Colonoscopy Images","authors":"Haiping Xu, Jie Wang, Zuoyong Li, Shenghua Teng, Xuesong Cheng","doi":"10.1002/ima.70215","DOIUrl":"https://doi.org/10.1002/ima.70215","url":null,"abstract":"<div>\u0000 \u0000 <p>Early detection of colorectal polyps is of great significance in preventing colorectal cancer. However, existing segmentation methods often struggle to balance accuracy and computational efficiency. To address this issue, this paper proposes a lightweight and efficient polyp segmentation network named Lite-PolypNet. Built upon MobileNetV3 as the backbone, the network integrates a progressive feature aggregation module, a global attention augmentation module, and a dual-branch decoder structure to effectively fuse multi-scale features and global contextual information, thereby enhancing boundary reconstruction and small polyp detection capabilities. Extensive experiments conducted on five public datasets demonstrate that Lite-PolypNet achieves high segmentation accuracy (with a maximum Dice score of 94.7%) while significantly reducing model parameters and computational complexity. Compared with representative baseline models, Lite-PolypNet reduces the number of parameters by a factor of more than six and significantly decreases the FLOPs, making it suitable for deployment in resource-constrained environments.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145110903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ant Colony Optimization-Based Deep Ensemble Learning Model for Improved Gastrointestinal Disease Detection","authors":"Sohaib Asif, Lingying Zhu, Zhenqiu Huang, Rongbiao Ying, Jun yao","doi":"10.1002/ima.70214","DOIUrl":"https://doi.org/10.1002/ima.70214","url":null,"abstract":"<div>\u0000 \u0000 <p>Gastrointestinal (GI) disorders represent a significant challenge in healthcare, underscoring the necessity for more precise and effective diagnostic techniques. Conventional approaches, which often rely on single models, have demonstrated shortcomings in both accuracy and efficacy, often failing to detect the intricate and varied patterns linked to these diseases. To overcome these challenges, this study introduces a novel ensemble learning framework tailored for GI detection. The framework utilizes a three-layer architectural approach that integrates Convolutional Neural Networks (CNNs), the Ant Colony Optimization Algorithm (ACO), and Weighted Aggregation Ensemble Techniques (WAET). The methodology unfolds in three key stages: First, multiple CNNs are fine-tuned using transfer learning, while ACO optimizes the hyperparameters of each CNN to enhance model adaptability and performance. Second, the predictions from the top three optimized models are combined using WAET to strengthen the system's robustness in GI detection. Lastly, ACO is employed to optimize the weight assignment for each model during the ensembling process. We use a dataset of 6000 endoscopy images, enhanced by cropping and augmentation techniques to boost diversity and improve classification performance. Additional experiments on CP-Child-A and CP-Child-B show that the proposed ensemble model achieves superior performance, with an accuracy of 99.88% on the primary dataset and 98.75% and 100% on CP-Child-A and B, respectively. It outperforms traditional hybrid methods and state-of-the-art approaches. The effectiveness of the model is further validated through interpretability techniques like Grad-CAM and SHAP, providing insights into the decision-making process. This approach enhances diagnostic accuracy and provides a robust, interpretable solution for automated detection of GI diseases, improving clinical decision-making.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Radiomics-Driven Lung Adenocarcinoma Subtype Classification","authors":"Dang Zhang, Xiaoming Wu, Bo Wang, Xinran Wang, Peilin Sheng, Wei Jin, Lilin Guo, Xiaobo Lai, Jian Xu, Jianqing Wang","doi":"10.1002/ima.70211","DOIUrl":"https://doi.org/10.1002/ima.70211","url":null,"abstract":"<div>\u0000 \u0000 <p>ObjectiveThis study aimed to identify the optimal classification model for lung adenocarcinoma (LUAD) subtypes through radiomics-driven analysis, addressing challenges such as data set imbalance, small sample sizes, and the need for accurate multi-class classification.MethodsRadiomic features were extracted from CT scans and integrated with machine-learning and deep-learning techniques to improve diagnostic accuracy. After preliminary feature selection, the most effective feature subsets were identified by comparing single-stage and multi-stage feature selection methods, such as recursive feature elimination (RFE), random forest (RF), and Lasso. SMOTE techniques were applied to address class imbalance through data augmentation, and loss functions such as cross-entropy were used for model training and evaluation. Finally, classification was performed using RF, KNN, GBDT, SVM, Stacking, Voting, and deep-learning models (ResNet-18, ResNet-50, VGG16, etc.).ResultsThe MStacking model, based on mutual information (MI) and the stacking ensemble algorithm, achieved superior performance with a classification accuracy of 82.00%, precision of 82.00%, F1 score of 83.00%, AUC of 95.00%, sensitivity of 79.00%, and specificity of 94.00%. These results outperformed other methods. Deep-learning models showed limited performance when trained on small sample sizes. However, when integrated with radiomics features, CNN models, particularly ResNet-50, demonstrated significantly improved performance, especially when addressing class imbalance using SMOTE, with ResNet-50's accuracy increasing by 20%. The MStacking model also showed stable performance in multi-class tasks.ConclusionRadiomics-driven deep-learning models demonstrated a significant advantage in LUAD subtype classification, particularly when dealing with small sample sizes. Integrating radiomics features enhanced the performance of deep-learning models, offering a promising approach for LUAD classification.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Multi-Scale Feature Interaction and Fusion Medical Image Segmentation Method","authors":"Yanjin Wang, Hualing Li, Gaizhen Liu, Jiaxin Huo, Jijie Sun, Yonglai Zhang","doi":"10.1002/ima.70207","DOIUrl":"https://doi.org/10.1002/ima.70207","url":null,"abstract":"<div>\u0000 \u0000 <p>To address the challenges of edge loss and low segmentation accuracy in small regions in medical image segmentation, this study proposes a novel segmentation network, MSFIF-Net, which integrates the convolutional neural networks (CNNs) and transformer. Built upon the TransUNet architecture, our approach introduces two novel modules: the multi-group contextual attention (MDGA) module and the multi-scale dilated aggregation (MSDAM) module. The MDGA module enhances feature extraction across different dimensions by facilitating the interaction and fusion of multiple contextual information groups. Meanwhile, the MSDAM module optimizes feature fusion in skip connections by integrating multi-scale dilated convolutions with global feature aggregation. For evaluation, we conduct extensive experiments on four data sets: Left Atrial Appendage and Pulmonary Vein CT(LAA & PV CT), ISIC-2018, Chest X-ray, and COVID-19 CT scans. A series of ablation studies are designed to validate the effectiveness of individual components within the proposed framework. Experimental results demonstrate that MSFIF-Net achieves superior segmentation performance compared to existing models across five quantitative metrics, effectively addressing the challenge of low segmentation accuracy in small regions within medical image analysis.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 5","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145101795","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}