International Journal of Imaging Systems and Technology: Latest Articles

Optimization of MR-Guided Focused Ultrasound System: Comparative Study of Water Bolus and Electrically Optimized Material Using Automated Machine Learning
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-03-10 DOI: 10.1002/ima.70061
Eunwoo Lee, Taewoo Nam, Daniel Hernandez, Eugene Ozhinsky, Kazim Narsinh, Leo Sugrue, Yeji Han, Kisoo Kim, Kyoung-Nam Kim
Abstract: Magnetic resonance-guided focused ultrasound (MRgFUS) is a therapeutic technology designed for the treatment of neurological disorders, enabling precise focal heating under magnetic resonance imaging (MRI) guidance. However, electromagnetic (EM) interaction between the radiofrequency (RF) coil and the MRgFUS system leads to an increased specific absorption rate (SAR), reduced RF transmit magnetic (|B₁⁺|) field homogeneity, and a decreased signal-to-noise ratio (SNR). In this study, we compared a conventional water bolus containing sodium chloride and sterile water with an electrically optimized material (EOM) designed using an automated machine learning (Auto-ML) approach to minimize SAR while maximizing |B₁⁺|-field quality. EM simulation results demonstrated that our EOM achieved significant improvements in |B₁⁺|-field homogeneity and a reduction in peak spatial SAR averaged over 10 g (psSAR₁₀g) compared to the conventional water bolus. These findings suggest that an Auto-ML-based EOM can enhance the safety and efficiency of MRgFUS procedures.
Citations: 0
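As context for the psSAR₁₀g figure above, pointwise SAR follows the standard definition SAR = σ|E|²/ρ. A minimal sketch; the tissue values below are illustrative, not taken from the paper:

```python
def local_sar(sigma, e_rms, rho):
    """Pointwise SAR (W/kg) from tissue conductivity sigma (S/m),
    RMS electric-field magnitude e_rms (V/m), and mass density rho (kg/m^3)."""
    return sigma * e_rms ** 2 / rho

# Illustrative, roughly muscle-like tissue values (not from the paper).
print(local_sar(0.7, 50.0, 1050.0))  # about 1.67 W/kg
```

psSAR₁₀g further averages this quantity over contiguous 10 g tissue volumes, which the sketch does not attempt.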
MARS-Net: Multi-Scale Attention Residual Spatiotemporal Network for Robust Left Ventricular Ejection Fraction Prediction in Echocardiography Videos
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-03-10 DOI: 10.1002/ima.70059
Shun Cheng, Fangqi Guo, Qihui Guo, Haobo Chen, Zhou Xu, Bo Zhang, Jiaqi Zhao, Qi Zhang
Abstract: Left ventricular ejection fraction (LVEF) is a key measure of heart pumping performance, playing a pivotal role in the ongoing management and efficacy assessment of cardiovascular disease treatments. By quantifying the percentage of blood pumped out of the left ventricle with each heartbeat, LVEF provides invaluable insight into the overall efficiency of cardiac function, enabling clinicians to make informed decisions about point of care and therapeutic strategies. However, accurate LVEF measurement faces challenges such as large observer variability, poor image quality, and the complexity of cardiac motion. To address these issues, a residual spatiotemporal network with a multi-scale attention mechanism, named the Multi-Scale Attention Residual Spatiotemporal Network (MARS-Net), is proposed for robust LVEF prediction in transthoracic echocardiographic videos. MARS-Net excels at extracting spatiotemporal features from echocardiographic videos, accurately capturing heart dynamics and morphology while performing robustly across multi-center data. A sub-video division block first partitions echocardiographic videos into smaller sub-videos that capture key cardiac motion, and an input embedding block compresses these sub-videos for efficient processing. A multi-scale attention residual block then enhances spatiotemporal feature extraction by combining multi-scale convolutions with attention mechanisms to sharpen focus on important details. Finally, an output convolutional block transforms the extracted features into the final LVEF prediction. In extensive evaluations, MARS-Net outperformed comparative deep learning models in LVEF prediction, offering exceptional promise for diagnosing heart dysfunction. Notably, it achieved commendable results across three medical centers, underscoring its generalizability and reliability in varied clinical environments.
Citations: 0
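The quantity being predicted has a simple standard definition: LVEF is the stroke volume (end-diastolic minus end-systolic volume) as a percentage of end-diastolic volume. A minimal sketch with hypothetical volumes:

```python
def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) = (EDV - ESV) / EDV * 100, i.e. stroke volume
    over end-diastolic volume."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Hypothetical volumes: EDV 120 mL, ESV 50 mL -> roughly 58.3%.
print(ejection_fraction(120.0, 50.0))
```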
GRASP-Net: Grouped Residual Convolution U-Net With Attention Mechanism and Atrous Spatial Pyramid Pooling for Prostate Zone Segmentation Using MR Images
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-03-09 DOI: 10.1002/ima.70060
R. Deiva Nayagam, D. Selvathi
Abstract: Prostate cancer is a prevalent disease in men, especially the elderly, and magnetic resonance imaging is the leading acquisition method for diagnosis and evaluation of the prostate. Accurate segmentation of the prostate, particularly the transition zone and peripheral zone, is crucial for early detection and effective treatment planning. This work introduces GRASP-Net, a deep learning model designed to improve the accuracy of prostate MRI zonal segmentation. GRASP-Net integrates grouped residual convolutional modules, attention mechanisms, a convolutional block attention module, and atrous spatial pyramid pooling blocks to enhance feature extraction and boundary segmentation. The model was evaluated on the Medical Segmentation Decathlon Task 05 Prostate dataset against other well-known models, achieving a Dice similarity coefficient of 0.928 for the transition zone and 0.864 for the peripheral zone, surpassing previous state-of-the-art results. It also performs strongly on the 95th-percentile Hausdorff distance, average surface distance, and sensitivity, confirming its accuracy in localizing anatomical prostate structures. These advancements highlight the promise of GRASP-Net for advancing prostate cancer diagnosis and treatment, presenting an effective tool for clinical use.
Citations: 0
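The Dice similarity coefficient quoted above measures overlap between a predicted and a ground-truth mask: DSC = 2|A∩B| / (|A|+|B|). A plain-Python sketch on flat binary masks:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flat lists of 0/1 values of equal length."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * inter / total if total else 1.0

a = [1, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1]
print(dice(a, b))  # 2*2 / (3+3) = 0.666...
```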
EMViT-BCC: Enhanced Mobile Vision Transformer for Breast Cancer Classification
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-03-06 DOI: 10.1002/ima.70053
Jacinta Potsangbam, Salam Shuleenda Devi
Abstract: Breast cancer (BC) accounts for a large share of cancer-related deaths worldwide, so proper diagnosis and timely detection are crucial. This study introduces a deep learning approach called EMViT-BCC for classifying BC histopathology images into two and eight classes. The proposed model uses the Mobile Vision Transformer (MobileViT) block, which captures local and global features and extracts the features needed for classification. The approach is trained and evaluated on the standard BreaKHis dataset, using both the original raw histopathology images and stain-normalized images. Extensive experiments demonstrate that EMViT-BCC achieves high accuracy and robustness in classifying benign and malignant images and in identifying BC subtypes. The results show that incorporating further layers greatly enhances MobileViT's classification performance, reaching 99.43% accuracy for two-class and 93.61% for eight-class classification. They also suggest that while stain normalization can standardize variations, the original image data retain crucial details that enhance model performance. The proposed methodology surpasses state-of-the-art (SOTA) methods for BC histopathology image classification, offering a promising solution for reliable binary and multi-class BC classification.
Citations: 0
Three-Dimensional Network With Squeeze and Excitation for Accurate Multi-Region Brain Tumor Segmentation
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-03-05 DOI: 10.1002/ima.70057
Anila Kunjumon, Chinnu Jacob, R. Resmi
Abstract: Brain tumors involve abnormal cell growth within or adjacent to brain tissue, necessitating precise segmentation for effective clinical decision-making. Traditional models often struggle to delineate tumor regions accurately, and building robust segmentation models for high-resolution MRI data requires substantial computational power. This study presents SE-3D Brain Net, a three-dimensional U-Net architecture with Squeeze-and-Excitation (SE) modules, to enhance multi-region brain tumor segmentation. The SE modules recalibrate channel-wise feature significance, improving segmentation accuracy across tumor subregions. Extensive experiments on the BraTS 2018 and BraTS 2020 datasets demonstrate that the model outperforms traditional U-Net models and various advanced methods, achieving average Dice scores of 0.86 for enhancing tumor, 0.84 for tumor core, and 0.86 for whole tumor segmentation. An ablation study further revealed the model's sensitivity to hyperparameters, identifying optimal settings for batch size, learning rate, and dropout rate. The study demonstrates the effectiveness of deep learning in accurately identifying brain tumors and its potential to significantly improve medical image analysis and patient outcomes.
Citations: 0
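The channel-wise recalibration an SE module performs can be sketched as squeeze (global average pool per channel), excite (a two-layer bottleneck with ReLU then sigmoid), and scale. A minimal 2D, single-sample illustration with hand-set toy weights; the paper's model is 3D and learns these weights:

```python
import math

def se_recalibrate(feature_maps, w_reduce, w_expand):
    """Squeeze-and-Excitation sketch on a list of 2D channel maps."""
    # squeeze: global average pool each channel map to one scalar
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feature_maps]
    # excite: bottleneck FC -> ReLU -> FC -> sigmoid, giving one scale per channel
    hidden = [max(0.0, sum(z[i] * w_reduce[i][j] for i in range(len(z))))
              for j in range(len(w_reduce[0]))]
    logits = [sum(hidden[j] * w_expand[j][c] for j in range(len(hidden)))
              for c in range(len(w_expand[0]))]
    scales = [1.0 / (1.0 + math.exp(-x)) for x in logits]
    # scale: reweight each channel map by its learned importance
    return [[[v * s for v in row] for row in ch] for ch, s in zip(feature_maps, scales)]

# Toy example: two 2x2 channels, bottleneck width 1, zero expand weights,
# so every channel is scaled by sigmoid(0) = 0.5.
fm = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
out = se_recalibrate(fm, w_reduce=[[1.0], [1.0]], w_expand=[[0.0, 0.0]])
print(out[0][0][0], out[1][0][0])
```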
KIDBA-Net: A Multi-Feature Fusion Brain Tumor Segmentation Network Utilizing Kernel Inception Depthwise Convolution and Bi-Cross Attention
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-03-05 DOI: 10.1002/ima.70055
Jie Min, Tongyuan Huang, Boxiong Huang, Chuanxin Hu, Zhixing Zhang
Abstract: Automatic brain tumor segmentation plays a crucial role in tumor diagnosis, particularly in the precise delineation of tumor subregions: it can help doctors accurately assess the type and location of brain tumors, potentially saving patients' lives. However, the highly variable size and shape of brain tumors, along with their similarity to healthy tissue, make multi-label segmentation of tumor subregions challenging. This paper proposes KIDBA-Net, an encoder-decoder network aimed at reducing pixel-level classification errors across multi-label tumor subregions. The proposed Kernel Inception Depthwise Block (KIDB) employs multi-kernel depthwise convolution to extract multi-scale features in parallel, accurately capturing feature differences between tumor types to mitigate misclassification. To keep the network focused on lesion areas and exclude interference from irrelevant tissue, Bi-Cross Attention is adopted as a skip-connection hub to bridge the semantic gap between layers. Additionally, a Dynamic Feature Reconstruction Block (DFRB) exploits the complementary advantages of convolution and dynamic upsampling operators, helping the model generate high-resolution prediction maps during decoding. The proposed model surpasses other state-of-the-art brain tumor segmentation methods on the BraTS2018 and BraTS2019 datasets, particularly in the segmentation accuracy of the smaller, highly overlapping tumor core (TC) and enhancing tumor (ET), achieving DSC scores of 87.8% and 82.0% on BraTS2018 and 90.2% and 88.7% on BraTS2019 for TC and ET, with Hausdorff distances of 2.8 and 2.7 mm, and 2.7 and 2.0 mm, respectively.
Citations: 0
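The multi-kernel depthwise idea behind the KIDB can be sketched in 1D: each channel is filtered independently by depthwise kernels of several sizes in parallel, and the branch outputs are fused. The kernel values and summation fusion below are illustrative assumptions; the actual block operates on 2D feature maps:

```python
def depthwise_conv1d(channel, kernel):
    """'Same'-padded 1D convolution of a single channel with one kernel."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + channel + [0.0] * pad
    return [sum(kernel[i] * padded[t + i] for i in range(k))
            for t in range(len(channel))]

def kernel_inception(channels, kernels):
    """Each channel is filtered by every kernel size in parallel (depthwise),
    and the branch outputs are summed per position as a simple fusion."""
    out = []
    for ch in channels:
        branches = [depthwise_conv1d(ch, k) for k in kernels]
        out.append([sum(vals) for vals in zip(*branches)])
    return out

# Two parallel scales: a 1-tap identity kernel and a 3-tap averaging-style kernel.
res = kernel_inception([[1.0, 2.0, 3.0, 4.0]], [[1.0], [0.5, 0.5, 0.5]])
print(res)  # [[2.5, 5.0, 7.5, 7.5]]
```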
Lightweight Local–Global Fusion for Robust Multiclass Classification of Skin Lesions
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-02-24 DOI: 10.1002/ima.70045
Guangli Li, Xinjiong Zhou, Yiyuan Ye, Jingqin Lv, Donghong Ji, Jianguo Wu, Ruiyang Zhang, Hongbin Zhang
Abstract: Skin lesion classification is crucial for early diagnosis of skin cancer, but the task faces challenges such as limited labeled data, data imbalance, and high intra-class variability. In this paper, we propose a lightweight local–global fusion (LGF) model that combines the strengths of RegNet for local processing and a Transformer for global interaction. The LGF model consists of four stages that integrate local and global pathological information using channel attention and residual connections, and PolyLoss is employed to address the data imbalance. Extensive experiments on the ISIC2018 and ISIC2019 datasets demonstrate that LGF achieves state-of-the-art performance with 93.10% and 90.36% accuracy, respectively, without any data augmentation. The model is relatively lightweight and easy to reproduce, offering a satisfactory trade-off between model complexity and classification performance. The code will be available at https://github.com/candiceyyy/LGF.
Citations: 0
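The abstract does not say which PolyLoss variant is used; the common Poly-1 form adds a term eps * (1 - p_t) to cross-entropy, where p_t is the predicted probability of the true class. A minimal sketch under that assumption:

```python
import math

def poly1_ce(probs, target, eps=1.0):
    """Poly-1 loss sketch: cross-entropy plus eps * (1 - p_t).

    probs is a normalized probability vector; target is the true class index.
    The eps value here is a placeholder, not a tuned setting from the paper.
    """
    pt = probs[target]
    return -math.log(pt) + eps * (1.0 - pt)

print(poly1_ce([0.7, 0.2, 0.1], 0))  # -ln(0.7) + 0.3
```

With eps = 0 the expression reduces to plain cross-entropy, which is how the extra polynomial term can be reasoned about as a perturbation.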
Diffusion Model-Based MRI Super-Resolution Synthesis
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-02-24 DOI: 10.1002/ima.70021
Ji Ma, Guojun Jian, Jinjin Chen
Abstract: Although super-resolution technology has advanced in its application to MRI in recent years, current methods still fall short of practical needs. For MRI images acquired under specific pathological or physiological conditions, existing super-resolution techniques remain ineffective at suppressing noise and restoring detail, and when processing images with complex structures, such as white matter fiber bundles in the brain, they often fail to restore fine detail accurately, resulting in structural distortion. To address these deficiencies, we propose an advanced super-resolution (SR) reconstruction framework tailored specifically to magnetic resonance imaging (MRI). Our approach combines the Denoising Diffusion Probabilistic Model (DDPM) with CrossAttention, maintaining data accuracy while making the most of the available conditioning to achieve high-quality image restoration. By incorporating sophisticated priors and an innovative network architecture, our method significantly outperforms traditional SR techniques, particularly in preserving fine anatomical detail and enhancing overall image quality. The framework is validated through extensive experiments on diverse MRI datasets, demonstrating its robustness and effectiveness in various scenarios, and a comprehensive analysis of performance metrics, including the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), normalized mean squared error (NMSE), and universal quality index (UQI), underscores the superiority of the DDPM-based approach. This work advances the state of the art in MRI SR and paves the way for broader applications in medical imaging and related fields.
Citations: 0
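The DDPM underlying this framework has a closed-form forward (noising) process: x_t = sqrt(ᾱ_t)·x₀ + sqrt(1−ᾱ_t)·ε with ε ~ N(0, I). A minimal sketch of that step only; the paper's CrossAttention conditioning and the learned reverse sampler are not shown:

```python
import math
import random

def ddpm_forward(x0, alpha_bar_t, rng):
    """Sample x_t directly from x_0 via the closed-form DDPM forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, 1)."""
    return [math.sqrt(alpha_bar_t) * v + math.sqrt(1.0 - alpha_bar_t) * rng.gauss(0.0, 1.0)
            for v in x0]

x0 = [0.2, -0.5, 0.9]  # toy "image" as a flat signal
print(ddpm_forward(x0, 1.0, random.Random(0)))  # alpha_bar = 1: no noise, x_t == x_0
print(ddpm_forward(x0, 0.5, random.Random(0)))  # halfway: equal mix of signal and noise
```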
Novel Model of Medical CT Image Segmentation Based on GANs With Residual Neural Networks
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-02-24 DOI: 10.1002/ima.70049
Amir Bouden, Ahmed Ghazi Blaiech, Asma Ben Abdallah, Mourad Said, Mohamed Hédi Bedoui
Abstract: A Generative Adversarial Network (GAN) is a machine learning model used to generate new examples that resemble real data. In segmentation, it can push generated segmentation maps closer to the ground truth; in super-resolution, it can increase image resolution while preserving original detail. In this paper, we leverage these advantages in a new methodology built on a pipeline of two novel GANs for accurate segmentation: a first GAN segments the images, and a second applies super-resolution as post-processing to improve the quality of the segmented images. Both architectures integrate nested residual connections (NRCs) to improve feature extraction and flow. The architectures are validated on CT lung datasets for detecting COVID-19-infected regions. Experimental results show that the proposed models with NRCs outperform state-of-the-art solutions on multiple metrics: the first GAN achieves a Dice score of 0.77 for segmentation of COVID-19 images, and after super-resolution with the second GAN, the PSNR and MS-SSIM metrics increase from 19.69 and 0.8756 to 33.24 and 0.9682, respectively.
Citations: 0
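The PSNR gain reported above follows the standard definition PSNR = 10·log10(MAX²/MSE). A plain-Python sketch on flat pixel lists:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test
    signal, given as flat lists of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

# Constant error of 16 gray levels -> MSE 256 -> about 24.05 dB.
print(psnr([0.0, 0.0, 0.0, 0.0], [16.0, 16.0, 16.0, 16.0]))
```

Higher is better, so the jump from 19.69 to 33.24 dB corresponds to a large drop in mean squared error.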
Leveraging Local and Global Features for Enhanced Segmentation of Brain Metastatic Tumors in Magnetic Resonance Imaging
IF 3.0 · CAS Zone 4 (Computer Science)
International Journal of Imaging Systems and Technology Pub Date: 2025-02-21 DOI: 10.1002/ima.70042
Mojtaba Mansouri Nejad, Habib Rostami, Ahmad Keshavarz, Hojat Ghimatgar, Mohamad Saleh Rayani, Leila Gonbadi
Abstract: Metastatic brain tumors present significant challenges in diagnosis and treatment, contributing to high mortality rates worldwide. Magnetic resonance imaging (MRI) is a pivotal diagnostic tool for identifying and assessing these tumors, and accurate segmentation of MRI images remains critical for effective treatment planning and prognosis. Traditional segmentation methods, including threshold-based algorithms, often struggle to delineate tumor boundaries precisely, especially in three-dimensional (3D) images. This article introduces a 3D segmentation framework that combines Swin Transformer and 3D U-Net architectures, leveraging their complementary strengths to improve segmentation accuracy and generalizability for metastatic brain tumors. We train multiple 3D U-Net and Swin U-Net models, select the best-performing architectures for segmenting tumor voxels, and combine their outputs using various strategies, such as logical operations and stacking the outputs with the original images, to guide the training of a third model. This ensemble approach integrates the outputs into a unified prediction model to improve reliability. Experiments on a newly released brain metastasis dataset (to the best of our knowledge, the first evaluation of such models on it) yielded an accuracy of 73.47%, validating the effectiveness of the proposed architectures.
Citations: 0
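One of the fusion strategies mentioned, combining the two networks' outputs with logical operations, can be sketched on flat binary masks; the mask values below are toy data, not from the dataset:

```python
def combine_masks(mask_a, mask_b, mode="or"):
    """Fuse two binary segmentation masks (flat lists of 0/1) with a
    logical operation, one simple way to merge the outputs of two models
    before they guide a third."""
    op = {"or": lambda a, b: a | b, "and": lambda a, b: a & b}[mode]
    return [op(a, b) for a, b in zip(mask_a, mask_b)]

unet_out = [1, 1, 0, 0]
swin_out = [1, 0, 1, 0]
print(combine_masks(unet_out, swin_out, "or"))   # union: [1, 1, 1, 0]
print(combine_masks(unet_out, swin_out, "and"))  # intersection: [1, 0, 0, 0]
```

Union trades precision for recall and intersection does the opposite, which is one reason to let a downstream model learn from both.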