International Journal of Imaging Systems and Technology — Latest Articles

Application of Non-Immersive Virtual Reality in Cerebral Palsy Children: A Systematic Review
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-27 · DOI: 10.1002/ima.23162
Agrawal Luckykumar Dwarkadas, Viswanath Talasila, Sheeraz Kirmani, Rama Krishna Challa, K. G. Srinivasa
Abstract: Cerebral palsy (CP) is a movement disorder caused by brain damage. Virtual reality (VR) can improve motor function and activities of daily living in CP patients. This systematic review examines the use of non-immersive VR in treating children with CP. The objective is to evaluate the effectiveness of non-immersive VR in rehabilitating children with CP, either as a standalone intervention or in combination with traditional therapy. The review follows the PRISMA guidelines and includes a comprehensive search of five bibliographic databases. Two reviewers independently assessed the search results, evaluated full-text publications, and extracted relevant data. Outcomes were described using the International Classification of Functioning, Disability, and Health for Children and Youth (ICF-CY) framework. A total of 20 English-language studies published between January 2013 and January 2023 were included based on predefined inclusion and exclusion criteria. The findings demonstrate that non-immersive VR, when used in conjunction with traditional therapy, yields positive effects on body structure and function (hand function, grip strength, and upper-extremity function), activity (motor function, activities of daily living [ADL], and balance), and participation (caretakers' assessment, usability, motivation, and user satisfaction) in children with CP. Moreover, non-immersive VR alone was found to be more efficient than traditional therapy in improving manual dexterity. Non-immersive VR can thus be effective in rehabilitating children with CP, and the review concludes by recommending future research with larger sample sizes and randomized trials to further investigate its potential benefits in this population.
Citations: 0
Smartphone App to Detect Pathological Myopia Using Spatial Attention and Squeeze-Excitation Network as a Classifier and Segmentation Encoder
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-27 · DOI: 10.1002/ima.23157
Sarvat Ali, Shital Raut
Abstract: Pathological myopia (PM) is a worldwide visual health concern that can cause irreversible vision impairment. It affects up to 200 million people, imposing social and economic burdens. Initial screening of PM using computer-aided diagnosis (CAD) can save the time and cost of intricate treatments later on. Current research either relies on complex models that are too resource-intensive or lacks explanations behind the categorizations. To underline the significance of artificial intelligence for ophthalmic use and to address the limitations of current studies, we designed a mobile-compatible application for smartphone users to detect PM. For this purpose, we developed a lightweight model based on an enhanced MobileNetV3 architecture integrated with spatial attention (SA) and squeeze-excitation (SE) modules to effectively capture lesion location and channel features. To demonstrate its robustness, the model was tested against three heterogeneous datasets, namely PALM, RFMiD, and ODIR, reporting area under the curve (AUC) scores of 0.9983, 0.95, and 0.94, respectively. To support PM categorization and demonstrate its correlation with the associated lesions, we segmented different forms of PM lesion atrophy, obtaining an intersection over union (IoU) score of 0.96 and an F-score of 0.97 using the same SA+SE-augmented MobileNetV3 as the encoder. This lesion segmentation can aid ophthalmologists in further analysis and treatment. The optimized and explainable model version is calibrated into the smartphone application, which classifies a fundus image as PM or normal vision. The app is suitable for ophthalmologists seeking a second opinion, or for rural general practitioners referring PM cases to specialists.
Citations: 0
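The SA and SE modules named in the abstract above are standard attention blocks rather than this paper's invention. As a rough illustration only, here is a minimal NumPy sketch of squeeze-excitation channel reweighting; the layer sizes, weights, and reduction ratio are hypothetical, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excitation(x, w1, w2):
    """Channel-wise squeeze-and-excitation on a (C, H, W) feature map.

    Squeeze: global average pool per channel -> (C,).
    Excitation: reduce-then-restore fully connected layers with a ReLU in
    between, a sigmoid gate in (0, 1), then rescale each channel of x.
    """
    squeezed = x.mean(axis=(1, 2))                        # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # (C,)
    return x * gate[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                  # hypothetical sizes; r = reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))    # reduction FC weights
w2 = rng.standard_normal((C, C // r))    # restoration FC weights
y = squeeze_excitation(x, w1, w2)
```

Because the gate is a per-channel sigmoid, the output never grows in magnitude relative to the input; the block only attenuates less-informative channels.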
VTCNet: A Feature Fusion DL Model Based on CNN and ViT for the Classification of Cervical Cells
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-27 · DOI: 10.1002/ima.23161
Mingzhe Li, Ningfeng Que, Juanhua Zhang, Pingfang Du, Yin Dai
Abstract: Cervical cancer is a common malignancy worldwide, with high incidence and mortality rates in underdeveloped countries. The Pap smear test, widely used for early detection of cervical cancer, aims to minimize missed diagnoses, which sometimes results in higher false-positive rates. To enhance manual screening practices, computer-aided diagnosis (CAD) systems based on machine learning (ML) and deep learning (DL) for classifying cervical Pap cells have been extensively researched. In our study, we introduce a DL-based method named VTCNet for cervical cell classification. Our approach combines CNN-SPPF and ViT components, integrating modules such as Focus and SeparableC3, to capture more potential information, extract local and global features, and merge them to enhance classification performance. We evaluated our method on the public SIPaKMeD dataset, achieving accuracy, precision, recall, and F1 scores of 97.16%, 97.22%, 97.19%, and 97.18%, respectively. We also conducted additional experiments on the Herlev dataset, where our results outperformed previous methods. Through this integration, VTCNet achieves higher classification accuracy than traditional ML or shallow DL models. Related code: https://github.com/Camellia-0892/VTCNet/tree/main.
Citations: 0
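The abstract above describes merging local CNN features with global ViT features before classification. As a minimal sketch of such late fusion — with hypothetical feature sizes, and far simpler than the paper's actual fusion modules — one can concatenate the two feature vectors and apply a linear softmax classifier:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse_and_classify(cnn_feat, vit_feat, w, b):
    """Late-fusion sketch: concatenate a CNN feature vector (local texture
    cues) with a ViT feature vector (global context), then apply a linear
    classifier over the fused representation."""
    fused = np.concatenate([cnn_feat, vit_feat])   # (d_cnn + d_vit,)
    return softmax(w @ fused + b)                  # class probabilities

rng = np.random.default_rng(1)
d_cnn, d_vit, n_classes = 64, 32, 5    # hypothetical dims; SIPaKMeD has 5 classes
probs = fuse_and_classify(rng.standard_normal(d_cnn),
                          rng.standard_normal(d_vit),
                          rng.standard_normal((n_classes, d_cnn + d_vit)),
                          rng.standard_normal(n_classes))
```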
Multimodal Neuroimaging Fusion for Alzheimer's Disease: An Image Colorization Approach With Mobile Vision Transformer
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-26 · DOI: 10.1002/ima.23158
Modupe Odusami, Robertas Damasevicius, Egle Milieskaite-Belousoviene, Rytis Maskeliunas
Abstract: Multimodal neuroimaging, combining data from different sources, has shown promise in classifying the stage of Alzheimer's disease (AD). Existing multimodal neuroimaging fusion methods exhibit certain limitations, which require advancements to improve their objective performance, sensitivity, and specificity for AD classification. This study uses a Pareto-optimal cosine color map to enhance classification performance and the visual clarity of fused images. A mobile vision transformer (ViT) model incorporating the swish activation function is introduced for effective feature extraction and classification. Fused images from the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Whole Brain Atlas (AANLIB), and Open Access Series of Imaging Studies (OASIS) datasets, obtained through optimized transposed convolution, are used for model training, while evaluation uses unfused images from the same databases. The proposed model demonstrates high accuracy in AD classification across datasets, achieving 98.76% accuracy for Early Mild Cognitive Impairment (EMCI) versus LMCI, 98.65% for Late Mild Cognitive Impairment (LMCI) versus AD, 98.60% for EMCI versus AD, and 99.25% for AD versus Cognitively Normal (CN) on the ADNI dataset. Similarly, on OASIS and AANLIB, the precision of the AD versus CN classification is 99.50% and 96.00%, respectively. Evaluation metrics showcase the model's precision, recall, and F1 score for the various binary classifications, emphasizing its robust performance.
Citations: 0
A Novel Classification Approach for Retinal Disease Using Improved Gannet Optimization-Based Capsule DenseNet
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-22 · DOI: 10.1002/ima.23156
S. Venkatesan, M. Kempanna, J. Nagaraja, A. Bhuvanesh
Abstract: Diabetic retinopathy is an eye condition that damages the human retina, brought on by persistently elevated blood glucose levels, and can result in loss of vision. Early diagnosis can prevent further damage. Appropriate DR screening is a cost-effective way to direct patients to medical treatment. In this work, a deep learning framework is introduced for the accurate classification of retinal diseases. The proposed method processes retinal fundus images obtained from public databases, addressing noise and artifacts through an improved median filter (ImMF). It leverages the UNet++ model for precise segmentation of the disease-affected regions; UNet++ enhances feature extraction through cross-stage connections, improving segmentation results. The segmented images are then fed to the improved gannet optimization-based capsule DenseNet (IG-CDNet) for retinal disease classification. The hybrid capsule DenseNet (CDNet) classifies the disease and is optimized with the improved gannet optimization algorithm to boost classification accuracy. Finally, the accuracy and Dice score achieved are 0.9917 and 0.9652 on the APTOS-2019 dataset.
Citations: 0
Improving COVID-19 Detection Through Cooperative Deep-Learning Pipeline for Lung Semantic Segmentation in Medical Imaging
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-14 · DOI: 10.1002/ima.23129
Youssef Mourdi, Hanane Allioui, Mohamed Sadgal
Abstract: COVID-19 has afflicted millions of individuals worldwide, with a reported mortality toll of over 16 000 over a span of two years. The dearth of resources and diagnostic techniques has affected emerging and wealthy nations alike. In response, researchers from engineering and medicine are using deep learning methods to create automated algorithms for detecting COVID-19. This work develops a cooperative deep-learning model for identifying COVID-19 from CT scan images and compares it against previous deep learning-based methods. The model underwent an ablation study on publicly accessible COVID-19 CT imaging datasets, with encouraging outcomes. The proposed model may aid doctors and researchers in devising tools that expedite the choice of the optimal therapeutic approach, reducing the risk of potential complications.
Citations: 0
Breaking Barriers in Cancer Diagnosis: Super-Light Compact Convolution Transformer for Colon and Lung Cancer Detection
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-12 · DOI: 10.1002/ima.23154
Ritesh Maurya, Nageshwar Nath Pandey, Mohan Karnati, Geet Sahu
Abstract: According to the World Health Organization, lung and colon cancers are known for their high mortality rates, which necessitates diagnosing these cancers at an early stage. However, the limited availability of data, such as the histopathology images used to diagnose these cancers, poses a significant challenge when developing computer-aided detection systems. Given this data scarcity, it is necessary to keep the number of parameters in the artificial intelligence (AI) model in check. In this work, a customized compact and efficient convolution transformer architecture, termed C3-Transformer, is proposed for the diagnosis of colon and lung cancers from histopathological images. The proposed C3-Transformer relies on convolutional tokenization and sequence pooling to limit the number of parameters and to combine the advantages of convolutional neural networks with those of transformer models. The novelty of the proposed method lies in the efficient classification of colon and lung cancers using this architecture. The performance of the proposed method was evaluated on the 'LC25000' dataset. Experimental results show that the proposed method achieves average classification accuracy, precision, and recall of 99.30%, 0.9941, and 0.9950 in classifying the five different classes of colon and lung cancer, with only 0.0316 million parameters. Thus, the present computer-aided detection system built on the C3-Transformer can efficiently detect colon and lung cancers from histopathology images with high detection accuracy.
Citations: 0
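Convolutional tokenization and sequence pooling, as named in the abstract above, come from the compact-convolutional-transformer family of models. As an illustration of the second idea, here is a minimal NumPy sketch of sequence pooling — a learned attention-weighted average over tokens that replaces a [CLS] token — with hypothetical dimensions and weights:

```python
import numpy as np

def sequence_pool(tokens, w):
    """Sequence pooling: score each token with a learned linear map,
    softmax over the token axis, and return the attention-weighted sum
    of the tokens (plus the weights, for inspection)."""
    scores = tokens @ w                  # (n_tokens,)
    e = np.exp(scores - scores.max())
    attn = e / e.sum()                   # non-negative, sums to 1
    return attn @ tokens, attn           # (d,), (n_tokens,)

rng = np.random.default_rng(2)
n_tokens, d = 16, 8                      # hypothetical token count and width
tokens = rng.standard_normal((n_tokens, d))
pooled, attn = sequence_pool(tokens, rng.standard_normal(d))
```

Since the attention weights form a convex combination, the pooled vector always lies within the per-dimension range of the input tokens, unlike a raw linear projection.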
Multimodal Fusion for Enhanced Semantic Segmentation in Brain Tumor Imaging: Integrating Deep Learning and Guided Filtering Via Advanced 3D Semantic Segmentation Architectures
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-12 · DOI: 10.1002/ima.23152
Abbadullah .H Saleh, Ümit Atila, Oğuzhan Menemencioğlu
Abstract: Brain tumor segmentation is paramount in medical diagnostics. This study presents a multistage segmentation model consisting of two main steps: first, fusing magnetic resonance imaging (MRI) modalities to create new, more effective tumor imaging modalities; second, semantically segmenting the original and fused modalities using various modified U-Net architectures. In the first step, a residual network with multi-scale backbone architecture (Res2Net) and a guided filter are employed for pixel-by-pixel image fusion without requiring any training or learning process. This method captures both detail and base elements from the multimodal images to produce more informative fused images that significantly enhance the segmentation process. Many fusion scenarios were performed and analyzed, revealing that the best fusion results are attained when combining T2-weighted (T2) with fluid-attenuated inversion recovery (FLAIR), and T1-weighted contrast-enhanced (T1CE) with FLAIR. In the second step, several models, including the U-Net and many modifications of it (adding attention layers, residual connections, and depthwise separable convolutions), are trained on both the original and fused modalities. Further, a "Model Selection-based" fusion of these individual models is considered for additional enhancement. In preprocessing, the images are cropped to decrease the pixel count and minimize background interference. Experiments on the brain tumor segmentation (BraTS) 2020 dataset verify the efficiency and accuracy of the proposed methodology: the "Model Selection-based" fusion model achieved an average Dice score of 88.4%, an individual score of 91.1% for the whole tumor (WT) class, an average sensitivity of 86.26%, and a specificity of 91.7%. These results demonstrate the robustness and high performance of the proposed methodology compared with other state-of-the-art methods.
Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23152
Citations: 0
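The guided filter used in the fusion step above is a classic edge-preserving filter built on a locally linear model q = a·I + b, with a and b estimated per window from the guide image I and source p. A minimal NumPy sketch follows — box means are computed via integral images, and the window radius and regularizer eps are illustrative defaults, not the paper's settings:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image, edge-padded."""
    k = 2 * r + 1
    padded = np.pad(img, r, mode="edge")
    c = padded.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))      # leading zero row/col for window sums
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(guide, src, r=2, eps=1e-4):
    """Guided filter: per-window linear coefficients a, b, then q = a*I + b."""
    mean_i = box_filter(guide, r)
    mean_p = box_filter(src, r)
    cov_ip = box_filter(guide * src, r) - mean_i * mean_p
    var_i = box_filter(guide * guide, r) - mean_i * mean_i
    a = cov_ip / (var_i + eps)           # slope: ~1 on strong structure, ~0 on flat areas
    b = mean_p - a * mean_i
    return box_filter(a, r) * guide + box_filter(b, r)

# Self-guidance on a step edge: output stays close to the input (edge-preserving).
base = np.ones((6, 6)); base[:, 3:] = 4.0
smoothed = guided_filter(base, base, r=1, eps=1e-3)
```

A useful sanity check: with the guide equal to the source and a small eps, the filter behaves as a near-identity on structured regions, which is exactly why it preserves the detail layer during fusion.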
Correction to "Multi-Branch Sustainable Convolutional Neural Network for Disease Classification"
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-06 · DOI: 10.1002/ima.23153
M. Naz, M. A. Shah, H. A. Khattak, et al., "Multi-Branch Sustainable Convolutional Neural Network for Disease Classification," International Journal of Imaging Systems and Technology 33, no. 5 (2023): 1621–1633, https://doi.org/10.1002/ima.22884.
The affiliation of Hafiz Tayyab Rauf should be: Independent Researcher, UK. The correct author list and affiliations appear below.
Maria Naz (1) | Munam Ali Shah (1) | Hasan Ali Khattak (2) | Abdul Wahid (2,3) | Muhammad Nabeel Asghar (4) | Hafiz Tayyab Rauf (5) | Muhammad Attique Khan (6) | Zoobia Ameer (7)
1. Department of Computer Science, COMSATS University Islamabad, Islamabad, Pakistan
2. School of Electrical Engineering & Computer Science (SEECS), National University of Sciences and Technology (NUST), Islamabad, Pakistan
3. School of Computer Science, University of Birmingham, Dubai, United Arab Emirates
4. Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
5. Independent Researcher, UK
6. HITEC University Taxila, Taxila, Pakistan
7. Shaheed Benazir Bhutto Women University Peshawar, Peshawar, Pakistan
We apologize for this error.
Open Access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.23153
Citations: 0
ADT-UNet: An Innovative Algorithm for Glioma Segmentation in MR Images
IF 3.0 | CAS Q4 | Computer Science
International Journal of Imaging Systems and Technology · Pub Date: 2024-08-01 · DOI: 10.1002/ima.23150
Liu Zhipeng, Wu Jiawei, Jing Ye, Xuefeng Bian, Wu Qiwei, Rui Li, Yinxing Zhu
Abstract: The precise delineation of glioma tumors is of paramount importance for surgical and radiotherapy planning. Presently, the primary drawbacks of the manual segmentation approach are its laboriousness and inefficiency. To tackle these challenges, a deep learning-based automatic segmentation technique is introduced to improve the efficiency of the segmentation process. We propose ADT-UNet, an innovative algorithm for segmenting glioma tumors in MR images. ADT-UNet takes attention-dense blocks and the Transformer as its foundational elements: it extends the U-Net framework with a dense connection structure and attention mechanisms, introduces a Transformer structure at the end of the encoder, and integrates a novel attention-guided multi-scale feature fusion module into the decoder. To enhance network stability during training, a loss function combining Dice loss and binary cross-entropy loss guides the optimization process. On the test set, the Dice similarity coefficient (DSC) was 0.933, the IoU 0.878, the positive predictive value (PPV) 0.942, and the sensitivity (Sen) 0.938. Ablation experiments demonstrated that each of the three proposed modules enhanced segmentation accuracy, with the most favorable outcomes observed when all three were employed simultaneously. The proposed methodology is competitive across evaluation indices, the three modules complementing each other to collectively enhance segmentation accuracy. It is therefore anticipated to serve as a robust tool for assisting clinicians in auxiliary diagnosis and to contribute to the advancement of medical intelligence technology.
Citations: 0
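The combined Dice + binary cross-entropy loss mentioned in the abstract above is a common pairing for segmentation training. A minimal NumPy sketch follows; the equal 50/50 weighting and the smoothing constant are assumptions for illustration, not values reported by the paper:

```python
import numpy as np

def dice_bce_loss(pred, target, smooth=1.0, w_dice=0.5):
    """Combined Dice + binary cross-entropy loss for binary segmentation.

    pred: predicted foreground probabilities in (0, 1); target: {0, 1} mask.
    Dice term penalizes poor region overlap; BCE term penalizes per-pixel
    miscalibration. w_dice balances the two (assumed 0.5 here).
    """
    p = np.clip(pred.ravel(), 1e-7, 1 - 1e-7)
    t = target.ravel().astype(float)
    inter = (p * t).sum()
    dice = 1.0 - (2.0 * inter + smooth) / (p.sum() + t.sum() + smooth)
    bce = -(t * np.log(p) + (1 - t) * np.log(1 - p)).mean()
    return w_dice * dice + (1 - w_dice) * bce

target = np.array([[0, 1], [1, 1]])
good = dice_bce_loss(np.array([[0.05, 0.95], [0.90, 0.97]]), target)
bad = dice_bce_loss(np.array([[0.90, 0.10], [0.20, 0.05]]), target)
```

Confident, correct predictions yield a loss near zero, while confidently wrong ones are penalized by both terms at once, which is what stabilizes training when foreground pixels are scarce.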