Latest Articles from International Journal of Imaging Systems and Technology

ESFCU-Net: A Lightweight Hybrid Architecture Incorporating Self-Attention and Edge Enhancement Mechanisms for Enhanced Polyp Image Segmentation
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-01-10 DOI: 10.1002/ima.70026
Wenbin Yang, Xin Chang, Xinyue Guo
{"title":"ESFCU-Net: A Lightweight Hybrid Architecture Incorporating Self-Attention and Edge Enhancement Mechanisms for Enhanced Polyp Image Segmentation","authors":"Wenbin Yang,&nbsp;Xin Chang,&nbsp;Xinyue Guo","doi":"10.1002/ima.70026","DOIUrl":"https://doi.org/10.1002/ima.70026","url":null,"abstract":"<div>\u0000 \u0000 <p>Early detection of polyps during endoscopy reduces the risk of malignancy and facilitates timely intervention. Precise polyp segmentation during endoscopy aids clinicians in identifying polyps, playing a vital role in the clinical prevention of malignancy. However, due to considerable differences in the size, color, and morphology of polyps, the resemblance between polyp lesions and their background, and the impact of factors like lighting changes, low-contrast areas, and gastrointestinal contents during image acquisition, accurate polyp segmentation remains a challenging issue. Additionally, most existing methods require high computational power, which restricts their practical application. Our objective is to develop and test a new lightweight polyp segmentation architecture. This paper presents a hybrid lightweight architecture called ESFCU-Net that combines self-attention and edge enhancement to address these challenges. The model comprises an encoder-decoder and an improved fire module (ESF module), which can learn both local and global information, reduce information loss, maintain computational efficiency, enhance the extraction of critical features in images, and includes a coordinate attention mechanism in each skip connection to suppress background interference and minimize spatial information loss. Extensive validation on two public datasets (Kvasir-SEG and CVC-ClinicDB) and one internal dataset reveals that this network exhibits strong learning performance and generalization capabilities, significantly enhances segmentation accuracy, surpasses existing segmentation methods, and shows potential for clinical application. The code for our work and more technical details can be found at https://github.com/aaafoxy/ESFCU-Net.git.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
NRD-Net: Non-local Residual Dense Network for Brain MR Image Super-Resolution
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-01-08 DOI: 10.1002/ima.70022
Jian Chen, Mengting Guang, Geng Chen, Xiong Yao, Tewodros Megabiaw Tassew, Zhen Li, Zuoyong Li, He Zhang
{"title":"NRD-Net: Non-local Residual Dense Network for Brain MR Image Super-Resolution","authors":"Jian Chen,&nbsp;Mengting Guang,&nbsp;Geng Chen,&nbsp;Xiong Yao,&nbsp;Tewodros Megabiaw Tassew,&nbsp;Zhen Li,&nbsp;Zuoyong Li,&nbsp;He Zhang","doi":"10.1002/ima.70022","DOIUrl":"https://doi.org/10.1002/ima.70022","url":null,"abstract":"<div>\u0000 \u0000 <p>Super-resolution can significantly enhance image visibility and restore image features without requiring scanning devices to be updated. It is notably useful for magnetic resonance imaging (MRI), which suffers from low-resolution issue. In practice, MR images possess more intricate texture details than natural images, leading to the issue that existing super-resolution algorithms struggle to reach acceptable performance, particularly for brain MR images. To this end, we propose a non-local residual dense network (NRD-Net) for brain MR image super-resolution. In NRD-Net, shallow features are first extracted using a convolutional layer. Next, we propose to adaptively weight the extracted features using a non-local residual dense block, which captures the long-range relationship between features and enables the network to incorporate global information while retaining rich deep features. Finally, HR images are reconstructed using the reconstruction block based on atrous spatial pyramid pooling and sub-pixel convolution. Extensive experiments illustrate that, compared with existing super-resolution approaches, our NRD-Net provides better reconstruction performance with promising peak signal-to-noise ratio and structural similarity, as well as better anatomical structural details.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive Fusion and Edge-Oriented Enhancement for Brain Tumor Segmentation With Missing Modalities
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-01-08 DOI: 10.1002/ima.70012
Yulan Yan, Yinwei Zhan, Huiyao He
{"title":"Adaptive Fusion and Edge-Oriented Enhancement for Brain Tumor Segmentation With Missing Modalities","authors":"Yulan Yan,&nbsp;Yinwei Zhan,&nbsp;Huiyao He","doi":"10.1002/ima.70012","DOIUrl":"https://doi.org/10.1002/ima.70012","url":null,"abstract":"<div>\u0000 \u0000 <p>Magnetic resonance imaging (MRI) offers comprehensive information about brain structures, enabling excellent performance in brain tumor segmentation using multimodal MRI in many methods. Nonetheless, missing modalities are common in clinical practice, which can significantly degrade segmentation performance. Current brain tumor segmentation methods often struggle to maintain feature consistency and robustness in multimodal feature fusion when modalities are missing and face difficulties in accurately capturing tumor boundaries. In this study, we propose an adaptive fusion and edge-oriented enhancement method to address these challenges. Our approach introduces learnable parameters and a masked attention mechanism in the transformer model to achieve cross-modal adaptive fusion, ensuring consistent feature representation even with missing data. To aggregate more information, we integrate multimodal and multi-level features through a hierarchical context integration module. Additionally, to tackle the complex morphology of brain tumor regions, we design an edge-enhanced deformable convolution module that captures deformation information and edge features from incomplete multimodal images, enabling precise tumor localization. Evaluations on the widely recognized BRATS2018 and BRATS2020 datasets demonstrate that our approach significantly surpasses existing brain tumor segmentation techniques in scenarios with missing modalities.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Segmentation of Brain Tumor Resections in Intraoperative 3D Ultrasound Images Using a Semisupervised Cross nnSU-Net
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-01-07 DOI: 10.1002/ima.70018
Yuhua Li, Shan Jiang, Zhiyong Yang, Liwen Wang, Zifeng Liu, Zeyang Zhou
{"title":"Segmentation of Brain Tumor Resections in Intraoperative 3D Ultrasound Images Using a Semisupervised Cross nnSU-Net","authors":"Yuhua Li,&nbsp;Shan Jiang,&nbsp;Zhiyong Yang,&nbsp;Liwen Wang,&nbsp;Zifeng Liu,&nbsp;Zeyang Zhou","doi":"10.1002/ima.70018","DOIUrl":"https://doi.org/10.1002/ima.70018","url":null,"abstract":"<div>\u0000 \u0000 <p>Intraoperative ultrasound (iUS) has been widely used in recent years to track intraoperative brain tissue deformation. Outlining tumor boundaries on iUS not only facilitates the robustness and accuracy of brain shift correction but also enables the direct use of iUS information for neurosurgical navigation. We developed a semisupervised cross nnU-Net with depthwise separable convolution (SSC nnSU-Net) for real-time segmentation of 3D iUS images by two networks with different initialization but consistent network structure networks. Unlike previous methods, RESECT as labeled data and ReMIND as unlabeled data for hybrid dataset training selected break down the barriers between different datasets and further alleviate the problem of “data hunger.” The SSC nnSU-Net method was evaluated by ablation of semisupervised learning, comparison with other state-of-the-art methods, and model complexity. The results indicate that the proposed framework achieves a certain balance in terms of computation time, GPU memory utilization, and segmentation performance. This motivates segmentation of 3D iUS images for real-time application in clinical surgery. The method can assist surgeons in identifying brain tumors through iUS.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Pancreas Segmentation Based on Multi-Stage Attention Enhanced U-Net
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-01-06 DOI: 10.1002/ima.70025
Peng He, Qian qian Kong, Yun Chen, Chang Bin Shao, Zhen Su
{"title":"Pancreas Segmentation Based on Multi-Stage Attention Enhanced U-Net","authors":"Peng He,&nbsp;Qian qian Kong,&nbsp;Yun Chen,&nbsp;Chang Bin Shao,&nbsp;Zhen Su","doi":"10.1002/ima.70025","DOIUrl":"https://doi.org/10.1002/ima.70025","url":null,"abstract":"<div>\u0000 \u0000 <p>Considering that the pancreas occupies a very small proportion of the abdominal organs and varies in size, shape, and position, U-Net encounters challenges related to intra-class inconsistency and inter-class indistinction in pancreatic segmentation tasks. To address these issues, this paper proposes a pancreas segmentation method based on a multi-stage attention enhanced U-Net to effectively leverage feature information at each stage of the U-Net architecture. In particular, during the encoding phase of U-Net, triple attention is utilized to capture dependencies between different dimensions; during the skip connection phase, a channel cross-fusion Transformer is introduced to fuse multi-scale channel information from different layers of the encoder; and during the decoding phase, feature integration convolution is employed to enhance the model's capacity for integrating global and local information. A four-fold cross-validation was performed on 82 Three-Dimensional Computed Tomography (3D CT) scans from the National Institutes of Health (NIH) and 281 3D CT scans from the Medical Segmentation Decathlon (MSD) to evaluate the proposed model. Experimental results demonstrate that the proposed method achieved superior performance on both pancreatic datasets, surpassing mainstream pancreatic segmentation methods, with average Dice scores of 87.16% and 87.53%, respectively, yielding improvements of 2.58% and 1.71% compared to U-Net. The proposed method is an end-to-end pancreatic segmentation algorithm suitable for small organ region segmentation in complex tissues, capable of high-precision pancreatic segmentation in processing entire 3D CT image slices.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143112498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Cascade Model to Detect and Segment Lung Nodule Using YOLOv8 and Resnet50U-Net
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2025-01-04 DOI: 10.1002/ima.70023
Selma Mammeri, Mohamed-Yassine Haouam, Mohamed Amroune, Issam Bendib, Elhadj Benkhelifa
{"title":"A Cascade Model to Detect and Segment Lung Nodule Using YOLOv8 and Resnet50U-Net","authors":"Selma Mammeri,&nbsp;Mohamed-Yassine Haouam,&nbsp;Mohamed Amroune,&nbsp;Issam Bendib,&nbsp;Elhadj Benkhelifa","doi":"10.1002/ima.70023","DOIUrl":"https://doi.org/10.1002/ima.70023","url":null,"abstract":"<div>\u0000 \u0000 <p>In our research, we introduce a sophisticated “two-stage” or cascade model designed to enhance the precision of lung nodule analysis. This innovative approach integrates two crucial processes: detection and segmentation. In the initial stage, a specialized object detection algorithm efficiently scans medical images to identify potential areas of interest, specifically focusing on lung nodules. This plays a crucial role in minimizing the segmentation area, particularly in the context of lung imaging, where the structures exhibit heterogeneity. This algorithm helps focus the segmentation process only on the relevant areas, reducing unnecessary computation and potential errors. Subsequently, the second stage employs advanced segmentation algorithms to precisely delineate the boundaries of the identified nodules, providing detailed and accurate contours. The combination of object detection and segmentation not only enhances the overall accuracy of lung cancer detection but also minimizes false positives, streamlines the workflow for radiologists, and provides a more comprehensive understanding of potential abnormalities. Additionally, it improves the efficiency and accuracy of segmentation, especially in cases where the complexity and heterogeneity of the lung structure make the segmentation task more challenging. This proposed method has been tested on the LIDC-IDRI dataset, demonstrating favorable results in both nodule detection and segmentation steps, with 81.3% mAP and 83.54% DSC, respectively. These results serve as evidence that the proposed method effectively improves the accuracy of lung nodule detection and segmentation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2025-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143111981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TransGAN: A Transformer-CNN Mixed Model for Volumetric CT to MRI Modality Translation and Visualization
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-12-27 DOI: 10.1002/ima.70013
Ji Ma, Yetao Xie, Jinjin Chen
{"title":"TransGAN: A Transformer-CNN Mixed Model for Volumetric CT to MRI Modality Translation and Visualization","authors":"Ji Ma,&nbsp;Yetao Xie,&nbsp;Jinjin Chen","doi":"10.1002/ima.70013","DOIUrl":"https://doi.org/10.1002/ima.70013","url":null,"abstract":"<div>\u0000 \u0000 <p>Many clinical procedures necessitate the integration of multi-modality imaging data to facilitate more informed decision-making. In practice, the cost of scanning and the potential health risks involved often make the scanning of multi-modality images impractical. It is therefore important to explore the area of modality translation. In recent years, numerous studies have been conducted with the objective of developing methods for translating images between different modalities. Nevertheless, due to the substantial memory requirements and the difficulty in obtaining perfectly paired data, 3D volume modality translation remains a challenging topic. This research proposes a 3D generative adversarial network for the 3D CT-MRI modality translation task. In order to leverage both low-level features (pixel-wise information) and high-level features (overall image structure), our method introduces both convolutional and transformer structures. Furthermore, our method demonstrates robustness in the presence of imperfectly paired matched CT and MRI volumes from two medical datasets employed in the research. To validate the network performance, qualitative and quantitative comparisons and ablation studies were conducted. The results of the experiments demonstrate that the proposed framework can achieve good results in comparison to four other methods, with improvements of between 10% and 20% in four objective and one subjective evaluation metrics.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rapid Early Screening for Lymphedema Using Kinect
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-12-27 DOI: 10.1002/ima.70020
Sneha Noble, Uma Gopalakrishnan, D. K. Vijaykumar, Rahul Krishnan Pathinarupothi
{"title":"Rapid Early Screening for Lymphedema Using Kinect","authors":"Sneha Noble,&nbsp;Uma Gopalakrishnan,&nbsp;D. K. Vijaykumar,&nbsp;Rahul Krishnan Pathinarupothi","doi":"10.1002/ima.70020","DOIUrl":"https://doi.org/10.1002/ima.70020","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer related lymphedema (BCRL) is the swelling that generally occurs in the arms and is caused by the removal of or damage to lymph nodes as a part of invasive cancer treatment. Treating it at the earliest possible stage is the best way to manage the condition and prevent the pain, recurrent infection, reduced mobility, and impaired function. Current approaches for lymphedema detection include physical examination with tape measurement, lymphoscintigraphy, and magnetic resonance lymphangiography. The tape measurement method requires high manual involvement and lacks standardization, and the imaging modalities are expensive, time-consuming and involve the injection of a contrast agent. To overcome these challenges, we built and validated a noncontact volumetric measurement system called rapid early screening for lymphedema (RESLy) consisting of (a) noninvasive Kinect infrared sensor-based imaging that captures and builds a 3D reconstructed model of the body, (b) an automated segmentation process for extracting the region of interest (ROI), and (c) the volumetric computation of lymphedema. By employing density-based spatial clustering of applications with noise (DBSCAN), which is a density-based unsupervised learning algorithm, RESLy automatically segments out limbs from 3D maps, thereby streamlining automated measurements to be conducted quickly. The in-lab calibration, testing, validation, and clinical deployment of RESLy among twelve patients in a tertiary cancer care department demonstrate that it is able to accurately identify limb volume differences with the least standard error of measurement, as well as identify lymphedema stages satisfactorily.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing Melanoma Diagnostic: Harnessing the Synergy of AI and CNNs for Groundbreaking Advances in Early Melanoma Detection and Treatment Strategies
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-12-24 DOI: 10.1002/ima.70016
Muhammad Sajid, Ali Haider Khan, Tauqeer Safdar Malik, Anas Bilal, Zohaib Ahmad, Raheem Sarwar
{"title":"Enhancing Melanoma Diagnostic: Harnessing the Synergy of AI and CNNs for Groundbreaking Advances in Early Melanoma Detection and Treatment Strategies","authors":"Muhammad Sajid,&nbsp;Ali Haider Khan,&nbsp;Tauqeer Safdar Malik,&nbsp;Anas Bilal,&nbsp;Zohaib Ahmad,&nbsp;Raheem Sarwar","doi":"10.1002/ima.70016","DOIUrl":"https://doi.org/10.1002/ima.70016","url":null,"abstract":"<p>Skin cancer is one of the most prevalent and deadly neoplasms globally. Although melanoma constitutes a minor percentage of all skin cancer types, it presently stands as the primary cause of skin cancer-related deaths. Previous studies indicate that deep learning algorithms may identify subtle patterns and detailed features in medical images for melanoma detection, however, challenges remain due to the scarcity of annotated images and the intricacy of cancer images. The need for early skin cancer detection, particularly melanoma, is an urgent concern due to its potential for high mortality when not identified and treated promptly. This paper introduces a comprehensive method for melanoma detection in medical images through the incorporation of data augmentation approaches. We employed a CNN model for the categorization of melanoma images with data augmentation techniques such as random horizontal flips, random cropping, grayscale conversion, Gaussian blur, and random perspective transformations. Experiments demonstrate that the suggested method surpasses the existing peak performance in melanoma identification within medical imaging. The results indicate the potential of data augmentation techniques in alleviating the issue of insufficient medical images and improving melanoma detection. We attained an overall accuracy of 93.43%, a sensitivity of 99.74%, and a specificity of 88.53% in melanoma detection, surpassing state-of-the-art approaches with the HAM10000 dataset. Our model is beneficial in clinical settings to aid dermatologists in precisely identifying patients, facilitating early intervention, and potentially preserving lives. In the future, we intend to test our algorithm on more skin cancer datasets that may enhance the accuracy of melanoma diagnosis. A crucial component of the design is the ablation study, which seeks to discover and improve the most significant model parameters for computational efficiency and diagnostic precision. The HAM10000 dataset is utilized for ablation tests to assess and validate the efficacy of the suggested method.</p>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/ima.70016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Edge-Enhanced Networks for Optic Disc and Optic Cup Segmentation
IF 3.0 | Zone 4 | Computer Science
International Journal of Imaging Systems and Technology Pub Date : 2024-12-20 DOI: 10.1002/ima.70019
Mingtao Liu, Yunyu Wang, Yuxuan Li, Shunbo Hu, Guodong Wang, Jing Wang
{"title":"A Novel Edge-Enhanced Networks for Optic Disc and Optic Cup Segmentation","authors":"Mingtao Liu,&nbsp;Yunyu Wang,&nbsp;Yuxuan Li,&nbsp;Shunbo Hu,&nbsp;Guodong Wang,&nbsp;Jing Wang","doi":"10.1002/ima.70019","DOIUrl":"https://doi.org/10.1002/ima.70019","url":null,"abstract":"<div>\u0000 \u0000 <p>Optic disc and optic cup segmentation plays a key role in early diagnosis of glaucoma which is a serious eye disease that can cause damage to the optic nerve, retina, and may cause permanent blindness. Deep learning-based models are used to improve the efficiency and accuracy of fundus image segmentation. However, most approaches currently still have limitations in accurately segmenting optic disc and optic cup, which suffer from the lack of feature abstraction representation and blurring of segmentation in edge regions. This paper proposes a novel edge enhancement network called EE-TransUNet to tackle this challenge. It incorporates the Cascaded Convolutional Fusion block before each decoder layer. This enhances the abstract representation of features and preserves the information of the original features, thereby improving the model's nonlinear fitting ability. Additionally, the Channel Shuffling Multiple Expansion Fusion block is incorporated into the skip connections of the model. This block enhances the network's ability to perceive and characterize image features, thereby improving segmentation accuracy at the edges of the optic cup and optic disc. We validate the effectiveness of the method by conducting experiments on three publicly available datasets, RIM-ONE-v3, REFUGUE and DRISHTI-GS. The Dice coefficients on the test set are 0.871, 0.9056, 0.9068 for the optic cup region and 0.9721, 0.967, 0.9774 for the optic disc region, respectively. The proposed method achieves competitive results compared to other state-of-the-art methods. Our code is available at: https://github.com/wangyunyuwyy/EE-TransUNet.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 1","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142868846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0