PolySegNet: improving polyp segmentation through swin transformer and vision transformer fusion

IF 2.8 | Zone 4 (Medicine) | Q2 ENGINEERING, BIOMEDICAL
Biomedical Engineering Letters | Pub Date: 2024-08-20 | eCollection Date: 2024-11-01 | DOI: 10.1007/s13534-024-00415-x
P Lijin, Mohib Ullah, Anuja Vats, Faouzi Alaya Cheikh, G Santhosh Kumar, Madhu S Nair
{"title":"PolySegNet:通过斯温变换器和视觉变换器融合改进息肉分割。","authors":"P Lijin, Mohib Ullah, Anuja Vats, Faouzi Alaya Cheikh, G Santhosh Kumar, Madhu S Nair","doi":"10.1007/s13534-024-00415-x","DOIUrl":null,"url":null,"abstract":"<p><p>Colorectal cancer ranks as the second most prevalent cancer worldwide, with a high mortality rate. Colonoscopy stands as the preferred procedure for diagnosing colorectal cancer. Detecting polyps at an early stage is critical for effective prevention and diagnosis. However, challenges in colonoscopic procedures often lead medical practitioners to seek support from alternative techniques for timely polyp identification. Polyp segmentation emerges as a promising approach to identify polyps in colonoscopy images. In this paper, we propose an advanced method, PolySegNet, that leverages both Vision Transformer and Swin Transformer, coupled with a Convolutional Neural Network (CNN) decoder. The fusion of these models facilitates a comprehensive analysis of various modules in our proposed architecture.To assess the performance of PolySegNet, we evaluate it on three colonoscopy datasets, a combined dataset, and their augmented versions. The experimental results demonstrate that PolySegNet achieves competitive results in terms of polyp segmentation accuracy and efficacy, achieving a mean Dice score of 0.92 and a mean Intersection over Union (IoU) of 0.86. These metrics highlight the superior performance of PolySegNet in accurately delineating polyp boundaries compared to existing methods. PolySegNet has shown great promise in accurately and efficiently segmenting polyps in medical images. The proposed method could be the foundation for a new class of transformer-based segmentation models in medical image analysis.</p>","PeriodicalId":46898,"journal":{"name":"Biomedical Engineering Letters","volume":"14 6","pages":"1421-1431"},"PeriodicalIF":2.8000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11502643/pdf/","citationCount":"0","resultStr":"{\"title\":\"PolySegNet: improving polyp segmentation through swin transformer and vision transformer fusion.\",\"authors\":\"P Lijin, Mohib Ullah, Anuja Vats, Faouzi Alaya Cheikh, G Santhosh Kumar, Madhu S Nair\",\"doi\":\"10.1007/s13534-024-00415-x\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Colorectal cancer ranks as the second most prevalent cancer worldwide, with a high mortality rate. Colonoscopy stands as the preferred procedure for diagnosing colorectal cancer. Detecting polyps at an early stage is critical for effective prevention and diagnosis. However, challenges in colonoscopic procedures often lead medical practitioners to seek support from alternative techniques for timely polyp identification. Polyp segmentation emerges as a promising approach to identify polyps in colonoscopy images. In this paper, we propose an advanced method, PolySegNet, that leverages both Vision Transformer and Swin Transformer, coupled with a Convolutional Neural Network (CNN) decoder. The fusion of these models facilitates a comprehensive analysis of various modules in our proposed architecture.To assess the performance of PolySegNet, we evaluate it on three colonoscopy datasets, a combined dataset, and their augmented versions. 
The experimental results demonstrate that PolySegNet achieves competitive results in terms of polyp segmentation accuracy and efficacy, achieving a mean Dice score of 0.92 and a mean Intersection over Union (IoU) of 0.86. These metrics highlight the superior performance of PolySegNet in accurately delineating polyp boundaries compared to existing methods. PolySegNet has shown great promise in accurately and efficiently segmenting polyps in medical images. The proposed method could be the foundation for a new class of transformer-based segmentation models in medical image analysis.</p>\",\"PeriodicalId\":46898,\"journal\":{\"name\":\"Biomedical Engineering Letters\",\"volume\":\"14 6\",\"pages\":\"1421-1431\"},\"PeriodicalIF\":2.8000,\"publicationDate\":\"2024-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11502643/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical Engineering Letters\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1007/s13534-024-00415-x\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/11/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Engineering Letters","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s13534-024-00415-x","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/11/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract


Colorectal cancer ranks as the second most prevalent cancer worldwide, with a high mortality rate. Colonoscopy stands as the preferred procedure for diagnosing colorectal cancer. Detecting polyps at an early stage is critical for effective prevention and diagnosis. However, challenges in colonoscopic procedures often lead medical practitioners to seek support from alternative techniques for timely polyp identification. Polyp segmentation emerges as a promising approach to identify polyps in colonoscopy images. In this paper, we propose an advanced method, PolySegNet, that leverages both Vision Transformer and Swin Transformer, coupled with a Convolutional Neural Network (CNN) decoder. The fusion of these models facilitates a comprehensive analysis of various modules in our proposed architecture. To assess the performance of PolySegNet, we evaluate it on three colonoscopy datasets, a combined dataset, and their augmented versions. The experimental results demonstrate that PolySegNet achieves competitive results in terms of polyp segmentation accuracy and efficacy, achieving a mean Dice score of 0.92 and a mean Intersection over Union (IoU) of 0.86. These metrics highlight the superior performance of PolySegNet in accurately delineating polyp boundaries compared to existing methods. PolySegNet has shown great promise in accurately and efficiently segmenting polyps in medical images. The proposed method could be the foundation for a new class of transformer-based segmentation models in medical image analysis.
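For context, the reported Dice and IoU scores are standard overlap metrics for binary segmentation masks. The sketch below shows one way to compute them with PyTorch tensors; it is an illustrative implementation, not code from the paper, and the threshold and smoothing constant are assumed values chosen for the example.

```python
import torch

def dice_and_iou(pred: torch.Tensor, target: torch.Tensor,
                 threshold: float = 0.5, eps: float = 1e-7):
    """Compute mean Dice score and mean IoU for a batch of binary masks.

    pred   : (N, H, W) predicted probabilities in [0, 1]
    target : (N, H, W) ground-truth masks with values {0, 1}
    """
    # Binarize predictions (the 0.5 threshold is an illustrative choice).
    pred_bin = (pred > threshold).float()
    target = target.float()

    # Per-image overlap statistics.
    intersection = (pred_bin * target).sum(dim=(1, 2))
    pred_area = pred_bin.sum(dim=(1, 2))
    target_area = target.sum(dim=(1, 2))
    union = pred_area + target_area - intersection

    # Dice = 2|A∩B| / (|A| + |B|);  IoU = |A∩B| / |A∪B|.
    dice = (2 * intersection + eps) / (pred_area + target_area + eps)
    iou = (intersection + eps) / (union + eps)
    return dice.mean().item(), iou.mean().item()

# Hypothetical usage with random tensors standing in for model outputs:
pred = torch.rand(4, 256, 256)
gt = (torch.rand(4, 256, 256) > 0.5).float()
print(dice_and_iou(pred, gt))
```

As a sanity check on the reported numbers, a per-image Dice of 0.92 corresponds to an IoU of Dice / (2 - Dice) ≈ 0.85, which is in line with the paper's reported pair of 0.92 and 0.86 (the relation holds exactly per image, only approximately for dataset means).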

Source journal: Biomedical Engineering Letters (ENGINEERING, BIOMEDICAL)
CiteScore: 6.80
Self-citation rate: 0.00%
Articles published: 34
Journal description: Biomedical Engineering Letters (BMEL) aims to present the innovative experimental science and technological development in the biomedical field as well as clinical application of new development. The article must contain original biomedical engineering content, defined as development, theoretical analysis, and evaluation/validation of a new technique. BMEL publishes the following types of papers: original articles, review articles, editorials, and letters to the editor. All the papers are reviewed in single-blind fashion.