Self-supervised Visual Feature Learning for Polyp Segmentation in Colonoscopy Images Using Image Reconstruction as Pretext Task

Le Thi Thu Hong, N. Thanh, T. Q. Long
DOI: 10.1109/NICS54270.2021.9701580
Published in: 2021 8th NAFOSTED Conference on Information and Computer Science (NICS), 2021-12-21
Citations: 2

Abstract

Automatic polyp detection and segmentation are desirable for colon screening because the polyp miss rate in clinical practice is relatively high. Deep learning-based approaches to polyp segmentation have gained much attention in recent years because their automatic feature extraction can segment polyp regions with unprecedented precision. However, training these networks requires a large amount of manually annotated data, which is limited by the available resources of endoscopists. To address this challenge, we propose a self-supervised visual learning method for polyp segmentation. We adapt self-supervised visual feature learning with image reconstruction as the pretext task and polyp segmentation as the downstream task. UNet is used as the backbone architecture for both the pretext and downstream tasks. An unlabeled colonoscopy image dataset is used to train the pretext network. For polyp segmentation, we apply transfer learning to the pretext network, and the segmentation network is trained on a public benchmark dataset for polyp segmentation. Our experiments demonstrate that the proposed self-supervised learning method achieves better segmentation accuracy than a UNet trained from scratch. On the CVC-ColonDB polyp segmentation dataset, with only 300 annotated images, the proposed method improves the IoU metric from 76.87% to 81.99% and the Dice metric from 86.61% to 89.33%, compared to the baseline UNet.
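The two-stage recipe described in the abstract, pretext reconstruction on unlabeled frames followed by transfer of the learned encoder to segmentation, can be sketched as below. This is a minimal PyTorch illustration, not the authors' implementation: the `TinyUNet` stand-in, its layer sizes, and the loss choices are assumptions, and the skip connections and pooling of a real UNet are omitted for brevity.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Stand-in encoder-decoder (real UNet adds skips and pooling)."""
    def __init__(self, out_channels):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_channels, 3, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Stage 1 (pretext): reconstruct unlabeled colonoscopy frames, e.g. MSE loss.
pretext = TinyUNet(out_channels=3)
images = torch.rand(2, 3, 32, 32)            # stand-in for unlabeled frames
recon_loss = nn.functional.mse_loss(pretext(images), images)

# Stage 2 (downstream): copy the pretext encoder weights into a fresh network
# with a 1-channel mask head, then fine-tune on the labeled polyp dataset.
seg = TinyUNet(out_channels=1)
seg.encoder.load_state_dict(pretext.encoder.state_dict())
masks = torch.rand(2, 1, 32, 32)             # stand-in for annotated masks
seg_loss = nn.functional.binary_cross_entropy_with_logits(seg(images), masks)
```

In practice each stage would run its own optimizer loop; the key transfer-learning step is `load_state_dict`, which initializes the segmentation encoder from the reconstruction-pretrained weights instead of from scratch.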
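The IoU and Dice figures quoted in the abstract are the standard overlap metrics for binary segmentation masks. A minimal sketch of how they are computed, with masks flattened to 0/1 pixel lists (this helper is illustrative, not the authors' evaluation code):

```python
def iou_dice(pred, target):
    """IoU and Dice for binary masks given as flat lists of 0/1 pixel labels."""
    inter = sum(p & t for p, t in zip(pred, target))   # |P ∩ T|
    union = sum(p | t for p, t in zip(pred, target))   # |P ∪ T|
    total = sum(pred) + sum(target)                    # |P| + |T|
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice

iou, dice = iou_dice([1, 1, 0, 1], [1, 0, 0, 1])
# inter = 2, union = 3, total = 5, so IoU = 2/3 and Dice = 4/5
```

Note that the two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why both improve together in the reported results.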