G-SET-DCL: a guided sequential episodic training with dual contrastive learning approach for colon segmentation.

Impact Factor 2.3 · CAS Region 3 (Medicine) · JCR Q3 · Engineering, Biomedical
Samir Farag Harb, Asem Ali, Mohamed Yousuf, Salwa Elshazly, Aly Farag
DOI: 10.1007/s11548-024-03319-4 · International Journal of Computer Assisted Radiology and Surgery, pp. 279-287 · Published 2025-02-01 (Epub 2025-01-09)
Citations: 0

Abstract

Purpose: This article introduces a novel deep learning approach to substantially improve the accuracy of colon segmentation even with limited data annotation, which enhances the overall effectiveness of the CT colonography pipeline in clinical settings.

Methods: The proposed approach integrates 3D contextual information via guided sequential episodic training in which a query CT slice is segmented by exploiting its previous labeled CT slice (i.e., support). Segmentation starts by detecting the rectum using a Markov Random Field-based algorithm. Then, supervised sequential episodic training is applied to the remaining slices, while contrastive learning is employed to enhance feature discriminability, thereby improving segmentation accuracy.
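The slice-by-slice propagation described in the Methods can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's network: `segment_slice` stands in for the learned query/support segmenter, a simple binary dilation of the previous slice's predicted mask plays the role of the support prior, and the rectum mask (in the paper, produced by the MRF-based detector) seeds the first episode.

```python
import numpy as np

def dilate(mask, iterations=2):
    """Simple 4-neighbour binary dilation (stand-in for a learned spatial prior)."""
    m = mask.copy()
    for _ in range(iterations):
        m = (m | np.roll(m, 1, 0) | np.roll(m, -1, 0)
               | np.roll(m, 1, 1) | np.roll(m, -1, 1))
    return m

def segment_slice(query, support_mask, thr=0.5):
    """Toy 'episode': the support mask from the previous slice gates which
    pixels of the query slice are considered colon candidates."""
    candidates = dilate(support_mask)
    return (query > thr) & candidates

def sequential_segmentation(volume, rectum_mask):
    """volume: (Z, H, W) intensities in [0, 1]; rectum_mask: boolean seed on slice 0.
    Each slice's prediction becomes the support for the next slice."""
    masks = [rectum_mask]
    for z in range(1, volume.shape[0]):
        masks.append(segment_slice(volume[z], masks[-1]))
    return np.stack(masks)
```

In the actual approach a trained model replaces both the thresholding and the dilation prior; the sketch only shows the sequential query/support structure.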

Results: The proposed method, evaluated on 98 abdominal scans of prepped patients, achieved a Dice coefficient of 97.3% and a polyp information preservation accuracy of 98.28%. Statistical analysis, including 95% confidence intervals, underscores the method's robustness and reliability. Clinically, this level of accuracy is vital for preserving critical polyp details, which are essential for accurate automatic diagnostic evaluation. The method also performs reliably with limited annotated data: it achieved a Dice coefficient of 97.15% when trained on far fewer annotated CT scans (10) than were used for testing (88).

Conclusions: The proposed sequential segmentation approach achieves promising results in colon segmentation. A key strength of the method is its ability to generalize effectively even with limited annotated datasets, a common challenge in medical imaging.

Source journal

International Journal of Computer Assisted Radiology and Surgery
Engineering, Biomedical · Radiology, Nuclear Medicine & Medical Imaging
CiteScore: 5.90
Self-citation rate: 6.70%
Articles per year: 243
Review time: 6-12 weeks
Journal description: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.