Towards better laparoscopic video segmentation: A class-wise contrastive learning approach with multi-scale feature extraction

Impact factor: 2.8 | Q3, Engineering, Biomedical
Luyang Zhang, Yuichiro Hayashi, Masahiro Oda, Kensaku Mori
Healthcare Technology Letters, vol. 11, no. 2-3, pp. 126-136. Published 13 January 2024.
DOI: 10.1049/htl2.12069
PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1049/htl2.12069
Citations: 0

Abstract

The task of segmentation is integral to computer-aided surgery systems. Given the privacy concerns associated with medical data, collecting a large amount of annotated data for training is challenging. Unsupervised learning techniques, such as contrastive learning, have shown powerful capabilities in learning image-level representations from unlabelled data. This study leverages classification labels to enhance the accuracy of the segmentation model trained on limited annotated data. The method uses a multi-scale projection head to extract image features at various scales. The partitioning method for positive sample pairs is then improved to perform contrastive learning on the extracted features at each scale to effectively represent the differences between positive and negative samples in contrastive learning. Furthermore, the model is trained simultaneously with both segmentation labels and classification labels. This enables the model to extract features more effectively from each segmentation target class and further accelerates the convergence speed. The method was validated using the publicly available CholecSeg8k dataset for comprehensive abdominal cavity surgical segmentation. Compared to select existing methods, the proposed approach significantly enhances segmentation performance, even with a small labelled subset (1–10%) of the dataset, showcasing a superior intersection over union (IoU) score.
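The class-wise pairing idea in the abstract can be illustrated with a minimal, framework-free sketch of a supervised (class-wise) contrastive loss, in which the positives for each anchor are all other samples that share its class label, and everything else is a negative. This is an illustrative assumption about the general technique, not the paper's exact formulation: the function names, the cosine-similarity measure, and the temperature value below are all hypothetical choices, and the multi-scale projection head is omitted for brevity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def class_wise_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative class-wise (supervised) contrastive loss.

    For each anchor i, every other sample j with labels[j] == labels[i]
    is a positive; the loss is the mean InfoNCE term over all such
    anchor-positive pairs. Sketch only -- not the paper's method.
    """
    n = len(embeddings)
    total, count = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and labels[j] == labels[i]]
        if not positives:
            continue  # anchors with no same-class partner contribute nothing
        denom = sum(math.exp(cosine(embeddings[i], embeddings[k]) / temperature)
                    for k in range(n) if k != i)
        for j in positives:
            num = math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
            total += -math.log(num / denom)
            count += 1
    return total / count
```

As expected of such a loss, it is small when same-class embeddings cluster together and grows when the class partition cuts across the embedding geometry, which is the signal that drives per-class feature learning.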


Source journal

Healthcare Technology Letters (Health Professions - Health Information Management)
CiteScore: 6.10
Self-citation rate: 4.80%
Articles per year: 12
Review time: 22 weeks

Journal description: Healthcare Technology Letters aims to bring together an audience of biomedical and electrical engineers, physical and computer scientists, and mathematicians to enable the exchange of the latest ideas and advances through rapid online publication of original healthcare technology research. Major themes of the journal include (but are not limited to):

Major technological/methodological areas:
Biomedical signal processing
Biomedical imaging and image processing
Bioinstrumentation (sensors, wearable technologies, etc.)
Biomedical informatics

Major application areas:
Cardiovascular and respiratory systems engineering
Neural engineering, neuromuscular systems
Rehabilitation engineering
Bio-robotics, surgical planning and biomechanics
Therapeutic and diagnostic systems, devices and technologies
Clinical engineering
Healthcare information systems, telemedicine, mHealth