Advancing MRI segmentation with CLIP-driven semi-supervised learning and semantic alignment

IF 5.5 · CAS Tier 2 (Computer Science) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Bo Sun, Kexuan Li, Jingjuan Liu, Zhen Sun, Xuehao Wang, Yuanbo He, Xin Zhao, Huadan Xue, Aimin Hao, Shuai Li, Yi Xiao
DOI: 10.1016/j.neucom.2024.128690
Journal: Neurocomputing
Publication date: 2024-10-15
Full text: https://www.sciencedirect.com/science/article/pii/S0925231224014619
Citations: 0

Abstract

Precise segmentation and reconstruction of multiple structures in MRI are crucial for clinical applications such as surgical navigation. However, medical image segmentation faces several challenges. Although semi-supervised methods can reduce the annotation workload, they often suffer from limited robustness. To address this issue, this study proposes a novel CLIP-driven semi-supervised model that comprises two branches and a fusion module. In the image branch, copy-paste is used as a data augmentation method to enhance consistency learning. In the text branch, patient-level information is encoded via CLIP to drive the image branch. Notably, a novel cross-modal fusion module is designed to enhance the alignment and representation of text and image features. Additionally, a semantic spatial alignment module is introduced to register segmentation results from different axial MRI scans into a unified space. Three multi-modal datasets (one private and two public) were constructed to demonstrate the model's performance. Compared with previous state-of-the-art methods, this model shows a significant advantage with both 5% and 10% labeled data. This study constructs a robust semi-supervised medical segmentation model that is particularly effective in addressing label inconsistency and abnormal organ deformations. It also tackles the axial non-orthogonality challenges inherent in MRI, providing a consistent view of multiple structures.
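The copy-paste augmentation mentioned for the image branch typically mixes a patch from a labeled volume into an unlabeled one so that a consistency loss can be applied region-wise. The sketch below is an illustrative NumPy reconstruction, not the authors' implementation: the function name `copy_paste_mix`, the `patch_frac` parameter, and the region-mask bookkeeping are all assumptions about how such mixing is commonly done in semi-supervised segmentation.

```python
import numpy as np

def copy_paste_mix(labeled_img, labeled_mask, unlabeled_img,
                   patch_frac=0.5, seed=None):
    """Paste a random cuboid patch from a labeled volume into an
    unlabeled one.

    Returns the mixed image, a pseudo-mask that is valid only inside
    the pasted patch, and a boolean region mask marking the pasted
    voxels, so a consistency loss can be applied region-wise.
    """
    rng = np.random.default_rng(seed)
    mixed = unlabeled_img.copy()
    region = np.zeros(unlabeled_img.shape, dtype=bool)

    # Choose a random cuboid covering roughly `patch_frac` of each axis.
    slices = []
    for dim in unlabeled_img.shape:
        size = max(1, int(dim * patch_frac))
        start = rng.integers(0, dim - size + 1)
        slices.append(slice(start, start + size))
    slices = tuple(slices)

    mixed[slices] = labeled_img[slices]   # paste the labeled patch
    region[slices] = True                 # record where labels are valid
    # Ground-truth labels apply only inside the pasted region.
    pseudo_mask = np.where(region, labeled_mask, 0)
    return mixed, pseudo_mask, region
```

In a training loop along these lines, the pasted region would be supervised with the labeled mask while the remaining voxels take pseudo-labels from a teacher network, which is what makes the mixing useful for consistency learning.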


Source journal: Neurocomputing (Engineering & Technology – Computer Science: Artificial Intelligence)
CiteScore: 13.10
Self-citation rate: 10.00%
Articles per year: 1382
Review time: 70 days
Journal description: Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Neurocomputing theory, practice, and applications are the essential topics covered.