Exploring Contrastive Pre-Training for Domain Connections in Medical Image Segmentation

Zequn Zhang;Yun Jiang;Yunnan Wang;Baao Xie;Wenyao Zhang;Yuhang Li;Zhen Chen;Xin Jin;Wenjun Zeng
DOI: 10.1109/TMI.2024.3525095
Journal: IEEE Transactions on Medical Imaging, vol. 44, no. 4, pp. 1686-1698
Published: 2025-01-03 (Journal Article)
Full text: https://ieeexplore.ieee.org/document/10820867/
Citations: 0

Abstract

Unsupervised domain adaptation (UDA) in medical image segmentation aims to improve the generalization of deep models by alleviating domain gaps caused by inconsistency across equipment, imaging protocols, and patient conditions. However, existing UDA works remain insufficiently explored and present notable limitations: 1) they rely on cumbersome designs that prioritize aligning statistical metrics and distributions, which limits flexibility and generalization while overlooking the knowledge embedded in unlabeled data; 2) they are tailored to a specific type of domain shift and lack the generalization capability to handle the diverse shifts encountered in clinical scenarios. To overcome these limitations, we introduce MedCon, a unified framework that leverages general unsupervised contrastive pre-training to establish domain connections, effectively handling diverse domain shifts without tailored adjustments. Specifically, it first performs a general contrastive pre-training to establish domain connections by exploiting the rich prior knowledge in unlabeled images. The pre-trained backbone is then fine-tuned on source-domain images to identify per-pixel semantic categories. To capture both intra- and inter-domain connections of anatomical structures, positive-negative pairs are constructed at both local and global scales. A shared-weight encoder-decoder generates pixel-level representations, which are mapped into hyperspherical space by a non-learnable projection head to facilitate positive-pair matching. Comprehensive experiments on diverse medical image datasets confirm that MedCon outperforms previous methods, effectively managing a wide range of domain shifts and showing superior generalization capabilities.
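The abstract does not give MedCon's exact loss, so as a hypothetical illustration of the hyperspherical matching idea, the sketch below implements a generic InfoNCE-style contrastive loss: embeddings are L2-normalized onto the unit hypersphere (playing the role of a non-learnable projection), and a positive pair is pulled together against a set of negatives. All names and the temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.07):
    """Generic InfoNCE loss on unit-norm embeddings (illustrative sketch).

    anchor:    (dim,)   representation of one pixel/region
    positive:  (dim,)   representation of its matched pixel/region
    negatives: (k, dim) representations of non-matching pixels/regions
    """
    def normalize(v):
        # Project onto the unit hypersphere (non-learnable projection step).
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    a, p, n = normalize(anchor), normalize(positive), normalize(negatives)
    pos_sim = np.exp(np.dot(a, p) / temperature)          # similarity to the positive
    neg_sim = np.exp(n @ a / temperature).sum()           # similarities to all negatives
    return float(-np.log(pos_sim / (pos_sim + neg_sim)))
```

In this formulation a well-matched pair (high cosine similarity to the positive, low similarity to the negatives) yields a loss near zero, while a random pairing yields a larger loss, which is what drives representations of the same anatomical structure together across domains.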