CA2CL: Cluster-Aware Adversarial Contrastive Learning for Pathological Image Analysis.

Impact Factor 6.7 · CAS Zone 2 (Medicine) · JCR Q1, Computer Science, Information Systems
Junjian Li, Hulin Kuang, Jin Liu, Hailin Yue, Jianxin Wang
IEEE Journal of Biomedical and Health Informatics · DOI: 10.1109/JBHI.2025.3552640 · Published 2025-03-18
Citations: 0

Abstract

Pathological diagnosis helps save human lives, but diagnostic models are annotation-hungry and pathological images are notably expensive to annotate. Contrastive learning is a promising solution that relies only on unlabeled training data to generate informative representations. However, most current contrastive learning methods have two issues: (1) positive samples produced through random augmentation are insufficiently challenging, and (2) the false-negative-pair problem caused by negative sampling bias. To alleviate these issues, we propose a novel contrastive learning method called Cluster-Aware Adversarial Contrastive Learning (CA2CL). Specifically, a mixed data augmentation technique is introduced to learn more transferable representations by generating more discriminative sample pairs. Furthermore, to mitigate the effect of inherent false negative pairs, we adopt a cluster-aware loss that identifies similarities between instances and incorporates them into the contrastive learning process. Finally, we generate challenging contrastive data pairs by adversarial learning and adversarially learn robust representations in the representation space without labeled training data, aiming to maximize the similarity between each augmented sample and its related adversarial sample. Our proposed CA2CL is evaluated on two public datasets, NCT-CRC-HE and PCam, for the fine-tuning and linear evaluation tasks, and on two other public datasets, GlaS and CARG, for the detection and segmentation tasks, respectively. Extensive experimental results demonstrate the superior performance of our method over several self-supervised learning (SSL) methods and ImageNet pretraining, particularly in scenarios with limited data availability, on all four tasks. The code and the pre-trained weights are available at https://github.com/junjianli106/CA2CL.
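The abstract does not give the exact form of the cluster-aware loss, but its stated goal, using instance similarities (clusters) to suppress false negative pairs, can be illustrated with a minimal sketch. The snippet below is an assumption-based illustration, not the paper's implementation: it computes an InfoNCE-style contrastive loss over two augmented views and masks out negatives that fall in the same cluster as the anchor (the likely false negatives), keeping only the true positive pair from that cluster. The function name, the use of NumPy, and the specific masking scheme are all choices made here for clarity.

```python
import numpy as np

def cluster_aware_info_nce(z1, z2, clusters, temperature=0.5):
    """InfoNCE-style contrastive loss in which negatives sharing the
    anchor's cluster are masked out, so likely false-negative pairs do
    not push similar instances apart (a sketch of the cluster-aware idea).

    z1, z2   : (N, D) L2-normalised embeddings of two augmented views
    clusters : (N,) cluster assignment of each instance (e.g. from k-means)
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)           # (2N, D) both views stacked
    sim = z @ z.T / temperature                    # pairwise cosine similarities
    labels = np.concatenate([clusters, clusters])  # cluster id per row of z

    # positive index for each anchor: the other augmented view of the same instance
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # keep only cross-cluster negatives; re-admit the true positive, drop self
    mask = labels[:, None] != labels[None, :]
    mask[np.arange(2 * n), pos] = True
    mask[np.arange(2 * n), np.arange(2 * n)] = False

    sim = np.where(mask, sim, -np.inf)             # masked entries vanish in softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

A consequence of this masking, useful as a sanity check: if every instance is assigned to one cluster, each softmax reduces to the positive pair alone and the loss collapses to zero, since no negatives survive the mask.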

Source journal: IEEE Journal of Biomedical and Health Informatics (Computer Science, Information Systems; Computer Science, Interdisciplinary Applications)
CiteScore: 13.60 · Self-citation rate: 6.50% · Articles per year: 1151

Journal description: IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.