UniSAL: Unified Semi-supervised Active Learning for histopathological image classification

IF 10.7 · CAS Tier 1 (Medicine) · Q1 (Computer Science, Artificial Intelligence)
Lanfeng Zhong, Kun Qian, Xin Liao, Zongyao Huang, Yang Liu, Shaoting Zhang, Guotai Wang
Journal: Medical Image Analysis, Volume 102, Article 103542
DOI: 10.1016/j.media.2025.103542
Published: 2025-03-12 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1361841525000891
Citations: 0

Abstract

Histopathological image classification using deep learning is crucial for accurate and efficient cancer diagnosis. However, annotating large numbers of histopathological images for training is costly and time-consuming, leading to a scarcity of labeled data for training deep neural networks. To reduce human effort and improve annotation efficiency, we propose a Unified Semi-supervised Active Learning framework (UniSAL) that effectively selects informative and representative samples for annotation. First, unlike most existing active learning methods that train only on labeled samples in each round, dual-view high-confidence pseudo training is proposed to utilize both labeled and unlabeled images to train a model for selecting query samples: two networks operating on differently augmented versions of an input image provide diverse pseudo labels for each other, and pseudo label-guided class-wise contrastive learning is introduced to obtain better feature representations for effective sample selection. Second, based on the trained model at each round, we design a novel uncertain and representative sample selection strategy. It contains a Disagreement-aware Uncertainty Selector (DUS) to select informative uncertain samples with inconsistent predictions between the two networks, and a Compact Selector (CS) to remove redundancy among selected samples. We extensively evaluate our method on three public pathological image classification datasets, i.e., the CRC5000, Chaoyang and CRC100K datasets, and the results demonstrate that our UniSAL significantly surpasses several state-of-the-art active learning methods and reduces the annotation cost to around 10% while achieving performance comparable to full annotation. Code is available at https://github.com/HiLab-git/UniSAL.
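The dual-view high-confidence pseudo training described in the abstract can be sketched as follows. This is a minimal illustration of cross pseudo-labeling between two networks, not the authors' released code; the 0.95 confidence threshold is an assumption:

```python
import numpy as np

def cross_pseudo_labels(probs_a, probs_b, threshold=0.95):
    """Exchange high-confidence pseudo labels between two networks.

    probs_a, probs_b: (N, C) softmax outputs of networks A and B on two
    differently augmented views of the same unlabeled batch. Network A's
    confident predictions supervise B, and vice versa; low-confidence
    samples are masked out of the pseudo-label loss.
    """
    labels_a, conf_a = probs_a.argmax(axis=1), probs_a.max(axis=1)
    labels_b, conf_b = probs_b.argmax(axis=1), probs_b.max(axis=1)
    # (targets for B, mask for B, targets for A, mask for A)
    return labels_a, conf_a >= threshold, labels_b, conf_b >= threshold

probs_a = np.array([[0.97, 0.02, 0.01],   # confident: supervises B
                    [0.40, 0.35, 0.25]])  # unconfident: masked out
probs_b = np.array([[0.10, 0.85, 0.05],
                    [0.96, 0.02, 0.02]])
tgt_b, mask_b, tgt_a, mask_a = cross_pseudo_labels(probs_a, probs_b)
```

Because each network sees a different augmented view, the exchanged labels stay diverse rather than collapsing to one model's errors, which is the motivation for the dual-view design.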
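The two-stage query selection (DUS followed by CS) can be sketched in the same spirit. Here, farthest-point sampling in feature space stands in for the paper's redundancy-removal step, so treat this as an illustrative assumption rather than the published algorithm:

```python
import numpy as np

def select_queries(probs_a, probs_b, feats, budget):
    """DUS + CS sketch: pick informative, non-redundant query samples.

    DUS: keep samples on which the two networks disagree (different
    argmax classes), i.e. the uncertain, informative ones.
    CS: if there are more candidates than the budget, greedily keep a
    compact subset via farthest-point sampling over features `feats`.
    """
    disagree = np.flatnonzero(probs_a.argmax(axis=1) != probs_b.argmax(axis=1))
    if len(disagree) <= budget:
        return disagree
    cand = feats[disagree]
    chosen = [0]  # seed with the first disagreeing sample
    dists = np.linalg.norm(cand - cand[0], axis=1)
    while len(chosen) < budget:
        nxt = int(dists.argmax())  # farthest from all current picks
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(cand - cand[nxt], axis=1))
    return disagree[np.array(chosen)]
```

Disagreement between the two networks serves as the uncertainty signal, so no extra uncertainty head is needed; the compact step then spreads the budget across feature space instead of querying near-duplicate patches.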
Source journal: Medical Image Analysis (Engineering, Biomedical)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months
Journal description: Medical Image Analysis serves as a platform for sharing new research findings in medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches using biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.