Evaluation of a task specific self-supervised learning framework in digital pathology relative to transfer learning approaches and existing foundation models.

Impact Factor: 7.1 · CAS Tier 1 (Medicine) · JCR Q1 (Pathology)
Tawsifur Rahman, Alexander S Baras, Rama Chellappa
{"title":"Evaluation of a task specific self-supervised learning framework in digital pathology relative to transfer learning approaches and existing foundation models.","authors":"Tawsifur Rahman, Alexander S Baras, Rama Chellappa","doi":"10.1016/j.modpat.2024.100636","DOIUrl":null,"url":null,"abstract":"<p><p>An integral stage in typical digital pathology workflows involves deriving specific features from tiles extracted from a tessellated whole slide image. Notably, various computer vision neural network architectures, particularly the ImageNet pre-trained, have been extensively used in this domain. This study critically analyzes multiple strategies for encoding tiles to understand the extent of transfer learning and identify the most effective approach. The study categorizes neural network performance into three weight initialization methods: random, ImageNet-based, and self-supervised learning. Additionally, we propose a framework based on task-specific self-supervised learning (TS-SSL) which introduces a shallow feature extraction method, employing a spatial-channel attention block to glean distinctive features optimized for histopathology intricacies. Across two different downstream classification tasks (patch classification, and weakly supervised whole slide image classification) with diverse classification datasets, including Colorectal cancer histology, Patch Camelyon, PANDA, TCGA and CIFAR-10, our task specific self-supervised encoding approach consistently outperforms other CNN-based encoders. The better performances highlight the potential of task-specific-attention based self-supervised training in tailoring feature extraction for histopathology, indicating a shift from utilizing pretrained models originating outside the histopathology domain. Our study supports the idea that task-specific self-supervised learning allows domain-specific feature extraction, encouraging a more focused analysis.</p>","PeriodicalId":18706,"journal":{"name":"Modern Pathology","volume":" ","pages":"100636"},"PeriodicalIF":7.1000,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Modern Pathology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1016/j.modpat.2024.100636","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PATHOLOGY","Score":null,"Total":0}
Citations: 0

Abstract

An integral stage in typical digital pathology workflows involves deriving specific features from tiles extracted from a tessellated whole-slide image. Notably, various computer vision neural network architectures, particularly ImageNet-pretrained models, have been used extensively in this domain. This study critically analyzes multiple strategies for encoding tiles to understand the extent of transfer learning and to identify the most effective approach. The study categorizes neural network performance across three weight initialization methods: random, ImageNet-based, and self-supervised learning. Additionally, we propose a framework based on task-specific self-supervised learning (TS-SSL) that introduces a shallow feature extraction method, employing a spatial-channel attention block to glean distinctive features optimized for the intricacies of histopathology. Across two different downstream classification tasks (patch classification and weakly supervised whole-slide image classification) with diverse datasets, including colorectal cancer histology, PatchCamelyon, PANDA, TCGA, and CIFAR-10, our task-specific self-supervised encoding approach consistently outperforms other CNN-based encoders. These performance gains highlight the potential of task-specific, attention-based self-supervised training to tailor feature extraction for histopathology, indicating a shift away from pretrained models originating outside the histopathology domain. Our study supports the idea that task-specific self-supervised learning enables domain-specific feature extraction, encouraging a more focused analysis.
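
The paper's exact architecture is not reproduced in this record; as a rough illustration of the idea described above (a shallow tile encoder with a spatial-channel attention block), the following PyTorch sketch pairs a CBAM-style channel-attention and spatial-attention module with a small convolutional stem. All class names, layer sizes, and the embedding dimension are illustrative assumptions, not the configuration reported in the study.

```python
# Minimal sketch of a shallow tile encoder with spatial-channel attention.
# CBAM-style attention is assumed here; the paper's own block may differ.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Re-weights feature channels using global average- and max-pooled descriptors."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))   # (B, C)
        mx = self.mlp(x.amax(dim=(2, 3)))    # (B, C)
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale


class SpatialAttention(nn.Module):
    """Re-weights spatial locations using channel-pooled maps."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = x.mean(dim=1, keepdim=True)   # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)   # (B, 1, H, W)
        scale = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * scale


class ShallowTileEncoder(nn.Module):
    """Shallow convolutional encoder with spatial-channel attention for tile embeddings."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(inplace=True),
        )
        self.channel_attn = ChannelAttention(128)
        self.spatial_attn = SpatialAttention()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, embed_dim))

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        feats = self.stem(tiles)
        feats = self.spatial_attn(self.channel_attn(feats))
        return self.head(feats)   # one embedding per tile


if __name__ == "__main__":
    encoder = ShallowTileEncoder()
    tiles = torch.randn(4, 3, 224, 224)   # a batch of RGB tiles from a tessellated WSI
    print(encoder(tiles).shape)           # torch.Size([4, 256])
```

In a task-specific self-supervised setup, an encoder of this kind would typically be pretrained on unlabeled tiles from the target dataset (for example, with a contrastive or reconstruction pretext objective) before its embeddings are passed to a patch classifier or a weakly supervised slide-level aggregator; the specific pretext task and aggregation method used by TS-SSL are those described in the paper itself.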

Source journal: Modern Pathology (Medicine / Pathology)
CiteScore: 14.30
Self-citation rate: 2.70%
Articles per year: 174
Review time: 18 days
Journal description: Modern Pathology, an international journal under the ownership of the United States & Canadian Academy of Pathology (USCAP), serves as an authoritative platform for publishing top-tier clinical and translational research studies in pathology. Original manuscripts are the primary focus of Modern Pathology, complemented by impactful editorials, reviews, and practice guidelines covering all facets of precision diagnostics in human pathology. The journal's scope includes advancements in molecular diagnostics and genomic classifications of diseases, breakthroughs in immune-oncology, computational science, applied bioinformatics, and digital pathology.