A mutual reconstruction network model for few-shot classification of histological images: addressing interclass similarity and intraclass diversity.

IF 2.9 | CAS Tier 2 (Medicine) | Q2 (Radiology, Nuclear Medicine & Medical Imaging)
Quantitative Imaging in Medicine and Surgery · Pub Date: 2024-08-01 · Epub Date: 2024-07-25 · DOI: 10.21037/qims-24-253
Xiangbo Li, Yinghui Zhang, Fengxiang Ge
{"title":"用于组织学图像少镜头分类的相互重构网络模型:解决类间相似性和类内多样性问题。","authors":"Xiangbo Li, Yinghui Zhang, Fengxiang Ge","doi":"10.21037/qims-24-253","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The automated classification of histological images is crucial for the diagnosis of cancer. The limited availability of well-annotated datasets, especially for rare cancers, poses a significant challenge for deep learning methods due to the small number of relevant images. This has led to the development of few-shot learning approaches, which bear considerable clinical importance, as they are designed to overcome the challenges of data scarcity in deep learning for histological image classification. Traditional methods often ignore the challenges of intraclass diversity and interclass similarities in histological images. To address this, we propose a novel mutual reconstruction network model, aimed at meeting these challenges and improving the few-shot classification performance of histological images.</p><p><strong>Methods: </strong>The key to our approach is the extraction of subtle and discriminative features. We introduce a feature enhancement module (FEM) and a mutual reconstruction module to increase differences between classes while reducing variance within classes. First, we extract features of support and query images using a feature extractor. These features are then processed by the FEM, which uses a self-attention mechanism for self-reconstruction of features, enhancing the learning of detailed features. These enhanced features are then input into the mutual reconstruction module. This module uses enhanced support features to reconstruct enhanced query features and vice versa. The classification of query samples is based on weighted calculations of the distances between query features and reconstructed query features and between support features and reconstructed support features.</p><p><strong>Results: </strong>We extensively evaluated our model using a specially created few-shot histological image dataset. The results showed that in a 5-way 10-shot setup, our model achieved an impressive accuracy of 92.09%. This is a 23.59% improvement in accuracy compared to the model-agnostic meta-learning (MAML) method, which does not focus on fine-grained attributes. In the more challenging, 5-way 1-shot setting, our model also performed well, demonstrating a 18.52% improvement over the ProtoNet, which does not address this challenge. Additional ablation studies indicated the effectiveness and complementary nature of each module and confirmed our method's ability to parse small differences between classes and large variations within classes in histological images. These findings strongly support the superiority of our proposed method in the few-shot classification of histological images.</p><p><strong>Conclusions: </strong>The mutual reconstruction network provides outstanding performance in the few-shot classification of histological images, successfully overcoming the challenges of similarities between classes and diversity within classes. 
This marks a significant advancement in the automated classification of histological images.</p>","PeriodicalId":54267,"journal":{"name":"Quantitative Imaging in Medicine and Surgery","volume":null,"pages":null},"PeriodicalIF":2.9000,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11320516/pdf/","citationCount":"0","resultStr":"{\"title\":\"A mutual reconstruction network model for few-shot classification of histological images: addressing interclass similarity and intraclass diversity.\",\"authors\":\"Xiangbo Li, Yinghui Zhang, Fengxiang Ge\",\"doi\":\"10.21037/qims-24-253\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>The automated classification of histological images is crucial for the diagnosis of cancer. The limited availability of well-annotated datasets, especially for rare cancers, poses a significant challenge for deep learning methods due to the small number of relevant images. This has led to the development of few-shot learning approaches, which bear considerable clinical importance, as they are designed to overcome the challenges of data scarcity in deep learning for histological image classification. Traditional methods often ignore the challenges of intraclass diversity and interclass similarities in histological images. To address this, we propose a novel mutual reconstruction network model, aimed at meeting these challenges and improving the few-shot classification performance of histological images.</p><p><strong>Methods: </strong>The key to our approach is the extraction of subtle and discriminative features. We introduce a feature enhancement module (FEM) and a mutual reconstruction module to increase differences between classes while reducing variance within classes. First, we extract features of support and query images using a feature extractor. These features are then processed by the FEM, which uses a self-attention mechanism for self-reconstruction of features, enhancing the learning of detailed features. These enhanced features are then input into the mutual reconstruction module. This module uses enhanced support features to reconstruct enhanced query features and vice versa. The classification of query samples is based on weighted calculations of the distances between query features and reconstructed query features and between support features and reconstructed support features.</p><p><strong>Results: </strong>We extensively evaluated our model using a specially created few-shot histological image dataset. The results showed that in a 5-way 10-shot setup, our model achieved an impressive accuracy of 92.09%. This is a 23.59% improvement in accuracy compared to the model-agnostic meta-learning (MAML) method, which does not focus on fine-grained attributes. In the more challenging, 5-way 1-shot setting, our model also performed well, demonstrating a 18.52% improvement over the ProtoNet, which does not address this challenge. Additional ablation studies indicated the effectiveness and complementary nature of each module and confirmed our method's ability to parse small differences between classes and large variations within classes in histological images. 
These findings strongly support the superiority of our proposed method in the few-shot classification of histological images.</p><p><strong>Conclusions: </strong>The mutual reconstruction network provides outstanding performance in the few-shot classification of histological images, successfully overcoming the challenges of similarities between classes and diversity within classes. This marks a significant advancement in the automated classification of histological images.</p>\",\"PeriodicalId\":54267,\"journal\":{\"name\":\"Quantitative Imaging in Medicine and Surgery\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.9000,\"publicationDate\":\"2024-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11320516/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Quantitative Imaging in Medicine and Surgery\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.21037/qims-24-253\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/7/25 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q2\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quantitative Imaging in Medicine and Surgery","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.21037/qims-24-253","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/7/25 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Background: The automated classification of histological images is crucial for the diagnosis of cancer. The limited availability of well-annotated datasets, especially for rare cancers, poses a significant challenge for deep learning methods due to the small number of relevant images. This has led to the development of few-shot learning approaches, which bear considerable clinical importance, as they are designed to overcome the challenges of data scarcity in deep learning for histological image classification. Traditional methods often ignore the challenges of intraclass diversity and interclass similarities in histological images. To address this, we propose a novel mutual reconstruction network model, aimed at meeting these challenges and improving the few-shot classification performance of histological images.

Methods: The key to our approach is the extraction of subtle and discriminative features. We introduce a feature enhancement module (FEM) and a mutual reconstruction module to increase differences between classes while reducing variance within classes. First, we extract features of support and query images using a feature extractor. These features are then processed by the FEM, which uses a self-attention mechanism for self-reconstruction of features, enhancing the learning of detailed features. These enhanced features are then input into the mutual reconstruction module. This module uses enhanced support features to reconstruct enhanced query features and vice versa. The classification of query samples is based on weighted calculations of the distances between query features and reconstructed query features and between support features and reconstructed support features.
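
A minimal sketch of the two reconstruction steps described above, written in PyTorch. The dot-product attention formulation, the residual connection, the weighted-distance scoring, and all shapes are illustrative assumptions; the paper's exact layers and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F

def self_reconstruct(feats: torch.Tensor) -> torch.Tensor:
    """FEM-style self-reconstruction: each feature vector is re-expressed as a
    self-attention-weighted sum over the whole feature set (assumed dot-product
    attention). feats: (n, d)."""
    scale = feats.shape[-1] ** 0.5
    attn = F.softmax(feats @ feats.T / scale, dim=-1)  # (n, n) similarity weights
    return feats + attn @ feats  # residual connection is an assumption

def cross_reconstruct(targets: torch.Tensor, source: torch.Tensor) -> torch.Tensor:
    """Mutual reconstruction: rebuild each target vector from the other set,
    e.g. query features from support features or vice versa."""
    scale = targets.shape[-1] ** 0.5
    attn = F.softmax(targets @ source.T / scale, dim=-1)  # (n_t, n_s)
    return attn @ source  # (n_t, d) reconstructed targets

def classify_query(query: torch.Tensor, support_by_class: list, alpha: float = 0.5) -> int:
    """Score one enhanced query vector (d,) against each class's enhanced
    support set (k, d) using a weighted sum of the two reconstruction errors;
    alpha is an assumed weighting hyperparameter."""
    q = query.unsqueeze(0)  # (1, d)
    dists = []
    for sup in support_by_class:
        q_rec = cross_reconstruct(q, sup)   # query rebuilt from support
        s_rec = cross_reconstruct(sup, q)   # support rebuilt from query
        dists.append(alpha * F.mse_loss(q_rec, q) + (1 - alpha) * F.mse_loss(s_rec, sup))
    return int(torch.argmin(torch.stack(dists)))  # nearest class by weighted distance
```

Under these assumptions, `self_reconstruct(backbone(x))` would be applied to both support and query features, and `classify_query` called once per query image in an episode.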

Results: We extensively evaluated our model using a specially created few-shot histological image dataset. The results showed that in a 5-way 10-shot setup, our model achieved an impressive accuracy of 92.09%. This is a 23.59% improvement in accuracy compared to the model-agnostic meta-learning (MAML) method, which does not focus on fine-grained attributes. In the more challenging 5-way 1-shot setting, our model also performed well, demonstrating an 18.52% improvement over ProtoNet, which does not address this challenge. Additional ablation studies indicated the effectiveness and complementary nature of each module and confirmed our method's ability to parse small differences between classes and large variations within classes in histological images. These findings strongly support the superiority of our proposed method in the few-shot classification of histological images.
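
For readers unfamiliar with the evaluation protocol, the sketch below shows how an N-way K-shot episode such as the 5-way 10-shot setup is typically sampled; the dataset layout, function name, and query count are hypothetical, not taken from the paper.

```python
import random

def sample_episode(images_by_class: dict, n_way: int = 5, k_shot: int = 10,
                   n_query: int = 15):
    """Draw n_way classes, then k_shot support and n_query query images per
    class; reported accuracy is averaged over many such random episodes."""
    classes = random.sample(sorted(images_by_class), n_way)
    support, query = {}, {}
    for c in classes:
        imgs = random.sample(images_by_class[c], k_shot + n_query)
        support[c], query[c] = imgs[:k_shot], imgs[k_shot:]
    return support, query
```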

Conclusions: The mutual reconstruction network provides outstanding performance in the few-shot classification of histological images, successfully overcoming the challenges of similarities between classes and diversity within classes. This marks a significant advancement in the automated classification of histological images.

Source journal
Quantitative Imaging in Medicine and Surgery (Medicine: Radiology, Nuclear Medicine and Imaging)
CiteScore: 4.20
Self-citation rate: 17.90%
Publication volume: 252