Robust prostate disease classification using transformers with discrete representations.

IF 2.3 · CAS Tier 3 (Medicine) · JCR Q3 (ENGINEERING, BIOMEDICAL)
Ainkaran Santhirasekaram, Mathias Winkler, Andrea Rockall, Ben Glocker
{"title":"Robust prostate disease classification using transformers with discrete representations.","authors":"Ainkaran Santhirasekaram, Mathias Winkler, Andrea Rockall, Ben Glocker","doi":"10.1007/s11548-024-03153-8","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Automated prostate disease classification on multi-parametric MRI has recently shown promising results with the use of convolutional neural networks (CNNs). The vision transformer (ViT) is a convolutional free architecture which only exploits the self-attention mechanism and has surpassed CNNs in some natural imaging classification tasks. However, these models are not very robust to textural shifts in the input space. In MRI, we often have to deal with textural shift arising from varying acquisition protocols. Here, we focus on the ability of models to generalise well to new magnet strengths for MRI.</p><p><strong>Method: </strong>We propose a new framework to improve the robustness of vision transformer-based models for disease classification by constructing discrete representations of the data using vector quantisation. We sample a subset of the discrete representations to form the input into a transformer-based model. We use cross-attention in our transformer model to combine the discrete representations of T2-weighted and apparent diffusion coefficient (ADC) images.</p><p><strong>Results: </strong>We analyse the robustness of our model by training on a 1.5 T scanner and test on a 3 T scanner and vice versa. Our approach achieves SOTA performance for classification of lesions on prostate MRI and outperforms various other CNN and transformer-based models in terms of robustness to domain shift and perturbations in the input space.</p><p><strong>Conclusion: </strong>We develop a method to improve the robustness of transformer-based disease classification of prostate lesions on MRI using discrete representations of the T2-weighted and ADC images.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"11-20"},"PeriodicalIF":2.3000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759462/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Assisted Radiology and Surgery","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11548-024-03153-8","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/5/13 0:00:00","PubModel":"Epub","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0

Abstract

Purpose: Automated prostate disease classification on multi-parametric MRI has recently shown promising results with the use of convolutional neural networks (CNNs). The vision transformer (ViT) is a convolution-free architecture that relies solely on the self-attention mechanism and has surpassed CNNs in some natural image classification tasks. However, these models are not very robust to textural shifts in the input space. In MRI, textural shift frequently arises from varying acquisition protocols. Here, we focus on the ability of models to generalise well to new magnetic field strengths in MRI.

Method: We propose a new framework to improve the robustness of vision transformer-based models for disease classification by constructing discrete representations of the data using vector quantisation. We sample a subset of the discrete representations to form the input to a transformer-based model. We use cross-attention in our transformer model to combine the discrete representations of the T2-weighted and apparent diffusion coefficient (ADC) images.
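
The abstract describes the method only at a high level. As a concrete illustration, the following is a minimal PyTorch sketch of the three ingredients named above: a vector-quantisation step that snaps continuous encoder features to a learned codebook, random sampling of a subset of the resulting discrete tokens, and cross-attention in which the T2-weighted tokens query the ADC tokens. All module names, dimensions, and the sampling strategy here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class VectorQuantiser(nn.Module):
    """Snap continuous features to their nearest entries in a learned codebook
    (illustrative sketch; codebook size and dimension are assumed)."""
    def __init__(self, num_codes: int = 512, dim: int = 256):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, tokens, dim) continuous features from an image encoder
        b, t, d = z.shape
        dists = torch.cdist(z.reshape(b * t, d), self.codebook.weight)
        indices = dists.argmin(dim=-1)              # nearest code per token
        z_q = self.codebook(indices).view(b, t, d)  # discrete representation
        # Straight-through estimator: quantised values go forward,
        # gradients flow back to the encoder features.
        return z + (z_q - z).detach()

def sample_tokens(tokens: torch.Tensor, k: int) -> torch.Tensor:
    """Keep a random subset of k discrete tokens as the transformer input."""
    idx = torch.randperm(tokens.shape[1], device=tokens.device)[:k]
    return tokens[:, idx]

class CrossAttentionFusion(nn.Module):
    """Combine the two modalities: T2-weighted tokens query the ADC tokens."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, t2_tokens: torch.Tensor, adc_tokens: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(t2_tokens, adc_tokens, adc_tokens)
        return fused

# Illustrative usage with dummy feature maps flattened to token sequences.
vq, fuse = VectorQuantiser(), CrossAttentionFusion()
t2 = sample_tokens(vq(torch.randn(2, 196, 256)), k=128)
adc = sample_tokens(vq(torch.randn(2, 196, 256)), k=128)
fused = fuse(t2, adc)  # (2, 128, 256) tokens for a downstream classifier head
```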

Results: We analyse the robustness of our model by training on data from a 1.5 T scanner and testing on data from a 3 T scanner, and vice versa. Our approach achieves state-of-the-art (SOTA) performance for the classification of lesions on prostate MRI and outperforms various other CNN- and transformer-based models in terms of robustness to domain shift and to perturbations in the input space.

Conclusion: We develop a method that improves the robustness of transformer-based classification of prostate lesions on MRI by using discrete representations of the T2-weighted and ADC images.


Source journal:
International Journal of Computer Assisted Radiology and Surgery (ENGINEERING, BIOMEDICAL; RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 5.90
Self-citation rate: 6.70%
Articles published: 243
Review time: 6-12 weeks
Journal description: The International Journal for Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.