Attention mechanism-based multi-parametric MRI ensemble model for predicting tumor budding grade in rectal cancer patients

IF 2.2 · CAS Region 3 (Medicine) · JCR Q2 · Radiology, Nuclear Medicine & Medical Imaging
Jianye Jia, Yue Kang, Jiahao Wang, Fan Bai, Lei Han, Yantao Niu
Abdominal Radiology, vol. 50, no. 10, pp. 4483–4494. Published 2025-04-01. DOI: 10.1007/s00261-025-04886-z

Abstract

Purpose

To develop and validate a deep learning-based feature ensemble model using multiparametric magnetic resonance imaging (MRI) for predicting tumor budding (TB) grading in patients with rectal cancer (RC).

Methods

A retrospective cohort of 458 patients with pathologically confirmed RC from three institutions was included. Of these, 355 patients from Center 1 were split at a 7:3 ratio into a training cohort (n = 248) and an internal validation cohort (n = 107). An additional 103 patients from the two other centers served as the external validation cohort. Deep learning models based on the CrossFormer architecture were constructed for T2-weighted imaging (T2WI) and diffusion-weighted imaging (DWI), and deep learning features were extracted from each. A feature ensemble module based on the Transformer attention mechanism was then used to capture spatial interactions between the imaging sequences, yielding a multiparametric ensemble model. The predictive performance of each model was evaluated using the area under the curve (AUC), calibration curves, and decision curve analysis (DCA).
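The paper does not publish its implementation. As a minimal sketch of the general idea behind the ensemble module — letting attention model interactions between per-sequence feature vectors before pooling them into one joint representation — the following uses plain scaled dot-product self-attention over a two-token sequence. All names, dimensions, and the pooling choice are illustrative assumptions, not details from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(t2_feat, dwi_feat):
    """Fuse two per-sequence feature vectors (e.g. from the T2WI and DWI
    backbones) via scaled dot-product self-attention over a 2-token
    sequence, then mean-pool into one joint representation.
    Illustrative sketch only; not the paper's module."""
    tokens = np.stack([t2_feat, dwi_feat])     # (2, D): one token per sequence
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)    # (2, 2) cross-sequence affinities
    weights = softmax(scores, axis=-1)         # rows sum to 1
    fused = weights @ tokens                   # attention-weighted mixture
    return fused.mean(axis=0)                  # pooled joint feature, shape (D,)

# Usage with random stand-ins for the extracted deep learning features
rng = np.random.default_rng(0)
joint = attention_fuse(rng.normal(size=128), rng.normal(size=128))
```

In a real pipeline the pooled vector would feed a classification head for the TB grade; a trained implementation would also use learned query/key/value projections and multiple heads rather than raw features.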

Results

The T2WI-based deep learning model achieved AUCs of 0.789 (95% CI: 0.680–0.900) and 0.720 (95% CI: 0.591–0.849) in the internal and external validation cohorts, respectively. The DWI-based model achieved AUCs of 0.806 (95% CI: 0.705–0.908) and 0.772 (95% CI: 0.657–0.887), respectively. The multiparametric ensemble model performed best, with AUCs of 0.868 (95% CI: 0.775–0.960) in the internal validation cohort and 0.839 (95% CI: 0.743–0.935) in the external validation cohort, although the DeLong test showed no statistically significant differences in AUC among the models in either cohort (P > 0.05). DCA demonstrated that, across the 10–80% threshold probability range, the ensemble model provided a higher clinical net benefit than the single-sequence models.
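The decision curve analysis behind that last result rests on the standard net-benefit formula, NB(pt) = TP/n − (FP/n) · pt/(1 − pt), evaluated across a range of threshold probabilities pt. A minimal sketch, using synthetic labels and scores rather than the study's data:

```python
import numpy as np

def net_benefit(y_true, y_score, threshold):
    """Clinical net benefit at decision threshold pt:
    NB = TP/n - (FP/n) * pt / (1 - pt)."""
    y_true = np.asarray(y_true)
    pred = np.asarray(y_score) >= threshold     # treat if score >= pt
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1 - threshold)

# Synthetic example: scores loosely correlated with the labels
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
scores = np.clip(0.5 * y + rng.normal(0.25, 0.2, size=200), 0.0, 1.0)

# Evaluate over the 10-80% threshold range reported above
nbs = [net_benefit(y, scores, t) for t in np.arange(0.1, 0.81, 0.1)]
```

Comparing such curves for each model against the "treat all" and "treat none" strategies is what the DCA in the paper summarizes.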

Conclusion

Compared with single-sequence deep learning models, the attention-based multiparametric MRI ensemble model enables more effective individualized prediction of TB grade in RC patients. It offers guidance for treatment selection and prognostic evaluation, and provides imaging-based support for personalizing postoperative follow-up.

Graphical abstract

[Graphical abstract image]
