Deep-learning-based 3D multi-view multi-parametric MRI fusion model for preoperative T-staging of rectal cancer

Siyu Liu, Peng Zheng, Haoran Wang, Qingyang Feng, Jiayue Zhao, Manning Wang, Chenxi Zhang, Jianmin Xu

Biomedical Signal Processing and Control, Volume 113, Article 108787. DOI: 10.1016/j.bspc.2025.108787. Published 2025-10-14.
Citations: 0

Abstract
Deep learning (DL) approaches leveraging multi-parametric magnetic resonance imaging (mpMRI) hold significant promise for the preoperative assessment of rectal cancer T-stage. In this study, we investigate whether an mpMRI fusion-based DL model can effectively evaluate the T-stage of rectal cancer. To enable robust development and comprehensive evaluation of an automated T-staging system, we assembled the largest mpMRI cohort to date, comprising 756 patients from three institutions with nine distinct imaging sequences. We introduce a multi-view multi-parametric (MVMP) MRI fusion model for this purpose. The strategy for effective sequence fusion involves grouping the MRI sequences by scanning direction and integrating the features of each group with an attention module. In our evaluations, the MVMP model achieves performance comparable to that of two radiologists in both the internal test cohort (AUC: 0.84 vs. 0.79 vs. 0.79 for the model and the two radiologists, respectively) and the external test cohort (AUC: 0.83 vs. 0.81 vs. 0.75). Moreover, it outperforms two competing DL models in both the internal (AUC: 0.840 vs. 0.766 vs. 0.787) and external test cohorts (AUC: 0.826 vs. 0.792 vs. 0.821). The validity of our design is further substantiated through ablation studies on backbone networks, view-specific branches, and individual sequences. In summary, our DL model based on mpMRI and multi-view fusion accurately evaluates the preoperative T-stage of rectal cancer and shows great promise as a valuable tool for clinical assessment.
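The abstract does not specify the exact form of the attention module used to integrate the per-view feature groups. Purely as an illustration of the general idea (grouping sequences by scanning direction, then taking an attention-weighted combination of each group's features), the following sketch uses a simple dot-product attention over toy feature vectors; all names, dimensions, and values are hypothetical and are not taken from the paper:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_views(view_features, query):
    """Fuse per-view feature vectors with dot-product attention.

    view_features: dict mapping a view name (e.g. a scanning direction)
        to the feature vector produced by that view's branch.
    query: a query vector (in a trained model this would be learned).
    Returns the attention-weighted sum of the view features, plus the
    per-view attention weights.
    """
    names = list(view_features)
    # One score per view: dot product between the query and the view's features.
    scores = [sum(q * f for q, f in zip(query, view_features[n])) for n in names]
    weights = softmax(scores)
    dim = len(query)
    # Weighted sum of the view feature vectors, dimension by dimension.
    fused = [sum(w * view_features[n][i] for w, n in zip(weights, names))
             for i in range(dim)]
    return fused, dict(zip(names, weights))

# Toy example: three groups of sequences, one per scanning direction,
# each already reduced to a 4-dimensional feature vector by its branch.
features = {
    "axial":    [0.9, 0.1, 0.4, 0.2],
    "sagittal": [0.2, 0.8, 0.3, 0.1],
    "coronal":  [0.1, 0.2, 0.7, 0.6],
}
fused, weights = fuse_views(features, query=[1.0, 0.5, 0.5, 0.0])
```

In a real model the query and the per-branch feature extractors would be learned end to end (e.g. with CNN backbones per view); this sketch only shows how attention weights let the fusion step emphasize the most informative scanning direction.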
Journal overview:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of engineering and clinical science. The journal's scope includes relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.