C2HFusion: Clinical context-driven hierarchical fusion of multimodal data for personalized and quantitative prognostic assessment in pancreatic cancer

IF 11.8 · CAS Region 1 (Medicine) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Medical Image Analysis, vol. 109, Article 103937. Pub Date: 2026-03-01 (Epub: 2026-01-11). DOI: 10.1016/j.media.2026.103937
Bolun Zeng , Yaolin Xu , Peng Wang , Tianyu Lu , Zongyu Xie , Mengsu Zeng , Jianjun Zhou , Liang Liu , Haitao Sun , Xiaojun Chen
Citations: 0

Abstract

Pancreatic ductal adenocarcinoma (PDAC) is a highly aggressive malignancy. Accurate prognostic modeling enables reliable risk stratification to identify patients most likely to benefit from adjuvant therapy, thereby facilitating individualized clinical management and potentially improving patient outcomes. Although recent deep learning approaches have shown promise in this area, their effectiveness is often constrained by fusion strategies that fail to fully capture the hierarchical and complementary information across heterogeneous clinical modalities. To address these limitations, we propose C2HFusion, a novel fusion framework inspired by clinical decision-making for personalized prognostic risk assessment. C2HFusion is unique in that it integrates multimodal data across multiple representational levels and structural forms. At the imaging level, it extracts and aggregates tumor-level features from multi-sequence MRI using cross-attention, effectively capturing complementary imaging patterns. At the patient level, it encodes structured data (e.g., laboratory results, demographics) and unstructured data (e.g., radiology reports) as contextual priors, which are then fused with imaging representations through a novel feature modulation mechanism. To further enhance this cross-level integration, a scalable Mixture-of-Clinical-Experts (MoCE) module dynamically routes different modalities through specialized branches and adaptively optimizes feature fusion for more robust multimodal modeling. Validation on multi-center real-world datasets covering 681 PDAC patients shows that C2HFusion consistently outperforms state-of-the-art methods in overall survival prediction, achieving over a 5% improvement in C-index. These results highlight its potential to improve prognostic accuracy and support more informed, personalized clinical decision-making.
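The fusion pipeline the abstract describes — cross-attention over multi-sequence MRI features, modulation of imaging representations by clinical context, and gated routing through expert branches — can be illustrated with a minimal NumPy sketch. All function names, dimensions, and the FiLM-style scale-and-shift modulation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys, values):
    """Aggregate complementary features from another MRI sequence into the query."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)          # (n_query, n_key)
    weights = softmax(scores, axis=-1)
    return weights @ values                        # (n_query, d)

def modulate(img_feat, context, w_gamma, w_beta):
    """FiLM-style feature modulation: clinical context scales and shifts
    the imaging representation (a stand-in for the paper's mechanism)."""
    gamma = context @ w_gamma                      # (d,)
    beta = context @ w_beta                        # (d,)
    return gamma * img_feat + beta

def moce_fuse(feat, expert_weights, gate_weights):
    """Mixture-of-experts routing: softmax gates mix expert-branch outputs."""
    gates = softmax(feat @ gate_weights)           # (n_experts,)
    outputs = np.stack([feat @ w for w in expert_weights])  # (n_experts, d)
    return gates @ outputs, gates

d = 16
seq_a = rng.normal(size=(8, d))                    # token features, MRI sequence A
seq_b = rng.normal(size=(8, d))                    # token features, MRI sequence B
imaging = cross_attention(seq_a, seq_b, seq_b).mean(axis=0)  # tumor-level vector

context = rng.normal(size=6)                       # encoded labs/demographics/report
w_gamma = rng.normal(size=(6, d))
w_beta = rng.normal(size=(6, d))
patient = modulate(imaging, context, w_gamma, w_beta)

experts = [rng.normal(size=(d, d)) for _ in range(3)]
gate_w = rng.normal(size=(d, 3))
fused, gates = moce_fuse(patient, experts, gate_w)
```

The sketch mirrors the two-level structure: imaging features are aggregated first, then conditioned on patient-level context before expert routing; the softmax gates always sum to one, so the mixture stays a convex combination of expert outputs.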
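The reported gain is measured by the concordance index (C-index), the standard ranking metric for survival models. A minimal pure-Python version of Harrell's C-index with right-censoring, for reference (toy data, not from the paper):

```python
def c_index(times, events, risks):
    """Harrell's concordance index.
    times:  survival or censoring times
    events: 1 if the event (death) was observed, 0 if censored
    risks:  model risk scores (higher = worse predicted prognosis)
    A pair (i, j) is comparable when the patient with the shorter
    time had an observed event; it is concordant when that patient
    also received the higher risk score."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1        # higher risk died earlier
                elif risks[i] == risks[j]:
                    concordant += 0.5      # tied scores count half
    return concordant / comparable

# Perfectly ranked toy cohort (third patient censored) -> 1.0
print(c_index([2, 4, 6], [1, 1, 0], [0.9, 0.5, 0.1]))  # prints 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so a 5-point improvement over state-of-the-art baselines is a substantial margin on this scale.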
Source journal
Medical Image Analysis (Engineering & Technology — Biomedical Engineering)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles published: 309
Review time: 6.6 months
Journal description: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.