HSFSurv: A hybrid supervision framework at individual and feature levels for multimodal cancer survival analysis

Bangkang Fu, Junjie He, Xiaoli Zhang, Yunsong Peng, Zhuxu Zhang, Qi Tang, Xinfeng Liu, Ying Cao, Rongpin Wang

Medical Image Analysis, Volume 107, Article 103810 (Q1, Computer Science, Artificial Intelligence; IF 11.8). Published 2025-09-24. DOI: 10.1016/j.media.2025.103810
Citations: 0
Abstract
Multimodal data play a significant role in survival analysis, with pathological images providing morphological information about tumors and genomic data offering molecular insights. Leveraging multimodal data for survival analysis has become a prominent research topic. However, the heterogeneity of data poses significant challenges to multimodal integration. While existing methods consider interactions among features from different modalities, the heterogeneity of feature spaces often hinders performance in survival analysis. In this paper, we propose a hybrid supervised framework for survival analysis (HSFSurv) based on multimodal feature decomposition. This framework utilizes a multimodal feature decomposition module to partition features into highly correlated and modality-specific components, facilitating targeted feature fusion in subsequent steps. To alleviate feature space heterogeneity, we design an individual-level uncertainty minimization (UMI) module to ensure consistency in prediction outcomes. Additionally, we develop a feature-level multimodal cohort contrastive learning (MCF) module to enforce consistency across features. Moreover, a probabilistic decay detection module with a supervisory signal is introduced to guide the contrastive learning process. These modules are jointly trained to project multimodal features into a shared latent vector space. Finally, we fine-tune the framework for survival analysis tasks to achieve prognostic predictions. Experimental results on five cancer datasets demonstrate the state-of-the-art performance of the proposed multimodal fusion framework in survival analysis.
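The core idea of the decomposition module, as described above, is to split each modality's feature vector into a highly correlated (shared) component and a modality-specific component, then enforce consistency between the shared components across modalities. The following is a minimal numpy sketch of that idea, not the paper's implementation: the shared subspace basis `w_shared`, the `decompose` helper, and the cosine-based `alignment_loss` are all hypothetical illustrations of how such a split and a feature-level consistency term could look.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(x, w_shared):
    """Split a feature vector into a shared component (its projection onto a
    common subspace spanned by the columns of w_shared) and a
    modality-specific residual orthogonal to that subspace."""
    shared = w_shared @ (w_shared.T @ x)  # orthogonal projection onto span(w_shared)
    specific = x - shared
    return shared, specific

def alignment_loss(a, b, eps=1e-8):
    """Cosine distance between the shared components of two modalities:
    0 when perfectly aligned, 2 when opposed."""
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - cos

# Toy example: a 16-d pathology feature and a 16-d genomic feature,
# with an orthonormal basis for a hypothetical 8-d shared subspace.
w, _ = np.linalg.qr(rng.normal(size=(16, 8)))  # columns are orthonormal
path_feat = rng.normal(size=16)
gene_feat = rng.normal(size=16)

p_shared, p_specific = decompose(path_feat, w)
g_shared, g_specific = decompose(gene_feat, w)

# The decomposition is exact: shared + specific reconstructs the input.
assert np.allclose(p_shared + p_specific, path_feat)

# A consistency objective would push this loss toward zero during training.
print(alignment_loss(p_shared, g_shared))
```

In the actual framework the projection would be learned end-to-end and the consistency term would operate over cohorts of patients rather than single vectors; this sketch only shows the shape of the shared/specific split and the alignment signal.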
About the journal:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.