Ying Xu, Yi Shi, Honglei Liu, Ao Li, Anli Zhang, Minghui Wang
{"title":"DRLSurv: Disentangled Representation Learning for Cancer Survival Prediction by Mining Multimodal Consistency and Complementarity.","authors":"Ying Xu, Yi Shi, Honglei Liu, Ao Li, Anli Zhang, Minghui Wang","doi":"10.1109/JBHI.2025.3578859","DOIUrl":null,"url":null,"abstract":"<p><p>Accurate cancer survival prediction is crucial in devising optimal treatment plans and offering individualized care to improve clinical outcomes. Recent researches confirm that integrating heterogenous cancer data such as histopathological images and genomic data, can enhance our understanding of cancer progression and provides a multimodal perspective on patient survival chances. However, existing methods often over-look the fundamental aspects of multimodal data, i.e., consistency and complementarity, which in consequence significantly hinder advancements in cancer survival prediction. To address this issue, we represent DRLSurv, a novel multimodal deep learning method that leverages disentangled representation learning for precise cancer survival prediction. Through dedicated deep encoding networks, DRLSurv decomposes each modality into modality-invariant and modality-specific representations, which are mapped to common and unique feature subspaces for simultaneously mining the distinct aspects of cancer multimodal data. Moreover, our method innovatively introduces a subspace-based proximity contrastive loss and re-disentanglement loss, thus ensuring the successful decomposition of consistent and complementary information while maintaining the multimodal fidelity during the learning of disentangled representations. Both quantitative analyses and visual assessments on different datasets validate the superiority of DRLSurv over existing survival prediction approaches, demonstrating its powerful capability to exploit enriched survival-related information from cancer multimodal data. 
Therefore, DRLSurv not only offers a unified and comprehensive deep learning framework for advancing multimodal survival predictions, but also provides valuable insights for cancer prognosis and survival analysis.</p>","PeriodicalId":13073,"journal":{"name":"IEEE Journal of Biomedical and Health Informatics","volume":"PP ","pages":""},"PeriodicalIF":6.8000,"publicationDate":"2025-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Journal of Biomedical and Health Informatics","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/JBHI.2025.3578859","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0
Abstract
Accurate cancer survival prediction is crucial for devising optimal treatment plans and offering individualized care to improve clinical outcomes. Recent research confirms that integrating heterogeneous cancer data, such as histopathological images and genomic data, can enhance our understanding of cancer progression and provide a multimodal perspective on patient survival chances. However, existing methods often overlook the fundamental aspects of multimodal data, i.e., consistency and complementarity, which consequently hinders progress in cancer survival prediction. To address this issue, we present DRLSurv, a novel multimodal deep learning method that leverages disentangled representation learning for precise cancer survival prediction. Through dedicated deep encoding networks, DRLSurv decomposes each modality into modality-invariant and modality-specific representations, which are mapped to common and unique feature subspaces to simultaneously mine the distinct aspects of cancer multimodal data. Moreover, our method introduces a subspace-based proximity contrastive loss and a re-disentanglement loss, ensuring the successful decomposition of consistent and complementary information while maintaining multimodal fidelity during the learning of disentangled representations. Both quantitative analyses and visual assessments on different datasets validate the superiority of DRLSurv over existing survival prediction approaches, demonstrating its powerful capability to exploit enriched survival-related information from cancer multimodal data. Therefore, DRLSurv not only offers a unified and comprehensive deep learning framework for advancing multimodal survival prediction, but also provides valuable insights for cancer prognosis and survival analysis.
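The disentanglement idea described in the abstract can be illustrated with a toy sketch (numpy only; all dimensions, projection maps, and loss forms below are my own assumptions for illustration, not the paper's actual implementation): each modality's feature vector is projected into a common ("modality-invariant") subspace and a unique ("modality-specific") subspace, a proximity-style term pulls the common projections of the two modalities together, and an orthogonality-style penalty stands in for keeping common and unique parts distinct.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_sub = 32, 8                    # input and subspace dimensions (assumed)
x_img  = rng.normal(size=d_in)         # toy histopathology-image feature
x_gene = rng.normal(size=d_in)         # toy genomic feature

# Per-modality "encoders": one linear map to the common subspace,
# one to the unique subspace (the paper uses deep encoding networks).
W_common_img,  W_unique_img  = rng.normal(size=(d_sub, d_in)), rng.normal(size=(d_sub, d_in))
W_common_gene, W_unique_gene = rng.normal(size=(d_sub, d_in)), rng.normal(size=(d_sub, d_in))

c_img,  u_img  = W_common_img  @ x_img,  W_unique_img  @ x_img
c_gene, u_gene = W_common_gene @ x_gene, W_unique_gene @ x_gene

# Consistency: pull the modality-invariant projections together
# (a squared-distance stand-in for the proximity contrastive loss).
loss_consistency = np.sum((c_img - c_gene) ** 2)

# Complementarity: discourage overlap between common and unique parts
# (a simple orthogonality penalty, not the paper's re-disentanglement loss).
loss_complement = (c_img @ u_img) ** 2 + (c_gene @ u_gene) ** 2

total_loss = loss_consistency + 0.1 * loss_complement
print(float(total_loss))
```

In a trained model the projection weights would be learned by minimizing such losses jointly with a survival objective; here they are random and the losses are only computed once to show the shapes and terms involved.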
About the journal:
IEEE Journal of Biomedical and Health Informatics publishes original papers presenting recent advances where information and communication technologies intersect with health, healthcare, life sciences, and biomedicine. Topics include acquisition, transmission, storage, retrieval, management, and analysis of biomedical and health information. The journal covers applications of information technologies in healthcare, patient monitoring, preventive care, early disease diagnosis, therapy discovery, and personalized treatment protocols. It explores electronic medical and health records, clinical information systems, decision support systems, medical and biological imaging informatics, wearable systems, body area/sensor networks, and more. Integration-related topics like interoperability, evidence-based medicine, and secure patient data are also addressed.