Saisai Ding, Linjin Li, Ge Jin, Jun Wang, Shihui Ying, Jun Shi
DOI: 10.1016/j.media.2025.103661
Journal: Medical Image Analysis, Volume 104, Article 103661
Published: 2025-05-23 (Journal Article; JCR Q1, Computer Science, Artificial Intelligence; Impact Factor 11.8)
HGMSurvNet: A two-stage hypergraph learning network for multimodal cancer survival prediction
Cancer survival prediction based on multimodal data (e.g., pathological slides, clinical records, and genomic profiles) has become increasingly prevalent in recent years. A key challenge of this task is obtaining an effective survival-specific global representation from patient data with highly complicated correlations. Furthermore, the absence of certain modalities is a common issue in clinical practice, which renders many current multimodal methods inapplicable or ineffective. This article proposes a novel two-stage hypergraph learning network, called HGMSurvNet, for multimodal cancer survival prediction. HGMSurvNet gradually learns higher-order global representations, from the WSI level up to the patient level, via multilateral correlation modeling across multiple stages. Most importantly, to address data noise and missing modalities in clinical scenarios, we develop a new hypergraph convolution network with a hyperedge dropout mechanism that discards unimportant hyperedges during model training. Extensive validation experiments were conducted on six public cancer cohorts from TCGA. The results demonstrate that the proposed method consistently outperforms state-of-the-art methods. We also present an interpretability analysis of HGMSurvNet and its application potential in pathological image and patient modeling, which has valuable clinical significance for survival prognosis.
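The abstract describes hyperedge dropout only at a high level. As an illustration of the general idea (not the authors' implementation, whose importance scoring and training details are given in the paper), a minimal sketch of dropping whole hyperedges from a node-by-hyperedge incidence matrix might look like:

```python
import numpy as np

def hyperedge_dropout(H, drop_prob=0.3, rng=None):
    """Zero out entire hyperedges (columns of the incidence matrix H)
    at random during training, analogous to dropout applied to edges.

    H: (num_nodes, num_hyperedges) incidence matrix.
    Returns the masked incidence matrix and a boolean keep-mask over
    hyperedges.
    """
    rng = np.random.default_rng(rng)
    num_edges = H.shape[1]
    keep = rng.random(num_edges) >= drop_prob  # True = hyperedge survives
    return H * keep[np.newaxis, :], keep

# Toy incidence matrix: 4 nodes, 3 hyperedges
H = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

H_masked, keep = hyperedge_dropout(H, drop_prob=0.5, rng=0)
```

In HGMSurvNet the dropout is guided by hyperedge importance rather than being purely random as in this sketch; the shared idea is that masking columns of the incidence matrix removes whole higher-order relations from the convolution, which regularizes the model and mimics missing-modality conditions.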
About the journal:
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.