{"title":"ADFusion: Multi-modal adaptive deep fusion for cancer subtype prediction","authors":"Ziye Zhang, Weixian Huang, Shijin Wang, Kaiwen Tan, Xiaorou Zheng, Shoubin Dong","doi":"10.1016/j.inffus.2025.103138","DOIUrl":null,"url":null,"abstract":"<div><div>The identification of cancer subtypes is crucial for personalized treatment. Subtype prediction can be achieved by using multi-modal data collected from patients. Multi-modal cancer data contains hidden joint information that cannot be adequately tapped by current vector-based fusion methods. To address this, we propose a multi-modal adaptive deep fusion network ADFusion, which utilizes a hierarchical graph convolutional network HiGCN for high-quality representation of multi-modal cancer data. Subsequently, an adaptive deep fusion network based on deep equilibrium theory is designed to capture effectively multi-modal joint information, which is then fused with multi-modal feature vectors to produce the fused features. HiGCN includes co-expressed genes and sample similarity networks, which provide a more nuanced consideration of the relationships between genes, and also between samples, achieving superior representation of multi-modal genes data. Adaptive deep fusion network, with flexible non-fixed layer structure, is designed for mining multi-modal joint information, automatically adjusting its layers according to real-time training conditions, ensuring flexibility and broad applicability. ADFusion was evaluated across 5 public cancer datasets using 3 evaluation metrics, outperforming state-of-arts methods in all results. Additionally, ablation experiments, convergence analysis, and interpretability analysis also demonstrate the performance of ADFusion.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"121 ","pages":"Article 103138"},"PeriodicalIF":14.7000,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525002118","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
The identification of cancer subtypes is crucial for personalized treatment. Subtype prediction can be achieved using multi-modal data collected from patients. Multi-modal cancer data contains hidden joint information that current vector-based fusion methods cannot adequately exploit. To address this, we propose ADFusion, a multi-modal adaptive deep fusion network that uses a hierarchical graph convolutional network, HiGCN, to obtain high-quality representations of multi-modal cancer data. An adaptive deep fusion network based on deep equilibrium theory is then designed to effectively capture multi-modal joint information, which is fused with the multi-modal feature vectors to produce the final fused features. HiGCN incorporates co-expressed gene and sample similarity networks, providing a more nuanced treatment of the relationships between genes, and between samples, and thereby a superior representation of multi-modal gene data. The adaptive deep fusion network has a flexible, non-fixed layer structure for mining multi-modal joint information: it automatically adjusts its depth according to real-time training conditions, ensuring flexibility and broad applicability. ADFusion was evaluated on 5 public cancer datasets using 3 evaluation metrics and outperformed state-of-the-art methods in all cases. Ablation experiments, convergence analysis, and interpretability analysis further demonstrate the performance of ADFusion.
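The "flexible, non-fixed layer structure" described above echoes deep equilibrium models, which replace a fixed stack of layers with repeated application of a single transformation until the hidden state reaches a fixed point, so the effective depth adapts at run time. The abstract does not include the authors' implementation; the PyTorch sketch below only illustrates that general mechanism, and all names (DEQFusion, in_dim, hid_dim) and hyperparameters are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class DEQFusion(nn.Module):
    """Hypothetical deep-equilibrium-style fusion layer: one transformation
    f(z, x) is applied repeatedly until the joint representation z stops
    changing, so the effective number of "layers" is not fixed in advance."""

    def __init__(self, in_dim, hid_dim, tol=1e-4, max_iter=50):
        super().__init__()
        # the single transformation reused at every implicit layer
        self.f = nn.Sequential(nn.Linear(in_dim + hid_dim, hid_dim), nn.Tanh())
        self.hid_dim = hid_dim
        self.tol = tol            # relative tolerance for declaring a fixed point
        self.max_iter = max_iter  # safety cap on the iteration count

    def forward(self, x):
        # x: concatenated multi-modal feature vectors, shape (batch, in_dim)
        z = x.new_zeros(x.size(0), self.hid_dim)
        for _ in range(self.max_iter):
            z_next = self.f(torch.cat([z, x], dim=-1))
            converged = (z_next - z).norm() < self.tol * (z.norm() + 1e-8)
            z = z_next
            if converged:
                break
        # fuse the equilibrium joint information with the original features
        return torch.cat([z, x], dim=-1)

# usage: e.g. three modalities, each embedded to 200 dims and concatenated
fusion = DEQFusion(in_dim=600, hid_dim=128)
fused = fusion(torch.randn(32, 600))  # shape (32, 728)
```

A full deep equilibrium layer would find the fixed point with a root solver and backpropagate through it via implicit differentiation; the naive unrolled loop above trades that efficiency for readability.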
Journal Introduction:
Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines that drive its progress. It is the leading outlet for sharing research and development in this field, with a focus on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.