ADFusion: Multi-modal adaptive deep fusion for cancer subtype prediction

IF 14.7 · CAS Tier 1 (Computer Science) · JCR Q1 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE)
Ziye Zhang, Weixian Huang, Shijin Wang, Kaiwen Tan, Xiaorou Zheng, Shoubin Dong
{"title":"ADFusion:用于癌症亚型预测的多模式自适应深度融合","authors":"Ziye Zhang,&nbsp;Weixian Huang,&nbsp;Shijin Wang,&nbsp;Kaiwen Tan,&nbsp;Xiaorou Zheng,&nbsp;Shoubin Dong","doi":"10.1016/j.inffus.2025.103138","DOIUrl":null,"url":null,"abstract":"<div><div>The identification of cancer subtypes is crucial for personalized treatment. Subtype prediction can be achieved by using multi-modal data collected from patients. Multi-modal cancer data contains hidden joint information that cannot be adequately tapped by current vector-based fusion methods. To address this, we propose a multi-modal adaptive deep fusion network ADFusion, which utilizes a hierarchical graph convolutional network HiGCN for high-quality representation of multi-modal cancer data. Subsequently, an adaptive deep fusion network based on deep equilibrium theory is designed to capture effectively multi-modal joint information, which is then fused with multi-modal feature vectors to produce the fused features. HiGCN includes co-expressed genes and sample similarity networks, which provide a more nuanced consideration of the relationships between genes, and also between samples, achieving superior representation of multi-modal genes data. Adaptive deep fusion network, with flexible non-fixed layer structure, is designed for mining multi-modal joint information, automatically adjusting its layers according to real-time training conditions, ensuring flexibility and broad applicability. ADFusion was evaluated across 5 public cancer datasets using 3 evaluation metrics, outperforming state-of-arts methods in all results. Additionally, ablation experiments, convergence analysis, and interpretability analysis also demonstrate the performance of ADFusion.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"121 ","pages":"Article 103138"},"PeriodicalIF":14.7000,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ADFusion: Multi-modal adaptive deep fusion for cancer subtype prediction\",\"authors\":\"Ziye Zhang,&nbsp;Weixian Huang,&nbsp;Shijin Wang,&nbsp;Kaiwen Tan,&nbsp;Xiaorou Zheng,&nbsp;Shoubin Dong\",\"doi\":\"10.1016/j.inffus.2025.103138\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>The identification of cancer subtypes is crucial for personalized treatment. Subtype prediction can be achieved by using multi-modal data collected from patients. Multi-modal cancer data contains hidden joint information that cannot be adequately tapped by current vector-based fusion methods. To address this, we propose a multi-modal adaptive deep fusion network ADFusion, which utilizes a hierarchical graph convolutional network HiGCN for high-quality representation of multi-modal cancer data. Subsequently, an adaptive deep fusion network based on deep equilibrium theory is designed to capture effectively multi-modal joint information, which is then fused with multi-modal feature vectors to produce the fused features. HiGCN includes co-expressed genes and sample similarity networks, which provide a more nuanced consideration of the relationships between genes, and also between samples, achieving superior representation of multi-modal genes data. Adaptive deep fusion network, with flexible non-fixed layer structure, is designed for mining multi-modal joint information, automatically adjusting its layers according to real-time training conditions, ensuring flexibility and broad applicability. 
ADFusion was evaluated across 5 public cancer datasets using 3 evaluation metrics, outperforming state-of-arts methods in all results. Additionally, ablation experiments, convergence analysis, and interpretability analysis also demonstrate the performance of ADFusion.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"121 \",\"pages\":\"Article 103138\"},\"PeriodicalIF\":14.7000,\"publicationDate\":\"2025-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525002118\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525002118","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

The identification of cancer subtypes is crucial for personalized treatment. Subtype prediction can be achieved using multi-modal data collected from patients. Multi-modal cancer data contain hidden joint information that current vector-based fusion methods cannot adequately exploit. To address this, we propose ADFusion, a multi-modal adaptive deep fusion network that uses a hierarchical graph convolutional network (HiGCN) to produce high-quality representations of multi-modal cancer data. An adaptive deep fusion network based on deep equilibrium theory is then designed to effectively capture multi-modal joint information, which is fused with the multi-modal feature vectors to produce the final fused features. HiGCN incorporates co-expressed gene and sample similarity networks, providing a more nuanced treatment of the relationships between genes and between samples, and thus a superior representation of multi-modal gene data. The adaptive deep fusion network has a flexible, non-fixed layer structure for mining multi-modal joint information: it automatically adjusts its effective depth according to real-time training conditions, ensuring flexibility and broad applicability. ADFusion was evaluated on 5 public cancer datasets using 3 evaluation metrics, outperforming state-of-the-art methods on all of them. Ablation experiments, convergence analysis, and interpretability analysis further demonstrate its effectiveness.
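The abstract only outlines the adaptive deep fusion mechanism, so the following PyTorch snippet is a minimal sketch of the deep-equilibrium idea it names: a single weight-tied layer is iterated to an approximate fixed point, so the effective depth adapts at run time instead of being fixed in advance. Everything here (module names, dimensions, the stopping rule) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch (an assumption, not the authors' code) of a
# deep-equilibrium-style fusion layer: one weight-tied transformation
# f(z, x) is iterated until z stabilizes at an approximate fixed point
# z* = f(z*, x), so the effective "depth" is not fixed but adapts to how
# quickly the iteration converges for the current inputs.
import torch
import torch.nn as nn

class DEQFusionSketch(nn.Module):
    def __init__(self, modality_dims, hidden=256, tol=1e-4, max_iter=50):
        super().__init__()
        # Project each modality (e.g. mRNA, DNA methylation, miRNA)
        # into a shared hidden space.
        self.proj = nn.ModuleList([nn.Linear(d, hidden) for d in modality_dims])
        # Weight-tied fusion transformation applied at every iteration.
        self.f = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.Tanh())
        self.tol = tol
        self.max_iter = max_iter

    def forward(self, modalities):
        # Input injection: sum of the projected per-modality features.
        x = sum(p(m) for p, m in zip(self.proj, modalities))
        z = torch.zeros_like(x)
        for _ in range(self.max_iter):
            z_next = self.f(torch.cat([z, x], dim=-1))
            # Adaptive depth: stop once the fixed-point residual is small.
            if torch.norm(z_next - z) < self.tol * (torch.norm(z) + 1e-8):
                z = z_next
                break
            z = z_next
        # Concatenate the joint representation with the per-modality
        # vectors, mirroring the abstract's "fused with multi-modal
        # feature vectors".
        per_modality = [p(m) for p, m in zip(self.proj, modalities)]
        return torch.cat([z] + per_modality, dim=-1)

# Toy usage: three omics modalities for a batch of 8 patients.
feats = [torch.randn(8, 500), torch.randn(8, 400), torch.randn(8, 300)]
fused = DEQFusionSketch([500, 400, 300])(feats)
print(fused.shape)  # torch.Size([8, 1024]): 256 joint + 3 * 256 modality
```

In a full deep-equilibrium model the fixed point would typically be trained with implicit differentiation rather than by backpropagating through the unrolled loop; the loop above keeps the sketch self-contained.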
Source journal: Information Fusion (Engineering & Technology / Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Articles published per year: 161
Review time: 7.9 months
Journal description: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers presenting fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.