MulitDeepsurv: survival analysis of gastric cancer based on deep learning multimodal fusion models.

IF 2.9 · JCR Q2 (Biochemical Research Methods) · CAS Region 2, Medicine
Biomedical Optics Express · Pub Date: 2024-12-11 · eCollection Date: 2025-01-01 · DOI: 10.1364/BOE.541570
Songren Mao, Jie Liu
Volume 16, Issue 1, pp. 126-141. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11729289/pdf/
Citations: 0

Abstract

Gastric cancer is a leading cause of cancer-related deaths globally. As mortality rates continue to rise, predicting cancer survival using multimodal data-including histopathological images, genomic data, and clinical information-has become increasingly crucial. However, extracting effective predictive features from this complex data has posed challenges for survival analysis due to the high dimensionality and heterogeneity of histopathology images and genomic data. Furthermore, existing methods often lack sufficient interaction between intra- and inter-modal features, significantly impacting model performance. To address these challenges, we developed a deep learning-based multimodal feature fusion model, MultiDeepsurv, designed to predict the survival of gastric cancer patients by integrating histopathological images, clinical data, and gene expression data. Our approach includes a two-branch hybrid network, GLFUnet, which leverages the attention mechanism for enhanced pathology image representation learning. Additionally, we employ a graph convolutional neural network (GCN) to extract features from gene expression data and clinical information. To capture the correlations between different modalities, we utilize the SFusion fusion strategy that employs a self-attention mechanism to learn potential correlations across modalities. Finally, these deeply processed features are fed into Cox regression models for an end-to-end survival analysis. Comprehensive experiments and analyses conducted on a gastric cancer cohort from The Cancer Genome Atlas (TCGA) demonstrate that our proposed MultiDeepsurv model outperforms other methods in terms of prognostic accuracy, with a C-index of 0.806 and an AUC of 0.842.
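The headline metric, a C-index (Harrell's concordance index) of 0.806, measures how often the model ranks patient risk correctly: among all comparable patient pairs, the fraction where the patient who died earlier was assigned the higher predicted risk. As a minimal illustration (not the paper's evaluation code), the statistic can be computed in plain Python; the example times, event indicators, and risk scores below are made up for demonstration:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: fraction of comparable pairs in which the
    patient with the shorter survival time has the higher predicted risk.
    `events[i]` is 1 if the event (death) was observed, 0 if censored."""
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(times)), 2):
        # Order the pair so that i has the earlier time.
        if times[i] > times[j]:
            i, j = j, i
        # A pair is comparable only if the earlier time is an observed
        # event (a censored patient may still outlive the other).
        if not events[i] or times[i] == times[j]:
            continue
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1.0       # correctly ranked
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5       # tied scores count half
    return concordant / comparable

# Hypothetical cohort: survival times (months), event flags, model risk scores.
times = [5, 8, 12, 20]
events = [1, 1, 0, 1]
risk = [0.9, 0.7, 0.4, 0.2]
print(concordance_index(times, events, risk))  # 1.0 — risks perfectly track outcomes
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.806 indicates the model orders roughly four out of five comparable patient pairs correctly.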

Source journal: Biomedical Optics Express (Biochemical Research Methods / Optics)
CiteScore: 6.80
Self-citation rate: 11.80%
Articles per year: 633
Review time: 1 month
Journal description: The journal's scope encompasses fundamental research, technology development, biomedical studies, and clinical applications. BOEx focuses on leading-edge topics in the field, including: tissue optics and spectroscopy; novel microscopies; optical coherence tomography; diffuse and fluorescence tomography; photoacoustic and multimodal imaging; molecular imaging and therapies; nanophotonic biosensing; optical biophysics/photobiology; microfluidic optical devices; and vision research.