Authors: Jinchao Chen, Pei Liu, Chen Chen, Ying Su, Jiajia Wang, Cheng Chen, Xiantao Ai, Xiaoyi Lv
DOI: 10.1007/s12539-025-00744-0
Journal: Interdisciplinary Sciences: Computational Life Sciences (JCR Q1, Mathematical & Computational Biology; Impact Factor 3.9; CAS Region 2, Biology)
Publication Date: 2025-09-26
Publication Type: Journal Article
Citation Count: 0
Platform: Semanticscholar
Interpretable Cancer Survival Prediction by Fusing Semantic Labelling of Cell Types and Whole Slide Images.
Survival prediction involves multiple factors, such as histopathological image data and omics data, making it a typical multimodal task. In this work, we introduce semantic annotations for genes in different cell types based on cell biology knowledge, enabling the model to achieve interpretability at the cellular level. Since these cell type annotations are derived from the unique sites of origin of each cancer type, they can be more closely aligned with morphological features in whole slide images (WSIs) and address the issue of genomic annotation ambiguity. We then propose a multimodal fusion model, SurvTransformer, which uses multi-layer attention to fuse cell type tags (CTTs) and WSIs for survival prediction. Finally, through attention and integrated gradients attribution, the model provides biologically meaningful interpretable analysis at three different levels: cell type, gene, and histopathological image. Comparative experiments show that SurvTransformer achieves the highest concordance index (C-index) across four cancer datasets, and the survival curves it generates are statistically significant. Ablation experiments show that SurvTransformer outperforms models based on alternative labeling methods and attention representations. In terms of interpretability, case studies validate the effectiveness of SurvTransformer at three levels: cell type, gene, and histopathological image.
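The abstract evaluates models with the concordance index, the standard ranking metric in survival analysis. As a minimal illustration (not the paper's code), the C-index counts, over all comparable patient pairs, how often the model assigns a higher risk score to the patient who experiences the event earlier; censored patients only contribute as the later member of a pair. A sketch in plain Python:

```python
# Hypothetical illustration of the concordance index (C-index) used to
# evaluate survival models; this is not the SurvTransformer implementation.
def concordance_index(times, events, risks):
    """Fraction of comparable pairs correctly ordered by risk score.

    times  - observed survival or censoring times
    events - 1 if the event (e.g. death) was observed, 0 if censored
    risks  - model-predicted risk scores (higher = worse prognosis)
    """
    concordant = 0.0
    permissible = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i had an observed
            # event strictly before patient j's recorded time.
            if events[i] == 1 and times[i] < times[j]:
                permissible += 1
                if risks[i] > risks[j]:
                    concordant += 1          # correctly ordered pair
                elif risks[i] == risks[j]:
                    concordant += 0.5        # ties count as half
    return concordant / permissible

# Risks perfectly anti-ordered with survival time give a C-index of 1.0.
print(concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # → 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the paper's claim of the "highest concordance index across four cancer datasets" is a ranking-quality claim rather than a calibration claim.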
Journal Introduction:
Interdisciplinary Sciences--Computational Life Sciences aims to cover the most recent and outstanding developments in interdisciplinary areas of sciences, especially focusing on computational life sciences, an area that is enjoying rapid development at the forefront of scientific research and technology.
The journal publishes original papers of significant general interest covering recent research and developments. Articles are published rapidly by taking full advantage of internet technology for online submission and peer review of manuscripts, and then by publishing Online First™ through SpringerLink even before the issue is built or sent to the printer.
The editorial board consists of many leading scientists with international reputations, among them Luc Montagnier (UNESCO, France), Dennis Salahub (University of Calgary, Canada), and Weitao Yang (Duke University, USA). Prof. Dongqing Wei of Shanghai Jiao Tong University is appointed editor-in-chief; he has made important contributions to bioinformatics and computational physics and is best known for his ground-breaking work on the theory of ferroelectric liquids. With the help of a team of associate editors and the editorial board, the aim is to build an international journal with a sound reputation.