{"title":"TG-Mamba:利用文本指导预测肺癌的肿瘤突变负担","authors":"Chunlin Yu, Xiangfu Meng, Yinhao Li, Zheng Zhao, Yongqin Zhang","doi":"10.1016/j.compmedimag.2025.102626","DOIUrl":null,"url":null,"abstract":"<div><div>Tumor mutation burden (TMB) is a crucial biomarker for predicting the response of lung cancer patients to immunotherapy. Traditionally, TMB is quantified through whole-exome sequencing (WES), but the high costs and time requirements of WES limit its widespread clinical use. To address this, we propose a deep learning model named TG-Mamba, capable of rapidly predicting TMB levels based on patients’ histopathological images and clinical information, and further estimating specific TMB values. Specifically, we employ a parallel feature extraction strategy. The upper layer consists of a series of text-guided attention modules designed to extract diagnostic textual features. Meanwhile, the lower layer leverages the VMamba backbone network for image feature extraction. To enhance performance, we design a novel hybrid module, Conv-SSM, which combines convolutional layers for local feature extraction with a state-space model (SSM) to capture global dependencies. During the feature extraction process, textual features progressively guide the extraction of image features, ensuring their effective integration. In a cohort of non-training lung cancer patients, TG-Mamba achieved an area under the receiver operating characteristic curve (AUC) of 0.994 in classification tasks and a mean absolute percentage error (MAPE) of 0.25 in regression tasks. These experimental results demonstrate TG-Mamba’s exceptional performance in TMB prediction, highlighting its potential to extend the benefits of immunotherapy to a broader population of lung cancer patients. 
The code for our model and the experimental data can be obtained at <span><span>https://github.com/ukeLin/TG-Mamba</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":50631,"journal":{"name":"Computerized Medical Imaging and Graphics","volume":"124 ","pages":"Article 102626"},"PeriodicalIF":4.9000,"publicationDate":"2025-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TG-Mamba: Leveraging text guidance for predicting tumor mutation burden in lung cancer\",\"authors\":\"Chunlin Yu, Xiangfu Meng, Yinhao Li, Zheng Zhao, Yongqin Zhang\",\"doi\":\"10.1016/j.compmedimag.2025.102626\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Tumor mutation burden (TMB) is a crucial biomarker for predicting the response of lung cancer patients to immunotherapy. Traditionally, TMB is quantified through whole-exome sequencing (WES), but the high costs and time requirements of WES limit its widespread clinical use. To address this, we propose a deep learning model named TG-Mamba, capable of rapidly predicting TMB levels based on patients’ histopathological images and clinical information, and further estimating specific TMB values. Specifically, we employ a parallel feature extraction strategy. The upper layer consists of a series of text-guided attention modules designed to extract diagnostic textual features. Meanwhile, the lower layer leverages the VMamba backbone network for image feature extraction. To enhance performance, we design a novel hybrid module, Conv-SSM, which combines convolutional layers for local feature extraction with a state-space model (SSM) to capture global dependencies. During the feature extraction process, textual features progressively guide the extraction of image features, ensuring their effective integration. 
In a cohort of non-training lung cancer patients, TG-Mamba achieved an area under the receiver operating characteristic curve (AUC) of 0.994 in classification tasks and a mean absolute percentage error (MAPE) of 0.25 in regression tasks. These experimental results demonstrate TG-Mamba’s exceptional performance in TMB prediction, highlighting its potential to extend the benefits of immunotherapy to a broader population of lung cancer patients. The code for our model and the experimental data can be obtained at <span><span>https://github.com/ukeLin/TG-Mamba</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":50631,\"journal\":{\"name\":\"Computerized Medical Imaging and Graphics\",\"volume\":\"124 \",\"pages\":\"Article 102626\"},\"PeriodicalIF\":4.9000,\"publicationDate\":\"2025-08-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computerized Medical Imaging and Graphics\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0895611125001351\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ENGINEERING, BIOMEDICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computerized Medical Imaging and Graphics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0895611125001351","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
TG-Mamba: Leveraging text guidance for predicting tumor mutation burden in lung cancer
Tumor mutation burden (TMB) is a crucial biomarker for predicting the response of lung cancer patients to immunotherapy. Traditionally, TMB is quantified through whole-exome sequencing (WES), but the high cost and long turnaround time of WES limit its widespread clinical use. To address this, we propose a deep learning model named TG-Mamba, capable of rapidly predicting TMB levels from patients' histopathological images and clinical information, and of further estimating specific TMB values. Specifically, we employ a parallel feature extraction strategy. The upper branch consists of a series of text-guided attention modules designed to extract diagnostic textual features, while the lower branch leverages the VMamba backbone network for image feature extraction. To enhance performance, we design a novel hybrid module, Conv-SSM, which combines convolutional layers for local feature extraction with a state-space model (SSM) that captures global dependencies. Throughout feature extraction, the textual features progressively guide the extraction of image features, ensuring their effective integration. On a held-out cohort of lung cancer patients not used in training, TG-Mamba achieved an area under the receiver operating characteristic curve (AUC) of 0.994 on the classification task and a mean absolute percentage error (MAPE) of 0.25 on the regression task. These results demonstrate TG-Mamba's strong performance in TMB prediction, highlighting its potential to extend the benefits of immunotherapy to a broader population of lung cancer patients. The code for our model and the experimental data can be obtained at https://github.com/ukeLin/TG-Mamba.
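The two ideas the abstract names, a Conv-SSM block fusing local convolution with a state-space recurrence, and text-guided attention in which image tokens attend to text features, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy version, not the authors' implementation (see their repository for that): the depthwise kernel, the scalar diagonal SSM parameters `a` and `b`, and the additive fusion are all simplifications chosen for clarity.

```python
import numpy as np

def depthwise_conv1d(x, kernel):
    """Local feature extraction: same 1-D kernel applied per channel.
    x: (L, D) token sequence, kernel: (K,) with K odd."""
    L, D = x.shape
    K = len(kernel)
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for t in range(L):
        # weighted sum over a K-wide window, independently per channel
        out[t] = np.tensordot(kernel, xp[t:t + K], axes=(0, 0))
    return out

def ssm_scan(x, a=0.9, b=0.1):
    """Global dependencies via a simplified diagonal state-space
    recurrence: h_t = a * h_{t-1} + b * x_t (hypothetical parameters)."""
    h = np.zeros(x.shape[1])
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        h = a * h + b * x[t]
        out[t] = h
    return out

def conv_ssm_block(x, kernel):
    """Toy Conv-SSM: fuse a local (conv) and a global (SSM) branch."""
    return depthwise_conv1d(x, kernel) + ssm_scan(x)

def text_guided_attention(img_tokens, txt_tokens):
    """Cross-attention: image tokens query text features, and the
    attended text is added back as guidance (residual connection)."""
    d = img_tokens.shape[1]
    scores = img_tokens @ txt_tokens.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)             # softmax over text tokens
    return img_tokens + w @ txt_tokens
```

In the paper this guidance is applied progressively across stages; here a single pass suffices to show the data flow: convolution sees a fixed-width neighborhood, the SSM recurrence accumulates context over the whole sequence, and cross-attention injects text information into every image token.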
Journal introduction:
The purpose of the journal Computerized Medical Imaging and Graphics is to serve as a forum for the exchange of research results concerning algorithmic advances, development, and application of digital imaging in disease detection, diagnosis, intervention, prevention, precision medicine, and population health. The journal publishes articles on novel computerized imaging or visualization techniques, including artificial intelligence and machine learning, augmented reality for surgical planning and guidance, big biomedical data visualization, computer-aided diagnosis, computerized robotic surgery, image-guided therapy, image scanning and reconstruction, mobile and tele-imaging, radiomics, and imaging integration and modeling with other information relevant to digital health. The types of biomedical imaging covered include magnetic resonance, computed tomography, ultrasound, nuclear medicine, X-ray, microwave, optical and multi-photon microscopy, video and sensory imaging, and the convergence of biomedical images with other non-imaging datasets.