Authors: Juntong Zhu, Daoyuan Wang, Siqi Chen, Lili Meng, Yahui Long, Cheng Liang
DOI: 10.1007/s12539-025-00728-0
Journal: Interdisciplinary Sciences: Computational Life Sciences (JCR Q1, Mathematical & Computational Biology; Impact Factor 3.9)
Publication date: 2025-06-26
Publication type: Journal Article
Code: https://github.com/LiangSDNULab/stGNN
stGNN: Spatially Informed Cell-Type Deconvolution Based on Deep Graph Learning and Statistical Modeling.
Recent advancements in spatial transcriptomics (ST) technologies have revolutionized our understanding of tissue heterogeneity and cellular functions. However, popular ST platforms, such as 10x Visium, still fall short of achieving true single-cell resolution, underscoring an urgent need for in-silico methods that can accurately resolve cell type composition within ST data. While several methods have been proposed, most rely solely on gene expression profiles and neglect spatial context, which results in suboptimal performance. Additionally, many deconvolution methods that depend on scRNA-seq data fail to align the distributions of ST and scRNA-seq reference data, which in turn reduces the accuracy of cell type mapping. In this study, we propose stGNN, a novel spatially informed graph learning framework powered by statistical modeling for resolving fine-grained cell type compositions in ST. To capture comprehensive features, we develop a dual encoding module that uses a graph convolutional network (GCN) and an auto-encoder to learn spatial and non-spatial representations, respectively. We further design an adaptive attention mechanism to integrate these representations layer by layer, capturing multi-scale spatial structures from low to high order and thus improving representation learning. For model training, we adopt a negative log-likelihood loss function that aligns the distribution of ST data with scRNA-seq (or snRNA-seq) reference data, enhancing the accuracy of cell type proportion prediction in ST. To assess the performance of stGNN, we applied our proposed model to six ST datasets from various platforms, including 10x Visium, Slide-seqV2, and Visium HD, for cell type proportion estimation. Our results demonstrate that stGNN consistently outperforms seven state-of-the-art methods. Notably, when applied to mouse brain tissues, stGNN resolves clear cortical layers at high resolution.
Additionally, we show that stGNN effectively resolves ST data at different resolutions. In summary, stGNN provides a powerful framework for analyzing the spatial distribution of diverse cell populations in complex tissue structures. stGNN's code is openly available at https://github.com/LiangSDNULab/stGNN.
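The abstract names three ingredients: a GCN branch over a spatial graph, an expression-only auto-encoder branch, attention-based fusion of the two, and a negative log-likelihood objective against reference signatures. The following is a minimal NumPy sketch of how those pieces fit together; it is purely illustrative (random weights, toy dimensions, a multinomial stand-in for the paper's statistical model) and does not reproduce the actual stGNN implementation on GitHub.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy data (hypothetical sizes): 5 spots x 20 genes, 3 cell types,
# plus a symmetric spatial adjacency standing in for a k-NN spot graph.
n_spots, n_genes, n_types, hidden = 5, 20, 3, 8
X = rng.poisson(2.0, size=(n_spots, n_genes)).astype(float)
A = (rng.random((n_spots, n_spots)) < 0.4).astype(float)
A = np.maximum(A, A.T)
np.fill_diagonal(A, 0.0)
A_norm = normalize_adj(A)

# Dual encoding: a spatial GCN branch and a non-spatial (expression-only) branch.
W_gcn = rng.normal(0, 0.1, (n_genes, hidden))
W_ae = rng.normal(0, 0.1, (n_genes, hidden))
h_spatial = relu(A_norm @ X @ W_gcn)  # graph-convolved embedding
h_expr = relu(X @ W_ae)               # autoencoder-style embedding

# Adaptive attention: per-spot weights over the two branches.
w_att = rng.normal(0, 0.1, (hidden,))
scores = np.stack([h_spatial @ w_att, h_expr @ w_att], axis=1)  # (spots, 2)
alpha = softmax(scores, axis=1)
h = alpha[:, 0:1] * h_spatial + alpha[:, 1:2] * h_expr

# Decode fused embeddings into per-spot cell-type proportions (rows sum to 1).
W_out = rng.normal(0, 0.1, (hidden, n_types))
P = softmax(h @ W_out, axis=1)

# Negative log-likelihood of the observed counts under a multinomial whose
# gene probabilities mix reference signatures by the predicted proportions.
S = softmax(rng.normal(0, 1, (n_types, n_genes)), axis=1)  # stand-in signatures
probs = P @ S
probs /= probs.sum(axis=1, keepdims=True)
nll = -(X * np.log(probs + 1e-12)).sum()
```

In a trained model the weights would be learned by minimizing `nll` (e.g. by gradient descent in PyTorch), so that the predicted proportions `P` explain the observed spot counts in terms of the reference cell-type signatures.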
Journal description:
Interdisciplinary Sciences--Computational Life Sciences aims to cover the most recent and outstanding developments in interdisciplinary areas of sciences, especially focusing on computational life sciences, an area that is enjoying rapid development at the forefront of scientific research and technology.
The journal publishes original papers of significant general interest covering recent research and developments. Articles are published rapidly by taking full advantage of internet technology for online submission and peer review of manuscripts, and then published OnlineFirst through SpringerLink even before the issue is compiled or sent to the printer.
The editorial board consists of many leading scientists with international reputations, among them Luc Montagnier (UNESCO, France), Dennis Salahub (University of Calgary, Canada), and Weitao Yang (Duke University, USA). Prof. Dongqing Wei of Shanghai Jiao Tong University is appointed as editor-in-chief; he has made important contributions to bioinformatics and computational physics and is best known for his ground-breaking work on the theory of ferroelectric liquids. With the help of a team of associate editors and the editorial board, an international journal with a sound reputation shall be created.