stGNN: Spatially Informed Cell-Type Deconvolution Based on Deep Graph Learning and Statistical Modeling.

IF 3.9 · CAS Tier 2 (Biology) · JCR Q1 (Mathematical & Computational Biology)
Juntong Zhu, Daoyuan Wang, Siqi Chen, Lili Meng, Yahui Long, Cheng Liang
DOI: 10.1007/s12539-025-00728-0
Journal: Interdisciplinary Sciences: Computational Life Sciences
Published: 2025-06-26 (Journal Article)
Citations: 0

Abstract

Recent advances in spatial transcriptomics (ST) technologies have revolutionized our understanding of tissue heterogeneity and cellular function. However, popular ST platforms such as 10x Visium still fall short of true single-cell resolution, underscoring an urgent need for in-silico methods that can accurately resolve cell-type composition within ST data. While several methods have been proposed, most rely solely on gene expression profiles and neglect spatial context, which results in suboptimal performance. Additionally, many deconvolution methods that depend on scRNA-seq data fail to align the distributions of the ST and scRNA-seq reference data, which degrades the accuracy of cell-type mapping. In this study, we propose stGNN, a novel spatially informed graph learning framework powered by statistical modeling for resolving fine-grained cell-type compositions in ST. To capture comprehensive features, we develop a dual encoding module that uses a graph convolutional network (GCN) and an auto-encoder to learn spatial and non-spatial representations, respectively. We then design an adaptive attention mechanism that integrates these representations layer by layer, capturing multi-scale spatial structure from low to high order and thus improving representation learning. For model training, we adopt a negative log-likelihood loss function that aligns the distribution of the ST data with the scRNA-seq (or snRNA-seq) reference data, enhancing the accuracy of cell-type proportion prediction in ST. To assess the performance of stGNN, we applied our proposed model to six ST datasets from various platforms, including 10x Visium, Slide-seqV2, and Visium HD, for cell-type proportion estimation. Our results demonstrate that stGNN consistently outperforms seven state-of-the-art methods. Notably, when applied to mouse brain tissue, stGNN resolves clear cortical layers at high resolution. We further show that stGNN effectively resolves ST data at different resolutions. In summary, stGNN provides a powerful framework for analyzing the spatial distribution of diverse cell populations in complex tissue structures. stGNN's code is openly shared at https://github.com/LiangSDNULab/stGNN .
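The pipeline the abstract describes (a GCN branch over a spatial spot graph, an auto-encoder branch over expression alone, attention-based fusion, and a likelihood loss tying predicted proportions to reference signatures) can be sketched in a few dozen lines. The sketch below is purely illustrative and is not the authors' implementation: the kNN graph construction, all layer sizes, the single-layer encoders, and the Poisson likelihood are assumptions made for the example (the paper's actual loss and architecture may differ).

```python
# Illustrative sketch of an stGNN-style deconvolution forward pass (NOT the
# authors' code): GCN branch over a spot kNN graph, auto-encoder branch over
# raw expression, attention fusion, softmax proportions, Poisson NLL against
# scRNA-seq-style reference signatures. All sizes/choices are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_spots, n_genes, n_types, d = 50, 100, 5, 16

X = rng.poisson(2.0, size=(n_spots, n_genes)).astype(float)  # ST counts
coords = rng.uniform(0, 10, size=(n_spots, 2))               # spot positions
S = rng.gamma(2.0, 1.0, size=(n_types, n_genes))             # reference cell-type signatures

# Spatial kNN graph -> symmetrically normalized adjacency with self-loops
k = 6
dist = np.linalg.norm(coords[:, None] - coords[None], axis=-1)
A = np.zeros((n_spots, n_spots))
for i in range(n_spots):
    A[i, np.argsort(dist[i])[1:k + 1]] = 1.0   # skip self (index 0)
A = np.maximum(A, A.T) + np.eye(n_spots)
deg = A.sum(1)
A_hat = A / np.sqrt(np.outer(deg, deg))

relu = lambda z: np.maximum(z, 0.0)
W_gcn = rng.normal(0, 0.1, (n_genes, d))
W_ae = rng.normal(0, 0.1, (n_genes, d))

H_spatial = relu(A_hat @ X @ W_gcn)   # GCN branch: neighborhood-smoothed features
H_expr = relu(X @ W_ae)               # auto-encoder branch (encoder half only)

# Adaptive attention: per-spot softmax weights over the two representations
W_att = rng.normal(0, 0.1, (d, 1))
scores = np.concatenate([H_spatial @ W_att, H_expr @ W_att], axis=1)
att = np.exp(scores - scores.max(1, keepdims=True))
att /= att.sum(1, keepdims=True)
H = att[:, :1] * H_spatial + att[:, 1:] * H_expr

# Decode fused features to per-spot cell-type proportions (rows sum to 1)
W_out = rng.normal(0, 0.1, (d, n_types))
logits = H @ W_out
P = np.exp(logits - logits.max(1, keepdims=True))
P /= P.sum(1, keepdims=True)

# Poisson NLL (up to an X-dependent constant): expected counts = P @ S
mu = P @ S + 1e-8
nll = float(np.mean(mu - X * np.log(mu)))

print(P.shape, bool(np.allclose(P.sum(1), 1.0)), bool(np.isfinite(nll)))
```

In a trained model the weights would of course be optimized by minimizing `nll` (plus the auto-encoder's reconstruction term) rather than drawn at random; the sketch only shows how the spatial and non-spatial branches, the attention fusion, and the likelihood-based loss fit together.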

Source journal

Interdisciplinary Sciences: Computational Life Sciences (Mathematical & Computational Biology)
CiteScore: 8.60
Self-citation rate: 4.20%
Articles per year: 55
About the journal: Interdisciplinary Sciences: Computational Life Sciences aims to cover the most recent and outstanding developments in interdisciplinary areas of the sciences, focusing especially on computational life sciences, an area enjoying rapid development at the forefront of scientific research and technology. The journal publishes original papers of significant general interest covering recent research and developments. Articles are published rapidly by taking full advantage of internet technology for online submission and peer review, and then published OnlineFirst through SpringerLink even before the issue is built or sent to the printer. The editorial board consists of leading scientists with international reputations, among others Luc Montagnier (UNESCO, France), Dennis Salahub (University of Calgary, Canada), and Weitao Yang (Duke University, USA). Prof. Dongqing Wei of Shanghai Jiao Tong University is the editor-in-chief; he has made important contributions to bioinformatics and computational physics and is best known for his ground-breaking work on the theory of ferroelectric liquids. With the help of a team of associate editors and the editorial board, the journal aims to establish a sound international reputation.