TriCLFF: a multi-modal feature fusion framework using contrastive learning for spatial domain identification.

IF 6.8 · CAS Region 2 (Biology) · JCR Q1 (Biochemical Research Methods)
Fenglan Pang, Guangfu Xue, Wenyi Yang, Yideng Cai, Jinhao Que, Haoxiu Sun, Pingping Wang, Shuaiyu Su, Xiyun Jin, Qian Ding, Zuxiang Wang, Meng Luo, Yuexin Yang, Yi Lin, Renjie Tan, Yusong Liu, Zhaochun Xu, Qinghua Jiang
DOI: 10.1093/bib/bbaf316
Journal: Briefings in Bioinformatics, 26(4)
Published: 2025-07-02
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12245166/pdf/
Citations: 0

Abstract

Spatial transcriptomics (ST) encompasses rich multi-modal information related to cell state and organization. Precisely identifying spatial domains with consistent gene expression patterns and histological features is a critical task in ST analysis, which requires comprehensive integration of multi-modal information. Here, we propose TriCLFF, a contrastive learning-based multi-modal feature fusion framework, to effectively integrate spatial associations, gene expression levels, and histological features in a unified manner. Leveraging an advanced feature fusion mechanism, our proposed TriCLFF framework outperforms existing state-of-the-art methods in terms of accuracy and robustness across four datasets (mouse brain anterior, mouse olfactory bulb, human dorsolateral prefrontal cortex, and human breast cancer) from different platforms (10x Visium and Stereo-seq) for spatial domain identification. TriCLFF also facilitates the identification of finer-grained structures in breast cancer tissues and detects previously unknown gene expression patterns in the human dorsolateral prefrontal cortex, providing novel insights for understanding tissue functions. Overall, TriCLFF establishes an effective paradigm for integrating spatial multi-modal data, demonstrating its potential for advancing ST research. The source code of TriCLFF is available online at https://github.com/HBZZ168/TriCLFF.
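The abstract does not spell out the training objective, but contrastive alignment of two modality embeddings (e.g. gene-expression and histology features of the same spot) is commonly implemented with a symmetric InfoNCE loss, with fusion by concatenating the aligned representations. The sketch below is purely illustrative under that assumption: the function names, temperature, and dimensions are not taken from TriCLFF.

```python
import numpy as np

def l2_normalize(x, axis=1, eps=1e-8):
    """Scale each row to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def info_nce(z_a, z_b, temperature=0.5):
    """Symmetric InfoNCE loss aligning two modality embeddings.

    Row i of z_a and z_b embed the same spot (positive pair);
    all other rows in the batch act as negatives.
    """
    z_a, z_b = l2_normalize(z_a), l2_normalize(z_b)
    logits = z_a @ z_b.T / temperature  # (n, n) cross-modal similarity matrix

    def ce_diag(l):
        # Cross-entropy with the matching spot (diagonal) as the target class.
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_prob = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))

    # Average both anchoring directions (a->b and b->a).
    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))

rng = np.random.default_rng(0)
expr = rng.normal(size=(64, 32))                # toy gene-expression embeddings
hist = expr + 0.1 * rng.normal(size=(64, 32))   # correlated histology embeddings

# Fusion by concatenating the aligned, normalized modality embeddings.
fused = np.concatenate([l2_normalize(expr), l2_normalize(hist)], axis=1)

aligned = info_nce(expr, hist)
shuffled = info_nce(expr, rng.normal(size=(64, 32)))  # unrelated "modality"
print(aligned < shuffled)  # prints True: matched modalities incur lower loss
```

In this toy setup the loss rewards embeddings whose cross-modal similarity is highest for the matching spot, which is the generic mechanism a contrastive fusion framework exploits; how TriCLFF weights its three modalities and constructs positives is described in the paper itself.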

Source Journal
Briefings in Bioinformatics (Biology / Biochemical Research Methods)
CiteScore: 13.20
Self-citation rate: 13.70%
Articles per year: 549
Review time: 6 months
About the journal: Briefings in Bioinformatics is an international journal serving as a platform for researchers and educators in the life sciences. It also appeals to mathematicians, statisticians, and computer scientists applying their expertise to biological challenges. The journal focuses on reviews tailored for users of databases and analytical tools in contemporary genetics, molecular and systems biology. It stands out by offering practical assistance and guidance to non-specialists in computerized methodologies. Covering a wide range from introductory concepts to specific protocols and analyses, the papers address bacterial, plant, fungal, animal, and human data. The journal's detailed subject areas include genetic studies of phenotypes and genotypes, mapping, DNA sequencing, expression profiling, gene expression studies, microarrays, alignment methods, protein profiles and HMMs, lipids, metabolic and signaling pathways, structure determination and function prediction, phylogenetic studies, and education and training.