LangCell: Language-Cell Pre-training for Cell Identity Understanding

Suyuan Zhao, Jiahuan Zhang, Yizhen Luo, Yushuai Wu, Zaiqing Nie
{"title":"LangCell: Language-Cell Pre-training for Cell Identity Understanding","authors":"Suyuan Zhao, Jiahuan Zhang, Yizhen Luo, Yushuai Wu, Zaiqing Nie","doi":"arxiv-2405.06708","DOIUrl":null,"url":null,"abstract":"Cell identity encompasses various semantic aspects of a cell, including cell\ntype, pathway information, disease information, and more, which are essential\nfor biologists to gain insights into its biological characteristics.\nUnderstanding cell identity from the transcriptomic data, such as annotating\ncell types, have become an important task in bioinformatics. As these semantic\naspects are determined by human experts, it is impossible for AI models to\neffectively carry out cell identity understanding tasks without the supervision\nsignals provided by single-cell and label pairs. The single-cell pre-trained\nlanguage models (PLMs) currently used for this task are trained only on a\nsingle modality, transcriptomics data, lack an understanding of cell identity\nknowledge. As a result, they have to be fine-tuned for downstream tasks and\nstruggle when lacking labeled data with the desired semantic labels. To address\nthis issue, we propose an innovative solution by constructing a unified\nrepresentation of single-cell data and natural language during the pre-training\nphase, allowing the model to directly incorporate insights related to cell\nidentity. More specifically, we introduce \\textbf{LangCell}, the first\n\\textbf{Lang}uage-\\textbf{Cell} pre-training framework. LangCell utilizes texts\nenriched with cell identity information to gain a profound comprehension of\ncross-modal knowledge. Results from experiments conducted on different\nbenchmarks show that LangCell is the only single-cell PLM that can work\neffectively in zero-shot cell identity understanding scenarios, and also\nsignificantly outperforms existing models in few-shot and fine-tuning cell\nidentity understanding scenarios.","PeriodicalId":501070,"journal":{"name":"arXiv - QuanBio - Genomics","volume":"189 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Genomics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2405.06708","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Cell identity encompasses various semantic aspects of a cell, including cell type, pathway information, disease information, and more, which are essential for biologists to gain insights into its biological characteristics. Understanding cell identity from transcriptomic data, such as annotating cell types, has become an important task in bioinformatics. Because these semantic aspects are determined by human experts, AI models cannot effectively carry out cell identity understanding tasks without the supervision signals provided by paired single-cell data and labels. The single-cell pre-trained language models (PLMs) currently used for this task are trained on only a single modality, transcriptomics data, and lack an understanding of cell identity knowledge. As a result, they have to be fine-tuned for downstream tasks and struggle when labeled data with the desired semantic labels are scarce. To address this issue, we propose an innovative solution: constructing a unified representation of single-cell data and natural language during the pre-training phase, allowing the model to directly incorporate insights related to cell identity. More specifically, we introduce LangCell, the first Language-Cell pre-training framework. LangCell utilizes texts enriched with cell identity information to gain a deep understanding of cross-modal knowledge. Results from experiments conducted on different benchmarks show that LangCell is the only single-cell PLM that works effectively in zero-shot cell identity understanding scenarios, and it also significantly outperforms existing models in few-shot and fine-tuning scenarios.
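To make the core idea concrete, the sketch below illustrates, in plain PyTorch, the kind of cross-modal alignment the abstract describes: a symmetric contrastive loss that pulls paired cell and text embeddings together, and zero-shot cell identity prediction by matching a cell embedding against text descriptions of candidate labels. This is a minimal illustration under stated assumptions, not LangCell's actual implementation; the encoder outputs, embedding dimension, temperature, and prompt format are placeholders.

```python
# Minimal sketch of cell-text contrastive alignment and zero-shot
# classification. Random tensors stand in for the outputs of a cell
# encoder and a text encoder (both hypothetical here).
import torch
import torch.nn.functional as F

def contrastive_loss(cell_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired (cell, text) embeddings."""
    cell_emb = F.normalize(cell_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = cell_emb @ text_emb.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)  # i-th cell pairs with i-th text
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def zero_shot_classify(cell_emb, class_text_embs):
    """Assign each cell to the class whose text description embedding is most similar."""
    cell_emb = F.normalize(cell_emb, dim=-1)
    class_text_embs = F.normalize(class_text_embs, dim=-1)
    return (cell_emb @ class_text_embs.t()).argmax(dim=-1)

# Toy usage: batch of 8 cells, 256-dim embeddings, 4 candidate cell-type descriptions.
B, D, C = 8, 256, 4
loss = contrastive_loss(torch.randn(B, D), torch.randn(B, D))
preds = zero_shot_classify(torch.randn(B, D), torch.randn(C, D))
```

Because the class labels enter only through their text embeddings, new cell types can be scored at inference time by writing a new description, which is what enables zero-shot operation without fine-tuning.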