Assessing parameter efficient methods for pre-trained language model in annotating scRNA-seq data

IF 4.2 · CAS Tier 3 (Biology) · JCR Q1 (Biochemical Research Methods)
Yucheng Xia, Yuhang Liu, Tianhao Li, Sihan He, Hong Chang, Yaqing Wang, Yongqing Zhang, Wenyi Ge
DOI: 10.1016/j.ymeth.2024.05.007
Journal: Methods, Volume 228, Pages 12-21
Published: 2024-05-15
Citations: 0

Abstract

Annotating cell types of single-cell RNA sequencing (scRNA-seq) data is crucial for studying cellular heterogeneity in the tumor microenvironment. Recently, large-scale pre-trained language models (PLMs) have achieved significant progress in cell-type annotation of scRNA-seq data. This approach effectively addresses previous methods' shortcomings in performance and generalization. However, fine-tuning PLMs for different downstream tasks demands considerable computational resources, rendering it impractical. Hence, a new research branch introduces parameter-efficient fine-tuning (PEFT). This involves optimizing a few parameters while leaving the majority unchanged, leading to substantial reductions in computational expenses. Here, we utilize scBERT, a large-scale pre-trained model, to explore the capabilities of three PEFT methods in scRNA-seq cell type annotation. Extensive benchmark studies across several datasets demonstrate the superior applicability of PEFT methods. Furthermore, downstream analysis using models obtained through PEFT showcases their utility in novel cell type discovery and model interpretability for potential marker genes. Our findings underscore the considerable potential of PEFT in PLM-based cell type annotation, presenting novel perspectives for the analysis of scRNA-seq data.
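The core idea of PEFT described above (optimizing a few parameters while freezing the majority) can be illustrated with a minimal low-rank adaptation (LoRA) sketch. This is an illustrative example, not the paper's implementation: the matrix `W` stands in for one frozen pre-trained projection inside a model such as scBERT, and the dimensions, rank, and `forward` helper are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                            # hidden size and LoRA rank (r << d)
W = rng.standard_normal((d, d))         # pre-trained weights: kept frozen
A = rng.standard_normal((d, r)) * 0.01  # trainable down-projection
B = np.zeros((r, d))                    # trainable up-projection, zero-initialized
                                        # so fine-tuning starts from the frozen model

def forward(x):
    # The effective weight is W + A @ B; only A and B would receive
    # gradient updates during fine-tuning.
    return x @ (W + A @ B)

trainable = A.size + B.size             # 512 parameters
total = W.size + trainable              # 4608 parameters
print(f"trainable fraction: {trainable / total:.3f}")  # 0.111
```

Because only the low-rank factors are updated, the trainable-parameter count grows linearly in `d` rather than quadratically, which is the source of the computational savings the abstract refers to.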

Source journal: Methods (Biology – Biochemical Research Methods)
CiteScore: 9.80
Self-citation rate: 2.10%
Articles per year: 222
Average review time: 11.3 weeks
About the journal: Methods focuses on rapidly developing techniques in the experimental biological and medical sciences. Each topical issue, organized by a guest editor who is an expert in the area covered, consists solely of invited quality articles by specialist authors, many of them reviews. Issues are devoted to specific technical approaches with emphasis on clear detailed descriptions of protocols that allow them to be reproduced easily. The background information provided enables researchers to understand the principles underlying the methods; other helpful sections include comparisons of alternative methods giving the advantages and disadvantages of particular methods, guidance on avoiding potential pitfalls, and suggestions for troubleshooting.