Self-attention based deep learning model for predicting the coronavirus sequences from high-throughput sequencing data

ZhenNan Wang, ChaoMei Liu
{"title":"从高通量测序数据中预测冠状病毒序列的基于自我关注的深度学习模型","authors":"ZhenNan Wang, ChaoMei Liu","doi":"10.1101/2024.08.07.24311618","DOIUrl":null,"url":null,"abstract":"Transformer models have achieved excellent results in various tasks, primarily due to the self-attention mechanism. We explore using self-attention for detecting coronavirus sequences in high-throughput sequencing data, offering a novel approach for accurately identifying emerging and highly variable coronavirus strains. Coronavirus and human genome data were obtained from the Genomic Data Commons (GDC) and the National Genomics Data Center (NGDC) databases. After preprocessing, a simulated high-throughput sequencing dataset of coronavirus-infected samples was constructed. This dataset was divided into training, validation, and test datasets. The self-attention-based model was trained on the training datasets, tested on the validation and test datasets, and SARS-CoV-2 genome data were collected as an independent test datasets. The results showed that the self-attention-based model outperformed traditional bioinformatics methods in terms of performance on both the test and the independent test datasets, with a significant improvement in computation speed. The self-attention-based model can sensitively and rapidly detect coronavirus sequences from high-throughput sequencing data while exhibiting excellent generalization ability. It can accurately detect emerging and highly variable coronavirus strains, providing a new approach for identifying such viruses.","PeriodicalId":501509,"journal":{"name":"medRxiv - Infectious Diseases","volume":"191 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Self-attention based deep learning model for predicting the coronavirus sequences from high-throughput sequencing data\",\"authors\":\"ZhenNan Wang, ChaoMei Liu\",\"doi\":\"10.1101/2024.08.07.24311618\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Transformer models have achieved excellent results in various tasks, primarily due to the self-attention mechanism. We explore using self-attention for detecting coronavirus sequences in high-throughput sequencing data, offering a novel approach for accurately identifying emerging and highly variable coronavirus strains. Coronavirus and human genome data were obtained from the Genomic Data Commons (GDC) and the National Genomics Data Center (NGDC) databases. After preprocessing, a simulated high-throughput sequencing dataset of coronavirus-infected samples was constructed. This dataset was divided into training, validation, and test datasets. The self-attention-based model was trained on the training datasets, tested on the validation and test datasets, and SARS-CoV-2 genome data were collected as an independent test datasets. The results showed that the self-attention-based model outperformed traditional bioinformatics methods in terms of performance on both the test and the independent test datasets, with a significant improvement in computation speed. The self-attention-based model can sensitively and rapidly detect coronavirus sequences from high-throughput sequencing data while exhibiting excellent generalization ability. 
It can accurately detect emerging and highly variable coronavirus strains, providing a new approach for identifying such viruses.\",\"PeriodicalId\":501509,\"journal\":{\"name\":\"medRxiv - Infectious Diseases\",\"volume\":\"191 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"medRxiv - Infectious Diseases\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.08.07.24311618\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - Infectious Diseases","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.08.07.24311618","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Transformer models have achieved excellent results in various tasks, primarily due to the self-attention mechanism. We explore using self-attention to detect coronavirus sequences in high-throughput sequencing data, offering a novel approach for accurately identifying emerging and highly variable coronavirus strains. Coronavirus and human genome data were obtained from the Genomic Data Commons (GDC) and the National Genomics Data Center (NGDC) databases. After preprocessing, a simulated high-throughput sequencing dataset of coronavirus-infected samples was constructed and divided into training, validation, and test datasets. The self-attention-based model was trained on the training dataset and evaluated on the validation and test datasets, and SARS-CoV-2 genome data were collected as an independent test dataset. The results showed that the self-attention-based model outperformed traditional bioinformatics methods on both the test and the independent test datasets, with a significant improvement in computation speed. The self-attention-based model can sensitively and rapidly detect coronavirus sequences from high-throughput sequencing data while exhibiting excellent generalization ability. It can accurately detect emerging and highly variable coronavirus strains, providing a new approach for identifying such viruses.
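
The abstract gives no implementation details, so the following is only a minimal sketch of the kind of self-attention read classifier it describes: a short sequencing read is tokenized into k-mers, passed through a small Transformer encoder, and scored as coronavirus versus human. The k-mer size, model dimensions, and every name below are illustrative assumptions, not the authors' actual architecture or hyperparameters.

```python
# Minimal sketch (not the authors' code): a self-attention read classifier.
# Assumptions: reads are tokenized into overlapping k-mers, embedded, encoded
# by a small Transformer, mean-pooled, and mapped to a binary logit
# (coronavirus vs. human). All hyperparameters here are illustrative.
import torch
import torch.nn as nn

K = 6                      # k-mer size (assumption)
VOCAB = 4 ** K + 1         # all DNA 6-mers plus one padding token
PAD = 0

def kmer_tokens(read: str, k: int = K) -> list[int]:
    """Map a DNA read to integer k-mer tokens (1-based; 0 is padding)."""
    idx = {"A": 0, "C": 1, "G": 2, "T": 3}
    toks = []
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if any(b not in idx for b in kmer):   # skip ambiguous bases such as N
            continue
        code = 0
        for b in kmer:
            code = code * 4 + idx[b]
        toks.append(code + 1)
    return toks

class ReadClassifier(nn.Module):
    def __init__(self, d_model=128, nhead=4, nlayers=2, max_len=512):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, d_model, padding_idx=PAD)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, nlayers)
        self.head = nn.Linear(d_model, 1)     # logit: coronavirus vs. human

    def forward(self, tokens):                # tokens: (batch, seq_len)
        pad_mask = tokens.eq(PAD)
        pos = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(pos)
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        x = x.masked_fill(pad_mask.unsqueeze(-1), 0.0)
        x = x.sum(1) / (~pad_mask).sum(1, keepdim=True).clamp(min=1)  # mean pool
        return self.head(x).squeeze(-1)

# Example: score one simulated 150 bp-scale read.
model = ReadClassifier()
read = "ACGT" * 38                            # placeholder sequence
batch = torch.tensor([kmer_tokens(read)])
prob = torch.sigmoid(model(batch))            # P(read is coronavirus)
```

In the paper's setup, such a model would be trained on reads simulated from coronavirus and human genomes and then evaluated on the held-out and independent SARS-CoV-2 datasets; the sketch above only shows the scoring path for a single read.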