Title: Deep sequence representation learning for predicting human proteins with liquid-liquid phase separation propensity and synaptic functions
Authors: Anqi Wei, Liangjiang Wang
DOI: 10.1145/3535508.3545550
Published in: Proceedings of the 13th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics
Publication date: 2022-08-07
Citations: 0
Abstract
With advances in next-generation sequencing, the repertoire of known protein sequences has grown enormously. Meanwhile, deep learning has driven the development of computational methods for interpreting large-scale proteomic data and facilitating functional studies of proteins. Inferring properties from amino acid sequences is a long-standing problem in bioinformatics, and extensive work has successfully applied natural language processing (NLP) techniques to the representation learning of protein sequences. In this paper, we fine-tuned and evaluated the deep sequence model UDSMProt on two protein prediction tasks: (1) predicting proteins with liquid-liquid phase separation propensity and (2) predicting synaptic proteins. Our results show that, without prior domain knowledge and based only on protein sequences, the fine-tuned language models achieved high classification accuracy and outperformed baseline models built on compositional k-mer features in both tasks. These findings suggest that protein language models can be readily applied to such learning tasks, and that the fine-tuned models can be used to nominate protein candidates for biological studies.
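The compositional k-mer baseline mentioned above can be sketched as follows. This is a minimal illustration only: the paper does not specify its exact k, vocabulary, or downstream classifier, so the function name and defaults here are assumptions.

```python
from itertools import product
from collections import Counter

# Standard 20 amino acids (assumed alphabet; the paper's exact vocabulary is not specified)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def kmer_composition(sequence, k=2):
    """Return the normalized frequency of every length-k amino-acid word.

    For k=2 this yields a 400-dimensional feature vector (20^2 possible
    dipeptides), which a conventional classifier could consume.
    """
    vocab = ["".join(p) for p in product(AMINO_ACIDS, repeat=k)]
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = max(len(sequence) - k + 1, 1)  # avoid division by zero for short sequences
    return [counts[w] / total for w in vocab]

# Hypothetical toy sequence, used only to show the feature shape
features = kmer_composition("MKTAYIAKQR", k=2)
```

A feature vector like this, computed per protein, is the kind of fixed-length input the baseline models compare against the learned language-model representations.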