An Extensive Study on Pretrained Models for Natural Language Processing Based on Transformers
M. Ramprasath, K. Dhanasekaran, T. Karthick, R. Velumani, P. Sudhakaran
2022 International Conference on Electronics and Renewable Systems (ICEARS), 16 March 2022
DOI: 10.1109/ICEARS53579.2022.9752241
Citations: 1
Abstract
In recent years, pretrained language models based on Transformers for Natural Language Processing (PLMT-NLP) have been highly successful in nearly every NLP task. These models were first developed as the Generative Pre-trained Transformer (GPT) and BERT (Bidirectional Encoder Representations from Transformers). Transformer architectures, self-supervised learning, and transfer learning form the foundation of these designs. Transformer-based pretrained models learn general linguistic representations from vast amounts of text through self-supervised objectives and apply this knowledge to downstream tasks. By providing a solid foundation of knowledge, they eliminate the need to train downstream models from scratch. This paper discusses advances in PLMT-NLP. It first gives a brief introduction to self-supervised learning and then explains the core concepts used in PLMT-NLP. It also provides a list of relevant libraries for working with PLMT-NLP. Finally, the paper discusses upcoming research directions that could further improve these models. Owing to its thoroughness and relevance to current PLMT-NLP developments, this survey should serve as a valuable resource for readers seeking to better understand both the basic ideas and recent developments.
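The workflow the abstract describes, self-supervised pretraining followed by transfer to downstream tasks, can be illustrated with a minimal sketch using the Hugging Face transformers library. The paper does not specify which libraries or checkpoints it covers, so the library choice and the model names below (bert-base-uncased, distilbert-base-uncased-finetuned-sst-2-english) are illustrative assumptions rather than the authors' setup.

# Minimal sketch of the pretrain-then-transfer workflow (assumed library: Hugging Face transformers).
from transformers import pipeline

# 1. Self-supervised pretraining objective (masked language modelling):
#    the pretrained encoder predicts the token hidden behind [MASK].
fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill_mask("Pretrained language models learn general [MASK] representations."):
    print(candidate["token_str"], round(candidate["score"], 3))

# 2. Transfer learning: the same pretrained representations reused for a
#    downstream task (here sentiment classification) instead of training from scratch.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # assumed fine-tuned checkpoint
)
print(classifier("Transformer-based pretrained models transfer well to downstream tasks."))

In this sketch, the first pipeline exercises the pretraining objective directly, while the second reuses a checkpoint that has already been fine-tuned on a labelled downstream dataset, mirroring the transfer-learning step the abstract highlights.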